Artificial Intelligence approaches and paradigms

When we speak of artificial intelligence, we mean computers that can be programmed and trained to replicate processes typical of the human brain: associating data, recognising patterns, and developing new strategies to perform these tasks more effectively.

But when was AI born and what are the main models on which it is based?

The birth of Artificial Intelligence

The founding moment of Artificial Intelligence as a science and a new field of study dates back to 1956: the term was coined by the computer scientist and cognitive scientist John McCarthy during a summer seminar at Dartmouth College, attended by cognitive scientists, physicists, mathematicians, engineers and computer scientists who, in the following years, distinguished themselves through research on digital information and AI.

Symbolic Artificial Intelligence

Since its inception and for almost three decades, the most popular approach to artificial intelligence among scientists and researchers was to guide AI by giving it precise instructions to complete a set task. This is known as GOFAI (Good Old-Fashioned AI) or Symbolic Artificial Intelligence, and it is based on the manipulation of symbols as a way of approximating human intelligence. In this approach, knowledge is represented through declarative statements expressed in a logical-mathematical language. From a set of known facts (statements or declarations), the system deduces consequences, and in this way the reasoning process produces new knowledge. It is an approach that works well when dealing with stable systems of rules and relationships, such as those governing the game of chess, or mathematics.
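To make the idea concrete, here is a minimal sketch of symbolic reasoning: a handful of facts and hand-written if-then rules, from which new knowledge is deduced by forward chaining. The facts and rules (loosely inspired by castling in chess) are invented purely for illustration and are not taken from any real system.

```python
# Minimal sketch of symbolic (GOFAI) reasoning: knowledge is stored as
# explicit facts and hand-written if-then rules, and new knowledge is
# deduced by forward chaining. Facts and rules are illustrative only.

facts = {"king_on_e1", "rook_on_h1", "squares_between_empty"}

# Each rule maps a set of premises to a conclusion.
rules = [
    ({"king_on_e1", "rook_on_h1", "squares_between_empty"}, "castling_possible"),
    ({"castling_possible"}, "king_can_reach_g1"),
]

# Forward chaining: keep applying rules until no new fact can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains "castling_possible" and "king_can_reach_g1"
```

Every conclusion such a system reaches can be traced back to the rules that produced it, which is precisely the comprehensibility discussed further below.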

From the symbolic to the sub-symbolic

Since the 1980s, however, and with an exponential acceleration after 2012, a second AI paradigm has emerged, called sub-symbolic (Machine Learning) and based on the application of statistical or numerical procedures. Closely linked to neuroscience studies, this method offers a far more flexible way of explaining the world to a machine: rather than spelling everything out, it teaches the machine to understand the world on its own by enabling it to learn case by case. The process mimics human learning in many respects and relies on layers of artificial neural networks. In machine learning-based AI, the programmer essentially provides a learning method, which is applied to the data the machine has access to in order to automatically extract the decision rules to be applied to each concrete case.
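As a contrast to the rule-based example above, the following sketch shows the sub-symbolic idea in its simplest form: instead of writing rules, the programmer supplies a learning procedure (here a single-neuron perceptron, a toy stand-in for modern neural networks), and the decision rule is extracted automatically from labelled examples. The data is invented for illustration.

```python
# Minimal sketch of the sub-symbolic approach: the programmer supplies a
# learning procedure (the perceptron update rule) and the machine extracts
# the decision boundary from labelled examples. Toy data, for illustration.

# Examples: (feature_1, feature_2) -> label (0 or 1)
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.8, 0.9), 1), ((0.9, 0.7), 1)]

w = [0.0, 0.0]   # weights, learned from data rather than hand-coded
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Repeatedly adjust the weights whenever a prediction is wrong.
for _ in range(20):
    for x, y in data:
        error = y - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # learned labels: [0, 0, 1, 1]
```

The learned weights separate the two groups correctly, but unlike the symbolic rules above they carry no human-readable explanation of why a given input is assigned to one class rather than the other.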

Limits of symbolic and sub-symbolic artificial intelligence

In recent years, technological developments and the spread of AI applications have effectively established the dominance of the sub-symbolic approach, partly because of its advantages in scalability and its ability to handle contextual knowledge. However, this approach lacks one of the main assets of symbolic techniques: comprehensibility. Sub-symbolic techniques often produce predictors that are difficult for a human observer to understand, which makes decisions based on them hard to interpret. Conversely, symbolic models are easily comprehensible, but they have limitations both in performance and in their ability to learn, which has greatly limited their adoption.

The third way: hybrid AI models

In recent years, in an attempt to overcome the limitations of both approaches, a new field of research has emerged that aims to unify the symbolic and sub-symbolic paradigms and exploit their respective merits in a synergetic manner. These are the so-called hybrid models, which combine the techniques of the two approaches, exploiting a neural network's ability to process and learn from unstructured data while simultaneously applying logical-symbolic techniques. Furthermore, hybrid models make it possible to obtain excellent results with less data during training.
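A minimal sketch of how the two ingredients can be combined is shown below: a neural component maps unstructured input to symbols, and a symbolic layer then reasons over those symbols with explicit rules. Here the perceive() function is only a hypothetical stub standing in for a trained network, and the rules are invented for illustration; real neuro-symbolic systems integrate the two parts far more deeply.

```python
# Minimal sketch of a hybrid (neuro-symbolic) pipeline: a neural component
# turns unstructured input into symbols, and a symbolic layer reasons over
# them. Both perceive() and the rules are hypothetical placeholders.

def perceive(image: str) -> str:
    """Stand-in for a trained neural network that labels an input image.
    In a real system this would be a learned model, not a lookup table."""
    fake_model_output = {"img_cat.png": "cat", "img_dog.png": "dog"}
    return fake_model_output.get(image, "unknown")

# Symbolic knowledge: hand-written rules over the symbols the network emits.
rules = {
    "cat": "mammal",
    "dog": "mammal",
}

def classify(image: str) -> str:
    symbol = perceive(image)                 # sub-symbolic step: perception
    category = rules.get(symbol, "unknown")  # symbolic step: rule-based deduction
    return f"{image}: {symbol} -> {category}"

print(classify("img_cat.png"))  # img_cat.png: cat -> mammal
```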
