Does deep learning need symbols and logic?

The history of Artificial Intelligence (AI) is the history of the struggle between symbolic and neural AI: between the logical manipulation of knowledge encoded in symbols and deep learning. "Deep learning" models learn the rules themselves from large amounts of data, without being explicitly programmed for the task.
Deep learning still intoxicates and excites me. At the same time, however, I see more and more that deep learning has its limits in terms of intelligence:
Neural models, no matter their size, can only learn representations of the features of their training data. Generative neural models such as GANs or the GPT Transformer language models can reassemble these features into something unprecedented, into non-existent faces or stories, but only from these features; nothing more is possible:
It is like a robot assembling new cars from a large mountain of car parts. It can combine parts from different cars into completely new vehicles, but it can only use the parts it finds in its mountain of car parts.
Ever bigger Transformer language models are being built with more and more parameters, trained on more and more data: GPT-3 has 175 billion parameters, more than a mouse has synapses in its brain. The model was trained on 500 billion words of Internet text.
Left: A fully connected neural network (also called a feedforward network, the simplest "deep learning" model). The strengths of the connections between the neurons (shown as dots) are called weights. These weights are the parameters of the model; they are adjusted during training on a large number of examples until the model gives a satisfactory answer to its task. The parameters of our brain are its synapses, which are adjusted when we learn. Right: The growth of the large Transformer language models, in billions of parameters.
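If you want to see in code what "weights as adjustable parameters" means, here is a minimal sketch in Python (my illustration, not from the figure): a tiny fully connected network with one hidden layer, trained by gradient descent to learn the XOR function. The layer sizes, the learning rate and the XOR task are purely illustrative assumptions.

```python
import numpy as np

# A tiny fully connected (feedforward) network: 2 inputs -> 4 hidden -> 1 output.
# The entries of W1, b1, W2, b2 are the "weights": the parameters of the model.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # input-to-hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden-to-output weights
b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training examples: the XOR function (an illustrative toy task, not from the post).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 1.0  # learning rate, an illustrative choice
for step in range(10000):
    # Forward pass: the network's current answer, given its current weights.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: how should each weight change to reduce the error?
    err = out - y                         # difference from the desired answer
    d_out = err * out * (1 - out)         # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Adjust the weights a little; this adjustment is the "learning".
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# After training, the outputs should be close to the desired [0, 1, 1, 0].
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```

Real deep learning models work the same way in principle; they just stack many more layers and, as in GPT-3, adjust billions of such weights instead of a handful.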
Such Transformer language models are becoming more and more eloquent, but they do not have, and will not acquire, reason, morality, or common sense, no matter how big we make them.
For them not to produce racist and misogynistic sentences, they would have to be trained exclusively on texts that do not drip with racism and other prejudices. Unfortunately, there are no such texts on the Internet.
In his new article "Deep Learning Is Hitting a Wall", psychologist, AI expert and author Gary Marcus advocates hybrid models of symbolic AI and deep learning. And I, too, now think that the only way to teach the large Transformer language models morals is with a symbolic set of rules. What do you think?

Dear visitor,

Welcome to my Brain & AI SciLogs blog.

I want to write here about all possible aspects of artificial intelligence research. I am very happy about every comment and every discussion about it, because as my mother often said:
"As long as language lives, man is not dead."
I often post updates about artificial intelligence, artificial neural networks, and machine learning on my Facebook page: Machine Learning
Here's something about my career: I studied chemistry at the Technical University of Munich and then did my doctorate there at the Chair for Theoretical Chemistry on the formation of the genetic code and double-strand coding in nucleic acids.
After my PhD, I continued my research there for a few years on the genetic code and the complementary coding on both strands of nucleic acids:
Neutral adaptation of the genetic code to double-strand coding.
Keywords for my scientific work: molecular evolution, theoretical molecular biology, bioinformatics, information theory, genetic coding.
I am currently a lecturer in artificial intelligence at the SRH Fernhochschule and the Spiegel Academy, AI keynote speaker, writer, stage writer and science communicator.
Among other things, I am two-time runner-up at the German-language poetry slam championships.
My book "Doktorspiele" was made into a film by 20th Century FOX and ran successfully in German cinemas in 2014. The new edition of the book was published by Digital Publishers.
My non-fiction book about artificial intelligence, "Is that intelligent or can that go away?", was released in October 2020.
Tessloff-Verlag publishes my children's crime novels "Data detectives", beautifully illustrated by Marek Blaha, with many references to AI, robots and digital worlds.
Have fun with my blog and all the discussions here :-).

Jaromir
