Recently developed artificial intelligence (AI) models are capable of many impressive feats, including recognizing images and producing human-like language. But just because an AI can behave like humans doesn’t mean it can think or understand like humans.
As a researcher who studies how humans understand and reason about the world, I think it is important to emphasize that the way AI systems “think” and learn is fundamentally different from the way humans do – and we have a long way to go before AI can really think like us.
A widespread misconception
AI development has produced systems that behave in strikingly human-like ways. The language model GPT-3 can produce text that is often indistinguishable from human speech. Another model, PaLM, can generate explanations for jokes it has never seen before.
Most recently, a general-purpose AI known as Gato has been developed that can perform hundreds of tasks, including captioning images, answering questions, playing Atari video games, and even controlling a robot arm to stack blocks. And DALL-E is a system trained to produce modified images and artwork from text descriptions.
These breakthroughs have led to bold claims about the potential of such AI, and about what it can tell us about human intelligence.
For example, Nando de Freitas, a researcher at Google’s AI company DeepMind, argues that scaling up existing models will be enough to produce human-level artificial intelligence. Others have echoed this view.
In all the excitement, it is easy to assume that human-like behavior means human-like understanding. But there are several important differences between how AI systems and humans think and learn.
Neural nets vs the human brain
Most recent AI systems are built from artificial neural networks, or “neural nets” for short. The term “neural” is used because these networks are inspired by the human brain, in which billions of cells called neurons form complex webs of connections with one another, processing information as they fire signals back and forth.
Neural nets are a highly simplified version of this biology. A real neuron is replaced by a simple node, and the strength of the connection between two nodes is represented by a number called a “weight”.
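To make the simplification concrete, here is a minimal sketch of a single artificial “neuron”: a node that combines its inputs using connection weights and squashes the result through an activation function. The specific numbers and the sigmoid activation are illustrative choices, not details from the article.

```python
import math

def node(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed into the range (0, 1) by a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs feeding one node through two weighted connections.
output = node(inputs=[0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(output, 3))  # prints 0.535
```

Training a network means nothing more than adjusting those weight numbers until the outputs look right.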
With enough connected nodes stacked in enough layers, neural nets can be trained to recognize patterns and even to “generalize” to stimuli that are similar (but not identical) to those they have seen before. Simply put, generalization refers to an AI system’s ability to take what it has learned from some data and apply it to new data.
Being able to identify features, recognize patterns, and generalize from results is central to the success of neural nets – and mimics techniques humans use for such tasks. Nevertheless, there are important differences.
Neural nets are typically trained by “supervised learning”. They are presented with many examples of an input and the desired output, and the connection weights are gradually adjusted until the network “learns” to produce the desired output.
To learn a language task, a neural net might be presented with a sentence one word at a time, and will gradually learn to predict the next word in the sequence.
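The idea of learning next-word prediction from examples can be illustrated with a deliberately tiny toy. Real language models adjust billions of connection weights by gradient descent; this sketch merely counts which word follows which, but the supervision signal is the same one the article describes: the next word in the training text. The example sentences are made up.

```python
from collections import Counter, defaultdict

def train(sentences):
    """Record, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1  # the "desired output" is the next word
    return follows

def predict(follows, word):
    """Predict the most frequently observed next word."""
    return follows[word].most_common(1)[0][0]

model = train(["the cat sat on the mat", "the cat ate the fish"])
print(predict(model, "the"))  # prints "cat" ("cat" follows "the" most often)
```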
This is very different from how humans typically learn. Most human learning is “unsupervised”, meaning we are not explicitly told what the “correct” response to a given stimulus is. We have to work this out for ourselves.
For example, children are not given instruction on how to speak, but learn language through a complex process of exposure to adult speech, imitation and feedback.
Another difference is the sheer scale of the data used to train AI. The GPT-3 model was trained on 400 billion words, mostly drawn from the internet. At a rate of 150 words per minute, it would take a human more than 5,000 years of nonstop reading to get through this much text.
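A quick back-of-the-envelope check of that scale, using the two figures above (400 billion words, 150 words per minute):

```python
# Back-of-the-envelope check of the reading-time comparison.
WORDS = 400e9   # words in GPT-3's training data
RATE = 150      # words per minute for a human reader

minutes = WORDS / RATE
years = minutes / (60 * 24 * 365)  # reading nonstop, day and night
print(round(years))  # over 5,000 years
```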
Such calculations suggest that humans cannot possibly learn the same way AI does. We have to make more efficient use of smaller amounts of data.
Neural nets can learn in ways we can’t
Another more fundamental difference relates to the way neural nets learn. To match a stimulus with a desired response, neural nets use an algorithm called “backpropagation” to pass errors backward through the network, allowing the weights to be adjusted accurately.
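Backpropagation can be sketched on the smallest possible “network”: one input, one weight, one output. The error flows backward as a gradient, and the weight is nudged to shrink it. The numbers and learning rate here are illustrative.

```python
def train(x, target, weight, learning_rate=0.1, steps=100):
    """Fit a single connection weight so that weight * x approaches target."""
    for _ in range(steps):
        output = weight * x                  # forward pass
        error = output - target              # how wrong the output was
        gradient = error * x                 # backward pass: error propagated to the weight
        weight -= learning_rate * gradient   # adjust the weight to reduce the error
    return weight

w = train(x=2.0, target=6.0, weight=0.5)
print(round(w, 3))  # converges to 3.0, since 3.0 * 2.0 == 6.0
```

Each step is a tiny, precise numerical correction; nothing in human experience obviously corresponds to receiving an exact error signal routed backward through every connection.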
Some researchers have proposed that the brain might use a modified version of backpropagation, but so far there is no evidence that human brains can use such learning methods.
Instead, humans learn by forming structured mental concepts, in which many different properties and associations are linked together. For example, our concept of “banana” includes its size, its yellow colour, our knowledge of it being a fruit, how to hold it, and so on.
As far as we know, AI systems do not form conceptual knowledge in this way. They rely solely on extracting complex statistical associations from their training data, and then applying these in similar contexts.
Efforts are underway to build AI systems that combine different types of input (such as images and text) – but it remains to be seen whether this will be enough for these models to learn the same kinds of rich mental representations that humans use to understand the world.
We still don’t know much about how humans learn, understand and reason. However, what we do know indicates that humans perform these tasks very differently to AI systems.
For example, many researchers believe we will need new approaches, and a more fundamental insight into how the human brain works, before we can build machines that truly think and learn like humans.