The successes of AI algorithms are inspiring new theories of how brains learn
Five decades of research into artificial neural networks have earned Geoffrey Hinton the moniker of the Godfather of artificial intelligence (AI). Work by his group at the University of Toronto laid the foundations for today's headline-grabbing AI models, including ChatGPT and LaMDA. These can write coherent (if uninspiring) prose, diagnose illnesses from medical scans and navigate self-driving cars. But for Dr Hinton, creating better models was never the end goal. His hope was that by developing artificial neural networks that could learn to solve complex problems, light might be shed on how the brain's neural networks do the same.
Brains learn by being subtly rewired: some connections between neurons, known as synapses, are strengthened, while others are weakened. But because the brain has billions of neurons, of which millions could be involved in any single task, scientists have puzzled over how it knows which synapses to tweak and by how much. Dr Hinton popularised a clever mathematical algorithm known as backpropagation to solve this problem in artificial neural networks. But it was long thought to be too unwieldy to have evolved in the human brain. Now, as AI models are beginning to look increasingly human-like in their abilities, scientists are questioning whether the brain might do something similar after all.
Working out how the brain does what it does is no easy feat. Much of what neuroscientists understand about human learning comes from experiments on small slices of brain tissue, or handfuls of neurons in a Petri dish. It's often not clear whether living, learning brains work by scaled-up versions of these same rules, or if something more sophisticated is taking place. Even with modern experimental techniques that let neuroscientists track hundreds of neurons at a time in live animals, it is hard to reverse-engineer what is really going on.
One of the most prominent and longstanding theories of how the brain learns is Hebbian learning. The idea is that neurons which activate at roughly the same time become more strongly connected, a principle often summarised as "cells that fire together wire together". Hebbian learning can explain how brains learn simple associations: think of Pavlov's dogs salivating at the sound of a bell. But for more complicated tasks, like learning a language, Hebbian learning seems too inefficient. Even with huge amounts of training, artificial neural networks trained in this way fall well short of human levels of performance.
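The Hebbian rule is simple enough to sketch in a few lines of code. In the toy example below, the neuron activities, the learning rate and the function names are invented for illustration; no particular model is being reproduced.

```python
# A minimal sketch of a Hebbian update: the connection between two neurons
# strengthens when their activities coincide. All values are illustrative.
def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    # "Cells that fire together wire together": the change is proportional
    # to the product of pre- and post-synaptic activity.
    return weight + learning_rate * pre_activity * post_activity

w = 0.0
for _ in range(10):  # bell (pre) and salivation (post) repeatedly co-occur
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
print(round(w, 2))  # → 1.0: the synapse has strengthened with each pairing
```

Note that the update uses only the activity of the two neurons it connects; nothing tells the synapse whether the association actually improved the animal's behaviour, which is one intuition for why the rule struggles with complex tasks.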
Today's top AI models are engineered differently. To understand how they work, imagine an artificial neural network trained to spot birds in images. Such a model would be made up of thousands of synthetic neurons, arranged in layers. Pictures are fed into the first layer of the network, which sends information about the content of each pixel to the next layer through the AI equivalent of synaptic connections. Here, neurons may use this information to pick out lines or edges before sending signals to the next layer, which might pick out eyes or feet. This process continues until the signals reach the final layer responsible for getting the big call right: "bird" or "not bird".
Integral to this learning process is the so-called backpropagation-of-error algorithm, often known as backprop. If the network is shown an image of a bird but mistakenly concludes that it is not, then, once it realises the gaffe, it generates an error signal. This error signal moves backwards through the network, layer by layer, strengthening or weakening each connection in order to minimise any future errors. If the model is shown a similar image again, the tweaked connections will lead the model to correctly declare: "bird".
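The loop described above (a forward pass, an error signal at the output, then a backward sweep of weight adjustments) can be sketched with a tiny two-layer network. Everything here, including the data, layer sizes and learning rate, is invented as a toy stand-in for the bird detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the bird detector: eight 4-"pixel" images and a single
# output neuron declaring bird (1) or not-bird (0). The labelling rule
# below is invented purely to give the network something to learn.
X = rng.random((8, 4))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float).reshape(-1, 1)

W1 = rng.normal(0.0, 0.5, (4, 5))   # input -> hidden connections
W2 = rng.normal(0.0, 0.5, (5, 1))   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss():
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

before = loss()
for _ in range(3000):
    # Forward pass: signals flow layer by layer towards the final call.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: the error at the output is sent back through the
    # layers, and every connection is nudged to reduce future mistakes.
    err_out = (out - y) * out * (1 - out)       # error signal at the output
    err_hid = (err_out @ W2.T) * h * (1 - h)    # error propagated backwards
    W2 -= 0.1 * h.T @ err_out
    W1 -= 0.1 * X.T @ err_hid
after = loss()                                  # error shrinks with training
```

Notice that the backward sweep reuses the forward weights (`W2.T`): exactly the mirror-image pathway that, as discussed below, critics argued the brain lacks.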
Neuroscientists have always been sceptical that backpropagation could work in the brain. In 1989, shortly after Dr Hinton and his colleagues showed that the algorithm could be used to train layered neural networks, Francis Crick, the Nobel laureate who co-discovered the structure of DNA, published a takedown of the theory in the journal Nature. Neural networks using the backpropagation algorithm were biologically "unrealistic in almost every respect", he said.
For one thing, neurons mostly send information in one direction. For backpropagation to work in the brain, a perfect mirror image of each network of neurons would therefore have to exist in order to send the error signal backwards. In addition, artificial neurons communicate using signals of varying strengths. Biological neurons, for their part, send all-or-nothing spikes of fixed strength, which the backprop algorithm is not designed to deal with.
All the same, the success of neural networks has renewed interest in whether some kind of backprop happens in the brain. There have been promising experimental hints it might. A preprint study published in November 2023, for example, found that individual neurons in the brains of mice do seem to be responding to unique error signals, one of the crucial ingredients of backprop-like algorithms long thought lacking in living brains.
Scientists working at the boundary between neuroscience and AI have also shown that small tweaks to backprop can make it more brain-friendly. One influential study showed that the mirror-image network once thought necessary does not have to be an exact replica of the original for learning to take place (albeit more slowly for big networks). This makes it less implausible. Others have found ways of bypassing a mirror network altogether. If artificial neural networks can be given biologically realistic features, such as specialised neurons that can integrate activity and error signals in different parts of the cell, then backprop can occur with a single set of neurons. Some researchers have also made alterations to the backprop algorithm to allow it to process spikes rather than continuous signals.
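The first of those tweaks, known in the AI literature as feedback alignment, can be illustrated by changing one line of a backprop network: errors travel backwards through a fixed random matrix instead of a mirror copy of the forward weights. As before, the data, sizes and rates below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network, as in a standard backprop sketch, except that the backward
# path uses a fixed random matrix B rather than the transpose of W2.
X = rng.random((8, 4))
y = (X.sum(axis=1, keepdims=True) > 2.0).astype(float)  # invented rule

W1 = rng.normal(0.0, 0.5, (4, 5))
W2 = rng.normal(0.0, 0.5, (5, 1))
B = rng.normal(0.0, 0.5, (5, 1))    # fixed random feedback path, never learned

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss():
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

before = loss()
for _ in range(3000):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err_out = (out - y) * out * (1 - out)
    # The only change from backprop: errors flow back through B, not W2.T,
    # so no exact mirror image of the forward wiring is required.
    err_hid = (err_out @ B.T) * h * (1 - h)
    W2 -= 0.1 * h.T @ err_out
    W1 -= 0.1 * X.T @ err_hid
after = loss()
```

The surprise, reported in the study, is that learning still works: the forward weights gradually come into rough alignment with the fixed feedback path, so the backward error signals remain useful.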
Other researchers are exploring rather different theories. In a paper published in Nature Neuroscience earlier this year, Yuhang Song and colleagues at Oxford University laid out a method that flips backprop on its head. In conventional backprop, error signals lead to adjustments in the synapses, which in turn cause changes in neuronal activity. The Oxford researchers proposed that the network could change the activity in the neurons first, and only then adjust the synapses to fit. They called this prospective configuration.
When the authors tested out prospective configuration in artificial neural networks, they found that the networks learned in a much more human-like way than models trained with backprop: more robustly, and with less training. They also found that the network offered a much closer match for human behaviour on other very different tasks, such as one that involved learning how to move a joystick in response to different visual cues.
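The "activity first, synapses second" idea can be sketched with a tiny network of linear neurons: clamp the output at the desired answer, let the hidden activity settle into values consistent with that outcome, and only then adjust the synapses to fit. This is a heavy simplification in the spirit of the paper's networks, not the Oxford group's exact method; the sizes, rates and data are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)

x0 = rng.random(3)                 # input activity (clamped)
target = np.array([1.0, 0.0])      # desired output activity (clamped)
W1 = rng.normal(0.0, 0.3, (4, 3))  # input -> hidden synapses
W2 = rng.normal(0.0, 0.3, (2, 4))  # hidden -> output synapses

def output_error():
    return float(np.sum((target - W2 @ (W1 @ x0)) ** 2))

before = output_error()
for _ in range(200):
    # Step 1: with the output held at the target, let the hidden activity
    # settle towards values consistent with producing that answer.
    x1 = W1 @ x0
    for _ in range(20):
        e1 = x1 - W1 @ x0          # mismatch with what layer 1 predicts
        e2 = target - W2 @ x1      # mismatch at the clamped output
        x1 -= 0.1 * (e1 - W2.T @ e2)
    # Step 2: only now adjust the synapses to fit the settled activity.
    e1 = x1 - W1 @ x0
    e2 = target - W2 @ x1
    W1 += 0.05 * np.outer(e1, x0)
    W2 += 0.05 * np.outer(e2, x1)
after = output_error()             # the output drifts towards the target
```

The contrast with backprop is the ordering: here neuronal activity moves first, and the synaptic changes follow, rather than synaptic changes driving new activity.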
Learning the hard way

For now, though, all of these theories are just that. Designing experiments to prove whether backprop, or any other algorithm, is at play in the brain is surprisingly tricky. For Aran Nayebi and colleagues at Stanford University this seemed like a problem AI could solve.
The scientists used one of four different learning algorithms to train over a thousand neural networks to perform a variety of tasks. They then monitored each network during training, recording neuronal activity and the strength of synaptic connections. Dr Nayebi and his colleagues then trained a supervisory meta-model to deduce the learning algorithm from the recordings. They found that the meta-model could tell which of the four algorithms had been used by recording just a few hundred virtual neurons at various intervals during learning. The researchers hope such a meta-model could do something similar with equivalent recordings of a real brain.
Identifying the algorithm, or algorithms, that the brain uses to learn would be a big step forward for neuroscience. Not only would it shed light on how the body's most mysterious organ works, it could also help scientists build new AI-powered tools to try to understand specific neural processes. Whether it could lead to better AI algorithms is unclear. For Dr Hinton, at least, backprop is probably superior to whatever happens in the brain.