Neuron Bursts Can Mimic a Famous AI Learning Strategy

But for this teaching signal to solve the credit assignment problem without hitting “pause” on sensory processing, their model required another key piece. Naud and Richards’ team proposed that neurons have separate compartments at their top and bottom that process the neural code in completely different ways.

“[Our model] shows that you really can have two signals, one going up and one going down, and they can pass one another,” said Naud.

To make this possible, their model posits that treelike branches receiving inputs on the tops of neurons listen only for bursts—the internal teaching signal—in order to tune their connections and decrease error. The tuning happens from the top down, just as in backpropagation, because in their model the neurons at the top regulate the likelihood that the neurons below them will send a burst. The researchers showed that when a network produces more bursts, neurons tend to strengthen their connections, whereas connections tend to weaken when bursts become less frequent. The idea is that the burst signal tells neurons they should be active during the task, strengthening their connections, if doing so decreases the error. An absence of bursts tells neurons they should be inactive and may need to weaken their connections.
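The gist of that learning rule can be sketched in a few lines of code. The Python below is a simplified, rate-based reading of the idea, not the paper's exact plasticity equations; the function name, the baseline burst probability, and the learning rate are all illustrative assumptions:

```python
import numpy as np

def burst_plasticity_update(w, pre_rate, post_burst_rate, post_event_rate,
                            baseline_burst_prob=0.2, lr=0.01):
    """Strengthen a connection when bursts exceed their expected share of
    postsynaptic events; weaken it when bursts fall short of that share."""
    expected_bursts = baseline_burst_prob * post_event_rate
    burst_surplus = post_burst_rate - expected_bursts  # acts as the error signal
    return w + lr * burst_surplus * pre_rate

# Example: a surplus of bursts potentiates, a deficit depresses.
w = np.array([0.5])
w_up = burst_plasticity_update(w, pre_rate=10.0, post_burst_rate=4.0,
                               post_event_rate=10.0)   # 4 bursts > expected 2
w_down = burst_plasticity_update(w, pre_rate=10.0, post_burst_rate=0.5,
                                 post_event_rate=10.0)  # 0.5 bursts < expected 2
print(w_up, w_down)  # w_up > w > w_down
```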

At the same time, the branches on the bottom of the neuron treat bursts as if they were single spikes—the normal, external-world signal—which allows them to keep sending sensory information upward in the circuit without interruption.
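This multiplexing of two signals in one spike train can be illustrated with a toy decoder. In the sketch below, spikes arriving closer together than a threshold count once as an event (the bottom-up stream read by the lower branches) and once as a burst (the teaching signal read by the upper branches); the function name and the 16-millisecond threshold are assumptions for illustration, not the paper's exact values:

```python
def decode_multiplexed(spike_times, burst_isi=0.016):
    """Read one sorted spike train (in seconds) as two streams: 'events',
    where a burst counts only once (the bottom-up sensory signal), and
    'bursts', the rarer top-down teaching signal."""
    events, bursts = 0, 0
    in_burst, last_t = False, None
    for t in spike_times:
        if last_t is not None and (t - last_t) < burst_isi:
            if not in_burst:        # second spike in quick succession
                bursts += 1         # marks this event as a burst
                in_burst = True
        else:
            events += 1             # isolated spike or start of a new event
            in_burst = False
        last_t = t
    return events, bursts

# A single spike followed by a three-spike burst: 2 events, 1 of them a burst.
print(decode_multiplexed([0.010, 0.100, 0.105, 0.110]))  # (2, 1)
```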

“In retrospect, the idea presented seems logical, and I think that this speaks for the beauty of it,” said João Sacramento, a computational neuroscientist at the University of Zurich and ETH Zurich. “I think that’s brilliant.”

Others had tried to follow a similar logic in the past. Twenty years ago, Konrad Kording of the University of Pennsylvania and Peter König of Osnabrück University in Germany proposed a learning framework with two-compartment neurons. But their proposal lacked many of the biologically relevant details found in the newer model, and it was only a proposal—they couldn’t prove that it could actually solve the credit assignment problem.

“Back then, we simply lacked the ability to test these ideas,” Kording said. He considers the new paper “tremendous work” and will be following up on it in his own lab.

With today’s computational power, Naud, Richards, and their collaborators successfully simulated their model, with bursting neurons playing the role of the learning rule. They showed that it solves the credit assignment problem in a classic task known as XOR, which requires learning to respond when exactly one of two inputs is 1. They also showed that a deep neural network built with their bursting rule could approximate the performance of the backpropagation algorithm on challenging image classification tasks. But there’s room for improvement: the backpropagation algorithm was still more accurate, and neither fully matches human capabilities.
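For reference, the XOR benchmark itself is small enough to write down. The sketch below trains a tiny network on it with plain backpropagation, the algorithm the burst-based rule approximates; it is not the paper's model, and the network size and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR: the target is 1 when exactly one of the two inputs is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A hidden layer is essential: no single-layer network can solve XOR,
# which is why it is a classic test of credit assignment.
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backward pass: error flows top-down
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # converges toward [0, 1, 1, 0]
```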

“There’s got to be details that we don’t have, and we have to make the model better,” said Naud. “The main goal of the paper is to say that the sort of learning that machines are doing can be approximated by physiological processes.”
