Longstanding models of the brain describe it as something like a biological computer. According to this traditional picture, the brain processes information like a relay. Individual neural cells detect a stimulus, then pass that data along from one neuron to the next, through a sequence of gates.
The model isn’t wrong, but it leaves a lot unexplained, particularly how sensory cells in animals can react differently to the same stimulus. For example, a quick flash of light might normally activate a sensory cell in an animal, but the same cell might not activate if the animal’s attention is focused elsewhere. Researchers want to know why that happens.
In a recent paper, a team of researchers from the Salk Institute for Biological Studies in San Diego, California, offers a new mathematical model and a possible explanation. Rather than comparing an interaction between individual neurons to a relay, it might make more sense to compare it to ocean waves.
Information processing might in some cases be better described as an interaction of waves, explains Sergei Gepshtein, a scientist specializing in perceptual psychology and sensorimotor neuroscience and one of the authors of the study.
Instead of one neuron responding to a given stimulus, distributed patterns of neuronal activity across the brain form a wave pattern of alternating peaks and troughs, just like the peaks and troughs of electromagnetic waves or ocean waves.
And like those more familiar waves, waves of brain activity — what the researchers call neural waves — either augment or cancel each other when they meet.
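The additive behavior of waves is easy to illustrate. The following minimal Python sketch is not the researchers’ actual model; it simply shows how two sinusoidal waves augment each other when their peaks align and cancel when a peak meets a trough:

```python
import math

def wave(amplitude, phase, x):
    """A simple sinusoidal wave evaluated at position x."""
    return amplitude * math.sin(x + phase)

# Evaluate two waves meeting at the same point
x = math.pi / 2

# In phase: peaks align, so the waves augment each other
in_phase = wave(1.0, 0.0, x) + wave(1.0, 0.0, x)

# Half a cycle out of phase: a peak meets a trough, so they cancel
out_of_phase = wave(1.0, 0.0, x) + wave(1.0, math.pi, x)

print(in_phase)      # constructive interference: 2.0
print(out_of_phase)  # destructive interference: ~0.0
```

The same arithmetic applies whether the waves are ocean swells, electromagnetic fields, or, on the researchers’ account, distributed patterns of neuronal activity.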
“Sensory experiences arise in your mind as a result of this interaction,” explains Gepshtein.
The researchers tested their mathematical model physiologically and behaviorally. In the behavioral study, researchers briefly showed subjects two light patterns made up of alternating strips of black and white lines, called luminance gratings. Between the patterns, a faint vertical line, called a probe, appeared. Researchers then asked subjects whether the probe had appeared in the top or bottom half of the luminance gratings.
The subjects’ ability to detect the probe was better at some locations and worse at others. When researchers plotted the results, they formed the wave pattern that the mathematical model predicted. In other words, the ability to see the probe depended on how the neural waves were superimposed at any particular location.
There are many potential uses for this new framework for understanding perception. For example, the researchers suggest it could clarify how organisms, including humans, process spatial information.
While this study focused on visual perception, Gepshtein points out that neural waves are a property of many parts of the cerebral cortex, so scientists could use the model to understand other kinds of perception as well. They could also use it to design artificial intelligence, he says.
Gepshtein stresses that this new model does not replace the traditional one, but instead complements it.
“It’s a different way of thinking about how the brain processes information, and it helps to understand phenomena that were difficult to understand from the traditional point of view,” says Gepshtein.
A good analogy, he says, is wave-particle duality in physics and chemistry — the discovery that electromagnetic waves, including light, have properties of both particles and waves. When thinking about how the brain processes information, we can sometimes use the traditional model of individual neurons responding to stimuli. But in many cases, we can get a clearer picture of what’s going on by thinking of the process as a wave of neuronal activity.