Brain Scans Show Why Our Mind’s Eye Sees The World So Differently to Everyday Vision

Researchers have discovered a neural overlap between human and machine that helps explain why what we see in our mind's eye differs from the information our eyes actually process when we look at something in the real world.

With the help of an fMRI scanner and an artificial neural network – an AI engine designed to mimic the brain – the new study draws parallels between the way the human brain works and the way a computer system works.

As well as explaining why a dog in your head doesn’t exactly match a picture of a real dog, the findings could have important implications for research into both mental health issues and the development of artificial neural networks.

“We know mental imagery is in some ways very similar to vision, but it can’t be exactly identical,” says neuroscientist Thomas Naselaris from the Medical University of South Carolina. “We wanted to know specifically in which ways it was different.”

The team used a generative neural network – one that can create pictures as well as identify them, given enough training data – and studied how it behaved as it was both trying to analyse sample images and produce its own.

This analysis was then compared with activity in human brains, as measured by an fMRI scanner. At different stages, volunteers were asked to look at images on a screen, and also to imagine mental pictures of their own inside their minds.

The neural activity in the artificial network and the human brain matched up, at least to a certain extent. The researchers were able to note similarities in the way that both artificial and human neural networks passed signals between lower, fuzzier levels of cognition and more precise, higher levels.

In terms of the human brain, looking at something involves precise signalling from the retina of the eye to the visual cortex of the brain. When we’re only imagining something, that signalling gets more blurred and less precise.

“When you’re imagining, brain activity is less precise,” says Naselaris. “It’s less tuned to the details, which means that the kind of fuzziness and blurriness that you experience in your mental imagery has some basis in brain activity.”

An example training image for a neural network. (Zachi Evenor/Wikimedia Commons/CC BY 4.0)

A comparison image generated by a computer. (Guenther Noack/Wikimedia Commons/CC BY 4.0)

Neural activity elsewhere in the brain, outside the visual cortex, seems to match up for both imagined and viewed images – a link that might help scientists better understand how our brains can suffer and recover from trauma.

In the case of post-traumatic stress disorder (PTSD), for example, those affected are often troubled by intrusive flashbacks and pictures in their minds. Once we can get a handle on why these imagined pictures are so vivid, we might be able to stop them.

The researchers acknowledge that there are limitations and alternative explanations for their results. For instance, it's possible that the subjects weren't recalling specific pictures so much as broad concepts. It's virtually impossible to determine exactly what a mind's eye representation of an image looks like to our brains, leaving room for interpretation.

Even so, the study offers lots of interesting data on how pictures inside our heads are represented in terms of neural activity – and how we might be able to train artificial neural networks to get better at mimicking the same trick.

“The extent to which the brain differs from what the machine is doing gives you some important clues about how brains and machines differ,” says Naselaris. “Ideally, they can point in a direction that could help make machine learning more brain-like.”

The research has been published in Current Biology.