The Long Search for a Brain-Computer Interface That Speaks Your Mind

Here’s the research setup: A woman speaks Dutch into a microphone while 11 tiny needles made of platinum and iridium record her brain waves.

The 20-year-old volunteer has epilepsy, and her doctors stuck those 2-millimeter-long bits of metal—each studded with up to 18 electrodes—into the front and left side of her brain in hopes of locating the origin point of her seizures. But that bit of neural micro-acupuncture is also a lucky break for a separate team of researchers because the electrodes are in contact with parts of her brain responsible for the production and articulation of spoken words.

That’s the cool part. After the woman talks (that’s called “overt speech”), and after a computer algorithm learns to match those sounds to the activity in her brain, the researchers ask her to do it again. This time she barely whispers, miming the words with her mouth, tongue, and jaw. That’s “intended speech.” And then she does it all one more time—but without moving at all. The researchers have asked her to merely imagine saying the words.

It’s a version of how people speak, but in reverse. In real life, we formulate silent ideas in one part of the brain, another part turns them into words, and still others control the movement of the mouth, tongue, lips, and larynx, which produce audible sounds in the right frequencies to make speech. Here, the computers let the woman’s mind jump the queue. They registered when she was think-talking—the technical term is “imagined speech”—and played, in real time, an audible signal interpolated from the activity coming from her brain. The sounds weren’t intelligible as words, and the work, published at the end of September, is still somewhat preliminary. But the simple fact that those sounds emerged at the millisecond speed of thought and action shows astonishing progress toward an emerging use for brain-computer interfaces: giving a voice to people who cannot speak.
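The article doesn’t spell out the decoding algorithm, but the general recipe in this kind of work is to learn a frame-by-frame mapping from neural features to audio while the person speaks aloud, then reuse that same mapping on the imagined-speech recordings. Below is a minimal sketch of that idea; the synthetic arrays, the feature and spectrogram dimensions, and the ridge-regression decoder are all illustrative assumptions, not the study’s actual method.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: in the real experiment the features would
    # come from the implanted electrodes, and the targets from a
    # time-aligned spectrogram of the recorded Dutch speech.
    n_frames, n_features = 2000, 128   # e.g., per-electrode activity per frame
    n_audio_bins = 23                  # e.g., mel-spectrogram bins per frame

    overt_neural = rng.standard_normal((n_frames, n_features))
    overt_audio = rng.standard_normal((n_frames, n_audio_bins))

    # Step 1: fit a linear map from neural frames to audio frames using
    # the overt-speech run, where both signals are available.
    decoder = Ridge(alpha=1.0).fit(overt_neural, overt_audio)

    # Step 2: during the imagined-speech run, push each new neural frame
    # through the same map; a vocoder stage would then turn the predicted
    # spectrogram frames into sound with millisecond-scale latency.
    imagined_neural = rng.standard_normal((200, n_features))
    predicted_audio_frames = decoder.predict(imagined_neural)
    print(predicted_audio_frames.shape)   # (200, 23)

The linear decoder here is just the simplest possible stand-in; published speech-decoding systems often use neural networks for the neural-to-spectrogram step, but the train-on-overt, apply-to-imagined structure is the same.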

That inability—from a neurological disorder or brain injury—is called “anarthria.” It’s debilitating and terrifying, but people do have a few ways to deal with it. Instead of direct speech, people with anarthria might use devices that translate the movement of other body parts into letters or words; even a wink will work. Recently, a brain-computer interface implanted into the cortex of a person with locked-in syndrome allowed them to translate imagined handwriting into an output of 90 characters a minute. Good but not great; typical spoken-word conversation in English is a relatively blistering 150 words a minute.
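To put those two numbers on the same scale, here’s the rough arithmetic, assuming the common convention of about five characters per English word (the study itself doesn’t give a words-per-minute figure):

    # Rough comparison of the handwriting BCI's output rate to conversation.
    # The 5-characters-per-word average is an assumed convention, not a
    # figure from the study.
    chars_per_minute = 90
    chars_per_word = 5
    bci_wpm = chars_per_minute / chars_per_word   # ~18 words per minute
    speech_wpm = 150
    print(f"BCI: {bci_wpm:.0f} wpm, speech: {speech_wpm} wpm, "
          f"gap: {speech_wpm / bci_wpm:.1f}x")    # gap: 8.3x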

The problem is that, like moving an arm (or a cursor), formulating and producing speech is really complicated. It depends on feedback, a 50-millisecond loop between when we say something and when we hear ourselves saying it. That’s what lets people do real-time quality control on their own speech. For that matter, it’s what lets humans learn to talk in the first place—hearing language, producing sounds, hearing ourselves produce those sounds (via the ear and the auditory cortex, a whole other part of the brain), and comparing what we’re doing with what we’re trying to do.
