Two people regain speech thanks to brain implants
With experimental devices, researchers were able to translate neural signals into words in two women: one with ALS and another who had suffered a stroke
Jaimie Henderson became interested at a very early age in people who lose the ability to communicate. In a video call presenting his latest research in this field, the Stanford University researcher recalled that, when he was five years old, his father was involved in a very serious car accident. “He kept telling jokes, and I laughed at his jokes, but I couldn’t understand him because his ability to speak was so impaired,” he said. That experience led Henderson to study how neurons encode movement and speech, and then to search for ways to restore those abilities in people with neurological damage. Henderson is the lead author of one of two papers published today in Nature that offer hope that many people, like his father, may regain their ability to communicate.
The first study was conducted at Stanford University with patient Pat Bennett, a 68-year-old woman who was diagnosed with amyotrophic lateral sclerosis (ALS) in 2012. Of the disease’s various manifestations, Bennett has a form that has allowed her to continue to move, albeit with increasing difficulty, but has robbed her of her speech. Although her brain’s ability to generate language is intact, the muscles of her lips, tongue, larynx and jaw do not allow her to say anything.
That problem was solved, at least in part, with two sensors, each smaller than a fingernail, implanted in her brain to collect signals from individual neurons in two regions associated with language: the ventral premotor cortex and Broca’s area (the latter turned out not to be useful for the purposes of the study). Using these implants and software, the researchers linked the brain signals to Bennett’s attempts to pronounce words. After four months of training, the system combined all this information with a computer language model, allowing the patient to produce sentences at 62 words per minute. That figure is just under half the speed of normal speech and, with a vocabulary of more than 100,000 words, there was roughly one error for every four words spoken, but the results are three times better than those of similar communication systems tested to date.
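Both papers describe variations on the same two-stage recipe: a classifier turns brain activity, frame by frame, into probabilities over speech units (phonemes, in the Stanford system), and a language model then chooses the most plausible words. The Python sketch below illustrates that idea only; the phoneme inventory, three-word vocabulary, probabilities and naive one-frame-per-phoneme alignment are all invented and bear no relation to the published decoders.

```python
import numpy as np

# Toy inventory and vocabulary, invented for this sketch.
PHONEMES = ["HH", "EH", "L", "OW", "Y", "UW"]
VOCAB = {
    "hello": ["HH", "EH", "L", "OW"],
    "yellow": ["Y", "EH", "L", "OW"],
    "you": ["Y", "UW"],
}
LM_PRIOR = {"hello": 0.5, "yellow": 0.1, "you": 0.4}  # stand-in language model

def decode_word(frame_probs: np.ndarray) -> str:
    """Pick the vocabulary word that best explains per-frame phoneme probabilities.

    frame_probs: (n_frames, n_phonemes) array of classifier outputs, one row
    per time frame; alignment here is naive (one frame per phoneme).
    """
    best_word, best_score = None, -np.inf
    for word, phones in VOCAB.items():
        if len(phones) != frame_probs.shape[0]:
            continue
        # "Neural" score: log-likelihood of the word's phoneme sequence...
        ll = sum(np.log(frame_probs[i, PHONEMES.index(p)])
                 for i, p in enumerate(phones))
        # ...combined with the language model's prior over words.
        score = ll + np.log(LM_PRIOR[word])
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Simulate a noisy classifier that mostly recognizes the phonemes of "hello".
frames = np.full((4, len(PHONEMES)), 0.05)
for i, p in enumerate(VOCAB["hello"]):
    frames[i, PHONEMES.index(p)] = 0.75
print(decode_word(frames))  # -> hello
```

Even with the classifier putting only 75% of its mass on each correct phoneme, the language-model prior is enough to rule out the similar-sounding “yellow,” which is, in miniature, why both teams pair neural decoding with a language model.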
The second study, led by Edward Chang of the University of California, San Francisco (UCSF), obtained similar results with a somewhat different system. In this case, a brain implant consisting of 253 microelectrodes collected signals from a broader set of brain regions in a woman named Ann, who lost her speech more than 17 years ago due to a stroke. The system reached 78 words per minute with a base vocabulary of just over 1,000 words. The error rate was 25.5% when vocal tract movements were included to reconstruct the words, and 54.4% when brain signals were translated directly into speech via a synthesizer. Although this is still far from a practical solution for such conditions, the results were substantially better than those of previous experiments.
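The error figures quoted in both studies are word error rates, the standard speech-recognition metric: the word-level edit distance (substitutions, insertions and deletions) between the decoded sentence and the intended one, divided by the length of the intended sentence. A minimal, self-contained implementation, with a made-up example sentence:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words (Levenshtein distance on words).
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four yields the "one error for every four words"
# level of performance reported for the Stanford system:
print(word_error_rate("i want some water", "i want some waiter"))  # 0.25
```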
The UCSF team also wanted to add an avatar to their brain-machine interface because, as researcher Sean Metzger explained, “the goal is to restore the ability to communicate and connect with loved ones, not just to help transmit a few words. When you speak, there is a sound, emphasis and other subtleties that are lost when there is only text.” This personalized avatar, which would translate other communicative elements, such as facial expression, from brain signals, would help improve the patient’s connection with her interlocutors. To recreate the voice, the team used a recording of Ann speaking at her wedding, before she suffered the stroke.
Significant progress toward a practical solution
In a joint online presentation, both research teams stated that their results were comparable and that it was striking to see for the first time that both methods of collecting signals (one highly localized, the other sampling from more areas) show that these technologies can offer a practical solution. Videos from the trials show that the patients’ communication is still not fluid, but the authors of the two studies believe their results validate each other and that they are on the right track. Three years ago, Chang’s group demonstrated that its method could decode a handful of words in people with paralysis; since then, progress has been rapid.
So far, only around fifty people have been implanted with microelectrode brain-computer interfaces to enable communication. Beyond increasing the speed of communication, future improvements include developing wireless devices that do not require patients to be connected to a machine. It will also be necessary to explore whether these systems can restore speech in people who are completely locked in, reestablishing communication from their brain signals alone.
To achieve these goals, researchers will also need to work with more patients, beyond the two women who participated in the studies published today in Nature. For example, scientists need to find out whether what the algorithms learn during the tedious hours of training with one person can be used to decode speech from a different person’s brain, and to study whether the brain signals a patient produces while interpreting what others say can interfere with the generation of their own speech.