
A neuroprosthesis allows a person with ALS to 'speak' and 'sing' in real time.

One of the most devastating consequences of amyotrophic lateral sclerosis (ALS) is the loss of the ability to communicate. Some patients have described the disease as being locked inside their own bodies, unable to communicate.

Although there are no treatments capable of reversing the disease, technology is fortunately beginning to ease these patients' confinement. A pioneering system has now enabled a person with ALS to speak in real time.

Until now, such systems worked more like text messaging. The new system, presented in the journal Nature, allows for more natural conversations.

Although this is just one case (the participant is enrolled in the BrainGate2 clinical trial at UC Davis Health), his ability to communicate through a computer was made possible by an investigational brain-computer interface (BCI). The new technology can instantly translate brain activity into speech when a person attempts to speak, effectively creating a digital vocal tract.

The system allowed the patient to "talk" to his family through a computer in real time, change his intonation, and "sing" simple melodies.

Sergey Stavisky, the paper's senior author, compares the experience to a real-time voice call. Neuroprosthesis users will thus be able to participate more actively in a conversation: for example, they will be able to interrupt, and others will be less likely to accidentally interrupt them, Stavisky said.

Decoding brain signals is at the heart of the new technology

The investigational brain-computer interface (BCI) consists of four microelectrode arrays surgically implanted in the region of the brain responsible for speech production.

The devices record the activity of neurons in the brain and send it to computers that interpret the signals to reconstruct the voice.

"The main barrier to synthesizing speech in real time was not knowing exactly when and how the person with speech loss was trying to speak," says Maitreyee Wairagkar, first author of the study. "Our algorithms map neural activity to the desired sounds at each moment. This allows us to synthesize the nuances of speech and give the participant control over the cadence of their BCI voice."

[Image: The patient using the interface. Credit: UC Regents]

The brain-computer interface translated the study participant's neural signals into audible speech, played through a speaker, in just one-fortieth of a second (25 milliseconds). That delay is similar to the one a person experiences when speaking and hearing their own voice.

The technology also allowed the participant to say new words (words the system didn't yet know) and use interjections. He was able to modulate the intonation of his computer-generated voice to ask a question or emphasize specific words in a sentence.

The participant also took steps toward varying pitch by singing simple, short melodies.

Advanced artificial intelligence algorithms are what make it possible to translate brain activity into synthesized speech instantly.

The authors explain that the new system's algorithms were trained on data collected while the participant attempted to pronounce sentences displayed on a computer screen. This gave the researchers a known target for what he was trying to say.
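Because the participant was cued with on-screen sentences, each stretch of recorded neural activity comes paired with a known target, which is what makes supervised training possible. The sketch below illustrates that idea with a simple least-squares fit; the data shapes and the linear model are assumptions for illustration, not the study's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cued-sentence session: for every 25 ms frame, neural
# features (X) aligned with the acoustic features of the prompted
# sentence (Y), which serve as the supervised training target.
n_frames, n_channels, n_mel = 10_000, 256, 80
X = rng.normal(size=(n_frames, n_channels))   # recorded neural features
Y = rng.normal(size=(n_frames, n_mel))        # known target: cued sentence

# Fit a frame-wise linear map from neural activity to sound parameters.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)     # shape (n_channels, n_mel)

# At run time, the same map is applied to each new frame as it arrives.
new_frame = rng.normal(size=(1, n_channels))
acoustic = new_frame @ W                      # one 25 ms slice of "voice"
```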

"Our voice is part of what defines us as people. Losing the ability to speak is devastating for people living with neurological conditions," says David Brandman, the neurosurgeon who performed the participant's implant.

The results of this research offer hope to those who want to speak but cannot. However, the researchers note that while the findings are promising, brain-to-speech neuroprosthetics are still in their early stages.
