
New system translates brain signals to speech



For the first time, scientists have created a system that translates thoughts into intelligible, recognizable speech, a development that could help people who cannot speak regain their ability to communicate with the outside world.

By monitoring someone's brain activity, the technology developed by scientists at Columbia University in the United States can reconstruct the words a person hears with unprecedented clarity.

The breakthrough, which harnesses the power of speech synthesizers and artificial intelligence, could lead to new ways for computers to communicate directly with the brain.

It also lays the groundwork for helping people who cannot speak, such as those living with amyotrophic lateral sclerosis (ALS) or recovering from a stroke, regain their ability to communicate with the outside world, the researchers say.

“Our voices help connect us to our friends, family and the world around us, which is why losing the power of one's voice due to injury or illness is so devastating,” said Nima Mesgarani of Columbia University in the US.

“With today's study, we have a potential way to restore that power. We've shown that, with the right technology, these people's thoughts could be decoded and understood by any listener,” said Mesgarani, the principal investigator of the study, published in the journal Scientific Reports.

Decades of research have shown that when people speak, or even imagine speaking, telltale patterns of activity appear in their brain.

A distinct pattern of signals also emerges when we listen to someone speak, or imagine listening.

Experts, attempting to record and decode these patterns, see a future in which thoughts need not remain hidden inside the brain but could instead be translated into spoken words at will.

Accomplishing this feat, however, has not been easy. Early efforts to decode brain signals focused on simple computer models that analyzed spectrograms, visual representations of sound frequencies.
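For readers unfamiliar with the term, a spectrogram can be computed from any audio recording; the short Python sketch below, which uses NumPy and SciPy, is purely illustrative of what such a time-frequency representation is, and the file name and window settings are assumptions rather than details from the study.

```python
# Illustrative only: compute a spectrogram (time-frequency image) of an audio clip.
# The input file and analysis parameters are hypothetical, not from the study.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

sample_rate, waveform = wavfile.read("speech_sample.wav")  # hypothetical input file
if waveform.ndim > 1:                       # mix stereo down to mono if needed
    waveform = waveform.mean(axis=1)

# Short-time Fourier analysis: ~25 ms windows with ~10 ms hops, typical for speech
frequencies, times, power = spectrogram(
    waveform,
    fs=sample_rate,
    nperseg=int(0.025 * sample_rate),
    noverlap=int(0.015 * sample_rate),
)

log_power = 10 * np.log10(power + 1e-10)    # convert to decibels for readability
print(log_power.shape)                      # (frequency bins, time frames)
```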

Because this approach failed to produce anything resembling intelligible speech, the team turned instead to a vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking.
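A minimal sketch of the general idea, not the study's actual model or data, is shown below: a regression model is trained to map features extracted from neural recordings to the acoustic parameters a vocoder needs, after which the vocoder can resynthesize audio from the predicted parameters. The arrays, dimensions and the choice of scikit-learn here are illustrative assumptions.

```python
# Illustrative sketch only: learn a mapping from neural features to vocoder-style
# acoustic parameters. All data, shapes and the model choice are assumptions,
# not the researchers' actual setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data:
#   neural_features: one row of brain-signal features per audio frame
#   acoustic_params: the vocoder parameters describing the same frame
rng = np.random.default_rng(0)
neural_features = rng.normal(size=(5000, 128))   # placeholder stand-in data
acoustic_params = rng.normal(size=(5000, 32))    # placeholder stand-in data

# A small feed-forward network performs frame-by-frame regression.
model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200)
model.fit(neural_features, acoustic_params)

# At test time, the predicted parameters would be handed to a vocoder for playback.
predicted_params = model.predict(neural_features[:10])
print(predicted_params.shape)   # (10, 32)
```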

“This is the same technology used by Amazon Echo and Apple Siri to give verbal answers to our questions,” said Mesgarani.

Next, the researchers plan to test more complicated words and sentences, and they want to run the same tests on brain signals emitted when a person speaks or imagines speaking.

Ultimately, they hope their system could be part of an implant, similar to those worn by some epilepsy patients, that translates the wearer's thoughts directly into words.

