
Scientists teach AI to turn brain signals into speech




Researchers worked with epilepsy patients who had undergone brain surgery.

Pasieka/Science Photo Library/Getty Images

Neuroengineers have developed a breakthrough device that uses machine-learning neural networks to read brain activity and transform it into speech.

An article published Tuesday in the journal Scientific Reports describes how a team from Columbia University's Zuckerman Mind Brain Behavior Institute used deep-learning algorithms and the same technology that powers devices like Apple's Siri and the Amazon Echo to turn thought into "accurate and intelligible reconstructed speech." The study was reported earlier this month, but the journal article goes into far greater depth.

The brain-computer system could ultimately give patients who have lost the ability to speak a way to communicate verbally, using their thoughts to drive a synthesized robotic voice.

“We have shown that, with the right technology, these people's thoughts can be deciphered and understood by any listener,” Nima Mesgarani, the project’s principal investigator, said in a statement.

When we speak, our brains light up, sending electrical signals zipping around. If scientists can decipher those signals and understand how they relate to forming or hearing words, we move one step closer to translating them into speech. With enough understanding, and enough processing power, that could yield a device that translates thinking directly into speech.

And that is exactly what the team did, creating a “vocoder” that uses algorithms and neural networks to convert brain signals into speech.
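To make the decoding step concrete, here is a minimal, purely illustrative Python sketch: a regression model learns to map simulated multi-electrode recordings to frames of a speech spectrogram, the kind of representation a vocoder can turn into sound. All names, shapes, and data here are invented for illustration; the study itself used deep neural networks rather than this linear stand-in.

```python
import numpy as np

# Illustrative sketch only: learn a mapping from multi-electrode brain
# recordings to frames of a speech spectrogram, which a vocoder could then
# render as audio. Shapes and data are simulated, not from the study.

rng = np.random.default_rng(0)
n_frames, n_electrodes, n_bins = 2000, 128, 32

# Training data: electrode activity recorded while a patient listens to
# speech (X), time-aligned with the spectrogram of that speech (Y).
true_map = rng.standard_normal((n_electrodes, n_bins)) / np.sqrt(n_electrodes)
X = rng.standard_normal((n_frames, n_electrodes))
Y = X @ true_map + 0.1 * rng.standard_normal((n_frames, n_bins))

# Fit a ridge-regression decoder; deep networks play this role in the paper.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ Y)

# Decode unseen brain activity into spectrogram frames for the vocoder.
X_new = rng.standard_normal((10, n_electrodes))
decoded_frames = X_new @ W
print("decoded spectrogram frames:", decoded_frames.shape)  # (10, 32)
```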

To do so, the research team turned to five epilepsy patients who had already undergone brain surgery. The researchers attached electrodes to various exposed surfaces of each patient's brain, then had the patients listen to spoken sentences 40 seconds in length, repeated six times at random. Listening to the stories helped train the vocoder.

Then the patients listened to speakers counting from zero to nine while their brain signals were fed back through the vocoder. The vocoder algorithm, known as WORLD, then spat out its own sounds, which were cleaned up by a neural network, ultimately resulting in robotic-sounding speech that mimicked the counting. You can hear how it sounds here. It’s not perfect, but it’s certainly understandable.
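For a rough picture of that final synthesis stage, the toy Python below turns decoded spectrogram-like frames into an audible waveform by driving one sinusoid per frequency bin and overlap-adding windowed frames. This is not the WORLD vocoder or the paper's cleanup network, just a hand-rolled sketch of how frame-level parameters become sound; every constant here is an assumption.

```python
import numpy as np

# Toy "vocoder" stage, purely illustrative: convert decoded spectrogram-like
# frames into a waveform by summing one sinusoid per frequency bin and
# overlap-adding windowed frames. Not the WORLD vocoder used in the study.

sample_rate = 8000
frame_len = 256          # samples per synthesis frame
hop = frame_len // 2     # 50% overlap between consecutive frames

def synthesize(frames, freqs):
    """frames: (n_frames, n_bins) magnitudes; freqs: (n_bins,) centers in Hz."""
    out = np.zeros(hop * (len(frames) - 1) + frame_len)
    t = np.arange(frame_len) / sample_rate
    window = np.hanning(frame_len)
    for i, mags in enumerate(frames):
        # One sinusoid per bin, weighted by its decoded magnitude.
        frame = (mags[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(0)
        out[i * hop : i * hop + frame_len] += window * frame
    return out

n_bins = 32
freqs = np.linspace(100, 3500, n_bins)                  # bin frequencies
frames = np.abs(np.random.default_rng(1).standard_normal((40, n_bins)))
audio = synthesize(frames, freqs)
print("synthesized", audio.shape[0], "samples at", sample_rate, "Hz")
```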

“We found that people could understand and repeat the sounds about 75 percent of the time, which is well above any previous attempt,” Mesgarani said.

The researchers found that the accuracy of the reconstruction depended on how many electrodes were placed on the patient's brain and how long the vocoder was trained. As expected, more electrodes and longer training let the vocoder gather more data, which led to better reconstruction.
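That scaling behavior is easy to illustrate in simulation. In the hedged Python sketch below, each “electrode” is a noisy linear view of an underlying spectrogram, and decoding error falls as electrodes or training frames are added; the numbers are invented, and only the qualitative trend mirrors what the researchers describe.

```python
import numpy as np

# Invented simulation of the trend only: each "electrode" records a noisy
# linear projection of the underlying speech spectrogram, so more electrodes
# (more independent views) and more training frames both cut decoding error.

rng = np.random.default_rng(2)
n_bins = 16

def decoding_mse(n_electrodes, n_train, noise=4.0, n_test=500):
    mix = rng.standard_normal((n_bins, n_electrodes))  # spectrogram -> sensors

    def record(n):
        Y = rng.standard_normal((n, n_bins))           # "true" spectrogram
        X = Y @ mix + noise * rng.standard_normal((n, n_electrodes))
        return X, Y

    X, Y = record(n_train)
    W = np.linalg.solve(X.T @ X + np.eye(n_electrodes), X.T @ Y)  # ridge fit
    X_test, Y_test = record(n_test)
    return np.mean((X_test @ W - Y_test) ** 2)

for n_el in (16, 64, 256):
    for n_tr in (200, 2000):
        print(f"electrodes={n_el:3d}  train_frames={n_tr:4d}  "
              f"mse={decoding_mse(n_el, n_tr):.3f}")
```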

Looking ahead, the team wants to test what signals are produced when a person merely imagines speaking, as opposed to listening to speech. The researchers also hope to try a more complex set of words and sentences. Improving the algorithms with larger amounts of data could eventually lead to a brain implant that bypasses speech altogether, turning a person’s thoughts into words.

This would be a monumental step forward for many.

“This would give anyone who has lost the ability to speak, whether due to injury or illness, a new chance to connect with the outside world,” said Mesgarani.
