Our brain signals are a window into our souls. More prosaically, the interface between the brain and the computer that reads those signals, together with the algorithms that process them, is such a window. Researchers at Columbia University have been working to decode what the brain hears and, in particular, to translate the signals generated by the auditory cortex into coherent speech.
The researchers used intracranial electroencephalography (iEEG), also known as electrocorticography (ECoG), in which an implant is placed directly on the surface of the brain to record its signals. Algorithms based on deep neural networks were then used to turn the recorded data into reasonably clear speech.
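To make the pipeline concrete, here is a minimal, purely illustrative sketch of decoding a speech spectrogram from multichannel brain recordings. The Columbia team used deep neural networks and a speech vocoder; as a simplifying assumption, this sketch substitutes a linear ridge-regression decoder, and all data (channel counts, spectrogram size, the recordings themselves) are simulated, not real.

```python
# Illustrative sketch only: a simple ridge regression stands in for the
# deep-network decoder described in the article; all data are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Simulated recording: 1000 time frames from 64 ECoG channels,
# decoded into a 32-bin speech spectrogram (all sizes are assumptions).
n_frames, n_channels, n_spec_bins = 1000, 64, 32
ecog = rng.standard_normal((n_frames, n_channels))

# Simulated "ground truth" spectrogram that the neural signals encode
# linearly, plus a small amount of noise.
true_weights = rng.standard_normal((n_channels, n_spec_bins))
spectrogram = ecog @ true_weights + 0.1 * rng.standard_normal((n_frames, n_spec_bins))

# Fit a linear decoder (ridge regression) from neural features to spectrogram.
lam = 1.0
W = np.linalg.solve(ecog.T @ ecog + lam * np.eye(n_channels),
                    ecog.T @ spectrogram)

# Reconstruct the spectrogram and score it by per-bin correlation,
# a common reconstruction-quality metric in this literature.
decoded = ecog @ W
corr = np.mean([
    np.corrcoef(decoded[:, k], spectrogram[:, k])[0, 1]
    for k in range(n_spec_bins)
])
print(f"mean reconstruction correlation: {corr:.2f}")

# A real system would feed the decoded spectrogram to a vocoder to
# synthesize an audible waveform; that step is omitted here.
```

The key design point this sketch captures is that decoding is a regression from neural activity to an acoustic representation, with the vocoder handling the final acoustics-to-audio step.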
Because such experiments cannot be performed on healthy volunteers, the researchers worked with epilepsy patients who were scheduled for brain surgery. These patients' brains were already accessible via craniotomy, which made it possible to test the technology without additional operations.
“We found that people could understand and repeat the sounds about 75% of the time, which is well beyond any previous attempts,” said Dr. Nima Mesgarani, the senior author of the paper published in the journal Scientific Reports, in a statement. “The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy.”
Here is a sound reconstruction of one person counting numbers, derived exclusively from that person's brain waves:
[Embedded audio clip: reconstructed speech sample]
Study in the journal Scientific Reports: “Towards reconstructing intelligible speech from the human auditory cortex”
Via: Columbia University
Image: Nima Mesgarani, PhD. Credit: John Abbott.