Neuroscientists at Columbia University in New York say they have created a system that can translate human thought into recognizable speech, a development that could change not only medicine but also communication.
By monitoring the brain activity of subjects, researchers at Columbia's Mortimer B. Zuckerman Mind Brain Behavior Institute were able to train an artificial intelligence to convert thought patterns into comprehensible sentences, according to a paper published Tuesday in Scientific Reports. The authors see patients who have lost speech due to illness or trauma as the first adopters of the emerging technology.
"We have shown that, with the right technology, the thoughts of these people can be decoded and understood by any listener," said Nima Mesgarani, the paper's lead author.
After early attempts to turn brain activity into recognizable speech fell short, the scientists turned to a computer algorithm that can generate speech, called a vocoder. The algorithm improves the more it is "trained" on recordings of human speech.
Researchers translate brain signals directly into speech https://t.co/qEStGOoPOW
This breakthrough, which employs the power of speech synthesizers and artificial intelligence, can lead to new ways for computers to communicate directly with the brain. #ai #BCI #speech #Science
– Neuroscience News (@NeuroscienceNew) January 29, 2019
"It's the same technology used by Amazon Echo and Apple Siri to provide verbal answers to our questions," said Dr. Mesgarani, who is also a professor at Columbia's Fu Foundation School of Engineering and Applied Science.
The vocoder was trained to interpret brain activity with the help of Dr. Ashesh Dinesh Mehta, a neurosurgeon at the Northwell Health Neuroscience Institute on Long Island and a co-author of the paper.
"Working with Dr. Mehta, we asked epilepsy patients who were already undergoing brain surgery to listen to sentences spoken by different people while we measured patterns of brain activity," said Dr. Mesgarani. "These neural patterns trained the vocoder."
Once this training was complete, the next phase began. Patients listened to a person reading numbers from 0 to 9 while the algorithm scanned their brain activity and tried to translate it into sound. The result was a robotic voice reciting the numbers, which human listeners could understand and repeat with 75 percent accuracy.
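The 75 percent figure comes from a listener test: people hear the decoded audio and report which digit they think was spoken. A tiny sketch with hypothetical trial data shows how such an intelligibility score is tallied:

```python
# Hypothetical listener-test log: each pair records the digit that was
# played (decoded from brain activity) and the digit the listener heard.
trials = [(0, 0), (1, 1), (2, 3), (3, 3), (4, 4), (5, 5), (6, 9), (7, 7)]

correct = sum(1 for played, heard in trials if played == heard)
accuracy = correct / len(trials)
print(f"intelligibility: {accuracy:.0%}")  # 6 of 8 correct -> 75%
```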
That may sound modest, but Dr. Mesgarani said the result is "above any previous attempts." The scientists plan to improve the system further so that it can take as input the brain patterns of a person merely thinking about speech, rather than listening to it.
"It would be a game changer. Anyone who has lost the ability to speak, whether through injury or illness, would have a renewed chance to connect with the world around them," said Dr. Mesgarani.
The technology will also have to handle more complex words and sentences before it becomes practical. The team's ultimate goal is to create an implant that synthesizes speech directly from thought.