
Tell me what?! The AI system can decode brain signals into speech



AI algorithms can help scientists process brainwaves and convert them directly into speech, according to new research.

"Our voices help us connect with our friends, family and the world around us, and that's why losing power over the wounds or illness is so devastating," said Nima Mesgarani, lead author of a report published in the journal Science and Researcher at Columbia University. "Today's study has a potential way to restore that power, and we have shown that with the right technology, the thoughts of these people can be decoded and understood by any listener."

Neurons in the brain's cerebral cortex fire every time we listen to people speak – or even imagine people speaking. Exactly how the brain makes sense of incoming sound waves, or constructs a facsimile of that process when we merely imagine speech, is still unknown. Neuroscientists have shown, however, that the brain patterns emitted during such tasks can be pieced together to reconstruct spoken words. That spurred the idea of building a neuroprosthesis, a device that acts as a brain-computer interface.

A group of scientists attempted to improve a technique known as auditory stimulus reconstruction using neural networks. First, an autoencoder was trained on audio signals converted into spectrograms – representations of the different sound frequencies – from 80 hours of speech recordings.
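
To make that first step concrete, here is a minimal sketch of the general idea: turning speech audio into spectrograms and fitting a small autoencoder to them. This is not the study's code; the libraries (librosa, PyTorch), the mel-spectrogram settings, the model sizes and the names `SpectrogramAutoencoder` and `train` are all illustrative assumptions.

```python
# Sketch only: spectrogram extraction plus a tiny autoencoder over frames.
import librosa
import numpy as np
import torch
import torch.nn as nn

def log_mel_spectrogram(path, sr=16000, n_mels=80):
    """Load a speech recording and convert it to a log-mel spectrogram (frames x bins)."""
    audio, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return np.log(mel + 1e-6).T

class SpectrogramAutoencoder(nn.Module):
    """Compress each spectrogram frame to a low-dimensional code and reconstruct it."""
    def __init__(self, n_mels=80, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_mels, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_mels))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, frames, epochs=10, lr=1e-3):
    """Minimise reconstruction error over a stack of spectrogram frames."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    x = torch.tensor(frames, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)
        loss.backward()
        opt.step()
    return model
```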

Next, the researchers placed electrodes directly into the brains of five participants undergoing brain surgery for epilepsy, to record their electrical activity. All had normal hearing. Each listened to short stories for 30 minutes. The stories were randomly paused and the participants were asked to repeat the last sentence, providing data for vocoder training. The vocoder learned to map specific brain patterns to speech.
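
The heart of that mapping is a supervised regression from recorded brain activity to the speech the participant heard. The sketch below shows one plausible way to set that up; the electrode counts, context window, layer sizes and the names `BrainToSpeechDecoder` and `fit` are assumptions for illustration, not the paper's architecture.

```python
# Sketch only: regress neural features onto the matching speech-spectrogram frame.
import torch
import torch.nn as nn

class BrainToSpeechDecoder(nn.Module):
    """Map a flattened window of electrode features to one log-mel spectrogram frame."""
    def __init__(self, n_electrodes=128, context=9, n_mels=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_electrodes * context, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_mels),
        )

    def forward(self, ecog_window):
        # ecog_window: (batch, n_electrodes * context) flattened neural features
        return self.net(ecog_window)

def fit(decoder, ecog, target_spectrogram, epochs=20, lr=1e-3):
    """Supervised training on time-aligned (brain activity, heard speech) pairs."""
    opt = torch.optim.Adam(decoder.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(decoder(ecog), target_spectrogram)
        loss.backward()
        opt.step()
    return decoder
```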

The participants then listened to a string of forty digits – zero to nine. The recorded brain signals were passed through the vocoder to generate audio signals, and these samples were then fed back into the autoencoder for analysis so that the system could repeat the reconstructed digits.
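
Putting the pieces together, inference for a digit trial might look roughly like the sketch below, which reuses the hypothetical models above: decode the brain recording to spectrogram frames, pass them through the autoencoder as a cleanup step, and invert the result to a waveform. The Griffin-Lim-style mel inversion via librosa is an assumption about the synthesis step, not a description of the paper's vocoder.

```python
# Sketch only: brain recording -> decoded spectrogram -> autoencoder cleanup -> audio.
import numpy as np
import torch
import librosa

def reconstruct_audio(decoder, autoencoder, ecog_trial, sr=16000):
    with torch.no_grad():
        frames = decoder(torch.tensor(ecog_trial, dtype=torch.float32))
        cleaned = autoencoder(frames)               # smooth frames via the learned codes
    log_mel = cleaned.numpy().T                     # mel bins x frames
    mel = np.maximum(np.exp(log_mel) - 1e-6, 0.0)   # undo the log compression
    return librosa.feature.inverse.mel_to_audio(mel, sr=sr)  # Griffin-Lim-style inversion

# Usage idea: save the result with soundfile.write("digit.wav", audio, 16000) or similar.
```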

You can listen to an example here. It sounds a little robotic and tinny, and it only covers the digits zero to nine.

"We found that people can understand and repeat the sounds about 75 percent of the time, which is much more than any previous attempts." Sensitive vocoder and strong neural networks were sounds that the patients originally listened to with surprising accuracy, "said Mesagarani.

The experiment is still very simplistic, however. The system can only reconstruct signals from speech the participants listened to, not their own thoughts. It also only handles a recital of digits, not full words or sentences. The scientists hope to test their system with more complex words and sentences, and to find out whether it works when people talk or imagine speaking.

"In this scenario, if a user thinks" I need a glass of water, "our system could take brain signals generated by this idea and convert them into synthesized speech," Mesagarani said. "It would be a player who would lose his ability to talk, whether by injury or illness, a renewed opportunity to connect with the outside world." ®

