A team of neuroscientists, neurosurgeons and engineers at Duke University in North Carolina has developed a new technology that can translate brain signals into speech.
The device—known as a speech prosthetic—is significantly faster than the best speech-decoding technologies available today. It offers hope to patients who have lost their ability to speak.
“This technology would help patients suffering from debilitating neurological disorders such as ALS [amyotrophic lateral sclerosis] and locked-in syndrome, who have lost the ability to speak and communicate,” Gregory Cogan, a professor of neurology at Duke University and one of the lead researchers on the project, told Newsweek. “The current tools available to allow them to communicate are generally very slow and cumbersome.”

Brain decoder

For this project, Cogan teamed up with fellow Duke researcher Jonathan Viventi, who runs a biomedical engineering lab that specializes in creating high-density, ultra-thin and flexible brain sensors. The team packed 256 microscopic brain sensors onto a piece of flexible, medical-grade plastic the size of a postage stamp.
“Neural speech prostheses work by directly reading the brain signals that control speech motor movement, then translating those signals into readable outputs that can be used to create speech sounds,” Cogan said. “They read your intention to speak and translate that intention into sound.”

“These devices would be fitted through a small craniotomy in the skull and implanted directly onto the motor cortex of the brain. We are currently working on a project that will allow one of these devices to work wirelessly, so that patients could move around freely while using it,” Cogan added.
To test the implant, the team recruited four patients who were already undergoing brain surgery for other conditions. The experiment was fast-paced: the device was placed temporarily on the patients’ brains, and they were asked to repeat a series of simple words out loud.
“I like to compare it to a NASCAR pit crew,” Cogan said. “We don’t want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said ‘Go!’, we rushed into action and the patient performed the task.”
Afterward, Suseendrakumar Duraivel, a biomedical engineering graduate student at Duke, fed the data into a machine learning algorithm to see how accurately it could predict the sounds being made based solely on the patients’ recorded brain activity. The results were published in the journal Nature Communications on November 6.
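At a high level, this kind of decoding is a classification problem: given a snapshot of activity across many electrodes, predict which speech sound the patient was producing. The sketch below is purely illustrative—it uses synthetic data and a simple nearest-centroid classifier, not the team’s actual recordings or model—but it shows the shape of the task:

```python
import numpy as np

# Illustrative only: synthetic "neural features" stand in for recordings
# from a high-density electrode array; labels stand in for speech sounds.
rng = np.random.default_rng(0)
n_electrodes = 256  # matches the 256-sensor array described above
phonemes = ["aa", "iy", "uw", "m"]

# Each speech sound gets a characteristic (made-up) activity pattern.
centers = rng.normal(size=(len(phonemes), n_electrodes))

def synth_trials(n_per_class):
    """Generate noisy trials around each sound's activity pattern."""
    X, y = [], []
    for k in range(len(phonemes)):
        X.append(centers[k] + 0.5 * rng.normal(size=(n_per_class, n_electrodes)))
        y += [k] * n_per_class
    return np.vstack(X), np.array(y)

X_train, y_train = synth_trials(40)
X_test, y_test = synth_trials(10)

# Nearest-centroid decoder: predict the sound whose mean training
# pattern is closest to each held-out trial's activity.
centroids = np.stack(
    [X_train[y_train == k].mean(axis=0) for k in range(len(phonemes))]
)
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y_test).mean()
print(f"decoding accuracy: {accuracy:.0%}")
```

On clean synthetic data like this, decoding is nearly perfect; the hard part in practice is that real neural signals are far noisier and far less neatly separated, which is why higher-resolution sensors matter.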
“We were surprised at how good the results were,” Cogan said. “[This] technology demonstrates a very large improvement over current technology: we achieved 57 times higher spatial resolution and 48 percent higher neural signal strength compared to standard recordings. This increased signal quality improved our ability to read speech brain signals by 35 percent compared to standard tools.
“We expected results that were better than previous methods, but it is very promising to see the results pan out and it really opens the door for better neural speech prostheses in the near future,” Cogan added.
Overall, the decoder was accurate 40 percent of the time. The team members hope to refine their technology further, while also developing a wireless version of the device to allow patients to move around without restrictions.
“We’re at the point where it’s still much slower than natural speech,” Viventi told Duke Magazine. “But you can see the trajectory where you might be able to get there.”
“The next steps are to get FDA [Food and Drug Administration] approval for our devices, so that we can put them in patients long-term to enable the restoration of their speech and communicative abilities,” Cogan said.