Duke Scientists Create Brain Implant That May Enable Communication From Thoughts Alone
Prosthetic decodes signals from brain's speech center to predict what sound someone is trying to say.
To improve on past limitations, Cogan teamed up with fellow Duke Institute for Brain Sciences faculty member Jonathan Viventi, Ph.D., whose biomedical engineering lab specializes in making high-density, ultra-thin, and flexible brain sensors.
For this project, Viventi and his team packed an impressive 256 microscopic brain sensors onto a postage stamp-sized piece of flexible, medical-grade plastic. Neurons just a grain of sand apart can have wildly different activity patterns when coordinating speech, so it's necessary to distinguish signals from neighboring brain cells to help make accurate predictions about intended speech.
After fabricating the new implant, Cogan and Viventi teamed up with several Duke University Hospital neurosurgeons, who helped recruit four patients to test the implants. The experiment required the researchers to place the device temporarily in patients who were undergoing brain surgery for some other condition, such as treating Parkinson's disease or having a tumor removed. Time was limited for Cogan and his team to test drive their device in the OR.
"I like to compare it to a NASCAR pit crew," Cogan said. "We don't want to add any extra time to the operating procedure, so we had to be in and out within 15 minutes. As soon as the surgeon and the medical team said 'Go!' we rushed into action and the patient performed the task."
The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like "ava," "kug," or "vip," and then spoke each one aloud. The device recorded activity from each patient's speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.
Afterwards, Suseendrakumar Duraivel, the first author of the new report and a biomedical engineering graduate student at Duke, took the neural and speech data from the surgery suite and fed it into a machine learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.
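The article doesn't specify which algorithm Duraivel used, but the general recipe is standard: turn each trial's multichannel recording into a feature vector, then train a supervised classifier to label the sound the patient was trying to make. Below is a minimal, purely illustrative sketch of that step using scikit-learn with made-up stand-in data; the channel count, window size, sound labels, and model choice are assumptions for illustration, not details from the study.

```python
# Illustrative sketch only: stand-in data and a simple classifier,
# not the actual pipeline from the Duke study.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: 200 trials, each flattened from
# 256 channels x 20 time bins of neural activity,
# labeled with the intended sound.
X = rng.normal(size=(200, 256 * 20))
y = rng.choice(["g", "p", "b", "k"], size=200)

# Decoder: normalize features, then map brain activity to a sound label.
decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated accuracy, analogous to the per-sound scores reported.
scores = cross_val_score(decoder, X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.0%}")
```

With random stand-in data like this, the score hovers near chance (about 25% for four labels); chance level is the baseline against which any real decoding accuracy, like the figures below, is judged.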
For some sounds and participants, like /g/ in the word "gak," the decoder got it right 84% of the time when it was the first sound in a string of three that made up a given nonsense word.
Accuracy dropped, though, as the decoder parsed out sounds in the middle or at the end of a nonsense word. It also struggled if two sounds were similar, like /p/ and /b/.
Overall, the decoder was accurate 40% of the time. That may seem like a humble test score, but it was quite impressive given that similar brain-to-speech feats typically require hours' or days' worth of data to draw from. The speech decoding algorithm Duraivel used, however, was working with only 90 seconds of spoken data from the 15-minute test.
Duraivel and his mentors are excited about making a cordless version of the device with a grant from the National Institutes of Health.
"We're now developing the same kind of recording devices, but without any wires," Cogan said. "You'd be able to move around, and you wouldn't have to be tied to an electrical outlet, which is really exciting."
While their work is encouraging, Viventi and Cogan's speech prosthetic is still a long way from hitting the shelves.
"We're at the point where it's still much slower than natural speech," Viventi said of the technology, "but you can see the trajectory where you might be able to get there."
This work was supported by grants from the National Institutes of Health (R01DC019498, UL1TR002553), the Department of Defense (W81XWH-21-0538), the Klingenstein-Simons Foundation, and an Incubator Award from the Duke Institute for Brain Sciences.
CITATION: "High-resolution Neural Recordings Improve the Accuracy of Speech Decoding," Suseendrakumar Duraivel, Shervin Rahimpour, Chia-Han Chiang, Michael Trumpis, Charles Wang, Katrina Barth, Stephen C. Harward, Shivanand P. Lad, Allan H. Friedman, Derek G. Southwell, Saurabh R. Sinha, Jonathan Viventi, Gregory B. Cogan. Nature Communications, November 6, 2023. DOI: