
You could call it ‘speaking your mind’ — your words detected in your brainwaves, decoded by an AI system, and produced as synthetic speech: a machine saying what you wanted to say milliseconds earlier. Years from now, the system could offer an alternative for people whose speech is impaired.
The prestigious science journal Nature reports that scientists at the University of California, San Francisco used a deep learning model to find the brain instructions that orchestrate the lips, tongue, jaw and larynx, then synthesized speech from those decoded movements.
Two examples are in the following audio file. Each passage is heard first as spoken by a human, then as it sounds when generated by the AI system.
Audio from Chang lab, UCSF Dept. of Neurosurgery
OUR TAKE
- An example of humans-with-AI that may become commonplace over time. Reports suggest general availability may be up to 10 years away while refinements are made.
- The AI system would be 15x faster than present methods — equalling the rate of normal human speech at 150 words a minute. It’s another example of the change in timescales ahead.
‘Making the leap from single syllables to sentences is technically quite challenging and is one of the things that makes the current work so impressive’
Chethan Pandarinath, Emory University in Atlanta, Georgia, commenting in the story
Brain signals translated into speech using artificial intelligence
NATURE | April 24
RELATED
Scientists turn brain signals into speech with help from AI
NBC NEWS MACH | April 26