Brain signals translated into speech using artificial intelligence | NATURE


You could call it ‘speaking your mind’ — your words detected from your brainwaves, then decoded with an AI system and artificially produced, a machine saying what you wanted to say milliseconds earlier. Years from now, the system could provide an alternative to people whose speech is impaired.

The prestigious science journal Nature reports that scientists at the University of California, San Francisco used a deep learning model to identify the brain signals that orchestrate the lips, tongue, larynx, and jaw during speech. Once identified, the words a human subject intended to say could be reproduced by a voice synthesizer.

Two examples are in the following audio file. Each passage is heard first as spoken by a human, then as it sounds when generated by the AI system.

Audio from Chang lab, UCSF Dept. of Neurosurgery

OUR TAKE

  • An example of humans-with-AI that may become commonplace over time. Reports suggest general availability may be up to 10 years away while refinements are made.
  • The AI system would be 15x faster than present methods — equaling the rate of normal human speech at 150 words a minute. It’s another example of the change in timescales ahead.

‘Making the leap from single syllables to sentences is technically quite challenging and is one of the things that makes the current work so impressive’

Chethan Pandarinath, Emory University in Atlanta, Georgia, commenting in the story

SEE FULL STORY

Brain signals translated into speech using artificial intelligence
NATURE | April 24, 2019 | by Giorgia Guglielmi

RELATED

Scientists turn brain signals into speech with help from AI
NBC NEWS MACH | April 26, 2019 | by Jaclyn Jeffrey-Wilensky
