Neuroscientists deciphered people’s thoughts using brain scans

Like Dumbledore’s wand, a brain scan can pull long strings of stories straight out of a person’s brain, but only with that person’s cooperation.

This “mind reading,” described May 1 in Nature Neuroscience, has a long way to go before it can be used outside of sophisticated laboratories. But the results could eventually lead to seamless devices that help people who can’t easily speak or otherwise communicate. The study also raises privacy concerns about unwelcome neural eavesdropping.

“I found it fascinating,” says Gopala Anumanchipalli, a neuroengineer at the University of California, Berkeley, who was not involved in the study. “It’s like, ‘Wow, we’re here,’” he says. “I was glad to see it.”

Unlike implantable devices that have shown recent promise, the new system requires no surgery. And unlike other external approaches, it produces continuous streams of words rather than drawing on a more limited vocabulary.

For the new study, three people each lay inside a bulky MRI machine for at least 16 hours. They listened to stories, mostly from the podcast The Moth, while functional MRI scans tracked changes in blood flow in the brain. These changes serve as proxies for brain activity, though they are slow and imperfect measures.
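That sluggishness is a core constraint of the method. A minimal Python sketch of the standard picture, assuming a textbook double-gamma hemodynamic response function (the exact model used in the study may differ): what the scanner records is, roughly, neural activity smeared out by a response that peaks seconds after the event.

    import numpy as np
    from scipy.stats import gamma

    def hrf(t):
        """Textbook double-gamma hemodynamic response function, t in seconds."""
        peak = gamma.pdf(t, 6)           # positive response peaking around 5 s
        undershoot = gamma.pdf(t, 16)    # delayed undershoot
        return peak - 0.35 * undershoot

    tr = 2.0                             # seconds between fMRI volumes
    t = np.arange(0, 32, tr)
    neural = np.zeros(60)
    neural[5] = 1.0                      # brief burst of neural activity at 10 s
    bold = np.convolve(neural, hrf(t))[:60]   # what the scanner actually sees
    print(f"BOLD peak at {bold.argmax() * tr} s, event at {5 * tr} s")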

With this neural data in hand, computational neuroscientists Alexander Huth and Jerry Tang of the University of Texas at Austin and their colleagues matched patterns of brain activity to specific words and ideas. The approach relied on a language model built with GPT, one of the forerunners that enabled today’s AI chatbots.
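In broad strokes, this kind of matching is an encoding model: a fitted map from a language model’s representation of the heard words to each brain location’s response. A hypothetical sketch of that idea (the array shapes, the random stand-in data, and the use of scikit-learn’s Ridge are illustrative assumptions, not the authors’ pipeline):

    import numpy as np
    from sklearn.linear_model import Ridge

    n_timepoints, n_features, n_voxels = 1000, 768, 5000
    X = np.random.randn(n_timepoints, n_features)  # stand-in for GPT features of heard words
    Y = np.random.randn(n_timepoints, n_voxels)    # stand-in for recorded fMRI responses

    encoder = Ridge(alpha=1.0).fit(X, Y)           # one linear map per brain location
    predicted = encoder.predict(X)                 # predicted brain activity for those words

Once fitted, such a model can score any candidate wording by how closely its predicted brain activity matches the recorded scan, which is what makes the working-backward step below possible.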

Once the researchers figured out which patterns of brain activity corresponded to which words in the stories, the team could work backward, using brain patterns to predict new words and ideas. The process was iterative: the decoder ranked the likelihood of candidate next words, then used the patterns of brain activity to help pick a winner, ultimately settling on the gist of the story.
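A self-contained Python sketch of that iterative loop, in the spirit of a beam search. The stub classes and the function names here (propose_next_words, score_against_brain) are hypothetical placeholders standing in for a real language model and encoding model; nothing below is the study’s code:

    import random

    class StubLM:
        """Stands in for a GPT-style language model."""
        VOCAB = ["she", "hasn't", "started", "learning", "to", "drive", "yet"]
        def propose_next_words(self, text, k=3):
            # A real system would return GPT log probabilities; we fake them.
            return [(w, -random.random()) for w in random.sample(self.VOCAB, k)]

    class StubEncoder:
        """Stands in for a fitted encoding model plus the recorded scan."""
        def score_against_brain(self, text, brain):
            # A real system would compare predicted vs. recorded fMRI activity.
            return -random.random()

    def decode_story(brain, lm, encoder, beam_width=5, n_steps=8):
        beam = [("", 0.0)]                  # (partial transcript, running score)
        for _ in range(n_steps):
            candidates = []
            for text, score in beam:
                # The language model ranks likely next words...
                for word, lm_logprob in lm.propose_next_words(text):
                    extended = (text + " " + word).strip()
                    # ...and the brain data arbitrates among the candidates.
                    fit = encoder.score_against_brain(extended, brain)
                    candidates.append((extended, score + lm_logprob + fit))
            candidates.sort(key=lambda c: c[1], reverse=True)
            beam = candidates[:beam_width]  # keep only the best partial transcripts
        return beam[0][0]                   # the gist, not a verbatim record

    print(decode_story(brain=None, lm=StubLM(), encoder=StubEncoder()))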

“It’s definitely not going to hit every word,” Huth says. The word error rate was actually quite high, at 92 to 94 percent. “But that doesn’t account for how it paraphrases things,” he says. “It gets the ideas.” For example, when a person heard, “I don’t have a driver’s license yet,” the decoder spat out, “She hasn’t even started learning to drive yet.”
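That figure is a word error rate: the word-level edit distance between the decoded transcript and the actual words, divided by the number of actual words. A minimal sketch of the standard definition (generic WER code, not the study’s evaluation script):

    def word_error_rate(reference, hypothesis):
        ref, hyp = reference.split(), hypothesis.split()
        # Levenshtein distance over words via dynamic programming.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # substitution
        return d[len(ref)][len(hyp)] / len(ref)

    print(word_error_rate("i don't have a driver's license yet",
                          "she hasn't even started learning to drive yet"))

On the study’s own example, the computed rate is 100 percent (7 word edits against 7 reference words), even though the gist clearly survives, which is exactly Huth’s point about paraphrase.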

A new attempt to decode the brain makes sense of what a person hears, but does not provide the exact wording. © JERRY TANG/BOARD OF REGENTS, UNIV. OF THE TEXAS SYSTEM

These responses suggested that the decoders struggle with pronouns, although the researchers don’t know why. “It doesn’t know who is doing what to whom,” Huth said during an April 27 news briefing.

The decoders were also able to approximate stories from people’s brains in two other scenarios: when people silently told themselves a rehearsed story, and when they watched a silent movie. The fact that these situations could be decoded was exciting, Huth says, because “it meant that what we were getting at with this decoder was not low-level language.” Instead, “we’re getting at the idea.”

Using computational models and brain scans, scientists were able to decode ideas from people’s brains as they listened to speech, watched a movie, or imagined themselves telling a story.

“This study is very impressive, and it gives us a glimpse of what might be possible in the future,” says Sarah Wandelt, a computational neuroscientist at Caltech who was not involved in the research.

Rapid progress in brain decoding may fuel debates about mental privacy, the researchers note in the study. “We know it can look creepy,” Huth says. “It’s amazing that we can put people in the scanner and read what they’re thinking.”

But the team found that the new method isn’t universal: each decoder was highly personalized and worked only for the person whose brain data had helped create it. Moreover, a person had to cooperate voluntarily for the decoder to identify ideas. If a listener wasn’t paying attention to an audio story, the decoder couldn’t pull that story from the brain signals. Participants could thwart eavesdropping attempts by simply ignoring the story and thinking about animals, doing math problems, or focusing on a different story.

“I’m glad that these experiments are being done to understand privacy,” says Anumanchipalli. “I think we have to be careful because it’s hard to go back and stop research after the fact.”
