Scientists can now reportedly "decode" people's thoughts without touching their heads.
Earlier mind-reading techniques depended mainly on electrodes implanted deep in the brain. A report posted to the preprint server bioRxiv on September 29 describes a new approach that relies instead on a non-invasive brain-scanning technology called functional magnetic resonance imaging (fMRI).
fMRI tracks the flow of oxygenated blood through the brain; because active brain cells require more energy and oxygen, this blood flow serves as an indirect measure of brain activity.
By its nature, this scanning method cannot capture brain activity in real time, because brain cells fire electrical signals much faster than blood moves through the brain.
Remarkably, however, the study's authors found that they could still use this imperfect proxy to decode the gist of people's thoughts, even though the result is not a word-for-word translation. "If you asked any cognitive neuroscientist in the world 20 years ago whether this was possible, they would have laughed at you," said Alexander Huth, a senior author of the study and a neuroscientist at the University of Texas at Austin.
In the new study, which has not yet been peer-reviewed, the team scanned the brains of one woman and two men, all in their 20s and 30s. Each participant listened to a total of 16 hours of different podcasts and radio shows over several sessions in the scanner.
The team then fed these scans into a computer algorithm they call a "decoder," which compares patterns in the audio with the recorded patterns of brain activity.
Huth said the algorithm can take an fMRI recording and generate a story from it that matches the original plot of the podcast or radio show quite well.
In other words, the decoder can infer which story each participant heard from their brain activity alone.
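The decoding idea described above can be illustrated with a toy sketch: this is not the authors' actual model, just a minimal, hypothetical pattern-matching example in which each candidate phrase has a made-up "predicted brain response" vector, and the decoder picks the candidate whose prediction best matches the observed activity pattern.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical predicted-response vectors for three candidate phrases
# (invented numbers for illustration only).
candidates = {
    "a dog chased the ball": [0.9, 0.1, 0.3],
    "rain fell on the roof": [0.2, 0.8, 0.5],
    "she opened the letter": [0.4, 0.3, 0.9],
}

# Simulated "observed" activity pattern.
observed = [0.85, 0.15, 0.35]

# Decode: choose the candidate whose predicted response best matches.
best = max(candidates, key=lambda p: cosine(candidates[p], observed))
print(best)  # → a dog chased the ball
```

The real system works with far richer representations and continuous language, but the core move is the same: compare predicted brain responses against measured ones and keep the best match.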
Still, the algorithm does make mistakes, such as mixing up characters' pronouns and confusing first and third person. It "knows what is going on very accurately, but not who is doing these things," Huth said.
In other tests, the algorithm described fairly accurately the plot of a silent film that participants watched in the scanner, and it could even retell stories that participants merely imagined in their minds.
In the long run, the team aims to develop this technology into brain-computer interfaces for people who cannot speak or type.