With brain scanners and AI, US researchers have managed to at least roughly capture certain types of thoughts in willing subjects. In certain experimental situations, a decoder they developed was able to use fMRI images to roughly reproduce what was going through the participants' minds, the team writes in the journal “Nature Neuroscience”.
This brain-computer interface, which does not require surgery, could one day help people who have lost their ability to speak, for example as a result of a stroke, the researchers hope. However, experts are skeptical.
The study authors from the University of Texas emphasize that their technology cannot be used to secretly read out thoughts.
How does it work?
Brain-computer interfaces (BCIs) are based on the principle of reading human thoughts through technical circuits, processing them and translating them into movement or language. In this way, paralyzed people could control an exoskeleton by thought, or people with locked-in syndrome could communicate with the outside world. However, many of the corresponding systems currently being researched require the surgical implantation of electrodes.
In the new approach, a computer forms words and sentences based on brain activity. The researchers trained this speech decoder by having three test subjects listen to stories for 16 hours while lying in a functional magnetic resonance imaging (fMRI) scanner. fMRI makes changes in blood flow in brain areas visible, which in turn are an indicator of the activity of the neurons.
In the next step, the subjects heard new stories while their brains were examined again in the fMRI tube. The previously trained speech decoder was now able to create sequences of words from the fMRI data which, according to the researchers, reproduced the content of what was heard largely correctly.
The system did not translate the information recorded in the fMRI into individual words. Rather, it used the relationships recognized during training, together with artificial intelligence (AI), to assign the measured brain activity to the most likely phrases in new stories.
Rainer Goebel, head of the Department of Cognitive Neuroscience at Maastricht University in the Netherlands, explains the approach in an independent assessment: “A central idea of the work was to use an AI language model to narrow down the number of possible phrases that are consistent with a pattern of brain activity.”
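The idea Goebel describes can be sketched in miniature: candidate phrases are turned into features, the trained encoding model predicts the brain response each phrase would produce, and the decoder keeps the phrase whose predicted response best matches the measured one. Everything below (the candidate phrases, the tiny bag-of-words features, the random encoding matrix) is a made-up illustration, not the study's actual models.

```python
# Toy sketch of phrase selection: pick the candidate phrase whose
# *predicted* brain response best matches the measured fMRI pattern.
# All components here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_features, n_voxels = 8, 20
W = rng.normal(size=(n_features, n_voxels))    # stands in for a trained encoding model

vocab: dict[str, int] = {}

def phrase_features(phrase: str) -> np.ndarray:
    # Stand-in for a real language-model embedding: a tiny bag of words.
    vec = np.zeros(n_features)
    for word in phrase.split():
        idx = vocab.setdefault(word, len(vocab))
        vec[idx % n_features] += 1.0
    return vec

candidates = [
    "she started learning to drive",
    "he bought a new car",
    "the weather was nice",
]

# Pretend this pattern came from the scanner while the subject heard
# the first phrase.
measured = phrase_features(candidates[0]) @ W

def score(phrase: str) -> float:
    # Correlation between the predicted and the measured activity.
    predicted = phrase_features(phrase) @ W
    return float(np.corrcoef(predicted, measured)[0, 1])

best = max(candidates, key=score)
```

In this toy setup the matching phrase scores a perfect correlation; in the real study the match is only approximate, which explains paraphrases like the driver's-license example below.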
Still buggy
In a press conference on the study, co-author Jerry Tang illustrated the results of the tests: The decoder reproduced the sentence “I don’t have my driver’s license yet” as “She hasn’t even started learning to drive”. According to Tang, the example illustrates a difficulty: “The model is very bad with pronouns – but we don’t yet know why.”
According to Rainer Goebel, the decoder is successful overall insofar as many of the selected phrases in new, i.e. untrained, stories contain words from the original text, or at least words with a similar meaning.
“But there were also quite a few errors, which is very bad for a full brain-computer interface, since for critical applications – for example communication with locked-in patients – it is particularly important not to generate false statements.” Even more errors occurred when the subjects were asked to imagine a story themselves, or to watch a short animated silent film while the decoder tried to reproduce its events.
Skepticism among experts
For Goebel, the results of the presented system are overall too poor for it to be suitable as a trustworthy interface: “I dare to predict that fMRI-based BCIs will (unfortunately) remain limited to research work with a few test subjects in the future, as was the case in this study.”
Christoph Reichert from the Leibniz Institute for Neurobiology is also skeptical: “If you look at the examples of the presented and reconstructed text, it quickly becomes clear that this technology is still a long way from reliably generating an ‘imagined’ text from brain data.” Nevertheless, the study hints at what could be possible if measurement technology improves.
There are also ethical concerns: depending on future developments, measures to protect mental privacy could become necessary, the authors themselves write. For now, however, tests with the decoder showed that it only works with the subjects' cooperation, both during training and in subsequent use.
“If they counted in their heads during decoding, named animals or thought of another story, the process was sabotaged,” Jerry Tang explains. The decoder also performed poorly when the model had been trained on a different person.