Science decodes ‘internal voices’


Researchers have demonstrated a striking method for reconstructing words based on the brain waves of patients thinking of those words. The technique, reported in PLoS Biology, relies on gathering electrical signals directly from patients’ brains.

Based on signals recorded while patients listened to speech, a computer model was used to reconstruct the sounds of words that patients were thinking of. The method may in future help comatose and locked-in patients communicate.

Several approaches have in recent years suggested that scientists are closing in on methods to tap into our very thoughts; the current study achieved its result by implanting electrodes directly into a part of participants’ brains.

In a 2011 study, participants with electrodes in direct contact with their brains were able to move a cursor on a screen simply by thinking of vowel sounds. Functional magnetic resonance imaging, a technique that tracks blood flow in the brain, has also shown promise for identifying which words or ideas someone may be thinking about.

By studying patterns of blood flow related to particular images, Jack Gallant’s group at the University of California, Berkeley showed in September that those patterns can be used to guess images being thought of – recreating “movies in the mind”.

Now, Brian Pasley of the University of California, Berkeley and a team of colleagues have taken that “stimulus reconstruction” work one step further. “This is inspired by a lot of Jack’s work,” Dr Pasley said. “One question was… how far can we get in the auditory system by taking a very similar modelling approach?”

The team focused on an area of the brain called the superior temporal gyrus, or STG. This broad region is not just part of the hearing apparatus but one of the “higher-order” brain regions that help us make linguistic sense of the sounds we hear.

The team monitored the STG brain waves of 15 patients who were undergoing surgery for epilepsy or tumours, while playing audio of a number of different speakers reciting words and sentences.

The trick is disentangling the chaos of electrical signals that the audio evoked in the patients’ STG regions. To do that, the team employed a computer model that mapped out which parts of the brain were firing, and at what rate, when different frequencies of sound were played.
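
The flavour of that modelling step can be sketched with a toy “encoding” model: a linear fit from a sound spectrogram to per-electrode activity. Everything below – the data, the dimensions, the use of ordinary least squares – is invented for illustration; the study’s actual model was more sophisticated than this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: T time bins, F sound-frequency bands, E electrodes.
T, F, E = 2000, 32, 16
spectrogram = rng.random((T, F))  # stimulus: sound energy per frequency band

# Simulated "ground truth": each electrode responds linearly to the spectrogram.
true_weights = rng.normal(size=(F, E))
neural = spectrogram @ true_weights + 0.1 * rng.normal(size=(T, E))

# Encoding model: least-squares fit mapping sound frequencies to the
# activity recorded at each electrode.
weights, *_ = np.linalg.lstsq(spectrogram, neural, rcond=None)

# The fitted weights estimate which frequencies drive which electrodes.
predicted = spectrogram @ weights
```

With enough clean data, the fitted weights closely recover the simulated ones; real intracranial recordings are far noisier and the true stimulus–response relationship is not linear.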

With the help of that model, when patients were presented with words to think about, the team was able to guess which word the participants had chosen. They were even able to reconstruct some of the words, turning the brain waves they saw back into sound on the basis of what the computer model suggested those waves meant.
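
The reconstruction idea can be sketched the same way in reverse: fit a “decoding” map from neural activity back to the sound spectrogram on trials where the audio is known, then apply it to new activity. Again, all data and dimensions here are simulated and hypothetical, and the linear fit stands in for the study’s actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: T time bins, F sound-frequency bands, E electrodes.
T, F, E = 2000, 32, 64
spec = rng.random((T, F))                       # known sound spectrograms
mixing = rng.normal(size=(F, E))                # simulated brain response
neural = spec @ mixing + 0.1 * rng.normal(size=(T, E))

# Train the decoder on trials where the heard audio is known,
# then test it on held-out trials.
train, test = slice(0, 1500), slice(1500, T)

# Decoding model: map neural activity back to the sound spectrogram.
W, *_ = np.linalg.lstsq(neural[train], spec[train], rcond=None)

# Reconstruct the spectrogram for the held-out trials from brain
# activity alone, and score it against the true stimulus.
reconstructed = neural[test] @ W
r = np.corrcoef(reconstructed.ravel(), spec[test].ravel())[0, 1]
```

A reconstructed spectrogram can then be resynthesised into audible (if degraded) sound, which is roughly what “turning brain waves back into sound” means here.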
