Despite using this system every day, we don’t fully understand how the brain makes meaning of a sequence of words. In an effort to paint a better picture of this process, a group of researchers used artificial intelligence (AI) and neuroimaging to analyze a person’s brain as they read. The findings, published in the Journal of Neuroscience, revealed that multiple regions of the brain work together to make meaning of sentences, and could inform the development of treatments for various forms of cognitive impairment.

The Research

The study looked at the brain activity of fourteen individuals, monitored via functional MRI (fMRI) as they read 240 different sentences. These sentences had been encoded by InferSent, an artificial intelligence model trained to produce semantic sentence representations. The scans revealed that activity occurred across a network of different brain regions, indicating that, rather than one site serving as the center for sentence understanding, multiple cortical regions work together to accomplish the task. InferSent is significant in that it proved able to predict elements of fMRI activity that other common computational models cannot, allowing the researchers to predict fMRI activity that reflects the encoding of sentence meaning across brain regions.

“The findings provide a new picture of the network in our brains that are engaged in comprehending sentence meaning,” says lead researcher Andrew Anderson, PhD, of the University of Rochester. “As we all know, sentences are formed from sequences of words, however the meaning of a sentence is more than the sum of its word parts.” Anderson points to the example of “The car ran over the cat” vs. “The cat ran over the car.” Although both sentences contain the same words, our brains understand that they mean different things.

The signaling system that allows us to process language this way is incredibly complex, but AI can help us better understand it. Through machine learning, a computational model can approximate the meaning of language. By then matching that computational model to fMRI data highlighting brain activity during language comprehension, researchers can discern which brain regions are active in the task. “It is not properly understood where such ‘holistic’ representations of meaning are encoded as sentences are read,” Anderson says. “Are they localized to a single brain region or more widely distributed across multiple regions? Our findings point towards the latter, that sentence meaning is encoded throughout a distributed brain network that spans regions of temporal, parietal and frontal cortex.”
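The general approach described above, fitting a model that maps sentence representations to measured brain activity and then testing its predictions on held-out sentences, can be sketched in a few lines. This is a minimal, illustrative version of a standard “encoding model” analysis, not the authors’ actual pipeline: the data here are synthetic stand-ins for InferSent embeddings and fMRI voxel responses, and the dimensions are chosen arbitrarily.

```python
# Sketch of an fMRI encoding-model analysis: learn a linear mapping from
# sentence embeddings to voxel responses with ridge regression, then score
# how well it predicts responses to held-out sentences. All data are
# synthetic; EMBED_DIM and N_VOXELS are illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(0)
N_SENTENCES, EMBED_DIM, N_VOXELS = 240, 50, 100  # 240 sentences, as in the study

# Synthetic "sentence embeddings" and voxel responses generated from a
# hidden linear mapping plus noise (standing in for real fMRI data).
X = rng.standard_normal((N_SENTENCES, EMBED_DIM))
true_W = rng.standard_normal((EMBED_DIM, N_VOXELS))
Y = X @ true_W + 0.5 * rng.standard_normal((N_SENTENCES, N_VOXELS))

# Split sentences into train/test and fit ridge regression in closed form:
# W = (X'X + alpha*I)^-1 X'Y
X_tr, X_te, Y_tr, Y_te = X[:200], X[200:], Y[:200], Y[200:]
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(EMBED_DIM), X_tr.T @ Y_tr)

# Score: per-voxel Pearson correlation between predicted and actual
# responses on held-out sentences. High correlations in a voxel suggest
# it carries information the embedding captures.
Y_hat = X_te @ W
r = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(N_VOXELS)]
print(f"mean held-out voxel correlation: {np.mean(r):.2f}")
```

In a real analysis, voxels would be grouped by anatomical region, so a map of where predictions succeed indicates which cortical areas encode sentence meaning, the kind of evidence behind the distributed-network conclusion quoted above.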

AI and Our Brains

As illustrated by this study, AI helps us better understand the human brain. At the same time, studying the human brain helps us develop more sophisticated AI. It’s a fascinating and mutually beneficial circular relationship. “Almost every breakthrough in AI has drawn from neuroscience and psychology, with deep neural networks and reinforcement learning being perhaps the two most prominent examples,” says neural engineer Dhonam Pemba, PhD. Pemba has founded several AI companies focused on education and language acquisition; most recently, he co-founded Kidx, an AI education platform for children. He notes that, while the ultimate goal of AI is to learn and think like the human brain, even coming close requires immense amounts of data and training. Artificial intelligence cannot yet generalize and extrapolate the way the human brain does when learning and processing language. “Our brain for language learning is able to bootstrap learning from previous knowledge,” Pemba says. “For example, we learn sentence patterns and are able to use new words in these patterns without being explicitly told, or we can learn the new meaning of a word quicker once we have learned other similar words.”

The Potential of Artificial Neural Networks

Artificial neural networks have vastly improved computational models, and experts say major advances in language-based AI tasks will come over the next decade. With further advances in language processing, Anderson believes we’ll eventually reach a better understanding of brain dysfunction as well. Using AI, it could be possible to assess how brain regions affected by neurodegenerative diseases like Alzheimer’s encode meaning. “Additionally, we can test whether brain networks have rewired to enable other, less diseased brain regions to take on the role of diseased regions,” he says. “This could help characterize disease progression, and possibly even assist in forecasting which high-pathophysiology individuals will succumb to dementia and which won’t.” But progress like this will take time, and the advances made in the field are never perfect. “I still think there are a lot of challenges left to mimic the human brain,” Pemba says. “First, we still don’t fully understand it enough to engineer it, and second, we are using computers and math to represent what we don’t know.” The key to improving AI and mimicking the brain would be to allow artificial neural networks to learn the same way actual biological neural networks do. “But another question is: do we actually need to completely mimic it? Airplanes don’t fly like birds.”