AI in Neuroscience: Researchers at MIT, IBM Helping to Accelerate Understanding of the Human Brain

AI is being used in computational neuroscience to accelerate the scientific understanding of the human brain. Credit: Geralt/Pixabay

Two recent developments demonstrate how AI is being employed in neuroscience: 

MIT researchers have developed a system that learns to segment anatomical brain structures from a single segmented brain scan, automating neuroscientific image segmentation with AI.

And IBM researchers have created a cloud-based neuroscience model for studying neurodegenerative disorders, using algorithms that mimic biological evolution to solve complex problems.

In Psychology Today, writer Cami Rosso described a presentation by MIT researchers at the recent Conference on Computer Vision and Pattern Recognition (CVPR). Amy Zhao, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and the Computer Science and Artificial Intelligence Laboratory (CSAIL), and first author on the research, initially sought to create an app using convolutional neural network technology. Zhao took pictures with her smartphone of cards from the game “Magic: The Gathering.”

Amy Zhao is a PhD student at MIT working on computer vision and machine learning.

She needed a data set covering 20,000 cards, as well as many more images of each card with variations in appearance and attributes such as lighting. Creating such a data set manually would take too much time, so Zhao set out to build it by synthesizing “warped” versions of any card in the data set.

A convolutional neural network (CNN), a class of deep learning algorithm whose artificial neural network architecture is loosely inspired by the visual cortex of the biological brain, was trained on a small subset of the data. Using 200 cards with 10 photos of each, the CNN learned how to manipulate a card into various positions and appearances, such as brightness, reflections and photo angles, resulting in the ability to synthesize realistic warped versions of any card in the data set.
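As a rough illustration of the kind of augmentation involved (not the MIT team’s actual learned model), the sketch below warps a grayscale image with a smooth random displacement field and perturbs its brightness; in the research described here, the spatial and appearance transforms are learned by a CNN from real image pairs rather than sampled at random:

```python
# Minimal sketch of spatial + appearance "warping" augmentation.
# Illustrative only: the transforms here are random, not learned.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def synthesize_warped(image, max_shift=5.0, brightness_range=0.2, seed=None):
    """Return a warped, re-lit copy of a 2-D grayscale image (values in [0, 1])."""
    rng = np.random.default_rng(seed)
    h, w = image.shape

    # Smooth random displacement field (stand-in for a learned spatial transform).
    dx = gaussian_filter(rng.standard_normal((h, w)), sigma=8) * max_shift
    dy = gaussian_filter(rng.standard_normal((h, w)), sigma=8) * max_shift
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    warped = map_coordinates(image, [ys + dy, xs + dx], order=1, mode="nearest")

    # Simple global brightness change (stand-in for a learned appearance transform).
    warped = warped * (1.0 + rng.uniform(-brightness_range, brightness_range))
    return np.clip(warped, 0.0, 1.0)
```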

Zhao realized this warping could be applied to magnetic resonance imaging (MRI) scans. The pattern-recognition capabilities of deep learning, a subset of machine learning, are helping neuroscientists perform complicated analyses of brain images. However, training machine learning algorithms can be a costly, labor-intensive challenge.

Zhao, along with MIT postdoctoral associate Guha Balakrishnan, professor Frédo Durand, professor John V. Guttag, and senior author Adrian V. Dalca, automated the neuroscience image segmentation process using a single labeled segmented brain MRI scan, along with a set of a hundred unlabeled patient scans.
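Conceptually, the learned transforms let the single labeled scan stand in for many: a deformation that maps the labeled “atlas” scan onto an unlabeled patient scan can carry the label map along with it, producing synthetic labeled examples for supervised training. The sketch below is a hedged outline of that idea; spatial_model, appearance_model and warp are placeholder names, not the authors’ actual code:

```python
def synthesize_training_pair(atlas_img, atlas_labels, unlabeled_img,
                             spatial_model, appearance_model, warp):
    """Create one synthetic labeled scan from a labeled atlas and an
    unlabeled patient scan (all function arguments are placeholders)."""
    flow = spatial_model(atlas_img, unlabeled_img)       # learned deformation field
    delta = appearance_model(atlas_img, unlabeled_img)   # learned intensity change
    synth_img = warp(atlas_img + delta, flow)            # image resembling the patient scan
    synth_labels = warp(atlas_labels, flow)              # labels carried by the same warp
    return synth_img, synth_labels                       # new (image, labels) training pair
```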

The research team tested their image segmentation system on 30 types of brain structures across 100 test scans, comparing it to both existing automated and manual segmentation methods. The results demonstrated significant improvements over state-of-the-art methods for image segmentation, especially for smaller brain structures such as the hippocampus.

“The segmenter out-performs existing one-shot segmentation methods on every example in our test set, approaching the performance of a fully supervised model,” wrote the researchers in their paper. “This framework enables segmentation in many applications, such as clinical settings where time constraints permit the manual annotation of only a few scans.”

IBM Applying Evolutionary Algorithms to Brain Science

The IBM research, published recently in Cell Reports and summarized in Psychology Today, takes a different approach from deep learning. Deep learning is modeled loosely on the biological brain, with layers of neural networks; such models tend to be single-purpose point solutions that require extensive training on massive data sets and are then customized for specific environments. Evolutionary algorithms, by contrast, can solve a problem with little to no data.

Different classes of evolutionary algorithms include genetic algorithms, evolution strategies, differential evolution and estimation of distribution algorithms. What these classes share is a process of evolution: generating, usually randomly, populations of search points (also called agents, chromosomes, candidate solutions or individuals) and putting them through “variation” and “selection” operations over multiple generations. A variation operation is analogous to the biological processes of mutation and recombination.

The “fitness” of each search point is calculated after each iteration: the “strongest” (those with higher objective values) are kept, and the “weakest” (those with lower objective values) are removed. In this way, the population of search points “evolves” over generations toward an optimal solution to the problem; the “fittest” variations survive.
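A minimal differential evolution loop (one of the EA classes named above) makes the generate, vary and select cycle concrete. This is an illustrative sketch, not IBM’s implementation, and it minimizes an error on a toy objective rather than maximizing a fitness:

```python
# Toy differential evolution: generate a population, vary it, select survivors.
import numpy as np

def differential_evolution(objective, bounds, pop_size=30, generations=200,
                           F=0.8, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    # Initial population of candidate solutions ("search points").
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([objective(x) for x in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # Variation: recombine three other individuals into a mutant.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Crossover: mix mutant and current individual gene by gene.
            mask = rng.random(dim) < CR
            trial = np.where(mask, mutant, pop[i])
            # Selection: keep the fitter of parent and trial (lower error wins).
            f_trial = objective(trial)
            if f_trial < fitness[i]:
                pop[i], fitness[i] = trial, f_trial

    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# Example: fit two parameters that minimize a toy error function.
best_x, best_err = differential_evolution(lambda x: np.sum((x - 1.5) ** 2),
                                          bounds=[(-5, 5), (-5, 5)])
```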

Evolutionary algorithms are distributed by nature, making them well matched to cloud-based or massively parallel multi-core processing. In this neuroscience study, focused on Huntington’s disease, the researchers used a state-of-the-art non-dominated sorting differential evolution (NSDE) algorithm hosted on IBM Cloud.
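Because each candidate solution’s fitness can be evaluated independently of the others, the evaluation step parallelizes naturally across cores or cloud workers. The sketch below uses Python’s multiprocessing as a small-scale stand-in for that setup, with a toy placeholder objective rather than the neuron-model error used in the study:

```python
# Parallel fitness evaluation of a population of candidate parameter sets.
from multiprocessing import Pool
import numpy as np

def fitness(params):
    # Placeholder objective: in the study this would be the mismatch between
    # a simulated neuron model's features and experimental target features.
    return float(np.sum((np.asarray(params) - 1.5) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    population = rng.uniform(-5, 5, size=(64, 4))   # 64 candidate parameter sets
    with Pool() as pool:
        scores = pool.map(fitness, list(population))  # evaluate candidates in parallel
    print("best candidate error:", min(scores))
```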

James Kozloski is a neuroscientist and IBM Master Inventor.

“We introduced a ‘soft thresholding’ of the error function coupled with a neighborhood penalty to prevent systematic bias due to targeting exact feature values,” said James R. Kozloski, neuroscientist and IBM Master Inventor, who worked on the research study.
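As a loose illustration of what a “soft threshold” on the error can mean (the study’s exact formulation, including the neighborhood penalty, is not reproduced here), deviations inside a tolerance band around a target feature value could contribute zero error, so the optimizer is not pushed to match exact values:

```python
# Hedged sketch of a soft-thresholded error term: zero inside a tolerance
# band around the target, linear outside it. Illustrative only.
import numpy as np

def soft_threshold_error(feature, target, tolerance):
    """Return 0 when |feature - target| <= tolerance, else the excess deviation."""
    return np.maximum(0.0, np.abs(feature - target) - tolerance)

# Example: a simulated firing rate of 41 Hz versus a target of 40 +/- 2 Hz.
print(soft_threshold_error(41.0, 40.0, 2.0))   # 0.0 (within tolerance)
print(soft_threshold_error(45.0, 40.0, 2.0))   # 3.0
```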

Evolutionary algorithms present a flexible, adaptive alternative to AI deep learning, and are currently being used in computational neuroscience to accelerate the scientific understanding of the human brain.  

Read the source articles in Psychology Today, Cell Reports and Psychology Today.