Artificial intelligence is learning to read your mind—and display what it sees

Artificial intelligence has taken us one baby step closer to the mind-reading machines of science fiction. Researchers have developed “deep learning” algorithms—roughly modeled on the human brain—to decipher, you guessed it, the human brain.

First, they built a model of how the brain encodes information. As three women spent hours viewing hundreds of short videos, a functional MRI machine measured signals of activity in the visual cortex and elsewhere. A popular type of artificial neural network used for image processing learned to associate video images with brain activity. As the women watched additional clips, the algorithm’s predicted activity correlated with actual activity in a dozen brain regions. It also helped the scientists visualize which features each area of the cortex was processing.

Another network decoded neural signals: Based on a participant’s brain activity, it could predict with about 50% accuracy what she was watching, by selecting one of 15 categories including bird, airplane, and exercise. Even if the network had trained on data from a different woman’s brain, it could still categorize the image with about 25% accuracy, the researchers report this month in Cerebral Cortex. The network could also partially reconstruct what a participant saw, turning brain activity into pixels, but the resulting images were little more than white blobs.

The researchers hope their work will lead to the reconstruction of mental imagery, which uses some of the same brain circuits as visual processing. Translating from the mind’s eye into bits could allow people to express vivid thoughts or dreams to computers or to other people without words or mouse clicks, and could help stroke patients who have no other way to communicate.
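The decoding step, guessing which of 15 categories a viewer saw from her recorded brain activity, can be framed as a multi-class classification problem over voxel responses. Below is a minimal sketch of that idea on synthetic data, using a plain softmax classifier in place of the study's deep network; the voxel count, noise level, and sample sizes are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: decoding a viewed category from "brain activity".
# All data here is synthetic; the real study used deep networks on fMRI signals.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_voxels, per_class = 15, 200, 40  # 15 categories, as in the study

# Pretend each category evokes a distinct voxel pattern, plus per-trial noise.
prototypes = rng.normal(size=(n_classes, n_voxels))
X = np.vstack([p + 0.5 * rng.normal(size=(per_class, n_voxels)) for p in prototypes])
y = np.repeat(np.arange(n_classes), per_class)

# Shuffle, then hold out 20% of trials for evaluation.
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
split = int(0.8 * len(y))
Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]

# Multinomial logistic regression trained by batch gradient descent.
W = np.zeros((n_voxels, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[ytr]
for _ in range(300):
    logits = Xtr @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = probs - onehot                          # softmax cross-entropy gradient
    W -= 0.01 * Xtr.T @ grad / len(ytr)
    b -= 0.01 * grad.mean(axis=0)

acc = (np.argmax(Xte @ W + b, axis=1) == yte).mean()
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_classes:.2f})")
```

With this much built-in signal the toy classifier lands well above the 1-in-15 chance level; real fMRI data is far noisier, which is why the roughly 50% within-subject accuracy the researchers report is notable.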