Shinji Nishimoto Uses MRI to See Through Other Eyes
Neuroscientist Shinji Nishimoto has used MRI scans of subjects' brains to create digital projections of what a subject is seeing.
Neuroscientist Shinji Nishimoto has devised a way to display a subject’s visual impressions of a video by using functional MRI (fMRI) to scan for subtle changes in blood flow to areas of the brain. The work represents a big step toward one of the holy grails of brain science: watching actual dreams.
The images produced by Nishimoto’s setup aren’t as vivid and clear as the videos the subjects were watching. They have a watery, dreamlike quality, but they correspond closely enough that one can make out the outlines of a human face or a blurry rendering of the dominant features of a landscape. That isn’t surprising, since what appears on Nishimoto’s monitor isn’t a mechanical transcription of the video itself but an interpretation of the video filtered through each subject’s own subjective perceptions, then translated by an algorithm still in an early stage of refinement.
“We need to know how the brain works in naturalistic conditions,” wrote Nishimoto, the lead author of the study, which appeared in this week’s issue of Current Biology. “For that, we need to first understand how the brain works while we are watching movies.”
“If you can decode movies people saw, you might be able to decode things in the brain that are movie-like but have no real-world analog, like dreams,” said UC Berkeley psychology professor Jack Gallant, one of Nishimoto’s co-authors.
“The brain isn’t just one big blob of tissue. It actually consists of dozens, even hundreds of modules, each of which does a different thing,” noted Gallant. “We hope to look at more visual modules, and try to build models for every single part of the visual system.”
That means feeding in vast amounts of data linking brain scans to the corresponding video images to create algorithms that yield more vivid renderings of each fMRI scan. The resulting body of data would require immensely powerful computers to sift through and assemble.
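The idea of linking brain-scan data to video images can be illustrated with a toy model. This is a hedged sketch, not the study's actual pipeline (which used a far more sophisticated motion-energy encoding model): it simulates voxel responses as a noisy linear mix of hypothetical video features, then fits a ridge-regression decoder that maps voxel responses back to those features.

```python
# Illustrative sketch only: a linear "decoding" model mapping simulated
# fMRI voxel responses back to simple video features via ridge regression.
# All data here are synthetic; feature and voxel counts are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_voxels, n_features = 200, 50, 5

# Hypothetical video features (e.g., coarse motion or edge energies)
features = rng.normal(size=(n_samples, n_features))

# Simulated voxel responses: a linear mix of the features plus noise
mixing = rng.normal(size=(n_features, n_voxels))
voxels = features @ mixing + 0.1 * rng.normal(size=(n_samples, n_voxels))

# Ridge regression: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
XtX = voxels.T @ voxels + lam * np.eye(n_voxels)
W = np.linalg.solve(XtX, voxels.T @ features)

# Decode the features from the voxel responses and measure agreement
decoded = voxels @ W
corr = np.corrcoef(decoded.ravel(), features.ravel())[0, 1]
print(f"decoded-vs-true correlation: {corr:.2f}")
```

In a real experiment the decoder would be fit on one set of movies and evaluated on held-out clips, and the feature space would be vastly larger, which is why the data volumes the researchers describe become so demanding.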
Much work remains before the method can be used to access more visual brain data. Nishimoto’s study used fMRI data from only one area of the brain, V1, also known as the primary visual cortex, leaving out much additional data related to visual processing. And the visual models were customized to each subject to minimize the sheer volume of data that would have to be processed. Still, the study is a big step toward developing a digital model to interpret impressions received in the mind’s eye, something that had not been thought possible with fMRI.
As Nishimoto points out, more vivid representations of what the mind’s eye sees will ultimately require further studies and a better understanding of how the brain perceives and processes visual experiences. But the early results raise hopes of creating devices that could work in reverse, giving the paralyzed or the blind access to visual experiences otherwise denied them. It would also create a new pathway for direct two-way communication with such people.
Ultimately, the process offers the hope of being able to share not only dreams but uniquely personal memories.