A video has been sweeping the web this week that shows the result of UC Berkeley scientists taking fMRI data, dividing the brain into voxels (volumetric pixels), connecting those voxels to thousands of YouTube videos, and generating a novel image that purports to show the subjects' inner representations of visual stimuli. The setup is crude and the results are not altogether convincing, but it certainly signals the start of a technology that could one day yield startling results.
In practice, test subjects viewed some video clips, and their brain activity was recorded by a computer program, which learned how to associate the visual patterns in the movie with the corresponding brain activity.
Then, test subjects viewed a second set of clips. The movie reconstruction algorithm was fed 18 million seconds of random YouTube videos, which were used to build up a library of the brain activity the model predicted each clip would evoke. Finally, the program chose the 100 clips whose predicted activity was most similar to the activity recorded while the subject watched the movie, and merged them to create a reconstruction of the original footage.
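The core of that final step is a simple ranking-and-averaging procedure. Here is a minimal sketch of the idea in Python; the function name, array shapes, and correlation-based similarity measure are assumptions for illustration, not the researchers' actual code:

```python
import numpy as np

def reconstruct_frame(observed, predicted_library, clip_frames, top_k=100):
    """Sketch of the reconstruction step: rank candidate clips by how well
    their model-predicted brain activity matches the observed activity,
    then average the frames of the best matches.

    observed:          (n_voxels,) fMRI activity evoked by the unseen movie
    predicted_library: (n_clips, n_voxels) predicted activity for each
                       candidate YouTube clip
    clip_frames:       (n_clips, H, W) one representative frame per clip
    """
    # Pearson correlation between observed activity and each prediction
    obs = observed - observed.mean()
    preds = predicted_library - predicted_library.mean(axis=1, keepdims=True)
    corr = (preds @ obs) / (
        np.linalg.norm(preds, axis=1) * np.linalg.norm(obs) + 1e-12
    )

    # Average the frames of the top-k best-matching clips; the blur in the
    # published videos comes from exactly this kind of averaging
    best = np.argsort(corr)[::-1][:top_k]
    return clip_frames[best].mean(axis=0)
```

Averaging many roughly-matching clips is why the reconstructions look dreamlike rather than sharp: no single library clip matches the original, but their blend captures its coarse shape and motion.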
The result is a blurry video approximating what the subject saw, and at moments it's eerily similar to the original imagery. [source]