The Beautiful Brain explores the latest findings from the ever-growing field of neuroscience through monthly long-form essays, reviews, galleries, short-form blog posts and more, with particular attention to the dialogue between the arts and sciences.
Yaron Steinberg has created an installation to show how he imagines his brain.
We know about the neurons, the synapses, the neurotransmitters, and some of us have had the privilege of seeing these in person, under the disconcertingly objective lens of a microscope. But locating thought and emotion in these strangely mundane, tangible elements does not do our brains justice.
Steinberg’s installation gives us insight into what the artist thinks of his thoughts: a tightly packed neighborhood that has developed throughout his life, and perhaps a grayness reflecting the dull veneer of neurons and chemicals that hides all the complexity within this community.
But the piece also invites us to imagine what our own brains would look like if they could reflect how we think. Is it a green, Swedish field filled with full-bodied aromas, or a city slum where it’s hard to find redemption? Either way, this imagination-exercise forces a very personal metacognition that one doesn’t encounter every day.
A video has been sweeping the web this week that shows the result of UC Berkeley scientists taking fMRI data, dividing the brain into “voxels” (volumetric pixels), connecting those voxels to thousands of YouTube videos, and generating a novel image that purports to show the subjects’ inner representations of visual stimuli. The setup is crude and the results are not altogether convincing, but it certainly signals the start of a technology that could one day yield startling results.
In practice, test subjects viewed some video clips while their brain activity was recorded with fMRI, and a computer program learned how to associate the visual patterns in the movies with the corresponding brain activity.
Then, test subjects viewed a second set of clips. The movie reconstruction algorithm was fed 18 million seconds of random YouTube videos, and the trained program predicted the brain activity each of those clips would evoke. Finally, the program chose the 100 clips whose predicted activity best matched the activity recorded while the subject watched the second set, and averaged them to create a reconstruction of the original movie.
The result is a video that shows how our brain sees things, and at moments it’s eerily similar to the original imagery. [source]
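The matching step described above can be sketched in a few lines. This is a minimal illustration, not the Berkeley team's actual code: it assumes a trained encoding model has already produced a predicted brain-activity vector for each candidate clip (here stand-in random arrays), and then ranks candidates by how well their predicted response correlates with the observed response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 voxels, 1000 candidate YouTube clips.
n_voxels, n_clips = 200, 1000

# predicted[i] = the response the encoding model predicts for clip i
# (random stand-ins here, standing in for real model output).
predicted = rng.standard_normal((n_clips, n_voxels))

# observed = the fMRI response measured while the subject watched
# the movie we are trying to reconstruct.
observed = rng.standard_normal(n_voxels)

def top_k_matches(observed, predicted, k=100):
    """Rank clips by Pearson correlation between each clip's predicted
    response and the observed response; return the best k indices."""
    pred_z = predicted - predicted.mean(axis=1, keepdims=True)
    pred_z /= pred_z.std(axis=1, keepdims=True)
    obs_z = (observed - observed.mean()) / observed.std()
    corr = pred_z @ obs_z / len(observed)  # one r value per clip
    return np.argsort(corr)[::-1][:k]

best = top_k_matches(observed, predicted, k=100)
# The reconstruction is then an average over the frames of these clips.
print(best.shape)  # (100,)
```

The averaging of the top clips is what gives the reconstructed video its blurry, dreamlike quality: each output frame is a blend of 100 roughly similar YouTube frames rather than a direct readout of the brain.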
The annual International Symposium on Electronic Art was held this year in Istanbul, wrapping up last week. One panel in particular caught our eye: Neuroarts.
Here’s the description from the conference’s website:
Philosophies of scale within NeuroArts: from the scale of the single cell to the mesoscopic scale of brain emulations through to emergent large-scale phenomena including self-hood and consciousness. What are the relationships between plasticity, stimulation and firing patterns in small brain circuits? And, how can their adaptation in artistic projects alongside synaptic plasticity, and cellular topologies be exploited to make adaptive art?
Very interesting. We only wish we could have attended. If this catches your eye, make sure to check out the paper abstracts from the various presenters featured on the Neuroarts panel at the ISEA website.
I am skeptical of the idea that the Internet, structurally, is a completely unprecedented innovation in human history. I didn’t know exactly how to express this until I came across a passage by Steven Pinker in “The Mind,” a series of insightful Q&A-structured essays by prominent scientists and philosophers, edited by John Brockman.
Pinker begins by comparing the Internet to the human brain in its ability to share a lot of information very rapidly. But then he goes on to give us some humbling history as to where the Internet as a tool fits into our evolutionary past.
“Even nonindustrial hunter-gatherer tribes pool information by the use of language. That has given them remarkable local technologies– ways of trapping animals, using poisons, chemically treating plant foods to remove the bitter toxins, and so on. That is also a collective intelligence that comes from accumulating discoveries over generations, and pooling them among a group of people living at one time. Everything that’s happened since, such as writing, the printing press, and now the Internet, are ways of magnifying something that our species already knew how to do, which is to pool expertise by communication. Language was the real innovation in our biological evolution; everything since has just made our words travel farther or last longer.”
“Creepy” may be the more apt colloquial term, but “uncanny” is how scientists describe the feeling you get when looking at an almost-but-not-quite-real-enough simulation of a human’s face. It may be a robot, the AI in your favorite video game, or that CG version of Jeff Bridges, but there seems to be an unusual feeling we get from seeing a face that “just isn’t right.”
For example, in a study by Seyama and Nagayama (2007), the researchers note that abnormally large eyes on faces with higher degrees of realism induce an “uncomfortable (unpleasant) impression.” Below is a general, graphical representation of this effect, and that dip is called the “uncanny valley.”
One theory behind this phenomenon is that we have been evolutionarily programmed to avoid corpses, and therefore have a gut reaction to faces that just aren’t right. The visual artists for the film The Curious Case of Benjamin Button had the challenge of crossing this valley, and it’s generally accepted that they did very well in this endeavor. Seems like those millions of dollars paid off, at least in terms of modern computer-generated imagery.