The Beautiful Brain explores the latest findings from the ever-growing field of neuroscience through monthly long-form essays, reviews, galleries, short-form blog posts and more, with particular attention to the dialogue between the arts and sciences.
NMDA receptors are neurological celebrities. They’ve been implicated in the most basic, necessary forms of learning and synaptic plasticity, highlighted by their ability to activate only when certain conditions are met in both “pre” and “post” synaptic neurons.
The now-old neuroscience adage “those that fire together, wire together” is a fundamental truism largely because of the work of NMDARs. So it isn’t surprising that a recent paper in Neuron, by Joe Tsien et al., argues that NMDA receptors play a vital role in habit formation. Check out the video abstract above for more.
In a recent Charlie Rose interview, John Lasseter, Pixar’s chief creative officer, revealed more about the studio’s next project, to be directed by Pete Docter. The animator said that it will take place inside the mind of a girl, with her emotions as characters.
Lasseter said that the film concept came from the simple question, “What is going on in people’s heads?” Good question. Though this doesn’t sound like the neuroscience-epic-IMAX-3D-blockbuster I’ve been dreaming of, maybe Pixar will be able to reflect both past and current neuroscience and psychology research, while shedding some light on the matter with its artistic freedom.
At the least, it’ll be a good time at the theatres.
Surely our behaviors are the results of physical processes in the brain, and thus every one of them can ultimately be linked to a neurological root. The architecture of those “roots,” however, is molded by our world, and in the case of personal responsibility and ownership of actions, by our social world. Gazzaniga believes we’re talking about free will the wrong way: ownership of our actions arises in our interactions with others, and is affected by those interactions. Should be an interesting read.
BBC Radio 4 has been running a series by Dr. Geoff Bunn of Manchester Metropolitan University, who takes the listener on a journey through 5,000 years of human understanding of the brain. He starts with fossil skulls that suggest trepanation, the oldest form of neurosurgery—quite literally a hole in the head. The latest one posted covers the 18th century curiosity with electricity, and how that influenced ideas about the brain. I’m particularly looking forward to the next hundred years…
A shout-out to Mind Hacks, where I found these. You can check them out on iTunes, or here.
Yaron Steinberg has created an installation to show how he imagines his brain.
We know about the neurons, the synapses, the neurotransmitters, and some of us have had the privilege of seeing these in person, under the disconcertingly objective lens of a microscope. But to place the idea of thought and emotion with these strangely mundane and tangible elements does not do our brains justice.
Steinberg’s installation gives us insight into what the artist thinks of his thoughts–a tightly packed neighborhood that has developed throughout his life, and perhaps a grayness reflecting the dull veneer of neurons and chemicals that hides all the complexity within this community.
But the piece also invites us to imagine what our brains would look like if they could reflect how we think. Is yours a green Swedish field filled with full-bodied aromas, or a city slum where redemption is hard to find? Either way, this exercise in imagination forces a very personal metacognition that one doesn’t encounter every day.
A video has been sweeping the web this week that shows the result of UC Berkeley scientists taking fMRI data, dividing the brain into volumetric “pixels” (voxels), connecting those voxels to thousands of YouTube videos, and generating a novel image that purports to show the subjects’ inner representations of visual stimuli. The setup is crude and the results are not altogether convincing, but it certainly signals the start of a technology that could one day yield startling results.
In practice, test subjects viewed some video clips, and their brain activity was recorded by a computer program, which learned how to associate the visual patterns in the movie with the corresponding brain activity.
Then, test subjects viewed a second set of clips. The movie reconstruction algorithm was also fed 18 million seconds of random YouTube videos, which the program used to learn to predict the brain activity each clip would evoke. Finally, the program chose the 100 clips whose predicted brain activity was most similar to the activity actually recorded, and merged them to create a reconstruction of the original movie.
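For the curious, the steps above can be sketched in code. This is a toy illustration, not the Berkeley team’s actual pipeline: the dimensions are tiny, the “video features” are random numbers standing in for real visual features, and all variable names are invented. It just shows the logic of fitting an encoding model, scoring candidate clips by how well their predicted brain activity matches the measured activity, and averaging the top matches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; the real study used thousands of voxels and
# 18 million seconds of candidate video.
n_voxels, n_features, n_candidates, top_k = 50, 20, 500, 100

# 1) Fit an encoding model on training clips: a linear map from
#    video features to voxel responses, via least squares.
train_feats = rng.normal(size=(1000, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))   # "the brain"
train_resp = train_feats @ true_weights + 0.1 * rng.normal(size=(1000, n_voxels))
weights, *_ = np.linalg.lstsq(train_feats, train_resp, rcond=None)

# 2) Predict the activity each candidate YouTube clip would evoke.
cand_feats = rng.normal(size=(n_candidates, n_features))
predicted = cand_feats @ weights                         # (n_candidates, n_voxels)

# 3) Measured activity while the subject watches a test frame
#    (here we pretend the subject saw candidate clip 42).
measured = cand_feats[42] @ true_weights + 0.1 * rng.normal(size=n_voxels)

# 4) Rank candidates by correlation between predicted and measured activity.
pred_z = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
meas_z = (measured - measured.mean()) / measured.std()
scores = pred_z @ meas_z / n_voxels
best = np.argsort(scores)[::-1][:top_k]

# 5) "Reconstruct" the frame by averaging the top-ranked clips
#    (here, their feature vectors stand in for pixel frames).
reconstruction = cand_feats[best].mean(axis=0)

print(42 in best)  # the true clip should rank among the best matches
```

The averaging in step 5 is why the published reconstructions look blurry and ghostly: the output is a blend of the closest matches, not a pixel-by-pixel readout of the brain.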
The result is a video that shows how our brain sees things, and at moments it’s eerily similar to the original imagery. [source]