Unlike astrophysics, neuroscience lacks a crystallizing event, a mythic historical frame that the culminating successes of the Apollo years lent to the space sciences. Rather, our current neuroscientific endeavors have declared moonshots to great fanfare, but then stumbled or stagnated, and now find solace (and continued funding) in the humble blacksmithing of new tools to peer down into the ever-deepening neural data mines.
The space race of the fifties and sixties still defines conversations about the successes and failures of present-day Big Science. It seems almost unbelievable, especially to millennials, that there was once a time when an entire nation hung on the latest dispatches from a publicly-funded research organization making headway into the unknown, and when children grew up wanting to be astronauts– for real. But then the rug was pulled out from under the bubble of this science-utopia: the Cold War ended, and with it the political and economic will to sustain the scientific enterprise of NASA. Since then, the only comparable zone of scientific investment and public glorification has been related to the development of the internet, fueled principally by the dark matter of intellectual property: pornography, which still accounts for most of the internet’s economic fuel but has managed to live largely outside of its mainstream discourse. The aspirations of the internet youth are thus brewed in a strange mix of hedonistic neoliberalism in which the best comparison to the astronauts of yore might be a Zuckerberg/Jobs hybrid, where the ultimate, glorious moonshot is to cloak the evangelical pursuit of market profit in as impenetrable a position on the universal moral righteousness of internet connectivity as possible. There’s not even something juicy and conspiratorial to look for here, like a faked moon landing– for the arena of internet success already contains its own mirages, like the idea that Egypt would emerge more democratic than before Tahrir Square because of widespread use of social media, or that San Francisco would not see the worst levels of income inequality and homelessness in decades.
Demonstrations came and went, state power and better riot gear came and stayed, and it’s into this era neuroscience has blossomed. Codified in its modern form by Santiago Ramon y Cajal’s discovery of atomized neurons in the late 19th century, injected with materialistic heft by Karl Lashley’s probing electrodes in the 1930s, and cemented with a Nobel for Hubel and Wiesel’s journey to the center of the cat visual cortex in the 1950s, the field coasted into Bush’s Decade of the Brain in the 1990s to arrive full of promise yet still opaque enough to receive the technological projection-du-jour of the 21st century, that of the networked personal computer. And to fully bridge this gap– to let the computers impose their semantic metaphors onto the brain, as they had done to the genetic code— we would need to organize things as we’d done since the Cold War: clustered in grand pursuits and fixated on the potential of Big Data. And so, for a confluence of factors– faster computers, new data mining tools, aging boomers with extended life-expectancies yielding rising rates of neurodegenerative diseases, veteran care, defense agencies hoping to get a bionic limb-up on the cognitive battlefield of the not-so-distant future– the economic and political will had arrived.
When Obama launched the U.S. Brain Initiative in April 2013, many believed it to be a response to the E.U.’s recently announced 1 billion Euro flagship award for the Human Brain Project, an ambitious and much-criticized attempt to understand the brain through biologically-tuned supercomputer simulations of its physical structures. These twin projects, straddling the Atlantic, would soon be joined by similar efforts in Japan, China, and India, all with the stated goal of unifying research and making rapid advances in understanding over the next decade, with the Human Genome Project and CERN standing as the Big Science siblings to these newly hatched pursuits.
But because the brain was still so opaque, it was imperative to marshal metaphors to describe what exactly we were setting out to do. Metaphors would help legitimize the nervous beginnings of these new pursuits by aligning them with clear predecessors from the history of exploring opaque things, like the New World. And indeed, the neuroscientists I’ve interviewed over the years have regularly turned to Big Exploration analogies to frame their quests, unintentionally linking their own ultimate service of state and private power (half of the funding for Obama’s BRAIN Initiative, which will dole out $100 million to neuroscientists over the next ten years, comes from DARPA) to that of their precolonial forebears, the mythic explorers who were able to stand behind a similar moral veil of pure discovery, even as they secured new lands for the exploits of their ruling classes.
But the analogy of contemporary brain explorers to New World conquistadors and Cold War astrophysicists is missing one key variable: a clear destination. Where, when, and how will the flag of victory in the race to understand the brain be planted?
Henry Markram famously planted his flag seven years ago at TED, promising that a hologram from his brain simulation would return to deliver a talk on his behalf in ten years. Hundreds of millions of taxpayer euros, seven years and one Cell paper later, Markram doesn’t speak about computer-simulated holograms anymore– that particular torch seems to have been passed on to Elon Musk, who thinks we might all be living inside of one already.
I argue that the real damage to the intellectual framing of 21st-century neuroscience as analogous to the space race or 15th-century naval exploration is in the confident implication that there is a clear end-domain: a moonshot of neural self-understanding that our electrodes, simulations, and supermagnets will neatly land us on, a fertile patch of cortex ready for our flags.
It’s not that there aren’t a host of fragmented, specific end-domains for neuroscience to tackle. Any of the targeted medical aims of the field, from treating and potentially curing individual degenerative conditions to developing neuroprosthetics to help damaged nervous systems (and for DARPA’s half of the pie, bionic soldiers), are clearly defined moons within the orbital reach of this century of neuroscience. But we know what those moons are like: the curing of diseases, the development of prosthetic organs like the pacemaker– we’ve seen these flags planted in other scientific fields. Rather, when we’re talking about Obama and Markram’s moon of full self-understanding, of consciousness explained, of subjective phenomena ascribed to the activity of neurons, we are implying an entirely new end-domain– the one that sells the sexy books and articles, the brain-infused events, and the interdisciplinary exhibitions.
And it’s not that there’s anything inherently wrong with bringing basic neuroscience to the public and applying its lessons to other disciplines– on the contrary, more responsible applications are needed to realign the ship, centered on established neural principles like plasticity, as Catherine Malabou has done. It’s rather that the overselling of neuro-truth as an ontological end-domain for certain questions of art (something I’ve participated in myself on this website over the years), philosophy, even economics and law sets us up for the understandable backlash from the humanities in recent years, led by philosophers like Alva Noë– a backlash that has yielded equally unproductive dismissals of the entire enterprise, ones that overlook basic tenets of the field as established by Cajal, Lashley, Hubel and Wiesel, and Hebb, and as re-articulated and re-framed by contemporary observers like Malabou. Whether it flows from genuine excitement over early indications from neuroscience research itself or comes in reaction to that backlash from the humanities, the overhyping of neuro-truth as an end-domain has led those of us actively interested in interdisciplinary dialogues to the precipice of our own disappointment: the sinking feeling that neuroscience might not be able to land us on that moon where we’d hoped to one day plant our flags and release our answers to the dizzying array of applicable questions we’ve been lassoing into our neuro-pen.
I confronted this sinking feeling after an interdisciplinary overreach of my own making. I was invited to speak at a neuroaesthetics symposium at the 2013 Venice Biennale alongside Alva Noë and a group of art historians, philosophers, and neuroscientists. We were tasked with responding to a piece by performance artist Tino Sehgal, staged in the central Giardini pavilion of the Biennale. I imagine I was there because I had written an angry, paragraph-by-paragraph response to Noë’s NYT piece on “Art and the Limits of Neuroscience.” I met Noë that morning in Venice for the first time– he said that he had read my response to his article, and we shook hands and went about the day’s activities. I remember sitting in the apartment I was sharing with the event organizers, writing out pages of notes on Sehgal’s piece, convinced I would be able to connect it to established principles in cognitive science, and thus clearly illustrate how Noë’s position that neuroscience has nothing to say about art was dead wrong.
At the symposium the following day, Noë opened the session with some general remarks that reinforced his view of the artwork as best analyzed in the space between viewer and performer, and certainly not through a neuroscientific lens. And then it came time for my remarks, and I offered the brain-centric reading of Sehgal’s piece I had been preparing, using the ambiguity of the dancers’ movements and vocal sounds, and the time required to glean any sense of the piece’s internal logic, as an illustration of the neural principle of top-down versus bottom-up processing. Sehgal’s piece– I argued– only fully formed its meaning somewhere in the muddy waters between those opposing flows in our brains, and part of the delight of watching the piece was in noticing our minds grappling with those opposing flows. I said my piece and sat back.
As soon as the open discussion part of the symposium began, art historian Sigrid Weigel turned to me and voiced her objection to my statement. “When you talked on top-down and bottom-up, the metaphor irritates me. It is not enough,” Weigel said. What followed was an all-out turf battle, with the art historians and philosophers challenging the endeavor of relating neuroscientific ideas like top-down versus bottom-up processing to the artwork at hand, and the neuroscientists– Olaf Blanke and Vittorio Gallese– forced to restate the very basics of their scientific enterprise.
I highlight this particular episode here not because it expressed a new tension between the disciplines, but rather because it embodies a basic misunderstanding that derails many encounters between the neurosciences and other disciplines. When the sciences and humanities come together for dialogue, usually one hopes for something to emerge that is more than the sum of its parts, blossoming into C.P. Snow’s magical Third Culture. But when all was said and done in Venice (and at many other events I’ve attended over the years), the interdisciplinary dialogue mysteriously seemed to become less than the sum of its parts, its participants reduced to restating their methodological first principles, with artists claiming the exotic turf of inexplicable creativity and scientists doubling down on the orthodoxy of materialism.
So having seemingly triggered the turf battle at that Venetian symposium with my top-down versus bottom-up comment, I eventually realized that Noë had been right all along: that I had fallen into the very trap he warned against in his New York Times piece. In the rush to make the leap from the artwork and land on the brain, I had blasted off and left the art behind, nuzzling up to the explanatory power of neuroscientific principles without closing the loop and bringing the art back into the discussion in any meaningful way. It was a one-way street: I started with Sehgal’s piece, then headed straight for a neuroscientific truth I thought might relate, prosecuted the case for its relation, and then sat back, trusting that the neuro-speak would hold its ground.
It didn’t hold its ground there, and very often in communicating neuroscience to the public at large– despite a seductive appeal– it doesn’t. When this happens, we’re left with a dissonant feeling: on one hand, we seem to have received information from an advanced position of scientific understanding that feels highly relevant to the matter at hand, because the matter at hand inevitably involves brains. On the other hand, before we can close the loop– and move, say, from correlation to causation– we’re asked to step off the neuro-train, left somewhat unsure of what to do with this intriguing new information. “Further studies” or “just beginning to understand” is how most papers, books, and articles conclude– and yet studies have shown that, having seen dazzling images of the brain along the way, we’re more likely to believe an explanation if it mentions something neuroscience-y.
So in recent interdisciplinary exchanges I’ve been testing out a new approach to neuroscience, one that acknowledges the strong gravitational pull of a neural end-domain but offers a hand-picked analogy from the space race era to replace the classic image of a flag-planting triumph. I argue that when the humanities, social sciences, or any other discipline engages with– or is engaged by– the neurosciences, the metaphor we ought to keep in mind is that of Apollo 13, for it was in that near-disaster that the human agents were able to transform their intended end-domain from the ominous site of an inevitable crash-landing to the engine for their “slingshot effect”, and thus a source of renewed momentum for the trip back home.
In the same way, we might imagine dialogues that swing close to neuroscience for its new ideas, tuning into the pull of its material lessons but remaining acutely aware that they may never offer end-all answers to our questions. The Apollo 13 approach is more curious about how the gravitational pull of neuroscience can help us get back to the personal and the political, and thus how its transformational knowledge can be best used for all.