
What We Talk About When We Talk About Genius

April 25, 2013

In a famous scene from Johann Wolfgang von Goethe’s drama Faust, two new lovers, Gretchen and Faust, sit together in a garden. “Tell me the truth . . .” Gretchen says. “As best I can!” Faust replies. Gretchen, a pure-hearted virgin, needs to know if her man can be trusted. “Do you believe in God?” she asks. The consummate academic, Faust responds with long-winded sophistry and rhetorical aplomb; he is more or less trying to get laid without lying. The reader remembers, however, that Faust has promised his soul to the devil Mephistopheles in exchange for the fulfillment of his own wishes. Gretchen will suffer greatly from her seduction: her mother and brother will be killed, and she will find herself in prison, separated from her sanity, having drowned her illegitimate child. “My darling, who can say I believe in God?”[1] Faust’s cynical credo begins. With this initial negation, Faust cannot help but reveal the truth about himself: his soul is with the devil already. The Germans have a word for a question like this: Gretchenfrage (“Gretchen question”). Like spies behind enemy lines, Gretchenfragen can surprise their way to where the secrets are kept inside you, in the darkness, unexposed.

Last year, a Nobel laureate visited the science writing workshop that I attend. There was a buzz around the room before he arrived. I was standing in on a conversation, treading water with a few words here and there as we waited. Then the young man who organized the event, a friendly colleague with a German name, engaged me. “Have you ever met a genius before?” he asked. Suddenly my guts got all tangled, my chest fell open like a trap door. After scanning for the name of every eminent person I could remember coming across—the Nobel laureate, an Academy Award winner, even a MacArthur “Genius” grantee—all I could answer was “No.”

What do we talk about when we talk about genius? Although the dictionary offers multiple definitions, the word seems impossible to define. Nevertheless, countless quotable people have weighed in on the idea. Psychometricians, who measure the mind using standardized intelligence tests, have even given it a number. Ironically, however, an IQ score of over 200 is said to mean “immeasurable genius.”[2] The longest-running longitudinal study in the world is Genetic Studies of Genius. In 1921, at Stanford University, Lewis Terman began tracking high-IQ children. After most of his subjects became ordinary citizens, Terman concluded that “intellect and achievement are far from perfectly correlated.”[3] The word “genius” no longer appears in the name of the study, now known as the Terman Study for the Gifted. Yet in the arts and sciences alike, scholars devote whole careers to the study of genius.

After the death of Albert Einstein, the man whose name is practically synonymous with the word, his brain was preserved for examination. Although it was neither larger nor heavier than average, there were extraordinarily intricate patterns on its parietal lobes and prefrontal cortex. Some scientists have suggested that these patterns may have allowed for Einstein’s visuospatial and mathematical aptitudes as well as for his famous capacity for thought experiments.[4] Einstein envisioned new laws of the universe and articulated them in his revolutionary theories. To the general public, however, Einstein remains the face of genius — static-electric white hair, tongue sticking out. (In my college dorm room I had a black-and-white poster picturing him on a bicycle, smiling, looking positively enlightened.) Einstein himself, in a letter to his biographer Carl Seelig, maintained that he was “only passionately curious.”[5] This would seem to hint at a conundrum somewhat similar to the so-called “omnipotence problems” in theology: is a genius truly a genius if even he does not recognize his own genius? Such extreme subjectivity looks like a logical disproof. If Einstein swears he was no Einstein, who are we to say?

In a recent Nature op-ed, Dean Keith Simonton, the foremost academic expert on the subject, writes that genius is now extinct. The major continents on the scientific map have all been discovered and their foundations, for the most part, have settled over time. There is nothing new under the sun, nothing left to explore. Simonton says that Einstein was the last genius, the kind of intrepid creature that can author four revolutionary papers and launch a new physics in a single year by himself.[6] On the other hand, one of the most influential works of literary theory—”The Death of the Author” (1967)—claims that there never was a thing called genius, scientifically or historically, and that genius is a grandiose capitalistic illusion.[7] “There is never anything more than the man who writes,” the text reads, “just as I is no more than the man who says I.” The man who writes, in this case named Roland Barthes, contrasts our notion of authorship with the idea of the medium, admired for storytelling ability but never elevated above the work itself. The Odyssey, for example, was an oral tradition that has become a permanent text; however, Homer, historically speaking, is an imaginary attribution. Nevertheless, the literary critic Harold Bloom, who has championed literary genius throughout his career, celebrates Homer and others in the book Genius: A Mosaic of One Hundred Exemplary Creative Minds (2003).[8] Scholars still debate the identity of Shakespeare as though it would inform his plays, which after all were conceived to be experienced by a live audience and not analyzed to death. We are obsessed with biographies of legends perhaps because, as Harvard literature professor Marjorie Garber has written, we seek “a certain kind of emulative high, an intoxication of the superlative.” “It’s not that there is no such thing as genius,” she summarizes in an Atlantic article titled “Our Genius Problem,” “but, rather, that genius is an assessment or an accolade often retrospectively applied to an individual or an idea—not an identifiable essence.”[9] Perhaps genius is ineffable, then.

My father, a lifelong lover of literature, used to complain that the word genius is overused. I remember sitting at the kitchen table with him long after dinner naming our favorite writers. I must have been home from college because that was when I could rattle off the immortal qualities of the canonical heavy-hitters like the back of a baseball card. I can still see my father’s expression when I talked about Kafka: eyes turned to the floor, head shaking slowly, as if utterly at a loss. The torture of being in between worlds, that alienated feeling of foreignness, the meaningless maze of modern life. “He was something,” my father said.

Surprisingly, the book that led me to consider all of this is ostensibly about something else. It is called Borges and Memory: Encounters with the Human Brain (MIT Press), written by Rodrigo Quian Quiroga, an Argentine neuroscientist who investigates the way our brains form memories. In 2005, Quian Quiroga and his team discovered certain neurons that fire only in association with general concepts.[10] These neurons became popularly known by the name of Jennifer Aniston, whose likeness the researchers used in their experiments. Regardless of the photograph, subjects could always name the actress—Jennifer Aniston in profile is still Jennifer Aniston, Jennifer Aniston with bangs is still Jennifer Aniston, and Jennifer Aniston is always Jennifer Aniston. “Jennifer Aniston neurons” were storing the key information of her identity; the variable specifics of her appearance seemed irrelevant. The researchers therefore concluded that the formation of long-term memories depends on abstract synthesis rather than precise detail. This process ensures that complex visual percepts are reduced to useful representations, like conceptual buoys helping us to chart our course through an overwhelming water-world of information.
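The logic of that conclusion can be sketched in a few lines of code. What follows is a minimal toy model in Python, with invented filenames and labels and no relation to the study’s actual methods; it only captures the shape of the idea, that many distinct percepts collapse onto a single abstract identity:

```python
# Toy model of an invariant "concept cell." Every filename and label
# below is invented for illustration; this is not the study's data or
# analysis, just the shape of the idea: many distinct percepts collapse
# onto one abstract identity, and the cell fires on the identity, not
# on the pixel-level details of any particular image.

PERCEPT_TO_CONCEPT = {
    "aniston_profile.jpg": "Jennifer Aniston",
    "aniston_with_bangs.jpg": "Jennifer Aniston",
    "aniston_red_carpet.jpg": "Jennifer Aniston",
    "eiffel_tower_at_dusk.jpg": "Eiffel Tower",
}

def concept_cell(percept: str, preferred_concept: str) -> bool:
    """Fire (True) only when the decoded concept matches the cell's preference."""
    return PERCEPT_TO_CONCEPT.get(percept) == preferred_concept

# The "Jennifer Aniston neuron" fires for every photograph of her,
# and stays silent for everything else.
for image in PERCEPT_TO_CONCEPT:
    print(image, "->", concept_cell(image, "Jennifer Aniston"))
```

The point of the toy is the dictionary: once the abstraction has been made, the variable specifics of appearance are simply gone from the representation.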

“Like many others,” Quian Quiroga writes in the introduction to his book, “I discovered Borges as a teenager.”[11] While re-reading “Funes the Memorious”—the famous story about a man who remembers absolutely everything—Quian Quiroga found that Borges “had the perfect words to express the results of his own research.”[12] Borges’ fictional character—the protagonist Ireneo Funes—functions for Quian Quiroga as a sort of extreme counterexample: What if there were no fading away, no forgetting? Funes, we find, is a “perpetual prisoner,” “the solitary and lucid spectator of a multiform, instantaneous and almost intolerably precise world” that the narrator imagines must be “vertiginous.” In the most stirring moment, culminating the description of Funes’ superhuman mental abilities, Borges writes: “I don’t know how many stars he could see in the sky.”[13] The simple line terrifies me like the verge of absolute infinity. For Funes, reality is cumulative rather than continuous; the past besieges him with its inescapable presence. He is simultaneously aware of every bit of information. (If we remembered like Funes, we would be surrounded by innumerable Jennifer Anistons, one for each time we experienced her image.) This fantasy is both an absurdity and a tragedy. Upon rereading the story, I found myself shaking my head. Jorge Luis Borges is a genius.

Like most boys, I suspect, I found my first genius in my father. I love books because he read them out loud to me. I have always wanted to imitate him. I played what he played, I studied what he studied. For me, this is the secret behind genius. Although genius can appear in many forms, mine has always been connected to fatherhood. Consider the etymological origin of the word: gen, signifying to produce, is an ancestral root that is prevalent throughout our so-called Indo-European phylum of languages. In Latin, the verb gigno/genui/genitus meant to beget and the noun gens meant a clan descended from a common male ancestor, whom the ancient Romans worshipped.[14] This was the spirit of every family; from birth, each individual was under a fatherly protection. The word for this became genius. Houses, doors, gates, and streets were all overseen by their own geniuses. Its usage proliferated from there, and the rest is history.[15] Great men are often eulogized as “fathers”—Freud is the father of psychoanalysis, Oppenheimer is a father of the atomic bomb, Jefferson is one of the Founding Fathers of America.

In Borges and Memory, Rodrigo Quian Quiroga found a guiding spirit in Jorge Luis Borges, a father of Argentine letters. The genius writer helped the scientist to communicate his discoveries. Scholars in Argentina have been relating the work of Borges to the sciences at conferences since the 1970s, penning papers that seem almost like intellectual paternity tests. Connection to a genius source can legitimize a theory or an argument, although the temptation of such interdisciplinary connections can lead scholars to build an interpretive bridge too far.[16] Borges himself recognized his own extremely personal genius. “I knew I would go blind,” he once said in an interview, “because my father, my paternal grandmother, my great-grandfather, they had all gone blind.”[17] We read in him a spirit of eternal imagination. What would his mind have become without this degenerative inheritance?

Franz Kafka, that genius of my father’s, claimed to remember only one incident from his childhood. One night, after he was supposed to be asleep, little Franz kept crying for a glass of water. His father locked him out on the balcony. “For years thereafter,” Kafka wrote in a now famous letter to his father, Hermann, “I kept being haunted by fantasies of the giant of a man, my father, the ultimate judge, coming to get me in the middle of the night.” He was thirty-six when he wrote that. Does this memory not reveal the nature of Kafka? His cries were rejected. He was punished, left out in the cold. “As far as he was concerned,” Kafka continued, “I was an absolute Nothing.”[18] Like it or not, his father constrained his life and work in a definitive way. We cannot escape this kind of legacy. Genius is in our intellectual genes, which, after all, are copied from those of our forebears.

My father says that he has always loved to read and write and that his father—my grandfather—despised him for this. Literature was for homosexuals; so was any affection. My grandfather dealt in these kinds of derogations and dismissals. When my father gave his father one of his own stories, my grandfather offered one summary sentence: “Well, it had a lot of words.” He was a well-respected, successful doctor who cared for his patients. Every year he would take his family to the farm in Canada for the summer. My father hated the place. There are black-and-white photographs of him, a child standing in mud holding the reins of a horse ten times his size, squinting against the sun. His escape was a barn on the property and the books he hid there. I like to imagine him sitting against a wall with no ladder, fortified by hay, to which my grandfather was allergic. The truth is that genius is always present, even in its absence.

At some point, when my childhood was over and I was encountering the world at large, I decided that my father was not a genius. He is not Einstein, he is not Homer, he is not Shakespeare; he is not one of those legends I was formally studying. I think this is normal as well; my qualifications for genius shifted, excluding him. There is more than a little bit of rebellion in this, the desire to escape toward independence. When I heard my Gretchenfrage, however, I was surprised to feel terror, doubt, and insecurity. I could not recall any genius in that moment. There was this vacuum, the disappearance of an unmistakable monument, like a skyscraper gone from the horizon. It is natural to search the past for what produced us, to form an image of the genius that is safely abstracted away from disorienting change. The more exact truth—the fact that my father is not a genius to the world, the moments when he appears to be the opposite, the arbitrary, contingent, and ultimately self-important nature of these distinctions—must be ignored for the sake of navigation. Otherwise we run the risk of becoming like an anti-Funes, not one who remembers everything but one who forgets the most important thing. Genius is atavism at its finest, a reversion to our ancestral past, the need to sculpt our own legacy simply by trying to understand it. We can never meet genius, because genius is our greatest memory.

 


[1] Translated by Stuart Atkins.

[2] From a few generic websites (e.g., ehow.com).

[3] Terman, Lewis Madison; M. H. Oden (1947). Genetic Studies of Genius …: The gifted child grows up; twenty-five years’ follow-up of a superior group (4 ed.). Stanford University Press. p. 352. Retrieved 2011-06-02.

[4] Falk, D., et al. The cerebral cortex of Albert Einstein: a description and preliminary analysis of unpublished photographs. Brain (2012), doi: 10.1093/brain/aws295

[5] From The Expanded Quotable Einstein, http://press.princeton.edu/chapters/s6908.html

[6] Simonton, Dean Keith. After Einstein: Scientific genius is extinct. Nature 493, 602 (31 January 2013), doi:10.1038/493602a

[7] Full text can be found at http://www.tbook.constantvzw.org/wp-content/death_authorbarthes.pdf

[8] Published by Grand Central Publishing.

[9] Garber, Marjorie. Our Genius Problem. The Atlantic (December 2002), http://www.theatlantic.com/magazine/archive/2002/12/our-genius-problem/308435/

[10] Quian Quiroga, Rodrigo, et al. Invariant visual representation by single neurons in the human brain. Nature 435, 1102-1107 (23 June 2005), doi:10.1038/nature03687

[11] He loves in Borges “the mathematical precision with which he describes what defies every logic, […] the way he starts from seemingly irrefutable premises—often reinforced by obscure or even blatantly apocryphal quotations—to lead us inexorably to unreal worlds as though we were hallucinating or dreaming, living in a fantastic realism where everything is possible and ideas rule above all else.”

[12] Quian Quiroga, p. 4.

[13] Borges, Jorge Luis. Labyrinths (trans. Donald A. Yates and James E. Irby). New Directions: 1962.

[14] Harper, Douglas. Online Etymology Dictionary, http://www.etymonline.com/index.php?term=genius

[15] Mainly from Wikipedia, http://en.wikipedia.org/wiki/Genius_(mythology)

[16] This is the case with Jonah Lehrer and Proust Was a Neuroscientist. Proust was not a neuroscientist, and Lehrer would be officially discredited later for similar overreaches. There are, in fact, some examples of great literary figures who also contributed to the groundwork of the sciences, such as Goethe and Nabokov.

[17] Shenker, Israel. Borges, a Blind Writer With Insight. The New York Times (6 April 1971), http://www.nytimes.com/books/97/08/31/reviews/borges-insight.html

[18] Translated by Hannah Stokes and Richard Stokes, http://www.almaclassics.com/excerpts/dearestfather.pdf

The Cold Humanists

January 18, 2013

Exploring the trend of neuro-rejectionism.

Neuroscience is in vogue. In the mainstream news and on pop-science bestseller lists, in academic departments and in deli refrigerators, interest in all things brain-related continues to grow, to be sold, and to be consumed. But the growing public interest in the brain—and the hope that research into its vastly complex workings will unveil deep truths relevant to our daily lives—is still somewhat unspecific in its ends. Most present-day insights into the workings of the brain come from very specific research (usually on mouse, rat, or fruit fly brains) and examine quite basic and elementary features. They ask more new questions than they answer, open more doors onto future lines of research than they close, and continually remind us of how much there is left to explore, especially when it comes to the human brain.

For the curious seeker of new insights about the brain, this can be frustrating—especially if one relies mostly on the popular press. In this age of facts at our fingertips, a surging public interest continually met with a lack of clear-cut answers to the questions we want answered can lead to cynicism about the whole pursuit of neuroscience. The research itself continues to deliver clear-cut results on its own terms, in academic journals and at conferences. But for most people, $35 is too much to pay for access to a PDF behind a paywall, the language of pure, unfiltered neuroscience can be unapproachable to those outside of the academic and research communities, and we want something easy to chew and digest.

But when results from neuroscience research do make it to the wide public for easier consumption, they can bear an aura of explanatory significance in the form of a big bold headline, a colorful picture, or an out-of-context quote from a researcher, all of which can create a misleading sense of a big and final answer at hand—the TED effect, some would say. We are sometimes led to believe that the colorful fMRI images in the Science Times are—in and of themselves—revealing the seat of love circuits in the brain; we are told we can look inside children’s brains and see them learning, in real time. In these cases, healthy skepticism is necessary to guide us closer to the reality of the situation. Certain sources, such as the Neuroskeptic, are dependable whistle-blowers for what Alissa Quart, in a refreshingly skeptical New York Times piece (at least by the standards of the paper’s usual brain-science hype reporting), recently dubbed “brain porn.”

But there is a different breed of skeptics who don’t care to underscore their whistle-blowing with any kind of enthusiastic guidance toward the reality of well-done brain research (Quart, in her NYT piece, doesn’t seem too keen on telling us about good neuroscience, either). This trend of skeptical voices has been coming mostly from the humanities, and their attitude toward the brain sciences is cold, cynical, and doubtful—as if neuroscience has long overstayed its welcome and must now be hurried out the door. We shouldn’t misread this cooling trend as another strain of the healthy skepticism mentioned earlier; rather, this strain, from the humanities, rolls the science and the cultural distribution and consumption of it into one big scapegoat, and thus makes the classic mistake of confusing the message with the messenger.

For starters, take an ongoing series of essays being published by the online magazine Triple Canopy under the heading “Common Minds” (the series is supported by some prominent institutions, such as The Brown Foundation, The New York City Department of Cultural Affairs, and the National Endowment for the Arts, among others). In Triple Canopy editor Alexander Provan’s opening essay for the series, “A Note on ‘Common Minds,’” this cold attitude toward neuroscience is in full view, right from the opening paragraph. Speaking of the history of neuroscience as a discipline since the 1980s and ’90s, Provan argues that neuroscience

turned inward—mirroring the putative narcissism of the era—and strived to reveal the immaculate mechanics lodged within our skulls. This nascent discipline cannibalized psychology, linguistics, and sociology, planting flags in far-flung realms of academia, medicine, and law. Today neuroscience is a central prism through which nearly all aspects of life are regarded.

Neuroscience is portrayed here in strangely violent terms—cannibalization, the planting of flags. Provan soon explains why he feels so let down by neuroscience:

We now know more about diseases and disorders like epilepsy, aphasia, Alzheimer’s, and Tourette’s syndrome than ever before, but none have been cured. We have an inkling of the cognitive processes that beget consciousness, which is both a product and central feature of the brain; but philosophers and neuroscientists alike still have trouble defining, much less locating, the phenomenon and explaining the emergence of subjectivity—if not the more mundane matter of self-awareness… We have not found the brain’s “God spot” or “buy button.”

… Recent criticism in magazines and academic journals has justifiably deflated—and perhaps tempered public enthusiasm for—the tumid, reductive claims of pop-science scribes. But such venues tend to be dominated by professional journalists and scientists debunking other professional journalists and scientists. Departing from the conventions of this discourse, Triple Canopy has invited artists, poets, novelists, philosophers, historians, and psychologists to contribute analytic essays, linguistic compendia, and prose poems.

This sounds like an interesting and very worthwhile initiative to me. But, as evidenced in this introductory essay and the first two essays published to date in the “Common Minds” series (which I will discuss below), these essayists seem to be using the “tumid, reductive claims of pop-science scribes” that Provan casts aside as the very source material for their own attempts at “debunking” neuroscience from its cannibalizing, flag-planting, attacking position. Just look at Provan’s exaggerated expectations for the field—that it should have fully cured diseases by now, solved consciousness, and, most problematic of all, found the brain’s “God spot” or “buy button.” This last phrase uses language that any humble neuroscientist would most certainly caution us against, for it is plucked not from good neuroscience research itself but from the overhyped messengers of pop-science reporting. In relying on it, the critique loses the actual messages from neuroscience—the specific studies or general types of experiments, even the statements of scientists themselves about their own work.

The violent rejection of neuroscience on these grounds leaves no room for a more interesting dialogue, free of the pop-science hype language. Instead of seeing what is out there, we get the sour “We have not found the brain’s ‘God spot.’” The modus operandi of the cold humanists, as interesting as it sounds to invite these essayists to contribute their thoughts, seems to be the significantly easier work of preying on readers’ suspicions that big flashy pop-neuro headlines have promised them answers that haven’t yet been delivered. A cultural critique of the overhyping is necessary and important, as discussed above—the problem here is that all of neuroscience is under attack, not just the overhyping messengers. Similarly, Quart ends her NYT piece mentioned earlier with the following passage:

It’s not hard to understand why neuroscience is so appealing. We all seek shortcuts to enlightenment. It’s reassuring to believe that brain images and machine analysis will reveal the fundamental truth about our minds and their contents. But as the neuro doubters make plain, we may be asking too much of neuroscience, expecting that its explanations will be definitive. Yet it’s hard to imagine that any functional magnetic resonance imaging or chemical map will ever explain “The Golden Bowl” or heaven. Or that brain imaging, no matter how sophisticated and precise, will ever tell us what women really want. [my emphasis]

I would argue that Quart has misplaced the notion of “reassurance” here. It’s actually bits like this that display a more callous form of “reassurance,” a form repeated over and over in the sweeping prohibitive statements at the ends of articles, essays, and books from the cold humanists. Ending your article with a final few lines reassuring readers that neuroscience, as represented by “brain imaging, no matter how sophisticated,” will never get to answer certain deep questions is exactly the kind of careless reassurance we would do well to replace with some positive probing into what questions real neuroscience is asking today, and what the road ahead for the field does look like, treating fMRI as just one of a plethora of techniques.

In Provan’s “Common Minds” introduction, we do get a bit of talk about the range of neuroscience work being done today. But when Provan turns to the real work, we see what a narrow vision of the field we are given:

To us, [Common Minds] is itself a strike against biological reductionism. There have been many enthralling, if provisional, discoveries regarding the unconscious processes underlying our behavior, the adaptive and plastic nature of the brain, and the relationship between vision and cognition. But what, besides consciousness, prompts such efforts, impels us to engineer experiments, parse data, invent theories? The core of who we are and why we do what we do—that union of memory, consciousness, and the body that marks us as human, grounding our various representations of the world in what Kant called a “single common subject”—remains obscure.

It certainly does remain obscure, and may remain so for a long time to come. The cold humanists prefer to remind us of this fact—to harp on the puffed-up, unmet goals rather than get into any positive contributions; to make it feel fine not to pay attention to neuroscience, because it’s all just “obscure” still.

So what of the middle ground between dense academic research papers on one hand, and the puffed-up pop-science articles on the other—the ground occupied by the accessible and responsible books by the likes of David Eagleman, Patricia Churchland, or V.S. Ramachandran? What about, for example, the fascinating research into the seat of morality in the brain, done by Rebecca Saxe of MIT, which doesn’t rely on fMRI? Or—perhaps of particular interest to those in the humanities—the neuroscience of language processing, as explored by Stanislas Dehaene, or the neuroscience of visual art, as explored by Margaret Livingstone and Semir Zeki? Well, it turns out Provan is bothered by how, for example,

Eagleman uses Ulysses to analyze the housing bubble: By bounding himself in anticipation of his ship passing the Sirens, Ulysses shorted the same “instant-gratification circuits” that produced the subprime-mortgage crisis. But this account neglects predatory lending practices and the failure of government regulation—anything besides the cognitive processes of homeowners…  such books tend to strip away the construction of subjective experience from objective reality in order to reveal the gears at work and marvel at the mechanics… Often this means pitching a ream of case studies and summaries of academic papers as a “journey,” “search,” or “quest.” At the end of the road there tends to be either a solution to everyday problems—leadership, love, sales, addiction, productivity—or a sense of pure wonder. Dwelling on the magic of cognition, while insisting on its biological basis, returns the brain to a fantastical realm just as it is being demystified.

Eagleman’s “instant-gratification circuits” analogy is a new voice in the conversation, drawing new parallels and provoking us to consider a new layer of behavioral analysis of a complex sociopolitical issue—another tool in our belt. The analogy is not what Provan makes it out to be—a take-all, flag-planting cannibalistic account seeking to sweep away the voices of social and political science in a wash of ignorant reductionism. It’s a shame Provan casts him away so bluntly— Eagleman is one of the most positive, accessible and humble communicators of brain science around today. But he won’t see the light of day in the cynical passage at hand—indeed, the formulation here: “such books tend to strip away the construction of subjective experience from objective reality in order to reveal the gears at work and marvel at the mechanics…” is quite telling in its reliance on the cold, lifeless imagery of gears and mechanics, and in its strangely dualistic suggestion that subjective experience happens somewhere else, besides the brain. That’s not to say we’ve solved the deep questions of binding subjective experience to neural activity—but if you start from a position of vague dualism, always talking about the brain with a certain scorn, like it’s a lifeless chamber of gears and mechanics—what hope do we have of moving the conversation forward? Is it still that threatening to consider that that cognition, in all its “magic,” might have a fully biological basis? Apparently so, and Provan indicates his stance on the matter by suggesting one of the central tenents of, yes, the quest of modern neuroscience—that cognition does have a biological basis—is akin to a fantasy. And fantasy, here, is meant not as an interesting comment about the nature of brain-based cognition, but rather as another attempt at “debunking”  the whole endeavor of modern neuroscience.

Following Provan’s introductory piece, the first two full essays in the Triple Canopy “Common Minds” series display a similar central philosophy. “Popular Science” by Jena Osman draws a line from phrenology to contemporary fMRI work, making the case that the former’s fetishizing of specific, localized functions of the mind (based on the shape of the exterior surface of the skull) has birthed its modern equivalent in the latter, and that we should guard against such reductionist games of localizing complex cognitive states.

How did she arrive at this criticism? Osman, “while passing time at an airport magazine kiosk… noticed several magazines with cover stories on the brain…” and deduced that “The soft science of Whitman’s day—with its desire to attach character traits to specific regions of the brain—is alive and well in the popular press. Internet browsing confirms the trend.”

This is a very fair criticism of the popular press’ handling of fMRI studies, which too often places emphasis on localization results above all else, declaring their significance to a degree beyond what the actual scientists behind the study at hand would likely say about it themselves. And it is true that at times fMRI experiments, as they are initially designed, may simply try to put their explanatory arms around too much. So, if you want to take this one step beyond a cultural media critique, you could criticize the amount of funding allocated to fMRI work, while reminding us of the proportions of the field—that fMRI is only one of the plethora of research techniques currently at play in modern neuroscience. But to enter into a direct critique of what fMRI is looking at, as Osman’s essay suggests, takes us somewhere else—somewhere these authors aren’t really interested in going in any more detail than a quick dismissal leaning on a tenuous parallel: “The link of location to behavior provides a seductive narrative structure of legible cause and effect. The power of that structure is just as evident today as it was in Whitman’s phrenological era.”

Many have fairly critiqued fMRI for providing only an indirect measure of brain activity, and thus being somewhat misleading about true cause and effect, as Osman does here. But even given the indirectness, the fMRI signal is still tightly correlated with neural metabolism and local field potentials, and it does show us something that is going on inside the brain—a something that seems to be fairly well correlated with the reporting of distinct mental states. It’s the interpretation of that data that’s often the problem, be it in the discussion section of a paper itself or in an article in the popular press.
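To make the point about “indirect but correlated” concrete, here is a small, purely illustrative Python simulation. Everything in it is invented for the sketch (the burst pattern, the delay, the smoothing window); it is not a model of real hemodynamics. It builds a BOLD-like trace as a delayed, smoothed transform of a simulated neural signal and then checks how well the two line up once the lag is accounted for:

```python
# Purely illustrative: a "BOLD-like" signal built as a delayed,
# low-pass transform of a simulated neural trace. All numbers here
# are invented for the sketch; this is not a calibrated model.
import random

random.seed(0)
T, DELAY, WINDOW = 500, 5, 15

# Simulated neural activity: background noise plus periodic bursts.
lfp = [random.gauss(0.0, 0.2) + (1.0 if 20 <= t % 50 < 30 else 0.0)
       for t in range(T)]

# Indirect measure: average the neural trace over a delayed window.
bold = [sum(lfp[max(0, t - DELAY - WINDOW):max(1, t - DELAY)]) / WINDOW
        for t in range(T)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Slide the lag and find where the indirect signal lines up best:
# despite the delay and the smoothing, the correlation stays strong.
best_lag, best_r = max(
    ((lag, pearson(lfp[:T - lag], bold[lag:])) for lag in range(1, 40)),
    key=lambda pair: pair[1],
)
print(f"best lag: {best_lag}, correlation: {best_r:.2f}")
```

The toy proves nothing about real brains, of course; it only illustrates why an indirect measure is not the same thing as an uninformative one.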

But there is another line of criticism regarding fMRI, one that questions the actual necessity of the work (the 19th-century, botany-styled correlating of regions with mental states or behaviors, with unclear explanatory significance) within the field as a whole, as compared to other methods of neuroscientific inquiry—and I find this line of criticism much more intriguing. It was perhaps best articulated by Jerry Fodor back in 1999:

I had supposed that dualistic metaphysics was now out of fashion, in the brain science community most of all. Brain scientists are supposed to be materialists, and materialists are supposed not to doubt that distinct mental states have ipso facto got different neural counterparts. That being so, why does it matter where in the brain their different counterparts are?

While Fodor questions the point of excessive localization research within the field, Osman is one step removed—she sticks to a questioning of the seductive localization narratives themselves, as presented in airport kiosk magazines, which of course speaks more to the popular press than it does to actual, peer-reviewed, good fMRI work.

In ending her piece, Osman pastes in a selected passage from a notorious letter to the editor of the New York Times from November 14, 2007, which arrived in response to a particularly egregious pop-science hype article entitled “This is Your Brain on Politics.” The letter was signed by seventeen cognitive neuroscientists. Osman’s selected passage from the letter:

We know that it is not possible to definitively determine whether a person is anxious or feeling connected simply by looking at activity in a particular brain region. This is so because brain regions are typically engaged by many mental states, and thus a one-to-one mapping between a brain region and a mental state is not possible.

But while the above passage serves Osman’s argument neatly, and makes it seem as though this letter was an indictment of the entire technique of fMRI, it’s telling that Osman didn’t include the final passages of that letter. Here they are:

Unfortunately, the results reported in the article were apparently not peer-reviewed, nor was sufficient detail provided to evaluate the conclusions.

As cognitive neuroscientists, we are very excited about the potential use of brain imaging techniques to better understand the psychology of political decisions. But we are distressed by the publication of research in the press that has not undergone peer review, and that uses flawed reasoning to draw unfounded conclusions about topics as important as the presidential election.

The cold humanists aren’t interested in the fact that real scientists, like the ones who signed the NYT letter, are hashing out the line between good and bad science, and managing to remain “very excited” about the future use of brain imaging techniques. And they’re not interested in the fact that some articles about the brain, by some great science journalists, contain solid, peer-reviewed results, while others, like this one, apparently do not. These nuances fall away amid the broad strokes of their neuro-rejectionism.

The second essay in the series, Jan Estep’s “Semblance of Fact: How brain scans are presented and consumed as photographs,” does a better job of critiquing fMRI results as presented through the lens of the popular press, yet there is again a coldness toward neuroscience, an unwillingness to let go of vague dualism and to see neuroscience as anything more than an overstepping discipline full of lifeless, mechanical answers to the deepest questions.

In her essay for Triple Canopy, Estep leans on a quote from another cold humanist, Alva Noë, near the end of her piece (I have previously responded to Noë’s views on the limits of neuroscience, as published by the New York Times last year), before declaring that

Despite the persistence of hardcore materialists, cognitive scientists increasingly understand that the relationship between mind and brain cannot be reduced to a matter of neural correlates…Brain scans provide a certain image of the powerful biological forces that condition our experiences, but our understanding of ourselves is also culturally coded. Like photographs, our selves never exist in isolation: They are embedded in specific contexts, they depend on one another, and they are historically situated.

Estep, like Noë, dismisses the endeavor of neuroscience by reminding us that we don’t know everything right now from the fMRI work done so far—that fMRI isn’t able to move from correlation to any degree of causation—and that therefore we should be skeptical of all of neuroscience from here on out. At this point, we would do better to tune in to another voice from the humanities, the brilliant Garry Kennard, who is well aware that fMRI is just one of many tools at work in these early days of exploring the brain, and who rightly keeps the relationship between mind and brain where it belongs—inside the brain. Here’s Kennard, in a passage from his new book “Essays and Images” that relates directly to the Noë-Estep “Out of Our Heads” formulations above:

There are those who say we cannot exist solely in our individual brains. That consciousness depends on an interaction of brains and that this interaction is where we live our essential lives—existing as truly sentient beings only in what we call ‘culture’… No. We may from time to time communicate with our fellows and gain from this all that is said of community—enlarged ideas, possibilities of action and thought, affection. But the reality is that for the most part we talk to ourselves in an endless solipsistic conversation, alone in our divided, multiple, shifting selves. And in the end we must admit that there is nowhere else for the experience of this ‘culture’ to exist than our individual brain and it is there we are obliged to begin any explorations of our perceptions of the world. (49)

These suggestions may be disorienting for those in the humanities who remain firmly opposed to the idea that the mind is what the brain does, like Provan, Osman, Estep, and Noë. The idea may even seem threatening, compelling them to turn a cold shoulder to neuroscience and all its techniques, and, in their writing intended for a wide audience, to keep the mind safely embedded in a diffuse web of externalities, avoiding the “gears” and “mechanics” between our ears. But Kennard’s ideas should not be read as flag-planting cannibalization—for they are far from that—and they need not provoke coldness. Rather, if taken with a proper dose of openness and humility, they allow new doors to be opened in the conversation between the humanities and the brain sciences. Kennard’s “Essays and Images” is a stellar presentation of these ideas, the beginnings of new conversations—it’s one of the best examples I’ve come across of the possibility of this truly interdisciplinary dialogue. When he discusses religious experience and visual art in the context of the mind, Kennard is not out to “debunk” neuroscience and point fingers at “biological reductionists,” for he seems able to differentiate between the pop-science hype and the insights gained from well-done research. Himself an artist, Kennard takes a warmer and significantly more receptive approach to the field, a welcome antidote to the cold skepticism and threatened stance that these humanists adopt when they brush up against contemporary neuroscience.

Unfortunately, however, some contemporary voices in the dialogue between the visual arts and brain sciences seem to be stuck in attack mode. Take the recent essay “Sensing God and the Limits of Neuroscience” by Richard Gunderman, published on The Atlantic’s site on New Year’s Eve. Like Raymond Tallis, Gunderman seems altogether fed up with neuroscientific approaches to questions of art and religion. Check out Gunderman’s line of reasoning in this passage, and his choice of language:

What are we to make of the fact that some experiences attributed to a divine presence or an encounter with a transcendent reality are associated with characteristic changes in the function of a particular part of the brain, and that stimulating a part of the brain can produce such experiences in experimental subjects? Does this indicate that these experiences, and perhaps all such experiences, are therefore false? … Are such experiences of the transcendent mere misfirings of the brain?

For me, this passage communicates more about the author’s own cold disposition toward the brain, and neuroscience as its explorer, than it does about the question of reality versus unreality. My answers to his questions are all no—this does not indicate that all such experiences are “false”—it actually shows us the reality of the very fact that he seems to deny: that we’re poking around in the seat of the mind. We have to be ready for—and not threatened by—the fact that the physical body, and specifically the brain, may be where all these experiences exist in their totality. Indeed, the very fact that stimulating a brain region, as described here, is correlated with a subject’s reporting of a particular subjective experience holds within it a vastly more revolutionary kernel of truth.

Again, Kennard, in “Essays and Images,” avoids negative attitudes towards the brain and neuroscience, and is much more instructive and truly progressive when it comes to discussing religious transcendence and the brain:

What I am calling a contact with the subconscious others will call a transcendent contact with a god or gods or some other religious experience. If any progress is made on this subject, we must reject the notion of a supernatural god existing somewhere outside our brains. I am quite aware that the vast majority of the world’s population will still, at this time, disagree with me. However, we must recognize that the projections that humans make on the physical world—to help them make sense of it and to give themselves a sense of identity—are entirely of human construction—conscious or not. It is becoming progressively clear from research into the way our brains function that we are in effect ‘dreaming’ the world and only have to alter that dream when we, as it were, trip over a piece of matter.

Gunderman moves from his discussion of religious transcendence to the experience of great art, hoping to expose neuroscience for being overly reductive when approaching the most profound expressions of human subjectivity:

 A neurologist might come along and explain that I am merely experiencing the transduction of kinetic energy into electrical energy as processed by neurons in the auditory and higher associative cortices of the brain. And yet, there is something about the music that is hard to reckon in such terms. It would be like saying that a passionate embrace is merely the pressing of flesh on flesh… The mere fact that neurochemical changes are taking place does nothing to help us distinguish between good and bad, the great and the merely insipid. The truth or falsehood of such expressions is not simply a matter of correspondence with some verifiable material state. It is also a matter of elegance, rhythm, balance, and above all, beauty, qualities that are to some degree transcendent. Ultimately, we cannot define the beautiful in strictly material terms.

What if elegance, rhythm, balance, and beauty are words we use to describe sensations that feel transcendent in what we might hope to be an immaterial, dualistic sense, but are, as neuroscience instructs us in the very experiments Gunderman references, entirely contained in our bodies? What if this transcendence is, as Kennard describes it, a glimpsing of the non-conscious, non-verbal depths resting just below our tip-of-the-iceberg consciousness in the brain, provoking in us profound, unspeakable feelings of “depth”?

This notion seems to be too frightening to authors like Gunderman, who still view the “material” as some sort of cold netherworld of gears and mechanisms—certainly not us. They refuse to entertain the notion that these states described as immaterial and transcendent are themselves “verifiable material states” (this language still sounds cold—as if the states of transcendence were trying to pass through a fluorescently lit security line to be given a ‘verified’ stamp), and that neuroscience is just beginning to explore the material seat of these states. If that sounds alarming or overly reductive, ask yourself why you feel that way. Are you, in all your staggering biological complexity, not enough, on your own, to feel beauty, to feel elegance? If you follow the cold humanists to the end of their logic, the answer is no—there is something about it that is not in your body. I disagree—I believe it is, and I believe you are. I can’t be certain of this yet, but I can be certain that I will pay attention to neuroscience for all it is worth, as long as I’m around.

Art and Neuroscience: A State of the Union

September 9, 2012

To prepare for Thursday’s This is Your Brain on Art panel at 3rd Ward, in Brooklyn, NY, I outlined several distinct approaches in the current conversation between art and neuroscience, a field of inquiry often dubbed neuroaesthetics. The following outline is most likely incomplete. It is an attempt to quickly organize the many strains of research and thought on these issues, so please post any additions you think of in the comments section below. I tried to identify a few lines of inquiry into the relationship between art and the brain, and to describe the angle of each line’s approach to that relationship.

Here are three approaches:

1. Art → Brain. The perception of art by the brain.

This is the approach that studies what happens to art when it enters the brain: how our brains reconstruct, assess, and fasten judgement to works of art. This includes not only bottom-up flows (sensory input moving higher and higher, up into the cortex), but also top-down flows (expectations influencing the viewing or listening process; jogged memories coloring our incoming perceptions). These flows are what the vast majority of current neuroaesthetics research is concerned with, and indeed what most books concerning art and the brain investigate. What this approach is most interested in is perception and analysis of basic aesthetic details: how we see color, detect motion, hear sound, recognize faces, feel rhythm, and what the peculiarities of each perceptual system tell us about the way the brain stitches these properties together. Then, at the next level, we can begin to untangle emotional and executive areas of the brain and their involvement in making and viewing art. Art’s effects can be correlated with the production of fear via the amygdala, pleasure in the nucleus accumbens, mystery and problem solving in the prefrontal cortex, disgust in the insula. Also involved at these higher levels is our empathetic connection to the work, be it to a character in a film or a melody in a song, and the top-down control that connection exerts over the perception of the work at hand.
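For a crude illustration of how those two flows might combine, here is a short Python sketch; the blending rule and the numbers are invented for the example rather than taken from any model in the literature:

```python
# Toy sketch of bottom-up vs. top-down flow. The weighted-average
# blending rule and all numbers are invented for illustration; real
# perception is not a single weighted average.

def perceive(sensory_evidence: float, expectation: float,
             top_down_weight: float = 0.3) -> float:
    """Blend bottom-up input with a top-down prior (illustrative only)."""
    bottom_up_weight = 1.0 - top_down_weight
    return bottom_up_weight * sensory_evidence + top_down_weight * expectation

# An ambiguous input (0.5) is pulled toward what the viewer expects:
# a primed viewer (expectation 1.0) and an unprimed one (0.0) end up
# with different percepts of the same stimulus.
print(perceive(0.5, expectation=1.0))  # 0.65, pulled up by priming
print(perceive(0.5, expectation=0.0))  # 0.35, pulled down
```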

This approach can focus on any artform as it enters the brain, such as:
    • Visual art. How the brain sees paintings and sculptures, from color and luminance to faces and perspective. Perhaps the most work is being done here, with big names like Livingstone, Zeki, and Ramachandran. Here is a review of some visual neuroaesthetics work by those aforementioned heavyweights, and a podcast with Bevil Conway, a neuroscientist and painter who is interested in the relationship between those two disciplines.
    • Music. This line of inquiry moves from the perception of sound by mechanical sensors in our cochlea to the processing of it in the auditory cortex, and on to the rich tapestry of emotion that music can provoke within us. This is Your Brain on Music is a popular book in this field by Dan Levitin, exploring findings from the neuroscience of music. Elsewhere, we’ve posted about Charles Limb, who studies the brains of improvising jazz musicians. Many more labs touch on these questions; please post in the comments with your own specific examples.
    • Literature. Though there’s some interesting work being done here, it’s perhaps the haziest of the artistic disciplines to approach with the tools of present-day neuroscience. Most real science being done that involves the literary arts concerns very elemental stages of reading—one word, one sentence—and their neural correlates, as seen through an fMRI scanner. Here’s a NY Times article about some of this recent fMRI work on the neuroscience of the written word. Another strain of this inquiry involves literature in the context of Darwinian evolution—here’s an overview from Beautiful Brain contributor Ben Ehrlich about the “Literary Darwinists.”
    • Dance. This line of inquiry concerns the perception of movement, and perhaps most importantly, the alleged mirror neuron system. The Rubin Museum’s Brainwave series featured choreographer Mark Morris and neuroscientist Bevil Conway in discussion about the relationship between dance and the brain. Cognitive neuroscientist Mark Changizi has a theory about the relationship between dance, music, and the brain that’s worth looking into: Changizi believes music has been culturally selected over time to sound like human movement.
    • Theater & Film. Theater and film have a special relationship with the brain (see next section). Both must necessarily be studied in the broadest of terms– for unlike visual art or music, both are truly multisensory experiences and thus harder to study at the level of isolated perceptual systems. There have been some inroads made, especially along the “neurocinematics” avenue (there’s a good review of the field by the Neurocritic).

2. Art ↔ Brain. The parallels between art and the brain.

This is the approach that lines up art and the brain next to each other and examines the similarities between the processes of our minds and the artwork that those minds create. It’s here, I think, that the conversation around cinema and theater really takes off. In particular, film, in its aesthetic and sensory richness (and best viewing space: a darkened theater), gets the closest, as an artform, to the sensory and emotional unity of human consciousness, and maybe more specifically, human dreaming, as housed in the activity of the brain.

Some further thoughts on these parallels, specific to film:

  • A film is a constructed subjective experience, very much like one’s own consciousness. The construction of a film includes editing (mirroring our own selective and sometimes modified memories), framing (where we look, how we hear, what came before), rhythm (day and night, patterns of movement, a beating heart).
  • A film has a scope, be it of a historical event, a range of emotion in a specific moment, a day in a person’s life, and so on, that is achieved both by what is seen and heard in the film, and also by what is not seen and heard. The power of suggestion, of the unsaid, can bear tremendous weight in a film, mirroring the tip-of-iceberg consciousness we all experience, and the vast scope of our unconscious experience, resting just beneath that surface. In this way, film not only parallels our conscious, edited experience, but also the non-conscious, suggested experience, which can color much of the way we see the world.

3. Art ← Brain. The brain, as seen through the lens of art.

This is an approach that feels less common, but that I think has huge potential, and it’s the one I’m personally most interested in. This is the approach that believes art to be a valuable lens through which to observe and understand subjective, first-person consciousness in the brain. In other words, it’s interesting to study the brain during an artistic experience, but once we’ve lined up all subjective experience to constellations of firing neurons and distinct chemical washes in the brain, and we understand the general architecture of the brain, maybe we actually need to look at the art itself as a unique mirror of the internal landscapes of subjective experience. For if we’ve understood the architecture, then in order to break new ground on understanding what goes on inside those rooms, where first-person consciousness takes flight, we may need to look more closely at how different modes and styles of art are true reflections of the neural landscapes they emerged from. John Onians’s concept of “Neuroarthistory” is the closest current approach to this line of thinking that I’m aware of (here’s a podcast I taped with Mr. Onians).

This third approach is related to #2 above, but here you’re really stitching together everything from #1 and #2 to make strides in understanding the highest functions of the mind through the art it makes and sees. You’re not treating the art as a passive artifact that only comes to life once it’s ingested by the nervous system, which you then set about studying. In this third approach, you’re treating the art as a sort of living record of the nervous system from which it emerged. For one such attempt at this type of approach, here’s an essay I wrote on this site about abstract art and its roots in hierarchical neural architecture.

_________________________________________________________________________
Finally, if you’re skeptical about this whole dialogue between the arts and brain sciences, maybe you will agree with this critique of neuroaesthetics by Alva Noë. Here is my response to Noë’s essay, in defense of neuroscience and neuroaesthetics. Please chime in with your views in the comments section below.

 

 

A Response to Alva Noë’s “Art and the Limits of Neuroscience”

December 5, 2011

Alva Noë

A philosopher wrote a blog post on the New York Times’ website, and I don’t agree with him. I started this website—The Beautiful Brain—two years ago with the intent to explore the very pursuits this philosopher deems misguided, so I’ve written the following to keep track of my differences of opinion, and to provide an alternate point of view for anyone interested.

In an essay published in the New York Times’ Opinionator blogs section, philosopher Alva Noë (author of the 2009 book Out of Our Heads) takes aim at the kingdom of present-day neuroscience by directing his attacks at one of this kingdom’s most speculative and remote outposts: Neuroaesthetics. The following is my paragraph-by-paragraph response to Noë’s essay. The essay can be read first in its entirety here.

Art and the Limits of Neuroscience

By ALVA NOË

What is art? What does art reveal about human nature? The trend these days is to approach such questions in the key of neuroscience.

Noë’s first-down play call is an immediate flag for me. The trend these days? Just a major trend, generally, out in society, these days? This opening generalization blows up a balloon Noë will set out to deflate. But the balloon is filled with unfounded air.

Yes, some approach the mega-question “What is art?” in the key of neuroscience. But I’d imagine even those people who do occasionally set forth speculative, neuroscience-infused ideas about why we create art and what it reveals about human nature would acknowledge that neuroscience is still, today, just one among many ways to talk about art—that art historical discussion is still a major “trend,” not yet offset by the trend (in Noë’s formulation) of looking at art through the lens of neuroscience.

For example, the entry wall text at the Guggenheim’s current Cattelan show in New York City makes some big statements about the role of art in our world, none of which is injected with an ounce of current neuroscience. This is because– even for those interested in the neuroscience of art– neuroscience is, for now, but one key still buried in a thick book of musical theory about how to approach the meaning of art. And when it comes to an artist as embedded in the politics and culture of his time as Cattelan, there would be no sense yet in putting the neuroscience we have at our disposal in 2011 into wall text for the general public visiting the show. Thousands will see the Cattelan show at the Guggenheim. None will read about any neuroscience there, and for good reason.

Noë is doing some overly negative poking at a young and humbly speculative field– a field eager to test out new tools and put forth some bold ideas in journals and specialty books, all backed by empirical studies, but one that is not claiming full explanation or revelation, not claiming it’s ready to replace the art historical wall text at the Guggenheim with a few paragraphs on edge detection, peak shift, color opposition, and association cortex. That is not the trend. Rather, this field is the guest you’d want to invite to dinner, because without it the wide-ranging conversation about the perception of art could feel a bit outdated and run-of-the-mill for 2011.

Continuing:

“Neuroaesthetics” is a term that has been coined to refer to the project of studying art using the methods of neuroscience. It would be fair to say that neuroaesthetics has become a hot field. It is not unusual for leading scientists and distinguished theorists of art to collaborate on papers that find their way into top scientific journals.

I’ll step aside here and let the conclusions of one of the papers Noë links to in this section speak for themselves. From Zeki and Lamb, 1994 (perhaps this neuroscience-of-art trend is a little less recent than Noë led us to believe with his “these days” formulation), after pages of inspired work:

In the last few pages, we have tried to use kinetic art and its development as a means of illustrating our general point that, in creating his art, the artist unknowingly undertakes an experiment in which he studies the organization of the visual brain. We have tried to analyse kinetic art in terms of the known neurology of the brain in general and of the pathways subserving visual motion in particular. We have shown that area V5 must be critical for kinetic art. We have therefore also shown that it is possible to relate the experience of kinetic art to the healthy activation of small parts of the brain. We do not mean to imply that the resulting aesthetic experience is due solely to the activity of V5 but only that V5 is necessary for it. It is perhaps a measure of how far we have come along in visual physiology that we can do so and can also begin to enquire into the relationship between physiology and visual art. It goes without saying that there is much in kinetic art which we have left unexplored, even at this level, and there is much at a higher level which we are not even competent to explore. The relationship of brain organization to aesthetics, the symbolism inherent not only in kinetic art, but in all art, the relationship of art to sexual impulses — these are all subjects which are worthy of study, though in a millennial future when we have learned a great deal more about the brain. In other ways, however, the millennial future which poets and artists have dreamed about is already here and, however small our contribution, it is satisfying to us to try to formulate the beginnings of an understanding of the relationship between the organization of the brain and its manifestation in art. [full paper]

Does any of the above warrant attack for overreaching or presupposing? To me, this is a humble but exciting new voice in the conversation, not the end-all answer-man shouting over everyone in the room. Personally, I’d like to hear more in years to come. Noë, it appears, would not like to hear much more. He goes on to speak directly about Zeki, the author of the above passage:

Semir Zeki, a neuroscientist at University College London, likes to say that art is governed by the laws of the brain. It is brains, he says, that see art and it is brains that make art. Champions of the new brain-based approach to art sometimes think of themselves as fighting a battle with scholars in the humanities who may lack the courage (in the words of the art historian John Onians) to acknowledge the ways in which biology constrains cultural activity. Strikingly, it hasn’t been much of a battle. Students of culture, like so many of us, seem all too glad to join in the general enthusiasm for neural approaches to just about everything.

I interviewed John Onians in 2009 after I attended a neuroaesthetics conference in Copenhagen. You can listen to our talk in a podcast here– the interview gets underway just after 8 minutes in (please excuse the production value; it was the first episode!). Here’s an excerpt from about 11:50 in, to give you a sense of how different this John Onians (quoted directly) is from the Onians Noë refers to (without specific quotes) above:

____

Me: What can the findings of neuroscience– the hard cellular data, the brain scans– what can that add, or further, in our understanding of art and art history?

John Onians: The more I learned about neuroscience the more I discovered that there were some areas of knowledge that were particularly helpful to art historians… But it is certainly true that there is not a large body of data which can be presented as a single, coherent framework. I think it’s quite helpful for scientists if people in the humanities come into this area. Because in the humanities, we can use the material in the way we use all the other knowledge and theoretical frameworks in the humanities. We’re not making scientific claims about our work. We’re saying, “I have a hunch about how this may help me.”

_____

I’m not sure where Noë is looking, but when I talked to Onians, I got the sense that he sees neuroscience as a new tool he is encouraging those in the humanities to add to their toolkit, not as a field “fighting a battle” with the humanities. Noë is actually being more divisive here than the chief example he uses to further his argument has ever been.

Continuing:

 What is striking about neuroaesthetics is not so much the fact that it has failed to produce interesting or surprising results about art, but rather the fact that no one — not the scientists, and not the artists and art historians — seem to have minded, or even noticed. What stands in the way of success in this new field is, first, the fact that neuroscience has yet to frame anything like an adequate biological or “naturalistic” account of human experience — of thought, perception, or consciousness.

This is an outrageous claim. Neuroscience is young, and we actually do have some absolutely astounding accounts of human experience from the thousands of brain scientists who have carried out steady, empirical work over the decades (check out John Kubie in the comments section of Noë’s piece for a nice response from a member of the scientific community).

Take the biology that accounts for the very real experience of our visual blind spot, for one small example. And if we’re talking about the neuroscience of art, are you going to tell me that the underlying neuroscience of color and luminance, as applied to the study of Monet’s “Impression Sunrise,” is of no interest?

And that no one has noticed? A section on these neuroscientific insights appears in the Wikipedia entry for Monet’s “Impression Sunrise,” where it takes up almost as much space as the “History” section.

Noë continues:

The idea that a person is a functioning assembly of brain cells and associated molecules is not something neuroscience has discovered. It is, rather, something it takes for granted. You are your brain. Francis Crick once called this “the astonishing hypothesis,” because, as he claimed, it is so remote from the way most people alive today think about themselves. But what is really astonishing about this supposedly astonishing hypothesis is how astonishing it is not! The idea that there is a thing inside us that thinks and feels — and that we are that thing — is an old one. Descartes thought that the thinking thing inside had to be immaterial; he couldn’t conceive how flesh could perform the job. Scientists today suppose that it is the brain that is the thing inside us that thinks and feels. But the basic idea is the same. And this is not an idle point. However surprising it may seem, the fact is we don’t actually have a better understanding how the brain might produce consciousness than Descartes did of how the immaterial soul would accomplish this feat; after all, at the present time we lack even the rudimentary outlines of a neural theory of consciousness.

I thought the user “Dave” in the New York Times’ comment section responded to this section well. Here’s Dave:

False. The insight that the brain operates similarly to a computer put us light years ahead of Descartes in terms of understanding how the brain might produce consciousness. We still have a LONG way to go, but certainly theories like Daniel Dennett’s “multiple drafts” or Benard Baars’ “global workspace” are more on target than Descartes’ ghost in the machine.

Noë is severely undervaluing the work of a lot of important thinkers since Descartes. Continuing:

What we do know is that a healthy brain is necessary for normal mental life, and indeed, for any life at all. But of course much else is necessary for mental life. We need roughly normal bodies and a roughly normal environment. We also need the presence and availability of other people if we are to have anything like the sorts of lives that we know and value. So we really ought to say that it is the normally embodied, environmentally- and socially-situated human animal that thinks, feels, decides and is conscious. But once we say this, it would be simpler, and more accurate, to allow that it is people, not their brains, who think and feel and decide. It is people, not their brains, that make and enjoy art. You are not your brain, you are a living human being.

I re-read this section several times to try to figure out Noë’s deductive steps. In the meantime, here’s Dave the commenter again, with more valid criticisms:

Let me see if I can understand the authors’ argument. It seems to go something like this:

1) We need a healthy body, a normal environment, and social contact in order to be mentally healthy (that is, in order to not have a mental illness).

2) Therefore, we need a healthy body, a normal environment, and social contact in order to be CONSCIOUS.

It should be apparent that the leap from 1 to 2 is just plain silly. Mental health does not equal consciousness. If it did, we would have to say that people suffering from depression or schizophrenia do not have conscious experiences, or that moving to a deserted island would make your consciousness disappear.

The author seems to think that because the brain interacts with its environment, consciousness must therefore take place in the environment instead of in the brain. Perhaps I’m missing something, but this just seems loopy.

The statement “It is people, not their brains, that make and enjoy art” sounds like the denial stage of grieving over the decades-in-the-making entry of modern neuroscience into the discussion of art objects.

Moreover, it’s like saying, “It is people, not their stomachs, that process food.” Remove the stomach and try to process food. Remove the brain and try to make and enjoy art. But remove a finger or three, a limb or two, even another internal organ or more, pluck us away at a young age and put us in a remote territory, and we’re still making and enjoying art, thanks to our intact brains. I can’t go along with Noë’s argument here, just as I couldn’t believe in a lot of his arguments in Out of Our Heads. He rightly points out the need for integrative neuroscience, yet doesn’t take us anywhere new. As the Scientific American MIND review of his book noted, “The problem is that where Noë clears away stale ideas, he offers little of substance to replace them. One comes away from the book without a definitive example of a conscious state that would require more than a brain.”

Noë then offers more towards the above claim:

We need finally to break with the dogma that you are something inside of you — whether we think of this as the brain or an immaterial soul — and we need finally take seriously the possibility that the conscious mind is achieved by persons and other animals thanks to their dynamic exchange with the world around them (a dynamic exchange that no doubt depends on the brain, among other things). Importantly, to break with the Cartesian dogmas of contemporary neuroscience would not be to cave in and give up on a commitment to understanding ourselves as natural. It would be rather to rethink what a biologically adequate conception of our nature would be.

At their best, Noë’s ideas remind us that the brain is an embodied organ; that the nervous system extends to all the far reaches of the body; that the brain is shaped through learning, which takes place in a dynamic environment where we interact with others. But this interaction in the environment leads to neuronal reorganization at every step of the way inside our heads.

At their worst, as in the above passage, Noë’s ideas start to sound like a vague “everything is connected” New Age agenda. It’s provocative to ask the reader to consider a “break” with deep-seated understandings of contemporary neuroscience– but in the end, there are no alternatives to be found here. We’d do better to turn back to the heavy-hitters: Dennett, Damasio, Edelman, and countless others who prefer to explore the mind as it is achieved by what rests between our two ears.

Noë continues:

But there is a second obstacle to progress in neuroaesthetics. Neural approaches to art have not yet been able to find a way to bring art into focus in the laboratory. As mentioned, theorists in this field like to say that art is constrained by the laws of the brain. But in practice what this is usually taken to come down to is the humble fact that the brain constrains the experience of art because it constrains all experience. Visual artists, for example, don’t work with ultraviolet light, as Zeki reminds us, because we can’t see ultraviolet light. They do work with shape and form and color because we can see them.

Now it is doubtless correct that visual artists confine themselves to materials and effects that are, well, visible. And likewise, it seems right that our perception of works of art, like our perception of anything, depends on the nature of our perceptual capacities, capacities which, in their turn, are constrained by the brain.

But there is a problem with this: An account of how the brain constrains our ability to perceive has no greater claim to being an account of our ability to perceive art than it has to being an account of how we perceive sports, or how we perceive the man across from us on the subway. In works about neuroaesthetics, art is discussed in the prefaces and touted on the book jackets, but never really manages to show up in the body of the works themselves!

What works has Noë been reading? Not Margaret Livingstone’s, whose Vision and Art is bursting with… art, including the Monet example given above. Apparently not much of Zeki either, who consistently deals with real art in the body of his works, including the very paper that Noë links to.

And which theorists “like to say” that art is constrained by the laws of the brain, a supposition Noë keeps returning to in this essay? That’s like saying that sports are constrained by the laws of physics. In fact, it’s the laws of physics that give rise to every physical aspect of sports– the outcomes, the boundaries, even the miracles. There is something dissonant about Noë’s conception of cause and effect when it comes to art and the brain. Writing, as Noë does above, that the “brain constrains our ability to perceive” seems to suggest that we first have an ability to perceive, and then the brain comes along and somehow constrains it. Explain to me the cause and effect in this model of perception– it’s nonsensical.

Again, neuroscience is young. If we know anything about visual perception, it’s that it happens in stages in the brain, in anatomically distinct regions responsible for different parts of the process. Some of the most compelling findings in perceptual neuroscience concern only the early stages of processing: lines, color, motion, coherence, object recognition. Other aspects of perception remain largely unknown– those often referred to as “higher” brain functions, though they undoubtedly trickle top-down to influence the very early stages of perception: the integration of one’s own memory and emotions, associations with anything relevant to the work at hand, intellectual significance. No one is claiming to have answers to everything yet– just go back and read Zeki’s passage quoted above to remind yourself of the end-of-the-day humility of someone at the center of the work Noë is criticizing.
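If it helps to make “early stages of processing: lines” concrete, here is a minimal sketch of the textbook toy model of an edge-detecting stage: a Sobel filter that responds wherever luminance changes sharply. This is my own illustration in Python, not anything Zeki or Livingstone provides, and the synthetic “canvas” is invented for the example.

import numpy as np

def sobel_edges(image):
    """A crude analogue of an 'early stage' of visual processing:
    respond strongly wherever luminance changes sharply (an edge)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # the same filter, rotated to catch horizontal edges
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = image[y:y + 3, x:x + 3]
            out[y, x] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

# A tiny synthetic "canvas": dark on the left, bright on the right.
canvas = np.zeros((8, 8))
canvas[:, 4:] = 1.0
print(sobel_edges(canvas).round(1))  # strongest response along the boundary

Real early visual neurons are of course far richer than one fixed filter, but the analogy– local detectors whose outputs later stages assemble into objects and meaning– is exactly the staged architecture described above.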

Noë continues:

Some of us might wonder whether the relevant question is how we perceive works of art, anyway. What we ought to be asking is: Why do we value some works as art? Why do they move us? Why does art matter?  And here again, the closest neural scientists or psychologists come to saying anything about this kind of aesthetic evaluation is to say something about preference. But the class of things we like, or that we prefer as compared to other things, is much wider than the class of things we value as art. And the sorts of reasons we have for valuing one art work over another are not the same kind of reasons we would give for liking one person more than another, or one flavor more than another. And it is no help to appeal to beauty here. Beauty is both too wide and too narrow. Not all art works are beautiful (or pleasing for that matter, even if many are), and not everything we find beautiful (a person, say, or a sunset) is a work of art.

Again we find not that neuroaesthetics takes aim at our target and misses, but that it fails even to bring the target into focus.

Why do I value Monet’s Impression Sunrise? For many reasons. Some are art historical– its significance to the school of impressionism, its departures and influences. Some are personal and indescribable– waves of feeling, a sudden mood. And some, despite Noë’s overbearing negativity, stem from recent offerings of perceptual neuroscience. When I read Livingstone’s account of Sunrise, I gained an awareness of the perceptual process occurring inside my own biology, one that added deep value to my experience of viewing the art, just as my emotional resonances and art historical understanding of the piece had.

Livingstone’s work led me to some new questions I hadn’t really considered before in studying art history: Have some artists intuitively, quite unconsciously, tapped into universal features of our neurobiology to induce widespread appreciation of their work? And if so, might it be interesting and useful to study these universal aspects of our biology of perception?

Yet it’s early. Neuroaesthetics, like the neuroscience of consciousness itself, is still in its infancy. Is there any reason to doubt that progress will be made? Is there any principled reason to be skeptical that there can be a valuable study of art making use of the methods and tools of neuroscience? I think the answer to these questions must be yes, but not because there is no value in bringing art and empirical science into contact, and not because art does not reflect our human biology.

“Value” here is totally relative, totally subjective. If this passage told me anything, it’s that Noë and I have very different definitions of what a “valuable study of art” is.

To begin to see this, consider: engagement with a work of art is a bit like engagement with another person in conversation; and a work of art itself can be usefully compared with a humorous gesture or a joke. Just as getting a joke requires sensitivity to a whole background context, to presuppositions and intended as well as unintended meanings, so “getting” a work of art requires an attunement to problems, questions, attitudes and expectations; it requires an engagement with the context in which the work of art has work to do. We might say that works of art pose questions and encountering a work of art meaningfully requires understanding the relevant questions and getting why they matter, or maybe even, why they don’t matter, or don’t matter any more, or why they would matter in one context but not another. In short, the work of art, whatever its local subject matter or specific concerns ― God, life, death, politics, the beautiful, art itself, perceptual consciousness ― and whatever its medium, is doing something like philosophical work.

I’m with Noë here– art and philosophy are doing similar work.

One consequence of this is that it may belong to the very nature of art, as it belongs to the nature of philosophy, that there can be nothing like a settled, once-and-for-all account of what art is, just as there can be no all-purpose account of what happens when people communicate or when they laugh together. Art, even for those who make it and love it, is always a question, a problem for itself. What is art? The question must arise, but it allows no definitive answer.

Absolutely! No one is saying they have a definitive answer, though.

For these reasons, neuroscience, which looks at events in the brains of individual people and can do no more than describe and analyze them, may just be the wrong kind of empirical science for understanding art.

Here Noë is being quite authoritative on two positions: first, that neuroscience is trying to be definitive about art (even Zeki doesn’t claim this), and second, that it’s the wrong kind of empirical study for understanding art. This is like telling your daughter she can’t go on a playdate with a new friend from school when a) your daughter hasn’t asked to go on the playdate yet, but merely mentioned that she talked to a new student that day, and b) you’ve personally never met nor seen this new student yourself. But no playdate!

Noë’s concluding sections are hasty, over-the-top, and they put words into the mouth of an entire scientific field:

Far from its being the case that we can apply neuroscience as an intellectual ready-made to understand art, it may be that art, by disclosing the ways in which human experience in general is something we enact together, in exchange, may provide new resources for shaping a more plausible, more empirically rigorous, account of our human nature.

Noë’s legion of strawmen, rushing in with their ready-made neuroscientific answers to the deepest questions, apparently need to go back home and question how empirically rigorous they’ve been. But if Noë has anything more to say about this “more plausible” and “more empirically rigorous” study of the perception of art, it’s not to be found here. Stirring up the conversation with alternative propositions or lines of research is a good thing; but putting words in an entire field’s mouth, telling it what it is not and will never be: these are things that, when posted on the New York Times’ site, amount to a swell of unfounded negativity in full public view. Noë comes off in this essay like Raymond Tallis, minus the humble Socratic admission of knowing that he doesn’t know. We’ll have to wait to see what Noë does suggest in his forthcoming book on art and human nature.

As for the rest of the scientists out there studying perception, adding valuable voices to the chorus of a deepened and widened understanding of, and appreciation for, all forms of art: we can continue to thank the labs that slowly but surely generate the insights deemed worthy of peer-reviewed journals, academic textbooks, and books for the general public. The deepest questions– the rings of the “target” Noë believes neuroscience can’t even bring into focus– will not be answered instantaneously. That is why they are the deepest questions.

The Thing That Discovers Itself

[ 2 ] August 24, 2011

What do a single cell, a simple organism, and a Nobel Prize-winning scientist have in common?  Each has a life story.

In 2007, scientists discovered spherical and ellipsoidal forms preserved in the ancient sandstone slabs of Western Australia.  They were microfossils, impressions just a few millionths of a meter long, far beyond the reach of the unmagnified eye.  But whatever they were, they were old.  Their chemical traces—carbon, sulfur, nitrogen, and phosphorus—date back approximately 3.4 billion years.  Because carbon and nitrogen are common elements in all living things, researchers concluded that these newfound forms were once alive.  They were bacteria.  They fed off sulfur compounds.  They clung to sand grains in the sediment.  Dr. David Wacey of the University of Western Australia concluded that “early life was very simple, just single cells and small chains, some perhaps housed in protective tubes.”[i] This could be the story of the first life on earth.

How many lives have come to pass on this planet?  An estimated one hundred billion human beings have existed.[ii] But we are just one species of roughly two million that are known.  Many more have yet to be discovered.  The National Science Foundation’s “Tree of Life” project estimates that there could be between five million and one hundred million species present today.[iii] Then there are the extinct.  One popular claim holds that 99.9% of all species are long gone.[iv] Life has been present here on Earth for over three billion years.  Who knows how fruitful each species has been, and how much it has multiplied, during its existence?
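Taken together, those figures imply a staggering total. Here is a quick back-of-envelope calculation in Python– my own arithmetic, using only the numbers quoted above, not a figure from the cited sources:

# Back-of-envelope arithmetic from the figures quoted above.
extant_low, extant_high = 5e6, 100e6  # NSF "Tree of Life" estimate of living species
surviving_fraction = 1 - 0.999        # if 99.9% of all species are long gone

# If today's species are only 0.1% of every species that has ever lived:
total_low = extant_low / surviving_fraction    # ~5 billion species
total_high = extant_high / surviving_fraction  # ~100 billion species

print(f"Species ever to have lived: {total_low:.1e} to {total_high:.1e}")

By this rough logic, somewhere between five billion and one hundred billion species may ever have existed– each with its own multitude of individual lives.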

Despite the unknowable number and unimaginable variety of its forms, there is an essential unity to life.  Every individual has descended from a common ancestor.[vi] We all take part in evolution, the process of change in the genetic composition of a population due to both random and nonrandom mechanisms.  Our genes either do or do not endure, and those genes that reach higher-than-average frequencies within the population can be considered fitter.  Evolution is an ongoing process of determining “What is more fit?”  We contribute data to this survey by shepherding our genes to the next generation.  So it goes, from generation to generation.  The arrow of time points forever forward; we must survive and pass on.  Because of the deep influence of evolution—from genes to species—patterns are conserved.  Genetic information is coded in the same basic molecular language: A, C, G, and T or U.[vii] All individuals are composed of cells, the smallest basic unit of life.  And the story of a life—of your life and all life—will unfold in a somewhat predictable way.
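To make the idea of genes at “higher-than-average frequencies” concrete, here is a toy model in Python– a minimal sketch of my own, not anything from the cited sources: one gene, two alleles, where the slightly fitter allele tends to rise in frequency across generations (the nonrandom mechanism) while random sampling of each new generation adds the noise of drift (the random one).

import random

def allele_frequencies(p=0.5, w_a=1.05, w_b=1.00, pop=1000, gens=100):
    """Toy model of selection plus drift at one gene with two alleles.
    Allele A is passed on in proportion to its relative fitness (w_a),
    and each new generation is a random sample of `pop` individuals."""
    freqs = [p]
    for _ in range(gens):
        # Selection: weight each allele by its fitness.
        prob_a = (p * w_a) / (p * w_a + (1 - p) * w_b)
        # Drift: random sampling of the next generation.
        count_a = sum(random.random() < prob_a for _ in range(pop))
        p = count_a / pop
        freqs.append(p)
    return freqs

trajectory = allele_frequencies()
print(f"Allele A frequency: {trajectory[0]:.2f} -> {trajectory[-1]:.2f}")

Run it a few times: the fitter allele usually climbs, but not always, and never along the same path– evolution’s survey of “What is more fit?” is statistical, not scripted.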


The Neuroscience of Tetris

[ 25 ] June 3, 2011

Think of all the puzzle games you’ve ever played. Which has forced you to make visuospatial connections at a rate faster than your brain can normally process them? Can you think of one that combines subtle geometric nuances with coordination of the eyes and fingers to create visual harmony? How about a game where you must cyclically build and destroy structures using randomized building blocks? Tetris is one of the most ubiquitous electronic games of all time, probably because it hides this beautiful complexity behind the face of seven simple falling blocks, the famous Tetriminoes.

When Alexey Pajitnov developed Tetris in the ’80s, it was an instant hit. People all over the world crammed around Nintendo Entertainment System (NES) consoles to play with the falling blocks and score points, and the game has now sold over 100 million copies worldwide. Today Tetris has developed into something of an online sport, spawning hundreds of yearly tournaments and even a global ranking system. The game is available on almost any platform you can think of and, at the very least, there will always be someone in a crowd who can hum the game’s famous Russian tune, “Korobeiniki.” It’s probably safe to say that Tetris is here to stay for a very long time.

Yet what’s most fascinating about this game isn’t the statistics behind its virality. Human-computer interaction is an important part of the learning process, one that relies on delicate brain-machine interfacing. People break through learning curves by activating multiple senses at once, and until very recently scientists had only speculated that playing Tetris relied on this type of interfacing in a way that few other systems do. Then, in 2009, a study published in the open-access journal BMC Research Notes changed the way people think about Tetris.

Gray matter is tissue composed of neuronal cell bodies, distributed throughout the central nervous system. It is involved in functions like memory and sensory processing, and it is remarkable in that it demonstrates plasticity– the ability to shrink and thicken in response to repeated external stimuli. As young children learn new information, their gray matter develops accordingly. And while cognitive capacity is generally believed to decline in old age, gray matter’s plasticity allows even the adult brain to continue growing. Put simply, Tetris has been found to act upon this flexibility of brain matter by actually thickening it, thereby strengthening the underlying neural networks.

The BMC study used MRI to scan the brains of subjects who practiced Tetris for 30 minutes a day, and compared these images to scans of people who had not practiced Tetris at all. In the practice group, the researchers found that the subjects’ gray matter had thickened, leading them to believe that the game drives physical cognitive development that should (in theory) also improve things like memory capacity. In effect, playing Tetris allows your brain to operate more efficiently.
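For readers curious about the logic of such a comparison, here is a schematic sketch in Python– entirely my own illustration with invented numbers, not the BMC team’s actual analysis. It shows the basic statistical move behind “the practice group’s gray matter was thicker”: measure both groups and ask whether the difference between their means exceeds what chance alone would produce.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=0)

# Hypothetical cortical-thickness measurements (mm) for one brain
# region, one value per subject; the practice group is simulated
# with a small thickening effect. All numbers are invented.
practice_group = rng.normal(loc=2.58, scale=0.10, size=15)
control_group = rng.normal(loc=2.50, scale=0.10, size=15)

# Two-sample t-test: is the gap between the group means bigger
# than random variation would predict?
t_stat, p_value = ttest_ind(practice_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A real imaging study does this across many brain regions at once, with corrections for multiple comparisons, but the underlying question is the same.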

But how does playing Tetris make the brain operate more efficiently? Like any motor, the brain needs fuel to work properly and consistently, and sugars like glucose provide this fuel. A prevailing theory known as the Tetris Effect holds that when a person first starts to play Tetris, their brain consumes a huge amount of glucose in order to solve the game’s fast-paced puzzles. Through consistent, limited daily practice, the brain begins to consume less glucose while performing just as well, if not better. After a few months the brain becomes so efficient at the game that it requires only a very small amount of fuel for its rapid puzzle work. In other words, the brain learns to solve Tetris conundrums with increasing energy efficiency even as its performance improves on tasks that once required loads of glucose. This is a prime example of brain efficiency– still a mysterious concept to researchers.

The BMC study appears to link gray matter plasticity to brain efficiency, though no research making that claim outright has been published at this time. At the bare minimum, it is probably safe to assume that Tetris affects the brain in a way that is healthy and even beneficial to learning. So why not exercise that gray matter and give your brain a boost? You might find Tetris difficult at first, but it grows on you. Better yet, it’ll grow your noggin, too!

Jeremy Fordham is a contributing writer for Online PhD Programs. He is an engineer who hopes to inspire dialogue in unique niches by addressing topics at the intersection of many disciplines.
