
The Evolution of Chalkboard Torture

October 18, 2011

On the universal terribleness of fingernails scratching a chalkboard.

Having grown up in New York City in the mid-nineties, I was spareth’d the rod of classroom corporal punishment.  Wrist slapping, spankings, canings – these were methods of discipline I saw in movies and read about in books, often inflicted on slight, well-meaning cockney schoolboys.  But when my old French teacher, Madame G, launched her fingernails-on-the-chalkboard assault, French class became third-period waterboarding.  Far worse than the occasional verbal scolding, “time out,” or shameful walk to the principal’s office, there was something to be said for the gripping discomfort caused by the hellish timbre of keratin on slate.

Loud, high-pitched noises are good at cutting through the low rumble of background noise – that’s why our alarms aren’t basslines, violins play lead, and it’s impossible to ignore the aspiring opera singer among the 2 a.m. karaoke dregs.  But the sound of fingernails on the chalkboard isn’t your run-of-the-mill bothersome high-pitched noise. It’s unique in its cross-cultural universality, an annoyance that only the most masochistic of us enjoy hearing, and only the most malicious enjoy producing.  Why is that so?  One theory says it has something to do with the distress calls of chimpanzees.

Not all sounds are created equal – our brains have evolved to attribute special significance to certain ones.  For instance, we all experience a particularly strong emotional response to the sound of a crying baby. The sound arouses our instinct to care for the young, telling us that a helpless little whelp needs something, and we can’t help but empathize (unless you’re on a bus and it’s not yours).  The evolutionary roots of this response are clear: If humans innately treated a baby’s cry as they did, say, a bird’s chirp, our species, having left its babies unattended and undernourished, surely wouldn’t have survived long enough to be the successful, world-raping genius race it has become.

In addition to crying, screaming and yelling have a special place too.  They communicate distress – someone needs help, someone is angry, or danger is close by.  Cries, screams, and yells aren’t elements of language; they’re more basic, they’re unlearned – you are born crying and screaming, and you never forget how.  Such unlearned shouts exist primarily to communicate a simple but important message (e.g. “danger!”) and to trigger an appropriate emotional response in the listener (e.g. fear).  They’re nature’s expletives.

Most vertebrates have their own expletives, like when your neighbor’s pit bull barks “f*** you intruder!” at you while you innocently trot by his fenced yard, or when your surly cat meows “feed me you lazy failure!” while you watch the Weather Channel in your underpants.  Intriguingly, we are able to sense the emotions of other animals when they exclaim – our innate response to human cries of pain and threatening screams also generalizes to other species.  We obviously didn’t adapt to understand when gerbils are frightened, but we know it when we hear it.

So, with all that in mind, could the enigmatically universal response to the sound of fingernails on the chalkboard have something to do with our innate response to distressed yells?  Maybe.  Randolph Blake, a neuroscientist at Vanderbilt University, has shown that the nails-on-chalkboard sound is remarkably similar acoustically to the distress calls of our cousins, the chimpanzees.  According to Blake, we cringe at chalkboard torture because ancient parts of our brains think a chimpanzee is screaming.

But why would we fear a chimpanzee’s scream?  Well, 6.5 million years ago we diverged from the chimps, and during the vast majority of that time we evolved alongside them in the East African savannah.  Though chimps mainly stuck to the trees while human ancestors moved to the ground, they shared predators (lions, cheetahs, big birds of prey, etc.). Chimps, along with other apes and several species of monkey, have notoriously loud distress calls.  When a chimp sees danger – say, a skulking lion – she lets out a piercing high-pitched screech, like a knife through the jungle.  These screeches would surely have been heard by our ancestors, and, because we don’t speak chimp, it would have taken generations of natural selection for humans to develop an innate fear response to the sound and, ultimately, for that response to generalize to similar sounds, like the ones that come out of French teachers’ nails scraping chalkboards.  Et voilà – the chalkboard scratch is uniquely abhorrent because it mimics the once-familiar distress call of a threatened chimp neighbor, an auditory relic of our perilous evolutionary past.

A fair objection to the tidy little story above is that humans could merely learn, through experience, that when chimps scream, trouble is about.  What evidence do we have that it is an innate response rather than a learned one?  Well, not much.  But here’s a brief experiment – take a listen to this chimp distress call. Goosebumps? Cringing?  Wet your pants?  I would venture a guess that you’ve never heard a chimp distress call until now, and I would also guess you reacted to it with fear and surprise.  No, there isn’t a lion near you (though you should check), but something in your viscera is telling you this sound bodes ill.

It’s generally impossible to prove that a modern human eccentricity is the result of an ancient adaptation because we can’t go back in time, but if a theory makes sense it should be entertained.  It’s funny to think that my French teacher may have been unknowingly mimicking a chimpanzee when she tortured us with the chalkboard maneuver.  I should be thankful she didn’t take the imitation a step further and throw her feces at me.


Free(r) Wills

December 20, 2010

Do people think that their own will is “freer” than others’?  

Emily Pronin and Matthew Kugler’s recent paper, “People Believe They Have More Free Will Than Others,” confidently suggests an answer to that question.  

While the debate concerning the existence of true “free will” rages on (see the Re:Cog piece on that debate here), it is accepted that “free will” is, at least, a subjective feeling everyone has. Published in the most recent Proceedings of the National Academy of Sciences, Pronin and Kugler’s paper centers on an interesting phenomenon known as the “actor-observer bias,” first described forty years ago by social psychologists Edward Jones and Richard Nisbett.  The actor-observer bias describes people’s inclination to perceive their own actions as calculated responses to shifting circumstances, while viewing others’ actions as the products of a fixed, established personality.

In a series of survey-based experiments, Princeton’s Pronin and Kugler show that people exhibit the actor-observer bias in a variety of circumstances. In one experiment, subjects answered the question “What are you doing this Saturday night?”  In most cases, their answers reflected their own desires and intentions in the moment (I imagine these answers went something like: “I’m gonna go to the bar because I’m in the mood for a drink and I’m tired of going to the weekly campus concert”). When subjects were asked “What is your roommate [X] doing this Saturday night?” their answers tended to account for history and personality more than desires and intentions (“Jake is probably going to the concert series because he goes to it every week and loves loud indie rock.”)

In another study, the authors asked restaurant employees to describe the possible paths their futures could take.  Those asked generally listed twice as many paths for their own futures as for those of their fellow workers:

Participants circled more options as being genuine possibilities in the case of themselves than a coworker…[They] believed that their own futures contained many possible paths, but that others were likely to continue on whatever path they were currently walking.

***

While we may like to think that free will, if it exists, is imbued by evolution in all individuals in equal measure, our actions don’t appear to reflect this principle.  We attempt to predict our peers’ behaviors by drawing on our knowledge of their previous actions and their personality, while we characterize our own behaviors as reactions to our ever-changing environment:

This research supports the hypothesis that people perceive themselves as possessing more of the ingredients that constitute free will than those around them. Individuals in our experiments viewed their past and future behaviors as less predictable a priori than those of their peers, and they believed that, relative to their peers, there were more possible paths that their own lives could take…Our participants indicated that it was internal desires and intentions that best predicted their own (but not others’) future behavior.

In some ways, though, this result should not come as a surprise.  We can only really know our own minds, and it makes sense to think of our own behaviors as malleable and adaptive – after all, that’s why we think we have free will.  On the other hand, we cannot get inside others’ minds and can only use what we know about them (their history, personality, etc.) to predict their behavior.

***

So, we live on alone, tortured by the endless number of options we have at every moment and annoyed by the millions of robots that get in our way.  George Carlin, in one of his best comedy specials, made an astute observation on driving that seems eerily apropos:

Have you ever noticed that when you’re driving, anyone going slower than you is an idiot and anyone going faster than you is a maniac?

Tools Don’t Suffer Fools

September 21, 2010

Interesting parallels between human and non-human tool use

Tool use holds a prominent place among the behaviors we consider uniquely human; it stands beside our mastery of language, our upright posture, and even our propensity to make art.

But over the past century or so, many human-specific behaviors have been systematically shown to be present elsewhere in the animal kingdom.  Research by scientists like Robert Seyfarth and Peter Marler shows how some primates and songbirds are able to make symbolic connections between vocalizations and abstract meanings, which is the foundation of language.  Pavlov led the way for research showing that animals as genetically different as crickets and pigeons can learn and problem-solve, skills humans often consider the basis of our “superior intelligence.”  Orangutans and chimps can stand upright.  Elephants can paint.  We know chimps occasionally use tools, and new research in Science, which will be discussed below, shows that New Caledonian crows likely owe their healthy disposition and success as a species to their ability to hunt protein-rich larvae using twigs.

I do not mean to gloss over the impressiveness of these behaviors in humans – we certainly use language in a far more complex way than vervet monkeys, the famous Thai painting elephants were intensely trained by humans, and crickets can’t do algebra.  However, while it is easy to point out the behavioral differences between us and the rest of the animal kingdom, the evolutionary causes of many human-specific behaviors are probably more similar to other organisms than we like to imagine.

I’ll only discuss one such evolutionary parallel here because, as with any discussion of evolutionary history, the data is scarce and the variables are numerous and intricate.

New research just published in Science concerning tool use in New Caledonian (NC) crows tells an interesting story – it seems that these clever birds are not only expert twig-wielders, but their use of twigs is central to their diet.  They use twigs to custom-fashion hook-like devices that help them fish out wood-boring beetle larvae.  Some impressive calculations by Christian Rutz and his team reveal that the protein- and fat-rich larvae account for a substantial portion of the crows’ diet.   These crows have found a more intelligent way to fulfill their dietary needs quickly and efficiently than the spread-out, time-consuming foraging patterns normally seen in crows and their relatives (e.g. scouring the ground for less nutritious nuts and seeds).

Furthermore, previous research has shown that NC crows are born with the propensity to use sticks, meaning that it is a heritable trait:

“Hand-raised juvenile New Caledonian crows spontaneously manufacture and use tools, without any contact with adults of their species or any prior demonstration by humans (Kenward et al, 2005).”

Lastly, young birds still have to learn how to use the hunting tools.  At an early age they have trouble wielding the twigs efficiently, and they can hunt effectively with twigs only after months of practice.

Here are some parallels with the evolution of human tool use.

  • First, while humans likely evolved the ability to alter objects in the environment (make tools) for specific adaptive applications about 2 million years ago and NC crows only acquired the trait in the past few hundred years, the behavior almost certainly evolved in hominins for dietary reasons (I use the word hominins because the first stone tool industry, the Oldowan industry, is usually attributed to Homo habilis, one of our early, pre-Homo erectus ancestors).  Cut marks on bones found near tool sites point towards the role of early stone tools in butchery.
  • Butchery implies meat-eating, and meat-eating is believed to be a watershed moment in human brain evolution because of both the direct benefits of protein-rich meat, and the social changes it brought through food sharing and collective hunting.  Like the NC crows, use of tools allowed humans to steer their foraging strategies away from the chimp-like methods of scouring for nuts and fruits towards a hunting-based strategy.
  • Once tools caught on in humans, they became ubiquitous.  Stone tool assemblages have been found near fossils of every species since Homo habilis and ultimately led to the myriad tools of modern humans.  Furthermore, tool use is thought to be somewhat heritable in humans (babies love their inanimate objects) and has even been attributed to distinct, human-specific brain areas in the frontal cortex.

While human and crow behavior is vastly different, evolutionary pressures can be very similar, and the adaptive products of those pressures sometimes line up.  NC crows use tools to efficiently enrich their diet, as Homo habilis did in the East African rift valley 2 million years ago.  This seemingly minor statement actually reflects an important, oft-overlooked concept in evolution.  Distantly related species often turn to similar behaviors when faced with similar environmental pressures: bats fly as birds do, some whales sing like cicadas, songbirds can communicate to each other like monkeys, and, like Homo habilis, New Caledonian crows invented tools to eat meat.  And, like human ancestors, they also must learn (and teach) the behavior, allowing it to continue down the lineage.

Without denying our uniqueness and superlative abilities, humans must admit that our behaviors come from the environment; the same environment all animals share.  And isn’t that comfortably humbling?

Five Notes For All

August 20, 2010

Are We Wired for Music?

by Sam McDougle

I was recently leafing through Jared Diamond’s bestseller “Guns, Germs, and Steel” and was reminded of the prevailing idea in biological anthropology that humans first colonized the Americas  around 15,000 years ago (give or take a couple thousand years).  I had just posted on this site about Leonard Bernstein’s Harvard lectures and his thoughts concerning the universality of the five-note “pentatonic scale,” so I began to think about Native American music and its pentatonic nature.

The centrality of the flute in Native American music then reminded me of a study in Nature published last year that reported the oldest-ever archeological evidence of music: a 35,000-year-old flute carved from the bone of a vulture.  While the exact tones produced by the flute remain unknown, it had a curious number of holes: 5.  I also recently stumbled on a research article in the journal Infant Behavior and Development on the language of mothers and their infants – the authors showed that infants and their mothers coordinate their pitches harmonically while they speak, and the go-to pitches were often within a pentatonic scale.  Throw in the pentatonic’s ubiquity in Eurasian and African traditional music and the picture is pretty clear – these 5 notes are genetic.

The universality of the pentatonic scale in world music is not a new idea.  However, the idea that it could be biological is more controversial.

***

The acoustic signature of human speech owes much to the harmonic series – the stack of overtones, at whole-number multiples of a fundamental frequency, that naturally accompanies any resonating tone (a plucked string, a sung note).  The human voice is one such resonator, so speech carries this same harmonic structure.

Kamraan Gill and Dale Purves argue that humans are drawn to musical scales because scales represent a harmonic series similar to human speech:

“The component intervals of the most widely used scales throughout history and across cultures are those with the greatest overall spectral similarity to a harmonic series. These findings suggest that humans prefer tone combinations that reflect the spectral characteristics of conspecific vocalizations (Gill and Purves, 2010).”

In other words, we are drawn to scales because they acoustically resemble speech, and we are drawn to speech for obvious reasons.  But what do notes arranged in scales (music) communicate that speech doesn’t?
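Gill and Purves quantify this with a spectral-similarity measure; the toy sketch below is only a loose illustration of the underlying intuition (it is not their actual metric), scoring each pentatonic interval by how well its frequency ratio can be built from low-numbered harmonics.

```python
from fractions import Fraction

def nearest_harmonic_ratio(interval, max_harmonic=16):
    """Approximate a frequency ratio with a fraction of two low-numbered
    harmonics; lower 'complexity' means the interval sits closer to the
    bottom of a harmonic series (a rough proxy, not Gill & Purves' metric)."""
    frac = Fraction(interval).limit_denominator(max_harmonic)
    complexity = frac.numerator + frac.denominator
    error = abs(interval - float(frac))
    return frac, complexity, error

# Just-intonation ratios for the major pentatonic scale (e.g. C D E G A)
pentatonic = {"unison": 1/1, "major 2nd": 9/8, "major 3rd": 5/4,
              "perfect 5th": 3/2, "major 6th": 5/3}

for name, ratio in pentatonic.items():
    frac, complexity, error = nearest_harmonic_ratio(ratio)
    print(f"{name:12s} {ratio:.3f} ~ {frac} (complexity {complexity}, error {error:.4f})")
```

Every pentatonic interval lands on a ratio of small integers (9/8, 5/4, 3/2, 5/3), which is roughly the sense in which the scale “sounds like” the harmonic structure of a voice.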

Another line of new research suggests that this question is misguided.  Instead of separating music from speech and looking for obvious functional differences, we have to look deeper at the similarities.  In a new article in the psychology journal Emotion, Megan Curtis of Tufts University argues that the defining pitch intervals of sad (“minor”) and happy (“major”) music can also be heard in normal speech.

Curtis recorded professional actresses speaking neutral two-syllable phrases (e.g. “Let’s go”) with various emotional signatures, like “sadness” and “pleasantness.” The actresses typically uttered a minor 3rd interval when expressing sadness and a major 3rd interval when expressing happiness. Furthermore, listeners overwhelmingly heard “sadness” in the minor third and “pleasantness” in the major third – it appears that the pitch of spoken words communicates emotion in the same way music does.

Intriguingly,  these intervals play an important role in differentiating the major and minor pentatonic scales.
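To put numbers on those intervals (mine, not Curtis’s): a minor third spans roughly three equal-tempered semitones and a major third roughly four, so a pair of spoken pitches can be crudely classified by converting their frequency ratio into semitones.

```python
import math

def semitones(f_low, f_high):
    """Interval between two fundamental frequencies, in equal-tempered semitones."""
    return 12 * math.log2(f_high / f_low)

def third_quality(f_low, f_high, tolerance=0.5):
    """Crudely label an interval as a minor third (~3 semitones, the 'sad' one)
    or a major third (~4 semitones, the 'happy' one)."""
    st = semitones(f_low, f_high)
    if abs(st - 3) < tolerance:
        return "minor 3rd"
    if abs(st - 4) < tolerance:
        return "major 3rd"
    return f"neither ({st:.2f} semitones)"

# Hypothetical pitch pairs (Hz) for the two syllables of a phrase like "Let's go"
print(third_quality(220.0, 261.6))  # ~3 semitones -> minor 3rd ("sad" reading)
print(third_quality(220.0, 277.2))  # ~4 semitones -> major 3rd ("pleasant" reading)
```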

***

Noam Chomsky put forth the idea of a “universal grammar” in language – a cross-cultural, biologically ordained set of syntactical rules that turn words into sentences.  I suspect that a “universal musical grammar” will eventually be added to his model.  Subjects and verbs are universals in human language; perhaps the pentatonic scale is one too.


Will You Will?

July 19, 2010

By Sam McDougle

The Neuroscience of Free Will

“Consciousness” and “Free Will” are complicated topics.  Actually, researchers wish they were merely “complicated” topics – I would go for “staggeringly complex.”  “Philosophically and scientifically baffling” has a nice ring to it as well.

There is a two-pronged attack going on in academia in the quest to understand the feeling of “I,” involving both theorists and laboratory neuroscientists.  Theorists, like Daniel Dennett or Daniel Wegner, use a wide panoramic lens to peer into the big philosophical conundrums of consciousness and free will.  Is the physical brain the engine of the train of experience, and the feeling of “consciousness” merely steam? Or is consciousness the coal driving the brain-engine forward?  Is volitional action really caused by our conscious decisions?  Or is “deciding” simply an illusion of control superimposed on deterministic biology?

These are huge questions that will likely remain unanswered for years to come.

Neuroscientists prefer to use a microscopic lens to study consciousness and free will.  Without losing sight of the big puzzle, they work tirelessly to fit little pieces together, one by one, and their questions are usually pointed and a little more manageable:  What is the role of the posterior parietal cortex in the subjective feeling of control?  How does the pre-supplementary motor area contribute to voluntary action?

Research on these smaller questions offers much-welcomed relief from philosophy-induced headaches.

***

The most important aspect of free will is the impression that we consciously control our bodily actions.  The official name of this feeling is the “Sense of Agency” (SoA).  A basic example of SoA would be my current feeling that, “the words I am typing on this screen are a result of the control I have over the movement of my fingers on the keyboard.”  Furthermore, I sense that, “these are not someone else’s fingers, nor do I think that the appearance of words on the screen is a coincidental accident – there is a causal relationship between my typing and the appearance of the words.”

There are two main theories concerning the psychology of SoA — I’ll try to describe them as succinctly as possible:

  • The first theory is that we feel SoA because we constantly make predictions about our actions.  When these predictions are fulfilled, we perceive that we “willed” the consequences of our actions: If I predict that typing is going to make words appear, and it does, I feel that I caused them to appear.
  • The other main theory is that SoA is an illusion, and is experienced after the effects of our actions occur — The motor actions are biologically and subconsciously predetermined, and we only feel that we consciously willed them subsequent  to the effects: After noticing that words appear when I type, I assume I willed them to.  Because I only feel this after the words appear, my SoA did not cause the actual effect, it just “makes sense” of the whole affair.

Parsing through these heady theories is a tiring process, but if we can quantify aspects of SoA we can make the debate more concrete.

One way that neuroscientists have quantified SoA involves a phenomenon known as the “intentional binding effect.”  It goes like this:

Say you have a button in your hand that gives you a small, painless electrical shock exactly one second after you press it.  While doing this, you are looking at a timer.  At the end of the task you are asked to estimate the time lapse between pressing the button and getting shocked, and you confidently say “a half a second,” though it was actually a full second.

This is the intentional binding effect – when given a task that involves voluntary motor control over a stimulus (i.e. a shock), people’s perception of time is compressed, and they sense a more immediate consequence of their actions than the actual time lapse.  The intentional binding effect is used as a marker for SoA because it only occurs when someone has full motor control of a stimulus (i.e. they have complete “agency” over an event), it is consistent across almost all individuals, and it is quantifiable.
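To make the arithmetic concrete, here is a minimal sketch of how a binding score might be computed – the judged delays below are made-up numbers, not data from any study; the effect is simply the gap between the actual action-outcome delay and the delay subjects report.

```python
# Minimal sketch of an intentional-binding score (made-up numbers, not real data):
# binding = actual delay minus judged delay, so larger positive values mean
# stronger temporal compression between action and outcome.

def binding_score(actual_delay_ms, judged_delays_ms):
    """Mean compression of the perceived action-outcome interval, in ms."""
    return sum(actual_delay_ms - judged for judged in judged_delays_ms) / len(judged_delays_ms)

ACTUAL_DELAY = 1000  # button press -> shock, always one second

voluntary_reports = [520, 610, 480, 550, 590]      # hypothetical estimates, active button press
involuntary_reports = [930, 980, 1010, 950, 960]   # hypothetical estimates, passive condition

print(f"voluntary binding:   {binding_score(ACTUAL_DELAY, voluntary_reports):.0f} ms")
print(f"involuntary binding: {binding_score(ACTUAL_DELAY, involuntary_reports):.0f} ms")
```

Only the voluntary condition shows substantial compression, which is why the effect can serve as a quantitative proxy for the sense of agency.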

New research by Moore et al in the latest Proceedings of the Royal Society sheds some light on the neurological foundations of SoA.  Using a technique called “theta-burst stimulation,” a form of transcranial magnetic stimulation, the researchers were able to temporarily “turn off” specific areas of their human subjects’ brains.  This was done while the subjects performed the button-pressing shock task described above.  Because volitional motor movements are at the heart of SoA, they tested two motor areas – the sensorimotor cortex and the pre-supplementary motor area.  While the sensorimotor cortex is responsible for “processing signals directly related to action execution and sensory feedback,” the pre-supplementary motor area is involved in “the preparation and initiation of voluntary actions (Moore et al, 2010).”

The authors found that when they turned off the sensorimotor cortex there was no effect on intentional binding.  That is, the subjects still reported a squished perception of time between pressing the button and feeling the shock.  However, when the pre-supplementary motor area was shut down, the subjects no longer perceived a compressed time interval – the pre-supplementary motor area seems to have an effect on SoA while the primary sensorimotor cortex does not.

***

While this all might seem like run-of-the-mill cognitive neuroscience, the authors put on their theorist caps to show the significance of this result:

“Theoretically, SoA could arise from either of two distinct processes. On the one hand, SoA may involve a prediction—based on the neural commands for action—that the sensory effect will occur. On the other hand, the brain might infer, or postdict, from the conjunction of action and effect, that the action caused the effect, as in illusions of conscious will (Moore et al, 2010, emphasis added).”

Their results point toward the former – the prediction-based theory – because the role of the pre-supplementary motor area is the preparation of motor actions, not the sensory feedback that occurs afterward.

***

In essence, this research scores points for a non-illusory free will. Maybe we have a “sense of agency” because we predict the effects of our actions before we act, and this sense is validated when the predictions are correct.

If you were scared of being a deterministic robot, this work offers some hope.

At present, my pre-supplementary motor area is preparing me for a walk to the coffee maker…am I consciously willing my legs to move me through the corridor, or is my mechanical subconscious just pushing me along?

The Crack Rocky Road

May 30, 2010

by Sam McDougle

Can One Be “Addicted” to Food?

Addiction is hard to define.

Does it merely refer to any uninhibited behavior that is habitually repeated?  This definition feels too loose – I wouldn’t say I was addicted to tying my shoes or going to the bathroom.

Does addiction refer to a habitually repeated behavior that is carried out despite its negative consequences?  Again, I would argue that biting my nails or cracking my knuckles is not, technically, an addiction (the colloquial hyperbole in stating that one is “addicted” to biting their nails, or another such potentially unhealthy habit, seems to prove the point).  These behaviors seem to lie under the bigger umbrella of “habits.”

So are addictions just particularly nasty habits?

***

“Physically addictive” is a popular buzz phrase in modern jargon.  Marijuana legalization activists use the term to highlight the drug’s innocuousness, and nicotine is often cited for its extreme “physically addictive” properties.  The phrase implies a biological definition of addiction – addiction is a chemical occurrence in the brain.

But isn’t all behavior a chemical occurrence in the brain?  What differentiates my compulsive intake of coffee and my compulsive knuckle-cracking?  Both behaviors are products of chemical transfers in my brain and body, both result in some kind of reward, and I perform both behaviors too often for my own good.  Why am I “addicted” to coffee, but simply have the “bad habit” of knuckle-cracking?

Let’s ditch these questions for a minute, and look at a behavior that is widely agreed to be a “proper” addiction – repetitive cocaine use.  Cocaine causes euphoria by blocking dopamine reuptake at synapses, flooding the brain’s reward circuits with excess dopamine.  In the short term, this causes excessive partying; in the long term, it rewires the brain’s response to the reward of the drug and incites further (and heavier) use.  The common signs of full-blown addiction are increased tolerance, amplified motivation to use, bingeing, and withdrawal.

Additionally, one of the common signatures of addiction is continuing the addictive behavior in the face of palpable negative consequences.  This is clear with cocaine addicts, who continue to heavily use the drug even as their hearts fail, bodies wither, and bank accounts dwindle.

Alcohol, nicotine, and heroin are also truly “addictive” substances, and have similar addictive characteristics.

But what about junk food?  Recent research suggests that the brain reacts similarly to both the intake of high-calorie foods and the use of addictive substances (e.g. cocaine).  Our society may not be snorting pixie sticks or mainlining Pepsi, but the analogies are staggering.

***

There’s no question that the intake of dense, fatty foods causes increased motivation to use – just give someone a bowl of potato chips and see if they can eat only one.  Bingeing is another obvious correlate, with entire pints of ice cream as the most common victims. Withdrawal behaviors – that is, depression and anxiety – were observed when experimental rats were deprived of sugar after a prolonged binge.  In a recent paper in Nature Neuroscience, increased tolerance was observed in rats exposed to a high-fat diet, and rats fed high-fat foods (bacon and cheesecake, as opposed to their normal gruel) showed changes in their brains’ dopamine systems comparable to those of cocaine-addicted rats.

The rats were getting high on cheesecake.

This same paper also showed that rats who were habituated to eating high-fat foods would continue eating them even if they were electrically shocked afterwards.

On the comparison to drug addiction, the authors write:

Given all of this, how far shall we go in drawing parallels between drug addiction and food addiction? Unlike drugs, food is essential for survival, but frequent consumption of bacon, sausage and cheesecake is not. The availability of such foods in most developed societies has increased so quickly that, similar to addictive drugs, they may stimulate brain reward systems more powerfully than we have evolved to handle, signaling a false fitness benefit and thereby reinforcing unhealthy patterns of consumption. In that respect, a parallel is defensible (Epstein & Shaham, 2010).

Food addiction can be connected to our ever-ballooning waistlines as well.  It cannot be cited as the only cause of obesity, though it probably plays a role – while there are cases where obesity results from genes or the limited availability of leaner foods, addiction to junk food can surely lead to serious weight gain, especially in the obese, drive-thru West.

***

Mill once said something about “personal experience” bringing truths home.  To me, no personal experience better supports the “junk food addiction theory” than childhood candy obsessions.  And, in my first post after a month, I’ll leave you with a YouTube video instead of a haughty painting.
