Primatologist Frans de Waal has written a terrific essay for the New York Times’ Opinionator blog about our moral instincts. Do we need a “God” in order to do “good?” No, de Waal argues:
“According to most philosophers, we reason ourselves towards a moral position. Even if we do not invoke God, it is still a top-down process of us formulating the principles and then imposing those on human conduct. But would it be realistic to ask people to be considerate of others if we had not already a natural inclination to be so? Would it make sense to appeal to fairness and justice in the absence of powerful reactions to their absence? Imagine the cognitive burden if every decision we took needed to be vetted against handed-down principles. Instead, I am a firm believer in the Humean position that reason is the slave of the passions. We started out with moral sentiments and intuitions, which is also where we find the greatest continuity with other primates. Rather than having developed morality from scratch, we received a huge helping hand from our background as social animals.”
A new study in the journal Cognition, by Judith Kroll and her team at Penn State, suggests that deaf individuals fluent in American Sign Language (ASL) with English as their second language can read words and internally visualize signs at the same time. Like non-deaf bilinguals, ASL users engage their knowledge of both languages simultaneously, though in the case of the deaf, that means they aren’t merely thinking of two words, they’re thinking of both a word and a mental representation of positioned human hands.
Deaf individuals were asked to decide, as fast as they could, whether two sample English words were related or unrelated in meaning (e.g., ‘bird’ and ‘duck’ would be related, ‘movie’ and ‘paper’ unrelated). When the related words also had ASL signs that were similar to each other in shape and motion, the deaf participants responded significantly faster on the task.
In other words, if two signs are physically similar, they speed up language translation. There is clearly some kind of “vocabulary of the hands” in ASL signers that may make two similar hand shapes analogous to two similar-sounding words, like “drive” and “dive” or “sun” and “sunk.”
These data further support the idea that ASL is very cognitively similar to spoken language, likely uses some of the same neural pathways, and is learned in the same way. The latter idea is strongly supported by the touching history of Nicaraguan Sign Language, which was created by an isolated population of genetically deaf children in Nicaragua in the 1970s.
Positron emission tomography, or PET, is an extremely important tool in neuroscience. Aside from its use in medicine for the detection of neurological disease, tumors, and stroke, PET scans – like fMRI scans – can measure molecular activity in specific regions of the brain and answer important questions about neurophysiology. Until now, though, PET scanning of the brain has required a motionless subject. A research group from Brookhaven National Laboratory recently published an article in Nature detailing a customized PET scanner for rodents.
Unlike human PET, which requires subjects to be immobilized, the rodent PET scan allows the animal to move freely while its brain is scanned. This seemingly small detail has immense consequences for neuroscience research; the behavioral tests that have been used on rodents for the last century (e.g., mazes, wheels, tests of eating behavior, addiction studies, and so on) can now be accompanied by visual representations of regional brain activity.
I assume scientifically inclined gerbil owners eagerly await a custom gerbil-fitted PET scan…if not for home experiments, they can at least make sure their furry pets have clean bills of health.
It’s important to take fMRI studies with a dose of skepticism. These are the studies that tell you “the brain lights up” when you do or think of X. Really, this “lighting up” is an fMRI scanner detecting changes in blood flow to certain regions of the brain, which is assumed to correlate with increases in neuronal electrochemical activity in those regions. Such studies are certainly helpful for localizing function in the brain, and for showing us what healthy versus diseased function looks like. But despite the mainstream media’s enthusiasm, it shouldn’t surprise us anymore that certain regions of the brain light up when we do or think anything at all. They better!
That being said, here’s one of the coolest fMRI studies I’ve seen. Charles Limb studies the neural correlates of creativity: what happens in the brain when we play a memorized piece of music versus when we improvise a solo? What happens in the brain when we “trade fours,” a classic improvisation technique in jazz? This 17-minute TEDxMidAtlantic talk is a must-watch for anyone interested in creativity and the brain. See what Limb does that makes the audience burst into spontaneous applause, and all the great footage of jazz piano playing (and rap freestyling!) inside an fMRI scanner.
While yesterday we BBBlogged about Facebook’s role in boosting self-esteem, some other recent psychology research argues the opposite. Though Stanford’s Alex Jordan and colleagues agree that Facebook encourages the posting of primarily positive things (e.g., self-esteem-boosting pictures, quotes, or smart profile additions), they argue that constantly seeing images of your peers expressing joy can amplify depressive or negative feelings about oneself. Slate‘s Libby Copeland wrote on the report:
By showcasing the most witty, joyful, bullet-pointed versions of people’s lives, and inviting constant comparisons in which we tend to see ourselves as the losers, Facebook appears to exploit an Achilles’ heel of human nature. And women—an especially unhappy bunch of late—may be especially vulnerable to keeping up with what they imagine is the happiness of the Joneses.
The “happiness of the Joneses” is an important idea; how “relative” is happiness? Does seeing others in a fixed state of joy, as we often see them on Facebook, make us feel that we are alone with our problems? Or does our ability to craft a see-what-we-want, joyful, flattering Facebook persona make us happy?
(read the rest of Copeland’s nice Slate piece here)
A new research article in the interestingly titled journal Cyberpsychology, Behavior, and Social Networking argues that Facebook is a self-esteem booster. In the study, one group of subjects was allowed to spend several minutes tinkering with their Facebook profiles, while subjects in the other group sat and looked in a mirror for the same period of time. The subjects’ self-esteem levels were then measured using well-established self-esteem tests.
The lead researcher, Jeffrey Hancock of Cornell, argues that the reason for the Facebook boost is the overall “positive” nature of Facebook content: users can post mainly flattering pictures of themselves and update their statuses with primarily witty comments, and the majority of social interactions on the site are positive. These factors have an obvious correlation with high self-esteem:
The results revealed that… becoming self-aware by viewing one’s own Facebook profile enhances self-esteem rather than diminishes it. These findings suggest that selective self-presentation in digital media, which leads to intensified relationship formation, also influences impressions of the self…By providing multiple opportunities for selective self-presentation—through photos, personal details, and witty comments—social-networking sites exemplify how modern technology sometimes forces us to reconsider previously understood psychological processes.
The moral of the story… close this window and go spend a few hours on Facebook, I guess. Weird…