In Search of Morality: An Interview with Dr. Joshua Greene


When making that big decision, do you go with your gut, or do you map out how your judgment will affect those around you? This has been an endless source of fascination for Dr. Joshua Greene. Greene has been busy bridging the gap between psychology and philosophy at Harvard University, where he is professor of psychology and director of the Moral Cognition Lab.

The Moral Cognition Lab takes a scientific approach to studying how ethical intuitions play out in the world. Greene’s book, “Moral Tribes: Emotion, Reason, and the Gap Between Us and Them,” examines the biological and cultural forces that shape moral behavior. Brain World had a chance to chat with Joshua Greene about his work and the big questions driving it.

Brain World: How did you end up studying moral psychology?

Joshua Greene: I came to psychology and neuroscience as a philosopher interested in the main questions of moral philosophy: What’s right, and what’s wrong? How can we know? And, when people disagree about what’s right and what’s wrong, how can we rationally resolve these disagreements?

As an undergrad, I was introduced to the “trolley problem”: Is it OK to push a person off a footbridge and onto the tracks in front of a speeding trolley in order to stop that trolley from killing five people? Most people say “no.” But then there’s this version: Is it OK to hit a switch that turns the trolley away from five people and toward one person? Now, most people say “yes.” Why does it seem right to trade one life for five in one case but not the other? This set of dilemmas reflects a tension between the deontological perspective, which says morality is fundamentally about rights and duties, and the utilitarian perspective, which says that morality is ultimately about consequences for human well-being.

I came to see that understanding the trolley problem would help me understand most moral problems in the real world. (Almost every moral debate is somehow about the rights of the individual versus the greater good.) In grad school, I read about cognitive neuroscience and had some ideas about how competing moral impulses might work in the brain. That began my current research program.

BW: How were you influenced by philosophers David Lewis and Gilbert Harman?

JG: Gilbert Harman was interested in empirical moral psychology early on. Most moral philosophers believe that they can study the “ought” of moral philosophy while ignoring the scientific “is” of moral psychology. Harman and I agree that you can’t determine right or wrong from scientific data alone. But Harman recognized that science can help us get beneath the intuitions we rely on to make moral judgments, and that understanding where those intuitions come from can change our assessment of their reliability. In that way, Harman was an influence and inspiration.

David Lewis’ style of thinking had a big influence on me, even though my work is very different. We share a commitment to “naturalism,” explaining as much as possible in terms of ordinary physical processes, the kind of stuff that science can study. Lewis famously argued that there are many physical worlds beyond our own. I have my doubts about that, but despite this difference, we’re both trying to make sense of reality without appeal to anything “spooky.”

BW: Why did you move back to Harvard in 2006, after your postdoctoral fellowship at Princeton in the Neuroscience of Cognitive Control Laboratory?

JG: If you get offered a decent job in the academic world, you go! (And if it’s a great job, then you go really fast.) It wasn’t a hard decision. I was just delighted that I had the chance.

In a sense, though, I wasn’t really coming back. When I was here as an undergrad, I was a philosophy major. I took only one psychology class — behavioral neuroscience. So, even though I was returning to Harvard, it was a whole new set of people and a very different kind of work.

I see Harvard’s psychology department as an empirically minded philosophy department. People here ask the big questions about the human mind, but in a way that starts with scientific investigation.

BW: You study moral judgment and decision-making using behavioral experiments and functional neuroimaging (fMRI). Can you explain how neuroimaging plays into studying decision-making?

JG: Sure. As an undergrad, I realized that I could test my hypotheses by looking at patients with certain kinds of brain damage. There was a famous neurological patient from the 19th century named Phineas Gage. He was a railroad foreman working in Vermont who suffered a tragic accident on the job: an explosion sent an iron bar through his medial prefrontal cortex, the brain region behind the middle of the forehead. We now know, partly because of Gage’s case, that this region integrates emotional signals into decision-making.

Gage was a respected, upstanding guy before the accident. After the accident, he was still able to talk, think, and reason, but his personal, even moral, character changed. By studying contemporary patients with damage like Gage’s, Antonio Damasio and others have helped us understand this part of the brain. These patients, according to Damasio, “think” but don’t “feel.” My thought, upon reading Damasio’s article “The Return of Phineas Gage,” was that this difference between “thinking” and “feeling” can explain why people respond so differently to apparently similar moral dilemmas.

At Princeton, there were no neurological patients to test, but Jonathan Cohen, who became my mentor, was setting up a new brain imaging center. I thought, maybe I don’t need to test patients. Perhaps we can see these responses in the brains of healthy people given moral dilemmas. So we did a series of brain imaging experiments, and the results were consistent with our predictions. More “personal” dilemmas, such as the footbridge case, elicit activity in regions of the brain associated with emotion. More “impersonal” dilemmas, such as hitting the switch, elicit activity in regions associated with rule-based reasoning.

More recently, these experiments have been done with neurological patients. Patients like Gage with damage to the lower parts of the medial prefrontal cortex are about twice as likely as others to say that it’s OK to push the guy off the footbridge to save five lives. Again, this is because they lack normal, negative emotional reactions to the thought of pushing.
