In Search of Morality: An Interview with Joshua Greene

(Editor’s note: This article is from a past issue of Brain World magazine. If you enjoy this article, consider a print or digital subscription!)


When making that big decision, do you go with your gut, or do you map out how your judgment will affect those around you? This has been an endless source of fascination for Joshua D. Greene. Greene has been busy bridging the gap between psychology and philosophy at Harvard University, where he is professor of psychology and director of the Moral Cognition Lab.

The Moral Cognition Lab studies how ethical intuitions play out in the world using a scientific approach. Greene’s book, “Moral Tribes: Emotion, Reason, and the Gap Between Us and Them,” examines the biological and cultural forces that shape moral behavior.

Brain World had a chance to chat with Joshua Greene about his work and the big questions driving it.

Brain World: How did you end up studying moral psychology?

Joshua Greene: I came to psychology and neuroscience as a philosopher interested in the main questions of moral philosophy: What’s right, and what’s wrong? How can we know? And, when people disagree about what’s right and what’s wrong, how can we rationally resolve these disagreements?

As an undergrad, I was introduced to the “trolley problem”: Is it OK to push a person off a footbridge and onto the tracks in front of a speeding trolley in order to stop that trolley from killing five people? Most people say “no.” But then there’s this version: Is it OK to hit a switch turning the trolley away from five people but towards one person? Now, most people say “yes.” Why does it seem right to trade one life for five in one case but not the other? This set of dilemmas reflects tension between the deontological perspective, which says morality is fundamentally about rights and duties, and the utilitarian perspective, which says that morality is ultimately about consequences for human well-being.

Understanding the trolley problem would help me understand most moral problems in the real world. (Almost every moral debate is somehow about the rights of the individual versus the greater good.) In grad school, I read about cognitive neuroscience and had some ideas about how competing moral impulses might work in the brain. That began my current research program.

BW: How were you influenced by philosophers David Lewis and Gilbert Harman?

JG: Gilbert Harman was interested in empirical moral psychology early on. Most moral philosophers believe that they can study the “ought” of moral philosophy while ignoring the scientific “is” of moral psychology. Harman and I agree that you can’t determine right or wrong from scientific data. But Harman recognized that science can help us get beneath the intuitions we rely on to make moral judgments, and that this can change our assessment of how reliable those intuitions are. In that way, Harman was an influence and inspiration.

David Lewis’ style of thinking had a big influence on me, even though my work is very different. We share something called “naturalism,” explaining as much as possible in terms of ordinary physical evidence, the kind of stuff that science can study. Lewis famously argued that there are many physical worlds beyond our own. I have my doubts about that, but despite this difference, we’re both trying to make sense of reality without appeal to anything “spooky.”

BW: Why did you move back to Harvard in 2006, after your postdoctoral fellowship at Princeton in the Neuroscience of Cognitive Control Laboratory?

JG: If you get offered a decent job in the academic world, you go! (And if it’s a great job, then you go really fast.) It wasn’t a hard decision. I was just delighted that I had the chance.

In a sense, though, I wasn’t really coming back. When I was here as an undergrad, I was a philosophy major. I took only one psychology class — behavioral neuroscience. So, even though I was returning to Harvard, it was a whole new set of people and a very different kind of work.

I see Harvard’s psychology department as an empirically minded philosophy department. People here ask the big questions about the human mind, but in a way that starts with scientific investigation.


BW: You study moral judgment and decision-making using behavioral experiments and functional neuroimaging (fMRI). Can you explain how neuroimaging plays into studying decision-making?




1 Comment

  1. The trolley problem is entirely flawed and easily falsified as a moral problem at all. Most people with an intelligence greater than their shoe size will consider the greater implications of their decision in the trolley problem and decide on that basis, rather than on what is intended by the rather naive and narrow-minded researchers, who seem to think that merely saying something to a subject makes it true. It doesn’t.

    Here are some reasons why it doesn’t work:
    1) It is not possible to have the purported knowledge of the consequences of the trolley’s motion down the track; only an unmedicated schizophrenic or a child would believe they possess such perfect predictive knowledge.
    2) If a person pushes the fat man off the bridge, or throws a switch to send the trolley onto a track where it kills one person, they become involved in the subsequent events, whereas if they do nothing they are free of responsibility for the outcome. Most people will rightly choose not to get involved. Legally, a person who interferes with the railway switch commits a crime, and since it is impossible to prove that you knew the five people would be killed, you are guilty of the murder of one person: a charge that, even if you subsequently prove your innocence, will still mean arrest, the posting of substantial bail, a long court process, public exposure through the media, and so on.

    Whilst most people do not think of these details, they are generally aware of the risk of intervening, the responsibility they take on if they get involved, and the possible subsequent cost of doing so.

    There are other scenarios. For instance, if a person is already involved and cannot escape that involvement (eliminating the involvement problem), and if magical precognitive knowledge is not required, people will choose one over five every time. This indicates that it is the confounding dimensions in the trolley problem that cause the results obtained in studies, not the moral one-or-five question.

    My note above is greatly abridged, but I’ve related a sufficient argument to show the fundamental flaw in the trolley problem thought experiment generally.
