Big Ideas in Social Science: An Interview With Jonathan Haidt on Moral Psychology

The latest in a series of conversations with leading intellectuals in collaboration with the Social Science Bites podcast and the Social Science Space website.

By David Edmonds & Nigel Warburton

Jonathan Haidt. (Photo: Public Domain)

Jonathan Haidt is a social psychologist and professor of ethical leadership at New York University’s Stern School of Business. He is author of The Happiness Hypothesis and The Righteous Mind and is currently writing a book about morality and capitalism.

David Edmonds: Abortion, capital punishment, euthanasia, free speech, marriage, homosexuality: these are all topics on which liberals and conservatives take radically different views. But why do we adopt certain moral and political judgments? What factors influence us? Is it nature or nurture? Are we governed by emotion or reason? Jonathan Haidt, a psychologist and best-selling author, most recently of The Righteous Mind, was formerly a staunch liberal. His research has now convinced him that no one political persuasion has a monopoly on the truth.

Nigel Warburton: The topic we’re going to focus on is moral psychology. Morality is normally studied in philosophy departments, not psychology departments. What is moral psychology?

Jonathan Haidt: Philosophers are certainly licensed to help us think about what we ought to do, but what we actually do is the domain of psychologists. We study many different aspects of human nature including morality, moral judgment, moral behavior, hypocrisy, and righteousness. These are major topics of huge importance to our political and everyday lives.

Warburton: For a psychologist this, presumably, involves experiments, or at least observation?

Haidt: That’s right. It involves scientific methods, which don’t have to be experimental. As with any difficult-to-study field, it’s appropriate to use a range of different methods including fieldwork, reading widely, talking to people who have varying moral world views, and so on. In that sense it can be a little bit like anthropology.

Warburton: In recent years there have been some interesting developments now that MRI scans can give us a glimpse of what’s going on physiologically when people are making decisions.

Haidt: Yes. There are two aspects of this. The first is to acknowledge that moral judgments arise from events in the brain. But the more interesting questions are about which parts of the brain are particularly active when we’re making them. It turns out, from Joshua Greene’s original study, and Antonio Damasio’s before that in the 1990s, that brain regions associated with emotion play a very large role, and the reasoning areas sometimes take a long time to come in. One of the big topics of debate now is how to fit those together: the fact that we reason logically, we feel emotions; the frontal insula is more active when we’re disgusted. I’m on the side that says that emotional reactions tend to drive the reasoning, and I think most of the neuroscience literature is consistent with that.

Warburton: Most of us like to think that when we make a moral decision, it’s somehow a rational decision, it’s not just a gut instinct. Are you saying that’s a kind of self-deception?

Haidt: Yes, I am. We judge right away. This is one of the major findings in social psychology; it’s sometimes called the “Automaticity Revolution.” It goes back to Wilhelm Wundt 120 years ago. Within the first quarter of a second we react to people’s faces, we react to words, we react to propositions, and then reasoning is much slower. Robert Zajonc, a well-known social psychologist, argued in the 1980s that “preferences need no inferences.” For example, our minds react to something new as if to aesthetic objects, and then that reaction constrains the nature of our reasoning. What we’re bad at doing is sizing up all the evidence and seeing where it points overall.

What we’re really good at is saying: “Here’s the hypothesis I want to believe; let me now see if I can find evidence. If I can’t find any evidence, alright, I might give it up….” But wouldn’t you know it, we’re usually able to find some evidence to support it.

Warburton: So, you’re saying that moral reasoning is really just rationalization?

Haidt: For the most part, when we are doing moral reasoning about anything that is vaguely relevant to us. Some people think that I deny that rationality exists, which I don’t at all. We’re able to reason about all sorts of things. If I want to get from point A to point B I’ll figure it out, and then if somebody gives me a counterargument and shows me that, no, it’s faster to go through C, I’ll believe him or her. But moral judgments aren’t just about what’s going on in the world.

Our morality is constrained by so many factors: one of the main ones is our team membership. Political disagreements have a notorious history of being impervious to reasons given by the other side. That then makes the other side think that we are not sincere, we’re not rational; and both sides think that about each other. What you think about abortion, gay rights, whether single mothers are as good at parenting as married couples: all of these attitudes tie you to your team, and if you change your mind, you are now a traitor, you will not be invited to dinner parties, and you might be called some nasty names.

Warburton: What’s your evidence that moral judgments aren’t rational?

Haidt: I first got interested in this sort of question in graduate school. I was reading a lot of ethnography about how morality varies across cultures. In culture after culture they had moral rules — about the body, about menstruation, food taboos — and I was reading the Old Testament, and the Qur’an, and all these books, and so much of this morality seemed visceral, hard to justify in a cost–benefit analysis. It’s true that utilitarians say, “Actually, they’re just wrong; morality is really about human welfare, and all we have to do is maximize human welfare.” If most people were intuitive utilitarians, then I think you could say that; but what you find is that people are not naturally utilitarians. Again, this is not to say utilitarianism is wrong. I’m just saying that people have a lot of moral intuitions, and experiments on persuasion show it’s very hard to persuade people.

My own research involved giving people scenarios that were disgusting, or disrespectful, but caused no harm. For example, a family eats their pet dog after the dog was killed by a car in front of their house. When confronted with that scenario, the Ivy League undergraduates did generally say that it was OK — if they chose to do that, it was OK. So there was one group that was rational utilitarian in that sense, or rights-based I suppose you would also say. But the great majority of people, especially in Brazil and especially working-class people in both the United States and Brazil, said, “No, it’s wrong, it’s disrespectful, there’s more to morality than this.” So, just descriptively, most people have moral intuitions that conflict with utilitarianism. When you interview them about these, or if you do experiments where you manipulate their intuitions, you can steer their reasoning to reveal the intuitions.

Warburton: I know you’ve divided the kinds of intuitions they have into five categories. Would you mind just recapping those?

Haidt: What really struck me when I was reading all this ethnography, and when I spent three months doing research in India, is the degree to which certain patterns are so recognizably similar all around the world, yet the final expression of a morality is so often unique and so variable between cultures.

Pacific Standard is running a series of excerpts from Big Ideas in Social Science, a collection of interviews from the Social Science Bites podcast. (Photo: SAGE Publishing)

What are the underlying patterns that can be explained from an evolutionary point of view? Reciprocity is a strong candidate here. Robert Trivers wrote a famous article on reciprocal altruism. If someone wants to claim that fairness and reciprocity are entirely socially constructed, entirely learned from our parents, that is just implausible. The same applies to caring for vulnerable offspring. We’re mammals, and we have mammalian tendencies. So you start with those two aspects, reciprocity and caring for offspring, and recognize that nativism has to be right about those. The categories that my colleagues and I added are group loyalty — we’re good at coalitions; respect for authority; and then the fifth one is sanctity or purity — the idea that the body is a temple. This is just descriptive, not normative. Those are the five that we feel most confident about, but there are many more. Nowadays we think liberty is different from the others. I think in the future we’re going to find that property, or ownership, is a moral foundation; you see it all over the animal kingdom with territoriality, and there’s some new research from several labs showing that children at the age of two, or three, notice and care about property and ownership and what’s in someone’s hand versus not in their hand.

Warburton: One of the interesting insights in your research was the way that, politically, liberals and conservatives are attached to different sets of values.

Haidt: That was not my original intent. I was investigating how culture varied across countries, especially the contrast between India and the U.S. What I found in my early work was that social class was sometimes an even more important factor than nationality. That’s what I set out to study, and that’s how my colleagues and I arrived at this list of moral foundations. I was engaged in this research on morality when the Democrats lost in the 2000 U.S. elections, and then lost again in 2004. I was a fairly strong liberal back then: I really disliked George W. Bush. I wanted to use my research to help the Democrats understand, to help them connect with American morality, because Bush was connecting, and Al Gore was not. So when I was invited to give a talk to the Charlottesville Democrats in 2004, right after the election, I took my cross-cultural theory and applied it to Left and Right, as though they were different cultures. It worked well. I expected to get eaten alive: I was basically telling this room full of Democrats that the reason they’d lost was not because of Karl Rove, and sorcery and trickery, it was because Democrats, or liberals, have a narrower set of moral foundations — they focus on fairness and care, and they don’t understand or communicate the group-based, visceral, patriotic, religious, hierarchical values that most Americans have.

Warburton: That seems to imply that you helped Democrats to play to virtues or types of moral thinking that didn’t come naturally to them. It’s almost as if you were suggesting they should be insincere in the way that they put themselves across.

Haidt: I began simply by wanting the Democrats to win. It was an open question whether the advice would be that they should assume a virtue whether or not they had it. But I changed what I felt as I went on. I treated this like ethnographic fieldwork. I would read conservative magazines; I subscribed to cable television so I could watch Fox News; and, at first, it was offensive to me, but then I began to get it, to see how the views interconnected: how if you really care about personal responsibility, and if you’re really offended by leeches and mooches and people who do foolish things, then you don’t want others to bail them out. If those are your values, then I can begin to see how the welfare state is one of the most offensive things ever created. So, I started actually seeing what both sides are really right about: certain threats and problems. Once you are part of a moral team that binds you together, it blinds you to alternate realities; it blinds you to facts that don’t fit your reality. So, I was writing Chapter 8 of The Righteous Mind, where I tried to explain conservative notions of fairness and liberty, and I handed it to my wife to edit. I told her that I couldn’t call myself a liberal any more, because I really thought both sides were deeply right about different issues.

Warburton: That’s very interesting. So the connection between your empirical research and your own changing personal political beliefs was fairly direct.

Haidt: That’s right. If you’re studying morality, you’re studying the operating system of our social life. Since the operating system of academia is very liberal, or leftist, I was enmeshed in the liberal team. I was trying to help my team win as an activist would. We have a lot of debate in social psychology, whether it’s acceptable to be an activist, because we have many social psychologists who are activists, especially on race and gender issues, and most people think that’s all right. But I’ve come to think that it’s not. Once you become part of a team, motivated reasoning and the confirmation bias are so powerful that you’re going to find support for whatever you want to believe. I’d like to think that my research, eventually, helped me leave my team and become a free agent.

Warburton: Do you want to generalize from your own experience? Or are you saying that social scientists ought to remain aloof from politics?

Haidt: I’m saying that if you are a partisan, you are not going to process reality evenly. Good science doesn’t require that we all be neutral and even-handed. The way science works, the reason why it works so brilliantly, is not because scientists are so rational; it’s because the institution of science guarantees that whatever we say is going to be challenged. As long as we have a working intellectual marketplace, as long as there’s somebody to take the other side, someone to try to refute what we’re saying during the peer review process, then science can be full of biased individuals. The problem is, if everybody shares the same bias, there’s nobody on the other side, and it is likely that the group will reach conclusions that are simply false. That’s what’s happened, not on most issues, but on some politically charged issues of race, gender, and politics.

Warburton: One thing that’s noticeable in your work is the way you use metaphors. How important are metaphors for you?

Haidt: I believe we are intuitive creatures that are not persuaded just by logic: things have to feel right first, and then we look for supporting evidence. If it feels right and we see the evidence, then we believe. I’m trying to persuade people and say, “Look, here’s how the mind works, here’s how morality works.” I therefore have to offer them not just a whole list of experiments — every science book does that — but I have to give them some metaphors to help them change their mental structures, and then have a place to put all these experiments that I then summarize. So, my first big review article, published in 2001 in the Psychological Review, was titled “The Emotional Dog and Its Rational Tail.” I was trying to make the case, based mostly on a review of the literature, not my own research at that point, that intuitions drive reasoning, not the other way around. So that’s a metaphor that I put out there in the title of my paper and that seemed to stick: a lot of people seemed to gravitate to that.

The second metaphor that I suggested that’s had some currency was for my book The Happiness Hypothesis. I developed the metaphor that the mind is divided into parts, like a rider on an elephant, where the rider is the conscious, reasoning, verbal-based processes, 1 or 2 percent of what goes on in our heads, and the elephant is the other 99 percent, the intuitive automatic processes which are largely invisible to consciousness. That’s the best metaphor I ever developed; I hear from people all the time: “Oh yeah, I read your book; don’t remember anything about it, but man, that metaphor, that stuck with me forever, and I use it in my psychotherapy practice.” And then in The Righteous Mind I’ve added a few more metaphors: one is the idea of hive psychology, the thought that we human beings are products of individual-level selection, just like chimpanzees, which makes us mostly selfish; and we can be strategically altruistic, but we also have this weird feature, which is that, under the right circumstances, we love to transcend ourselves, our self-interest, and come together like bees in a hive. These are some of the best times in our lives; these are incredibly important politically, in terms of people joining causes and rallies. So, the metaphor that I developed in The Righteous Mind is that we are 90 percent chimp, and 10 percent bee.

Warburton: You’re a psychologist by training, and psychology is usually thought of as one of the social sciences. Is that how you see yourself, as a social scientist?

Haidt: Yes. I study morality, and I identify as a social psychologist. But because I focus on a topic from multiple perspectives, I find that some of the best things I’ve read have been by historians, economists, anthropologists, and philosophers, especially those philosophers who have been reading empirical literature. I think of myself as a social scientist, almost as much as I think of myself as a social psychologist.

Warburton: Is there something distinctive about being a social scientist, as opposed to being a scientist, or a philosopher?

Haidt: The natural sciences form the prototype of what we think the sciences are. If you’re studying rocks or quarks, and there’s this definitive experiment, and you design this experiment and get the results, it’s very clear, it’s easy to understand. But rocks and quarks do exactly what the laws of physics tell them to. The social sciences are necessary because our subjects have these properties of consciousness and intentionality. There are these emergent properties that rocks and quarks don’t have. Studying people and social systems requires a different set of tools and different ways of thinking, and you can’t avoid questions of meaning. For the natural sciences, meaning is not a relevant concept, but it is unavoidable in the social sciences.
