If you didn’t notice, the presidential primaries are in full swing. Daily, we’re seeing news about Hillary Clinton, Bernie Sanders, and Donald Trump (in the U.S., anyway), and by this point, most people know where they stand. Most people know who they support.
But why? Think about your own preferred candidate. Why do you support that person? One major reason may come down to morality.
Past research on “moral conviction” has proposed that some people hold certain opinions because of their core moral beliefs. Take Bernie Sanders supporters, for example. Although some Sanders supporters’ opinions come down to practicality and logistics, others have a moral basis for their opinion. To them, being pro-Sanders is actually a reflection of what they think is morally right. Similarly, people who dislike Donald Trump may do so on moral grounds, believing that he is an affront to their sense of morality.
So it’s clear that there are some opinions that we think are based on our morals and others that aren’t morally based. And what’s moral for me may not be moral for you.
The Consequences of Moral Opinions
But what does it really matter that one person’s attitude toward an issue is based on his moral beliefs whereas another person’s attitude toward the same issue comes from somewhere else? There’s been quite a bit of research on this question, but I’ll highlight two key ways in which moral and non-moral opinions differ.
First, people are more likely to act on their opinion if they see it as a matter of morality. It’s the difference between supporting Hillary Clinton and voting for Hillary Clinton. Two people may equally support her, but it’s the person who supports her for moral reasons who’s more likely to get out and vote.
In one study, for example, researchers found that the more people thought their choice for president reflected their moral beliefs, the more likely they were to vote in the 2000 U.S. presidential election. They have also found that the more people think that their position on a specific issue is a matter of morality, the more they say they will vote in upcoming elections.
Second, people are less likely to revise their opinion if they see it as a matter of morality. We’re constantly faced with pressures to change our opinions. We read new information, have surprisingly good and bad experiences, and learn the opinions of friends and family. All of these things could lead us to update our opinions. If our opinions are tied to our moral compass, however, those things are less likely to affect us.
To show that this is the case, consider one study in which researchers tried to create social pressures to get people to rethink their opinion of torture. People participated in a group activity where they would discuss the issue of using torture to extract confessions. The group would first meet one another on the computer, but the sneaky part is that there was really only one participant–the rest of the group was fake. In their introductions, though, these “other group members” shared their own opinions about torture…and they all said they were in favor of it.
So group pressure was created. The real participant, who initially opposes torture, finds out that all of his fellow group members are in favor of it. What does he do? Stick to his original opinion or change it? The results showed that the more he thought his initial opinion was a matter of morality, the less likely he was to change in the face of group pressure.
It’s Enough Just to Think It’s Moral
An interesting new question has emerged, though: What if someone just thinks her opinion is about morality even if it has nothing to do with her moral beliefs?
Usually I use this blog to cover research from all corners of social psychology, but today I’m going to cover some of my own research. (Fine, you caught me: my post a few weeks ago about Tylenol also included some research I was involved in. My point is, it doesn’t happen often!) Recently, along with my collaborators Rich Petty, Pablo Briñol, and Ben Wagner, I published some studies that addressed this question.
We wanted to know whether moral opinions get all of their power from the real set of moral beliefs that people hold or whether it’s enough to just call something moral. So if I don’t have any actual moral beliefs behind my love of coffee, could I still strengthen that love by simply calling it a moral preference?
To do this, we devised a method of getting some people to think that their opinions had a moral basis and getting other people to think their opinions came from a place that wasn’t related to morality. We had them simply write out their thoughts about the topic and then told them that a computer program was able to analyze their text patterns. (This isn’t that far-fetched; text mining is a big thing these days in data science.)
Lo and behold, the program’s expert analysis told some people that their thoughts reflected morality more than was typical, but it told other people that their thoughts reflected other things like practicality, equality, or tradition. The feedback people got was entirely random and not at all based on their actual thoughts.
We found that when people were told that their opinions had a moral basis (whether that was true or not), they then said that they would be more likely to sign petitions and vote on the issue, compared to when they were told that their opinions had different bases.
We also did some studies where we gave people persuasive essays to see if this information would change their opinions. Specifically, we crafted essays that argued against recycling, claiming that it would put more trucks on the road and require more resources to process, causing more pollution. (We actually didn’t make these up: most of the arguments in the essay came from the report “Eight Great Myths of Recycling” by Daniel Benjamin.) Once again, people who simply came to perceive a moral basis for their recycling opinions resisted the persuasive arguments–they changed their opinions less than the people who saw their opinions as rooted in practical concerns.
A Moral Conclusion
This research has illustrated the power of “morality” to strengthen our opinions. It doesn’t necessarily say that you should use morality to change other people’s opinions. Instead, it suggests that framing an issue in terms of morality can crystallize or harden people’s current opinions.
People who are pro-Clinton will be more likely to use and defend their opinion when it’s cast in a moral light. People who oppose factory farming will also become more likely to use and defend their opinion if it’s framed in terms of morality.
However, if you wish to change a person’s current opinion, you’d best not lead them to think that their opinion is rooted in their morality.