Group Polarization

In this video I describe how group opinions and decisions can differ from those of the individuals who make up the group. Group polarization refers to the tendency of groups to adopt positions that are more extreme than the original views of their members. Groups also tend to be more tolerant of risk, a phenomenon known as risky shift. Polarization can occur even when groups are exposed to inconclusive evidence, as members strengthen their views by focusing on supporting evidence and dismissing contradictory evidence. Groups may also make poor decisions as a result of groupthink, a term coined by Irving Janis to refer to characteristics of some groups that increase conformity, encourage a sense of invulnerability, and disparage opposition.

Don’t forget to subscribe to the channel to see future videos! Have questions or topics you’d like to see covered in a future video? Let me know by commenting or sending me an email!

Check out my full psychology guide: Master Introductory Psychology: http://amzn.to/2eTqm5s

Video Transcript

Hi, I’m Michael Corayer and this is Psych Exam Review. In previous videos we’ve seen how groups can influence individual behavior through indirect pressures like conformity. We’ve seen how groups can form easily through the minimal group paradigm; this idea that even trivial classifications can be enough to make people feel like part of an in-group. We’ve seen group conflict, the formation of stereotypes, prejudice, and discrimination, and we’ve also seen techniques that can try to reduce group conflict. But what we haven’t considered is that groups differ from the individuals that make up those groups, and group opinions and group decisions are not necessarily the same as the accumulated opinions or accumulated decisions of the individuals in those groups.

So this brings us to consider group dynamics, and one of the first things we find is that groups have a tendency to become more polarized. They adopt more extreme positions than the individuals in the group originally had, so groups tend to move away from the average of their members. One way we could think about this happening is to imagine that we’re evaluating a teacher for a class. One way we could look at this evaluation would be to have all the individuals fill out an evaluation and then just look at the average of those individual scores. And let’s imagine a class where we did that and all of the students have a slightly positive view of the class.

But we can also imagine what would happen if we put all of these students together into a group to make a decision about the class. Let’s say they had a discussion. In this case we’d probably see that, because all these students have a slightly positive view of the class, many of them will express slightly positive views of different aspects of the class. So one student might say “well, the class was okay, and the homework wasn’t excessive, it was a reasonable amount” or someone might say “oh, the lecture wasn’t too boring, sometimes it was even mildly entertaining” or someone else might say “the tests were hard but they were graded fairly, and that’s a good thing”. None of these reviews are outstanding; none of the students are saying the class totally changed their life for the better. But everybody has a slightly positive view, and as a result of hearing all of these slightly positive views, what might happen is that the scores of the group as a whole get pushed up. This will push the evaluation higher.

Now, of course this could also happen in the opposite direction. If we had a bunch of students with slightly negative views of the class and they got together to have a discussion and they all started pointing out the things they didn’t like, we’d probably find that the group average would end up becoming more extreme and they would have a more negative view of the class than each of the individuals had when they walked into the room.

Now we also see that groups have a tendency to take larger risks than the individuals in those groups would be willing to tolerate. This is a type of group polarization known as “risky shift”. So if you were to ask individuals, for instance, what probability of failure they would be willing to accept for some new venture, and then compare that to the group’s decision about the acceptable probability of failure for that same venture, what you generally find is that the group is willing to tolerate more risk than the individuals.

Now you might say, well, these cases here are too simple, right? We’re looking at a group where all the students have a slightly positive view of the class or all the students have a slightly negative view. What happens when we have mixed reviews, mixed results, inconclusive evidence? Does this move everybody to the center? Will the group become less extreme? Do the students who had slightly positive views hear some negative views and sort of move to the middle, with the same thing happening for the students with slightly negative views?

Well, this doesn’t seem to be what happens. This brings us to a study by Charles Lord, Lee Ross, and Mark Lepper. What they did was look at two groups that were already polarized: those who were strongly opposed to capital punishment and those who were strongly in support of it. The researchers then gave these two groups the same evidence, but it was mixed evidence; some of it supported capital punishment and some of it went against it. It showed things like murder rates increasing or decreasing in different states depending on whether they adopted or banned capital punishment. Rather than moving these two groups to the center, as you might expect, where they might say “well, it looks like there’s evidence on both sides, I guess a more reasonable position would be closer to the middle”, this mix of evidence actually strengthened the opinions of both groups; it further polarized them.

So how is this happening? What Lord, Ross, and Lepper proposed is that the people in the groups look only at the supporting evidence. This relates to the idea of confirmation bias. Let’s imagine there are six studies that we’re looking at, and three of them support our opinion and three don’t. We tend to focus on those three that support us and say “look, see, I told you I was correct. Look at these three great pieces of evidence here” and ignore the three that are contradictory.

And then the person with the opposite viewpoint is going to do the same thing but with the opposite pieces of evidence. They’re going to look at the three studies that I discarded and say “look at these studies supporting my opinion” and they’re not going to pay as much attention to the contradictory information. And so each side starts to believe that the evidence on their side is better conducted and more convincing, and that they can be dismissive of the contradictory information. This can help us understand why we see this sort of thing happening in politics, where the same event can trigger polarizing reactions on both sides, rather than moving people to the center.

Now we also see that groups occasionally make very bad decisions; worse decisions than the individuals in those groups might have made. We can find catastrophic errors of group decision-making in things like the failure to anticipate the attack on Pearl Harbor, the botched Bay of Pigs invasion, or the failed Challenger launch. How do these terrible decisions happen? Now, these are cases where it’s not just hindsight bias; it’s not just looking back and saying “well, we know now that it was a bad decision”. Of course hindsight is 20/20, and if only we knew then what we know now. But these aren’t really examples of simple hindsight bias, because these are situations where there were early warnings and criticisms of the approaches that were implemented, and yet these warnings and criticisms were largely ignored. So why would the groups ignore these possible criticisms or warnings?

This brings us to the idea of “groupthink”. This was proposed by Irving Janis, who suggested that there were certain characteristics of groups that made them likely to make these types of catastrophic errors; this is what he called groupthink. He came up with the term based on the Newspeak and doublethink of George Orwell’s 1984.

Some of the factors that Janis proposed contributed to groupthink were things like having a strong and charismatic leader, someone that the other group members wanted to impress or wanted to be on the good side of. A leader like John F. Kennedy during the Bay of Pigs invasion might have been a strong, charismatic leader whom other people were less inclined to disagree with or to point out potential flaws to. Janis also thought we have a tendency towards groupthink when the group members believe in the intelligence of the group; they believe that it’s a very talented group. So if we have a room full of scientists at NASA, then these are some of the very smartest, best, brightest people, and so they couldn’t possibly be getting things wrong. And lastly, there’s the idea that when there is dissent, when contradictory information is brought up, it’s minimized, disregarded, or even suppressed. Janis noted that some members of groups play an active role in suppressing contradictory information; he sometimes referred to these as “mind guards”, people who try to protect the leader of a group from hearing contradictory information. He thought one such mind guard could be found in Robert Kennedy, who may have insulated JFK from contradictory information.

The result of these characteristics is that groups experience greater conformity, they have an illusion of invulnerability, the sense that they can’t possibly make mistakes, and they tend to develop stereotyped views of outsiders or of the opposition. These factors can all contribute to poor decision-making.

I hope you found this helpful. If so, please like the video and subscribe to the channel for more. Thanks for watching!
