Social Dominance Theory

LEARNING OBJECTIVES

Identify the different ways that, according to social dominance theory, hierarchies are maintained.

Explain how beliefs in meritocracy, a just world, and economic and social mobility are intertwined and help preserve hierarchies.
Although countless artists, philosophers, and political theorists have romanticized the idea of the egalitarian society, this idea has remained exactly that—a romanticization. All human societies are hierarchical, although some are more so (the ancient Egyptians or Indonesians today) than others (many Native American tribes or the Rojava in the Kurdish enclaves of Syria today). Social dominance theory takes the hierarchical structure of all societies as a given. It both articulates the most common dimensions of inequality and tries to account for how hierarchical societies, which by their very nature are unequal, nevertheless remain stable and endure over very long periods of time (Sidanius, Devereux, & Pratto, 1992; Sidanius & Pratto, 1999).
Social dominance theorists note that societies’ hierarchies tend to be based on age (with adults having more power than children), on gender (with men having more power than women), and on what they call an “arbitrary set.” The latter, as the name implies, takes different forms in different societies. For instance, it may be based on ethnicity (e.g., in Rwanda, the United States, and many South American countries), on religion (e.g., Sunni versus Shia, Catholic versus Protestant, or Sikh versus Hindu), or on race (e.g., in all countries with a history of slavery).
Because tensions can arise whenever there is inequality, a main focus of social dominance theory is specifying the social and psychological structures that evolve to prevent those tensions from erupting into destabilizing conflict. Hierarchies are said to be kept in place through:
individual discrimination, in which individuals in the dominant groups act to preserve their advantage and to keep those in subordinate groups “in their place”;
institutional discrimination, in which laws and norms preserve the hierarchy; and
behavioral asymmetries, in which deference is shown to members of dominant but not subordinate groups and self-fulfilling prophecies undermine the achievements of members of subordinate groups.
Individual discrimination can be found every day in news accounts of the mistreatment of racial, ethnic, sexual, and religious minority groups in every corner of the globe. It is experienced when parents announce the birth of their boys with more evident pride than the birth of their girls (Gonzalez & Koestner, 2005), when people lock their car doors or cross the street to avoid members of stigmatized groups, and when individual law enforcement officers run afoul of their training and treat members of stigmatized groups with an iron fist (more on all of this below). The connection between such events and the concern with hierarchy—particularly with the prospect of losing a privileged status—is reflected in the fact that when White Americans learn that the United States will become a “majority-minority” country (in which Whites will be the largest racial group but not the majority, as they are now), they tend to express more negative attitudes toward Asian, Black, and Hispanic people (Craig & Richeson, 2014a) and come to believe that discrimination against Whites will increase (Craig & Richeson, 2017).
Concerns about the dissolution of existing hierarchy and the prospect of losing a dominant position in it are also reflected in the actions of individuals who subscribe to the “great replacement theory,” which takes its name from a book, Le Grand Remplacement, by the French author Renaud Camus (not to be confused with Albert Camus, the Nobel Prize–winning author of such works as The Stranger and The Plague). Great replacement theory posits that White people throughout Europe and in the United States are being “replaced” by non-White people from around the globe. The idea has received considerable traction among various White supremacy groups in Europe and the United States, including those who marched across the University of Virginia campus in 2017 chanting, “You will not replace us!” and “Jews will not replace us!” Concerns with replacement appear to have inspired the assailant in the 2019 attacks on two mosques that killed 51 people in Christchurch, New Zealand; the individual who killed 23 people in a rampage at a Walmart in El Paso, Texas, the same year; and the gunman who targeted Black people in a Buffalo, New York, supermarket in 2022, killing 10.
SOCIAL DOMINANCE THEORY According to social dominance theory, dominant groups are reluctant to give up their privileged status in society. Here, (A) White nationalists concerned about “replacement” by other races protesting on the University of Virginia campus and (B) members of a right-wing nationalist group demonstrating on the Rue de Rivoli in Paris, France.
Mass murderers like those three are extremists whose vile actions are denounced by nearly everyone. And White supremacists are but a small—though growing—minority in the United States (Gunter & Southern Poverty Law Center’s Intelligence Project, 2020). The fact that they remain a minority shows that individuals in the dominant groups of society differ in the degree to which they’re concerned about losing their status and privilege. This has led to the development of scales designed to assess the extent to which a person is oriented toward the preservation of hierarchy and the status quo (Pratto et al., 1994). Research has shown that people who score higher on this scale of social dominance orientation are more willing to express prejudiced attitudes toward different groups and are more inclined to endorse policies that preserve existing hierarchies (Pratto, Sidanius, & Levin, 2006).
What about all those people (that is, most people in most societies today) who don’t score very high on the scale of social dominance orientation and aren’t especially comfortable with the idea that some people, by dint of luck or birth alone, are granted more status and more material resources than others? According to social dominance theory, even they can remain untroubled by evidence of institutional discrimination if they buy into ideological tenets, or legitimizing myths, that make unequal treatment seem not only perfectly reasonable but highly desirable. That is, some people don’t see observed discrepancies in status and wealth as troubling because they don’t see them as the products of good luck or accidents of birth. The divine right of kings was one such belief that served to preserve the social order for centuries. The authority of the king came from God and therefore could not be questioned. Similarly, many of the assumptions baked into sexism, racism, and ethnocentrism make it easier to accept stark inequalities because they justify why those in the dominant group have so much more than those in subordinate groups do. No less an authority than Walter Lippmann (1922), who coined the term stereotype, noted that people’s stereotypes constitute “the defenses of our position in society.”
Stereotypes about who is and isn’t productive or virtuous govern who is and isn’t thought to be deserving. In the United States, for example, such stereotypes legitimized longer prison sentences for possession of crack cocaine (used more often in Black neighborhoods) than for powder cocaine (used more often in White neighborhoods), the federal government’s refusal to insure mortgages in African American neighborhoods while it subsidized the development of all-White suburbs (a practice the Fair Housing Act of 1968 and the Equal Credit Opportunity Act of 1974 were later enacted to combat), and the confiscation of Indigenous Americans’ lands as part of the “manifest destiny” of U.S. expansion (Eberhardt, 2019; Rothstein, 2017).
Believing in Strict Meritocracy, a Just World, and Economic Mobility
The very word meritocracy connotes that some people merit more than others do. Most of us are okay with that because some people do indeed work harder and are more talented than others. Is anyone upset that Tom Hanks makes more money than they do or that Serena Williams can take more lavish vacations? Of course not. But many have argued that strict meritocracy is something of a myth that fails to acknowledge either the role of luck in everyone’s life or the vast differences in opportunity and privilege that create unequal outcomes (R. H. Frank, 2016; Sandel, 2020).
The belief in strict meritocracy runs into further difficulties when erroneous stereotypes distort assessments of merit. Social dominance theorists note that dominant groups use their position to make it easier for members of their own groups to succeed—and then turn around and cite the failure of subordinate groups to achieve similar levels of success as evidence that members of those groups don’t have what it takes to get ahead.
A related, broader legitimizing myth that social psychologists have studied extensively is the just world hypothesis—the belief that people get what they deserve in life and deserve what they get (Lambert, Burroughs, & Nguyen, 1999; M. J. Lerner, 1980; Lipkus, Dalbert, & Siegler, 1996; Nudelman & Shiloh, 2011). Victims of rape, for example, are often viewed as responsible for their fate (Abrams et al., 2003; S. T. Bell, Kuriloff, & Lottes, 1994), as are victims of domestic abuse (Summers & Feldman, 1984). The just world hypothesis reaches its zenith in the claim that if no defect in a victim’s character or past actions can be found, the tragic affliction must be due to some flaw or transgression in a “past life” (Woolger, 1988). Studies have shown that people are broadly inclined to “derogate the victim”—to disparage the character of those who suffer unfortunate experiences that are completely beyond their personal control (C. Jones & Aronson, 1973; M. J. Lerner & Miller, 1978; M. J. Lerner & Simmons, 1966).
JUST WORLD HYPOTHESIS In part because there are so many well-known “rags to riches” success stories in the United States, such as the Apple Computer Company getting its start in the garage of Steve Jobs’s childhood home, people tend to overestimate the amount of economic mobility there is in the country.
The belief in a just world stems in part from the desire to be reassured that bad things won’t happen to us. The twists and turns of life can be unsettling: A superbly qualified job candidate may be passed over in favor of a mediocre applicant with the right connections; a selfless Good Samaritan may be stricken with cancer and experience an agonizing death. Such events cause anxiety, so we’re motivated to think they can’t happen to us. We do so by attributing terrible outcomes to something about the people who suffer them rather than to fate or chance (Burger, 1981; Walster, 1966). By thinking that people “get what they deserve” or that “what goes around comes around,” we reassure ourselves that nothing bad will happen to us if we are the right kind of person living the right kind of life.
A corollary of the belief in meritocracy is the idea that “the cream always rises to the top” or that those with sufficient talent and drive will succeed regardless of where they started in life. It’s easy to see why people would have such a conviction because notable examples are all around us—Abraham Lincoln being born in a log cabin and becoming president of the United States; Barack Obama’s equally spectacular rise to the most powerful office in the world; the world’s most valuable publicly traded company, Apple Inc., getting its start in Steve Jobs’s parents’ garage. The problem is that spectacular examples like these can lead us to believe that there is more mobility than there actually is (and thus that society is more meritocratic than it actually is).
Because a belief in mobility plays such an important role in the idea of the United States as a “land of opportunity,” one study asked a sample of American respondents to rank 15 countries in terms of economic mobility—Argentina, Brazil, Chile, China, France, Germany, Italy, New Zealand, Pakistan, Peru, Singapore, Spain, Switzerland, the United Kingdom, and the United States. The United States ranks eighth among these countries in actual economic mobility, so there was roughly as much room for respondents to overestimate its standing as to underestimate it. Nevertheless, the “land of opportunity” ideal had a noticeable impact on respondents’ assessments: They ranked the United States third among these countries, significantly higher than its actual ranking. Interestingly, conservatives on average ranked it highest among all 15 countries, whereas liberals ranked it fourth (Davidai & Gilovich, 2018). Other studies have shown that people overestimate economic mobility in the United States not just relative to other countries but also relative to the actual likelihood of someone born on the lower rungs of the economic ladder rising to the top rung (Davidai & Gilovich, 2015; Kraus, 2015; Kraus & Tan, 2015).
Note that this is exactly what social dominance theorists would expect: Beliefs in meritocracy, in a just world, and in pronounced mobility make it easier for all of us to accept the inequalities we see around us, and they remove any incentive to challenge the social, cultural, and political systems that produce them. Importantly, these beliefs need not be consciously held to make us blind or indifferent to the pervasive inequalities in the world.
Justifying Status Differences Through Dehumanization
To become comfortable with pronounced group differences in wealth, status, and opportunity is to believe that those at the bottom are not as deserving as those at the top are. Embracing meritocracy and believing in a just world enable that stance. An even more powerful way of becoming comfortable with inequality and untroubled by the condition of disadvantaged groups is to dehumanize them. Dehumanization is the conviction, conscious or not, that someone or some group of people lacks the complex emotions and capacity for agency that are characteristic of humanity. Nonhumans are thought to experience simple emotions like anger or happiness but not complex emotions like shame or nostalgia. And other organisms are granted some degree of cognition but are not imagined to be capable of detailed planning or complex thought.
Both history and psychological research make it clear that dehumanization is common and tragic (Kteily & Bruneau, 2017). Sustained efforts to deny the humanity of outgroups have preceded some of the worst episodes of genocidal violence in modern times. The Nazis likened Jews to vermin before and during World War II (6 million killed); the Khmer Rouge depicted their enemies as worms (between 1.5 and 3 million killed); the Hutus of Rwanda referred to the Tutsis as cockroaches (over half a million killed).
DEHUMANIZATION Many people dehumanize those experiencing homelessness, such as those in this encampment on a city street in Los Angeles, California.
In a telling neuroimaging study of dehumanization, participants were shown images of members of a number of different groups while in an fMRI machine. When they were exposed to individuals from most groups, their brain scans revealed activation in an area associated with social cognition—the medial prefrontal cortex. But activation in that area was notably absent when participants were shown images of objects or of people from certain stigmatized outgroups, such as people who are homeless or addicted to drugs (L. T. Harris & Fiske, 2006). As shocking as it may seem, many people apparently don’t think of individuals from such groups as being fully human.
When are people inclined to dehumanize others? One way to get a handle on that question is to note that dehumanization is essentially the opposite of anthropomorphism, or the attribution of human qualities to nonhuman entities. If you’ve played or worked with a robot, it has probably seemed human to you at times, and you’ve probably thought of the virtual assistant on your smartphone—Alexa, Siri, or whom(!)ever—in human terms sometimes as well. Adam Waytz, Nick Epley, and John Cacioppo (2010) have argued that we tend to anthropomorphize things to the extent that they resemble human beings: We’re more likely to anthropomorphize a robot than a toaster, more likely to anthropomorphize a mammal than an arachnid.
Waytz and colleagues also noted that we’re more likely to anthropomorphize when we’re feeling in need of social connection. Anyone who’s seen the film Cast Away immediately grasps why the character played by Tom Hanks, all alone on a remote island, gives his volleyball the name “Wilson” and treats it like a person. Finally, we anthropomorphize when we’re feeling generally helpless or ineffective because doing so can lend a sense of order and predictability to the world around us. Random acts and outcomes can seem more predictable, or potentially more predictable at least, if we imagine that there is someone responsible for them. As Waytz and colleagues noted, the World Meteorological Organization maintains that assigning human names to storms, hurricanes, and typhoons facilitates the kind of effective communication needed to prepare the public for these often life-threatening events.
Turning this around, we should expect people to be more likely to dehumanize those they think of as different from themselves. Indeed, immigrants are particularly vulnerable to dehumanization (Kteily & Bruneau, 2017), and the historical record provides abundant evidence of people dehumanizing other racial, ethnic, and religious groups—the conquest of the Americas, the colonization of Africa, the Armenian genocide, the enduring discrimination against the Roma throughout Europe, just to name a few. We should also expect people to dehumanize others when they are feeling strongly connected to their own ingroup and see it as distinct from various outgroups. Here, too, history teaches us that strong feelings of ingroup loyalty can foster an atmosphere that gives rise to brutal dehumanization, such as the strong ethnic connection among Rwanda’s Hutus that fed their genocide of their Tutsi neighbors. Additional support for this idea comes from a study in which participants who were led to feel socially connected (by thinking about a Thanksgiving dinner and the people there they felt closest to) were more likely to deny distinctly human mental states to others and to endorse the torture of those involved in the 9/11 terrorist attacks in the United States (Waytz & Epley, 2012). Finally, we should expect people to be more likely to dehumanize others when they see the world as chaotic and threatening. This also turns out to be the case: The historical record testifies to the tight connection between dehumanization and periods of turmoil and conflict (Kteily et al., 2015).
What lessons can we learn from the study of dehumanization? Most important, we should be alarmed whenever we hear dehumanizing language, especially from powerful leaders with committed constituencies during times of uncertainty and strife. When Zsolt Bayer, cofounder of Hungary’s right-wing Fidesz party, says that “These Roma are animals . . . inarticulate sounds pour out of their bestial skulls. . . . [They] shouldn’t be allowed to exist,” we might be tempted to dismiss his comments as “just words” (Brown, 2013). It’s tempting to react that way as well when Donald Trump says of migrants, “These aren’t people, they are animals” (Agence France-Presse, 2018). But words matter. The use of homophobic epithets, for example, encourages the dehumanizing of LGBTQ people (Fasoli et al., 2015). We therefore dismiss the impact of such language at our moral peril because that sort of dehumanizing vocabulary has been a prelude to some of history’s worst episodes of outgroup-directed violence.
And we know better. As easy as it may be to mindlessly think of others as less than fully human, there’s no denying that we are part of the broad human family. When we hear dehumanizing language, it should sound an alarm that someone is trying to manipulate us by appealing to our baser impulses.
LOOKING BACK
Social dominance theory is based on the observation that all societies are at least partly hierarchical and it tries to explain how those hierarchies are maintained. Especially common and important are individual and institutional discrimination. What social dominance theorists call legitimizing myths—such as the belief in a strict meritocracy, a just world, and pronounced mobility—also help preserve existing hierarchies by making people less troubled by any inequalities they experience or observe. Dehumanizing marginalized groups also makes it easier to accept the hierarchical system that contributes to their marginalization.
social dominance theory: A theory about the hierarchical nature of societies, how they remain stable, and how more powerful or privileged groups in a society maintain their advantage.

social dominance orientation: A personality trait that corresponds to a person’s support for socioeconomic hierarchy and the belief that different groups should occupy higher and lower positions in society.