ETHICAL ISSUES
Purposes of Personality Testing
According to one wide-ranging survey, the validity of well-developed psychological tests is comparable to that of the most widely used medical tests (Meyer et al., 2001). Still, a further question must be considered: How will test scores be used? The answer has practical and ethical implications.
The most obvious uses for personality tests are those of the professional personality testers—the ones who set up the booths at APA conventions, mentioned in Chapter 2—and their customers. Customers are typically organizations such as schools, clinics, corporations, or government agencies that wish to know something about the people they encounter. Sometimes this information is desired so that, regardless of the score obtained, the person who is measured can be helped. For example, schools frequently use tests to measure vocational interests to help their students choose careers. A clinician might administer a test to get an indication of how serious a client’s problem is or to suggest a therapeutic direction.
Sometimes, testing is for the benefit of the tester, not necessarily the person being tested. An employer may seek to assess an individual’s “integrity” to find out whether the person is trustworthy enough to be hired (or even to be retained), or may test to find out about other personality traits deemed relevant to future job performance. The Central Intelligence Agency (CIA) has long used personality testing when selecting its agents (Waller, 1993).
Reasonable arguments can be made for or against any of these uses. By telling people what kind of occupational group they most resemble, vocational-interest tests provide potentially valuable information to individuals who may not know what they want to do (D. B. Schmidt et al., 1998). On the other hand, the use of these tests rests on the implicit theory that any given occupation should continue to be populated by individuals like those already in it. For example, if your response profile resembles those obtained from successful mechanics or jet pilots, then perhaps you should consider being a mechanic or a jet pilot. Although this approach seems reasonable, it also could keep occupational fields from evolving and prevent certain individuals (such as women or members of underrepresented groups) from joining fields from which they traditionally have been excluded. For example, an ordinarily socialized American woman may have outlooks or responses that are very different from those of the typical garage mechanic or jet pilot. Does this mean that women should never become mechanics or pilots?
A more general class of objections is aimed at the wide array of personality tests used by many large organizations, including the CIA, major automobile manufacturers, phone companies, and the military. According to one critic, almost any kind of testing can be objected to on two grounds. First, tests are unfair mechanisms through which institutions can control individuals by rewarding those with the institutionally determined “correct” traits (such as high conscientiousness) and punishing those with the “wrong” traits (such as low conscientiousness). Second, perhaps traits such as conscientiousness or even intelligence do not matter until and unless they are tested, and in that sense they are invented or “constructed” by the tests themselves (Hanson, 1993). Underlying these two objections seems to be a more general sense, which I think many people share, that there is something undignified or even degrading about submitting oneself to a test and having one’s personality described by a set of scores.
All of these objections make sense. Personality tests—along with other kinds of tests such as those measuring intelligence and honesty—and even drug tests do function as a part of society’s mechanism for controlling people by rewarding the “right” kind (those who are intelligent, honest, and don’t do drugs) and punishing the “wrong” kind. During the 1930s and 1940s some employers used personality tests to try to screen out job applicants inclined to be pro-union (Zickar, 2001). Does this seem ethical to you?
Still, criticisms that personality testing is inherently undignified or unethical appear, on closer consideration, rather naïve. These criticisms object to determining the degree to which somebody is conscientious, or intelligent, or sociable, and then using that determination as the basis of an important decision (such as employment). But if you accept that an employer is not obligated to hire randomly anybody who walks through the door, and that the employer will try to use good sense in deciding who would be the best person to hire (if you were an employer, wouldn’t you do that?), then you also must accept that applicants’ traits such as conscientiousness, intelligence, and sociability are going to be judged. The only real question is how. One common alternative is for the employer to talk with the prospective employee and try to gauge conscientiousness and other traits from the candidate’s shoeshine, haircut, or some other such clue (Highhouse, 2008). And while tests might be biased against certain groups (a complex and controversial issue), it is definitely true that many people are biased against certain groups. So, do you trust the test or the person more? Neither is perfect; that’s for sure.
Protection of Research Participants
DECEPTION
Whenever research involves humans, psychologists (or other researchers, such as medical scientists) need to be concerned about the consequences. Will the research harm the participants? Medicine has a long and checkered history of human research, including the infamous Tuskegee study, in which Black men with syphilis were, without their knowledge or consent, deliberately left untreated for long periods of time. Astonishingly, this study, which began in 1932, was not stopped until 1972, after the Associated Press published an article about it (Centers for Disease Control and Prevention, 2021). Partially as a result of outrage over this and other instances, medical research in the United States must now undergo rigorous ethical review. In psychology, the well-known studies of obedience by Stanley Milgram (1975), in which participants were ordered to give painful shocks to a screaming victim (who was really an unharmed research assistant), would probably not be allowed today by institutional review boards (IRBs) because of the emotional upset experienced by many of the participants.
[Figure: A comic shows two men in an office. One is sitting behind a desk holding a document; the other is sitting in front of the desk. The caption reads, “Remember when I said I was going to be honest with you, Jeff? That was a big, fat lie.”]
IRBs are also wary of studies that deceive or lie to participants. The Tuskegee victims were not told the true purpose or nature of the study; the cover story was that they were being treated for “bad blood.” While psychological research does not generally involve the same kinds of matters of life and death, I personally still find it somewhat disconcerting that psychologists, to this day, frequently tell their research participants something that is not true.20 The purpose of such deception usually is to make the research realistic. A participant might be falsely told that a test is a valid measure of IQ or personality, for example. Then the experimenter can assess the participant’s reaction to a poor score. Or a participant might be told that another person was described by a “trained psychologist” as both “friendly” and “unsociable” to see how the participant resolves the inconsistency. The most common deceptive practice is probably the cover story, in which participants are misinformed about the topic of the study. For example, they might be told the study is examining perceptual acuity, when the actual purpose is to see how long participants are willing to persist at a boring task.
Even today, this kind of deception is allowed by the principles of the American Psychological Association, and by most IRBs, though extra justification for why it’s necessary is frequently required. Still, the use of deception in research has a nasty history, and the ethical issues have been controversial for a long time (see, e.g., Baumrind, 1985; S. S. Smith & Richardson, 1983) and are not completely settled even now. Fortunately, I suppose, the use of deception is rare in personality research, much of which involves correlating personality measures with behavioral and life outcomes; deception is much more common in the neighboring field of experimental social psychology but seems less common than it used to be even there.
PRIVACY
While deceiving participants may be less of an issue than it was in the past, another concern is becoming more important: privacy. Massive amounts of data are being gathered about our online behavior, whether we know it or not, and their use in psychological research may not even be the most important reason to worry. But in the enthusiasm over the usefulness of B-data gathered by smartphones and social media, psychologists need to be mindful of the ethical issues involved.
Similarly, the experience-sampling methods that gather real-world B-data, described in Chapter 2, offer the possibility of violating the privacy of the people who participate in the studies, or of others who just happen to be nearby. For example, if a participant carries a listening device such as the electronically activated recorder (EAR) or wears a lapel camera all day, the recorded sounds and images not only reveal the participant’s own behavior but also provide information about what other people in the vicinity said and did. The ethical and legal complications are numerous, and some guidelines for using these methods have been proposed (Robbins, 2017). These guidelines include never publishing verbatim quotes that could identify the participant or any other individual, getting consent from everybody who ends up being recorded (not just the initial participant), and removing all identifying information, such as names, from data files as soon as possible.
Uses of Psychological Research
A different kind of ethical concern is that psychological research, however it is conducted, might be used for harmful purposes. Just as physicists who develop atomic bombs should worry about what their inventions can do, so too should psychologists be aware of the consequences of what their work might enable.
For example, one field of psychology—namely, behaviorism—has long aimed to develop a technology to control behavior (see Chapter 13). The technology is far from fully developed but still raises questions about who decides what behaviors to create and whose behavior should be controlled. The main historic figure in behaviorism, B. F. Skinner, wrote extensively about these issues (Skinner, 1948, 1971). Even more disturbing, and more recently, the largest organization of psychologists, the American Psychological Association, was accused of arranging for behavioral scientists to help the CIA implement a program of interrogation that included torture (Pope, 2019; Risen, 2015).21
Yet another fraught issue arises when psychologists choose to study ethnic, racial, or sex differences. Putting aside whatever purely scientific merits the work might have, it raises fundamental questions about whether the findings are likely to do more harm than good. The arguments in favor of exploring these issues are that science should study everything, that knowing the characteristics of a group might help in tailoring programs specifically to the needs of its members, and that recognizing differences between groups can be a powerful argument in favor of diversity (Eagly & Revelle, 2022). The arguments against studying such topics are that the findings are bound to be misused by racists and sexists, and thereby can become tools of oppression themselves, and that knowledge of group characteristics is not really very useful for tailoring programs to individual needs.
When the question is whether or not to study a given topic, psychologists, like other scientists, almost always come down on the side of “yes.” After all, ignorance never got anybody very far. Still, there are an infinite number of unanswered questions out there that one could usefully investigate. For a study on any topic, it is worth asking: (1) Why is this research being done? (2) How will the results of this research be used?
Representation
The discussion of generalizability, earlier in this chapter, observed that the representation of various populations among participants is far from ideal. Most psychologists, having come to understand that their research samples are WEIRD (Western, educated, industrialized, rich, and democratic), in the sense of mostly coming from prosperous countries in North America and Europe, are increasingly worried about the implications of this fact. Efforts toward remedying the situation include the Psychological Science Accelerator, which is a global network of more than 1,000 researchers in 84 countries (see https://psysciacc.org), and the Many Labs Africa Study, which seeks to replicate research findings from African countries in other places around the world (Adetula et al., 2022).
Psychologists themselves are even less representative of the general population than their participants, being disproportionately (even to this day) White men from economically and educationally advantaged backgrounds. Maybe you have to be hardworking and smart to earn a PhD in psychology, but you definitely also need luck in your access to financial resources, quality education, and mentors to guide your way. The limited diversity in researchers inevitably leads to a limited diversity in research, because what a scientist chooses to study stems from the scientist’s interests, background, experience, and social context. A cliché among psychologists is that “all research is me-search,” meaning that personal issues may lie behind what one chooses to study. What your me-search looks like will depend on who you are.
This is why efforts toward DEI (the widely used term for diversity, equity, and inclusion) are more than a matter of fairness. These efforts are important for research quality because, as was discussed in Chapter 2, research is about exploring the unknown. I don’t know what my research would conclude or seek to discover if I were a different person from a different background, and neither does anybody else. We all have unique blind spots. It’s important to have diverse people as researchers, so that their different viewpoints accumulate to eliminate as many blind spots as possible. Many universities and research organizations are striving toward diversity, but the task is and will continue to be slow going because the many gaps in representation have deep, historical roots in educational inequality, economic disadvantage, and systemic racism.
Honesty and Open Science
Honesty is an ethical issue common to all research. The past decade has seen scandals in physics, medicine, and psychology in which researchers fabricated their data; the most spectacular case in psychology involved the Dutch researcher Diederik Stapel, described earlier. Lies cause difficulty in all sectors of life, but they are particularly worrisome in research because science is based on truth and trust. Scientific lies, when they happen, undermine the very foundation of the field. If I report about some data that I have found, you might disagree with my interpretation—that is fine, and in science this happens all the time. Working through disagreements about what data mean is an essential scientific activity. But if you cannot be sure that I really even found the data I report, then there is nothing to talk about. Even scientists who vehemently disagree on fundamental issues generally take each other’s honesty for granted (contrast this with the situation in politics). If they cannot, then science stops dead in its tracks.
In scientific research, complete honesty is more than simply not faking one’s data. A lesson that emerged from the controversies about replication, discussed earlier, is that many problems arise when the reporting of data is incomplete, as opposed to false. For example, it has been a not uncommon practice for researchers to simply not report studies that didn’t “work,” that is, that did not obtain the expected or hoped-for result. And, because of publication bias, few journals are willing to publish negative results in any case. The study failed, the reasoning goes, which means something must have gone wrong. So why would anybody want to hear about it? While this reasoning makes a certain amount of sense, it is also dangerous, because reporting only the studies that work can paint a misleading picture overall. If 50 attempts to find precognition fail, for example, and one succeeds, then reporting only the single success could make it seem as though people can see into the future!
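The arithmetic behind this danger is easy to demonstrate. The short simulation below is an illustration added here, not part of any study discussed in the text: it runs 50 hypothetical “precognition” experiments in which the true effect is zero, then counts how many reach the conventional p < .05 threshold by chance alone. With a 5 percent false-positive rate, a few spurious “successes” are expected even though nothing real is going on.

```python
# Hypothetical illustration: why reporting only "successful" studies misleads.
# We simulate 50 experiments testing for precognition where the TRUE effect
# is zero, then count how many are nonetheless "significant" at p < .05.
import math
import random

random.seed(1)  # fixed seed so the simulation is reproducible

def one_experiment(n_trials=100):
    """One 'participant' guesses n_trials coin flips; the true hit rate is 0.5."""
    hits = sum(random.random() < 0.5 for _ in range(n_trials))
    # Two-sided z-test for a proportion against 0.5 (normal approximation)
    z = (hits - n_trials * 0.5) / math.sqrt(n_trials * 0.25)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p

results = [one_experiment() for _ in range(50)]
significant = sum(p < 0.05 for p in results)
print(f"{significant} of 50 null experiments were 'significant' at p < .05")
```

If only those few chance “successes” were published, a reader of the literature would have no way to know about the dozens of failures that put them in context.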
[Figure: A cartoon shows two figures in a room with shelves on one side and a counter and sink on the other. One figure, smiling broadly, sweeps papers labeled “data” under the rug; the other, holding a file, points at the papers.]
A related problem arises when a researcher does not report results concerning all the experimental conditions, variables, or methods in a study. Again, the not unreasonable tendency is to report only the ones that seem most meaningful and omit aspects of the study that seem uninformative or confusing. In a more subtle kind of publication bias, reviewers and editors of journals might even encourage authors to focus their reports only on the most “interesting” analyses. But, again, a misleading picture can emerge if a reader of the research does not know which methods were tried, or which variables were measured, that did not yield meaningful results. In short, there is so much flexibility in the ways a typical psychology study can be analyzed that it’s much too easy for researchers to inadvertently “p-hack,” which, as mentioned earlier, means that they keep analyzing their data in different ways until they get the statistically significant result that they need (Simmons et al., 2011).
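To see how quickly this flexibility inflates false positives, consider a hypothetical researcher who can test 20 truly null variables and report whichever comes out “significant.” The sketch below (an illustration added here, not from the text) relies on a standard statistical fact: under the null hypothesis, a p-value is uniformly distributed between 0 and 1, so each test has a 5 percent chance of a false positive, and at least one of 20 independent tests “hits” about 1 − 0.95^20 ≈ 64 percent of the time.

```python
# Hypothetical illustration of p-hacking: under the null hypothesis each
# p-value is uniform on [0, 1], so each of 20 independent tests has a 5%
# chance of landing below .05 purely by accident.
import random

random.seed(2)  # fixed seed so the simulation is reproducible

def study_finds_something(n_variables=20, alpha=0.05):
    """Return True if at least one of n_variables null tests is 'significant'."""
    return any(random.random() < alpha for _ in range(n_variables))

n_studies = 10_000
rate = sum(study_finds_something() for _ in range(n_studies)) / n_studies
print(f"Proportion of null studies with at least one 'significant' result: {rate:.2f}")
# Analytically: 1 - 0.95**20 ≈ 0.64
```

Full reporting of every variable and analysis that was run removes the ambiguity, because readers can then see how many chances the researcher had to find something by accident.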
The emerging remedy for these problems is a movement toward what is becoming known as open science, a set of practices intended to move research closer to the ideals on which science was founded (M. Hong & Moran, 2019). These practices include fully describing all aspects of all studies, reporting studies that failed as well as those that succeeded, and freely sharing data with other scientists. An institute called the Center for Open Science has become the headquarters for many efforts in this direction, offering Internet resources for sharing information. At the same time, major scientific organizations such as the American Psychological Association and the Association for Psychological Science are establishing new guidelines for full disclosure of data and analyses (Appelbaum et al., 2018), and a rapidly growing organization, the Society for the Improvement of Psychological Science (SIPS), is devoted exclusively to promoting these goals.
Glossary
- open science
- A set of emerging principles intended to improve the transparency of scientific research by encouraging full reporting of all methods and variables used in a study, reporting of studies that failed as well as those that succeeded, and sharing of data among scientists.
Endnotes
- This is not the same as simply withholding information, as in a double-blind drug trial, in which neither the patient nor the physician knows whether the drug or placebo is being administered. Deception involves knowingly telling a lie.
- Your textbook author resigned his longtime membership in APA over this issue.