NINETEEN

“HELP ME UNDERSTAND . . .”

When Your “They Say” Is a Bot

A COLLEAGUE OF OURS recently asked us how generative AI tools like ChatGPT can be used with this textbook—how, that is, they can help students make the signature move of the academic world: entering into conversation or debate with what “they say.” It was a good question, and we didn’t have an immediate answer. After giving it some thought, however, and working more closely with these new technologies, we realized that an answer was needed. The result is this chapter. While these supersmart chatbots are sometimes thought of as conversational search engines, we show how, if used responsibly, they can help you make the kinds of analytic, argumentative moves that this book addresses. In this chapter, therefore, we provide model prompts, or templates, for moving chatbots in this “they say / I say” direction, as in:

  • I am writing an essay in which I want to argue __________ and could use some help thinking through my argument. Can you help me identify who says or might say otherwise—what views my own argument might be seen as a response to?

But before saying more about using generative AI tools in the service of this kind of academic critical thinking, we need to address some basic points about how to use them responsibly.

Most important, your instructor likely has a policy on the use of chatbots, so be sure to read your course syllabus carefully and check with your instructor if you’re in any doubt. And don’t be surprised if your instructor asks you not only to follow citation guidelines but also to submit a transcript or record of any interactions you had with generative AI in writing your paper so that your work with the technology can be assessed as part of the final written product. You may also be asked to do more in-class writing, in part to give your instructor a sense of your unique style and concerns as a writer but also to provide a benchmark against which any take-home writing assignments can be compared. Along similar lines, you may also be asked to draw more on personal experiences in your writing and in class discussions, which can help reassure readers that you’re the one who wrote your paper and not some distant machine technology.

See Chapter 7, “In My Experience.”

reveal, don’t conceal

It’s hard to explain the sheer excitement of using these new generative AI tools to anyone who hasn’t yet used them. These programs rely on Large Language Models (LLMs)—predictive engines—that draw on gigantic storehouses of data and are often described as having learned from these datasets how to respond to human prompts and questions in literate, grammatical (even polite!) language. When we first tried our hand at these chatbots, we kept saying things like “Wow” and “Look at this! Did you know that these bots can __________?” Not only do chatbots draw on enormous swaths of data and produce coherent, logical writing, but they do all this in a matter of mere seconds, answering the prompts you give them in real time and thus creating the illusion of real, human-to-human conversation. Furthering this illusion, these advanced technologies have the capacity to remember what you said in earlier parts of the conversation, meaning that you can develop a powerful train of thought with them over time and keep steering them in the direction you want.
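To get a concrete feel for what “predictive engine” means, here is a minimal, purely illustrative sketch in Python (a toy of our own devising, not how any real chatbot works) that “predicts” the next word by counting which word most often follows which in a small sample text. Real LLMs use neural networks trained on vastly more data, but the underlying task of predicting what comes next is the same.

```python
from collections import Counter, defaultdict

# Toy "predictive engine": tally which word follows which in a tiny
# sample, then predict the most frequent follower. This is our own
# illustration, not an actual LLM, which predicts with a neural
# network trained on enormous datasets rather than simple counts.
sample = "they say the sky is blue and they say the grass is green"
words = sample.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most common word to follow `word` in the sample."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("they"))  # prints "say"
print(predict_next("say"))   # prints "the"
```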

Is it any wonder, then, that worldwide debates are raging about whether and how generative AI should be regulated—including in the domain of education? After all, as you probably already know, it can be very tempting for students to turn in an AI-generated text as their own, to simply change a word or two and put their name on it. But the central message we want to offer on this topic is: don’t. It’s no more acceptable to submit the work of a bot as your own than it is to submit the work of another person as your own. Ultimately, plagiarizing is a way not only of cheating the academic system but also of cheating yourself, since it denies you the chance to develop the independent intellectual skills that you go to college to learn.

That having been said, there may be occasions when you find chatbots helpful in your writing process—if, that is, your instructor allows it, and if, as our section heading suggests, you reveal rather than conceal that you are doing so, giving credit where credit is due by following documentation guidelines, which we summarize in an appendix at the back of this book (“Citing What ‘They Say’”).

A good rule of thumb, then, is to be transparent in your use of generative AI, citing it as you would any other source or collaborator, as in:

  • When asked why __________, ChatGPT responded, “__________.”

In addition, to show that you played a significant, active role in directing or prompting the bot, you might write:

  • Working collaboratively with Bard over a series of twenty prompts (see this link to the transcript of these exchanges), the bot and I came up with the following interpretation: “__________.”
  • After a series of prompts (see the chat history in my Works Cited page), Bing was able to explain the concept clearly: “__________.”

If you quote a chatbot’s answers, you may want to consult not only our appendix on citation but also Chapter 3, “The Art of Quoting.” And though it perhaps goes without saying, you may also want to avoid sharing significant quantities of original writing with today’s chatbots, which are owned by private corporations that are free to do almost whatever they want with the information they collect on us.

fact or fiction?

In addition to exercising caution when uploading your original writing to any AI tool, you should be cautious about using chatbots as sources you quote, because they are prone to what are commonly called “hallucinations,” in which they fabricate facts. Anyone who has ever worked with these technologies can see pretty quickly that they not only make factual errors but also do so in such a convincingly authoritative tone that these errors can be easy to miss.

Consider what happened when a first-year student at the State University of New York was using a chatbot to investigate the role of masculinity in the Spider-Man movies, and the bot she was using wrote at great length about an article by T. Sasson (2018). At one point, for instance, it wrote:

According to Sasson (2018), Parker’s journey exemplifies the exploration of masculinity within a coming-of-age narrative, showcasing the complexities of growing up amid traditional gender expectations.

Looks to us like a sound piece of writing. The chatbot even provided a Works Cited list indicating that the essay by T. Sasson was published in the edited collection Marvel Comics into Film: Essays on Adaptations Since the 1940s. So all’s well and good, right? The problem, however, is that, while this edited collection does exist—look out!—it does not include an essay by T. Sasson. Furthermore, T. Sasson, the supposed author, doesn’t exist either! Despite its reassuringly confident tone, the bot fabricated the reference out of whole cloth.

Along similar lines, consider what happened when, prompted by this student’s experience, we decided to conduct a little test by having a conversation ourselves with ChatGPT about the Spider-Man movies. The conversation started out swimmingly and lasted for several days, most of it centering on an article, referenced by the bot, by the former New York Times film critic A. O. Scott. But at the end of the exchange, when we asked the bot for a link to Scott’s article so that we could read it ourselves, the bot reported matter-of-factly that there was no such article, that it had merely imagined what Scott “might” have written. Shocked, and a little annoyed, we typed: “But you were the one who told us about the article; you’ve been discussing it with us for hours; and, instead of using phrases like ‘Scott might have said . . .’ or ‘might have argued . . . ,’ you wrote, ‘Scott insists . . .’ and ‘Scott appreciates . . .’ or ‘. . . claims.’” GPT’s response? To apologize, but simply to reiterate what it had said before: that it had been referring to “a hypothetical” article Scott could have written “based on his concerns and general outlook.”

The two of us were just experimenting, of course. But for a student fulfilling an assignment, or for someone in the world of work, journalism, or politics, say, a mistake like this can have serious consequences. As our sister-in-law, who works for a not-for-profit company, explains, while chatbots “offer increased productivity, they have their risks—like hallucinations” (Carolyn Kahn Birkenstein, The MITRE Corporation).

verify, verify, verify

As these examples demonstrate, AI-generated writing requires critical scrutiny and investigation on your part. The lateral reading and fact-checking tips we describe in Chapter 15 can help you with this. By leaving the program momentarily and searching online, you can verify any sources—or any examples, statistics, quotations, summaries, and so forth—that a chatbot provides.
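One small, practical aid for this kind of checking (our own suggestion, not a technique from Chapter 15) is to search a scholarly index for any citation a chatbot hands you. Here is a minimal sketch in Python, assuming the requests library and the public Crossref API; Crossref indexes much, but not all, scholarly publishing, so an empty or poor match is a red flag to investigate further rather than final proof of fabrication.

```python
# Sketch of one verification aid: ask the public Crossref API whether
# a citation a chatbot supplied matches any indexed published work.
# Assumes the `requests` package; Crossref's coverage is broad but
# not complete, so treat a miss as a prompt to keep digging.
import requests

def lookup_citation(title):
    """Print the closest bibliographic matches Crossref finds for `title`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        found_title = item.get("title") or ["(untitled)"]
        print(found_title[0], "|", item.get("DOI", "no DOI"))

# For example, check the collection named in the bot's reference above
# (the collection exists; the essay attributed to it did not):
lookup_citation("Marvel Comics into Film: Essays on Adaptations Since the 1940s")
```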

So beware. Even if a source your chatbot refers to does exist, you can’t always be sure that the bot accurately represents what that source says. Most of the time, it seems to us, chatbots do a good, even brilliant, job of summarizing what others say. But you never know in advance when they will present a summary that is just plain wrong or cite a source that doesn’t exist. Because chatbots often misrepresent and misquote sources, it’s a serious problem to rely on what they say about an assigned text without bothering to read it.

Imagine, for instance, what would happen if you were to rely on the following claims made by ChatGPT about Kenneth Goldsmith’s article, “Go Ahead, Waste Time on the Internet.” When asked for a summary of Goldsmith’s central argument, the bot explained that Goldsmith

posits that aimless drifting on the internet can be a creative act, drawing parallels to older forms of “wasting time” such as daydreaming or doodling, which have been recognized as sources of creativity.

Sounds plausible enough, perhaps—but not if you’ve actually read Goldsmith’s article. Goldsmith does not defend “wasting time” on the internet. And he never mentions any of the examples that the bot identified, like “drifting” online or other overtly unproductive activities like “daydreaming or doodling.” His title, in fact, encouraging us to “Go ahead” and “waste time on the internet,” is ironic, suggesting not that we should actually waste time on the internet but that activities often deemed to be wasteful—such as reading online and making social plans—are in fact edifying and productive.

The only way to avoid repeating this kind of AI-generated misrepresentation is to engage in the hard, often time-consuming work of reading every text you’re writing about and double-checking everything, even tracking down material, if need be, in a library database. Your instructor will hold you responsible for your writing, so it’s up to you to detect and remove any problems, which can only be done by developing a solid understanding of the sources and data you’re dealing with.

Often, your understanding of these sources needs to be more than solid. It needs to be granular. For instance, when we were working with a chatbot on a project involving Gerald’s essay, “Hidden Intellectualism” (see page 309), the LLM quoted Gerald as having written:

[N]o necessary connection has ever been shown between what goes on in the street and what goes on in the school. . . .

But Gerald never wrote that sentence, which, if you think about it, makes no sense. Instead, he’d written:

[N]o necessary connection has ever been established between any text or subject and the educational depth and weight of the discussion it can generate. (607)

We were able to spot this easily overlooked misquotation right away because Gerald is the author and we’re therefore on intimate terms with his text. ChatGPT was once again quick to apologize for its mistake, but the experience made us more vigilant than ever about double-checking everything chatbots say, no matter how authoritatively they say it.

The experience also led us to rethink the claim one sometimes hears today that, were internet access to become universal, chatbots would have the capacity to democratize education by serving as a personal “tutor” or “mentor” to virtually every student.

A cartoon shows a person sitting at a laptop while a robot stands over them.

Given how error-prone these technologies can be (though of course they may improve drastically in the near future), ceding full control to them is always a big mistake. Students need to take the lead, mentoring, directing, and often even correcting the chatbots, engaging with the technology in a complex collaboration in which the mentee becomes the final authority, or mentor.

A cartoon shows a robot sitting at a laptop while a person stands over them, pointing to the laptop.

ai for academic argument

But problems with chatbots are only part of the story. There’s also the vast potential of these technologies to help us improve as writers and thinkers. Besides giving you access to gigantic swaths of information (no matter how faulty that information can sometimes be), these sophisticated bots can help you overcome that dreaded feeling of writer’s block and keep you company during a long slog of writing, breaking through the isolation of the writing process and offering stimulating feedback that can sharpen your focus—even at two o’clock in the morning, if it suits you. Furthermore, as we’d now like to show, and as we suggested earlier, these technologies can also help you make what this book presents as the most important move of academic critical thinking: summarizing and responding to what “they say.”

How to get chatbots to do this, however, is not immediately apparent. While there’s a wealth of advice out there on how to prompt or direct such programs (a practice often referred to as “prompt engineering”), there isn’t much advice on how to use them to summarize and respond to other perspectives. Part of this has to do with the nature of chatbots themselves. After all, these bots “do not,” as ChatGPT tells us, “have opinions,” and are instead “designed to be neutral tools.” As Bing explains, “I am not allowed to express subjective opinions on controversial or sensitive topics.” As a result, you cannot engage with a bot in the same way you would with a noted scholar such as Michelle Alexander or a newspaper opinion writer. As they describe themselves, at least, these programs are “neutral” and, as Bing puts it, “objective” tools that remain above the fray of controversies and debates (though, as you may discover, because bots draw from what’s readily available online, they often reproduce the biases of mainstream media).
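For readers curious what prompting looks like under the hood, here is a minimal sketch of sending one of this chapter’s templates to a chatbot programmatically. It assumes the openai Python package (version 1.x), an API key set in your environment, and an illustrative model name of our own choosing; none of this is required for the template advice that follows, which works just as well typed into a chat window.

```python
# Minimal sketch of programmatic prompting, assuming the openai
# Python package (v1.x) and an OPENAI_API_KEY environment variable.
# The model name and the filled-in claim are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# One of this chapter's "they say / I say" templates, filled in.
prompt = (
    "I am writing an essay in which I want to argue that remote work "
    "improves productivity and could use some help thinking through my "
    "argument. Can you help me identify who says or might say otherwise?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model would do
    messages=[
        {"role": "system", "content": "You are a skeptical but constructive writing tutor."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```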

But there are ways to prompt or guide these technologies in a dialogical, “they say / I say” direction. For one thing, despite their claims of neutrality and objectivity, chatbots do sometimes misinterpret texts, as we saw earlier, so you can argue with them, within limits:

  • Although ChatGPT claimed that Kenneth Goldsmith argues __________, what he actually argues, on the contrary, is that __________.

For another thing, to the extent that chatbots don’t take positions themselves, they can help you understand positions others take, your own position, and the relationship between them.

For instance, to get help understanding someone else’s position—a particularly challenging reading or “they say” you’ve been assigned—you might write:

  • What does X mean when she writes, “__________”? In making this claim, is X being sincere or sarcastic?
  • Author X often refers approvingly to the theory __________. Can you explain in simple terms what that theory is?
  • X often uses the term __________. What does it mean?

Along similar lines, to understand how an author’s position fits into a larger debate or conversation, you might write something like the following:

  • X’s argument is that __________. Please identify another writer who names X and either critiques or builds on X’s perspective.
  • X’s argument is that __________. Please identify another writer who doesn’t name X but nevertheless offers a contrary perspective.

And a template like the following can help with answering the “so what?” and “who cares?” questions:

  • A classmate who read a draft of my essay in which I argue __________ said, “Your argument is clear enough to me, but I guess I don’t see why I should care one way or the other about it.” Can you suggest how I might answer this comment in my essay?

In yet other instances, you may be writing a research paper and could use some help coming up with a specific issue or debate to focus on. In such a case, you might prompt your chatbot as follows:

  • Please identify two or three controversies over the topic of __________.

In still other instances, you may have a clear idea of a debate you want to address but need help seeing how two authors fit into it:

  • Please help me put X and Y into dialogue about __________, clarifying exactly where they agree and disagree.

Templates like this, which put authors into dialogue, are extremely useful in an academic world that values the art of synthesizing and comparing perspectives.

In prompting chatbots in this way, you may find that, despite their impressive dexterity, they do not always get things right on the first try. When we used a version of the last template above with ChatGPT, for instance, we had to prompt and re-prompt the bot multiple times before it produced a response that actually put the two authors into dialogue. In its initial responses, what the bot said about one author seemed unrelated to what it said about the other—like two ships passing in the night. Only with persistent redirecting was the program able to pinpoint exactly where the two authors parted ways.

the naysayer role

There are, of course, other ways to move chatbots in a “they say / I say” direction, but one merits special attention. It involves exploiting what these technologies seem especially good at: role playing, and in particular taking on the role of a naysayer.

For example, you might ask a chatbot to play the role of a naysayer who challenges an argument you’re making, as in:

  • I am writing a paper on the topic of X, and I would like to take the position that __________. My view, in other words, is not __________, but __________. Before I begin writing, however, I’d like you to help me refine my argument by playing the role of a skeptic. You might point out where, for instance, someone could challenge my position, from what standpoint (feminist, conservative, pro-market capitalist, and so forth) they might mount such challenges, and how they might use my evidence against me.

Or, in revising something you’ve written, you might write:

  • When I asserted that __________ in a recent essay, my instructor wrote in the margin, “Surely, many would disagree,” and I’ve been struggling to figure out how to respond. Can you help me identify how others might disagree with my claim?

You can also enlist a chatbot’s help in answering a naysayer, reversing the roles outlined above so that, instead of critiquing the position you’re taking, the chatbot defends it:

  • I just thought of a counterargument to my position. I’ve been making the case that __________, but realized that a skeptic could always point out __________. Should I radically revise my argument, or even start over, or is there some way I can rescue it?
  • In my essay, I take the position that __________, and at one point I address the counterargument that __________. In my heart of hearts, I completely disagree with this counterargument. And yet, there must be some concession that I can make to it while still standing my ground. Can you help me identify such a concession by listing three or four?

Again, prompting a chatbot in ways like these isn’t always a one-and-done activity, but very often a matter of prompting it again and again. You may need to remind your chatbot what you want from it, as in:

  • Again, please remember that I’m asking you to __________, not to __________.

questions of style

Chatbots can also be prompted to help you with your writing itself, as in:

  • Please read this paragraph I’ve written and identify the biggest problems you see in my writing. Where, for instance, might my ideas be disconnected, my transitions weak, a quotation mishandled, or the connection unclear between the position I take and the one I’m challenging?

You can also enlist a chatbot’s help with tricky questions about voice—as in:

  • The following sentences I’ve written sound a little too informal for the kind of academic essay I’m trying to write: __________. Can you please suggest how I might translate these sentences into the kind of professional academic language that scholars might appreciate?

Conversely, chatbots can also be prompted in the opposite direction to help you include some of the non-academic language we discuss in Chapter 10, “Academic Writing Doesn’t Mean Setting Aside Your Own Voice”:

  • The following sentences I wrote contain too much jargon and sound a little stiff to me: __________. Please suggest how I could translate my point into more everyday, colloquial language.

Animating this chapter has been the question of how to use today’s revolutionary AI writing technologies not as a shortcut that does the work for you but as tools that, when used responsibly, can help you make the moves that matter in academic writing. We can’t put this point any better than a first-year student at Georgetown University did after reading a draft of this chapter:

Whenever our generation talks about AI, there is a lingering notion that it’s more powerful than humans, that AI overtakes what we humans can do. I believe changing the narrative to give us control over AI is exactly how we can use it to our advantage.

The ultimate goal, then, is to work collaboratively with this technology in a way in which you lead it instead of letting it lead you.

Exercises

  1. Treat the privacy policy (and/or the terms and conditions) of a chatbot as a “they say” that you read and respond to.
    1. Consider the risks that users should be aware of when using this program. Which risks are stated? Are any risks left out? Does the policy say anything about data being collected on users? What about matters of copyright?
    2. Identify any passages that may be especially hard to understand. Enlist the help of the chatbot itself to parse out what such passages are saying.
    3. Use one of the “they say / I say” templates in Chapter 2 to compose a summary and response to the policy.
  2. Ask a chatbot to write a few paragraphs summarizing a controversial topic that you’ve recently heard or written about. Does the chatbot live up to the “objectivity” and “neutrality” that most chatbots aspire to?
    1. Examine the output for signs of obvious bias or stereotypes. What do you see in the language and examples used in the argument?
    2. Does the summary leave out any important perspectives? What evidence exists beyond the internet-based dataset that the bot draws from?
    3. Describe one of these unrepresented perspectives: a personal experience, a possible interview or survey, a physical artifact. How could you revise this argument to account for this missing perspective?
  3. With a partner or on your own, review a chatbot-generated text that uses sources or quotations, and, using the lateral reading techniques discussed in Chapter 15, track down any sources or facts referenced by the bot. Do the sources actually exist? If so, are they quoted or summarized accurately? Grade the chatbot based on how well it answered the prompt and incorporated the words of others. Explain why you chose that particular grade.
  4. Ask a chatbot to help you revise your writing.
    1. First, experiment with style. Ask the bot to help you make a passage more concise or revise it to include more transitional words and phrases. Were the bot’s suggestions helpful?
    2. Next, ask the bot to adopt the voice of a certain perspective (a dynamic salesperson, a pompous politician, a nosy neighbor). Analyze the chatbot’s answers: Did it use any language or make any moves that you could borrow to make your writing stronger?
    3. Finally, experiment with audience and different kinds of writing. Ask the chatbot to turn an argument into a slide presentation outline, a country and western song, a clickbait headline. Some of these kinds of writing may be more useful to you than others, but you may find something unexpected that helps you strengthen what you say.