Using Generative AI Ethically

Now that we’ve covered the importance of acknowledging your sources, you may be wondering how this applies to information generated by a computer.

Let’s consider a case study. A student, Maya, is putting together a presentation about the serious consequences of water temperatures rising above 100 degrees Fahrenheit off the coast of Florida. She’s analyzed the speaking occasion and done her research by identifying valid facts and opinions. Now it’s time to create the presentation. As she thinks about what to say and how to organize it, she asks ChatGPT, a generative AI tool developed by OpenAI, to provide some key ideas for her presentation. Within seconds she receives a clear, well-written summary of six significant harms—from health-related dangers to increases in the intensity of storms.4 She’s tempted to use four of them and will back up each harm with her views and the supporting materials she’s collected.

Maya now faces several ethical decisions. Should she use the wording and content generated by AI without revealing her source? Does she run the risk of being caught and accused of plagiarism? Is it unethical to use generative AI if her instructor, college, or organization restricts its use? What if her college or class doesn’t have a specific AI policy? Ethical questions like these aren’t always straightforward, especially when they involve rapidly changing technology. To start considering these ethical questions, it helps to understand the nature of generative AI.

UNDERSTANDING GENERATIVE AI

Generative artificial intelligence (or generative AI) is a kind of machine learning algorithm that learns from vast amounts of available content, including text, images, and audio, in order to produce new content.5 Generative AI entered the communication mainstream in the early 2020s with platforms such as OpenAI’s ChatGPT, Google’s Bard and Gemini, and Microsoft’s Bing Chat and Copilot. By the time you read this book, there will be many more new and improved generative AI platforms and capabilities.

In this chapter, we refer to generative AI platforms that generally work in similar ways: you create a prompt or start a conversation that asks for or describes what you want AI to generate, and the generative AI tool delivers a response gleaned from sources all over the internet, including journals, websites, databases, and social media. In doing so, the AI tool searches and sorts through vast amounts of information to deliver the new content you request.

It may seem that generative AI is a more efficient version of doing an INTERNET SEARCH (Part 3, 141–42)—and by the time you read this book, AI will likely be incorporated into your favorite internet search tool. But consider the key difference between searching for and generating the main ideas for a presentation. While a search tool helps you locate preexisting material from relevant sources, a generative tool creates new material by “reading” those sources, selecting the content to use, and presenting its findings in a way that mimics a human doing the work on your behalf. So, is it ethical to use the material that AI generated from a wide range of other sources? Should you cite generative AI and, if so, when?

ETHICAL DECISIONS FOR USING GENERATIVE AI

Let’s return to three of the rhetorical elements that are most important when considering how to use generative AI ethically: your speaking occasion, yourself as a speaker, and your content.

Occasion: Using Generative AI in Academic Contexts The most important guideline for using AI ethically is to follow the policy or policies set by your instructor, college, or the institution hosting your presentation. If using generative AI is prohibited or discouraged, doing so in any capacity would be unethical and could be labeled as plagiarism. In some cases, you might be barred from speaking or receive an F on an assignment.

Answer the following questions before using generative AI in an academic context—even if you just plan to use it for brainstorming. If you don’t know the answer to either of these questions, consult your syllabus or ask your instructor.

  • Does your instructor or college have a policy for using generative AI as the basis for academic work?
  • If permitted by your instructor or college, what are the acceptable circumstances and conditions for using generative AI?

Speaker: Generative AI and Your Ethics If permitted in your speaking situation, AI can be a helpful tool as you begin thinking about what to say—but only if you adapt any generated material to your audience and to your own ethics as a speaker.

The voice, language, and ideas generated by AI come from a wide range of sources—writers, thinkers, academics, social media users, editors, and even other AI chatbots. The material it generates does not come from you, speaking to your audience on a particular occasion. It doesn’t incorporate your natural speaking style or your unique perspectives. You won’t be as familiar with supporting material generated by AI as you would if you’d spent the time doing your own RESEARCH (Part 3, 134–51): finding, reading, evaluating, and summarizing your sources.

If you ask generative AI to produce an outline or manuscript, or if you use language generated by AI, you are crossing an ethical line by plagiarizing and saying words or ideas that don’t reflect you and your personal ethics.

Content: Evaluating and Verifying Generated Material As an ethical speaker, it is your duty to EVALUATE YOUR SOURCES (Part 3, 143–47), and the same is true for anything generated by AI. Harvard University’s Information Technology office puts it this way: AI-generated content can be inaccurate, misleading, entirely fabricated, or offensive, so be sure to carefully review any work containing AI content before you use or publish it.6

AI is notorious for producing HALLUCINATED SOURCES (Part 3, 142–43) along with fully reliable ones, a concern that continues even as generative AI tools become more sophisticated. Upon releasing a powerful new version of ChatGPT in 2023, for example, OpenAI admitted that ChatGPT “‘hallucinates’ facts and makes reasoning errors.”7 So, if you use AI-generated text without verifying and citing the information it generates, you not only risk plagiarism, but you also risk losing credibility and spreading false information.

BEST PRACTICES FOR USING GENERATIVE AI ETHICALLY

Like any tool, generative AI can be used properly or improperly. Here are some best practices for using it ethically:

  • Use AI only if permitted by your instructor, college, workplace, or speaking situation.
  • Use AI as part of your overall process, not as an end point. You might, for example, USE AI TO NARROW YOUR TOPIC (Part 3, 131–32) or to consider new and opposing viewpoints.
  • Track down the original source for any information provided by generative AI. ASK AI FOR CITATIONS (Part 3, 142–43), and if AI cannot provide a source, seek out the original source on your own. Cite that source (not AI) in your presentation.
  • Verify all AI-generated content. EVALUATE (Part 3, 143–47) all information to make sure it is accurate, relevant, valid, and consistent with reputable sources. Consider the example of two lawyers who trusted ChatGPT to provide accurate information for a legal brief they submitted to a judge. They were fined $5,000 for including fake case citations and making “false and misleading statements to the court.”8
  • Cite any AI-generated ideas. Use an ORAL CITATION (Part 3, 150) during your presentation and a written citation if required. All major source citation formats provide guidance about how to cite ideas generated by AI.
  • Be mindful of biases. Generative AI reflects the biases that exist within the data it was trained on, which can reinforce inequalities and existing societal prejudices.9
  • Protect your credibility. If an audience believes you are using generative AI without disclosing it, you may be seen as unethical and lose your credibility as a result.10

As we recommended earlier, you should avoid using someone else’s sequence of ideas and organization without acknowledging and citing the similarities in structure. Recall Maya, the student researching a presentation on rising water temperatures off the coast of Florida. She would not want to use the AI-generated summary of six significant harms without citing it as a source. Since AI is compiling the work of others on your behalf, it counts as a “someone” who should be cited anytime you use AI-generated ideas, main points, or language.

And as an ethical speaker, you should not deliver a presentation written by AI as if it were your own. If you rely on AI to generate a full speech or outline the night before your presentation, you are committing plagiarism, you will risk spreading false information, and you will deliver a presentation that didn’t come from you, for your audience, your occasion, and your purpose. Remember, your voice, not an AI chatbot’s, is the voice that matters.

Glossary

generative AI
A kind of machine learning algorithm that produces new text, images, or video by searching and sourcing available content from a variety of online sources, and sometimes from HALLUCINATED SOURCES.

Endnotes

  4. ChatGPT, response to “Describe the dangers of 100 degree plus ocean temperature on the Florida coast,” OpenAI, August 3, 2023, https://chat.openai.com.
  5. S. S. Sundar and M. Liao, “Calling BS on ChatGPT: Reflections on AI as a Communication Source,” Journalism & Communication Monographs 25, no. 2 (2023): 165–80, https://doi.org/10.1177/15226379231167135.
  6. Harvard University Information Technology, “Getting Started with Prompts for Text-Based Generative AI Tools,” Harvard University, August 30, 2023, https://huit.harvard.edu/news/ai-prompts.
  7. “GPT-4,” OpenAI, March 14, 2023, https://openai.com/research/gpt-4.
  8. Sara Merken, “New York Lawyers Sanctioned for Using Fake ChatGPT Cases in Legal Briefs,” Reuters, June 26, 2023, https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22.
  9. Simon Friis and James Riley, “Eliminating Algorithmic Bias Is Just the Beginning of Equitable AI,” Harvard Business Review, September 29, 2023, https://hbr.org/2023/09/eliminating-algorithmic-bias-is-just-the-beginning-of-equitable-ai.
  10. Bronson Dant, “Using AI in Writing: Ethical Implications and Personal Responsibility,” LinkedIn, September 20, 2023, https://www.linkedin.com/pulse/using-ai-writing-ethical-implications-personal-bronson-dant-pmp/.