Overview

It is hard to find an article in the popular press about generative artificial intelligence (GenAI), particularly ChatGPT, that does not describe its presence and future influence as “inevitable.” Writing about ChatGPT for the MIT Technology Review, Will Douglas Heaven asserted, “What’s certain is that essay-writing chatbots are here to stay.”[1] Many teachers of film and media studies may have a distinct feeling of déjà vu—and ambivalence—in hearing industry leaders, tech insiders, and digital optimists celebrate the arrival of a new technological phenomenon. After all, many of us remember similar techie enthusiasm around SMART Boards, MOOCs, Google Glass, and Bitcoin—with mixed results. This Teaching Media dossier hopes to create a constructive space to discuss generative AI, assess its applications to our respective classrooms, and develop strategies for using (or not using) it. We should be clear at the outset that we write from different institutional contexts (Kimberly, a private liberal arts college; Peter, a private research university) as well as different ranks (Kimberly, tenured associate professor; Peter, tenure-track assistant professor) in the United States. While our positionalities shape our thinking, we hope the points raised here will help readers to develop and adopt policies that work best for their positions, institutions, and geographical contexts.

So, what exactly is it? Unlike earlier forms of artificial intelligence, generative artificial intelligence, as James H. Oldham explains, “implies a new form of AI that not only can complete binary tasks, but can create a product as well.”[2] ChatGPT, in particular, is built on a large language model (LLM) developed by OpenAI: a model trained on vast amounts of text to generate new text based on the statistical patterns and relationships it detects in its training data. Applications such as ChatGPT—the GPT stands for “Generative Pre-trained Transformer”—thus allow users to enter prompts and receive output shaped by the model’s training and the inputs that came before.
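For readers who want a concrete, if deliberately reductive, sense of what “patterns and relationships” means here, consider the toy sketch below, written in Python. It builds a bigram model: it counts which words follow which in a small sample text, then generates new text by sampling statistically likely next words. This is emphatically not how GPT-class systems are built (they rely on neural networks trained on billions of words), but it illustrates the basic logic of statistical next-word prediction; the sample text and all names in the code are our own illustrative inventions.

    import random
    from collections import defaultdict, Counter

    # A tiny sample "training" text; real models train on billions of words.
    corpus = ("the model predicts the next word and "
              "the next word follows the patterns of the training text").split()

    # Count how often each word follows each other word.
    transitions = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word][next_word] += 1

    def generate(start, length=10):
        """Generate text by repeatedly sampling a likely next word."""
        words = [start]
        for _ in range(length):
            followers = transitions.get(words[-1])
            if not followers:
                break  # no observed continuation, so stop generating
            choices, weights = zip(*followers.items())
            # Sample in proportion to observed frequency: no meaning or
            # intention, only correlations in the sample text.
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g., "the next word follows the patterns of the training text"

Such a program can produce fluent-looking strings without “knowing” anything at all, which is precisely the dynamic the MLA-CCCC task force describes below as “statistical correlation.”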

Proponents of generative artificial intelligence praise its ability to complete menial tasks and expedite the writing process. Its critics, though, sound numerous alarms about how it works. After all, generative artificial intelligence can only work with what it has been given. Andrew Hsu, an aerospace engineer and president of the College of Charleston, notes that humans remain vital for evaluating the reliability of information generated by generative AI because “If you feed it enough misinformation, it’s going to generate misinformation.”[3] In fact, the lack of moral and ethical reasoning behind generative artificial intelligence recently prompted Michael Townsen Hicks, James Humphries, and Joe Slater to argue that what it generates is “bullshit” (in the Harry Frankfurt sense of the concept) because it “produce[s] text that looks truth-apt without any concern for truth.”[4] Some have challenged the very notion of artificial “intelligence” altogether. A task force composed of members of the Modern Language Association (MLA) and the Conference on College Composition & Communication (CCCC) asserts, “Although it is often tempting to speak in terms of what an LLM is ‘doing’ or ‘intending’ or even ‘thinking,’ what we are witnessing is the production of word sequences that look like intentional human text through a process of statistical correlation. [...] LLMs do not, however, ‘think’ in the way that we would define such an activity as it takes place in human cognition.”[5] This view, however, remains contested, as Ted Underwood notes.[6] And these concerns relate only to the nature of generative artificial intelligence itself; its cultural, ethical, legal, environmental, and political impacts (detailed below) raise further alarms.

As with any new technology, understanding generative artificial intelligence requires one to chase a moving target. A subsequent working paper from the MLA-CCCC task force underscored the “constantly evolving” nature of generative AI.[7] Indeed, we might remain suspicious of the language of “evolving” even as we continue to hone our approaches to generative AI in writing and instruction. Relatedly, to return to the opening paragraph, Princeton University student Edward Tian, who developed the AI-detection software GPTZero, contends, “If nothing else, many teachers now recognize that they have an obligation to teach their students about how this new technology works and what it can make possible.”[8] We are similarly unconvinced that teaching ChatGPT and other generative AI applications can or should be our individual professional responsibility. Education researcher Cecilia Ka Yuk Chan argues that colleges and universities need a comprehensive AI policy that includes training for instructors and students,[9] and the Federation of American Scientists has called for a Congressionally established program that would create AI literacy guidelines and provide training for K-12 teachers.[10] But for the moment, colleges and universities are responding by offering forms of training and giving wide latitude to instructors to determine AI use policies for their own courses. This reality means that instructors must decide when and how AI could be an effective and ethical aspect of their teaching, a prospect that can be both empowering and daunting.

We hope this dossier provides a starting point for our colleagues to review the arguments for, against, and in between regarding generative artificial intelligence; to develop instructor, departmental, and institutional policies that suit their students and faculty; and to incorporate it into their courses (if at all) in ways that best serve their learning objectives and course outcomes.

Challenges

Let’s start with the challenges, because there are several.

Generative AI complicates our commitments to academic honesty, integrity, and originality. If the work is not wholly the student’s, does that not suggest they are cheating and undercutting their education? Papers written entirely by generative AI clearly cannot be accepted, but the harder question is to what extent students may use it at all. Some instructors allow it in the early stages of writing, what scholars of composition and rhetoric call “invention.” Others suggest it should be properly cited as a source. And still others contend it may be useful as a revision and proofreading tool. Even if a teacher decides to ban generative AI outright, these varying degrees of usage suggest we must be open, direct, and clear with students about how we expect it to be used.

That said, definitively detecting its usage remains nearly impossible. The available AI-detection tools, even proprietary ones, acknowledge limited reliability. “Students may experience an increased sense of alienation and mistrust,” the MLA-CCCC task force wisely cautions teachers, “if surveillance and detection approaches meant to ensure academic integrity are undertaken.”[11] Detection is further complicated by the considerable time, labor, and bureaucratic hurdles that accompany charges of academic dishonesty. It should come as no surprise that some teachers have chosen to wash their hands of the matter entirely: trusting that students are doing the work rather than trying to prove that they are not.

Another major challenge is the quality of the writing generated by generative AI. Although it is “designed to convey convincing lines of text,”[12] ChatGPT-generated prose can read as strange and stilted. Christine Jeansonne, for example, has observed with her students how generative AI “did not effectively appeal to emotion [in writing] because, again, it lacked real life examples of shared experience.”[13] For this reason, she suggests students and teachers rhetorically analyze such output for its limitations and deficits. Others have noted its tendency to fabricate citations to back up its implausible or simply incorrect claims.[14] As a result, the scientific journal Nature has already instituted a policy that generative AI tools cannot be listed as authors “because they cannot take responsibility for the content and integrity of scientific papers.”[15] (Cue a Foucauldian critique of generative AI via the author function.)

Above, we discuss generative AI’s tendency to reproduce misinformation. By extension, we might note how algorithmic bias can reproduce racism, sexism, homophobia, ableism, and other social inequities, even as the technology attempts to navigate around those pervasive ideologies. The mere act of using generative AI, even in an instructive capacity, has material effects we must consider beforehand. Such applications are not readily available to all students in all institutions here and around the world; the nature of and reasons for those disparities warrant critical attention. Several news sources have also documented the resource expenditure, especially of water, and the carbon dioxide emissions connected to using ChatGPT.[16] Additionally, generative AI often depends upon the exploitation of a global labor force working under dehumanizing conditions.[17] And as Matteo Wong has warned, generative AI furthers internet culture’s preference for and proliferation of select languages, putting others at further risk of erasure and extinction.[18] This risk would have a disproportionate effect on populations living in the Global South as well as Native and Indigenous communities in the Global North. The ethical pedagogy and use of generative AI therefore extend far beyond the intellectual consequences to include social, environmental, and cultural ramifications as well.

Opportunities

Of course, there may be advantages to a critical application of generative artificial intelligence.

One opportunity is to demonstrate for our students the power and significance of media literacy by extending it to a critical AI literacy. This extension includes not only exploring the various social and environmental impacts outlined above but also developing strategies for effectively prompting generative artificial intelligence.[19] Such lessons, in turn, may require institutions to train teachers to do this work. While web videos, websites, and dossiers such as this one may facilitate that work, we hope—perhaps naively—that institutions will commit time, money, and resources to helping educators understand these applications and the surrounding issues so that they can better teach and prepare students.[20]

But this opportunity also means more labor for teachers, although some observers have pointed to ways that generative artificial intelligence may help lighten that load. Jacob Steiss et al. contend that generative AI can be useful for “providing feedback in the early phases of writing, where students seek immediate feedback on rough drafts. This would precede, not replace, teacher-provided formative or summative evaluation that is often more accurate and more tailored to student-specific characteristics, albeit less timely.”[21] Such a resource may prove especially helpful for students developing English-language proficiency or learning a new language. Journalists and commentators have also identified ways generative AI might support neurodivergent students: summarizing readings, assessing and altering tone in writing, and offering practice in conversational skills.[22]

Similarly, teachers might consider how generative AI can serve as a collaborative partner in their pedagogical work, from drafting essay assignments to building syllabi. While some will wonder how long it will be until AI fully replaces us, we might counter that the imperfections of ChatGPT and other applications still necessitate the expertise, emotional intelligence, and sensitivity of an instructor. The possibility here lies in “human in the loop” design thinking, which Ge Wang of the Stanford Institute for Human-Centered Artificial Intelligence describes as “a process that harnesses the efficiency of intelligent automation while remaining amenable to human feedback...while retaining a greater sense of meaning.”[23] The emphasis is on recruiting AI to assist with the often time-consuming invention stage, when something is first being drafted, and then revising and refining the resulting output according to our own expertise, rather than shortcutting the development process once and for all.

The understandable frustration and concern around generative AI also create an opportunity to reflect upon how we teach and why. H. Holden Thorp, editor-in-chief of the Science family of journals, argues, “If anything, the implications for education may push academics to rethink their courses in innovative ways and give assignments that aren’t easily solved by AI.”[24] For instance, professors have expressed disdain for how students use generative AI to complete discussion board posts and essays, but we might also ask whether those modes are the most effective ways to assess student learning in the first place. What other types of assessment might give us a sense of students’ understanding of the material? How might we incentivize them not only to complete the assignment without generative AI but to enjoy doing so? Admittedly, that is a tall order, but the point we make here is that the issue may lie not just with generative AI but with the established approaches to delivery and assessment that the ease, accessibility, and strengths of generative AI so readily complicate. Our goal herein is not to dismiss traditional writing assignments outright or to offer some panacea; rather, we wish to draw attention to how the skepticism and disdain for generative AI ask us to re-examine why we do what we do. We can reinforce why we assign these specific assignments to our students, we can develop new approaches to assessment that anticipate the temptation to use generative AI, or we can forge a path that toggles between both poles.

Conclusion

Whatever approach a teacher chooses to take (bound, that is, by institutional and practical limitations) models for our students how to think through and with a new technology. To this end, Matthew A. Vetter et al. note that “Whether teachers integrate generative AI into their pedagogies or impose limitations, explore ethical parameters, or implement static policies, a local ethic is ever-present.”[25] Even a lack of a policy represents, in effect, a policy. Scientists have already noted how generative AI complicates simplistic understandings of “authorship, plagiarism, and sources,”[26] while rhetoric and composition scholars underscore questions of “AI authorship, agency, reliability, and accessibility”—and their interconnectedness.[27] Regardless of how we choose to handle it, generative artificial intelligence poses a challenge and an opportunity for us as teachers.

What follows is not a roadmap so much as a series of possibilities and provocations for college instructors to consider as they navigate what lies ahead. The contributions herein engage these questions in two interrelated clusters: first, lessons that might be learned from critical work both within and outside our field, including science and technology studies, computer science, sociology, rhetoric and composition, and scholarship on teaching and learning, among others; and second, explanations of and reflections upon sample assignments, delivery methods, and modes of assessment that thoughtfully and carefully engage artificial intelligence. Taken together, we hope they provide a foundation for teachers to think and work through the challenges posed by generative AI.

The essays included in this dossier offer examples of how generative AI can be incorporated into cinema and media studies classrooms in various ways. Several essays focus on specific assignments. To begin, Bridget Kies and Mel Stanfill take up a key legal and ethical issue in media studies, copyright, in their essay “From Algorithms to Attribution: Teaching AI and Copyright in Media Studies.” They use a series of remix assignments that ask students to consider the ethical implications of generative AI for the creative labor used both to train generative AI and to produce new works. In “Teaching AI Against the Given World,” Jiangtao Harry Gu describes an assignment for an introductory-level course that helps students better understand how large language models (LLMs) function by using the Giant Language model Test Room (GLTR) to illustrate the ethical dimension of how LLMs “predict” the words that will appear in a prompted sentence. Other essays in this issue offer examples of linked or connected assignments. Iskandar Zulkarnain and Suriati Abas describe two assignments designed to foster critical AI literacy and consider its application in teaching social justice concepts in their essay, “Fostering AI Literacy in Higher Education: Integrating Generative AI and Social Annotation Tools for Critical Engagement.” Next, Nilo Couret’s “Speculative Historiography in the Age of Hallucinations” describes a “critical fabulation” project that draws on the work of Saidiya Hartman as well as the known tendency of generative AI tools to hallucinate. The project asks students to use repeated and revised prompting to encourage the generative AI to imagine a world shaped by the archival object and then to critically engage the response. In “Intelligence and Imitation: Teaching the Turing Test Now,” Kimberly Hall explains a series of assignments based both on representations of Turing-like tests for AI in fiction and film and on prompting current generative AI tools, using Turing’s “Imitation Game” as a critical framework. Our final two authors explore how a return to the personal might help us adapt to (and, perhaps, work around) generative AI in the classroom. Jelena B. Ćulibrk explores the political dimensions of generative AI in “Academia in Anarchy? Hyper-personal Pedagogy in the Age of AI,” considering connections between global social justice movements and the incorporation of generative AI policies and assignments in her own classroom. In “Paranoid Pedagogy; or, Learning to Trust the Process,” Peter C. Kunze illustrates how film and media studies can draw from scholarship in composition and rhetoric on process pedagogy to provide students and teachers with a reflective and recursive approach to the use of generative AI.

Each of these essays explores both the practical realities of working with or around generative AI and the critical issues that this technology can engage or return us to in a productive or provocative way. In bringing together these discussions, we hope to provide ideas and assignments, to spark conversation, and to think across our individual approaches so that we might begin to imagine what role generative AI will play in the teaching of cinema and media studies in the years ahead.


Kimberly Hall is an associate professor of English and digital media studies at Wofford College. Her research focuses on autobiographical narrative and social media discourse and has appeared in Television and New Media, Women & Performance, Amodern, Modern Language Studies, and Social Media + Society.

Peter C. Kunze is assistant professor of communication at Tulane University. His first book, Staging a Comeback: Broadway, Hollywood, and the Disney Renaissance, was published by Rutgers University Press in 2023.


    1. Will Douglas Heaven, “ChatGPT is Going to Change Education, Not Destroy It,” MIT Technology Review, April 6, 2023, https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/.

    2. James H. Oldham, “ChatGPT: The Co-teacher We Need?” English Journal 113, no. 4 (2024): 54, https://doi.org/10.58680/ej2024113453.

    3. Andrew Hsu, quoted in Cole Claybourn, “Why Some College Professors are Embracing ChatGPT,” U.S. News & World Report, May 9, 2023, https://www.usnews.com/education/best-colleges/articles/why-some-college-professors-are-embracing-chatgpt.

    4. Michael Townsen Hicks, James Humphries, and Joe Slater, “ChatGPT is Bullshit,” Ethics and Information Technology 26, no. 38 (2024): 1, https://doi.org/10.1007/s10676-024-09775-5.

    5. MLA-CCCC Joint Task Force on Writing and AI, “Overview of the Issues, Statement of Principles, and Recommendations,” Modern Language Association and Conference on College Composition & Communication (2023): 7.

    6. Ted Underwood, post to “MLA-CCCC Joint Task Force on Writing and AI Working Paper 1: Overview of the Issues, Statement of Principles, and Recommendations,” MLA-CCCC Joint Task Force on Writing and AI, Humanities Commons, July 15, 2023, 3:05 p.m., https://aiandwriting.hcommons.org/working-paper-1/.

    7. MLA-CCCC Joint Task Force on Writing and AI, “Generative AI and Policy Development: Guidance from the MLA-CCCC Task Force,” Modern Language Association and Conference on College Composition & Communication (April 2024): 3.

    8. Edward Tian, quoted in Heaven, “ChatGPT is Going to Change Education, Not Destroy It.”

    9. Cecilia Ka Yuk Chan and Wenjie Hu, “Students’ Voices on Generative AI: Perceptions, Benefits, and Challenges in Higher Education,” International Journal of Educational Technology in Higher Education 20, no. 43 (2023): 1–18, https://doi.org/10.1186/s41239-023-00411-8.

    10. Amanda Bickerstaff, Amanda Depriest, and Corey Layne Crouch, “Establish a Teacher AI Literacy Development Program,” Federation of American Scientists, June 24, 2024, https://fas.org/publication/teacher-ai-literacy-development/.

    11. MLA-CCCC Joint Task Force on Writing and AI, “Overview of the Issues, Statement of Principles, and Recommendations,” 7.

    12. Hicks, Humphries, and Slater, “ChatGPT is Bullshit,” 3.

    13. Christine Jeansonne, “Using ChatGPT in the Composition Classroom: Remembering the Importance of Human Connection to Audience,” College Teaching (2024): 2, https://doi.org/10.1080/87567555.2024.2304003.

    14. William H. Walters and Esther Isabelle Wilder, “Fabrication and Errors in the Bibliographic Citations Generated by ChatGPT,” Scientific Reports 13, no. 14045 (2023): 1–8, https://doi.org/10.1038/s41598-023-41032-5.

    15. Chris Stokel-Walker, “ChatGPT Listed as Author on Research Papers,” Nature 613 (26 January 2023): 620, https://doi.org/10.1038/d41586-023-00107-z.

    16. Cindy Gordon, “ChatGPT and Generative AI Innovations Are Creating Sustainability Havoc,” Forbes, March 12, 2024, https://www.forbes.com/sites/cindygordon/2024/03/12/chatgpt-and-generative-ai-innovations-are-creating-sustainability-havoc/; Kate Saenko, “Is Generative AI Bad for the Environment? A Computer Scientist Explains the Carbon Footprint of ChatGPT and Its Cousins,” The Conversation, May 23, 2023, https://theconversation.com/is-generative-ai-bad-for-the-environment-a-computer-scientist-explains-the-carbon-footprint-of-chatgpt-and-its-cousins-204096.

    17. Billy Perrigo, “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” Time, January 18, 2023, https://time.com/6247678/openai-chatgpt-kenya-workers/; Julian Jacobs, “How Generative AI is Changing the Global South's IT Services Sector,” Center for Data Innovation, June 10, 2024, https://www2.datainnovation.org/2024-ai-global-south.pdf.

    18. Matteo Wong, “The AI Revolution Is Crushing Thousands of Languages,” The Atlantic, April 12, 2024, https://www.theatlantic.com/technology/archive/2024/04/generative-ai-low-resource-languages/678042/.

    19. Ashley Mowreader, “Academic Success Tip: Working Smarter with ChatGPT,” Inside Higher Ed, July 26, 2023, https://www.insidehighered.com/news/student-success/academic-life/2023/07/26/teaching-college-students-write-using-chatgpt.

    20. MLA-CCCC Joint Task Force on Writing and AI, “Overview of the Issues, Statement of Principles, and Recommendations,” 10–11.

    21. Jacob Steiss, Tamara Tate, Steve Graham, Jazmin Cruz, Michael Hebert, Jiali Wang, Youngsun Moon, Waverly Tseng, Mark Warschauer, and Carol Booth Olson, “Comparing the Quality of Human and ChatGPT Feedback of Students’ Writing,” Learning and Instruction 91 (2024): 13, https://doi.org/10.1016/j.learninstruc.2024.101894.

    22. Nicki Faulkner, “How Can ChatGPT Assist with Neurodiverse Challenges?,” Snowplow, March 9, 2023, https://snowplow.io/blog/how-can-chatgpt-assist-with-neurodiverse-challenges/; Amanda Hoover and Samantha Spengler, “For Some Autistic People, ChatGPT is a Lifeline,” Wired, May 30, 2023, https://www.wired.com/story/for-some-autistic-people-chatgpt-is-a-lifeline/.

    23. Ge Wang, “Humans in the Loop: The Design of Interactive AI Systems,” Stanford Institute for Human-Centered Artificial Intelligence (HAI), October 20, 2019, https://hai.stanford.edu/news/humans-loop-design-interactive-ai-systems.

    24. H. Holden Thorp, “ChatGPT is Fun, but Not an Author,” Science 379, no. 6630 (27 January 2023): 313, https://doi.org/10.1126/science.adg7879.

    25. Matthew A. Vetter, Brent Lucia, Jialei Jiang, and Mahmoud Othman, “Towards a Framework for Local Interrogation of AI Ethics: A Case Study on Text Generators, Academic Integrity, and Composing with ChatGPT,” Computers and Composition 71 (2024): 9, https://doi.org/10.1016/j.compcom.2024.102831.

    26. Eva A. M. van Dis, Johan Bollen, Willem Zuidema, Robert van Rooij, and Claudi L. Bockting, “ChatGPT: Five Priorities for Research,” Nature 614 (9 February 2023): 225, https://doi.org/10.1038/d41586-023-00288-7.

    27. Vetter, Lucia, Jiang, and Othman, “Towards a Framework for Local Interrogation of AI Ethics,” 9.