From Algorithms to Attribution: Teaching AI and Copyright in Media Studies
The impacts of generative artificial intelligence tools, which can produce various types of expression like language, images, video, and music, are large, growing, and not going away anytime soon. A 2023 paper from the Association for Computing Machinery estimated that 80% of U.S. workers will have their jobs affected by generative AI,[1] and as recent negotiations over the use of AI by both the Writers Guild of America (WGA) and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) demonstrate, media industries are at the forefront of these impacts. The recently negotiated minimum basic agreements (MBAs) for both WGA and SAG-AFTRA ultimately set limits on how studios can use generative AI tools in creating media. Actors gained protection for “consent and compensation” for digital replicas.[2] Writers gained protection against sharing credits with AI, being forced to use AI, or having their work used to train future AI.[3]
These agreements sit at the intersection of the two big concerns about generative AI for media: how models are trained and how AI might be used to undermine human creation of media. The unions’ restrictions on using their members’ work to train AI models (meaning that it cannot be used as input on which an AI tool draws to create new outputs) align with recurring contestation over whether basing these tools on existing creative work is legal or ethical. Newspapers, authors, and visual artists have all sued AI companies for copyright infringement for training AI tools on their works.[4] There have also been disputes over scraping everyday people’s web content.[5] The union agreements also underline the issue of recognizing labor. Creating an actor’s performance or writing for TV and film requires skill and labor, and that labor should be recognized as more than training data. The MBAs assert that creators should get credit and fair pay for their labor, and should not be coerced into using AI tools or becoming training material for them. We argue these same issues are essential to helping students think through AI tools. Centering these questions in courses—whether screenwriting, media law, internet studies, or film production—allows instructors to work through concepts like fair use, derivative works, and work for hire, as well as larger ethical issues like the disjuncture between what is legal and what seems moral or fair, preparing students for potential futures as media authors.
Making Sense of Generative AI with Remix Culture
Fundamentally, the underlying demands from both unions assert attribution, compensation, and control as the keys to when and how it is acceptable to use AI tools. These are the same characteristics that Mel’s research has shown underlie when it is socially and legally acceptable to reuse other people’s creative materials in the context of music remix.[6] Examining popular discussion, from 2009 to 2018, about songs that build on other songs, Mel found that samples and other forms of musical reuse are socially regarded as valuable when they are transformative, which a 1994 U.S. Supreme Court case defined as “altering the original with new expression, meaning, or message.”[7] The case, which involved a lawsuit over a parody song by hip-hop group 2 Live Crew, determined that transformative works were fair use. Musical reuse has been not only legally contested but also culturally contested: When is reworking someone else’s creation doing work, and when is it free-riding? How skillful and transformative is the addition to the existing work? Is it respectful of the source? Mel found that the answers to these questions tend to protect the socially powerful, such as record labels and white artists, at the expense of marginalized people.
Using AI to create media based on existing work raises many of the same questions, and it is valuable to think (and teach) about AI in terms of the benefits and limitations of remix. We have come to culturally recognize samples, mashups, and other kinds of remix as creative in their own right even though they build on someone else’s work—at least when artists do this with attention to credit and labor. Prior cultural moments that have unsettled how we think about media creation can therefore offer insights into similar uses of AI. For example, when OpenAI found a “soundalike” for Scarlett Johansson to voice GPT-4o, without crediting Johansson and without her permission, the act was legal but unpopular, and after backlash the company eventually stopped using the soundalike voice.[8]
Credit and labor are key. Professional organizations like the Modern Language Association have developed templates for citing AI, modeled on standards for citing sources in other contexts, so that authors can acknowledge what they have done with these tools.[9] Students must be transparent not only about using AI but also about which tool they used, which prompts they entered, what the outputs were, and what modifications they made to those outputs, in something like a social-scientific methods section or appendix in their assignments. Taking this approach helps resolve essential questions: Are you telling the truth about creating if you didn’t create by yourself? Is it acceptable to draw on other people’s work this way? What are the guardrails on using these tools? In other words, as with sampling and remix, use of these tools must also be transformative.
Putting Copyright and Remix Lessons into Practice
In spring 2024, Bridget taught an undergraduate seminar on AI in contemporary media industries. The course examined the concept of “authorship”—from shared folkloric traditions to the solo genius of the Romantic author, from auteur theory and its limitations to collaborative television writers’ rooms. Students examined challenges to and limitations of U.S. copyright law, learning about non-copyrightable material like titles and scènes à faire, public domain and fair use, and internet-based communication like memes. The class also devoted significant time to the new MBAs resulting from the aforementioned WGA and SAG-AFTRA strikes, as well as other labor responses to generative AI.
By the time the course turned to generative AI use in media production, students were primed to think about copyright and attribution. As they used various AI tools each week, they were prompted to reflect on how this use affected their own creative practices—did it replace their labor, or did it enhance it?—and how it might affect the labor and intellectual property of other artists.
As one example, students watched the 2008 documentary RiP! A Remix Manifesto, which uses mashup artist Girl Talk as a case study that embodies the tension Mel’s research finds between actions that are legally forbidden (Girl Talk uses samples without permission and compensation) yet culturally sanctioned (Girl Talk’s music is wildly popular). Students also read a chapter explaining fan fiction from Henry Jenkins’ Textual Poachers, after which they read about Character.AI’s fan-created character chatbots.[10] Students then found a chatbot of a favorite film or TV character, had a conversation with it, and finally created their own chatbot.
Modeling the activity, Bridget conversed with a previously created chatbot of Stede Bonnet, the main character from the HBO series Our Flag Means Death (2022–23). At many points, the chatbot captured the inflection and general spirit of television Stede, an affluent pirate who is always optimistic and chipper [Fig. 1].

When asked questions about contemporary matters, like the LGBTQ representation for which Our Flag Means Death was known, the chatbot could only produce answers using predictive language patterns from the provided information. The chatbot reproduced contemporary concepts (living one’s truth) and language (“queer” as an umbrella term for a multitude of identities) [Fig. 2]. Although television Stede and the series in general are often anachronistic, these comments felt less authentic to the television character and more reflective of contemporary discourse. Students also conversed with other Stede chatbots and determined that the chatbots with more conversation experience felt more “authentic” to the characters. This is likely both a consequence of popularity and a cause of it: the bot may refine its answers through experience, but fans are also more likely to interact with better-designed chatbots in the first place.

After this lab time, Bridget asked students who deserves the credit for the chatbot: David Jenkins, the creator of Our Flag Means Death; actor Rhys Darby, who plays the character; Character.AI user Coray (@Reader1989), who specified the parameters for the chatbot; or the Character.AI users whose chats helped refine “Stede.” She then asked students to reflect on how the answer to this question might relate to film and television production, from literature-to-media adaptation, to the differences between a series creator and a staff writer, to transformative works like fan fiction—which, essentially, character chatbots are. Finally, Bridget asked students how character chatbots might be used in future media production. They imagined chatbots as tools to help screenwriters refine a character or practice dialogue, but they also envisioned a digital space in which users could interact with various uniquely created chatbots in an interactive media experience. Overall, the lesson reinforced the notion that “authoring” continues to be a networked, plural activity even as our tools for authorship have changed.
Conclusion
As our theoretical and practical approaches show, there is no magic answer for teaching about or with generative AI. We find that discussing copyright and attribution, especially identifying their legal and cultural overlaps and distinctions, is a useful starting point to help students develop their own ethics regarding generative AI. Our experiences show that our task, as with other teaching topics, is to raise questions rather than provide concrete answers. As with other tools, we should encourage students to explore the possibilities of generative AI while minimizing its risks. We need to strike a better balance between protection and reuse in AI so as not to repeat the mistakes made with earlier media forms.
No doubt, generative AI will continue to have a significant impact on film and media production and will consequently need to be a part of film and media courses. By equipping students with the framework of attribution, we encourage them to be more reflective about their uses of and attitudes toward generative AI in the creative industries.
Bridget Kies is Associate Professor of Film Studies and Production at Oakland University, where she is currently serving as faculty fellow for AI and teaching. She has previously edited a dossier of Teaching Media on inclusive course design. Her research on television and audiences has been published in numerous peer-reviewed journals. With Megan Connor, she is co-editor of Fandom, the Next Generation (University of Iowa Press, 2022), the first significant study of transgenerational fandoms and intergenerational fan relationships. Her forthcoming book on the iconic 1980s television series Murder, She Wrote will be published by Wayne State University Press in January.
Mel Stanfill is Associate Professor with a joint appointment in the Texts and Technology Program and the Department of English at the University of Central Florida. Key focuses of Stanfill’s work include the uses and abuses of platforms and cultural studies of the law. Their work has been published in New Media & Society, Cultural Studies, and Television & New Media, and they are the author of Rock This Way: Cultural Constructions of Musical Legitimacy (Michigan, 2023).
1. David Leslie and Francesca Rossi, ACM TechBrief: Generative Artificial Intelligence (New York: Association for Computing Machinery, 2023).
2. “Tentative Agreement Reached!,” SAG-AFTRA, November 29, 2023, https://www.sagaftra.org/tentative-agreement-reached-0.
3. “What We Won,” WGA Contract 2023, accessed May 8, 2024, https://www.wgacontract2023.org/the-campaign/what-we-won.
4. “Eight US Newspapers Sue ChatGPT-Maker OpenAI and Microsoft for Copyright Infringement,” AP News, April 30, 2024, https://apnews.com/article/chatgpt-newspaper-copyright-lawsuit-openai-microsoft-2d5f52d1a720e0a8fa6910dfd59584a9; James Vincent, “AI Art Tools Stable Diffusion and Midjourney Targeted with Copyright Lawsuit,” The Verge, January 16, 2023, https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart; Andrew Albanese, “Court Trims Authors’ Copyright Lawsuit Against OpenAI,” Publishers Weekly, February 14, 2024, https://www.publishersweekly.com/pw/by-topic/industry-news/publisher-news/article/94342-court-trims-authors-copyright-lawsuit-against-open-ai.html.
5. Rose Eveleth, “The Fanfic Sex Trope That Caught a Plundering AI Red-Handed,” Wired, May 15, 2023, https://www.wired.com/story/fanfiction-omegaverse-sex-trope-artificial-intelligence-knotting/; Teresa Xie and Isaiah Poritz, “Creator of Buzzy ChatGPT Is Sued for Vacuuming up ‘Vast Amounts’ of Private Data to Win the ‘A.I. Arms Race,’” Fortune, June 28, 2023, https://fortune.com/2023/06/28/openai-chatgpt-sued-private-data/.
6. Mel Stanfill, Rock This Way: Cultural Constructions of Musical Legitimacy (Ann Arbor: University of Michigan Press, 2023).
7. Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994).
8. Andrew R. Chow, “The Scarlett Johansson Dispute Erodes Public Trust in OpenAI,” Time, May 21, 2024, https://time.com/6980710/scarlett-johansson-open-ai-sam-altman-trust/.
9. “How Do I Cite Generative AI in MLA Style?,” MLA Style Center (blog), March 17, 2023, https://style.mla.org/citing-generative-ai/.
10. Henry Jenkins, Textual Poachers: Television Fans & Participatory Culture (New York: Routledge, 1992); Allegra Rosenberg, “Custom AI Chatbots Are Quietly Becoming the Next Big Thing in Fandom,” The Verge, March 13, 2023, https://www.theverge.com/23627402/character-ai-fandom-chat-bots-fanfiction-role-playing.