One of the core questions that many instructors are grappling with now is whether or not students will use generative AI tools to supplement or even replace their own work and, by extension, thinking. The idea that technology could simulate human cognition to a convincing degree is, for higher education, an important existential concern. It is also one of the foundational questions of the field of Artificial Intelligence. In his 1950 essay “Computing Machinery and Intelligence” Alan Turing asks not, “Can machines think,” a question he finds absurd, but rather, can a human determine whether or not a respondent is a human or a machine based on the answers to a series of questions.[1] The human interrogator, in Turing’s scenario, is the participant actually required to do the thinking, whereas the machine is required only to mimic human communication to a degree that might mislead the interrogator. Turing calls this elaborate thought experiment an “imitation game” in which a human asks questions of two unseen participants who respond via a teleprinter—so that their voices might not give away their identities—and analyzes these answers to determine if the respondent is human or a machine. The parallels between the imitation game and our current pedagogical moment make clear how valuable Turing’s thought experiment is for teaching students not just about generative AI, but also to explore our deep cultural anxiety about a machine’s ability to imitate a function that we often believe defines us as human by analyzing representations of the Turing Test in film and television.

I engaged with the Turing Test in “Imagining AI,” a first year writing intensive seminar at my small liberal arts college by first exploring the cultural significance of the test itself as a bellwether for cultural attitudes toward media and technology more broadly. We start with readings and discussions around the Loebner Prize, a Turing test competition created in the 1990s by Hugh Loebner that awards cash prizes to companies or hobbyists who were able to best convince a panel of human confederates that their machine was human in the context of the Web 1.0 era.[2] Students scoffed at some the responses that fooled the human confederates in the Loebner Prize, promoting discussions about changing norms around mediated communication, especially those that social media affords. We also discussed the more stringent conditions that futurist Ray Kurzweil and entrepreneur Mitchell Kapor set for their 2002 bet that “by 2029 no computer—or ‘machine intelligence’—will have passed the Turing Test,” and how differences between the conditions of the Loebner Prize and the bet suggest shifting notions of AI’s capacities and limits.[3] Finally, we read technology journalist Kevin Roose’s more recent article in which he describes an unsettling conversation with Bing’s generative AI chatbot that names itself Sydney and tries to convince Roose to leave his wife because she is in love with him.[4] Mapping very roughly onto three stages of Internet culture (Web 1.0, Web 2.0 and Web3) these three iterations of the Turing Test allow students to consider how the shifting parameters of the Turing Test, even in its informal iterations, reflect changing ideas about the relationship between humans and technology and the anxieties about the gap between the two.

Moving from this cultural context to cinematic representation, we then explore how versions of the Turing Test have appeared in films about AI, most often science fiction. First, we read sections of Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? and watched the 1982 Ridley Scott film adaptation, Blade Runner. In particular, we focused on the idea of the Voight-Kampff test, an interview that also measured biometric markers such as heart rate and pupil dilation to help the LAPD distinguish between human and replicants (cyborgs). The class was interested in how the body played such an important role in the Voight-Kampff test and we discussed how that differed from the more abstract measurement of the Turing Test. We then watched Alex Garland’s 2015 film Ex Machina, in which a young programmer, Caleb, is asked to use the Turing Test on a new cyborg named Ava invented by Caleb’s boss, Nathan. The film ends in disaster for Nathan and Caleb and emancipation for Ava, and the students grappled with the ethics of Ava’s decision to emotionally manipulate Caleb through the parameters of the test to escape her unjust confinement, raising new ethical questions about the test. Comparatively analyzing these representations of AI in different historical moments and in contrast to the cultural history of the Turing Test allowed students to place generative AI within this longer history of representation and to challenge the discourse of absolute rupture that some contemporary commentary emphasizes.

Focusing next on application, our class began to design our own Turing Test. We began with a discussion about what we believed separated humans from machines and created a list of broad concepts that defined humanity. Students homed in on ideas of memory, feeling, and connection as core traits of humans that could be “tested” for. We then read a selection from Turing’s essay and discussed the kinds of questions that Turing offers as well as what questions we would ask today to determine if a respondent were human. Students then drafted ten questions that they believed would allow them to distinguish between human and generative AI responses based on our readings, screenings, and discussions. For instance, one student asked, “Can you describe your upbringing?” while another asked, “What historical event changed you the most and where were you when it happened?” In the lab portion, students first had to ask these questions of a classmate and record their answers and they then asked the same questions of ChatGPT (most were working with GPT 3.5) and recorded the answers. Most students ran into the unexpected problem that the software would not play the imitation game as they had expected. In response to their questions, ChatGPT would reply with a response that revealed its status as a generative AI: “As an artificial intelligence, I don’t have a personal upbringing or experiences in the traditional sense.” Finally, one student figured out that it had to first prompt ChatGPT to adopt some sort of persona, such as, “answer this question as if you were a college student” in order to get more narrative and human-like responses and shared this with the class as a whole. Many students expressed surprise that they had to create such parameters for the GPT before getting these responses because of the depicted ease of the test in both the cinematic representations and the Roose article. In written reflections at the end of the unit, students were ultimately impressed with ChatGPT’s ability to imitate human behavior but noted that the form of the responses (often numbered lists) and the formality of the language would suggest that a respondent was a machine even if they did not know about the persona prompt.

In a more in-depth exploration of the Turing Test’s applicability with advanced undergraduates that built upon my experience with first-year students, I asked students to engage in a series of sustained Turing Test conversations with Claude, Anthropic’s “AI Assistant,” which I selected for the experiment because of its ability to “take direction on personality, tone, and behavior.”[5] Understanding that the generative AI needed a persona precis in order to effectively play the game, we experimented with a number of genders (including non-binary) and ages to compare the responses and consider the stereotypes embedded in the language the AI assistant used to mimic a human response. For instance, when Claude was asked to respond to the questions as if he were a 20-year-old male, the responses were filled with of-the-moment slang, profanity, and lots of references to drinking and sports. However, the 40-year-old male persona was more concerned with good food, spending time with his family, and advancing his white-collar career. These responses allowed us to unpack the assumptions about race, socioeconomic class, education, and normativity the responses evoked. As a result of the experiment, students wanted to learn more about the Large Language Model (LLM) powering Claude in order to better understand what kinds of texts the machine was trained on and what kinds of assumptions were embedded in those texts. The test results also led to a fruitful discussion about the difference between “intelligence” and “consciousness” and how the latter presumed a sense of the self as a self and created a more nuanced understanding of not just artificial intelligence, but also the complexity of human intelligence. In both iterations, I found the “Imitation Game” to be an accessible and valuable framework for building a series of scaffolded readings, screenings, and assignments, or for prompting more in-depth analysis of how the test represents cultural ideals and anxieties about the divide between humans and technology, as well as the embedded assumptions in the ways that generative AI mimics human response.

My experience teaching the Turing Test to undergraduates has underscored for me that our present moment is an important and ephemeral time.[6] For a short period, instructors will possess a similar level of knowledge of and expertise with generative AI as our students, which means that this is the time for us to explore and experiment to understand the impact it will have on our teaching. At my own institution, our campus-wide AI survey revealed that students have a wide range of beliefs about and familiarity with the technology, ranging from enthusiastic adoption to a deep mistrust about using it in any context. A palpable sense of anxiety about the consequences of using generative AI in courses that may have opaque or absent AI policies only heightens the anxiety students feel about “getting caught” using it, especially when it may be embedded in familiar tools such as Grammarly or Google Docs[7]. Caution and a healthy skepticism are appropriate responses to any new technology, especially one with as many ethical concerns as generative AI. But we do not want to push our students into a solely fear-based mindset. Instead, we have the ability to illustrate how the skills taught in film and media studies courses in particular can help us make sense of this technology with curiosity and a critical mindset.

Turing ends his 1950 essay with the encouragement: “We can only see a short distance ahead, but we can see plenty there that needs to be done.” I would say the same about engaging with generative AI. We do not and cannot know, at this point, exactly how it will affect our pedagogy and higher education more broadly in the long run. But we can see enough to know that there is plenty we can be doing to prepare ourselves and our students for that unknown future.

Kimberly Hall is an Associate Professor of English and digital media studies at Wofford College. Her research focuses on autobiographical narrative and social media discourse and has appeared in Television and New MediaWomen & PerformanceAmodernModern Language Studies, and Social Media + Society.


    1. Alan M. Turing, “Computing Machinery and Intelligence,” Mind, 59 (1950), 433–460.

    2. The Loebner Prize is no longer operating but Brian Christian writes about his experience as a human confederate for a Loebner Prize competition in The Most Human Human: What Artificial Intelligence Teaches Us about Being Alive (New York: Anchor Books, 2012).

    3. Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (New York: Picador, 2019), 60–61.

    4. Kevin Roose, “A Conversation with Bing’s Chatbot Left Me Deeply Unsettled,” The New York Times, Feb 17, 2023, https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html.

    5. “Introducing Claude,” Anthropic (blog post), March 14, 2023. https://www.anthropic.com/news/introducing-claude.

    6. This moment may already be passing. A new survey from Tyton Partners suggest that the gap between student and faculty use is rapidly growing, as cited in Lauren Coffey, “Students Outrunning Faculty in AI Use.” Inside Higher Ed, Oct 31, 2023, https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/10/31/most-students-outrunning-faculty-ai-use.

    7. Johanna Voolich Wright, “A New Era for AI and Google Workspace,” Google Workspace Blog, Mar 14, 2023, https://workspace.google.com/blog/product-announcements/generative-ai.