Paranoid Pedagogy; or, Learning to Trust the Process
Last Christmas, I was back home in Philadelphia and playing ping-pong with my 18-year-old cousin. She had just completed her first semester of college, and I was eager to hear her impressions of life after high school. While she had thoroughly enjoyed her initial months of college life, one particular course had made her incredibly anxious. Her professor had chastised the class for (allegedly) using generative artificial intelligence on their term papers. He warned them that not only was it unethical, but it also undermined the objectives of a college education. But, he finally relented, if students came to his office hours and admitted they had used generative artificial intelligence to complete their term papers, he would not fail them. My cousin insisted she personally had not used such software, but she feared she might be accused of doing so and had considered going to talk to him nevertheless. I told her she did the right thing in not going: while he may have had his suspicions, he clearly did not have evidence and was depending on students to confess rather than providing any meaningful justification for his doubts. The fact remains, I assured her, that professors have no reliable way yet of knowing when and to what extent students have used artificial intelligence in their assignments.[1]
As her (much) older cousin, I felt protective of her in the face of the unfounded accusations recklessly lobbed at her and her classmates by their professor, a knowledgeable expert entrusted with power and authority over his students. I had serious ethical reservations about how he chose to address his presumptions. To my mind, it represents the latest strain of what I call “paranoid pedagogy,” an approach to teaching in which instructors are convinced that students are trying to cheat, lie, deceive, and shortcut their way around their education. Such instructors therefore become deeply suspicious, and they take an aggressive and/or dismissive approach toward students and toward teaching, which can leave students feeling infantilized, attacked, and alienated from their instructor. The trust at the center of the student-teacher relationship is potentially replaced with bitter skepticism as the latter imagines the former is not acting in good faith. This conspiratorial thinking is not only toxic; it is deleterious to our working relationships with our students. Elvis Presley taught us this fact when he sang, “We can't go on together / With suspicious minds.” (OK, not really, but you see my point?)
Nevertheless, I also felt some empathy for this professor’s frustrations, which are not trivial. We reasonably expect students will produce original work, characteristic of their capabilities as well as their engagement with course content. And generative artificial intelligence can be misused by some students to act in bad faith and falsify their work. That said, what are the consequences, ethically and practically speaking, of a pedagogy grounded in doubt, cynicism, and defensive tactics? And how might such efforts corrupt the rapport we work so hard to develop and maintain with our students? Most importantly for me, how might those of us invested in an ethic of care navigate the challenges that generative artificial intelligence poses?
There is no easy answer here. I teach at a research-intensive liberal arts university in the United States, and with my 2/2 course load, I usually have 50–75 students a semester. This reality affords me the privilege to design my courses and manage my workload in ways that will become clear below. My comments are not meant as directives so much as suggestions for readers to consider and adapt to their own purposes.
For many humanities professors, the essay remains the most popular assignment—and with good reason. It is an intimate genre, reflective of a student's personal, intellectual engagement with the material. It also provides a seemingly straightforward way to assess the originality, depth, and clarity of a student's thought. Few of us want to abandon it. But in 2016, Adam J. Banks called for the traditional essay to be retired and designated as “dominant genre emeritus.” As he astutely observes,
If we take the literacies, the abilities, that we use the essay to build with students: ethical source use; connection to scholarly community; the ability to value other voices, including those with whom we disagree; the ability to develop compelling support for an idea; experimentation with different rhythms and organizing strategies in our prose, we should free ourselves enough to understand that we can work on those literacies and critical understandings with students in almost any genre of writing or production.[2]
Furthermore, essay-writing is one of the easiest tasks for generative artificial intelligence to carry out for users. As teachers committed to strengthening students’ communication skills, we face a challenge in how to teach something that is now facilitated and even executed by these emerging technologies.
Before going further, I want to pause and point out some curious similarities between students and generative artificial intelligence. Both are trained by taking in large amounts of text to familiarize themselves with a mode of discourse. Both are then asked to create original discourse based on patterns and consistencies within what they have taken in. Both struggle to do so in a way that is not forced or stilted.[3] Perhaps it is unsurprising that students may turn to generative AI: in addition to saving time, it may save them the frustration of trying (and potentially failing) to produce academic discourse. But if this admittedly broad comparison holds up, two possible solutions may alleviate students' frustration and their temptation to cheat. One, we must allow them to write in their own voices with their own Englishes, not some academic discourse they learn and try to mimic.[4] Two, we must create a space where they can falter. I do not say “fail” because failure implies permanence; falter suggests stumbling while leaving room for a student to get back up again. Here again, two strategies are possible. One, have the class form a rubric for an assignment based on the teacher-generated prompt and a group discussion of the assignment and its objectives. What is the prompt asking for? What are the priorities of this assignment? What skills can students be fairly assessed on here? The rubric need not be quantitative; rather, it can be qualitative, supplementing the marginalia and/or endnote and shaping rather than determining the final assignment grade. Two, create a space for revision and resubmission. This can create additional labor for the instructor, but I ask students to highlight their changes and briefly explain and reflect upon them in an accompanying memo. By having the student guide us through their revisions, this memo helps to manage the additional grading labor that resubmission creates.
Instructors worried about what generative AI means for students' writing might follow rhetoric and composition's lead in rethinking what we mean when we say we teach students writing. In particular, does student writing mean the traditional essay alone? How might I invite, even challenge, my students to conduct research-based inquiry and demonstrate argumentation through video essays, audio essays, even PowerPoints or magazines? I resist restricting students' expression of sustained analytical thinking to the essay alone. In fact, I have often found that students who wrestle with the traditional essay feel a level of comfort and competency with these alternative modes. While they might not be writing in the traditional sense, they are nevertheless conducting original research, selecting and organizing that information, and presenting an argument in a clear and logical fashion.
Another avenue to pursue is investigating what artificial intelligence cannot do well—and, unsurprisingly, it is quite a lot. The research here is still emerging, and programmers are continuing to develop and improve artificial intelligence. But a survey of articles, blog posts, even social media discourse reveals some common threads.[5] One, artificial intelligence is still learning to learn and adapt to different rhetorical contexts. Two, artificial intelligence has issues understanding emotions and ethics. Three, artificial intelligence struggles with causality—that is, justifying its decisions and explaining how things happen. So while artificial intelligence can generate quite a bit, what it struggles with are often foundational dimensions of a liberal arts education, including the histories and traditions, forms and aesthetics, politics and ideologies, critical thinking skills, and humanist values we teach and study in critical film and media studies.
The difficulties that generative artificial intelligence has with context, emotion, logic, and process lead me again to rhetoric and composition, a field that has long studied writing and communication in a humane, holistic manner. Process pedagogy, in particular, centers the writer-as-communicator finding their voice and working to strengthen their work through revision and reflection. Writing in 1972, Donald Murray noted, “No matter how careful our criticisms, they do not help the student since when we teach composition we are not teaching a product, we are teaching a process.”[6] To this end, he reminds us that we should grade not how well students have thought but how well they are thinking. Our emphasis falls not just on the final result but on the gradual process of invention and development, trial and error, reimagining and refashioning that goes into writing. This is “teach a [person] to fish” thinking in action: I want to help students produce a polished piece of writing, but, perhaps more importantly, I want to help them develop an effective process for assessing, understanding, and completing other writing tasks. As Murray asserts, “What is the process we should teach? It is the process of discovery through language. It is the process of exploration of what we know and what we feel about what we know through language. It is the process of using language to learn about our world, to evaluate what we learn about our world, to communicate what we learn about our world.”[7]
What does this look like in practice? To begin, it means dividing an assignment into multiple steps and stages. Many of us already do this: proposal, annotated bibliography, outline, draft, final. The benefit here is twofold. One, it allows us to follow the paper as it develops and to offer gentle corrections and questions to help the writer. Two, generative artificial intelligence may have more difficulty with these preliminary stages to the extent they require students to demonstrate exploration and invention. Returning to the concerns above, I add the process memo (or reflection statement), a short, informally written piece that lets me see how the student developed the essay they are submitting. How did they understand the assignment and its goals? Where did they begin? What obstacles did the writer encounter, and how did they address them? How might they handle the next stage—or future assignments—differently as a result? The value of reflection, as Kathleen Blake Yancey has observed, is its ability not only to encourage students to think about their writing but also to show us as their teachers what students may not have learned yet. She concludes, “Through reflection we learn what we know now, and we begin to understand what we need to learn next.”[8]
At this point, some readers may be intrigued but flummoxed by the potential workload this approach entails. Our teaching loads can vary considerably from institution to institution. One thing to note here is the value of short, pointed feedback. I have graded term papers with extensive marginalia and perhaps an endnote, only for students, when I return them, to flip to the grade, smile or shrug, then toss the papers in the wastebasket on the way out the door. Who can blame them? After all, the grade is set, and all that feedback justifying it is ultimately a rationale for penalty rather than a roadmap for improvement. What if that energy were instead distributed throughout the writing process, making brief suggestions and posing questions for consideration rather than exhausting ourselves to prove why we gave the grades we ultimately gave? And what if students had the option to apply that feedback and explain how they did so?
I also have reconsidered what feedback may look like. I separate it from grading, offering comments without assigning a grade, only a complete or incomplete. I occasionally assign peer review, in which students exchange assignments out of class or in class, reserving time for me to circulate, check in, and answer questions. Time permitting, I enjoy giving verbal feedback, setting up individual or group meetings, reviewing assignments with the student(s) present, and modeling how to give feedback productively. While this tactic can be time-consuming, it can help us reconceive of assignments, office hours, and assessment, particularly in our smaller courses. At the very least, it provides a level of attention and care that will hopefully compel students to see their work and themselves as active participants in the course and their education. Rather than imagining my students as prone to deception and corner-cutting, I hope this approach exhibits grace and provides guidance that might assuage the kinds of anxieties and uncertainties that lead some students to use generative artificial intelligence. Personally, I cannot and will not build a pedagogical praxis that treats all students with suspicion because an undetermined/undeterminable contingent may be using such tools. I do not have the time, energy, or institutional support to relentlessly pursue and police my students—nor should I want it.
Of course, process pedagogy cannot redress the myriad concerns that the introduction of generative artificial intelligence raised—or, more accurately, revived. Furthermore, as Lad Tobin notes, process pedagogy has been roundly critiqued within rhetoric and composition for its disregard for teaching grammar and mechanics, its inattention to how positionality inflects a writer's work, and its focus on individual writers rather than the inherently social and transactional nature of composition.[9] But at its core, process pedagogy reminds us to be reflective, self-critical, and recursive in our teaching—to see it, too, as an ongoing process in need of examination and improvement.
Sam Altman, CEO of OpenAI, the company behind ChatGPT, has compared it and similar large language models (LLMs) to calculators that allow us to do the work we have always done, but now more easily and efficiently.[10] The comparison is not completely fair: a calculator will always deliver the same outputs for the same inputs, regardless of past use, whereas LLMs can be trained on new inputs to produce different, ideally better, results that more accurately mimic the requested styles and discourses. But the calculator comparison may be apt to the degree it calls upon us to encourage our students to show their work. Mathematics teachers often require students to show the moves and decisions that led to the result, breaking down the process that the calculator performs so ably (and somewhat surreptitiously). Process pedagogy, in effect, asks the writer to show their work: breaking down their decision-making and their writing process, explaining their successes and failures en route to the final product.
For better or worse, teaching remains a Sisyphean task for me. I will never perfect it, but each semester poses the opportunity to try again and to try something different. Despite the pressures and anxieties that come with generative artificial intelligence, it is only the latest technology to reshape teaching and learning. Rather than submitting to the temptation to treat our students with suspicion, we must proceed in good faith and encourage them to do so as well. We have been entrusted with their education; in turn, we must imagine our students as trustworthy.
Acknowledgements
The author wishes to thank Matthew Davis of the University of Massachusetts-Boston.
Peter C. Kunze is assistant professor of communication at Tulane University. His first book, Staging a Comeback: Broadway, Hollywood, and the Disney Renaissance, was published by Rutgers University Press in 2023.
[1] Geoffrey A. Fowler, “Detecting AI May Be Impossible. That's a Big Problem for Teachers,” Washington Post, June 2, 2023, https://www.washingtonpost.com/technology/2023/06/02/turnitin-ai-cheating-detector-accuracy/; and Lauren Coffey, “Professors Cautious of Tools to Detect AI-Generated Writing,” Inside Higher Ed, February 9, 2024, https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/02/09/professors-proceed-caution-using-ai.
[2] Adam J. Banks, “Dominant Genre Emeritus: Why It's Time to Retire the Essay,” CLA Journal 60, no. 2 (December 2016): 179–180. See also Kathleen Blake Yancey, “Made Not Only in Words: Composition in a New Key,” CCC 56, no. 2 (December 2004): 297–328, http://dx.doi.org/10.2307/4140651; Elizabeth Wardle, “‘Mutt Genres' and the Goals of FYC: Can We Help Students Write in the Genres of the University?” CCC 60, no. 4 (June 2009): 765–789, http://dx.doi.org/10.58680/ccc20097196; and Jody L. Shipka, Toward a Composition Made Whole (Pittsburgh: University of Pittsburgh Press, 2011).
[3] Consult David Bartholomae, “Inventing the University,” in When a Writer Can't Write: Studies in Writer's Block and Other Composing-Process Problems (New York: Guilford Press, 1985), 134–166.
[4] MLA-CCCC Joint Task Force on Writing and AI, “Generative AI and Policy Development: Guidance from the MLA-CCCC Task Force,” Modern Language Association and Conference on College Composition & Communication (April 2024): 15. See also Geneva Smitherman, “‘Students' Rights to Their Own Language': A Retrospective,” English Journal 84, no. 1 (January 1995): 21–27, https://doi.org/10.2307/820470.
[5] Rob Toews, “What Artificial Intelligence Still Can't Do,” Forbes, June 1, 2021, https://www.forbes.com/sites/robtoews/2021/06/01/what-artificial-intelligence-still-cant-do/; Kai-Fu Lee and Chen Qiufan, “What AI Cannot Do,” Big Think, January 19, 2022, https://bigthink.com/the-future/what-ai-cannot-do/; Lydia Smith, “What AI Can't Do at Work,” Yahoo Finance UK, January 29, 2024, https://uk.finance.yahoo.com/news/ai-work-artificial-intelligence-060059289.html; and Laurence Santy, “14 Things AI Can—and Can't Do (So Far),” Invoca, March 5, 2024, https://www.invoca.com/blog/6-things-ai-cant-do-yet.
[6] Donald M. Murray, “Teach Writing as a Process Not Product,” in Cross-Talk in Comp Theory: A Reader, 2nd ed. (Urbana: NCTE, 2003), 3.
[7] Murray, “Teach Writing as a Process Not Product,” 4.
[8] Kathleen Blake Yancey, Reflection in the Writing Classroom (Logan, UT: Utah State University Press, 1998), 143.
[9] Lad Tobin, “Process Pedagogy,” in A Guide to Composition Pedagogies (New York: Oxford University Press, 2001), 10–13.
[10] Sam Altman, quoted in Clea Simon, “Did Student or ChatGPT Write That Paper? Does It Matter?,” Harvard Gazette, May 2, 2024, https://news.harvard.edu/gazette/story/2024/05/did-student-or-chatgpt-write-that-paper-does-it-matter/.