Introduction

In today's rapidly evolving technological landscape, artificial intelligence (AI) has become an integral part of our daily lives, altering the way we interact with information, entertainment, and communication.[1] As a result, AI literacy, itself a component of critical media literacy, has emerged as an essential skill for students in higher education.[2] AI literacy encompasses the understanding of AI concepts, applications, and implications, enabling students to navigate and critically engage with the AI-driven world effectively.[3] Critical AI literacy, as Maha Bali argues, goes further: it involves “not only understanding how AI systems work but also critically examining their social and ethical implications, as well as the ways in which they can perpetuate or amplify existing biases and power structures.”[4] In a multimodal environment where meaning is conveyed through various forms of media, including text, images, videos, and audio, AI plays a significant role in shaping and delivering content.[5] From personalized recommendations on streaming platforms to AI-generated news articles and social media feeds, students must possess the knowledge and skills to discern the influence of AI on the information they consume.[6] By incorporating AI literacy into higher education curricula, institutions can empower students to become informed critical thinkers and active participants in an AI-driven society.[7]

This article brings together two approaches to incorporating AI in higher education. In the first section, Suriati explores generative artificial intelligence for teaching social justice concepts in a public higher education institution. In the second section, Iskandar, who teaches at a liberal arts college, shares his use of Hypothesis, a social annotation tool, to identify patterns in AI-generated essays and to analyze their potential biases.[8] Drawing on their experience integrating these tools into their course assignments, both authors offer recommendations for educators who want to experiment with AI tools to develop effective critical AI literacy pedagogy.

Developing AI-Generated Superhero Characters 

Suriati designed an innovative activity for her “Diversity and Teaching” course that incorporates generative AI to explore Kimberlé Crenshaw’s concept of “intersectionality,” which examines how social identities such as race, gender, class, and sexuality intersect to create unique experiences of discrimination and privilege.[9] Prior to this activity, the students, who were sophomores when the course was taught, watched a short video clip that uses the metaphor of a crossroads to describe intersectionality. Suriati asked them to unpack the concept based on what they understood from the video. As a follow-up, they worked in randomly assigned groups of three or four to create a superhero or heroine character of color using a generative AI tool.

The instructions were as follows:

Using DALL·E or any other generative AI tool, create an image of a superhero or heroine character that represents an imagined identity that differs from your own.

Write a character profile for your superhero or heroine and include the following information:

a. Name and alter ego

b. Origin story

c. Powers and abilities

d. Weaknesses

e. Physical appearance

f. Personality traits

g. Allies and enemies

h. Affiliations

In 500 words, analyze how the concept of intersectionality complicates the identity of your superhero or heroine character. Consider the following questions:

a. What aspects of your character’s identity intersect (e.g., race, gender, class, sexuality, ability, etc.)?

b. How do these intersecting identities shape your character's experiences, challenges, and opportunities?

c. How might your character's intersectional identity influence their role as a superhero or heroine?

d. What unique perspectives or strengths does your character's intersectional identity bring to their superhero or heroine role?

The students had thirty minutes to generate the character and add their character profile to Padlet, an online platform for organizing information. During the next thirty minutes, students discussed how the concept of intersectionality complicated the imagined identity of the character. Each group then presented their AI-generated superhero or heroine character and explained the concept of intersectionality to the class (see the instructor’s example in Fig. 1 and Text 1). The lesson continued virtually with students participating in an online discussion forum on the social and ethical issues and implications of using generative AI.

Fig. 1: An AI-generated superheroine with the character profile (Abas, 2024)

Text 1. An example that explains the concept of intersectionality

Luminous, aka Dr. Amira Patel, is a superhero whose identity is complicated by the intersectionality of her various attributes. As a woman of Indian descent in the field of astrophysics, Amira faces challenges related to her gender and race in a predominantly male and white profession. Her experiences as a woman of color in STEM shape her perspective and drive her to use her powers and intelligence to promote diversity and equity. Moreover, Amira's identity as a scientist intersects with her role as a superhero. Her analytical mind and problem-solving skills, honed through her scientific background, influence her approach to fighting crime and protecting the innocent. However, this intersection also creates tension as she strives to balance her responsibilities as a researcher and a superhero.

Amira's powers, derived from cosmic radiation, add another layer to her intersectional identity. As Luminous, she becomes a symbol of hope and enlightenment, using her light-based abilities to combat darkness and ignorance. Her powers also connect her to a larger cosmic community, where she must navigate the complexities of being an Indian woman in a universe filled with diverse beings and cultures. The intersection of Amira's gender, race, profession, and superhero status creates a unique set of challenges and opportunities. It allows her to bring diverse perspectives to her heroic work and to serve as a role model for underrepresented groups in both science and the superhero community. Through her intersectional identity, Luminous embodies the power of diversity and the importance of representation in all fields.

During their initial attempt, the students used broad keywords, treating the AI tool like a Google search, and the resulting images did not match their expectations. After Suriati provided explicit prompt examples and outlined the basic elements to include in a prompt, the groups developed more concise and creative prompts, resulting in characters that better represented their intended identities.[10]

Based on these observations, Suriati recommends the following pointers for using generative AI with students:

  • Familiarizing students with AI tools. To begin, introduce one generative AI tool, such as DALL·E. Assign an in-class activity that aligns with the course content or that requires students to use generative AI for the course. Invite them to articulate their accomplishments and challenges so that you can support them. Then provide links to other generative AI tools for them to explore independently and use in their assignments where appropriate.
  • Providing examples of effective prompts. As with any other form of writing, students can instruct generative AI more effectively when they understand its conventions. For example, show them how to be more specific in their prompt descriptions, how to highlight elements such as artistic style (e.g., painting, photography, sketch, or outline), and how to build on descriptions from previous prompts.
  • Emphasizing creativity and originality. The potential of generative AI lies in its ability to enhance creativity and originality in character development. Encourage students to use AI as a tool to explore unique concepts, perspectives, and visual representations that may not have been possible through traditional means. Emphasize the importance of using AI as a starting point for ideation and iteration, rather than relying solely on the AI-generated outputs. Guide students to critically evaluate and refine the AI-generated content, incorporating their own creative vision and personal touches to create truly original and distinct characters.

By engaging with generative AI to create superhero/heroine characters, students gain hands-on experience of using AI technologies and learn to craft effective prompts. Exploring intersectionality through AI-generated characters allows students to critically examine how AI can perpetuate or challenge existing biases and stereotypes, fostering a deeper understanding of AI's potential social and ethical implications. Encouraging students to analyze and reflect on the AI-generated characters through this lens will promote a more nuanced understanding of AI’s capabilities and limitations, ultimately empowering them to use AI responsibly and ethically.

Analyzing AI-Generated Essays with Hypothesis 

Iskandar designed a course assignment for his “Introduction to Media and Society” survey course in which students examine AI-generated essays using Hypothesis, a social annotation tool with which users collaboratively annotate a piece of digital text (e.g., a web page, PDF, or YouTube video) by highlighting the text, adding comments, and engaging in discussions directly on the text itself. The assignment was designed to develop students’ ability to describe and analyze media technologies and practices in their social and historical contexts, one of the general objectives of the course. In particular, the assignment aims to develop students’ ability to identify forms and patterns generated by large language model (LLM) tools and to evaluate them for accuracy, potential bias, and “AI hallucination,” or untrue claims presented as facts.[11] The assignment thus goes beyond deterring students from cheating with AI tools; it is intended to offer an introspective look at students’ everyday engagement with AI as a contemporary media technology.

In preparation for the AI-generated essay assignment, students explore a variety of concepts related to artificial intelligence in a two-week unit on algorithmic cultures. In this unit, students discuss, among others, Safiya Noble’s concepts of algorithmic oppression and technological redlining to understand how algorithms can reinforce structural oppression of minorities and people of color.[12] They also watch Shalini Kantayya’s Coded Bias (2020), which follows data scientists and watchdog groups around the world whose activism exposes the dangers and discrimination embedded in the algorithmic systems now prevalent across everyday life. In addition, students examine case studies on issues of human labor in LLM development. The unit invites students to familiarize themselves with these concepts and issues and to contextualize the ways in which artificial intelligence technologies open certain exploratory opportunities while foreclosing others.

Students complete the AI-generated essay assignment in two phases. In the first phase, they select a previously read essay or book chapter from the course syllabus and use an LLM tool to generate a 300-word reflection paper. Iskandar gives a quick hands-on demonstration in class on how to write a prompt for an AI-generated reflection paper to ensure students understand the parameters and expectations of the assignment. In the second phase, students are randomly assigned their peers’ AI-generated essays, with personal information removed, and annotate them using Hypothesis to evaluate their quality and accuracy, considering both strengths and weaknesses.

The assignment yielded interesting results. Students detected that LLM tools like ChatGPT are somewhat proficient at summarizing and organizing essays but tend to use “big words” without addressing the original reading’s arguments in detail. Some students identified cases of AI hallucination, such as misrepresented content or made-up titles and authors. In this respect, students demonstrated how a teacher can generally tell whether an essay has been written with an LLM tool. However, students were not yet adept at detecting the “hegemonic biases” encoded in LLM systems, as Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell describe them.[13] This result calls for further exercises to hone students’ critical AI literacy skills. Iskandar considers adding another element to his AI-generated essay assignment: the Giant Language Model Test Room (GLTR), a color-coded visual forensic tool for detecting text automatically generated by large language models, to further examine the hegemonic biases potentially encoded in LLM tools.[14] Students could analyze the color-coded rendering of an AI-generated essay to see how an LLM tool would have predicted a word at each position in a sentence based on its training data, and what biases those predictions might reveal. What does it mean when a text is highlighted mostly in green, the color code for the ten most likely words to appear? And what does it mean when it has more purple or red highlights, the color codes for unlikely predictions? Does the color coding reveal anything about culture-bound biases, like the difference between using the word “undocumented” and “illegal” to characterize immigrants?
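GLTR’s core idea can be sketched in miniature. The toy Python snippet below is not the actual tool, which queries GPT-2 for its real next-word probability distributions; here, a small hand-made prediction table stands in for the model, and each word in a text is bucketed into GLTR’s color scheme according to its rank among the model’s predicted candidates. All names in the sketch (`PREDICTIONS`, `color_for_rank`, `annotate`) are illustrative inventions, not part of GLTR itself.

```python
# Toy illustration of GLTR's approach (not the real tool): for each word
# in a text, find where a language model would have ranked that word
# among its predictions for that position, then map the rank to a color.
# A tiny hand-made table of ranked candidates stands in for GPT-2.

# Hypothetical per-position predictions, each listed from most to least
# likely, as a stand-in for the model's output at that position.
PREDICTIONS = [
    ["the", "a", "an"],                    # candidates for position 1
    ["immigrants", "people", "workers"],   # candidates for position 2
    ["are", "were", "remain"],             # candidates for position 3
]

def color_for_rank(rank: int) -> str:
    """Map a word's prediction rank to GLTR's color buckets."""
    if rank < 10:
        return "green"    # among the top 10 predictions: "expected" word
    elif rank < 100:
        return "yellow"
    elif rank < 1000:
        return "red"
    return "purple"       # an unlikely, "surprising" word

def annotate(tokens):
    """Pair each token with its color, GLTR-style."""
    out = []
    for token, candidates in zip(tokens, PREDICTIONS):
        # Rank = the word's position in the ranked candidate list; words
        # the toy model never predicted get a large "surprising" rank.
        rank = candidates.index(token) if token in candidates else 10_000
        out.append((token, color_for_rank(rank)))
    return out
```

In an actual analysis, a text highlighted mostly in green was drawn largely from the model’s most probable words, a telltale sign of machine generation, while culture-bound bias would surface in which characterization (e.g., “undocumented” versus “illegal”) the model ranks as most likely.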

In designing their course assignments, Suriati and Iskandar aim to provide students with meaningful opportunities to engage with the current development of AI as part of their deep learning process. They share the same pedagogical objective: resisting the allure of AI “hype” machines while avoiding the pitfalls of groundless criticism about the existential dangers of AI to humanity. Both educators allow their students to independently examine the social and technological affordances of AI tools such as ChatGPT or Midjourney through their assignments’ design, encouraging the development of critical AI literacy.

Conclusion

The two classroom examples presented in this article demonstrate the importance of integrating AI literacy and critical AI literacy into higher education curricula. By engaging students with generative AI tools and social annotation platforms, educators can foster a deeper understanding of AI’s capabilities, limitations, and potential biases. Suriati’s activity on developing AI-generated superhero characters encourages students to explore the concept of intersectionality while gaining hands-on experience with AI technologies. Iskandar’s assignment on analyzing AI-generated essays with Hypothesis, meanwhile, enables students to identify patterns and evaluate the accuracy and potential biases of LLM tools, such as the words the tools favor and the confidence with which they make statements unsupported by the source text.

Both approaches emphasize the need for students to develop critical thinking skills in an AI-driven world. As AI continues to shape various aspects of our lives, it is essential for students to possess the knowledge and skills to navigate and critically evaluate the information that AI systems generate. By incorporating critical AI literacy into their pedagogy, educators can empower students to become informed and active participants in an AI-driven society rather than easy prey to an existential AI panic.

Lastly, the reflections Suriati and Iskandar provide highlight the importance of ongoing experimentation and refinement in AI literacy education. As AI technologies continue to evolve, educators must adapt their teaching strategies to keep pace with these developments. By sharing their experiences and insights, the authors contribute to the growing body of knowledge on effective AI literacy pedagogy, inspiring other educators to explore innovative ways of integrating AI into their curricula and examining it critically.


Iskandar Zulkarnain is an Assistant Professor of Media and Society at Hobart and William Smith Colleges. His research focuses on global digital media cultures with particular interests in Southeast Asia. His work has appeared in Sojourn: Journal of Social Issues in Southeast Asia and in the edited collection Feminist Interventions in Participatory Media (Routledge, 2019), among others. His website is: http://digitalperipheries.net/.

Suriati Abas is an Assistant Professor in the Department of Elementary Education and Reading at State University of New York (SUNY) Oneonta. Her research focuses on intersections of spatial, digital, and critical dimensions of literacy within and beyond the school context. She teaches literacy, diversity, and children’s literature courses. To learn more about her, visit https://suriatiabas.com/


    1. Bill Cope and Mary Kalantzis, Making Sense: Reference, Agency, and Structure in a Grammar of Multimodal Meaning (Cambridge: Cambridge University Press, 2020).

    2. Kevin M. Leander and Sarah K. Burriss, “Critical Literacy for a Posthuman World: When People Read, and Become, with Machines,” British Journal of Educational Technology 51, no. 4 (2020), https://doi.org/10.1111/bjet.12924.

    3. Martin Kandlhofer, Gerald Steinbauer, Sabine Hirschmugl-Gaisch, and Petra Huber, “Artificial Intelligence and Computer Science in Education: From Kindergarten to University,” (2016 IEEE Frontiers in Education Conference [FIE], 2016), https://doi.org/10.1109/FIE.2016.7757570.

    4. Maha Bali, “Where Are the Crescents?,” LSE Higher Education Blog, February 26, 2024, https://blogs.lse.ac.uk/highereducation/2024/02/26/where-are-the-crescents-in-ai/.

    5. Gunther Kress, Multimodality: A Semiotic Approach to Contemporary Communication (London: Routledge, 2010); Joseph E. Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence (Cambridge, MA: MIT Press, 2017).

    6. Yoram Eshet-Alkalai, “Digital Literacy: A Conceptual Framework for Survival Skills in the Digital Era,” Journal of Educational Multimedia and Hypermedia 13, no. 1 (2004).

    7. Allison Littlejohn and Nina Hood, Reconceptualising Learning in the Digital Age: The [Un]democratising Potential of MOOCs (Singapore: Springer, 2018); Davy Tsz Kit Ng, Jac Ka Lok Leung, Samuel Kai Wah Chu, and Maggie Shen Qiao, “Conceptualizing AI Literacy: An Exploratory Review,” Computers and Education: Artificial Intelligence 2 (2021), https://doi.org/10.1016/j.caeai.2021.100041.

    8. Iskandar would like to thank Sarah Gobe, Instructional Technologist at Hobart and William Smith Colleges, who assisted him in using Hypothesis for his introductory media studies survey course, as well as students in his “Introduction to Media and Society” classes in fall 2023 and spring 2024.

    9. Kimberlé Crenshaw, “Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics,” University of Chicago Legal Forum 1989, no. 1 (1989).

    10. See the list of elements created by Alicia Bankhofer at https://padlet.com/aliciabankhofer/creating-images-with-generative-ai-jvk7en4211219lzn.

    11. Tom Chatfield, “AI Hallucination,” New Philosopher, June 2023, https://www.everand.com/article/651743427/Ai-Hallucination.

    12. Safiya Noble, “The Power of Algorithms,” in Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018).

    13. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (2021): 613, https://doi.org/10.1145/3442188.3445922. Shmargaret Shmitchell is an alias of Margaret Mitchell, Chief Ethics Scientist at Hugging Face.

    14. Hendrik Strobelt and Sebastian Gehrmann, “Catching a Unicorn with GLTR: A Tool to Detect Automatically Generated Text,” accessed May 31, 2024, http://gltr.io/. At the moment, GLTR only works with GPT-2. Iskandar would like to thank his colleague at Hobart and William Smith, Jiangtao “Harry” Gu, for introducing him to this tool.