AI Literacy

The AI literacy grouping helps students develop a crucial suite of critical thinking skills needed to work with emerging technologies: functional awareness, skepticism about claims, and critical evaluation of outputs.

AI-Enhanced Research Workflow

Ella Howard
Wentworth Institute of Technology

This assignment introduces students to AI-augmented research design through collaborative workflows using generative AI tools like ChatGPT and Google NotebookLM. Students generate and refine research questions, test them against a curated source set, and reflect on the capabilities and limitations of AI as a research partner. The recursive structure of using AI to study the impact of AI encourages students to critically examine methodology, bias, and epistemology. The process highlights the value of human agency in guiding AI output and invites reflection on the environmental and ethical implications of generative tools.

Mini Audits: Analyzing GenAI Outputs to Track Systemic Priorities & Proclivities

Kirkwood Adams & Maria Baker
Columbia University

This assignment asks students to conduct their own Mini Audit of a text or image generation model and evaluate the model’s potential as a collaborator. Adapting methods from researchers like Danaë Metaxa and Joy Buolamwini, the audit positions genAI systems themselves as valuable objects of inquiry and scrutiny. Through a sequence of structured steps, students generate small datasets and subject them to critical analysis. By experiencing outputs in aggregate rather than in isolation, students can reconsider the effectiveness and value of individual outputs.
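The aggregate step of a mini audit can be sketched in a few lines: collect many outputs for the same prompt, tag each one, and tally the tags so that patterns invisible in any single output surface in the aggregate. The outputs and the tagging rule below are hypothetical stand-ins for illustration, not data from a real model.

```python
from collections import Counter

# Hypothetical model outputs for a repeated prompt such as "Describe a CEO."
# In a real mini audit, students would collect these from a genAI tool.
outputs = [
    "He is a decisive leader who built the company from nothing.",
    "He works long hours and expects the same from his team.",
    "She champions a collaborative, people-first culture.",
    "He is a visionary founder with a background in engineering.",
    "He rarely takes vacations and answers email at midnight.",
]

def tag_pronoun(text):
    """Crude illustrative tag: which gendered pronoun appears in the output?"""
    words = text.lower().split()
    if "he" in words:
        return "he"
    if "she" in words:
        return "she"
    return "neither"

# Aggregate view: one output reveals little; the tally reveals a skew.
tally = Counter(tag_pronoun(o) for o in outputs)
print(tally)  # Counter({'he': 4, 'she': 1})
```

Any tagging scheme (tone, named demographics, visual attributes for image models) can stand in for the pronoun check; the point is that the `Counter` over many outputs, not any single output, is the object of analysis.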

The Token Writer: Teaching Hallucination and Rhetorical Agency with Generative AI

Andrew Ridgeway
Methodist University

This assignment invites students to explore how large language models (LLMs) use tokens to condense, process, and generate text by having them “code” and “decode” a previous reading assignment using a basic set of three tokens. By learning how tokenization works, students identify the difference between a human reading for context and meaning and an LLM that relies on stochastic pattern recognition. Subsequently, they gain a clearer understanding of why hallucinations occur, a better grasp of the relationship between reading, writing, and rhetorical agency, and are less likely to ascribe to LLMs a capacity for thought or understanding.
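The coding/decoding exercise can be mimicked computationally. The three-token scheme below is a hypothetical stand-in for the assignment's token set: every word collapses into one of three coarse tokens, and decoding shows how much meaning is lost once text becomes pattern.

```python
# A toy three-token "codebook": each word collapses to a coarse category.
# This scheme is illustrative; real LLM tokenizers use tens of thousands
# of subword tokens learned from data, not hand-picked categories.
def encode(text):
    tokens = []
    for word in text.lower().split():
        if word in {"the", "a", "an", "of", "and", "to", "in"}:
            tokens.append(0)          # function word
        elif word.endswith(("s", "ed", "ing")):
            tokens.append(1)          # inflected word
        else:
            tokens.append(2)          # everything else
    return tokens

def decode(tokens):
    # Decoding recovers only the categories, never the original words:
    # the mapping is lossy, as any reduction of meaning to tokens is.
    labels = {0: "<FN>", 1: "<INFL>", 2: "<WORD>"}
    return " ".join(labels[t] for t in tokens)

coded = encode("The model predicts the next token in a sequence")
print(coded)          # [0, 2, 1, 0, 2, 2, 0, 0, 2]
print(decode(coded))  # <FN> <WORD> <INFL> <FN> <WORD> <WORD> <FN> <FN> <WORD>
```

The round trip makes the pedagogical point concrete: a system operating only on token patterns can produce fluent-looking sequences without any access to what the original words meant.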

Understanding and Avoiding Hallucinated References: An AI Writing Experiment

Ronald Cole (University of Cincinnati)
Lauren Maher (Texas Tech University)
Rich Rice (Texas Tech University)

This assignment invites students to critically examine AI hallucinations—false or fabricated content generated by AI—with a focus on academic references. Through guided experimentation, students use generative AI tools to compose texts and reference lists, then evaluate the factual validity of the citations. The activity makes visible how easily AI can produce convincing but incorrect information, especially in scholarly contexts, highlighting the need to verify AI-generated sources for accuracy. Rather than discouraging AI use outright, the assignment invites students into a guided process of inquiry that positions them as co-researchers in an evolving technological landscape. By engaging in prompt engineering and verification techniques, students gain practical strategies for detecting and minimizing hallucinations, thereby enhancing their digital and research literacies.
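One verification step students might script is comparing AI-generated citations against a trusted list of known works. The helper and data below are hypothetical; a real check would query a bibliographic service (for example, resolving DOIs through CrossRef) rather than a hard-coded set.

```python
# Hypothetical trusted list; in practice this would come from a library
# catalog or a bibliographic API rather than a hard-coded set.
known_dois = {
    "10.1000/example.001",
    "10.1000/example.002",
}

# Hypothetical references produced by a generative AI tool.
ai_generated_references = [
    {"title": "A Real Paper", "doi": "10.1000/example.001"},
    {"title": "A Plausible but Fabricated Paper", "doi": "10.1000/fake.999"},
]

def flag_suspect(refs, trusted):
    """Return references whose DOI is not in the trusted set."""
    return [r for r in refs if r["doi"] not in trusted]

for ref in flag_suspect(ai_generated_references, known_dois):
    print("Verify manually:", ref["title"])
```

A failed lookup does not prove fabrication (the work may simply be absent from the trusted set), which is why the script flags references for manual verification rather than rejecting them outright.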