by Carly Schnitzler, Annette Vee, and Tim Laquintano
We are approaching two years since the release of TextGenEd, a collection of resources to help writing instructors across the curriculum teach with text generation technologies. The set of assignments posted below is our third update, and it reflects the field's grappling with generative AI as a text generation technology. With this update, made possible by Carly Schnitzler as lead editor, we would like to offer a brief reflection on what this ongoing project has suggested about the emergence of critical AI pedagogy in the teaching of writing over the last two years.
The pace and energy of responses to AI technologies have rivaled those of the technologies themselves. We're seeing an entire economy of activity coalesce around AI in higher education: far-flung disciplines are taking an interest in research on writing; new educational technology companies are emerging and established ones are jockeying for position; task forces and committees are being formed; and through preprints, peer-reviewed research, the popular press, and social media chatter, a rowdy stream of discourse with its mixture of panic, resistance, and enthusiasm attempts to grapple with the changes. Writing teachers are tasked with making sense of a vast new tool in an environment of hype, a challenging atmosphere for higher education, a hotly contested information ecosystem, and a volatile political landscape. If nothing else, it is exhausting. And it is possible we are just getting started.
Silicon Valley marketers are hard at work convincing us that AI is the future of knowledge work, and from time to time an LLM will do something so impressive it's believable. But nothing is inevitable, and AI's future will be routed through copyright lawsuits, security vulnerabilities, social resistance, and environmental concerns. We mentioned in our Introduction to the original TextGenEd collection that implementation would be messy and wouldn't mirror the promises of automation from AI companies.
And here we are, a collective band of writing teachers struggling to make sense of a technology that can lead to learning loss and learning enhancement alike, and that can simultaneously erase and reinforce forms of linguistic discrimination. This is why we are so grateful to the team at the WAC Clearinghouse, who have worked with care and efficiency to help us rapidly publish our continued attempts to make sense of changing writing classrooms. We are also grateful to teachers in the field who have been crafting writing assignments to help students acquire critical AI literacy and sharing them with us for publication.
This collection includes assignments to build critical AI literacy among students, including ways to help them use AI in research (Howard); collect outputs for an audit of AI (Adams and Baker); do hands-on experimentation with AI tokenization (Ridgeway); and understand and minimize hallucination of scholarly references (Cole, Maher, and Rice). The section on Creative Explorations features playful assignments that have students: simulate and interview historical figures (Stephen); imagine scholarly synthesis through AI images (Christensen); and compare AI- and human-written screenplays for style (Li). In Ethical Considerations, students dive into the problems AI presents to composers in assignments that ask them to: reflect on AI's collective analysis of a class dataset of writing philosophies (Navickas and Davies); probe for social bias in technical communication applications of AI (Pollak); collectively write a class AI policy (Banville); and implement a metacognitive reflection process for writing with AI (Yang and Harker). The section on Professional Writing features assignments in which students: rewrite AI outputs with an eye towards inclusion (Xu); propose recommendations for specific workplace AI policies (McCaughey); draft professional bios using AI (Gardner); and use AI to refine audience-specific emails (Banu). The Prompt Engineering section helps students to: implement a rigorous prompting and reflecting framework for collaborative AI writing (Law); rigorously test the extent of AI's capabilities (Morrison); and work on paraphrasing techniques with AI in an English for Academic Purposes course (Spring). Finally, in Rhetorical Engagements, students can: engineer and interrogate simulated audiences with AI (Harms-Abasolo); work on questions, transcription, and other ways AI can enhance interview research (Rabbi); explore gender politics and genre in AI by drafting a feminist manifesto (Smith); examine biases in both AI outputs and human inputs (Enaya and Eaton); and experiment with written, audio, and video AI and human feedback using co-created criteria and rubrics (Cole et al.). Together, these 23 assignment sequences represent the largest collection of pedagogical explorations with AI since the original TextGenEd in 2023. We are thrilled that so many teachers are taking on this challenge successfully and sharing what they've learned.
The assignments we have seen over the last two years have largely fallen into two categories: assignments that support a hybrid composing process and assignments that take generative AI (mostly large language models) as objects of inquiry. Of course, these categories can and do overlap.
The assignments that support a hybrid composing process tend to assume, often tacitly, that composing with AI is the new normal, a position we do not endorse but begrudgingly accept as likely. These assignments ask students to do things like acquire feedback from LLM systems and integrate that feedback into drafts even as they critique it. Students learn about the benefits and limitations of using AI for research assistance and information retrieval, and they even automate parts of the composing process in a minimalist way.
The assignments that ask students to take language models as objects of inquiry often ask students to evaluate outputs. They seek to make students aware of the limitations of current AI systems as they probe for bias, demonstrate the potential for LLMs to homogenize and reduce linguistic diversity, and explore hallucinations. Students think through the ethics of using the models for particular kinds of tasks as they build critical AI literacy.
In what follows, we want to synthesize our observations of the assignments to offer some provocations for the teaching of writing as it relates to AI assignment design.
AI to Automate Writing Processes
How comfortable are we with students automating stages in the writing process? Where do we draw the line? Writing teachers have been earnestly asking these questions. We see this engagement in some of the assignments submitted to us that ask students to automate minimal aspects of the writing process using AI. As writing instructors, we can recognize potential learning loss in any process of automation, even if it makes room for new and interesting conversations. For example, there is a long history of automating transcription, first through human research assistants and now through AI, but that automation comes at a cost. Transcribing an interview can allow a researcher to sit deeply with data. The same is true of editing and proofreading, which have long been outsourced to others and now to AI. We can have AI clean up language, but editing is also a chance for us to re-read our writing and reflect on what we are saying. If we had enough time, perhaps many of us would prefer not to have any steps of the writing process automated.
But that preference has been built on particular assumptions about the technologies we inherited as we learned to write. Computer scientist Alan Kay said that "technology is anything that wasn't around when you were born," a definition we can extend to writing technologies as well. Graphic designers have much to say about the way particular typefaces shape meaning, but most writers have always been comfortable using the typefaces they have inherited, outsourcing that portion of writing's meaning-making without much thought. Locking students into Times New Roman for their papers has been just fine with most of us; it has allowed us to focus on the parts of the process we have the expertise to teach. We've also jettisoned fountain pens in favor of digital text, and we don't mind spellcheck.
In other words, aspects of our writing processes have been increasingly automated and dematerialized. What, then, happens if students automate particular parts of the writing process and become as uncritical of AI platforms as they are of their choice of typeface? What do we gain? What do we lose? These are critical and pressing questions, and as writing instructors we need to encourage students to think as closely about them as they can.
Testing Models to Build Student Critical AI Literacy
Model evaluation is an explicit or implicit part of almost all assignments we see. In assignments that take LLMs as an object of inquiry, students are asked to experiment on an LLM through prompting and then evaluate the output. In assignments that ask students to co-compose with AI, students are asked to critically evaluate the output of the LLM before any changes to the text are made.
In the field of natural language processing (NLP), model evaluation is its own subfield that takes years to learn, requires extensive expertise, and is itself hotly contested. Because LLMs have spread so fast and so wide, NLP is also a field with an expertise bottleneck, and one that often requires significant prerequisites in computer science. Since most students aren't going to be taking NLP classes, our layperson's work of model evaluation in writing classes has an important role to serve in building critical AI literacy among students. For that, however, we will need to keep up with changing technology.
By now, most writing teachers have a basic understanding of how text predictors work. We often say that "chatbots are just text predictors" to explain to students why they hallucinate and why LLMs have no ground-truth knowledge of the world (i.e., they are "stochastic parrots"). The corollary warning to students was that these predictive technologies would reproduce the internet text they were trained on and overrepresent the dominant voices in that dataset. This is still true, but it's important to look beyond text prediction to understand how the pipeline of model development and implementation shapes LLM outputs.
AI developers have been turning more attention to a rapidly growing number of post-training techniques to elicit desirable bot behaviors, establish safety guardrails, and enhance the abilities of the models. These post-training techniques, such as fine-tuning and reinforcement learning, can dramatically change the outputs of a model. On the implementation side, the prompts that students input to commercial LLMs are passed through content filters and system prompts, both of which tacitly alter the message the LLM receives from the user. Likewise, commercial AI companies are working to deploy a variety of undisclosed strategies to customize LLM output based on a user's data history. If an assignment in a writing class asks students to prompt an LLM and then examine the outputs for potential biases, instructors must remember (and explain to students) that identical prompts will not yield consistent responses: statistical text prediction samples from a distribution of possible outputs, guardrails and system prompts are revised over time, and outputs may be tailored to a student's own data profile.
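For instructors who want to make two of these mechanisms concrete in class, a toy demonstration can help. The minimal Python sketch below is not a real LLM; the token probabilities and the system prompt are invented purely for illustration. It shows that sampling from a probability distribution makes the same prompt yield different outputs across runs, and that a system prompt is silently prepended to whatever the student types.

```python
import math
import random

# Invented next-token probabilities standing in for what a model "thinks"
# follows the prompt. Real LLMs compute these over tens of thousands of
# tokens; these four are purely illustrative.
NEXT_TOKEN_LOGITS = {"essay": 2.0, "poem": 1.2, "report": 0.8, "rant": 0.1}

def sample_next_token(logits, temperature=1.0):
    """Sample one token from a softmax over logits, scaled by temperature."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # floating-point fallback: return the last token

# A hypothetical system prompt, silently prepended before the student's text.
SYSTEM_PROMPT = "You are a helpful writing assistant. Avoid unsafe content."
student_prompt = "Write me an"
model_input = SYSTEM_PROMPT + "\n" + student_prompt
print("Model actually receives:", repr(model_input))

# The same prompt, run five times, can produce different continuations.
for trial in range(5):
    print(trial, sample_next_token(NEXT_TOKEN_LOGITS, temperature=0.9))
```

Running the loop a few times makes the point quickly: even before guardrails or personalization enter the picture, sampling alone means "the model said X" is an unstable claim.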
As editors, we have seen (and published) a variety of assignments that ask students to build critical AI literacy by prompting an LLM in order to expose its limitations and demonstrate what the model can't do. We find a critical question for the field here: Does critical AI literacy include teaching students best practices for prompting that reduce or override these limitations? This is not a straightforward question. On the one hand, we can demonstrate to students that if you give an LLM a two-sentence prompt to elicit an essay, it will produce a vague and uninteresting response. But given what we know about prompting, that's perhaps as much a lesson in garbage in/garbage out as it is a lesson about the models. On the other hand, we can demonstrate to students that if they craft a prompt with intense specificity about the tone, style, form, audience, and purpose for an essay, and provide it with strong examples, they can override some of the initial "limitations" of the model that we exposed with our short prompt. What then is the more critical lesson for students? That one can dramatically increase the performance of the model and perhaps reduce some of its biases through expert prompting strategies? Or that simple prompts can produce uninteresting outputs, which is its own critical lesson because that's potentially how the bulk of humanity might be using the models? We likely need to make room for both explorations of model testing.
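To stage that contrast directly, instructors might put a pair of prompts like the following side by side; both are invented examples, not recommended templates. Students run each against the same model and compare the outputs for specificity, voice, and bias.

```python
# Two invented prompts for the same task. The point is the difference in
# rhetorical information the model receives, not the exact wording.

short_prompt = "Write an essay about social media."

detailed_prompt = """You are drafting a 600-word op-ed for a campus newspaper.
Audience: first-year students skeptical of administrative policy.
Purpose: argue that the university's new social media policy chills speech.
Tone: urgent but evidence-based; avoid cliches about 'kids these days.'
Form: open with a concrete scenario; close with a call to action.
Model your opening on this sample paragraph: [instructor-chosen example]."""
```

The discussion that follows is where the critical literacy work happens: which output would students mistake for competence, and what did the detailed prompt have to supply that the model could not?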
The Road Ahead
AI is both a vector of innovation and an existential threat to higher education, particularly to writing classes. Depending on the survey, between 57% and 92% of college students are using AI to complete their coursework. Some of this use is plugging holes in formal education systems, as when students use AI to supplement and update textbooks, circumvent unavailable or unsupportive instructors, or tailor content to their own interests. More media coverage is devoted to students using AI for "cheating," which shortcuts learning goals, is often furtive, and is undermining students' trust in each other. Governmental support for AI education has been promised, but this support appears to be aligned with uncritical adoption or workforce readiness rather than critical AI literacy.
One of the most radical changes AI has the potential to make is detaching information from the mode and style of delivery (with the caveat that information will always shift in meaning as it migrates to new modes and styles). Subject to errors, LLMs can rewrite content for new audiences and for different reading levels. They can turn articles into podcasts and podcasts into slideshows in a fit of customization that has serious consequences for learning and the accessibility of ideas and information. We were excited to see some engagement with multimodality in our submissions and would love to see more as AI's capabilities expand. As we move forward, we will need to draw on the robust tradition of multimodal scholarship in writing and rhetoric as we teach students to imagine what it means to have accessible ways to automatically liberate information from the initial mode of its production.
For these political, technical, and pedagogical reasons, we believe the work of open access AI pedagogy remains crucial. This includes TextGenEd and its Continuing Experiments, alongside Lance Eaton's Syllabus Statement Repository, the Peer & AI Review + Reflection (PAIRR) Packet from UC-Davis and others, Harvard's AI Pedagogy Project, Anna Mills' professional development resources, Annette's Helpful links, and more. We are grateful to the contributors for sharing their ideas here, helping us work together to meet the challenges and opportunities AI presents to teachers of writing across the curriculum.
Since 2023, TextGenEd has featured individual assignments by innovative instructors, largely because that was the landscape of AI engagement at the time of our CFP in September 2022. From our work within our own institutions and beyond, we now see much larger-scale explorations: AI certificates, AI literacy training for students, AI centers, faculty seminars on AI, AI for enhancing teaching, and AI research programs. In keeping with the spirit of this collection, we hope in the future to feature pedagogical engagement with text generation technologies—not limited to AI!—at multiple scales, including individual classes, online education, programmatic initiatives, and college-wide programs. Let's move forward together.
The AI literacy grouping helps students to develop a crucial suite of critical thinking skills needed to work with emerging technologies: functional awareness, skepticism about claims, and critical evaluation of outputs.
AI-Enhanced Research Workflow, by Ella Howard
Mini Audits: Analyzing GenAI Outputs to Track Systemic Priorities and Proclivities, by Kirkwood Adams & Maria Baker
The Token Writer: Teaching Hallucination and Rhetorical Agency with Generative AI, by Andrew Ridgeway
Understanding and Avoiding Hallucinated References: An AI Writing Experiment, by Ronald Cole, Lauren Maher, & Rich Rice
Creative explorations play around the edges of text generation technologies, asking students to consider the technical, ethical, and creative opportunities as well as limitations of using these technologies to create art and literature.
Developing Historical Insight and Critical AI Literacy Through AI-Enabled Interviews, by J. Drew Stephen
Monster Mindscapes, by Nikki Christensen
"Told Rather Than Seen": Comparatively Analyzing Human-Written and AI-Generated Screenplays, by Ruth Li
In the ethical considerations category, assignments are split between two primary foci—the first engages students in the institutional ethics of using LLMs in undergraduate classrooms and the second attends to the ethical implications of LLMs and their outputs.
Analyzing Class Datasets: A Writing Philosophy Case Study for First-Year Composition, by Kate Navickas & Laura Davies
Probing Large Language Models for Social Bias, by Calvin Pollak
Co-Creating a GenAI Classroom Policy, by Morgan Banville
Three-Stage Metacognitive Reflection Framework for AI Engagement, by Liping Yang and Michael Harker
This section presents assignments that enable students to understand how computational writing technologies might be integrated into workplace contexts. Unlike academic discourse, professional writing is not grounded in an ethos of truth-seeking and critical inquiry; it tends to be grounded in an ethos of efficacy as well as constraints of legality and workplace ethics.
Discover Your Own Voice and Centralize Inclusion with AI, by Wei Xu
Organizational AI Policy Proposal Assignment, by Jessica McCaughey
Introduce Yourself with a Professional Bio, by Traci Gardner
Writing Situation-based Emails with and against ChatGPT, by Jainab Tabassum Banu
This category reflects the continued importance of iterating prompts and platforms to achieve writing goals with generative AI, across genres and writing contexts.
Before the Paragraph, After the Prompt: Collaborative Invention with Generative AI, by Jeanne Beatrix Law
Can an AI Do That?, by Gabriel Morrison
Enabling Advanced EAP Students to Integrate Academic Paraphrasing Skills Effectively and Ethically with AI Assistance, by Jerry Spring
These assignments ask students to consider how computational machines have already and will become enmeshed in communicative acts and how we work with them to produce symbolic meaning.
Creating AI Audience Avatars to Enhance Audience Awareness in Multimodal Composition, by Morgan Harms-Abasolo
Cultural Inquiry Report: Affordances and Constraints to Generative AI for an Interview Activity, by Shakil Rabbi
Gender and Genre: Generative AI and the Feminist Manifesto, by Caroline J. Smith
Probing Biases with Generative AI, by Talla Enaya & Christopher Eaton
Reflective Multimodal Feedback Practices Across Writing Contexts, by Kirsti Cole, Biven Alexander, Wil Carr, Brody McCurdy, & Bethany Van Scooter