Andrew Ridgeway
Methodist University
This assignment invites students to explore how large language models (LLMs) use tokens to condense, process, and generate text by having them “code” and “decode” a previous reading assignment using a basic set of three tokens. By learning how tokenization works, students identify the difference between a human reading for context and meaning and an LLM that relies on stochastic pattern recognition. Subsequently, they gain a clearer understanding of why hallucinations occur, develop a better grasp of the relationship between reading, writing, and rhetorical agency, and become less likely to ascribe to LLMs a capacity for thought or understanding.
Learning Goals
Original Assignment Context: First-year writing course
Materials Needed
Time Frame: ~1 week
Overview: I teach this assignment once per semester in my first-year writing classes as part of a 4-week unit on AI literacy. The unit introduces students to how large language models (LLMs) work, what their limitations are, and what distinguishes human meaning-making from machine prediction.
The unit begins with a brief assignment that teaches students how LLMs use “tokenization.” Instead of reading for context and meaning the way a human would, LLMs convert language into a series of tokens, then analyze those tokens for patterns to predict the next token in the sequence. Hallucination occurs when LLMs produce a series of tokens that conforms to an established pattern, but is semantically incorrect when the tokens are converted back to language. Students who know how tokenization works understand that 1) some amount of hallucination is inevitable and 2) an LLM has no way of comprehending its own output.
This assignment helps students understand the difference between an LLM that relies on tokenization and a human who reads for context and meaning. LLMs are “stochastic,” which means they identify and generate patterns based on probability. Human readers, on the other hand, filter the text through the lens of their own knowledge, insight, and experience. As they read, they make choices about what’s important, what to pay attention to, and what the text ultimately means. We refer to the outcome of this decision-making process as the reader’s interpretation of the text. Since each reader decides for themselves which details of the text are the most salient, no two interpretations are completely identical. The reader’s interpretation of the text is an expression of their rhetorical agency. When we ask an LLM to summarize a text, we are ceding this agency and accepting an output that has been shaped by statistical patterns instead of human insight.
The assignment involves five simple steps:
Step 1
Divide students into small groups and introduce them to the tokens they will be using. Triangles represent nouns, circles represent all other words, and squares represent punctuation.
Once students understand what each token signifies, present them with a sentence or two from an article they have already read for class. I use Elizabeth Weil’s (2023) profile of the computational linguist and AI skeptic Emily Bender, titled “You Are Not a Parrot” and published in New York Magazine.
Ask students to translate these lines of text into a series of tokens.
Examples:
This refers to that. → ○ ○ ○ ○ □
I am the walrus. → ○ ○ ○ △ □
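For instructors who want to see the scheme mechanically, the three-token coding can be sketched in a few lines of Python. This is an illustration, not part of the assignment itself; the `NOUNS` set is a hypothetical stand-in for the part-of-speech judgments students make on their own.

```python
import re

# Hypothetical word list standing in for students' own noun judgments.
NOUNS = {"walrus", "parrot", "students", "tokens"}

def code_sentence(sentence):
    """Convert a sentence into the classroom's three shape tokens:
    triangles for nouns, circles for other words, squares for punctuation."""
    pieces = re.findall(r"\w+|[^\w\s]", sentence)  # split into words and marks
    tokens = []
    for piece in pieces:
        if not piece.isalnum():
            tokens.append("□")      # punctuation
        elif piece.lower() in NOUNS:
            tokens.append("△")      # noun
        else:
            tokens.append("○")      # any other word
    return " ".join(tokens)

print(code_sentence("I am the walrus."))  # → ○ ○ ○ △ □
```

Note that the code, like an LLM, never consults the meaning of a word, only its membership in a list.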
As I explain during the lesson, this is a simplification of more complex tokenization methods like byte-pair encoding, a data compression algorithm that builds “sub-units” by repeatedly merging the most frequent character pairs in a dataset.
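For instructors who want a concrete picture of that merge process, here is a toy sketch of the byte-pair encoding loop. It simplifies real implementations, which operate on large corpora and handle word boundaries and byte-level input; the three-word sample vocabulary is invented.

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Toy byte-pair encoding: repeatedly merge the most frequent
    adjacent symbol pair into a single sub-unit."""
    vocab = [list(w) for w in words]   # start with each word as characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols in vocab:
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # most frequent adjacent pair
        merges.append(best)
        merged = best[0] + best[1]
        new_vocab = []
        for symbols in vocab:
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)     # fuse the pair into one sub-unit
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab.append(out)
        vocab = new_vocab
    return merges, vocab

merges, vocab = bpe_merges(["lower", "lowest", "low"], 2)
print(merges)  # → [('l', 'o'), ('lo', 'w')]
print(vocab)   # → [['low', 'e', 'r'], ['low', 'e', 's', 't'], ['low']]
```

After two merges, the shared stem “low” has become a single sub-unit, which is exactly the frequency-driven compression the lesson describes.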
Step 2
After students have the coded text, ask them to work together to identify 2-3 “rules” about how tokens are arranged, based on grammar or sentence structure. For example, every line from the source text will end with at least one square, because sentences in English always end with punctuation.
This part of the assignment is meant to help students recognize that language is highly structured. This is why LLMs can often predict the next token in a sequence without knowing what a word actually means.
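To see how far pure pattern frequency can go, one can tabulate which shape tends to follow which in a coded corpus. The sketch below is a toy bigram model, the simplest form of stochastic next-token prediction; the coded sentences are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical coded corpus: each sentence as a sequence of shape tokens.
coded_corpus = [
    ["○", "○", "○", "△", "□"],
    ["△", "○", "○", "△", "□"],
    ["○", "△", "○", "○", "△", "□"],
]

# Count which token follows which (a bigram model).
follows = defaultdict(Counter)
for sentence in coded_corpus:
    for current, nxt in zip(sentence, sentence[1:]):
        follows[current][nxt] += 1

def predict_next(token):
    """Return the most frequent follower of a token in the corpus."""
    return follows[token].most_common(1)[0][0]

print(predict_next("△"))  # → □  (nouns are most often followed by punctuation)
```

The model “discovers” a rule much like the ones students write, without ever knowing what any shape stands for.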
The rules students devise do not need to hold in every instance. When a rule turns out to be mistaken, needs revision, or is not universal, it provides an excellent opportunity to explain how and why LLMs hallucinate.
Step 3
Give students 4-5 sentences from another part of the article that have already been translated into a series of tokens and ask them to test the rules they have developed by identifying the original paragraph. Students will not have enough information to decode the tokens word-by-word. However, if you are careful about which section of the text you select, they should have enough information to identify the passage represented by the token sequence.
Step 4
When students have identified the correct passage, ask them to rewrite the paragraph by swapping out the original words while maintaining the same sequence and number of tokens. This process is what Howard et al. (2010) refer to as “patchwriting,” a process of “reproducing source language with some words deleted or added, some grammatical structures altered, or some synonyms used” (p. 188). Patchwriting is an excellent tool for developing writers because it calls attention to the format, structure, and arrangement of the text. This step also leads to an interesting conversation about the difference between patchwriting and plagiarism.
Step 5
When students are finished patchwriting the paragraph, ask them to rewrite it using half the original number of tokens. When they have successfully completed this step, ask them to do it two more times. If the original passage was 80 tokens, for example, the next two iterations should be 40 tokens and 20 tokens, respectively.
As students are forced to choose which words to change or eliminate, the meaning of the text shifts and students have to decide if the new version is still faithful to the original text. LLMs cannot do this kind of close reading, which requires value-based judgments about subtext, symbolism, and authorial intent.
When students are finished, ask them to read the results out loud and identify the differences between the groups’ final products. Each group will have made different choices about what to keep and what to cut, which helps introduce students to the concept of salience.
At the end of the assignment, display a picture of the Rubin vase. Students who are focused on the black space will see faces, while students focused on the white space will see a vase. Many students will see both. The point here is to illustrate how what we consider salient shapes the interpretation of a text.
You can drive this point home by asking students to reflect (in writing or discussion) on the choices they made when cutting the passage. What did they choose to keep? What did they cut? What do those decisions reveal about what they found most important? When users ask LLMs to summarize a text, they abdicate their rhetorical agency, giving up the opportunity to decide for themselves whether the image represents a pair of faces, a vase, or both.
Howard, R. M., Serviss, T., & Rodrigue, T. K. (2010). Writing from sources, writing from sentences. Writing & Pedagogy, 2(2), 177–192.