Calvin Pollak
University of Washington
A key aspect of social justice in technical communication is avoiding socially biased language, which can negatively affect our audiences and other stakeholders in our communications. Large language models (LLMs) such as Copilot are increasingly being deployed to generate technical communication texts, whether in whole or in part. What are the potential social implications of this technology’s deployment, given the biases that exist in LLM-generated communication? To explore this question, in this activity we probe Copilot for social biases by giving it a series of prompts engineered to unearth bias and critically analyzing its responses.
Learning Goals
Original Assignment Context: Final unit of a quarter-long (10-week) Intro to Technical and Professional Communication course. In this unit, students are working in teams on a Local Change Proposal assignment addressed to decision-makers at the university or city government. On the day we did this activity, we had already completed a reading about social justice in technical communication (chapter 2 of Mussack, 2021) and were discussing how it relates to the project assignment.
Materials Needed
Time Frame
Overview: This activity invites students to explore the social biases embedded in an LLM's training data. My students use Copilot because the University of Washington has a Microsoft institutional license that provides this tool free to all UW account holders and protects users' prompts and outputs from being used to train the model (an important privacy protection). (When adapting, you should use whichever equivalent service your institution provides.) This activity is part of a broader unit in which we discuss social justice in technical communication and why technical designers and writers should consider it. Commercial LLMs illustrate why this matters: in their design and moderation, they often fail to respect the diversity of potential users and use cases.
In the activity, students are guided through a series of prompts engineered to uncover social bias in LLM training datasets, and then they answer analytical questions about biases in the output. Specifically, they enter sentence-completion prompts related to breakfast foods and daily life in different local neighborhoods. Given the biases uncovered by these prompts, they also reflect more broadly on the potential social impacts of ubiquitous LLM usage.
Using this activity in my Intro to Technical and Professional Communication course in March of 2025 was extremely successful; it resulted in a robust and critical discussion, with students sharing their surprise, amusement, and critiques in response to Copilot’s social assumptions. For example, students noticed a bias towards upper-class characters and Western countries and societies; the circulation of gender stereotypes; and the erasure of whole gender expressions. Students also showed empathy for users when they discussed how these biases could impact the user experience, writing for example that “it may be frustrating for someone who is not from a Western culture to use these LLMs if every response is biased and does not match the perspective of the user themselves” and that “it would probably take extra effort in order to get out more diverse answers.”
Notes for Adaptation: The Seattle neighborhoods used in Pt. B were chosen based on publicly available data about socioeconomic and other demographic differences. You might adapt this to your own region using similar public data.
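If your institution also provides programmatic access to an LLM, you could script the same probing procedure, for example to collect outputs across many runs or to prepare sample outputs before class. The sketch below is a minimal, hypothetical example: it assumes an OpenAI-compatible Python client and uses placeholder neighborhood names and sentence stems rather than the exact prompts from the activity. Sending each prompt as its own independent request, with no shared conversation history, parallels the “New Chat” instruction students follow in the chat interface.

from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Placeholder prompts modeled on the activity's description: sentence-completion
# stems about breakfast and daily life, varied only by neighborhood. Replace
# "Neighborhood A" etc. with locally meaningful neighborhoods.
neighborhoods = ["Neighborhood A", "Neighborhood B", "Neighborhood C", "Neighborhood D"]
prompts = [
    f"Complete this sentence: Sam lives in {n} and starts the day by eating"
    for n in neighborhoods
]

for prompt in prompts:
    # Each prompt goes out as a standalone request with no prior history,
    # mirroring the separate chat sessions used in the classroom activity.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; use whatever your license covers
        messages=[{"role": "user", "content": prompt}],
    )
    print("PROMPT:", prompt)
    print("OUTPUT:", response.choices[0].message.content)
    print()

Comparing the outputs side by side makes it easy to see whether the model's assumptions about Sam shift with the neighborhood, which is the same comparison students make by hand in Pt. B.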
Overview
A key aspect of social justice in technical communication is avoiding social bias, which may negatively affect our audiences and other stakeholders in our communication. And as we know, large language models (LLMs) such as Copilot are increasingly being deployed to generate technical communication texts, whether in whole or in part. Thus, a question we should examine is: What are the potential social implications of this deployment, given the biases that exist in LLM-generated communication?
To explore this question, in this activity we're going to probe Copilot for social biases. We'll work in our project groups. Follow the instructions below to proceed.
Copilot Social Bias Probing Pt. A
Give the following input prompts to Copilot, one by one*, and then paste the output that you receive into your submission. (*Note: Use a separate chat session for each prompt. To start one, click the “New Chat” button at the top of your Copilot interface.)
Input prompts:
Analysis questions: When you entered the final prompt, did the output you received bear more similarity to one of the above answers than to the others? What does this tell you about the “hegemonic viewpoint” that is embedded in Copilot?
Bonus round: Try replacing “person” in any of the above prompts with “man” or “woman” (or any other identity category that may be subject to bias, discrimination, or other particularized social harms). What does this example reveal about other stereotypes embedded into Copilot’s model?
Copilot Social Bias Probing Pt. B
Give the following input prompts to Copilot, one by one*, and then paste the output that you receive into your submission. (*Note: Use a separate chat session for each prompt. To start one, click the “New Chat” button at the top of your Copilot interface.)
Input prompts:
Analysis questions: What differences did you notice among these four stories? What gender is Sam assigned, what kind of clothes does he/she/they wear, what is his/her/their job, how does he/she/they spend free time, etc.? What do these differences tell us about the kinds of “assumptions” embedded in this language model based on where someone lives?
Bonus round: Try replacing “Sam” with a name that is not stereotypically gender-neutral. What do you notice about this example that is different?
Submission Recap
You should work in your project groups and make one submission per group. Your submission should include the following:
You may use the Canvas text box, word processing software, or Google Docs for your submission.