Ella Howard
Wentworth Institute of Technology
This assignment introduces students to AI-augmented research design through collaborative workflows using generative AI tools like ChatGPT and Google NotebookLM. Students generate and refine research questions, test them against a curated source set, and reflect on the capabilities and limitations of AI as a research partner. The recursive structure of using AI to study the impact of AI encourages students to critically examine methodology, bias, and epistemology. The process highlights the value of human agency in guiding AI output and invites reflection on the environmental and ethical implications of generative tools.
Learning Goals
Original Assignment Context: This assignment was taught in HSSI-3800: Responsible Uses of Generative AI, an undergraduate humanities course open to students across majors including computer science, construction management, and engineering. It followed readings on AI ethics and infrastructure, particularly Kate Crawford’s Atlas of AI, and introduced students to real-time use of LLMs for critical inquiry.
Materials Needed: 4–6 curated PDF sources on the environmental impact of AI (provided via LMS); access to generative AI tools (e.g., ChatGPT, Claude, Gemini); Google NotebookLM (free access with Google account); shared document (Google Docs or Word) for collaboration and workflow steps
Time Frame: One 90-minute class session or two 50-minute sessions
Overview: This assignment invites students to collaborate with generative AI tools in research design while maintaining human agency in decision-making. Working in pairs, students use LLMs to generate and evaluate research questions on complex topics, then select and test questions using Google NotebookLM. They compare their predictions to actual AI outputs, engaging in a “human-in-the-loop” process where AI is treated as a research partner, not an oracle.
I have taught this assignment twice in undergraduate courses with students from diverse majors. After I clarified my instructions between the two offerings, students in the second group navigated the workflow more successfully. Many were surprised by the conversational potential of LLMs and impressed by Google NotebookLM's ability to synthesize and cite multiple sources.
The assignment is intentionally recursive in that students use AI to investigate the environmental and social costs of AI itself. This design prompts reflection on the ethical tensions of using extractive technologies to critique extractivism. Students wrestled with the contrast between Crawford’s critical framing and the AI’s tendency to prioritize quantifiable, “clean” research questions.
Throughout the activity, students refined their questioning techniques and learned to distinguish between surface-level fluency and evidence-based insight. They found that geographically specific or systems-level questions yielded stronger results than abstract ones. The assignment inspired students to consider not just what they learned, but how the tools they used shaped that knowledge. In class discussions, students described the process as both eye-opening and ethically challenging.
Pre-Class Preparation
Read “Earth” in Kate Crawford, Atlas of AI.
Phase 1: AI-Enhanced Design
Step 1: Generate Options with Content Knowledge
Working in pairs, choose any LLM and record which model you're using. Begin with this context-rich prompt:
"Based on Kate Crawford's analysis in Atlas of AI, where she traces AI's material infrastructure from lithium mining in Nevada to rare earth extraction in China, generate 10 research questions for systematically analyzing the environmental impact of AI. Each question should address a different dimension, take a distinct approach, and avoid common research pitfalls."
Step 2: Expert Assessment
Ask the same LLM: "Now act as an expert researcher on the environmental impact of AI. Assess and rank these 10 questions according to these criteria: 1) Specificity: avoids vague language, 2) Buildability: provides foundation for follow-up questions, 3) Evidence-seeking: likely to yield concrete data from sources. Rank them 1-10 and explain your reasoning."
Step 3: Human-in-the-Loop Selection
Review the AI's ranked list and choose 3 questions for your workflow. You don't have to accept the AI's top choices. Select questions that make sense based on your reading and judgment, or write your own, and then explain your reasoning.
Step 4: Class Sharing
Post to shared document: your AI's top 3 recommendations, your actual selections, and whether you followed AI suggestions or made independent choices.
Phase 2: Refinement
Pose two critical prompts to your LLM: "As an expert on AI's environmental impact, what are the biggest methodological problems with this workflow? What biases might we encounter? What types of evidence might be misleading or missing?"
"What assumptions might you be making about AI's environmental impact that could limit these questions? What perspectives might you be missing?"
Revise your questions based on AI feedback and class discussion.
Phase 3: Testing and Analysis
Prediction Phase: Before testing, predict what each question will reveal based on your Crawford reading. Post predictions to shared document.
Implementation: Create a Google NotebookLM notebook, upload the provided sources, and test your questions. Also test one question from another pair's workflow for comparison.
Challenge Round: Identify something in sources that contradicts or complicates your background reading.
Class Discussion and Reflection
Review discoveries and methodological insights across all pairs.
Students then address guiding questions in a brief written reflection.
This assignment helps students experiment with AI for process improvement rather than content generation, emphasizing critical questioning, methodological awareness, and student agency in research design. It reminds them to remain in control of the research process even when collaborating with AI.
The technique of having LLMs generate and assess questions was inspired by Ethan Mollick's Co-Intelligence. I learned about Google NotebookLM through the AAC&U Institute on AI, Pedagogy, and Curriculum. Special thanks to the students in HSSI-3800: Responsible Uses of Generative AI for their thoughtful engagement with the assignment.