It's a good time to experiment with AI in writing classes
Let's look at how faculty are already teaching with AI
By: Annette Vee, Carly Schnitzler, and Tim Laquintano
This co-authored post focuses on the open access pedagogical collection I co-edited with Tim Laquintano and Carly Schnitzler, TextGenEd: Teaching with Text Generation Technologies (WAC Clearinghouse, 2023), and its addendum, TextGenEd: Continuing Experiments (WAC Clearinghouse, 2024).
Given the volatility of AI right now--the turnovers in leadership, the rapid-release models, changing capabilities and liabilities--instructors might be forgiven if they decide to wait until we know more about the tech before they integrate large language models (LLMs) like ChatGPT into their courses. A hype cycle is not the best time to radically reimagine one’s pedagogy. Plus, what we know about the training data for LLMs suggests they overrepresent dominant viewpoints, draw from pirated texts, and perform better on canonical texts. We also know that some LLMs rely on exploitative labor, raise continuing privacy and security concerns, and spread misinformation. Yet students find LLMs useful for editing, tutoring, and clarifying questions, and workplaces are adapting quickly to harness the tech. In other words, teachers across the curriculum need to grapple with AI in their courses.
But we can’t yet say that we have evidence-based or data-driven best practices for helping students work with language models. We have only anecdotes, lore, opinions, lots of tweets, and a small amount of experimental data about how writers use LLMs. So, what should we do when we're on unstable ground and there are no "best practices" for teaching with AI yet?
In writing workshops and in our own departments, we advocate for instructors to take this opportunity to experiment with AI technologies alongside students. Rather than have students use AI to complete traditional tasks like essay-writing, courses should work with LLMs as objects of study. That is, courses that contend with LLMs should center inquiry on how they work and how to use them. Thoughtful experimentation that asks students to dive into the tech can teach them--and us!--about the technical aspects and emerging social implications of LLMs.
This spirit of experimentation underpins many of the 50+ assignments that we’ve gathered and published in a recent open access collection with the Writing Across the Curriculum Clearinghouse, TextGenEd: Teaching with Text Generation Technologies, and an addendum, Continuing Experiments. (Another addendum is scheduled for publication in Aug 2024, with a May 31 deadline for submissions.) The assignments come from the digital humanities, writing and rhetoric studies, technical communication, creative writing, philosophy, economics, and other related disciplines. Many of our contributors have been working with students on AI or precursor technologies for years, and all of the assignments in the collection are classroom tested. Each is accompanied by learning goals, information about the original context and student uptake, and the teacher's thoughts, all of which help to contextualize the assignment.
Below, we share one takeaway and example from each of the six categories in the original collection (Aug 2023) plus the update (Jan 2024). These examples provide just a few of many pedagogical approaches to working with LLMs as objects of study; you can explore more in each of the collection's sections.
Explore the rhetorical moves of writing through feedback from generative AI platforms. Antonio Byrd has students set up an LLM as a peer reviewer. Key to students' interaction with the chatbot is how they formulate pointed questions about their own writing, for example: How can the thesis be more specific and complex? Is every piece of evidence followed by analysis in the following paragraph? Students compose first drafts on their own and work with peers and the instructor for a first round of revision. Then they ask the chatbot for feedback using the questions they generated, evaluate the feedback's quality, and revise their work. The final product they turn in includes their revised essay, a transcript of their interactions with the chatbot, and a Statement of Goals and Choices where they reflect on how interactions with their peers and the chatbot shaped the revision process. This assignment, like Kyle Booten's assignment focused on prompt engineering in the same section, harnesses the rapid feedback and precision needed in prompting AI to help students understand rhetorical moves in their own writing. Ideas for using AI to generate counterarguments, teach about genres, and more are in the Rhetorical Engagements section of the collection.
Prioritize AI literacy for instructors and students in a real-world context. In “AI Literacy: Real-World Cautionary Tales,” Maureen Gallagher asks students to respond to cases involving misuse or controversial use of Generative AI in public relations, mental health counseling, and data privacy contexts on a class discussion forum. The vivid examples spark conversation as well as reflection among students on the dangers and complexities of AI use. Educators can update and adapt this assignment as new cautionary tales inevitably arise. This is a literacy exercise focused on understanding both how LLMs work and their sociocultural implications—an important expansion of the idea of what digital literacy looks like in our contemporary moment. More examples of AI Literacy activities are available in the Aug 2023 and Jan 2024 editions.
Iterate prompts and platforms to achieve your goals with generative AI. Josh Anthony's Introductory Activity for Generative AI asks students to recreate a human-authored text from class to give them practice steering their prompts toward a particular outcome. Students learn quickly that straightforward prompts--write X in the style of Y--don't work. They then have to break down the text's key stylistic features and articulate them back to the LLM in order to come closer to their desired result. This exercise shows students the limitations of LLMs and also has them engage deeply with texts from a course. This assignment is in the "Prompt Engineering" section of our "Continuing Experiments" addendum, which includes other fresh approaches to prompt engineering from the Fall 2023 semester.
Consider the ethical implications of both developing and using generative AI in our culture and our classrooms. In Jentery Sayers’ low-stakes, in-class assignment, Professor Bot: An Exercise in Algorithmic Accountability, students are positioned as advisors to university decision-makers in a hypothetical scenario where AI is assessing university entrance essays. After reading texts that help to establish an understanding of algorithmic accountability—the idea that if an algorithm can cause societal harm, there should be mechanisms for redress—students come up with questions, suggestions, and concerns for these hypothetical university decision-makers to address. This assignment is a straightforward way into these complex ethical conversations, highlighting gaps in student knowledge while prompting an interest in understanding and questioning algorithmic decision-making in our contemporary culture.
Contextualize the development of generative AI with creative explorations that lend valuable historical background. Through kathy wu’s assignment, “Who's Talking: Dada, Machine Writing, and the Found”, students situate writing with contemporary LLMs within a history of found writing, using legacy text generation tools like Markov chains and even analog cut-up writing techniques. The assignment asks students to explore the intersection of found art and generative text, developing an understanding of the relationship between the sources used to write with machines and the power dynamics that shape the creation and dissemination of meaning.
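For instructors curious about what those legacy tools actually do under the hood, the Markov-chain technique that kathy wu's assignment draws on can be sketched in a few lines of Python. This toy generator (the corpus and function names here are ours for illustration, not part of the assignment) builds a table of which words follow which in a source text, then walks that table to produce new "found" text:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=12, seed=0):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length - len(key)):
        successors = chain.get(key)
        if not successors:  # dead end: restart from a random state
            key = rng.choice(list(chain.keys()))
            successors = chain[key]
        word = rng.choice(successors)
        out.append(word)
        key = tuple(out[-len(key):])  # slide the window forward
    return " ".join(out)

# A tiny stand-in corpus; students would use a text from class.
corpus = ("the pen writes the page and the page holds the pen "
          "and the pen remembers the hand")
chain = build_chain(corpus)
print(generate(chain))
```

Because every generated word is drawn only from words that actually followed the current word in the source, the output echoes the source's phrasing while scrambling its sense--a computational cousin of the analog cut-up techniques the assignment pairs it with, and a useful contrast with the far larger statistical machinery of an LLM.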
Acknowledge the impact these tools will have on the workforce students are entering. Students need space to critically engage with traditional and AI-infused genres of professional writing, and the writing classroom can provide it. In "The Paranoid Memorandum," Jason Crider helps students critically evaluate workplace communication against a backdrop of AI. AI is already a staple of many professional writing contexts, especially the tried-and-true memorandum. Students are split into groups and told, at random, whether they can, cannot, or must use AI to compose their memo. After the memos are completed, students evaluate and revise them across groups, speculating on which AI category their peers' group had drawn. This assignment has a twist--which we won't spoil here--that allows students to talk about their assumptions about AI and human collaboration in professional writing. Other assignments in the Professional Writing section address reading level translation, editing, lit reviews for health sciences, and more.
All of these assignments foster experimentation with AI writing and help students to explore the rapidly shifting possibilities and limitations of LLM platforms for their work and others'. We are heartened that several other collections of assignments have popped up in the wake of TextGenEd, including the AI Pedagogy Project (Harvard's MetaLab) and Exploring AI Pedagogy (MLA-CCCC Joint Task Force on Writing and AI). These resources constitute a growing ecosystem of open access pedagogical materials to support our emerging future of writing. If you'd like to contribute, please check out the CFP for the Aug 2024 Continuing Experiments!
Carly Schnitzler teaches writing at Johns Hopkins University. Her research and teaching center on digital rhetoric, creative computation, and the public humanities. She is also the founder of If, Then: Technology and Poetics, a community of artists, scholars, and teachers invested in promoting inclusivity and skills-building in creative computation.
Tim Laquintano is Associate Professor of English and Director of the College Writing Program at Lafayette College. He teaches classes on writing and digital media, writing technologies, and science writing, and is the author of Mass Authorship and the Rise of Self-Publishing (University of Iowa Press, 2016).