Writing collaboratively with AI
Our academic integrity approaches should acknowledge that possibility
Over in the AI & How We Teach Writing Substack this week, I wrote about academic integrity—a topic of concern in every workshop I’ve run, and the subject of countless hand-wringing hit pieces on college writing. While some instructors say, “I just trust the students,” others are holding out for reliable AI-detection tools to maintain the integrity of their classes.
We’re never going to get those tools. It’s just not technologically possible—nor would it be good if we did. And for the trusting folks: yes! But in software there’s an expression: “trust, but verify.” In that post, I argue that a suspected AI paper is an opportunity for a conversation, a learning moment.
But, alongside that pragmatic response, I’m thinking of bigger questions about writing and collaboration. What if we think about students using AI as a kind of collaboration? What if we support that kind of AI use in our courses?
Since when are there individual authors?
Composition has been breaking down the idea of a lone author since at least the 1990s. Andrea Lunsford and Lisa Ede were among the first scholars to emphasize the reality of collaborative authorship and the benefits of encouraging it in the college classroom. They warned that in the writing classroom,
concerns over individual performance—and especially over plagiarism—can become near obsessions. Certainly collaborative writing calls such obsessive concerns into question, and reveals the formalist, positivist, and individualist ideological assumptions on which common notions of plagiarism rest.
They advocated for teachers to adopt a “rhetorically situated view of plagiarism, one that acknowledges that all writing is in an important sense collaborative…” (426-27).[1] Of course, they had human, not AI, collaborators in mind in 1994. But their argument for a “pedagogy of collaboration” can inform our responses to student uses of AI.
Rebecca Moore Howard asked us to look carefully at the “academic death penalty” we associate with plagiarism and to move away from the “language of morality” that often marks institutional responses to it. Plagiarism shows up in student papers in many different ways and for many different reasons, she noted, including “patchwriting,” in which a student stitches together the voices of others in a way that reveals the student’s emerging understanding of a difficult concept or text.[2] Howard offers a policy that breaks plagiarism down into “cheating, non-attribution of sources, and patchwriting,” with a different response to each. Importantly, she doesn’t offer an “anything goes” approach. In her policy, cheating is “Borrowing, purchasing, or otherwise obtaining work composed by someone else and submitting it under one's own name.” Some uses of AI in writing fall under that category. But others may be more accurately classified as non-attribution or patchwriting.
Writing effectively in collaboration
When I posted a link to my Substack on Facebook, a friend asked me: “What is considered AI overuse?” Fair question. In the post, I left it undefined so that instructors can decide for themselves. But I’d say it’s when a student’s use of AI goes beyond what the course allows, or when it cuts into the student’s ability to achieve the course’s learning goals.
But: writing effectively in collaboration can be a learning goal in a course. It always has been, at least tacitly: our students get help from peers, roommates, writing center tutors, TAs, and us. We consider this a good thing! Knowing where to draw the line between collaboration, assistance, and too much help has always been challenging, but we have encouraged that kind of collaborative writing anyway.
So, what about collaboration with AI? Anthropic breaks down student uses of AI into two categories:
(1) Direct conversations, where the user is looking to resolve their query as quickly as possible, and (2) Collaborative conversations, where the user actively seeks to engage in dialogue with the model to achieve their goals.
I do the first category—Direct conversations—when I’m looking for a word or identifying a plant. The second category—Collaborative conversations—is more appropriate when I’m working through an idea, like what exactly I’m trying to argue in a chapter, or where to go with the conclusion of a piece of writing.
Can we set up our classes so that they encourage more collaborative conversations? These conversations should be happening between students and instructors, but they can also be scaffolded to happen with AI. We have never written alone, and students are now writing in the company of AI. We can acknowledge that fact and support them as they learn to write effectively in collaboration—with humans or with AI.

[1] Lunsford, Andrea A., and Lisa Ede. “Collaborative Authorship.” The Construction of Authorship, edited by Martha Woodmansee and Peter Jaszi, Duke University Press, 1994.
[2] Howard, Rebecca Moore. “Plagiarisms, Authorships, and the Academic Death Penalty.” College English, vol. 57, no. 7, Nov. 1995, pp. 788-806.
Thank you for this helpful post. Yes, I agree that we need to create spaces in our thinking and our writing pedagogies for the reality that students write WITH AI in quite complex ways. In fact, I think we need more qualitative studies to better understand how students already use GenAI products to complement their writing process. You and your co-author provide a great example of such work, focusing on professional writing, in the "AI and the Everyday Writer" piece. We need more insight into how students write with AI.

But you know, I also want to say that the vast majority of commercially available AI chatbots thwart what Anthropic calls "collaborative conversations." In fact, we should object to calling chatbots "collaborators," because that anthropomorphizes them and thus obfuscates how they actually work. They are not truly collaborating with us. They are sentence completion machines.

I am still working out this whole argument, but I want to say that the value of language models for writers lies in seeing them as what they are: they don't do anything other than repeatedly calculate a likely next word, so let's see them as that! How can they help us with this narrow, text-level writing? How can they free the writer from the burden of "authorship" by allowing textual (and playful?) experimentation at the level of the sentence, the next word? And fundamentally, the AI chatbot interface discourages this sort of text completion because it pushes the anthropomorphized "assistant" or "collaborator" persona. It makes writing transactional.