For years, my research on the ways computers could automate writing and my role as a writing program administrator were separate. Sure, there were political bots online, Google Translate, and Siri, but those felt pretty removed from composition classes.
And then ChatGPT launched in November 2022. Suddenly, my research, teaching, and administrative work overlapped. Few students were aware of ChatGPT right away, but I knew we needed a policy for its inevitable adoption.
The journal Nature also foresaw the coming challenge of AI writing. The policy it published in January 2023 was a gift to me as a WPA:
[N]o LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.
I loved the way they talked about authorship and accountability.
To write an AI policy for our Composition program, I took inspiration from Nature's policy, alongside theories of workplace authorship. Memos, speeches, reports, and other communications are often written by one person and signed by another. Whatever the composition process, the signatory takes responsibility and is therefore the author. Deborah Brandt describes this common practice of workplace ghostwriting in "Who's the President? Ghostwriting and Shifting Values in Literacy." Evidence from a study I'm conducting with Tim Laquintano on workplace writers who use AI suggests that these writers are outsourcing some of their research, editing, or drafting to AI, but that they retain responsibility for their writing.
I think the link between authorship and accountability is an evergreen one. However much AI continues to shape writing processes, we should retain that commitment in our writing classes. I say more about this in my most recent post for the Norton Newsletter, "AI and How We Teach Writing: What's Your AI Policy?"
Greetings from CCCC in Baltimore!