Why “just right” is wrong
What the Gemini ad "Dear Sydney" says about writing that people choose to do
Google aired “Dear Sydney” during the Olympics—an ad designed to promote Gemini and get us in our feels. In a touching voiceover, a dad talks about supporting his little girl’s dreams of being a runner and how much she looks up to Olympic sprinter and hurdler Sydney McLaughlin-Levrone. The girl wants to train like Sydney and aspires to break her records. Moving music, home video footage, smiling kids. What’s not to love? Gemini. Gemini is what’s not to love here.
The girl—who, I want to note, has no voice in the ad—“wants to show Sydney some love.” OK, a fan letter. Great! And here’s where the ad goes off the rails. Not once, but twice.
Here’s the script:
Dad says in a voiceover:
She wants to show Sydney some love and I’m pretty good with words but this has to be just right. So, Gemini: Help my daughter write a letter telling Sydney how inspiring she is.
Cut to him prompting Gemini for the letter.
Do you see the problem? I see two: 1) Dad jumps from his daughter’s wish to express love for her idol to his own writing skills. Why? Is the girl Sydney’s #1 fan, or is he? 2) Dad turns to AI to get the words for his daughter’s fan letter just right. As if getting the words just right were the key to kids’ fan letters. Oof.
Sydney McLaughlin-Levrone seems pretty great, so I’ll speculate that she’d prefer the letter from the 8yo girl over one from the girl’s dad, and especially over one from Gemini. The messier the handwriting and the more misspellings, the better. What’s the point of using AI to write here? As John Warner points out, it’s the experience of writing that matters, at least in this writing context, and this ad misses that mark by a wide margin. The potential human connection between the star and her fan is mediated and marred by both the dad’s and the AI’s intervention. Neither Sydney nor the girl stands to learn or benefit from Gemini’s letter.
As a number of commenters noted, the ad strikes the same sour note as Apple’s May 2024 Crush! ad, which heartlessly flattens musical instruments, paint, and other creative tools in a hydraulic press. Seriously, what do these tech people think of the creative potential of regular people? Ouch.
OK, so the ad sucks, and Google is apparently not testing its ads or its products with diverse enough audiences before launching. But I’m interested in what this issue of “just right” suggests about AI and writing.
What does it mean to get the words “just right” when we write?
Writing assignments in school often suggest that “just right” is about proper spelling, grammar, sentence structure, getting arguments lined up, and then tying it all up in a conclusive bow. The idea that writing can be “right” is a fiction propped up by formal education.
What does it mean to get the words “just right” when we write?
Most writing happens outside of school. And in writing outside of school, it doesn’t matter if writing is “right.” For workplace writing, it matters if it’s tailored to the audience, if it’s compelling, if it’s timely, if it’s unique, if it’s doing its job.
But what about the writing that no one is forced to do?
When people choose to write, they write love letters, fan fiction, romance novels, family histories, memoirs, niche-interest blogs, poetry, and, of course, fan letters. This writing isn’t usually valued in formal schooling. It’s rarely published. It’s judged only by one person, or a small community, or even just the writers themselves. This is writing to heal, to connect, to share, to process difficult feelings. Who the author is often matters more than what’s been written. People can have a lot of different reasons for writing in these contexts, but it’s rarely to get it “just right.”
As a writing teacher, and a writer myself, I worry about how AI might be reinforcing the “right” view of writing. This view is bad enough in formal education. But if academic notions of “right” writing are imposed on writers in their chosen genres—even kids’ fan letters, yikes!—then we stand to lose a lot of what can be great about writing.
I don’t think it’s incidental to the ad that the father and daughter are Black. The dad’s words are “pretty good” but not “just right.” Gemini can help. Crucially, Gemini and other commercial LLMs are tuned to produce what’s sometimes called “Standard English,” or even “Standard American English,” since the leading AI companies are based in the US. This form of English dominates formal education and is often perceived to be “right.” But it’s steeped in race-based hierarchies of linguistic value. And for that reason, it’s sometimes called “Standard White English.”
Laura Gonzales, a scholar of multilingual writing, argues that educators are nervous about AI because AI enables “multilingual speakers and writers to draft content that may ‘deceive’ teachers and administrators into thinking these writers are skilled at composing in Standardized White English.” AI could disrupt those established hierarchies of linguistic value, which is a good thing.
Indeed, in the sciences, AI might be a boon for non-native English speakers who have paid a tacit tax for professional publication in English-only journals, as they’ve previously needed to hire out for editing. Nature reported in Fall 2023 that postdoctoral researchers in STEM were embracing AI to “refine text” or even make their research sound more “native.”
The use of AI in academic contexts such as classes and journals has dominated the discourse about AI and writing. What will educators do when students use AI? What happens to creative writers and artists who make their living from their craft? And so on. These are important questions.
But the Gemini ad reveals another fault line, this one closer to the heart of most writers—that is, people who aren’t forced to write for academic contexts. Why do we choose to write? And what do we stand to lose when AI takes over those writing contexts?
My 11yo daughter is writing a book. I’ve shown her how AI writes and we have fun with it. She laughs at how flat ChatGPT’s characters are, how simple its narrative conflicts are, and how easily they’re resolved. She wants nothing to do with AI for her book. The book is something she is choosing to write. She hasn’t let me read it yet, but I already know that it’s amazing—not because of the words on the page, but because she enjoys writing it. Not all writing is joyful, of course, even hers. But she’s working hard and she’s proud of that work.
I worry that more interesting aspects of writing—especially of writing that people choose to do—will get overtaken by AI. In academic contexts, widening access to “right” writing could be productively disruptive—or it could reinforce that “right” writing exists. But I worry that AI will metastasize the values associated with “right” writing from academic contexts to writing that people choose to do. That AI marketing will lead us to believe that getting an expression “just right” is more important than our process of writing it. That AI will flatten the diversity of writing expression. I worry that AI is the hydraulic press from the Apple ad, crushing the tenor and bend of our words, and crushing the joy and potential human connection that writing can create.
After I posted this, I saw a couple of other write-ups that echoed some of the same points I made here:
Linda Holmes's piece for NPR is hilarious: "if you like your letters awkwardly structured and with all the emotion of a birthday card from your eye doctor, Gemini can help." https://www.npr.org/2024/07/30/nx-s1-5056201/google-olympics-ai-ad
And Tim Murphy in Mother Jones notes that the ad reveals what tech oligarchs really think of us, but "it portends something far darker about the world, I think, if it turns out that vast numbers of people really are clamoring for “art” without artists, “news” without news outlets, and letters from children without the children." https://www.motherjones.com/politics/2024/08/thank-god-google-pulled-that-awful-and-depressing-ai-ad/
Love this. Exactly right.