AI and new promises for teaching at scale
And what I learned about writing for "The Conversation"
About a month ago, OpenAI co-founder Andrej Karpathy announced that he would be launching a new AI-centered educational venture, Eureka Labs. The announcement got a lot of buzz in tech circles and seemed to me like something fun to write about from my perspective as an educator familiar with AI. I’d promised the NEH some public-facing writing in exchange for a summer stipend, so I pitched an article to The Conversation, a publication venue that features academics writing for popular audiences. I’d pitched a few things to them before and been turned down, but the editor liked my angle on Karpathy and teaching at scale.
The resulting article, “AI pioneers want bots to replace human teachers – here’s why that’s unlikely” was just published today on The Conversation and featured on the front page of the site (yay!).

One great thing about The Conversation is their generous terms for republishing. Publication venues looking for content can reprint articles under a Creative Commons Attribution-NoDerivatives license, and I retain copyright in the article. Another thing about the venue (and why I learned a lot from this experience) is their editorial process. Academics tend to write with a lot of detail, caveats, citations, etc. Guilty as charged! The Conversation’s editing team irons all of that out and trims pieces to a length that facilitates sharing and republication. The writer is in the loop the whole time, but multiple editors work with the piece and it takes a while to move through the process.
The Conversation gave me permission to share the original draft I wrote here. In my initial attempt, I tried my best to emulate the style of The Conversation: short paragraphs (I aimed for 3 sentences max); hyperlinks; establishing my credibility as an academic; compelling examples; a clear, main point. At 1500 words, it was still about 500 words too long (whoops hahaha) and with too much detail for the venue. A series of back-and-forths over a couple of weeks resulted in an article that is about 1000 words, cuts out a lot of the side stories (sorry, Mr. Rogers!), has a punchier headline (forgive me, I didn’t choose “AI pioneers”), and registered green on their readability score. My voice is still in the final piece, but a lot changed from the original version.
See below for my original draft and visit The Conversation for the final version. Thanks to The Conversation for letting me share both, and for walking me through this fascinating process. I hope folks find this before-and-after transparency about publishing interesting—as well as what I have to say about AI and the promises of personalized learning!
The original draft I submitted to The Conversation on 7/22/24:
Andrej Karpathy's new educational venture with AI follows a long line of technologies for teaching at scale
Andrej Karpathy announced his newest venture after leaving OpenAI in February: Eureka Labs, an education company that will be "AI native." Karpathy is an AI native himself. He was a founding member of OpenAI and led the computer vision team for Tesla’s Autopilot.
In his announcement for Eureka Labs, Karpathy noted that “subject matter experts who are deeply passionate, great at teaching, infinitely patient and fluent in all of the world's languages are also very scarce and cannot personally tutor all 8 billion of us on demand.” Karpathy hopes AI can solve an age-old challenge in education: the scarcity of good teachers who are subject matter experts.
Karpathy is not alone in promising that AI will transform education in a big way. The explosive technological advancements of AI in the last few years have led OpenAI CEO Sam Altman, Khan Academy CEO Sal Khan, venture capitalist Marc Andreessen and University of California, Berkeley computer scientist Stuart Russell to suggest that AI might be a personalized tutor for everyone, an amplified guidance counselor, and a potential replacement for teachers. Russell speculated that soon we could use AI to “be delivering a pretty high quality of education to every child in the world.”
“That’s potentially transformative,” Russell said. If these promises pay off, the ways that kids and adults learn across the world could fundamentally change. However, we have seen similar promises of mass change in education from technologies before AI.
Teaching at scale
The key to these tech leaders’ optimism about high-quality education is that technology can scale in a way that humans cannot.
For Karpathy, an ideal learning experience would be working through physics material “together with Feynman, who is there to guide you every step of the way.” Richard Feynman was renowned for his research as well as his accessible way of presenting theoretical physics. With AI to replicate him virtually, many people could access Feynman at the same time, scaling up his reach and influence.
Feynman wrote several popular books on physics and his life as a physicist, so his influence already extends beyond the people he taught in person. Books are one way that teaching can scale. However, books cannot respond to individual readers, a problem with writing that Socrates recognized over 2,000 years ago.
Television also promised to teach people at scale through mass broadcasting. Generations of kids grew up with Mr. Rogers teaching about kindness and empathy. They learned to love reading from LeVar Burton in “Reading Rainbow.” Television can be helpful for adult learners of new languages.
But television was also blamed for learning declines among young people. Research in the 1980s showed that the impact of television on children’s learning depends on the type of programming and time spent watching. Fred Rogers, who testified before the US Senate about funding for public television in 1969, was horrified by the “bombardment” of fast-paced cartoons. He used research in child development to design Mister Rogers’ Neighborhood. To make the one-way communication of television more interactive, he made eye contact, showed conflict resolution among characters, and welcomed letters from children.
Even so, Mister Rogers’ Neighborhood could not respond directly to an individual child’s needs. Books and television are educational technologies that scale, but they do not personalize learning.
The high cost of personalized learning at scale
I research AI and other computational writing technologies, and I also direct an English Composition program that serves 7,000 students a year. First-year composition is a required course for most college students across the United States. Composition programs have wrestled with the challenge of teaching writing at scale for over 100 years.
Composition courses generally have smaller class sizes: under 25 students, and ideally under 15. College writing centers support students in these and other writing classes through one-on-one tutoring. Research shows that students learn better in smaller classes and in tutoring, in part because they are more deeply engaged. Although effective, these personalized learning formats are also more expensive to run.
One solution to the problem of cost is to pay teachers of smaller classes less. Teachers of writing tend to be at the bottom of the university pay scale, and many are contingent faculty or lack benefits.
Another solution to the high cost of personalized learning is technology.
Technologies for personalized learning
Education researcher Audrey Watters tells a story of technologies for personalized learning that begins in the 1920s. At the American Psychological Association meeting in 1924, Sidney Pressey showcased an “automatic teacher” he had built out of typewriter parts that asked multiple-choice questions. Pressey also designed tests to measure intelligence, the same kinds of tests that were used in the American military and that are still used to sort students in school. These personalized learning technologies helped to reinforce conformity in institutions.
In the 1950s, the psychologist B. F. Skinner designed “teaching machines” based on the principles of reinforcement he developed in research with pigeons. Educators broke concepts down step by step and then programmed the machine to present these incremental steps to students. If a student answered a question correctly, the machine advanced to ask about the problem’s next step. If the student answered incorrectly, they stayed on that step of the problem until they got it right.
The idea behind these teaching machines was to break problems down into small enough parts that it was nearly impossible to get the next step wrong. Students would get positive feedback from the machine for their correct answers, and gain confidence as well as skill in the subject.
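The control logic of these machines is simple enough to sketch in a few lines of code. Here is a minimal sketch in Python, with hypothetical lesson content and names of my own invention; Skinner’s machines were mechanical, not software, but the loop amounted to this: advance on a correct answer, repeat the step otherwise.

```python
# A minimal sketch of Skinner-style programmed instruction.
# The lesson content and names here are hypothetical illustrations;
# real teaching machines were mechanical devices, not software.

# Each step pairs a small prompt with its expected answer.
LESSON_STEPS = [
    ("2 + 2 = ?", "4"),
    ("4 + 4 = ?", "8"),
    ("8 + 8 = ?", "16"),
]

def run_lesson(steps):
    for prompt, expected in steps:
        # The learner stays on a step until they answer correctly.
        while True:
            answer = input(prompt + " ").strip()
            if answer == expected:
                print("Correct!")   # immediate positive reinforcement
                break               # advance to the next incremental step
            print("Try again.")     # repeat the same step

if __name__ == "__main__":
    run_lesson(LESSON_STEPS)
```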
Research on the success of this “behaviorist” teaching approach was inconclusive. The machines fell out of favor in the late 1960s because they were difficult to implement. Skinner himself lost credibility, and teachers were not fond of the machines. And, as Watters notes, perhaps worst of all, students found them boring.
In the 2000s, another experiment in mass education emerged: MOOCs, massive open online courses. These courses delivered video and quizzes through the web. The New York Times declared 2012 “The Year of the MOOC” because the technology promised so much for democratizing education. But as Elizabeth Losh notes in her book, MOOCs and Their Afterlives: Experiments in Scale and Access in Education, the lack of interactivity and community in these courses meant that learners often dropped out and failed to complete the program.
Coursera, a popular platform for such courses, has since developed new ways to account for student performance to tailor content better. The term MOOC is rarely used now and courses cost students a few hundred dollars. Some platforms like Outlier have paired up with colleges to offer college credit.
These changes increase incentive and accountability for students. Yet the behaviorist approach of 1960s programmed instruction is still present in the shallow interactions that students have with materials and in the lack of community in these courses. These educational approaches scale, but it’s unclear whether they are a cost-effective means of learning.
How is personalized learning with AI different?
Salman Khan, the founder of Khan Academy, writes that “AI tutors can personalize and customize coaching, as well as adapt to an individual’s needs while hovering beside our learners as they work” (110). His new AI-driven educational platform, Khanmigo, is the subject of his book Brave New Words: How AI Will Revolutionize Education (And Why That’s a Good Thing).
Khanmigo is designed to mimic one-on-one tutoring, which the mass delivery of television and books can’t do, and which the stilted interactions of 1960s programmed instruction and 2010s MOOCs never quite delivered.
Educational publisher Pearson is integrating AI into its educational materials, making them more responsive to student needs. The company says its interactive platform “allows instructors to scale teaching excellence.” The platforms MyLab and Mastering can quiz students on concepts and point to additional resources, similar to Khanmigo. Over 1,000 universities are adopting these materials for Fall 2024.
Does this mean that AI can finally solve the problem of personalized learning at scale?
Perhaps. But every technology that has promised learning at scale has fallen short of its hype. With AI, we will need to pay attention to the quality of learning and what it is replacing.
For example, if AI replaces the social relationships of teachers and students, what are students missing? Students in crisis turn to trusted adults like teachers and coaches. Will they turn to their AI tutors instead?
Student data privacy may also be an issue with AI-driven platforms. These platforms collect tons of data on student performance, which could be used or sold. Some of the more popular platforms are based in China, which has few data privacy protections.
Scale itself is also a potential problem. Any good teacher will influence their students. What happens when that teacher is an AI, scaled up for thousands or millions of students? Will we see diversity of thought diminished? Could students be manipulated en masse?
The idea of an AI tutor in every pocket sounds exciting. I would love to learn physics from Richard Feynman. But these concerns and the mixed history of personalized learning at scale should temper the hype about AI as a perfect solution for teaching at scale.
See the published version on The Conversation’s website.
