Following AI news over the last few days has been even wilder than usual.
A few days ago, Sam Altman was suddenly ousted from his leadership role at OpenAI, where he was not only CEO but also the company’s spokesperson and most recognizable voice. No matter how folks feel about OpenAI, no one saw this coming. So there was a lot of social media refreshing and popcorn-eating. Reasons are murky even now, but they might be related to a conflict of interest involving a GPU manufacturing startup, disagreements over the pace of product releases and AI safety, or even accusations of abuse from his sister. At any rate, he’s back now. More popcorn!
I don’t have much to say about that, except that I do like popcorn and it’s fun to find a reason to write.
What’s interesting to me are all the examples of failed leadership here: Sam Altman’s outsized leadership/spokesperson role at OpenAI; the unusual status of the board and OpenAI’s (non)profit structure; how the board overstepped its role and failed to loop in Microsoft’s leaders; the threats from most of OpenAI’s employees to quit unless the board resigned; the rapid cycling through interim CEOs over a few short days; and, to be very meta about all this, the discussion about failures of leadership.
We’re now used to crises in leadership: politics, public health, banks, universities, everywhere. Crises are often related to failures of control: the American Republican party lost control of its presidential nomination in 2016; the CDC failed to control people’s responses to the Covid-19 pandemic; and walk-outs, whistleblowing, protests, and unionization movements all reveal that people in leadership roles aren’t always in control. That mismatch makes it hard to be a leader, or at least one who wants control.
That desire for control is central to leadership in tech, and that’s what makes this whole fiasco about OpenAI so perfect. The board thought that because it led OpenAI, it controlled it.
Nope. Bloomberg columnist Matt Levine nails the gap between leadership and control at OpenAI in a pair of diagrams:
No matter what the org chart says, MONEY is power. Microsoft doesn’t lead OpenAI, but as we’ve seen, it controls OpenAI. As Satya Nadella told Kara Swisher a couple of days ago, the position of Microsoft was always secure:
We have all the rights and all the capability. I mean, look, right, if tomorrow, if OpenAI disappeared […] we can go and just do what we were doing in partnership ourselves. And so we have the people, we have the compute, we have the data, we have everything.
Understanding this gap between leadership and control in tech helps to explain conversations about AI safety. Who will regulate AI? is a question of leadership, but How can AI be regulated? is a question of control. The latter question is more crucial.
In part because folks in tech know better than anyone that the tech is hard to control, the fantasy of control has been a dominant theme in computer design since at least the 1950s. While never perfect, these systems aspired to complete control: command-and-control systems like the SAGE missile defense system; cybernetics and its feedback loops; expert systems that apply a defined rule set and knowledge base to control outcomes; and so on.
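To make that contrast concrete, here’s a minimal sketch of the expert-system style of control, in Python. The rules and fact names are invented for illustration; this isn’t a reconstruction of any historical system. The point is the shape of the thing: a fixed rule base forward-chained over known facts, where the same inputs always yield the same conclusions.

```python
# A toy forward-chaining "expert system": a fixed rule set applied to known
# facts. The rules and fact names are invented for illustration.

RULES = [
    # (condition over the current facts, conclusion to add)
    (lambda f: "missile_detected" in f and "friendly" not in f, "raise_alert"),
    (lambda f: "raise_alert" in f and "operator_confirms" in f, "intercept"),
]

def infer(facts):
    """Apply the rules repeatedly until no rule adds a new conclusion."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if condition(facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Deterministic by construction: the same facts always produce the same outcome.
print(infer({"missile_detected"}))                       # adds 'raise_alert'
print(infer({"missile_detected", "operator_confirms"}))  # adds 'intercept' too
```

That determinism is the whole point of such systems: given the rules and the facts, the outcome is fully under control.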
People are hard to control, too, and so the desire to control people through technology—whether by routing around them, surveilling them, or replacing them—is also a consistent theme in computer history. Nathan Ensmenger points to early ads for computers and operating systems that promise to replace (women) secretaries—who inconveniently distract (men) employees and are prone to pregnancy. Simone Browne traces a history of attempts to control Black people, especially through technologies of surveillance.
Which brings me to current control fantasies around AI: a utopia where everyone gets to be a manager. Generative AI does a lot of the work of junior employees and human assistants: editing, data processing, ghostwriting, illustration. GenAI models are faster, cheaper, and, more importantly, easier to control. While Gen Z employees are demanding remote work, higher wages, and more flexible benefits, GenAIs such as ChatGPT, Microsoft Copilot, and Claude are simply helpful, honest, and harmless.
As a Gen Xer myself, I get the appeal! But I think we should be wary of the fantasy of control—especially when it comes to AI. As it plays out in science fiction, the idea of complete control often doesn’t turn out well for our heroes. Sci-fi teaches us that control is always a fantasy.
The concern about AI safety is a story about control: as AI gets more powerful, will we still be able to control it? Will leaders of AI companies be able to keep it in check, or align it with human values? And that whole existential risk thing—WILL AI KILL US ALL??
But never mind x-risk: AI is already out of our control when we sit down at the ChatGPT terminal or the Midjourney Discord. Generative AI is stochastic: its outputs are sampled from a probability distribution, which makes them both statistical and random. Sure, the most accessible models are trained with human feedback and guardrailed. But we can’t really know what the AI will generate in any given interaction. This makes AI different from most other computational technologies, where the computer does exactly what it’s told. Even as individual users, we can’t control AI outputs. Can we manage it, like the nice Microsoft man suggests above? Can we lead AI? I don’t know.
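Here’s what stochastic means in practice, as a minimal sketch. The vocabulary and probabilities below are invented, and a real model computes its distribution from billions of parameters, but the final step, randomly sampling the next token from a temperature-scaled distribution, is how generation actually works.

```python
import random

# A made-up next-word distribution standing in for a language model's output.
NEXT_WORD_PROBS = {
    "leaders": 0.4,
    "boards": 0.3,
    "markets": 0.2,
    "nobody": 0.1,
}

def sample_next_word(probs, temperature=1.0):
    """Rescale the probabilities by temperature, then draw one word at random."""
    # p ** (1/T) is equivalent to dividing the model's logits by T.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# The same "prompt" can complete differently on every run.
for _ in range(3):
    print("AI is controlled by", sample_next_word(NEXT_WORD_PROBS, temperature=0.8))
```

Run it a few times and you’ll get different answers from identical inputs. That irreducible randomness is what separates generative AI from the computer that does exactly what it’s told.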
Leading people is a story about control. That is, if you’re going to be a leader, you have to tell people a story about control and then have them believe it—at least enough. You’re in charge. It’s all under control. Leadership is often a confidence game.
So Nadella’s repeated statements to Swisher about how Microsoft remains in control are as much about owning OpenAI’s IP rights as they are about telling a convincing story about control. He reassures us about a dozen times in that short interview that Microsoft’s customers are fine and safe. He sure won’t let “surprises” like that happen again. Microsoft has it all under control.
Remember that org chart above, upended by Microsoft’s MONEY? There’s another key way that AI is out of control: MONEY. Generative AI is already embedded in the workflows of powerful, decentralized, and diverse businesses, all of which have a vested interest in it. AI has been a business strategy for years, but the release of ChatGPT widened adoption and increased investment. Any attempt to control generative AI—to regulate it—is going to have to contend with MONEY. I’m not sure that we can get it under control.
So, mulling over the recent OpenAI drama, I’m reminded that control isn’t synonymous with leadership, but also that control is a fantasy—in leadership as well as in AI.
*Thanks to Nathan McKenzie for feedback on this draft, and to Gayle Rogers for sharing the Bloomberg diagrams.