Here's your second free edition of Platformer for the week: an on-the-ground account of a protest outside OpenAI headquarters in San Francisco and what it means for the nascent anti-AI movement. We'll soon post an audio version of this column: Just search for Platformer wherever you get your podcasts, including Spotify and Apple. Actual shoe-leather journalism like this is increasingly rare in tech media. Would you consider supporting us? If so, upgrade your subscription today. We'll email you all our scoops first, like our recent one about a viral Reddit hoax. Plus you'll be able to discuss each day's edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice. You'll also get access to Platformer+: a custom podcast feed in which you can get every column read to you in my voice. Sound good?
SAN FRANCISCO — “1, 2, 3, 4, we don’t want a robot war!” The chant echoed outside OpenAI’s headquarters, where a small throng of protesters had gathered on Tuesday to warn of a looming AI surveillance state. “5, 6, 7, 8, no AI surveillance state!”

A handful of what I assume were OpenAI employees intermittently walked in and out of the building, studiously avoiding eye contact. As bleak as the slogans were, the juxtaposition of the mundane and the apocalyptic had a certain dark comedy to it.

The demonstrators were on hand to protest the dramatic turn of events that saw OpenAI take Anthropic’s place as the United States’ lead vendor for military AI services. On Thursday, the Pentagon said it had notified Anthropic that it considers the company a “supply chain risk,” a designation previously reserved for foreign adversaries. OpenAI is now reportedly back at the negotiating table with the Pentagon, seeking to add yet more safeguards to its contract after an earlier effort failed to reassure the public. (Anthropic is said to be continuing to negotiate with the military as well.)

The protest was the latest in a wave of anti-AI demonstrations around the country. Last week, organizers said up to 500 people marched outside the headquarters of DeepMind, OpenAI, and Meta. Last month, 200 people marched on Virginia’s state capital to protest the data centers being built in the state.

For OpenAI, initial doubts about the strength of the company’s safeguards against the use of its technologies for domestic surveillance and autonomous weapons have metastasized into a full-blown PR crisis. On Tuesday, the company’s dealmaking prompted a spur-of-the-moment protest that drew about three dozen people to its San Francisco headquarters. (OpenAI did not respond to a request for comment.)

Heading into the protest, I wondered what sort of person would show up — and what they would demand.
Over the past week, Reddit posts about quitting ChatGPT have drawn tens of thousands of upvotes — and the commenters’ talking points ranged widely, from “You're now training a war machine. Let's see proof of cancellation” to the also very common “I do like to celebrate any sort of downfall for OpenAI.” People were typing up a storm about the AI bubble, despising Trump, data privacy, protecting artists, and data centers’ water and electricity costs.

Would the OpenAI protesters present as anti-AI Luddites? Hippies worried about data centers’ effect on local water supplies? Quarter-zip-wearing technocrats with a sworn allegiance to Claude? I took out my notebook and pen, and started introducing myself to strangers.

One organizer I spoke with was an anonymous man wearing a cardboard robot mask, who identified himself as “The Last Friendly Robot.”

The Last Friendly Robot. (QuitGPT / X)

I asked him why he was here. Well, he told me: “Who wants mass surveillance? Only the Epstein class.” (Epstein himself was not known as a defender of surveillance, of course — he did his evil deeds on a private island. The robot’s point was that billionaires often build powerful surveillance technologies while simultaneously going to great lengths to maintain their own privacy.)

The robot told me he was even more disturbed by the idea of AI-powered weapons that could “kill with no conscience, and no human in the loop.” He’d worked in tech, and knew people at the big AI companies, he said. He felt the new contract wasn’t true to the promise of tech: “This isn’t what we came to the Valley to do. We came to build tech that makes people free!”

The robot was echoing a sentiment that rang throughout the protest: that OpenAI’s employees are complicit in something bad. “Quit your job!” — another protest chant — was also written in chalk on the street, one of many colorful drawings protesters made outside headquarters.
A protester writes “No more AI” on the sidewalk outside OpenAI headquarters on Tuesday. (QuitGPT / X)

I talked to a young woman named Perrin Milliken, who works at a climate nonprofit. She told me she had been concerned about “data centers, and how they consume so much water and electricity” in local communities. (While data centers can sometimes strain the supply in water-stressed areas, experts have pointed out that commonly reported statistics about how much water AI uses are too high, sometimes by orders of magnitude. The electricity issue is real, though, as evidenced by recent AI lab commitments to cover price hikes that would otherwise be borne by consumers.)

While she hadn’t been following the DoD issue closely, Milliken told me, OpenAI’s recent actions concerned her. “It was scary how quickly they were fine with whatever the government wanted,” she said. The week’s events had pushed her to switch from ChatGPT to Gemini. You “can’t be complacent in this moment,” she told me, a moment rife with “very powerful, wealthy people who don’t have the interests of people in mind.”

Most of the people I spoke to at the protest told me they use AI regularly — daily, even. Many said they had recently switched away from ChatGPT. And in peak San Francisco fashion, several told me they hadn't needed to “QuitGPT” at all: they already had an unshakeable loyalty to Claude.

One of the few true AI abstainers present was Rick Girling, a retired economics and history teacher. He told me he had become concerned about AI after reading Karen Hao’s Empire of AI, a critical history of Sam Altman and OpenAI. Also, he said, “I hate billionaires.” (Several protesters were handing out a petition for taxing billionaires — a measure headed to the California ballot.) “I hate the notion that people with so much money can tell us what to do, and they’re not accountable,” Girling added.
OpenAI's embrace of the military reminded him of Elon Musk's takeover of Twitter, he said, and of how X's recommendation algorithm gradually came to steer people to the right. X deserved at least some of the blame for Trump’s re-election, he said. Surveillance and autonomous weapons felt like another manifestation of the billionaires' anti-democratic agenda. AI tools concern him because, as he put it, “it’s much easier to kill someone if you don’t see them.”

Girling pointed at a protest sign with a picture of a humanoid robot and told me that he increasingly feared “stuff like this.” I told Girling that the picture he had just pointed at wasn’t real — it was an AI-generated meme of Sam Altman and Pete Hegseth posing with the Terminator. He laughed. “Oh my god,” he said. “It looks so real!”

Protesters’ signs, with the AI-generated Sam Altman-Terminator-Pete Hegseth meme obscured in the background. (Michaël Trazzi / X)

A software engineer named River Bellamy told me that what most concerned him was the Pentagon’s move to crush Anthropic, which he said OpenAI had been complicit in. Hegseth’s move to retaliate after Anthropic didn’t agree to his terms is “third-world dictator behavior,” he said. “People have the right to say no to a contract.” (OpenAI has said it objects to Anthropic’s supply chain risk designation. But plenty of protesters told me they nonetheless saw the company’s decision to accept the defense contract as a betrayal.)

Protesters’ worries extended well beyond surveillance and murderbots. David Krueger, a computer science professor who believes AI could cause human extinction, offered a dire message to his fellow protesters. “It will kill everybody,” he said.
“If it doesn’t kill everybody, we will lose power slowly, and we will die slowly.”

Krueger’s comments underscored the way the QuitGPT protesters’ concerns about the effects of AI ran the gamut: massive job loss, environmental harms, effects on education and human relationships, billionaires’ consolidation of power, government overreach, and even human extinction. Some expressed worries about OpenAI’s increasingly close relationship with the Trump administration; others had less partisan concerns. Some could quote me niche details about OpenAI’s contract with the DoD, while others had only seen a headline or two.

Krueger told me this was exactly what he was interested in forming: a “broad anti-AI coalition.”

But as I left the protest, I found myself wondering whether there was really a coalition here at all. The group gathered outside OpenAI’s office on Tuesday presented itself not so much as a movement as the first draft of one. The people who showed up seemingly all hated the idea that OpenAI might help to normalize military uses of a technology the company itself had once balked at. Beyond that, though, their politics were all over the place.

And it made me doubt how effective the nascent anti-AI movement could be, particularly given the massive increase in lobbying that OpenAI and its peers have undertaken as public opinion sours on AI. Pro-AI political action committees have raised almost $200 million to date, more than double what pro-regulation groups have. Meta alone plans to spend $65 million in the midterm elections to support pro-AI candidates, the most it has ever spent on an election.

And beyond a stated dislike for OpenAI, what did these people really have in common? Krueger named one common theme: “Nobody has ever wanted killer robots.”

Except that the military does seem to want killer robots, and for the most part AI companies have been eager to help build them. Google still has its DoD contract.
Even Anthropic isn’t fully against them — it recently participated in a Pentagon autonomous drone swarm contest. (Dario Amodei’s stated objection to murderbots at this moment is that Claude is not yet reliable enough to operate them.)

To me, this felt like the most uncomfortable aspect of Tuesday’s protest. There is seemingly no one in the current administration, and no AI company, that shares the values of the demonstrators. AI is reshaping the world with only minimal input from many of the people who will be affected most. The protesters felt largely powerless to shape the path of a technology that might result in their surveillance or even their death.

Still, as the protest wound down, I watched people joking with each other, taking pictures of each other’s signs, exchanging contact information. Despite everything, the QuitGPT protesters seemed to be having fun.

For the moment, QuitGPT feels less like a coherent boycott than an early test of whether and how public unease about AI might be turned into organized politics. Anthropic’s clash with the Pentagon and OpenAI’s decision to sign its own deal have given that unease a focal point. The group demonstrating outside OpenAI this week isn’t yet a coalition. But it might give us a hint about how one begins.

On the podcast this week: Kevin and I talk through the latest developments between Anthropic, OpenAI, and the Pentagon. Then, we investigate how prediction markets are making a bad situation even worse in Iran. And finally, the Times' Arijeta Lajka joins us to discuss the flood of surreal and sometimes disturbing AI slop that YouTube is feeding children through Shorts. Is it Elsagate all over again?

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Sponsored

Each month, NewsGuard distills its latest AI reporting into a briefing on where and how AI systems fail. From red-teaming breakdowns to deepfakes and manipulated media, it helps organizations understand how AI risks are evolving — and why they matter.
Already trusted by 11,000+ subscribers. See what they’re reading.