Here's this week's free edition of Platformer: a look at the twin forces pushing social platforms to eliminate infinitely scrolling feeds and make other changes to keep kids safer. We'll soon post an audio version of this column: Just search for Platformer wherever you get your podcasts, including Spotify and Apple. Want to kick in a few bucks to support our work? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about a viral Reddit hoax. Plus you'll be able to discuss each day's edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice. You'll also get access to Platformer+: a custom podcast feed in which you can get every column read to you in my voice. Sound good?
I.

Today let’s talk about an important lawsuit against the platforms that begins this week, a related new investigation by the European Commission, and why the infinite-scroll apps that dominate the lives of so many teenagers might soon be a thing of the past. (For kids, at least.)

The old way of thinking about how to make social platforms safer was that you had to make them do more content moderation. Hire more people, take down more posts, put warning labels on others. Suspend people who posted hate speech or incitements to violence, or who led insurrections against their own governments.

At the insistence of lawmakers around the world, social platforms did all of this and more. But in the end they had satisfied almost no one. To the left, these new measures hadn’t gone nearly far enough. To the right, they represented an intolerable infringement of their freedom of expression.

Earlier in their existence, the social platforms had experimented with having principles of their own, rooted in expert opinion about the promotion of human rights. But this had proven costly, in terms of lawsuits filed against them by governments around the world; and dangerous, as the more authoritarian governments realized they could force the social networks to appoint local representatives and then throw them into prison when human rights conflicted with the government’s objectives.

And so, as 2025 dawned, the platforms adjusted course. Except where required by law, they would no longer seek to build new and more effective forms of content moderation. And in the United States, human rights principles would take a backseat to the question that increasingly dominated policy discussions inside tech companies: what does the Trump administration want us to do?

What this approach lacked in moral virtue it made up for in effectiveness. During the 2024 campaign, President Trump threatened to throw Meta CEO Mark Zuckerberg in prison; by mid-2025, Trump was championing Meta’s interests around the world. TikTok should have shut down in the United States after ByteDance failed to divest it by the deadline set by Congress; Trump granted the company a series of unconstitutional delays via executive order to give himself the time he needed to transfer its ownership to his allies. Google donated $1 million to Trump’s inaugural fund and then watched the Andreessen Horowitz wing of the Republican party push a deregulation agenda for AI around the world.

This state of affairs might have held for a while longer, if not for an inconvenient truth recognized by both Republicans and Democrats: some significant number of children experience a wide range of harms on these platforms, and no amount of public pressure had managed to force meaningful change.

Whether time spent on social media worsens mental health problems for young people at the population level remains bitterly contested. Studies generally find weak effects from social media when you zoom out to the level of an entire population. And yet it’s also true that millions of children are harmed on social platforms every year. They are bullied and harassed by their peers; they are introduced to groomers and predators; they tumble down rabbit holes leading them to eating disorders and self-harm; they fall victim to sextortion schemes. A steady drumbeat of notifications and “streaks” anxiety disrupts their sleep, makes them anxious, and causes them trouble in school. Screen time “nudges” are easily swiped away.
The child deletes the app, only to reinstall it days later after being beset with FOMO. She will feel bad about herself for what she perceives as a failure, unaware that whole teams at each platform are dedicated to increasing the amount of time that users like her spend on the platform. Nor will she understand just how good they are at their jobs.

For a long time, the platforms have gotten away with this on free speech grounds. What, you’re going to tell us we can’t rank posts in a feed? What, you’re going to tell someone how many posts they can view? “Social media addiction” is a media invention, they’ll say. There’s no proven causal link between using apps like these and mental health harms. And in any case, Section 230 of the Communications Decency Act prevents them from being held liable for what other users post on their platforms. Don’t like that video celebrating eating disorders? Take it up with the person who posted it.

And all of this mostly worked, because good democracies protect free expression. But by the mid-2020s, almost everyone knew both adults and children who struggled to regulate their usage of social apps and suffered as a result. The problem was almost never an individual act of speech on the platform. Rather, it was the way the products were designed.

Regulators and plaintiffs’ attorneys began new investigations into whether social apps might be held liable not for what people say on them, but rather for how they work. Increasingly, it appears they will be.

II.

Several critical lawsuits are coming to trial this year alleging that the platforms have enabled widespread harm to young people. Opening statements for the first of them will take place in Los Angeles County Superior Court this week. Here’s the Associated Press:

Instagram’s parent company Meta and Google’s YouTube will face claims that their platforms deliberately addict and harm children. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.
At the core of the case is a 19-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury and what damages, if any, may be awarded, said Clay Calvert, a nonresident senior fellow of technology policy studies at the American Enterprise Institute.
It’s the first time the companies will argue their case before a jury, and the outcome could have profound effects on their businesses and how they will handle children using their platforms.
As the AP notes, the lawsuit seeks to sidestep questions of immunity under Section 230 by focusing on questions of exploitative product design. “Borrowing heavily from the behavioral and neurobiological techniques used by slot machines and exploited by the cigarette industry, Defendants deliberately embedded in their products an array of design features aimed at maximizing youth engagement to drive advertising revenue,” it says.

Meta and Google deny the claims. Meta put up a long blog post accusing the lawsuits of “oversimplif[ying] a serious issue.” “Despite this complexity, plaintiffs’ lawyers have selectively cited Meta’s internal documents to construct a misleading narrative, suggesting our platforms have harmed teens and that Meta has prioritized growth over their well-being,” the company said. “These claims don’t reflect reality. The evidence will show a company deeply and responsibly confronting tough questions, conducting research, listening to parents, academics, and safety experts, and taking action.”

YouTube offered a blog post of its own. Among other things, it says autoplay is disabled by default on YouTube videos for teens. “In collaboration with youth, mental health and parenting experts, we built services and policies to provide young people with age-appropriate experiences, and parents with robust controls,” spokesman José Castañeda told me over email. “The allegations in these complaints are simply not true.”

Not every social media trial generates such strong pushback. That Google and Meta have said as much as they have is a reflection, I think, of how serious this issue is. Note that TikTok and Snap have already settled the case that begins trial in LA this week, without commenting on the settlements. And the KGM case is only the first of dozens of similar cases faced by the platforms around the country. (Forty attorneys general have signed on to one of them, aimed at Meta.)

Meanwhile, the European Commission has arrived at conclusions similar to those of US plaintiffs’ lawyers. Here’s Adam Satariano at the New York Times:

On Friday, the regulators released a preliminary decision that TikTok’s infinite scroll, auto-play features and recommendation algorithm amount to an “addictive design” that violates European Union laws for online safety. The service poses potential harm to the “physical and mental well-being” of users, including minors and vulnerable adults, the European Commission, the 27-nation bloc’s executive branch, said in a statement.
The findings suggest TikTok must overhaul the core features that made it a global phenomenon, or risk major fines. European officials said it was the first time that a legal standard for social media addictiveness had been applied anywhere in the world.
“TikTok needs to change the basic design of its service,” the European Commission said in a statement.
TikTok, for its part, called the findings “categorically false and entirely meritless.” The company will be given a chance to respond to the allegations in detail. But if found liable, it could be fined up to 6 percent of its global revenue under the Digital Services Act.

III.

It’s hard to predict the outcome of any individual trial or regulatory proceeding. But in their shared point of view and sheer volume, design-based critiques of social platforms have gathered unusual force. It’s rare to see plaintiffs' lawyers in Los Angeles, European regulators in Brussels, and attorneys general across both red and blue states all arrive at the same conclusion. But they have here.

Some countries, of course, are going even further. France advanced a bill to bar social media for anyone under 15; Australia has already banned it for under-16s. Spain has gone further still, proposing an under-16 ban plus criminal liability for tech executives. In such a world, eliminating the infinite scroll and other engagement-maxing features may come to be seen as the moderate position.

So what might that look like? The European Commission tried to sketch it out. In its preliminary findings against TikTok, regulators suggested that the platform should disable infinite scroll, make its screen time limits more robust, and make unspecified changes to its recommendation algorithms.

How far will this go in making teens’ lives better? As always, it depends on the individual child. But with 7 percent of children ages 12 to 15 spending between four and five hours a day on TikTok, and a commission finding that kids spent more time on TikTok after midnight than on any other platform, it’s clear that the app has a powerful hold on Europe’s kids. And you don't have to believe that TikTok causes depression to believe that an app that regularly keeps a 13-year-old scrolling past midnight is not working in her interest.

Of course, Instagram Reels and YouTube Shorts work in similar ways. And so, whether on the stand or before the commission, I hope platform executives are called to answer: if you did want to make your products addictive, how different would they really look from the ones we have now?

The platforms will surely fight back. They have to — infinitely scrolling, user-generated content is the business model. (And they have all those new friends in the Trump administration who might be able to help.) But they are arriving at the fight in a weaker position than usual. In a polarized world, their failures around child safety are increasingly the one thing that partisans of every stripe can agree on. Regulators are no longer impressed by the bare minimum. (They have teenagers of their own now, and all the screen-time battles that come with them.)

I don’t know which trial or regulatory action will be the one that finally forces major changes to social platforms for teenagers. But it seems increasingly clear that change is in fact coming. And for the first time, some subset of users will find that the feed they are scrolling through suddenly comes to an end.

Elsewhere in social media trials: Another high-profile trial, this one against Meta, began in New Mexico, where the company is accused of failing to protect children from sexual predators. "Prosecutors say they’ll present evidence that Meta knew that some 500,000 inappropriate interactions with children take place daily on its platforms, and that the company doesn’t adequately track those interactions," the AP reports.
Sponsored

Your Skills Could Shape the Future of AI

AI is evolving at breakneck speed — and the risks are growing just as fast. We’re not ready for what’s coming. 80,000 Hours has spent nearly a decade researching the biggest threats from advanced AI, long before ChatGPT made headlines. They believe this could be one of the most important challenges of our time — and they need people with all kinds of skills to help. Whether you’re into policy, safety research, governance, or another field entirely, you can be part of the solution. Their free career guide goes beyond the “follow your passion” clichés, giving you concrete, research-backed steps to build a career that truly matters. Everything is free because they’re a nonprofit. The only goal: help you use your career to solve global problems. Curious how your skills could shape the future of AI?

Following

AI comes for Super Bowl ads

What happened: This year’s Super Bowl ads (and the discourse around them) were dominated by AI. Brands ranging from big tech companies to retail businesses jumped at the chance to promote their latest AI products and air AI-generated ads.

As previewed last week, Anthropic aired its ad taking a veiled jab at OpenAI’s decision to bring ads to ChatGPT, which sparked a public feud with OpenAI CEO Sam Altman. (He called Anthropic’s ad “clearly dishonest.”) The ad that aired did feature a change from the original tagline that made it less of a direct shot at OpenAI. Instead of “ads are coming to AI. But not to Claude,” the new tagline said “there is a time and place for ads. Your conversations with AI should not be one of them.” (So are they coming to Claude or ...?)

Elsewhere, vodka brand Svedka aired a creepy 30-second ad that featured two robots dancing at a club, which it touted as the first Super Bowl ad “primarily” generated by AI. Silverside AI, which generated Svedka’s ad, was also behind Coca-Cola’s recent AI-generated holiday commercials, which sparked backlash online for resembling AI slop. (Pepsi took aim at Coca-Cola in its own Super Bowl ad, which featured a CGI Coca-Cola polar bear doing a blind taste test and choosing Pepsi.) (Disclosure: my boyfriend is a VFX artist and worked on the polar bear in the Pepsi Super Bowl ad. Platformer boyfriends are really doing the most.)

Why we’re following: The Super Bowl represents a good chance to check in with the cultural zeitgeist — particularly those parts of it that can afford to spend $8 million on a 30-second spot. Unfortunately, this means we were inundated with ads about AI, prediction markets, and crypto.

What people are saying: “My takeaway from the super bowl ads is that the entire american economy is being propped up by AI, weight loss drugs, cryptocurrency and gambling,” Axios congress reporter Andrew Solender posted on X. Others were creeped out by home security company Ring’s ad, which promoted a feature for locating missing pets: “every commercial was ‘gamble your life away, AI will live it for you. We’re watching you,’” wrote @zaydante in an X post that garnered more than 800,000 views.

Others prodded at Salesforce’s odd decision to feature YouTuber MrBeast in its ad, considering the very different demographics of its customer base and his audience. “My 9-year old and all his friends are creating Salesforce accounts right now. And they’re all making cold calls to B2B decision makers and generating SQLs for enterprise SaaS companies,” @bradcarryvc joked. “Mr. Beast just created 1 billion new CRM users.”

AI fatigue emerged as a topic of discourse.
“Super bowl commercials so evil this