Here's this week's free edition of Platformer: a look at the Oversight Board at five. I spoke with a board co-chair and the academic who has studied the board most closely to ask: can its mission of bringing justice to Meta's user base be saved? Want to kick in a few bucks to support our work? You can think of it like a holiday present for us! If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about how Grok's porn companion is rated for children 12+ in Apple's App Store. Plus you'll be able to discuss each day's edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice.
I.

Five years ago this week, Meta's Oversight Board accepted its first cases. Together, they highlighted the company's global reach — cases originated in Malaysia, Azerbaijan, and Brazil, among other countries — and the high-stakes hair-splitting that Meta's content moderation apparatus attempts to navigate. When is it OK for a woman's nipple to appear on Facebook? Can you quote Goebbels, if it's actually a commentary on rising fascism in America? At what point does a veiled threat against the president of France become an incitement to violence?

Over the past half-decade, the Oversight Board has sought to make decisions like these more consistently, and in public. In more than 200 published decisions and 317 policy recommendations to Meta, the board has sought to draw brighter lines around what is and is not allowed.

The Oversight Board emerged from a series of crises, including the Rohingya genocide, Cambridge Analytica, and the larger backlash against Facebook following Donald Trump's election as president in 2016. At the time, CEO Mark Zuckerberg had the final say over the fate of every post on his platforms; the Oversight Board represented an effort to restore public trust by creating a check on that power.

A retrospective on its first five years, published by the board today, documents the results of its efforts, including a push to allow Iranian protesters to post "death to Khamenei" as political speech, and an agreement from Meta to tell users which specific rule a post violated when it removes one. The board also led an inquiry that resulted in Meta acknowledging that its over-moderation of Palestinian content in 2021 had an "adverse human rights impact" on Palestinians' free expression. Meta, for its part, has funded the board through the next two years.

At the same time, it seems likely that most users of Facebook and Instagram still have little to no idea that the board exists. The board was at its most prominent in 2021, when Meta asked it to consider whether Trump should be permanently banned for his actions related to the January 6 Capitol riots. But the board punted that decision back to Meta, and since then has largely faded from public view.

The board has done some good work. But it has taken on disappointingly few cases, and can sometimes take the better part of a year to render a decision, even when the post in question has credibly incited political violence. (I note that the board's press release today says it will release a YouTube video about its impact next week, suggesting it even missed the deadline for a promotional video it had five years to make.)

Over time, the founding promise of the Oversight Board — that it would serve as a kind of Supreme Court for content moderation, a judicial branch coequal to Meta's executive — has been revealed as a fantasy. And yet the board does push Meta on human rights issues, particularly outside the United States, and wins praise from civil society groups for giving them a place to channel their advocacy.

Almost everyone I've spoken to on the subject says that they are in some ways disappointed by the Oversight Board's performance. And — often in the same breath — they will tell me that it beats the alternative. Particularly during a year when Meta abruptly sidelined its policy team, empowering lobbyists to rewrite community guidelines on the fly to curry favor with the Trump administration.

As it enters its next five years, then, the board faces a moment of reckoning.
Its early dream that other platforms would hire it to do for them what it does for Meta has not yet come to pass. And what it accomplishes for the average Meta user isn't always clear.

Meanwhile, where Zuckerberg once saw the board as a shield against the threat of onerous regulation, he has since found in Trump a president who will happily advocate for his interests in exchange for a few million dollars and an embrace of the administration's culture war against transgender people, immigrants, and others.

"There was actually a period of time in which Mark thought it was both in his best interest and the right thing to do," said Kate Klonick, a law professor at St. John's University who has chronicled the Oversight Board's development, of Zuckerberg's mindset in 2020. But that was also a time when tech companies felt more vulnerable to US regulators, she said — "and part of that was because they couldn't buy the White House."

II.

Board members I've spoken with acknowledge the limitations in what they have been able to accomplish so far. (And politely duck the question of whether what Zuckerberg and Meta want from them has changed over the past five years.)

Making the board more effective might begin with letting go of the vision Zuckerberg laid out for it before its founding: as a Supreme Court that would create a series of binding precedents. Whatever power Meta has been willing to extend to its board, it has reserved the right to rewrite policy as it sees fit, and against the board's recommendations, often without bothering to explain its actions in any depth. As a result, the board's rulings on the relatively small number of cases that it hears have generally had limited impact.

"It's always been a little bit of a fiction that the individual decisions on pieces of content are themselves very meaningful," Paolo Carozza, a law professor at Notre Dame and co-chair of the Oversight Board, told me in an interview. "We all understand more and more that each case is only really meaningful and helpful as a tool for leveraging the board's influence if we really do a good job of linking the cases to systemic, wide issues."

Carozza told me he had never liked analogizing the Oversight Board to a court. Among other things, the analogy pushed the board to act like one — aloof and hesitant to comment on Meta's actions except when directly connected to a case or policy advisory. Courts, after all, do not comment on cases before they are decided, or issue statements when companies behave badly. Instead, they wait for cases to arrive and then rule narrowly on the facts before them. "I think that has constrained us," Carozza said.

This judicial posture may have granted the board a certain legitimacy, particularly in its first year. Klonick told me she has been impressed with the quality of thinking on display in the board's decisions. "The writing is more thoughtful" than she expected, she told me. "It's more legal and rigorous."

On the other hand, the events of 2025 have challenged the idea that the board is providing real oversight. A series of cascading revelations about the company has been met with only the most timid of statements by the board — or, more often, silence.
In January, after the company announced it would create new categories of permitted hate speech to impress the Trump administration, the board — which had been blindsided by Meta's announcement — issued a bizarrely upbeat statement saying it "welcome[d] the news that Meta will revise its approach to fact-checking, with the goal of finding a scalable solution to enhance trust, free speech and user voice on its platforms." When it was called to rule on two closely watched cases involving anti-trans rhetoric, a divided board ruled in favor of Meta's decision to leave the posts up. At least it engaged with that issue.

In the months since, we have learned that Meta's content policies let its AI bots engage in "sensual" roleplay with children. It let users create chatbots using celebrity likenesses without permission, which then made frequent sexual advances. Current and former employees testified that the company suppressed research on child safety in VR, and a judge said that Meta lawyers ordered staff to block or delete internal research into the mental health of teens on its platforms to reduce legal liability.

To cap off the year, we learned last month that Meta's internal projections showed it would earn $16 billion in 2024 from scams and ads for banned goods; that a third of all successful scams in the US take place on its platforms; and that users had to be caught attempting to traffic people for sex 17 times before their accounts would be banned.

Meta denies or disputes much of the above, even though a huge portion of those findings come from its own employees. At a minimum, though, these findings would seemingly demand some sort of response from a board entrusted with overseeing the human rights of Meta's billions of users. None, though, has been forthcoming.

"I totally agree with you, in a sense, that there's a lot more room for the board to speak more generally about issues," Carozza said. The board is planning stakeholder consultations on child safety issues, he noted, and hopes to say more about the subject in the coming year.

In the meantime, the board's silence risks looking like a comment on its own independence. Given that its existence relies on money from Meta — which funds it only a couple of years at a time — critics have long questioned how stridently the board would be willing to criticize its patron. The evidence from the first five years does not look great.

Klonick said the funding structure may be at fault. "It limits their willingness to push back," she said. "Because even for a lot of the people on the board, it's just a very nice paycheck, and they'd rather not give up that paycheck."

III.

That said, the board has also accomplished things that matter, Klonick told me.

"Before the board existed, it was real black magic for civil society, governments, every type of group to have a voice at these platforms when something happened," she said. "The one huge benefit of the board has been a mechanism to basically have a direct voice — a consolidated place to express signals and do it in a transparent way. So it's not just 'does anyone know anyone at Facebook? Let's call them.'"

She also pushed back on the court analogy from a different direction than Carozza. The board, she said, functions less like the US Supreme Court and more like a European inquisitorial body — "surveying as widely as possible the various kinds of values that they think they have to preserve."
Carozza noted — correctly — that the board's impact has been greatest outside the United States. The board often takes up cases that the US press would likely never write about, and which might otherwise have languished in Meta's unfathomable automated systems. "When you look at this at a global scale, and especially all across the global South," he said, "I think the value of the board is especially strong there."

The darker suggestion in that idea is that the board has been able to do the most good in regions where Meta's leadership cares the least. Through no fault of its own, the board operates in a world where an unspoken rule holds that Meta content policy must not damage the company's relationship with the US government. How do you provide "oversight" over that? Particularly when the Trump administration is now threatening to deny visas to any foreign worker who ever worked as a content moderator?

IV.

Despite her criticisms, Klonick told me that she isn't ready to give up on the Oversight Board experiment. Asked to grade the board, she landed on a reluctant C.

"This really did not meet my expectations," she told me. "But would I have changed it or decided not to do this at all? Absolutely not. I still think it was a project worth doing, and it's not completely without hope yet as a model."

Particularly because the need that the board was created to serve remains as great as ever.

"It's bad for government to control speech," Klonick said. "And it's bad for billionaires to control speech. And it was always really, really important for users to have a mechanism of direct impact and control."

At its best, the Oversight Board has been that mechanism. But it has too rarely been at its best. It is also only one actor in a broader tech ecosystem — and US government — that has retreated from the work of protecting human rights. Speaking about the board, Carozza told me, "It only makes sense in a larger ecosystem and culture of wanting to protect human rights and wanting to protect human dignity and doing the right thing."

Ultimately, I remain convinced that the board has been a useful experiment. But five years of pretending to be a court has not given Meta's user base much that you could call justice. If the board is serious about achieving that mission, it's time for it to try being something else.

On the podcast this week: Kevin and I go deep on OpenAI's Code Red, Gemini 3, and Opus 4.5. Then, it's time once again for the Hard Fork Review of Slop.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Sponsored

New: Grooming detection added!

Safer by Thorn is a purpose-built child sexual abuse material (CSAM) and exploitation (CSE) solution powered by trusted data and Thorn's issue expertise. The Safer Predict text classification model now detects messages and conversations that contain signs of suspected grooming. When indicators of sexual exploitation or abuse of a minor are detected, the model applies a "grooming" label and confidence score to each message.

Learn how Safer by Thorn can help you mitigate the risk of your platform hosting CSAM or being misused to sexually exploit children.

Following
The new LLM skepticism

What happened: AI researcher Yann LeCun made his first comments on Thursday about the world-model startup he's leaving Meta to found. And while he has long been a skeptic of the idea that large language models will take us all the way to superintelligence, his comments underscore a fresh flare-up of skepticism about LLMs among prominent practitioners and observers.

"Silicon Valley is completely hypnotized by generative models," LeCun said at the AI-Pulse event in Paris on Thursday. "So you have to do this kind of work outside of the Valley, in Paris."

Last week, on his podcast with OpenAI co-founder Ilya Sutskever, Dwarkesh Patel raised his frequent criticism that LLMs have shown little progress in continual learning: the ability for AI systems to learn from experience the way humans do, rather than having to be updated via tedious reinforcement learning methods. Sutskever agreed that new approaches are necessary, and faintly suggested that his company, Safe Superintelligence, may be working on them.

In a follow-up post today, Patel elaborates on the slow progress in continual learning and why he thinks it will delay the arrival of AGI by a "decade or two." "Either these models will soon learn on the job in a self-directed way — making all this pre-baking pointless — or they won't — which means AGI is not imminent," he wrote. And if people don't think continual learning is necessary for truly powerful AI, he argued, "people are underrating how much company- and context-specific skills are required to do most jobs."

This weekend at the AI conference NeurIPS, Turing Award winner Richard Sutton gave a talk on the same themes. "We need agents that learn continually. We need world models and planning. We need knowledge that is high-level and learnable," he argued. "As AI has become a huge industry, to an extent it has lost its way."

Why we're following: Despite the recent impressive performance gains in Google's Gemini 3 and Anthropic's Claude Opus 4.5, there's still plenty of skepticism in Silicon Valley about the ultra-short timelines of frontier lab CEOs. While those CEOs insist that AGI may be no more than a few hundred days away, other practitioners are highlighting the obvious gaps along the way.

What people are saying: AI researcher François Chollet agreed with Dwarkesh's argument. In a post on X, he said, "Either you crack general intelligence — the ability to efficiently acquire arbitrary skills on your own — or you don't have AGI."