TL;DR: For weeks, an unauthorized Discord group had access to Mythos Preview, the model Anthropic kept guarded from the public due to its dangerous cyberoffense capabilities, Bloomberg reported on Tuesday. Anthropic framed this gatekeeping as the cost of safety, but the incident raises questions about the effectiveness of a limited rollout that was pitched as an exercise in caution.

What happened: The Discord group told Bloomberg that they got their hands on Mythos on April 7. That’s the same day Anthropic revealed Project Glasswing, giving a handpicked group of 40-plus tech firms and other institutions exclusive access to Mythos for defensive purposes. The company says that while its goal is to eventually release Mythos-class models publicly, this specific preview model wouldn’t be generally available.

Anthropic told Bloomberg that it’s “investigating” the report of unauthorized access “through one of our third-party vendor environments,” but that there’s “no evidence” it impacted Anthropic’s system.

Meanwhile, Axios also reported on Tuesday that the Cybersecurity and Infrastructure Security Agency (CISA), tasked with protecting critical tech infrastructure in the US, wasn’t given Mythos access and wasn’t included in Glasswing. If the limited Mythos rollout was meant to shore up cybersecurity, that invites scrutiny of why CISA was left out.

The opaque Mythos guest list: While Anthropic says Glasswing was intended to give cyber defenders a “head start,” we don’t have a full list of who the company decided would make the cut. Confirmed partners include many of the Big Tech names you might expect. And despite the ongoing legal fight between Anthropic and the Trump administration, both random Discord users and the government are clearly interested in trying out the all-powerful AI: The White House is reportedly working on making a version of Mythos available to various agencies, with the National Security Agency already using it, according to Axios.
Guessing your way into the VIP section: For all the hush-hush around the exclusive cabal of sanctioned Mythos testers, the Discord group that apparently crashed the party didn’t need sophisticated exploits. They reportedly guessed the URL to reach the model based on Anthropic’s naming conventions. As for what they were doing with such a powerful AI cyber weapon: “building simple websites,” per Bloomberg. A member of the Discord server also claims to have access to other unreleased Anthropic models.

Bottom line: OpenAI’s own restricted model, GPT-5.4-Cyber, arrived about a week after Mythos. But such measured rollouts don’t work if the perimeter isn’t actually secure. If some Discord users found a way to use Mythos, there’s no guarantee other groups haven’t already squeezed valuable, potentially dangerous knowledge out of a highly capable AI model before relevant government agencies have a chance to check that everything’s secure.

—WK