Granola MCP (Sponsor)

Take your meeting context to new places

If you’re already using Claude or ChatGPT for complex work, you know the drill: you feed it research docs, spreadsheets, project briefs... and then manually copy-paste meeting notes to give it the full picture. What if your AI could just access your meeting context automatically?

Granola’s new Model Context Protocol (MCP) integration connects your meeting notes to your AI app of choice. Ask Claude to review last week’s client meetings and update your CRM. Have ChatGPT extract tasks from multiple conversations and organize them in Linear. Turn meeting insights into automated workflows without missing a beat. Perfect for engineers, PMs, and operators who want their AI to actually understand their work. Use the code SCOOP

The rivalry between frontier AI labs just went from competitive to confrontational. In an internal memo, Dario Amodei, CEO of Anthropic, accused rival OpenAI of staging what he called “AI safety theater” in connection with its recently announced partnership with the United States Department of Defense.

The memo, reportedly 1,600 words long and sent via Slack on March 6, 2026, laid out Amodei’s criticism of how OpenAI structured safeguards in its Pentagon deal, and revealed a deeper ideological divide between the two companies over how artificial intelligence should be governed in military settings. What began as a disagreement over contract language has evolved into a high-stakes clash over AI safety, political influence, and the future of government partnerships in the AI era.

Inside the Memo That Ignited the Dispute

Amodei’s message to employees didn’t mince words. He described OpenAI’s defense partnership as “maybe 20% real and 80% safety theater,” suggesting that the safeguards promoted publicly were largely symbolic.
According to Amodei, OpenAI’s communications about the negotiations contained “outright falsehoods” and “gaslighting,” framing Anthropic as an unreasonable partner that refused to engage constructively. He also claimed that OpenAI had portrayed Anthropic as:
Amodei characterized that narrative as part of a broader pattern associated with OpenAI CEO Sam Altman, one that allegedly positions competitors as obstacles whenever they push for stronger safety constraints.

The Pentagon’s “Supply Chain Risk” Label

The dispute escalated after the Pentagon labeled Anthropic a “supply chain risk.” The designation followed Anthropic’s refusal to sign contract terms that would have allowed its AI models to be used for “any lawful purpose.” Anthropic instead requested explicit contractual prohibitions against two controversial applications:
The company wanted those restrictions written directly into the legal agreement, not just implemented through technical safeguards. Shortly afterward, OpenAI reportedly accepted a similar defense deal but chose a different approach: embedding safeguards inside the AI systems themselves rather than requiring explicit contractual limits.

To Amodei, that difference is critical. His argument is that technical safeguards can be modified, bypassed, or overridden, while contractual restrictions provide legal accountability. Anthropic is now challenging the Pentagon’s “supply chain risk” designation, arguing that the classification may have been applied improperly, without a full risk assessment or congressional notification.

Politics Enters the AI Debate

The memo also touched on the growing political dimension of the AI race. Amodei pointed out that OpenAI president Greg Brockman had reportedly made a $25 million donation to political groups aligned with Donald Trump. He contrasted this with Anthropic’s refusal to engage in what he described as “dictator-style praise” toward political leaders.

While the remark was controversial, it highlights a broader reality: AI companies increasingly operate within geopolitical and political power structures. As AI becomes a strategic national asset, government relationships may influence everything from contracts to regulation.

OpenAI Pushes Back

OpenAI leadership quickly pushed back against the criticism. Sam Altman reportedly dismissed the timing of Amodei’s memo as “opportunistic and sloppy.” From OpenAI’s perspective, embedding safeguards within AI systems may actually be more effective than relying on legal language alone.

This reveals a fundamental philosophical divide between the two companies:

Anthropic’s model: write safety restrictions into legally binding contract terms, so that limits carry legal accountability beyond the technology itself.

OpenAI’s model: embed safeguards within the AI systems themselves, enforcing limits technically rather than contractually.

Both companies claim to prioritize AI safety, but their methods and assumptions about risk differ significantly.