
The U.S. Department of Defense is close to cutting business ties with Anthropic and declaring the company a "supply chain risk"—a remarkable move that could deal a serious blow to the $380 billion AI startup's business—according to a bombshell exclusive published by Axios on Monday.
The situation apparently follows months of tough negotiations between the Pentagon and Anthropic over how the military may use the company's AI tools. While Anthropic's Claude is already used by the military (and is currently the only AI available in its classified systems, according to Axios), Anthropic CEO Dario Amodei is adamant that the technology cannot be used in weapons that operate without human involvement, or to spy on Americans.
That's not sitting well with the DoD, which Axios says wants AI companies to allow their tech to be used for "all lawful purposes." The Defense Department is also negotiating with Google, Meta, and xAI.
While losing its contract with the DoD wouldn't cause Anthropic too much pain, the supply chain risk designation could inflict serious harm: it would mean any other company doing business with the Pentagon could no longer use Anthropic's products.
Just last week, Anthropic announced that it had attained a $14 billion annual revenue run rate as business customers, including eight of the Fortune 10, use its Claude AI tools. The San Francisco company, which has raised $30 billion in funding, is considered a top candidate for an IPO in the coming year.
That gives the DoD significant leverage as it tries to force Anthropic's hand. And for anyone concerned about the limits and safeguards society places on AI, it raises real questions about whether for-profit AI companies can control how their creations are used.
—AO