Hey there,
Anthropic has really been on a roll lately...
...but building the best AI models has a dark side, even if you genuinely care about ethics and safety.
Case in point: the upcoming release of a new model, codenamed Mythos, which Anthropic believes will make it much easier for hackers to attack banks and other important infrastructure.
It's indisputable that Anthropic is a leader in AI safety and ethics (relative to its peers), as well as the creator of many of this year's coolest new releases, like Claude Cowork.
If you're a bit more of a devil's advocate like me, you may also notice that Anthropic is comfortable creating killer robots so long as there are a few limitations. That makes it look good compared to those who want no limitations at all, but still extreme relative to the average person's standards.
Anthropic's leadership spends a lot of time talking about how their products are going to hurt people. The approach seems to be:
- Announce a new release
- Talk about how it's going to be so powerful that it destroys something important (jobs, banks, the software industry, tank warfare)
- Release that thing after a short waiting period
To some, 'dangerous' means 'profitable'
You can see the same playbook happening this month with Mythos, the company's newest and best AI model. It has not been released yet because Anthropic is giving banks and governments preview access so they have time to prepare for any cybersecurity threats that might arise from a model that is superhumanly good at finding holes in corporate firewalls and critical operating systems.
Since we've seen Anthropic move through this announce-scare-release cycle for a couple years now, I want to dig a little deeper into the skeptic's reading of the company.
The claim:
Anthropic is exaggerating the risk associated with its products...
...because they think creating risky and dangerous products is impressive.
Before I consider whether this is true, I want to note that it is very unusual for a claim like this to even be plausible.
Most companies want their products to seem less risky. AI companies are in a very weird position because their current audience is power users and investors, not the average consumer. This group (of which I consider myself a somewhat reluctant member) tends to interpret "dangerous" as "powerful," which in turn reads as "profitable."
Nowhere in that target audience is the median everyday person. That's why Anthropic can promote something ("our products are dangerous") that most companies would spend a lot of money to keep secret.
As a counterexample, take sports gambling. The rapid spread of online sports betting is extremely dangerous and has zero possible positive outcome for society. But the beneficiaries of lax gambling regulation spend a ton of money obscuring that fact and hiring LeBron James to endorse their addictive products. The products are dangerous, so the companies try to make money while obscuring those dangers.
Unlike sports gambling, an excellent AI model brings with it both risk and reward. The question is which one outweighs the other. I think the Anthropic executives are broadly correct when they say that computing advancements will result in cyberattacks becoming easier. It is generally a good thing that they are taking steps to help important infrastructure providers like banks get ahead of those potential future attacks.
Talking Their Own Book
At the same time, Anthropic leaders are talking their own book, which means that "the superficially neutral view they give of the market is designed to promote the value of their own [investments]." They are making statements that seem factual and neutral but that also always promote their product in the eyes of investors – who, again, see "dangerous" as a rough synonym for "good investment."
If you forced me to predict the future, I'd say that 10 years from now we will look back on Anthropic and its leadership as:
- Great inventors and "better at building AI" than their competitors at OpenAI
- Uniquely good at straddling the line between AI danger and AI ethics
- Early movers that ultimately blended into the crowd as AI matured
Now, if you own stock in an early mover you can make a lot of money. That's why the investor audience loves the catnip of "we're about to release the most dangerous/powerful AI model ever."
However, the missing piece is this:
In 5-10 years, the things that scare us about Anthropic Mythos will be so commonplace that they no longer stand out as anomalies.
By the same token, dozens of companies will have released tools that are equally or more powerful.
(If you need more proof of this, look no further than quantum computing, which isn't even AI, but is expected to break most of today's public-key encryption at some point in our lives.)
So, the narrative here really is not about Anthropic, per se. Anthropic is one salient example of an industry-wide, and really humanity-wide, trend. The trend is that computing is getting better, faster and cheaper at a rate that will make it very difficult for normal-speed institutions to adapt.
So it's not about what you (or a bank or a government) will do when one new AI model comes out that can do impressive or shocking things. Instead, you need to be thinking about a 5-10 year timeline where dozens of those models exist and they are a commodity rather than a novelty. It changes everything that computers do, and since computers touch almost everything, it changes a lot about everyone's daily life.
The 10-Year View: Yahoo or Google?
When you understand how significantly and quickly computing is changing, it also makes Anthropic and OpenAI look like good short-term investments but very tenuous in the long term. Can Anthropic outlast literally every other company in the world for long enough to become an AI dynasty? Maybe, maybe not. Google pulled off that feat in the last 25 years. Yahoo tried to do the same thing and failed spectacularly. No one knows if Anthropic will be the next Yahoo or not.
So investors are forced to guess at (or gamble on, depending on your perspective) future winners and losers. Anthropic's leaders are highly incentivized to feed the scare/hype machine in the hope of inspiring investors to bet on them. At the same time, they're working really hard to win a race against their competitors – which means creating technology that is as dangerous / powerful / impactful / helpful / world-changing (depending on your perspective) as it can possibly be.
Until next time,
– Rob, CEO of Innovating with AI