As AI agents become cheap, tireless attackers, the old assumptions behind corporate cybersecurity are breaking down.
Thursday, January 29, 2026
AI has made hacking cheap. That changes everything for business


Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition…How AI is making cyberattacks cheap for hackers…U.S. lawmaker says Nvidia helped DeepSeek hone AI models later used by China’s military…Dow Chemical to cut 4,500 employees in AI overhaul…Inside Anthropic’s plan to scan and dispose of millions of books.

One of my ongoing fixations in AI is what it’s doing to cybersecurity. Two months ago in Eye on AI, I quoted a security leader who described the current moment as “grim,” as businesses struggle to secure systems in a world where AI agents are no longer just answering questions, but acting autonomously.

This week, I spoke with Gal Nagli, head of threat exposure at $32 billion cloud security startup Wiz, and Omer Nevo, cofounder and CTO at Irregular, a Sequoia-backed AI security lab that works with OpenAI, Anthropic, and Google DeepMind. Wiz and Irregular recently completed a joint study on the true economics of AI-driven cyberattacks. 

Bargain-priced AI-powered cyberattacks
They found that AI-powered hacking is becoming incredibly cheap. In their tests, AI agents completed sophisticated offensive security challenges for under $50 in LLM costs — tasks that would typically cost close to $100,000 if carried out by human researchers paid to find flaws before criminals do. In controlled scenarios with clear targets, the agents solved 9 out of 10 challenges modeled on real-world attacks, showing that large swaths of offensive security work are already becoming fast, cheap, and automated.

“Even for a lot of seasoned professionals who have seen both AI and cybersecurity, it has been genuinely surprising in what we didn’t think AI would be able to do and that models will be able to do,” said Nevo, who added that there has been a big jump in capabilities in just the past few months. One area is models’ ability to stay on track through multi-step challenges without losing focus or giving up. “We’re seeing more and more that models are able to solve challenges that are genuine expert level, even for offensive cybersecurity professionals,” he said.

This is a particular problem now because, in many organizations, non-technical professionals in fields like marketing or design are bringing applications to life using accessible coding tools such as Anthropic’s Claude Code and OpenAI’s Codex. These are people who are not engineers, Nagli explained. “They don’t know anything about security, they just develop new applications by themselves, and they use sensitive data exposed to the public Internet, and then they are super easy to exploit,” he said. “This creates a huge attack surface.”
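Nagli’s point is easy to picture. Here is a minimal, hypothetical sketch of the kind of app a coding assistant might hand a non-engineer — the names and data are invented for illustration, and the flaw is exactly the one he describes: sensitive records served to the open internet with no authentication at all.

```python
# Hypothetical sketch: the kind of app a non-engineer might generate with a
# coding assistant. It works, but it serves every customer record to anyone
# who finds the URL -- no authentication, no rate limiting, no audit log.
from flask import Flask, jsonify

app = Flask(__name__)

# Sensitive data hard-coded for illustration; in practice this is often a
# real database or spreadsheet export wired in by the coding tool.
CUSTOMERS = [
    {"id": 1, "name": "Alice Example", "email": "alice@example.com"},
    {"id": 2, "name": "Bob Example", "email": "bob@example.com"},
]

@app.route("/api/customers")
def list_customers():
    # No auth check: an automated agent scanning for open /api/ routes
    # can enumerate this data for pennies.
    return jsonify(CUSTOMERS)

if __name__ == "__main__":
    # Binding to 0.0.0.0 exposes the app beyond the local machine.
    app.run(host="0.0.0.0", port=8080)
```

An assistant would happily add an authentication check if asked; the problem is that someone who, as Nagli puts it, doesn’t know anything about security never thinks to ask.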

Cost is no longer an issue for hackers
The research suggests that the cat-and-mouse game of cybersecurity is no longer constrained by cost. Criminals no longer need to carefully choose their targets if an AI agent can probe and exploit systems for just a few dollars. In this new economic landscape, every exposed system becomes worth testing. Every weakness becomes worth a try.
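The arithmetic behind that claim is simple. Here is a back-of-the-envelope sketch using the study’s cost figures; the $10,000 payoff is an illustrative assumption, not a number from the research.

```python
# Attacker economics, back of the envelope. Cost figures are the study's
# headline numbers; the payoff is an assumed value for one compromised target.
human_cost_per_attempt = 100_000  # approx. cost of equivalent human effort
ai_cost_per_attempt = 50          # LLM cost per attempt in the Wiz/Irregular tests
payoff_if_successful = 10_000     # assumed, for illustration

# An attempt is rational when p_success * payoff > cost,
# i.e. when p_success > cost / payoff.
breakeven_human = human_cost_per_attempt / payoff_if_successful
breakeven_ai = ai_cost_per_attempt / payoff_if_successful

print(f"Human break-even success rate: {breakeven_human:.1%}")   # 1000.0% -- never worth it
print(f"AI-agent break-even success rate: {breakeven_ai:.1%}")   # 0.5% -- almost always worth a try
```

At human prices, a $10,000 payoff can never justify a $100,000 attempt; at $50 an attempt, a success rate of one in two hundred already pays — which is why indiscriminate probing suddenly makes economic sense.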

Under more realistic conditions, the researchers did see performance drop and costs double. But the larger takeaway remains: attacks are getting cheaper and faster to launch, and most companies are still defending themselves as if every serious attack requires expensive human labor.

“If we reach the point where AI is able to conduct sophisticated attacks, and it’s able to do that at scale, suddenly a lot more people will be exposed, and that means that [even at] smaller organizations people will need to have considerably better awareness of cybersecurity than they have today,” Nevo said. 

At the same time, that means using AI for defense will become a critical need, he said, which raises the question: “Are we helping defenders utilize AI fast enough to be able to keep up with what offensive actors are already doing?”

With that, here’s more AI news.

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

AI IN THE NEWS

DeepMind opens access to its world-building AI with Project Genie. Google DeepMind is offering early access to its Project Genie, an experimental prototype that lets users generate and explore interactive, AI-created worlds in real time. Rolling out today to Google AI Ultra subscribers in the U.S., the web-based tool is powered by Genie 3 and allows people to sketch environments with text or images, navigate them as they evolve dynamically, and remix existing worlds into new ones. Unlike static 3D scenes, Genie generates what lies ahead as you move, simulating physics and interactions on the fly—a step toward the kind of general-purpose systems DeepMind believes are needed for AGI. The company frames Project Genie as both a creative playground and a research testbed for understanding how people might use world models across media, simulation, and AI development, while acknowledging limitations like short generation windows and imperfect realism.

U.S. lawmaker says Nvidia helped DeepSeek hone AI models later used by China's military. According to a letter seen by Reuters, a U.S. lawmaker is saying that Nvidia provided technical assistance to Chinese AI startup DeepSeek that helped DeepSeek improve the efficiency of its models—allowing them to be trained with far fewer GPU hours than typical U.S. frontier models—and that those models were later used by China’s military, raising fresh concerns about AI technology transfers to Beijing. Representative John Moolenaar, chair of the House Select Committee on China, cited internal Nvidia documents showing engineers helped optimize algorithms and hardware, and argues the episode underscores the need for stricter export controls and enforcement to prevent American AI technology from being repurposed for military use by potential adversaries. Nvidia responded that it would be unreasonable to think China’s military depends on U.S. technology, and the Commerce Department and DeepSeek did not comment.

Dow Chemical to cut 4,500 employees in AI overhaul. The Wall Street Journal reported that Dow Chemical will cut 4,500 jobs under its “Transform to Outperform” program, a sweeping cost-cutting effort that leans heavily on AI and automation to boost productivity and returns, as the company grapples with a widening quarterly loss driven by lower revenue and higher costs.