AI has made hacking inexpensive. That changes everything for businesses

Welcome to Eye on AI, presented by AI reporter Sharon Goldman. In this edition…How AI is making cyberattacks more affordable for hackers…A U.S. lawmaker's claims about who assisted DeepSeek in refining AI models later employed by China's military…A chemical company is set to cut 4,500 employees in an AI-related overhaul…An inside look at Anthropic's plan to scan and dispose of millions of books.
One of my persistent interests in the field of AI is its impact on cybersecurity. Two months ago in Eye on AI, I cited a security leader who characterized the current situation as "grim": businesses are struggling to safeguard their systems in a world where AI agents are no longer merely answering questions but acting independently.
This week, I spoke with Gal Nagli, head of threat exposure at the $32 billion cloud security startup Wiz, and Omer Nevo, co-founder and CTO of Irregular, a Sequoia-backed AI security lab that collaborates with OpenAI and Anthropic, among others. Wiz and Irregular recently completed a study on the actual economics of AI-driven cyberattacks.
Low-cost AI-powered cyberattacks
They discovered that AI-powered hacking is becoming extremely inexpensive. In their tests, AI agents completed complex offensive security challenges for less than $50 in LLM costs, tasks that would typically cost nearly $100,000 if carried out by human researchers who are paid to find vulnerabilities before criminals do. In controlled scenarios with well-defined targets, the agents solved 9 out of 10 attacks modeled on real-world scenarios, indicating that large portions of offensive security work are already becoming fast, affordable, and automated.
"Even for many experienced professionals who are familiar with both AI and cybersecurity, it has been truly astonishing to see what we thought AI wouldn't be capable of and what the models can actually do," said Nevo, adding that capabilities have improved significantly even in just the past few months. One change: AI models can now stay focused on multi-step challenges without losing concentration or giving up. "We're increasingly observing that models can solve challenges that are at a genuine expert level, even for offensive cybersecurity professionals," he said.
This is a particular concern right now because, in many organizations, non-technical professionals, such as those in marketing or design, are using accessible coding tools like Anthropic's Claude Code and OpenAI's Codex to build applications. These are not engineers, as Nagli explained. "They have no knowledge of security. They simply create new applications on their own, and they use sensitive data that is exposed to the public Internet, making them very easy to target," he said. "This creates a vast attack surface."
Cost is no longer a hurdle for hackers
The research suggests that the cat-and-mouse game of cybersecurity is no longer constrained by cost. Criminals no longer need to carefully select their targets if an AI agent can probe and exploit systems for just a few dollars. In this new economic environment, every exposed system is worth testing. Every vulnerability is worth exploiting.
In more realistic, real-world conditions, the researchers did notice a decline in performance and a doubling of costs. However, the main point remains: attacks are becoming cheaper and quicker to launch. And most companies are still defending themselves as if every serious attack requires expensive human labor.
“If we reach the stage where AI can conduct sophisticated attacks on a large scale, suddenly many more people will be at risk, which means that even in smaller organizations, people will need to have significantly better awareness of cybersecurity than they do now,” Nevo said.
At the same time, this means that using AI for defense will become a crucial necessity, he said, which raises the question: “Are we enabling defenders to utilize AI quickly enough to keep up with what offensive actors are already doing?”
With that, here’s more AI news.
Sharon Goldman
sharon.goldman@
@sharongoldman