Anthropic’s head of Claude Code on how the tool won over non-coders—and initiated a new era for software engineers

Anthropic has had a strong stretch of weeks. The lab is in the midst of a $10 billion fundraising effort that would value the company at $350 billion, and it’s also experiencing a viral product launch that most AI labs can only aspire to.
Claude Code, the company’s breakout success, is a coding tool that has captured the attention of users far beyond the software engineers it was built for. First released in February 2025 as a developer assistant, the tool has grown increasingly sophisticated and sparked a level of excitement rarely seen since ChatGPT’s debut. Nvidia CEO Jensen Huang has encouraged companies to adopt it for coding. One user noted it recreated a year’s worth of work in an hour. And users with no programming background have used it to book theater tickets, file taxes, and even monitor tomato plants.
Even at Microsoft, which sells GitHub Copilot, Claude Code has reportedly been widely adopted across major engineering teams, with non-developers encouraged to use it as well.
Anthropic’s products have long been favored by software developers, but after users pointed out that Claude Code functioned more as a general-purpose AI agent, the company created a version of the product for non-coders. Last week, Anthropic launched Cowork, a file management agent that is essentially a user-friendly iteration of the coding product. Boris Cherny, head of Claude Code at Anthropic, said his team developed Cowork in roughly a week and a half, largely leveraging Claude Code itself to handle the core work.
“It just seemed clear that Cowork was the next logical step,” Cherny said. “We aim to make it significantly easier for non-programmers.”
What distinguishes Cowork from Anthropic’s earlier general-use AI tools is its ability to take autonomous action instead of merely offering advice. The product can access files, control browsers via the “Claude in Chrome” extension, and manipulate applications, executing tasks rather than just suggesting how to perform them. For some general users, this is their first real taste of what agentic AI promises.
Many of its uses aren’t particularly glamorous, but they do save users hours. Cherny mentions using Cowork for project management, automatically messaging team members on Slack when they haven’t updated shared spreadsheets, and he’s heard of use cases like a researcher deploying it to sift through museum archives for basketry collections.
“Engineers now feel unburdened, no longer having to tackle all the tedious tasks,” Cherny said. “We’re starting to hear similar feedback about Cowork, where people say all the tedious work—shuffling data between spreadsheets, integrating Slack, organizing emails—it just handles it, letting you focus on the work you actually want to do.”
Enterprise first, consumer second
Despite the consumer buzz, Anthropic is positioning both products firmly in the enterprise market, where the company reportedly already leads OpenAI in adoption.
“For Anthropic, we’re an enterprise AI company,” Cherny said. “We develop consumer products, but our primary focus is on enterprise.”
Cherny noted this strategy is also guided by Anthropic’s founding mission centered on AI safety, which resonates with corporate clients concerned about security and compliance. In this context, the company’s roadmap for general-use products involved first developing robust coding capabilities to enable advanced tool use and “testing” products with technical customers. By providing capabilities to technical users through Claude Code before expanding to broader audiences, Cherny said the company builds on a proven foundation rather than starting fresh with consumer tools.
Claude Code is now used by Uber, Salesforce, and Snowflake, among others, according to Cherny. The product has found “a very strong product-market fit across various enterprise sectors,” he said.
Anthropic has also seen a surge in traffic thanks to Claude Code’s viral momentum. Claude’s total web audience has more than doubled since December 2024, and its daily unique desktop visitors are up 12% globally year-to-date, according to data from Similarweb and Sensor Tower.
The company also faces challenges tied to AI agents capable of autonomous action. Both products remain vulnerable to “prompt injection” attacks, in which attackers hide malicious instructions in web content to manipulate the AI’s behavior.
To address this, Anthropic has implemented multiple security layers, including running Cowork in a virtual machine and, after a user accidentally deleted files, adding deletion protection, a feature Cherny described as “quite innovative.”
But the company acknowledges the limitations of its approach. “Agent safety—securing Claude’s real-world actions—remains an active area of industry development,” Anthropic warned in its announcement.
The future of software engineering
With the rise of increasingly advanced autonomous coding tools, some worry that software engineer roles, especially entry-level ones, could decline. Even within Anthropic, some engineers have stopped writing code entirely, according to CEO Dario Amodei.
“I have engineers at Anthropic who say, ‘I don’t write any code anymore. I just let the model write it, and I edit it,’” Amodei stated at the World Economic Forum in Davos. “We might be six to 12 months away from the model handling most, perhaps all, of what software engineers do end-to-end.”
Tech companies argue these tools will democratize coding, enabling those with little to no technical skill to build products by prompting AI systems in natural language. And while it’s unclear whether the two are causally linked, and other factors are weighing on hiring, it’s true that open roles for software engineers have declined as adoption of generative AI has ramped up.
Time will reveal whether this signals a democratization of software development or the gradual erosion of a once-stable profession, but by bringing autonomous AI agents out of the lab and into daily work, Claude Code may accelerate the pace at which we find out.