Exclusive: AI cybersecurity startup RunSybil, founded by OpenAI’s first security hire, secures $40 million funding led by Khosla Ventures

(SeaPRwire) – RunSybil, a cybersecurity startup whose AI agents autonomously test company software for security flaws, has raised $40 million in venture capital funding.

Khosla Ventures led the funding round, with participation from S32, Anthropic’s Anthology Fund, Menlo Ventures, Conviction, Elad Gil, and angel investors including Nikesh Arora, Amit Agarwal, Jeff Dean, plus other founders and leaders from firms like OpenAI, Palo Alto Networks, Stripe, and Google.

The company declined to disclose the valuation it reached in the new round.

The startup’s AI agent, Sybil, conducts ongoing autonomous penetration tests on live applications—identifying, exploiting, and documenting real security vulnerabilities without human intervention. This differs from other high-profile security tools like Claude Code Security, which analyzes application source code for known vulnerabilities before deployment.

Instead, RunSybil tests software that’s already operational, probing live systems the way a hacker would: by exploring environments, chaining vulnerabilities together, and testing authentication boundaries to find paths to sensitive data.

Automating ‘ethical hacking’

For years, companies have relied on a mix of penetration tests (where external security experts, or “ethical hackers,” attempt to breach their systems), bug bounty programs that reward independent hackers for reporting flaws, and internal “red teams” that simulate real cyberattacks. RunSybil says its AI system can automate much of this work, continuously probing applications for vulnerabilities as new code is rolled out.

RunSybil argues this automation is becoming necessary as AI reshapes business operations. Procurement, legal, finance, engineering, and operations are all being rebuilt with AI—including the growing use of AI agents. Yet security testing is often still treated as a discrete, scheduled task managed by a separate team on its own timeline. This mismatch is especially challenging for highly regulated industries like finance, insurance, and healthcare, which face strict legal and audit requirements for cybersecurity.

RunSybil was co-founded in 2023 by Ari Herbert-Voss (OpenAI’s first security research hire, joining in 2019) and Vlad Ionescu (who previously led offensive security red teams at Meta). Together, they say they represent a rare intersection: expertise in building frontier AI systems and hacking complex software.

“We check every box needed for auditors, regulators, and compliance teams,” Herbert-Voss said. But the real work, he added, is transforming how, when, and where customers discover and fix security issues: “Not as a project, but as a permanent capability embedded in how they build.”

‘On the edge’ of the AI security frontier

Vinod Khosla—who made an early investment in OpenAI in 2019 and often backs companies he sees as operating at the technological frontier—said that “adding security and penetration testing to the AI world is definitely frontier work, and RunSybil is on the edge.” He noted there’s currently little competition in this segment of the offensive security market, though incumbents like Palo Alto Networks may eventually enter the space.

For now, “nobody’s really knowledgeable about this except individuals like [Herbert-Voss],” he said, adding he’s long been concerned about AI’s cyber capabilities falling into the hands of adversaries like China. “We invest in founders who tackle large, unsolved problems with technically ambitious solutions,” he continued. “[Herbert-Voss and Ionescu] are building exactly the platform security teams will need as software complexity and AI-driven development accelerate.”

Herbert-Voss has long been immersed in both hacking and AI. Growing up in a mostly Mormon community in Utah, he was drawn to the online hacker scene in middle and high school but pivoted away after friends “started getting arrested.” While pursuing a Ph.D. at Harvard University studying machine learning and ways to make algorithms more efficient, he first learned about OpenAI.

He dropped out of Harvard, he said, after becoming convinced that the rapid scaling of AI models—training larger systems with more data and computing power—would unlock powerful new capabilities.

Evolving cyber capabilities with LLMs

“Once OpenAI released GPT-2, I thought, this changes everything about the economics of running a cyber campaign,” he explained. He sent a couple of hacker demos to OpenAI CEO Sam Altman and Jack Clark (then head of policy at OpenAI, who later co-founded Anthropic). Both expressed concerns about the potential misuse of LLMs and invited Herbert-Voss to join the company to work on security research.

By 2022, Herbert-Voss said he also began to see how quickly offensive cyber capabilities could evolve once powerful language models became widely available—including to malicious actors. Those same advances, he noted, could dramatically expand cyber threats. This led him to leave OpenAI and start RunSybil as a research project.

RunSybil currently works with startups including Cursor, Turbopuffer, Notion, Baseten, and Thinking Machines Lab, as well as major financial institutions and Fortune 500 companies (the company declined to name these Fortune 500 or financial customers). Herbert-Voss said customers have already reported finding critical vulnerabilities that went undetected using traditional methods.

This article is provided by a third-party content provider. SeaPRwire (https://www.seaprwire.com/) makes no warranties or representations regarding its content.