Exclusive: Veteran Google DeepMind researcher David Silver quits to launch his own AI startup

David Silver—an esteemed researcher who played a pivotal role in several of Google DeepMind’s most notable breakthroughs—has departed the company to launch his own startup.

Silver is founding a new firm named Ineffable Intelligence, based in London, according to a source with firsthand knowledge of his plans. The company is actively recruiting AI researchers and pursuing venture capital funding, the source added.

Google DeepMind notified its employees of Silver’s exit earlier this month, the source stated. Silver had been on sabbatical in the months leading up to his departure and never officially resumed his role at the company.

A Google DeepMind spokesperson confirmed Silver’s departure in an emailed statement to . “Dave’s contributions have been irreplaceable, and we’re thankful for the impact he’s made on our work at Google DeepMind,” the spokesperson noted.

Silver could not immediately be reached for comment.

Ineffable Intelligence was established in November 2025, and Silver was appointed a director of the company on January 16, according to documents filed with the U.K. business registry Companies House.

Additionally, Silver’s personal webpage now lists his contact information under Ineffable Intelligence and provides an email address associated with the startup, though it still states he “leads the reinforcement learning team” at Google DeepMind.

Beyond his work at Google DeepMind, Silver holds a professorship at University College London (UCL), an affiliation he retains.

A Key Figure Behind DeepMind’s Most Significant Breakthroughs

Silver was among DeepMind’s first employees when the company launched in 2010; he had known co-founder Demis Hassabis since their university days. He played an instrumental role in many of the firm’s early milestones, including its landmark 2016 achievement with AlphaGo, which demonstrated that an AI program could defeat the world’s top human players at the ancient strategy game Go.

He was also a core member of the teams that developed AlphaStar, an AI program capable of beating the world’s best human players at the complex video game StarCraft II; AlphaZero, which mastered chess, shogi, and Go at superhuman levels; and MuZero, which learned to master multiple games better than humans despite starting with no knowledge of the games or their rules.

More recently, he collaborated with the DeepMind team that created AlphaProof, an AI system capable of successfully answering questions from the International Mathematical Olympiad. He is also a co-author of the 2023 research paper that introduced Google’s original Gemini family of AI models—now Google’s flagship commercial AI product and brand.

Pursuing a Path to AI ‘Superintelligence’

Silver has shared with friends that he wants to recapture the “awe and wonder of tackling AI’s hardest problems” and views superintelligence—defined as AI smarter than any individual human and potentially all of humanity—as the field’s largest unsolved challenge, per the source familiar with his thinking.

Several other prominent AI researchers have left established labs in recent years to launch startups focused on superintelligence. Ilya Sutskever, OpenAI’s former chief scientist, founded Safe Superintelligence (SSI) in 2024. That company has raised venture capital funding and is reportedly valued at as much as $30 billion. Some of Silver’s former colleagues from the AlphaGo, AlphaZero, and MuZero teams recently left to start Reflection AI, another startup targeting superintelligence. Meanwhile, Meta last year reorganized its AI efforts around a new “Superintelligence Labs” division headed by former Scale AI CEO and founder Alexandr Wang.

Moving Beyond Large Language Models

Silver is renowned for his work on reinforcement learning (RL)—a method of training AI models using experience rather than historical data. In RL, a model takes actions (typically in a game or simulator) and receives feedback on whether those actions help it reach a goal. Through repeated trial and error, the AI learns the optimal ways to achieve its objective.
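As a toy illustration of that trial-and-error loop, the sketch below trains a tabular Q-learning agent to walk along a short track toward a goal. Everything here—the environment, the rewards, the constants—is invented for this example; it is a cartoon of the technique, not code from any DeepMind system.

```python
import random

random.seed(0)  # make the illustration reproducible

GOAL = 4           # rightmost cell; reaching it ends an episode
ACTIONS = [-1, 1]  # step left or step right

def run_episode(q, eps=0.1, alpha=0.5, gamma=0.9):
    """One episode of tabular Q-learning with epsilon-greedy exploration."""
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit the best known action.
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = max(0, min(GOAL, state + action))
        reward = 1.0 if next_state == GOAL else 0.0  # feedback on progress
        # Nudge the estimate of this state-action's long-term value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
for _ in range(200):
    run_episode(q)

# After repeated trial and error, the learned policy steps right everywhere.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]
```

The agent is never told that the goal lies to the right; it discovers that purely from the reward signal, which is the core idea Silver champions.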

Silver has often been viewed as one of RL’s most uncompromising advocates, arguing that it is the sole path to creating artificial intelligence that can one day exceed human knowledge.

On an episode of the Google DeepMind podcast released in April, he said that large language models (LLMs)—the technology driving most of the recent excitement around AI—are powerful but limited by human knowledge. “We aim to go beyond what humans know, and to do that we need a different approach: one that requires our AIs to figure things out independently and discover new knowledge humans don’t possess,” he said. He has called for a new “era of experience” in AI centered on reinforcement learning.

Currently, LLMs undergo two development phases: pretraining (using unsupervised learning, where they consume massive text datasets to predict statistically likely next words) and post-training (which uses some RL, often with human evaluators reviewing outputs and providing feedback—sometimes just a thumbs up or down—to enhance the model’s helpfulness).
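The thumbs-up-or-down feedback loop described above can be caricatured in a few lines. In this hypothetical sketch, a “model” samples one of two canned replies, a simulated rater upvotes the helpful one, and the chosen reply’s score is nudged accordingly; the replies, the rater, and the update rule are all invented for illustration, not any production training procedure.

```python
import math
import random

random.seed(0)

replies = ["Sure, here's a clear answer.", "I dunno."]
scores = [0.0, 0.0]  # learned preference scores, one per reply

def choose(scores):
    """Sample a reply index with probability proportional to softmax(scores)."""
    weights = [math.exp(s) for s in scores]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(scores) - 1

for _ in range(500):
    i = choose(scores)
    thumbs_up = (i == 0)            # the rater always prefers the helpful reply
    reward = 1.0 if thumbs_up else -1.0
    scores[i] += 0.1 * reward       # nudge the chosen reply's score

print(scores[0] > scores[1])  # → True: the upvoted reply wins out
```

Note what the model learns here: it converges toward whatever the human rater prefers, which is exactly the ceiling Silver points to in the paragraphs that follow.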

But this training ultimately relies on human knowledge: pretraining uses past human learning and writing, while LLM post-training RL is rooted in human preferences. In some cases, human intuition can be incorrect or short-sighted.

For example, with move 37 in the second game of AlphaGo’s 2016 match against Go world champion Lee Sedol, the AI made an unconventional play that the human experts commentating on the game deemed a mistake. Yet it later proved critical to AlphaGo’s victory in that game. Similarly, human chess players often describe AlphaZero’s play as “alien”—yet its counterintuitive moves frequently turn out to be brilliant.

If human evaluators had judged such moves using the RL process in LLM post-training, they likely would have given them a thumbs down (as they appear to human experts as errors). This is why RL purists like Silver argue that to achieve superintelligence, AI must not only go beyond human knowledge but also set it aside—learning to reach goals from scratch using first principles.

Silver has stated that Ineffable Intelligence will aim to create “an endlessly learning superintelligence that self-discovers the foundations of all knowledge,” according to the source familiar with his plans.