Sam Altman Steps Down from OpenAI’s Safety Committee
Sam Altman, CEO of OpenAI, is stepping down from the company’s internal safety and security committee. This committee was established in May to advise the OpenAI board on critical safety and security issues related to the development of advanced artificial intelligence.
After a 90-day evaluation period, the committee presented its recommendations on September 16th. Its top recommendation was to establish independent governance for safety and security, and as a result, Altman will no longer serve on the committee. The newly independent committee will be led by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University and a recent addition to OpenAI's board. Other committee members include OpenAI board members Adam D'Angelo, Paul Nakasone, and Nicole Seligman. Alongside Altman, OpenAI board chair Bret Taylor and several of the company's technical and policy experts are also stepping down from the committee.
The committee also recommended enhancing security measures, being more transparent about OpenAI's work, and unifying the company's safety frameworks. It additionally plans to explore more collaboration with external organizations, similar to those OpenAI already uses to mitigate risks from dangerous capabilities.
The Safety and Security Committee is not OpenAI’s first attempt at independent oversight. OpenAI’s for-profit arm, established in 2019, is governed by a non-profit entity with a “mission-aligned” board. This board ensures the for-profit arm operates in line with the mission of developing safe and beneficial artificial general intelligence (AGI), a system surpassing human capabilities in most areas.
In November 2023, OpenAI's board dismissed Altman, citing a lack of candor in his communications that hindered the board's ability to fulfill its responsibilities. The decision sparked a backlash from employees and investors; Greg Brockman, the company's president, resigned in protest, and Altman was subsequently reinstated. Following these events, Helen Toner, Tasha McCauley, and Ilya Sutskever left the board, and Brockman later returned as president.
The incident highlighted a significant challenge for OpenAI as it rapidly expands. Critics, including former board members Toner and McCauley, argue that a formally independent board is insufficient to counterbalance the company's strong financial incentives. Earlier this month, reports suggested that OpenAI's ongoing fundraising efforts, which could value the company at roughly $150 billion, might require a change in its corporate structure.
Toner and McCauley believe that board independence alone is not enough and that governments must actively regulate AI. Even with the best of intentions, they argue, self-regulation without external oversight becomes unenforceable. They made this case in a May op-ed in The Economist reflecting on OpenAI's boardroom crisis.
While Altman has previously advocated for AI regulation, OpenAI lobbied against California's AI safety bill, SB 1047, which would mandate safety protocols for developers. Over 30 current and former OpenAI employees publicly supported the bill, contradicting the company's stance.
The establishment of the Safety and Security Committee in late May came amidst a tumultuous month for OpenAI. Ilya Sutskever and Jan Leike, the two leaders of the company's "superalignment" team, which focused on ensuring human control over AI systems that exceed human intelligence, resigned. In a post on X, Leike criticized OpenAI for prioritizing "shiny products" over safety. The team was disbanded following their departure. That same month, OpenAI faced criticism for asking departing employees to sign agreements that barred them from criticizing the company, on pain of forfeiting their vested equity. OpenAI later clarified that these provisions had not been and would not be enforced, and that they would be removed from future exit paperwork.