Sam Altman argues the world must adopt sweeping reforms akin to the New Deal to address superintelligence’s profound impact, while critics counter that the proposals amount to cover for regulatory nihilism.

(SeaPRwire) – OpenAI suggests a global reevaluation of systems, from taxation to work schedules, to prepare for the profound societal shifts anticipated with superintelligence technology—the point where AI systems surpass the capabilities of the most intelligent humans.

On Monday, OpenAI released a 13-page paper titled “Industrial Policy for the Intelligence Age,” stating its intention to “kick-start” discussions with a “slate of people-first policy ideas.” How much readers should trust OpenAI’s statements and motivations, however, is a central question for many. The paper’s release coincided with The New Yorker publishing the findings of an eighteen-month investigation into OpenAI, which raised questions about CEO Sam Altman’s credibility on various matters, including AI safety.

Authored by OpenAI’s global affairs team, the document outlines numerous expected economic impacts of superintelligence and proposes various strategies for addressing them. An introductory blog post clarified, “We offer them not as a comprehensive or final set of recommendations, but as a starting point for discussion that we invite others to build on, refine, challenge, or choose among through the democratic process.”

The document’s self-described “slate of ideas”—encompassing concepts from public wealth funds to shorter workweeks—may do little to reassure a public increasingly anxious and disillusioned by the rapid pace and consequences of AI-driven change. Lucia Velasco, a senior economist and AI policy leader at the D.C.-based Inter-American Development Bank and former head of AI policy at the United Nations Office for Digital and Emerging Technologies, noted that OpenAI is, of course, one of the least impartial parties in this ongoing discussion, which forms the core tension of the document.

“OpenAI is the most interested party in how this conversation turns out, and the proposals it advances shape an environment in which OpenAI operates with significant freedom under constraints it has largely helped define,” she commented. She added that this isn’t a reason to dismiss the document, but “it is a reason to ensure that the conversation it is trying to start does not end with the same company that started it.”

Nonetheless, Velasco emphasized that OpenAI is correct in asserting that governments are lagging in developing policy solutions. “Most are still treating AI as a technology problem when it’s actually a structural economic shift that needs proper industrial policy,” she stated. “That’s a useful contribution, and the document deserves to be taken seriously as an agenda-setting exercise, even if it’s a starting point.”

Soribel Feliz, an independent AI policy advisor who previously served as a senior AI and tech policy advisor for the U.S. Senate, concurred that OpenAI merits recognition for “putting this on paper.” She affirmed the accuracy of the acknowledgment that both U.S. institutions and safety nets are falling behind AI development and deployment, stating, “and the conversation needs to happen at this level at this moment.”

However, she stressed that much of what is being proposed is not novel: “Some of these pillars—‘share prosperity broadly, mitigate risks, democratize access’—have been the framework for every major AI governance conversation since ChatGPT came out in November 2022.

“I worked in the U.S. Senate in 2023–24, and we had nine AI policy fora sessions where all of this was said. I have it in my handwritten notes! All of this was already said, all of it,” she said in a direct message. “The language around public-private partnerships, AI literacy, and worker voice reads like it came out of a UNESCO or OECD AI policy framework report. The ideas are not wrong. The problem is the gap between naming the solutions and building real mechanisms to achieve them.”

Evidently, the paper’s intended audience is not OpenAI’s hundreds of millions of weekly ChatGPT users. Instead, it targets Beltway policymakers who have been advocating for AI regulation (or deferring it) in various forms since ChatGPT’s release in November 2022. In this regard, some observers noted it represents an improvement over previous efforts.

“I found this document to genuinely be a real improvement from previous documents that were even more floaty and high-level,” remarked Nathan Calvin, vice president of state affairs and general counsel of Encode AI. “I think some of the concrete suggestions around things like auditing or incident reporting and government restrictions on certain uses of AI are good ideas.”

However, he also drew attention to lobbying efforts led by OpenAI executives through the Leading the Future PAC, which advocates for policies favorable to the AI industry. Chris Lehane, OpenAI’s head of global affairs, is considered a driving force behind these initiatives, and OpenAI president Greg Brockman is the PAC’s largest donor.

“I hope this document signals a move toward more constructive engagement, instead of attacking politicians pushing the very policies OpenAI is now endorsing,” Calvin stated, specifically referencing Leading the Future’s lobbying against New York congressional candidate Alex Bores, who authored and was the primary sponsor of the RAISE Act, the New York AI safety and transparency law recently signed by Gov. Kathy Hochul.

Calvin has also accused OpenAI of employing intimidation tactics to undermine California’s SB 53, the California Transparency in Frontier Artificial Intelligence Act, during its debate. He further alleged that OpenAI used its ongoing legal dispute with Elon Musk as a pretext to target and intimidate critics, including Encode, which the company implied was secretly funded by Musk.

Still, while OpenAI CEO Sam Altman compared Monday’s set of policy ideas to the New Deal in an interview with Axios, some critics suggest it resembles a Silicon Valley thought exercise more than FDR-era legislation, and is unlikely to translate into action on its own.

For instance, Anton Leicht, a visiting scholar with the Carnegie Endowment’s technology and international affairs team, posted on X that, in reality, these ideas represent fundamental societal changes and significant political challenges. “They’re not just going to emerge as an organic alternative,” he wrote. “On that read, this is comms work to provide cover for regulatory nihilism.”

A more effective approach, he suggested, would be to redirect the AI industry’s political funding and lobbying capabilities towards advancing this type of policy agenda. However, he expressed limited optimism due to the document’s “vague nature and timing.”

This article is provided by a third-party content provider. SeaPRwire (https://www.seaprwire.com/) makes no warranties or representations regarding its content.

