Meta’s Strategy to Become an AI Infrastructure Leader and Its Commitment to a High-Investment Direction

Two weeks ago, CEO Mark Zuckerberg used his Threads social platform to introduce Meta Compute, a significant new initiative steered by the company’s highest-ranking leaders. This action reinforced Meta’s dedication to becoming a dominant force in AI infrastructure and indicated its resolve not to fall behind in the competitive expansion of data centers.

The purpose of this new entity is to acquire the enormous computational capacity—measured in gigawatts, a single one of which can power hundreds of thousands of homes—required for Meta’s push to develop AI models that achieve “superintelligence.”
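A rough back-of-envelope check shows why a gigawatt maps to “hundreds of thousands” of homes. The ~10,500 kWh annual household figure below is an illustrative assumption based on typical US residential consumption, not a number from this article:

```python
# Back-of-envelope: how many homes can one gigawatt of capacity supply?
# Assumption (illustrative, not from the article): an average US household
# uses roughly 10,500 kWh per year, i.e. about 1.2 kW of continuous draw.

GIGAWATT_W = 1e9                 # one gigawatt, in watts
AVG_HOME_KWH_PER_YEAR = 10_500   # assumed average annual household consumption
HOURS_PER_YEAR = 8_760

# Convert annual kWh to an average continuous draw in watts (~1,200 W)
avg_home_draw_w = AVG_HOME_KWH_PER_YEAR * 1_000 / HOURS_PER_YEAR

homes_per_gw = GIGAWATT_W / avg_home_draw_w
print(f"~{homes_per_gw:,.0f} homes per gigawatt")
```

Under these assumptions, one gigawatt works out to roughly 800,000 homes, consistent with the “hundreds of thousands of residences” figure, and it puts Zuckerberg’s “tens of gigawatts” target on the scale of tens of millions of households’ worth of power.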

“Meta intends to construct tens of gigawatts within this decade, and hundreds of gigawatts or greater amounts in the future,” Zuckerberg stated. “Our approach to engineering, investment, and partnerships in building this infrastructure will become a key strategic benefit.”

Within the new framework, Zuckerberg explained that veteran Meta executive Santosh Janardhan will maintain oversight of the firm’s technical architecture, software, custom silicon, and the ongoing construction and management of its extensive data center network. Concurrently, Daniel Gross—a prominent AI hire from the previous summer, who co-founded Safe Superintelligence with ex-OpenAI chief scientist Ilya Sutskever—will head a new team concentrating on long-term strategy: forecasting Meta’s future computational requirements, determining optimal locations for construction, securing access to limited chips and energy, and modeling the business implications of these investments.

Zuckerberg also introduced a new Meta president and vice chair, Dina Powell McCormick, who will focus on forging government partnerships to fund and establish data centers globally. She previously served as deputy national security advisor for strategy to President Trump.

A growing perception that Meta is playing catch-up

For some observers of Meta, the unveiling of Meta Compute was confusing. The company is already a major player in AI infrastructure. It has been twelve months since the groundbreaking of its Hyperion facility—a 4-million-square-foot data center complex in northeast Louisiana that Zuckerberg, in a conversation with President Trump, once compared in size to lower Manhattan. So why was it necessary to formally announce a new high-level organization for an endeavor already being pursued on a massive scale?

“This was somewhat puzzling initially; I didn’t grasp it at first,” remarked Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy. He proposed that the Meta Compute message was primarily directed at investors and staff, affirming that Meta is a formidable competitor in a field dominated by companies like OpenAI and xAI. “This is Meta declaring, ‘This is our deployment strategy,’” he commented.

Rick Pederson of Bow River Capital concurred, stating that the announcement addresses an increasing sentiment among market analysts that Meta is trailing Google and OpenAI in the AI competition.

“It served as a method to articulate their concentration and deliberate approach to developing AI, computational ability, and infrastructure,” he noted. “I suspect other leading hyperscalers have comparable internal structures. However, Zuckerberg used this moment to formally outline it.” Despite the company allocating over $70 billion to AI infrastructure last year and intending to spend an additional $600 billion in the coming two years, Google and OpenAI are investing similar sums, he added. “Therefore, I believe this provided Zuckerberg a platform to discuss not only the priority but also the execution plan.”

Doubling down on infrastructure as an investment portfolio

Several specialists expressed little surprise at the decision to announce Meta Compute. “Meta is reinforcing its commitment to treating infrastructure as an investment portfolio instead of just an operational expense,” stated Lane Dilg, former head of infrastructure policy at OpenAI and founder of the advisory firm Apeiro.

As the AI surge continues, Meta’s perspective on data centers, GPUs, power agreements, and custom chips has evolved from viewing them as basic support systems to considering them strategic holdings—acting more like an investment manager than a technology firm. Essentially, Dilg noted, Meta is aligning itself not only with other hyperscalers but also with the most advanced global investment platforms.

In that context, she elaborated, the selection of Daniel Gross to co-lead the initiative is logical. “Gross’s background in creating AI-native and agentic platforms is significant, combined with his computational knowledge,” she said, referencing the supercomputer he developed with NFDG co-investor Nat Friedman, who is now Meta’s head of products. That project resulted in the Andromeda Cluster, a reservoir of computing resources comprising over 4,000 GPUs, offered to their portfolio companies at discounted rates.

Gross has already hinted at his recruitment strategy for Meta Compute, writing in a post that he is seeking individuals with expertise in “deep learning, supply chains, commodities, semiconductors, sovereigns, energy, Excel, prediction markets, monitoring situations, etc.” This implies Meta is preparing to hedge against fluctuating power and hardware costs and to place long-term bets shaped by energy markets, supply chains, and geopolitical factors, in addition to technology.

Powell McCormick is also a crucial strategic addition to the Meta Compute project, according to Umesh Padval, a seasoned investor and board member. “Hyperscalers are now concentrating on securing power and are funding power initiatives using cash and debt,” he said. “Given her experience in banking and politics, she would collaborate with Meta’s data center team to arrange financing and obtain permits to accelerate the development of computational capacity.”

Critics say Meta’s capital-intensive model would drag down returns

However, not everyone endorses Meta’s strategy. On the day of the Meta Compute announcement, “Big Short” investor Michael Burry posted on social media: “Meta capitulates, discarding its primary advantage. Expect ROIC to plummet.” Burry’s caution underscores a concern that Meta is forfeiting its capability to produce huge earnings without investing heavily in tangible assets. By adopting gigawatt-scale data centers, he contends, Meta is moving towards a much more capital-heavy business model—one that may reduce profitability and give the company characteristics akin to a utility.

But given that Meta has already invested tens of billions in AI data centers and pledged hundreds of billions more for long-term infrastructure, it appears that decision was made some time ago.