Sam Altman, Jensen Huang and the other AI bigwigs are solely to blame for the current economic scare

Sam Altman of OpenAI is perturbed. However, this is an issue that Altman and his fellow AI executives have themselves caused by overhyping their technology while simultaneously alarming the public about their future economic security in an AI-driven world.

As the chief promoter of AI, Altman recently expressed his dissatisfaction with the pace of AI’s advancement at an industry conference. He complained that there was more resistance to “the diffusion, the absorption” of AI into the culture and economy than he anticipated. The Times also quoted Altman as saying, “Looking at what’s possible, it does feel sort of surprisingly slow.”

And he’s not alone among the AI titans. Nvidia CEO Jensen Huang is quoted in the same article as saying that AI skeptics are “scaring people away from investing in AI” that would make it better.

Which isn’t surprising, considering that Anthropic CEO Dario Amodei regularly…

The executives are blaming others for this crisis in AI confidence. It’s the public’s fault. It’s the market’s fault. It’s the critics’ fault. But the problem is more fundamental: AI’s leading companies have violated a market development principle called the “adjacent possible.”

That principle posits that innovations only truly gain traction when two factors converge: One, the new thing functions reliably, and two, people understand why they need it. Simply creating a cool new technology is never sufficient; fail to bring the public along, and you’ll end up with either weak demand (think Segway) or a backlash (like with nuclear power in the 1980s).

While the demand for AI isn’t weak, it’s weaker than its proponents believe it should be. And at the same time, a backlash against AI has been building, fueled by fears about the technology’s potential impact on jobs and economic security.

The concept of the adjacent possible was popularized in Steven Johnson’s 2010 book, Where Good Ideas Come From: The Natural History of Innovation. Johnson traces the historical patterns that precede an explosive moment when an innovation – the pencil, the flush toilet, batteries, the smartphone – catches on and transforms the way we work or live.

“Possible” technologies already exist, work well, and have been adopted by consumers and businesses. “Not-yet-possible” technologies are untested, unreliable, and not yet well-understood by their target market.

Today, for example, mass-market electric cars fall into the possible. Flying cars in every driveway fall into the not-yet-possible.

The adjacent possible is a narrow band between those two zones. Innovations change the world when they land there, stretching boundaries and changing habits – but not so much that the technology keeps malfunctioning or makes us feel uneasy. When an innovation hits this sweet spot, the result is user delight and new consumption patterns that generate popular enthusiasm and rapid, widespread adoption.

For instance, when the Wright brothers first flew in 1903, all the necessary mechanics and theories – from the piston engine to wing aerodynamics – already existed. The Wrights just had to push the technology a bit further by assembling the right parts and adding some of their own key insights.

And by that time, inventors had been attempting to fly for years, so the public was ready to believe that a machine could fulfill the promise. Twenty years earlier, powered flight was science fiction to most people. But by 1903 it had moved into the adjacent possible, and airplanes were soon embraced by an excited public.

Which brings us back to AI. While artificial intelligence has been around for decades, for much of the population, it seemed to burst suddenly into their lives with the introduction of OpenAI’s ChatGPT in late 2022. AI has since advanced faster than any technology most of us have experienced.

We’re being told repeatedly by the tech crowd that AI is going to change everything – the way we work, our careers, our art, our politics – and might even come to control us.

It’s too much, too quickly. The mass market can understand that AI is better than search; the adjacent possible suggests that’s a leap we can make from where we are today.

But telling us that we should already have a team of AI agents doing half our jobs and making us ten times more productive – or alternatively that we’ll all be unemployed in the near future – is just too large of a leap. Moreover, it’s a leap accompanied by a threat.

And Altman and Huang and other AI industry leaders wonder why AI adoption is falling short of their expectations?

AI companies currently need a substantial dose of adjacent possible “medicine.” The technology may be advancing at breakneck speed, but the general public isn’t. In tech product planning, it’s always better to hit the sweet spot now while building towards a future that may take time to assimilate.

So AI leaders might consider scaling back the revolution and instead focus on producing products and services today that push us into new territory at a human pace. Map out a journey into the future for us that we can embrace without feeling threatened – or risk more pushback from the public and, ultimately, policymakers.