Big things are happening in AI, but panic is the wrong response

When the 1964 World’s Fair opened in Queens, New York, it showcased robots handling household chores, touted as imminent additions to the homes nearby. After the Fair closed, the exhibits moved to Disney World and repeated the same promise for three decades: robots were just around the corner. But they never materialized.
The 1990s saw the growth of distributed computing power and large-scale investments in it, sparking claims of imminent, huge productivity gains. Those gains didn’t happen right away: it took years, along with complementary changes in how work was organized, before productivity actually improved.
In the early 2000s, progress in data science and machine learning for prediction triggered fresh concerns, and by the 2010s we were hearing that certain roles were “at risk” of being replaced by new AI tools. By the decade’s end, the perceived threat had shifted back to robot-like machines poised to take over blue-collar jobs, supposedly any day now. That didn’t happen either. Even the rise of robots in manufacturing hasn’t led to job losses; instead, new robots are associated with employment growth.
Experts have a long track record of bombarding us with doomsday predictions about technology—first eliminating our jobs, then wiping out humanity entirely because we’re a nuisance. The recent three-year panic over Large Language Models (LLMs) fits this pattern.
The uncomfortable fact is that, as of 2025, LLMs had not taken over jobs on any broad scale. Layoffs purportedly tied to AI increasingly appear to be unrelated; at most, they were based on expectations that AI would replace workers. Even OpenAI CEO Sam Altman has called out “AI washing,” noting that these so-called AI-driven layoffs are mostly superficial.
In 2026, we’re once again in a panic, fueled by new claims about AI’s dangers—even though there’s still no evidence of these feared changes.
Notice the pattern? Scientists and developers are justifiably excited about a new innovation and eager to voice ideas for how the new tools might be used. Then vendors step in to sell those tools, amplifying the claims aggressively. This is the start of the hype cycle. Little thought goes to whether those uses are practical: what they cost, what complementary changes they require, or whether anyone needs the tools at all.
Research has found that three-fourths of public companies with traceable AI adoption saw minimal benefits, that only 5% used AI systematically, and that AI hasn’t reduced many jobs. My own research takes a different approach: examining individual workplaces to see the before-and-after of actual AI implementation. Here’s why AI’s spread is slower than expected and why it hasn’t taken over many jobs.
The reality of AI adoption is different from the fears
First, AI is costly to implement. LLM companies don’t give away their tools, and the top-tier ones are pricey to use. The assumption that they will inevitably get cheaper may not hold. While many vendors offer LLM tools, nearly all rely on core technology from six vendors that already hold 80% of the market. Computing time isn’t getting much cheaper, and the electricity to power it is rising in cost.
But the largest cost is the time and effort required to customize AI tools for an organization and maintain them. Most of these costs are upfront. We still need human backup to fix issues LLMs can’t handle, and productivity gains that might reduce workforce size take much longer to materialize. Convincing a CFO—who’s focused on ROI—to invest in an expensive, upfront project with ongoing IT costs is tough when benefits are uncertain and only appear years later.
Second, tied to the ROI challenge is a misplaced emphasis on eliminating low-skill jobs. Two points matter here. First, cutting minimum-wage jobs doesn’t save much, especially since we still need employees to monitor and troubleshoot AI tools. Second, simple white-collar roles look easy to automate because they require little judgment and are often binary, like sorting forms into the correct piles, but they demand 100% accuracy. Such tasks are a better fit for purpose-built machine learning than for LLMs, yet ML is far more expensive because it must be built for each specific task and constantly monitored and adjusted.
Third, LLMs can handle tasks in more complex jobs where “good enough” suffices, not perfection. They’re cheaper than ML but still need oversight. A typical human job involves many distinct, complex tasks that can’t be automated—at least not yet.
For example, LLMs are great for programming tasks, but programmers spend 70% of their time on non-programming work, mostly interacting with colleagues. If LLMs take over the report writing that fills 20% of a school principal’s time, we can’t lay off 20% of a principal. But we can have that principal take on new tasks.
I believe the true value of LLMs won’t be cost savings; it will be enabling us to do things we haven’t even imagined yet. Think about search engines—they drastically reduced research and answer-seeking time. I’ve never heard of search engines causing mass job losses. Instead, they created new businesses, work methods, and jobs. Most companies, for instance, have tons of data they can’t organize well enough to use. If the latest Claude/Anthropic tool lives up to its analytical claims, it could spend years just making sense of all that data.
Perhaps we should stop obsessing over what AI is cutting—like jobs—and instead focus on what it’s growing: all the new products and solutions AI can help us create.