Artificial intelligence is often hailed as the technology that will revolutionize work. Nowhere is this hype louder than in coding, where AI-powered tools are said to already outperform human developers. Some even claim that the dawn of “superintelligence” has arrived in software engineering.
But a new study complicates that narrative. Conducted in July 2025 by the research nonprofit Model Evaluation & Threat Research (METR), it delivered a startling result: experienced developers actually became slower when using AI tools.
The Surprising Findings
In the experiment, professional software engineers were randomly assigned coding tasks, with some allowed to use AI tools and others working without them. Experts surveyed before the study predicted dramatic productivity gains. The consensus estimate was that AI would speed up coding by nearly 40%.
After completing the tasks, participants themselves guessed that AI had made them about 20% faster. But when METR examined the hard data, the reality was starkly different: developers were actually 19% slower when using AI.
“No one expected that outcome,” said Nate Rush, one of the study’s authors. “We didn’t even really consider a slowdown as a possibility.”
This result challenges the assumption that AI tools like GitHub Copilot or ChatGPT are already delivering massive real-world productivity benefits.
The Capability–Reliability Gap
How can AI, so often praised for its coding skills, end up slowing people down? The answer lies in what researchers call the capability–reliability gap.
AI systems are capable of writing code that looks impressive at first glance. In some benchmarks, they can even solve coding challenges in minutes that might take a human nearly an hour. But their reliability is limited. Success rates often hover around 50%, meaning the AI produces useful results only half the time.
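To see why reliability matters so much, consider a back-of-the-envelope sketch in Python. Every number in it (draft, review, and rewrite times) is an invented assumption for illustration, not a figure from the METR study; the point is only that once review and rewrite costs are counted, a roughly 50% success rate can turn a "fast" assistant into a net slowdown.

```python
# Back-of-the-envelope model of why ~50% reliability can erase speed gains.
# All numbers below are illustrative assumptions, not data from the METR study.

def expected_minutes(p_success,
                     ai_draft=10.0,   # minutes for the AI to produce a draft
                     review=20.0,     # minutes to read and verify that draft
                     rewrite=70.0):   # minutes to debug or redo a bad draft
    """Expected time per task when delegating the first draft to an assistant."""
    return ai_draft + review + (1 - p_success) * rewrite

SOLO = 60.0  # assumed minutes to simply write the code yourself

for p in (0.9, 0.7, 0.5):
    t = expected_minutes(p)
    verdict = "faster" if t < SOLO else "slower"
    print(f"success rate {p:.0%}: ~{t:.0f} min with AI vs {SOLO:.0f} min solo ({verdict})")
```

Under these made-up numbers, a 90% success rate is a big win, but at 50% the expected time with AI (about 65 minutes) exceeds just writing the code yourself, because half of all drafts must be reviewed and then largely redone.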
For real-world developers, that’s not enough. The study showed that participants spent significant time reviewing, debugging, and rewriting AI-generated code. One developer described it as “the digital equivalent of shoulder-surfing an overconfident junior developer.”
In other words, AI can churn out a lot of code quickly, but for an experienced developer it may still be faster to write correct code from scratch than to verify and repair what the machine produced.
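What do those subtle errors look like in practice? Here is a contrived Python example, not code from the study, illustrating the kind of output that reads as plausible yet misbehaves:

```python
# Hypothetical example (not from the study): code that looks correct
# but hides a classic Python pitfall a reviewer must catch.

def add_tag(item, tags=[]):     # BUG: mutable default argument
    """Append a tag and return the tag list."""
    tags.append(item)           # the default list is created once and
    return tags                 # silently shared across every call

print(add_tag("alpha"))   # ['alpha']
print(add_tag("beta"))    # ['alpha', 'beta']  <- state leaked between calls

# The fix a reviewer has to spot and apply:
def add_tag_fixed(item, tags=None):
    tags = [] if tags is None else tags
    tags.append(item)
    return tags

print(add_tag_fixed("alpha"))  # ['alpha']
print(add_tag_fixed("beta"))   # ['beta']  <- no shared state
```

Spotting a flaw like this takes only a moment once you know to look for it; the cost is that a careful reviewer must look for it in every generated function.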
Why This Matters for the Economy
The study’s results come at a paradoxical moment. On the one hand, the U.S. economy is experiencing a major AI-fueled boom. Tech giants are spending billions on AI infrastructure, investors are pouring money into startups, and Wall Street has priced in the belief that AI will dramatically raise productivity.
On the other hand, evidence of real-world productivity gains is scarce. Companies investing heavily in AI have yet to see significant profits. MIT researchers who studied 300 AI initiatives found that 95% failed to boost earnings. A McKinsey report revealed that 80% of companies using generative AI saw no tangible impact on profits.
If productivity doesn’t eventually match expectations, today’s AI enthusiasm could prove to be a bubble, potentially even larger than the dot-com bubble of the late 1990s.
Echoes of the Dot-Com Era
The parallels are striking. In the 1990s, investors poured billions into internet startups, assuming profits were inevitable. When those profits failed to materialize quickly, the market collapsed, wiping out nearly half the value of the S&P 500 between 2000 and 2002.
The internet eventually did transform the economy—but not before countless companies went bankrupt and investors lost fortunes. AI could follow the same path: a real revolution, but slower and messier than its boosters predict.
Short-Term Pain, Long-Term Gain?
Some experts caution against declaring AI a failure too soon. Stanford economist Erik Brynjolfsson points to the “productivity J-curve.” New technologies often cause productivity to drop at first as companies struggle to integrate them. Later, once workflows adapt, efficiency soars.
Electric power is the classic case. Dynamos reached factories in the 1880s, but electricity didn’t meaningfully boost manufacturing productivity until decades later, when industrialists like Henry Ford redesigned factories around it. AI might follow a similar trajectory: early struggles, followed by explosive growth.
Optimists, including Anthropic CEO Dario Amodei, predict that by 2027 AI could be “better than humans at almost everything.” But skeptics argue that the recent slowdown in model improvements suggests the industry may be running into hard limits.
The AI Bubble Question
Right now, the Magnificent Seven—Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla—are carrying much of the U.S. stock market’s growth on the promise of AI profits. Together, they’ve spent over half a trillion dollars on AI infrastructure since 2024. Yet returns remain minimal.
If investor belief falters, the fallout could be dramatic. Unlike the internet economy of 2000, today’s AI buildout accounts for a substantial share of U.S. GDP growth. An AI crash could mean fewer jobs, slower growth, and possibly even a financial crisis if overextended loans sour.
The Paradox of Productivity Illusions
Even more puzzling: businesses may believe AI is making them more productive—even when it isn’t. Managers under pressure to embrace AI may cut staff or reduce hiring, assuming efficiency gains that never actually arrive. This could push unemployment higher without offsetting productivity improvements.
History offers caution here. In the 1980s and ’90s, email and office software convinced companies they no longer needed secretaries. Higher-paid employees took over scheduling and communication, only to become less productive overall. Companies ended up spending more for the same output. AI could be walking down the same path.
What Comes Next
The METR study doesn’t prove that AI coding tools are useless. In fact, newer systems are already more reliable than the ones tested. Gains may be larger for beginners than for experts. And in some domains, like chip design or scientific research, even modest AI improvements could be worth millions.
But the broader lesson is clear: the hype around AI often outpaces the reality. It may still transform the world—but the road will be longer, costlier, and more uneven than most people expect.
FAQs
1. Why did developers become slower when using AI tools?
Because they had to spend extra time reviewing and correcting AI-generated code, which often contained subtle errors.
2. Does this mean AI coding assistants are useless?
Not at all. They can be helpful, especially for beginners or repetitive tasks, but they aren’t consistently reliable enough to boost expert productivity yet.
3. Could AI still improve productivity in the future?
Yes. Like past technologies, AI may follow a productivity J-curve—initial slowdowns followed by large long-term gains once businesses learn how to integrate it effectively.
4. Are we in an AI bubble?
Possibly. Massive investments and inflated valuations are being justified by expected productivity gains. If those don’t arrive soon, the market could face a painful correction.
5. What’s the biggest risk if AI fails to deliver quickly?
A widespread economic shock. With trillions tied up in AI infrastructure and credit markets, an AI crash could trigger recession or even a financial crisis.



