AI is increasing our speed and productivity, but it's also diminishing our critical thinking skills.
AI is ubiquitous, and the pressure to embrace it is unrelenting, yet the evidence that it actually enhances our intelligence grows thinner with each passing quarter.
On New Year’s Day 2026, a programmer named Steve Yegge introduced an open-source platform called Gas Town. This platform enables users to manage multiple AI coding agents at once, developing software at a speed beyond any individual human's capabilities.
One early user described the experience in terms that had nothing to do with productivity. “Everything happening at once is too much to fully understand,” they noted. “I felt a tangible sense of stress watching it.”
That statement deserves to be displayed in every executive suite, every venture capital meeting, and on every stage at CES where “intelligence” is tossed around casually. A curious shift is underway regarding the relationship between humans and the technology labeled as intelligent.
Machines are becoming faster, but the humans interacting with them are growing more fatigued, anxious, and, by several metrics, less able to perform the very function intelligence was meant to enhance: clear thinking.
The compulsion to adopt AI is so widespread that it has spawned its own language of coercion.
You must have AI.
You must utilize AI.
You must purchase AI.
Your rivals are already using it.
Your children will lag behind without it.
This language isn’t coming from engineers quietly troubleshooting problems. It comes from earnings calls, product launches, and LinkedIn updates charged with the frantic energy of people who have confused marketing a product with describing reality.
During the World Economic Forum in Davos in January 2026, Microsoft CEO Satya Nadella articulated a notion so telling it merits analysis as a cultural artifact. He cautioned that AI might risk losing its “social permission” to use large amounts of energy unless it began delivering real benefits to people's lives.
The framing was striking: It wasn't about whether the technology functions, but whether it could keep the public engaged while the industry determines its effectiveness. Nadella referred to AI as a “cognitive amplifier,” providing “access to infinite minds.”
A month later, a Circana survey showed that 35 percent of US consumers preferred not to have AI on their devices at all. Their primary reason wasn’t confusion or fear of technology; it was much simpler: they felt they didn’t need it.
The divide between rhetoric and reality is becoming harder to overlook. In March 2026, Goldman Sachs analyzed fourth-quarter earnings data and concluded, as senior economist Ronnie Walker stated, “no meaningful correlation between productivity and AI adoption at the economy-wide level.”
The bank observed that a record 70 percent of S&P 500 management teams addressed AI in their earnings calls, but only 10 percent had quantified its impact on specific applications, and just 1 percent measured its effect on profits. Meanwhile, it was anticipated that the five largest US tech companies would collectively invest $667 billion in AI infrastructure in 2026, marking a 62 percent increase from the previous year.
The National Bureau of Economic Research captured the scenario as a “productivity paradox”: perceived improvements were greater than the actual measured gains.
Where concrete productivity gains do exist, they are notably limited. Goldman reported a median gain of around 30 percent in two specific sectors: customer support and software development. Outside those categories, evidence of wider improvement was essentially nonexistent, leading the bank to conclude that the anticipated revolution is confined to two rooms in a large house.
Examining what is occurring in these spaces is crucial because even when AI succeeds, something else seems to be faltering.
In February 2026, researchers from UC Berkeley’s Haas School of Business released results from an eight-month study at a 200-person technology firm in the US. They discovered that AI did not alleviate workloads but rather intensified them. As tasks accelerated, so did expectations, which expanded the scope of responsibilities. Product managers began coding, and researchers took on engineering tasks. Job boundaries blurred because the tools made it seem feasible, but this resulted in widespread exhaustion.
The researchers identified a pattern they termed “workload creep”: an unnoticed accumulation of tasks that, when unchecked, leads to cognitive fatigue that impairs decision-making quality.
The Harvard Business Review labeled this situation more bluntly: “AI brain fry.” A Boston Consulting Group study involving nearly 1,500 US workers found that 14 percent of those relying on AI tools requiring substantial supervision reported experiencing this type of mental fog, characterized by difficulty focusing, delayed decision-making, and headaches after prolonged interaction with AI.
Those most affected were not the skeptics or holdouts, but the enthusiastic adopters—those who followed all the advice from keynote speeches.
The distribution of this fatigue isn’t random. According to the Harvard Business Review study, 62 percent of associates and 61 percent of entry-level employees reported AI-related burnout, while the figure dropped to 38 percent among