AI is enhancing our speed and productivity, but is also diminishing our critical thinking abilities.
AI is ubiquitous, and the push to embrace it is unyielding, yet the evidence that it actually enhances our intelligence grows thinner every quarter.
On January 1, 2026, a programmer named Steve Yegge introduced an open-source platform called Gas Town, which allows users to coordinate numerous AI coding agents at once, creating software at speeds unattainable by a single human.
One of the initial users characterized the experience not in terms of improved productivity, but rather as overwhelming. “There’s truly too much going on to reasonably understand,” he noted. “I felt a tangible sense of stress while observing it.”
This remark should be displayed prominently in every executive office, every venture capital meeting, and on every keynote stage where the term “intelligence” is thrown around casually. A curious shift is occurring in the dynamics between humans and the technology we label as intelligent.
As machines become faster, those engaging with them are becoming increasingly exhausted, anxious, and—by several measures—less capable of what intelligence was meant to elevate: clear thinking.
The pressure to adopt AI has become so widespread that it has created its own lexicon of coercion.
You must have AI.
You need to utilize AI.
You ought to purchase AI.
Your competitors are already using it.
Your children will be left behind without it.
This language originates not from engineers quietly addressing problems, but from earnings reports, product unveilings, and LinkedIn updates written with the frenzied energy of those who have mistaken selling a product for presenting reality.
At the World Economic Forum in Davos in January 2026, Microsoft CEO Satya Nadella shared a statement so significant it warrants analysis as a cultural artifact. He cautioned that AI might lose its “social permission” to consume extensive energy unless it began to provide concrete benefits for people.
The framing was notable: the issue wasn’t whether the technology functions, but whether the public can be kept supportive while the industry determines its efficacy. Nadella referred to AI as a “cognitive amplifier,” claiming it offers “access to infinite minds.”
A month later, a Circana survey revealed that 35 percent of US consumers did not want AI on their devices at all. Their primary reason wasn’t confusion or fear of technology; rather, it was straightforward: they felt they didn’t need it.
The disparity between the rhetoric and the evidence has become hard to overlook. In March 2026, Goldman Sachs released an analysis of fourth-quarter earnings data that found, in the words of senior economist Ronnie Walker, “no meaningful relationship between productivity and AI adoption at the economy-wide level.”
The bank pointed out that a record 70 percent of S&P 500 management teams mentioned AI during their earnings calls. However, only 10 percent quantified its influence on specific use cases, and just 1 percent measured its effect on earnings. Meanwhile, the top five US tech companies were projected to spend a combined $667 billion on AI infrastructure in 2026, reflecting a 62 percent rise from the previous year.
The National Bureau of Economic Research referred to this situation as a “productivity paradox,” where perceived improvements are far greater than measured ones.
There are genuine productivity advances, albeit remarkably narrow. Goldman discovered a median improvement of about 30 percent in two specific fields: customer support and software development. Beyond these areas, thorough evidence of widespread advancement was, according to the bank, essentially nonexistent. For now, the anticipated revolution is confined to two rooms in a much larger building.
What transpires within those rooms, however, deserves careful examination, because even where AI proves effective, something else seems to fracture.
In February 2026, researchers from UC Berkeley’s Haas School of Business published results from an eight-month investigation conducted within a 200-person US tech company. They discovered that AI did not lessen workloads. Instead, it intensified them. As tasks became faster, expectations increased. With rising expectations came an expanded scope of responsibilities. Employees found themselves taking on duties that previously belonged to others. Product managers started coding. Researchers began performing engineering tasks. Role boundaries blurred due to the capabilities of the tools, leading to exhaustion.
I felt fatigued just writing this.
The researchers identified a cycle they termed “workload creep,” which denotes a gradual buildup of tasks that goes unnoticed until cognitive fatigue compromises the quality of all decisions.
Harvard Business Review gave the phenomenon a rather blunt name: “AI brain fry.” A Boston Consulting Group study of nearly 1,500 US workers found that 14 percent of those using AI tools requiring significant oversight experienced it: a distinct mental haze marked by difficulty concentrating, slower decision-making, and headaches after prolonged interaction with AI.
The individuals most affected were not those resistant or slow to adopt. They were the eager users, those who followed every keynote’s advice to the letter.