Stanford's yearly AI report identifies a divide between AI experts and the general public.
The 2026 AI Index from Stanford’s Institute for Human-Centered AI highlights a widening divide between expert optimism and public anxiety. Generation Z's frustration with AI is growing rapidly, and a decline in employment for younger workers in AI-related fields is already observable. The US also shows the lowest trust of any country surveyed in its government's ability to regulate AI.
Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) released its annual AI Index Report on Monday. The most striking finding is not related to model performance or investment levels but rather the expanding gap between those developing AI and those affected by it.
The comprehensive 423-page report reveals that, in nearly all areas examined, expert opinions and public views head in opposite directions. “AI experts and the US public disagree on nearly everything about AI’s future,” the report summarizes, except for the shared belief that AI could negatively impact elections and personal relationships.
The statistics are illuminating. According to a Pew Research survey cited in the report, only 10% of Americans said they feel more excited than worried about the increased integration of AI into everyday life.
In a corresponding survey of AI experts, 56% believed AI will have a positive impact on the US over the next 20 years. The disparity is greatest on the economy and jobs: 69% of experts viewed AI as beneficial to the economy, compared with only 21% of the general public. Asked whether AI will improve how people do their jobs, 73% of experts said yes, versus just 23% of the public.
Moreover, while 84% of experts predicted AI would significantly enhance medical care in the next 20 years, only 44% of the American public agreed. At the same time, nearly two-thirds of Americans (64%) believe AI will lead to fewer jobs over the next two decades.
The report indicates a decline in employment for younger workers in AI-involved sectors, suggesting that public concern is not merely hypothetical.
Generation Z's interaction with AI is particularly telling. A Gallup poll conducted for the Walton Family Foundation and GSV Ventures in February and March 2026, which surveyed 1,572 individuals aged 14 to 29, revealed a drop in the percentage of Gen Z respondents expressing excitement about AI, from 36% in 2025 to 22% in 2026.
The percentage feeling hopeful decreased from 27% to 18%, while those feeling angry increased from 22% to 31%. This change occurs despite approximately half of Generation Z using AI on a daily or weekly basis.
Zach Hrynowski, a senior education researcher at Gallup, linked the rising anger to AI shrinking job opportunities for entry-level workers, noting that the oldest members of Generation Z, who are most exposed to the job market, are also the angriest.
On regulation, the geographic divide is stark. At 31%, the US shows the lowest trust in its government's ability to regulate AI of any country surveyed; Singapore ranks highest at 81%.
Domestically, 41% of Americans believe federal AI regulation is insufficient, while only 27% consider it excessive. A separate Pew survey spanning 25 countries finds that the EU is viewed as more trustworthy than either the US or China when it comes to regulating AI effectively.
The report also addresses the gap between AI's technological advances and their societal repercussions. AI reached 53% of the population faster than either personal computers or the internet, and 88% of organizations now use it. Reported AI incidents, defined as harms or near-harms caused by deployed AI systems, climbed to 362 in 2025 from 233 in 2024.
The environmental impact is also rising accordingly: training xAI’s Grok 4 is estimated to have produced over 72,000 tonnes of CO₂, and the water needed for GPT-4o inference workloads could support 12 million people.
Remarkably, the report notes that despite AI's rapid progress, leading models can correctly read analog clocks only about 50% of the time, compared to roughly 90% accuracy for average humans.
Stanford’s report recognizes its own limitations, acknowledging that it is financially backed by Google, OpenAI, and others, and was produced with help from ChatGPT and Claude. Its conclusion that “Responsible AI is not keeping pace with AI capability, with safety benchmarks lagging, and incidents rising sharply” serves as an implicit critique of the very organizations that contributed to its funding.
