Stanford's annual AI report reveals a divide between AI insiders and the general public.
The 2026 AI Index from Stanford's Institute for Human-Centered AI highlights a widening gap between expert optimism and public concern about AI. Anger about AI among Gen Z is rising fast, and employment for younger workers in AI-exposed occupations is already declining. Among the countries surveyed, the US also has the least trust that its own government will regulate AI effectively.
Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) published its annual AI Index Report on Monday, and its most notable conclusion pertains not to model efficiency or investment levels, but to the growing rift between AI creators and those affected by it.
The 423-page report finds that on almost every measure it examines, expert and public opinion are at odds. "AI experts and the US public disagree on almost all aspects of AI's future," the report states; the notable exception is that both groups believe AI may negatively affect elections and personal relationships.
The statistics are striking. A Pew Research Center study cited in the report found that only 10% of Americans are more excited than concerned about the growing integration of AI into daily life. By contrast, 56% of the AI experts surveyed for the same report said they believe AI will have a positive effect on the US over the next 20 years.
The largest gaps concern the economy and the job market: 69% of experts think AI will have a positive impact on the economy, against just 21% of the general public. On whether AI will improve how people do their jobs, 73% of experts say yes, compared with only 23% of the public. And while 84% of experts expect AI to benefit healthcare over the next two decades, only 44% of Americans agree. Nearly 64% of Americans believe AI will lead to job losses within the next 20 years.
The report highlights that employment in AI-exposed fields for younger workers has already begun to decline, indicating that public concerns are not just hypothetical.
Gen Z sentiment about AI is particularly telling. A Gallup poll conducted for the Walton Family Foundation and GSV Ventures in February and March 2026, covering 1,572 people aged 14-29, found that the share of Gen Z respondents who feel excited about AI dropped from 36% in 2025 to 22% in 2026. The share who feel hopeful fell from 27% to 18%, while the share expressing anger rose from 22% to 31%. The shift comes even though roughly half of Gen Z uses AI daily or weekly.
Gallup senior education researcher Zach Hrynowski attributed the rising anger to AI shrinking entry-level job opportunities, noting that the oldest members of Gen Z, who are the most active in the job market, are also the most dissatisfied.
On regulation, the geographic divide is especially stark. The US reports the lowest confidence in its own government to regulate AI among the countries surveyed, at 31%, while Singapore reports the highest at 81%. Domestically, 41% of Americans believe federal AI regulation will not go far enough, compared with only 27% who fear it will go too far. According to another Pew survey spanning 25 countries, the EU is trusted more than either the US or China to regulate AI effectively.
The report also charts the gap between AI's technological progress and its societal impact. AI reached 53% of the population faster than either the personal computer or the internet. Documented AI incidents, defined as harms or near-harms caused by deployed AI systems, rose from 233 in 2024 to 362 in 2025, and 88% of organizations now report using AI.
The environmental footprint is similarly concerning: training xAI's Grok 4 is estimated to have emitted over 72,000 tonnes of CO₂, and the water consumed by GPT-4o inference workloads would be enough to supply 12 million people.
Ironically, the report notes that despite AI's rapid progress, the most advanced models still read analog clocks correctly only about 50% of the time, while humans with no special training manage roughly 90%.
Stanford's report acknowledges its limitations, noting that it is financially backed by Google, OpenAI, and others, and produced with help from ChatGPT and Claude. Its assertion that "Responsible AI is not keeping pace with AI capacity, with safety standards lagging and incidents increasing sharply" serves as an implicit criticism of the very organizations that funded its creation.