Meta's $145 billion AI push overshadows the child safety lawsuits that could cost even more.
**TL;DR** Meta's Q1 2026 earnings call centered on its AI investments ($125-$145B in capex) while sidestepping the company's escalating child safety crisis: a lost addiction trial with a $6M damage award, a $375M penalty in New Mexico, lawsuits from more than 40 state attorneys general, youth bans in several countries, a newly opened EU probe, and proposed US Senate legislation restricting AI chatbots for minors. CFO Susan Li acknowledged the possibility of a "material loss" from these issues, but no investor asked Zuckerberg about children.
Mark Zuckerberg's earnings call on Wednesday was all about AI and Meta's projected 2026 capital expenditure of $125 billion to $145 billion. It covered Llama models, recommendation engines, and the advertising machine behind $56 billion in quarterly revenue. Children never came up. No investor pressed Zuckerberg on the social media addiction trial Meta lost in March, the many similar lawsuits still pending, the newly announced EU investigation into underage users, the Senate committee legislation designed to curb minors' access to chatbots, or the Indonesian ban on under-16s that cut off millions of young users. Zuckerberg told employees that this month's 8,000 layoffs were meant to shift resources from personnel to infrastructure, but did not outline how the company would handle the growing legal and reputational challenges posed by its existing products.
On March 25, a Los Angeles County jury found Meta and Google liable for building addictive platforms that harmed a young user, awarding $6 million in damages and assigning Meta 70% of the responsibility. It was the first verdict in a social media addiction case. In a separate New Mexico trial, a jury found that Meta violated the state's Unfair Practices Act by concealing what it knew about child sexual exploitation and its platforms' harm to children's mental health, resulting in a $375 million penalty. In April, Massachusetts' highest court ruled that Meta must face a lawsuit accusing it of deliberately designing addictive features for young users. More than 40 state attorneys general have filed child safety lawsuits against Meta, with major trials scheduled throughout 2026. CFO Susan Li noted in her prepared remarks that Meta faces increasing scrutiny over youth-related issues and that ongoing trials could ultimately produce a material loss.
The word "material" carries weight here. The tobacco industry's 1998 Master Settlement Agreement cost $206 billion over 25 years. Given Meta's $56 billion in quarterly revenue, a comparable settlement would be the largest corporate liability in history. The cases now in court are testing the same legal theory that drove the tobacco settlement: that the company knew its product was harmful, concealed the evidence, and kept marketing it to minors. Regulators in multiple jurisdictions are also investigating platforms for child safety failures under new online safety laws, pushing Meta's legal exposure beyond American courts.
To get ahead of future harms, governments are turning to outright bans. Indonesia became the first Southeast Asian nation to prohibit social media for users under 16, affecting platforms including Google's YouTube, ByteDance's TikTok, and Meta's Instagram, Facebook, and Threads. Australia implemented a similar ban in December 2025, and France and Spain have followed with restrictions of their own. Meanwhile, the European Commission has escalated its investigation into Meta's failure to keep underage users off its platforms, which could bring fines of up to 6% of global revenue. A US Senate committee has also advanced legislation that would require Meta and other AI companies to bar minors from using chatbots, extending regulatory scrutiny to AI-driven conversational tools.
The trend is global and accelerating. Each ban and inquiry adds compliance costs, shrinks Meta's addressable market among young users, and feeds the regulatory pressure Li acknowledged on the earnings call. The bans also set up a natural experiment: if social media use among minors falls in countries with restrictions, and youth mental health improves as a result, that outcome will strengthen the evidence behind the addiction lawsuits in the United States.
As it pivots toward artificial intelligence, Meta has been laying off hundreds of employees across Reality Labs, recruiting, and sales. The planned $125 billion to $145 billion in 2026 capex is roughly double last year's spending, most of it going to data centers, GPUs, custom silicon, and infrastructure for Llama models and Meta's Superintelligence Labs. An extended agreement with Broadcom, running through 2029, commits Meta to a sprawling custom silicon program costing billions more. Wall Street was unimpressed: Meta's stock suffered its biggest drop in six months after the call, with Bank of America analysts calling the company a "show-me" story on AI returns.
The investment thesis is that AI improvements will sharpen Meta's recommendation models (keeping users engaged longer), refine its advertising systems (targeting ads more precisely), and eventually open new revenue streams from AI products. But the child safety lawsuits argue that Meta's current recommendation models are already too effective at keeping users engaged, which is precisely the harm at the center of those cases.