South Africa has retracted its national AI policy after at least six of its 67 academic references were identified as AI-generated fabrications.
TL;DR: South Africa’s Communications Minister Solly Malatsi retracted the draft national AI policy after News24 uncovered that at least six of its 67 academic citations were fabricated, referencing non-existent articles in genuine journals. The Cabinet approved the draft in March, and it was then opened for public comment. Malatsi called the episode an “unacceptable lapse” and promised accountability. The retraction leaves South Africa without an AI governance framework and raises questions about its institutional capacity to regulate the technology.
South Africa’s Department of Communications and Digital Technologies spent several months drafting a national AI policy, proposing bodies including a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, and an AI Insurance Superfund. The policy rested on five governance pillars: skills capacity, responsible oversight, ethical and inclusive AI, cultural preservation, and human-centric implementation, with a risk-based approach modeled on the EU AI Act. The Cabinet approved the draft on March 25, and it was published in the Government Gazette on April 10 for public comment.

News24, however, examined the bibliography and found that multiple citations did not correspond to real articles. The journals were legitimate, but the cited papers did not exist, and the credited authors had not written them. Editors of the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy confirmed to News24 that the referenced articles had never appeared in their pages. The most likely explanation, as Minister Malatsi indicated, is that the drafters used a generative AI tool and failed to validate any of the references. A policy intended to regulate AI was thus undermined by the very technology it set out to govern.
The withdrawal of the policy
On April 27, Malatsi announced the policy's retraction, calling the fake citations an “unacceptable lapse” that compromised the draft's integrity and credibility. He said those responsible for drafting and quality assurance would face consequences. “This failure is not a mere technical issue,” he emphasized. The chair of the parliamentary portfolio committee wryly suggested the department “skip using ChatGPT this time” during the revision. The document will be revised and reissued for public comment, but no timeline has been given. In the meantime, South Africa has no formal AI governance framework at a moment when governments worldwide are wrestling with how to regulate AI, a gap that further damages the country's credibility as a serious contributor to that debate.
The issue extends beyond fake citations appearing in a government document: they appeared in a policy about artificial intelligence, produced by the department that oversees the country's digital technology strategy, at a time when pivotal global AI governance debates are playing out in Brussels, Washington, and Beijing. The EU AI Act, which aims to be the most comprehensive AI regulatory framework, is contending with postponed standards and an implementation timeline extended to 2027 for high-risk systems. The United States has no federal AI legislation, watching states legislate independently while the White House attempts to preempt those efforts. China has implemented AI regulations, but selectively. Against this backdrop, South Africa proposed a policy that could not withstand a basic bibliography check.
The trend
The fabricated citations in South Africa are a severe instance of a broader problem emerging in institutions that use generative AI for research and drafting. A study published in Nature found that 2.6 percent of academic papers published in 2025 contained at least one questionable citation, up from 0.3 percent in 2024. If that rate holds across the roughly seven million scholarly publications from 2025, over 110,000 papers could include invalid references. GPTZero, a Canadian detection startup, reviewed more than 4,000 research papers accepted at NeurIPS 2025, one of the world's leading AI conferences, and uncovered over 100 hallucinated citations across at least 53 papers. A separate multi-model study found that only 26.5 percent of AI-generated citations were entirely accurate. The failure mode is inherent to the technology: large language models generate citations by probabilistic prediction, not retrieval. They do not look up papers; they produce references that match patterns in their training data, and the result can look authoritative while referring to nothing.
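The verification step the drafters skipped is straightforward to automate. As a minimal sketch, each citation could be checked against a bibliographic database before publication; the example below assumes the public Crossref REST API (api.crossref.org/works), and the helper names and loose title-matching heuristic are illustrative, not part of any tool mentioned in this story:

```python
from urllib.parse import urlencode

CROSSREF_API = "https://api.crossref.org/works"

def build_lookup_url(title: str, author: str, rows: int = 3) -> str:
    """Build a Crossref bibliographic-search URL for one citation."""
    params = {"query.bibliographic": f"{title} {author}", "rows": str(rows)}
    return f"{CROSSREF_API}?{urlencode(params)}"

def looks_verified(crossref_response: dict, cited_title: str) -> bool:
    """Return True if any record returned by Crossref loosely matches
    the cited title; a citation with no match deserves manual review."""
    items = crossref_response.get("message", {}).get("items", [])
    cited = cited_title.lower().strip()
    for item in items:
        for title in item.get("title", []):
            found = title.lower().strip()
            if cited in found or found in cited:
                return True
    return False
```

A real pipeline would also need rate limiting, author and year matching, and DOI resolution, but even a single pass like this could flag references that resolve to nothing.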
The South African case is notable not only because the technology fabricated information, a well-documented limitation of generative AI, but because those fabrications were published in an official government policy document that received Cabinet approval without anyone verifying its references. The drafting involved civil servants, expert consultations, and ministerial review. Dumisani Sondlo, who led the department's AI policy work, had described the development process as “an act of acknowledging that we don’t know enough.” That acknowledgment, unfortunately, did not extend to recognizing the unreliability of the tool used to produce the document.