South Africa retracts its national AI policy after discovering that at least 6 out of 67 academic references were AI-generated fabrications.
TL;DR: South Africa's Communications Minister Solly Malatsi retracted the country's draft national AI policy after News24 revealed that at least six of its 67 academic citations were fictional, referring to non-existent articles in genuine journals. The policy had received Cabinet approval in March and was made available for public comment. Malatsi labeled the situation an “unacceptable lapse” and vowed there would be accountability. The incident has left South Africa without an AI governance framework and has raised concerns about its institutional capacity to regulate AI technology.
The Department of Communications and Digital Technologies in South Africa spent months developing a national AI policy, which proposed establishing a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, and an AI Insurance Superfund. It organized AI governance around five pillars: skills development, responsible governance, ethical and inclusive AI, cultural preservation, and human-centric deployment, adopting a risk-based approach inspired by the EU AI Act. The draft received Cabinet approval on March 25 and was published for public comment in the Government Gazette on April 10. However, News24 checked the references and discovered that at least six cited sources did not exist. The journals are real, but the attributed articles are not. Scholars credited with foundational work in AI governance had never authored the papers attributed to them. Editors at the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy confirmed to News24 that the cited articles had never been published. Communications Minister Solly Malatsi suggested that the drafters likely relied on a generative AI tool without validating its references. A policy intended to govern AI was thus undermined by the very technology it aimed to regulate.
The Withdrawal: Malatsi announced the withdrawal on April 27, criticizing the false citations as an “unacceptable lapse” that “compromised the integrity and credibility” of the draft policy. He indicated that accountability measures would be taken against those involved in drafting and quality assurance. “This failure is not a mere technical issue,” he stated. The chair of the parliamentary portfolio committee described the situation more succinctly, encouraging the department to “skip using ChatGPT this time” in the redrafting process. The document will be revised before being put out for public comment again, though no timeline has been provided. South Africa currently lacks a formal AI governance framework at a time when global governments are trying to regulate AI, and the credibility of the country as a serious player in this discussion has taken a hit that will extend beyond the policy revision.
The issue is not merely that fake citations appeared in a government document; it's that they appeared in a policy about artificial intelligence, drafted by the department responsible for the nation's digital technology strategy, at a critical moment for AI governance debates in Brussels, Washington, and Beijing. The EU AI Act, the most ambitious AI regulatory framework to date, is facing delays in standards-setting and has pushed its implementation timeline for high-risk systems to 2027. The United States lacks federal AI legislation, watching states enact independent laws while the White House seeks to influence those efforts. China has implemented AI regulations, but enforces them selectively. Against that backdrop, South Africa presented a policy that could not withstand a basic bibliography check.
The Pattern: The problematic citations found in South Africa’s policy are an extreme example of a broader issue emerging among institutions employing generative AI for research and drafting purposes. A study published in Nature reported that 2.6 percent of academic papers released in 2025 contained at least one potentially fabricated citation, a rise from 0.3 percent in 2024. If this trend persists across about seven million scholarly publications from 2025, it implies over 110,000 papers would feature invalid references. A Canadian detection startup, GPTZero, analyzed more than 4,000 research papers accepted at NeurIPS 2025, a leading AI conference, uncovering over 100 fabricated citations across at least 53 papers. Additionally, a separate multi-model study revealed that only 26.5 percent of AI-generated bibliographic references were entirely accurate. This issue is systemic: large language models generate citations using probabilistic token prediction instead of retrieving actual information. They do not search for papers; rather, they predict how a citation should appear based on patterns from their training data, producing seemingly authoritative references that point to nothing.
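The failure mode described above can be illustrated with a toy sketch (not an actual language model, and every author, title, and year below is invented for illustration): when citation-shaped text is assembled from independently plausible fragments, the way next-token prediction recombines patterns from training data, the result looks authoritative while pointing to nothing.

```python
import random

# Toy illustration: each component below is plausible on its own, but the
# assembled reference is a fabrication, because no step checks whether this
# exact article was ever published. All names and titles are hypothetical.
AUTHORS = ["Floridi, L.", "Bryson, J.", "Crawford, K."]
TITLES = [
    "Governing Artificial Intelligence in the Public Sector",
    "Accountability Frameworks for Algorithmic Decision-Making",
]
JOURNALS = ["AI & Society", "South African Journal of Philosophy"]

def fabricate_citation():
    # Fields are sampled independently, mimicking pattern completion
    # rather than retrieval of a real bibliographic record.
    return (f"{random.choice(AUTHORS)} ({random.randint(2018, 2024)}). "
            f"{random.choice(TITLES)}. {random.choice(JOURNALS)}.")

print(fabricate_citation())
```

The output reads like a legitimate reference precisely because each fragment matches real patterns, which is why such fabrications survive casual review and require checking against an actual index to catch.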
What makes South Africa's case unusual is that hallucinated citations made it into an official government policy document that passed through Cabinet approval without anyone verifying the references. The drafting involved civil servants, consultations with subject-matter experts, and ministerial review. Dumisani Sondlo, the department's AI policy lead, had previously described policy development as an acknowledgment of what is not sufficiently known. That acknowledgment did not extend to admitting that the drafting tool itself was inherently unreliable. The six fictitious citations identified by News24 are only the ones that were caught; the authenticity of the remaining references has yet to be independently verified.
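The verification step the drafting process lacked is, in principle, mechanical. A minimal sketch, with an in-memory set standing in for a trusted bibliographic index (in practice one would query a service such as Crossref; the titles below are invented for illustration):

```python
# Hypothetical bibliography check: flag any cited title that cannot be
# matched against a trusted index of known publications. A local set
# stands in for a real lookup so the example is self-contained.
verified_index = {
    "a governance framework for algorithmic accountability",
}

draft_citations = [
    "A Governance Framework for Algorithmic Accountability",
    "Ethics Boards and the Future of Machine Autonomy",  # fabricated title
]

def unverified(citations, index):
    """Return citations whose normalized title is absent from the index."""
    return [c for c in citations if c.strip().lower() not in index]

print(unverified(draft_citations, verified_index))
```

Even this crude title match would have flagged the second entry for human review; real-world checks would also compare authors, years, and DOIs, but the point is that none of this is exotic.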