South Africa has withdrawn its first draft national AI policy after reviewers discovered it contained fictitious sources: citations that appear to have been AI-generated rather than drawn from real academic or policy research. The draft had been published for public comment as the country’s foundational framework for AI governance. The withdrawal is one of the most high-profile examples to date of AI hallucination reaching the level of official government policy, and it raises pointed questions about how AI tools are being used in public sector document production globally.
What Happened
South Africa’s Department of Communications and Digital Technologies published a draft national AI policy, the country’s first attempt to establish a regulatory and governance framework for artificial intelligence. The document was released for public comment, the standard process for gathering feedback before policies are finalized.
During the review period, researchers and civil society organizations analyzing the draft noticed that several citations referred to sources that did not appear to exist. Further investigation confirmed that the fictitious sources had the hallmarks of AI-generated citations: correctly formatted references that match the style of real academic papers or policy documents, but that point to papers, reports, or studies that were never published.
The department subsequently withdrew the draft, citing the need to revise the document. It did not publicly confirm whether AI tools were used in preparing the draft, or which specific tool may have been involved.
Why This Keeps Happening in Institutional Contexts
The South Africa case is the latest in an accelerating series of institutional AI hallucination incidents in 2026. The pattern is consistent across contexts:
- A human or team uses an AI tool to draft a document that requires citations or source attribution
- The AI generates plausible-sounding references as part of the draft: correctly formatted, authoritative in tone, and thematically relevant
- The human reviewer does not verify every citation against the actual sources, either trusting the AI or lacking the resources to check comprehensively
- The document is published or filed with fabricated citations included
- The fabrications are discovered during review by external parties who do check
The case of the Nebraska attorney suspended this month for filing a court brief with 20 AI-hallucinated case citations is the same failure mode applied to legal filings. The South Africa AI policy draft is the same failure mode applied to national governance documents. In both cases, the human responsible for the document trusted the AI’s output without independently verifying the citations it generated.
The Specific Irony of an AI Policy With AI Hallucinations
The content of the document makes this incident particularly striking. A national AI policy is specifically designed to establish how a country will govern, regulate, and mitigate the risks of artificial intelligence, including, presumably, the risk of AI systems generating unreliable outputs. A document about AI safety that contains AI-generated falsehoods is not just an embarrassment; it undermines the credibility of the regulatory framework before it even takes effect.
For countries and institutions developing AI governance frameworks in 2026, the incident serves as a practical demonstration of exactly the kind of AI risk those frameworks are meant to address. The authors of an AI policy apparently did not apply to their own workflow the verification standards their policy would likely recommend for AI-assisted outputs.
The Global Institutional Risk
South Africa is almost certainly not the only government that has used AI tools to assist in drafting policy documents. The practical pressures on public sector writing, such as tight deadlines, limited staff, and complex technical subject matter, make AI assistance attractive for exactly the same reasons it’s attractive in private sector contexts. And the verification gap that produces hallucinated citations is structurally similar in law firms and government offices alike: people trust the output because it looks right, and checking every source is time-consuming.
Research earlier this year found that 47% of enterprise AI users had based at least one major business decision on hallucinated content. The equivalent figure for public sector AI use is unknown, because most governments are not systematically auditing the AI-assisted documents they produce. The South Africa withdrawal is visible because the fabrications were caught during a public comment process. Documents that never receive equivalent external scrutiny may contain similar errors that are never discovered.
What Institutions Need to Do
For any organization using AI tools to assist in drafting documents that require citations, whether policy documents, research reports, regulatory filings, or legal briefs, the practical implication of the South Africa incident is straightforward: every citation must be independently verified against the original source, regardless of how confident the AI’s output appears. AI models are not reliable citation generators. They are reliable text drafters whose citation outputs require human verification before publication. The sketch below shows what a first automated pass at that verification can look like.
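As a rough illustration only, this Python sketch checks whether each cited DOI resolves to a real metadata record in the public Crossref REST API (api.crossref.org/works/{doi}). The citation list, the doi_resolves helper, and the restriction to DOI-bearing sources are all illustrative assumptions, not a description of any tool mentioned in this article.

```python
import time
import urllib.error
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works/"

def doi_resolves(doi: str) -> bool:
    """Return True if Crossref holds a metadata record for this DOI."""
    url = CROSSREF_API + urllib.parse.quote(doi)  # quote() keeps the '/' in DOIs
    req = urllib.request.Request(url, headers={"User-Agent": "citation-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # No Crossref record: flag for manual review; not proof of fabrication
            return False
        raise  # rate limits or outages should surface, not pass silently

# Hypothetical citation list; in practice these would be parsed out of the draft.
citations = [
    ("Harris et al. 2020, Array programming with NumPy", "10.1038/s41586-020-2649-2"),
    ("Unverifiable policy report", "10.9999/not-a-real-doi"),
]

for label, doi in citations:
    verdict = "found" if doi_resolves(doi) else "NOT FOUND - verify manually"
    print(f"{label}: {verdict}")
    time.sleep(1)  # stay polite to the public API
```

An automated pass like this can only triage: many policy sources have no DOI at all, and a human still has to confirm that each source exists and actually says what the document claims it says.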
Conclusion
South Africa’s AI policy withdrawal is a small story about a single government document. It’s also a signal about a systemic risk that applies everywhere institutions are using AI to produce authoritative documents without adequate verification workflows. Browse our directory to explore the AI writing tools at the center of this verification challenge, and how to use them in ways that minimize the hallucination risks this incident highlights.