South Africa has withdrawn its Draft National Artificial Intelligence Policy after confirming that the document contained fabricated academic references, raising concerns about the integrity of public policymaking.
The decision was announced by the Minister of Communications and Digital Technologies, Solly Malatsi, after an internal review substantiated earlier media reports that several cited academic sources did not exist. According to reporting by News24, confirmed by the South African Government News Agency, the references were likely generated by artificial intelligence systems and incorporated into the policy document without sufficient verification.
The draft policy had previously been approved by Cabinet on 25 March 2026 and published for public comment in April, with submissions initially scheduled to close in June. Its withdrawal interrupts what had been positioned as a significant step in formalising South Africa’s national framework for artificial intelligence governance.
In a public statement, Malatsi acknowledged that the inclusion of fictitious sources undermined the credibility of the document, emphasising that the issue extended beyond technical oversight. The minister indicated that the episode highlighted the necessity of maintaining rigorous human oversight when integrating AI tools into governmental workflows.
The development occurs within a broader continental context in which African states are actively exploring regulatory frameworks for artificial intelligence. Several countries, including Rwanda, Kenya, and Nigeria, have advanced national strategies that seek to balance innovation with ethical considerations, data governance, and socioeconomic inclusion. South Africa’s draft policy had similarly aimed to position the country within this evolving landscape, addressing areas such as innovation ecosystems, skills development, and responsible AI deployment.
Analysts note that the incident illustrates both the opportunities and the vulnerabilities of adopting generative AI in administrative contexts. While such technologies can enhance efficiency and expand access to information, they also introduce risks to accuracy, accountability, and epistemic reliability. The South African case shows how those risks materialise when institutional safeguards are insufficiently robust.
At a continental level, the episode invites reflection on how African governments can shape AI governance frameworks that are grounded in local realities, knowledge systems, and developmental priorities. Rather than viewing the incident solely as a procedural failure, some observers suggest it may serve as a catalyst for strengthening institutional capacity, reinforcing peer review mechanisms, and fostering greater transparency in policy development processes.
The Department of Communications and Digital Technologies has not yet indicated a revised timeline for reintroducing the policy. The withdrawal is nonetheless expected to prompt a reassessment of drafting processes and verification protocols, with implications for how digital governance frameworks are constructed in South Africa and beyond.
As African countries continue to articulate their positions within the global digital order, the emphasis on credibility, inclusivity, and contextual relevance remains central. The South African experience highlights the importance of ensuring that technological adoption within governance structures is accompanied by rigorous oversight, thereby safeguarding public trust while enabling innovation.