There is a moment in clinical medicine that every doctor recognises. A laboratory result arrives. It looks clean. It is formatted correctly, printed on headed paper, filed in the right place, signed off by the system. And yet something is wrong. The experienced clinician pauses. Something in the history, something in the patient’s face, something in the accumulated weight of clinical experience says: check this again. The junior doctor, trained to trust the output of the machine, moves on. The senior doctor does not.
That moment, the pause, the question, the refusal to accept a result simply because it arrived with confidence, is what we call clinical oversight. It is not a sign of distrust in technology. It is the mark of a professional who understands that technology produces outputs, not judgements. Judgement remains a human responsibility.
South Africa forgot this. And the cost, for now, is credibility. In other contexts, in hospitals, in clinical systems, in diagnostic tools, the cost of forgetting it can be far greater.
In mid-April 2026, South Africa’s Department of Communications and Digital Technologies released an 86-page Draft National Artificial Intelligence Policy for public comment. The document was ambitious. It proposed new governance structures: a National AI Commission, an AI Ethics Board, a regulatory authority, an AI Ombudsperson, a National AI Safety Institute, and even an AI Insurance Superfund. It positioned South Africa as a continental leader. It cited academic sources and presented itself as evidence based.
Within weeks, it was withdrawn.
At least six of its academic citations were fabricated. The journals were real. The articles were not. Editors confirmed that the papers had never existed. The most plausible explanation was simple. A generative AI system had been used. Its output was accepted. No one verified the references.
The document passed through officials, advisors, legal teams, and senior management. Not one person checked whether the sources existed.
There is a particular irony here that is hard to ignore. A policy designed to regulate artificial intelligence appears to have been undermined by the very technology it sought to govern. The problem, however, was not the technology. The system did exactly what it was built to do. It generated fluent, plausible, well-structured content. It produced references that looked credible. It did not signal uncertainty. It did not hesitate.
The failure was human. Or more precisely, systemic.
Somewhere in the process, trust replaced verification. And once that happened, every subsequent layer of review inherited that assumption. Formatting became a proxy for truth. Confidence became a substitute for accuracy.
This is automation bias. And it is one of the most important risks in the age of artificial intelligence.
Automation bias is not about incompetence. It is a predictable human tendency to trust systems that appear authoritative. When information is presented clearly, confidently, and consistently, the instinct to question it weakens. In aviation, it has contributed to accidents. In radiology, it has led clinicians to miss abnormalities that algorithms fail to flag. In emergency medicine, it raises concerns about over-reliance on decision support tools.
The pattern is consistent. The more capable the system appears, the less scrutiny it receives.
South Africa’s policy failure is not an outlier. It is a warning.
For African healthcare systems, that warning is urgent. If a policy document reviewed in calm conditions can pass through multiple layers without verification, what happens in a crowded hospital ward? What happens when a clinician, exhausted after a long shift, relies on an AI generated note? What happens in a rural clinic where a nurse depends on a triage system that appears more confident than it should be?
The conditions for error are not theoretical. They are structural.
Artificial intelligence will make mistakes. That is not a flaw unique to machines. It is a shared reality of all decision-making systems. The question is not whether errors occur. It is whether they are caught.
In medicine, we already understand how to manage this. For high-risk medications, a second check is required. Two professionals independently verify the same information. Not because one is incapable, but because the system recognises that error is possible and the consequences are severe.
That principle must guide AI deployment. Human verification must be built into the process, not added after failure.
It is important to resist the temptation to treat South Africa’s experience as a failure of ambition. It is not. The country has shown seriousness in attempting to build a governance framework for artificial intelligence. What failed was process, not intent. And process failures, when acknowledged, are valuable. They reveal where systems are weakest.
The real risk would be to ignore the lesson.
Across Africa, governments are moving to integrate artificial intelligence into public services, healthcare, and governance. The promise is real. So are the risks. Fluency is not accuracy. Confidence is not truth. And the absence of an error message is not proof of correctness.
Human oversight is often described as temporary, something needed until systems improve. That framing is flawed. Oversight is not a concession to imperfect technology. It is a permanent requirement in any high stakes environment.
In African healthcare, this is even more critical. Systems are often stretched. Backup layers are limited. Errors are less likely to be absorbed and more likely to propagate. A misdiagnosis is not always corrected by a second opinion. A flawed recommendation is not always challenged.
In such contexts, the human check is not an extra layer. It is the final safeguard.
Leaders deploying AI across the continent must recognise this clearly. Verification must be designed into workflows. Training must emphasise questioning, not passive use. Systems must assume that automation bias will occur and be structured to counter it.
South Africa has offered a rare and valuable lesson. In this case, the damage was reputational. In healthcare, the consequences could be far more serious.
Artificial intelligence will transform African medicine. That transformation is already underway. But its success will depend not on how confidently machines speak, but on how consistently humans question.
The clinician who pauses is not resisting progress.
They are protecting it.
Written by Dr Brighton Chireka, a Zimbabwean-born, UK-based primary healthcare physician, healthcare leadership educator, and international consultant. He writes and speaks on the responsible adoption of artificial intelligence in healthcare, focusing on human-centred innovation, ethical data governance, and the future of African health systems.