I have been reading the new FDA guidance on Clinical Decision Support Software.
At first glance, it looks reassuring. The key regulatory distinction rests on one principle: a healthcare professional (HCP) must be able to independently review the basis of a recommendation. No blind trust. No black-box dependency. Transparency becomes the decisive criterion.
On paper, that sounds entirely reasonable.
But here is the uncomfortable question I kept thinking about: What happens when software becomes so complex that a human can no longer realistically “independently review” it without assistance?
Modern clinical AI systems are not simple rule engines. They integrate longitudinal data, probabilistic modeling, adaptive thresholds, population-level priors, sometimes even digital twins. The idea that a clinician can simply “look at the logic” and validate it may be conceptually elegant — but technically naïve.
We may be entering a paradoxical situation.
To evaluate the transparency of complex AI-driven clinical systems, an HCP may increasingly need… another AI. Not to replace clinical judgment.
But to translate model behavior into cognitively manageable insight.
Transparency, in highly complex systems, is no longer a static property of the software. It becomes a layered architecture. A mediated process.
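To make that layered, mediated transparency concrete, here is a minimal sketch of what an interpretation layer might look like: a simple local surrogate model that probes a complex clinical model around a single patient and hands the clinician a small set of reviewable weights. Everything here is an illustrative assumption on my part (synthetic data, hypothetical feature names, a generic surrogate technique); it is not drawn from the FDA guidance or any specific product.

```python
# Illustrative sketch only: an "interpretation layer" that approximates a
# complex model's behavior near one patient with a simple linear surrogate,
# so the clinician reviews an understandable proxy rather than the raw model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in for a complex clinical risk model trained elsewhere (the black box).
features = ["age", "creatinine", "hba1c", "systolic_bp"]  # hypothetical features
X_train = rng.normal(size=(1000, len(features)))
y_train = (X_train[:, 1] + 0.5 * X_train[:, 2]
           + rng.normal(scale=0.5, size=1000) > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def local_surrogate(model, patient, n_samples=500, scale=0.3):
    """Fit a small linear model to the black box's predicted risk in a
    neighborhood around one patient, returning per-feature local weights."""
    # Perturb the patient's features to probe how the model behaves locally.
    neighborhood = patient + rng.normal(scale=scale, size=(n_samples, patient.shape[0]))
    risk = model.predict_proba(neighborhood)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(neighborhood, risk)
    return dict(zip(features, surrogate.coef_))

patient = np.array([0.2, 1.8, 0.9, -0.3])  # one hypothetical patient vector
for name, w in sorted(local_surrogate(black_box, patient).items(),
                      key=lambda kv: -abs(kv[1])):
    print(f"{name:12s} local weight {w:+.3f}")
```

The specific technique is beside the point. What matters is that the artifact the clinician actually reviews, the local weights, is itself produced by another layer of modeling, with its own assumptions and failure modes. That is precisely the dependency the current framing of "independent review" does not yet address.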
The regulatory requirement says: the clinician must not primarily rely on the software. But if understanding the software requires another layer of intelligent interpretation, what exactly counts as independent review?
This is not a criticism of regulators. It is an evolution problem. We are moving from “decision support” to “cognitive co-processing.”
From software tools to hybrid intelligence systems.
The real governance question is no longer: Is the logic explainable? It is: What counts as independent review when making the logic understandable requires another layer of machine intelligence?
As someone working at the intersection of clinical trials, AI governance and hybrid intelligence, I believe this is where regulators can have real impact: establishing a framework for what counts as independent review when that review is itself machine-mediated.