By Dr. Sabine Kapasi, CEO at Enira Consulting, Founder of ROPAN Healthcare Pvt Ltd, and UN advisor
AI deployment is often framed as a software challenge, but public health agencies face a far more complex reality. Their workforce spans epidemiologists, field workers, data managers, and community health staff, all with very different levels of digital familiarity. Yet most AI training efforts remain short-term or vendor-driven rather than part of any lasting institutional learning structure.
The key problem is not outright rejection of AI but hesitant and uneven adoption. Public health systems are neither resisting AI nor fully benefiting from it. Many remain stuck in an inefficient middle ground. Policy discussions continue to focus heavily on familiar risks such as privacy breaches, bias, and lack of algorithmic transparency. These concerns are valid. Yet a quieter and more expensive challenge is emerging alongside them.
The concept of opportunity cost helps explain what is at stake. Every delayed diagnosis that could have been flagged earlier, every avoidable hospital visit, and every administrative hour lost to manual work represents value that never materializes. In public health, the loss is not only financial. It erodes system resilience, workforce confidence, and public trust. Three structural gaps stand out: fragmented data, uneven workforce readiness, and fragile institutional trust.
Lots of Data, Limited Usability
Public health systems today hold large volumes of data, but much of it cannot be effectively used by AI tools. Electronic health records remain scattered across platforms. Surveillance data is often stored in incompatible formats. Community level health information, especially from low resource settings, is still poorly digitized.
This fragmentation creates a chain reaction. AI models trained on incomplete or siloed data perform inconsistently in real settings. That inconsistency reinforces clinical skepticism. Health systems then spend additional resources cleaning legacy data, building integration layers, and correcting design choices that should have been addressed at the start. The cost is not just delay. It produces weaker and less trusted systems.
Data gaps also deepen inequity. When rural populations, informal care settings, or marginalized communities are missing from training datasets, predictive tools may quietly shift attention and resources toward populations that are already well documented. In public health, this becomes a structural policy problem rather than a technical flaw.
Technical workarounds such as synthetic data or federated learning can help, but they do not solve the core issue. The deeper problem is governance fragmentation. Questions of data ownership, consent standards, and cross system interoperability remain unresolved. Many procurement systems still treat data infrastructure as an IT expense instead of a core public health investment.
The Workforce Is Not Fully Prepared
What looks like a tech rollout is actually a workforce transition. Adoption theory makes a useful distinction between acquiring a technology, using it, and fully mastering it. Many health systems have not moved beyond the first stage.
Clinician hesitation is sometimes dismissed as resistance to change. In reality, it often reflects practical risk concerns. Liability frameworks remain unclear in many jurisdictions. Explainability is still uneven. When an AI supported decision goes wrong, professional accountability usually remains with the clinician. Under those conditions, caution is rational.
The hidden cost appears in workflow friction. Poorly integrated decision support tools generate excessive alerts and increase cognitive load. Instead of saving time, they create new forms of fatigue. Systems then spend additional money redesigning workflows, retraining staff, and managing adoption cycles that could have been smoother with better human centered design.
Public health teams also operate with wide variations in digital comfort, yet most AI training remains short-term and vendor-led. What is missing is a sustained institutional learning pathway that connects AI literacy to career progression and performance incentives.
Without that foundation, AI risks becoming another administrative burden rather than a capacity multiplier.
Trust Is Still the Hardest Barrier
In public health, trust operates at three levels: clinician confidence, institutional credibility, and public acceptance. Weakness at any level slows meaningful use.
Concerns about black box models are only part of the issue. Performance variability is equally important. Many advanced models are sensitive to small data shifts. That sensitivity can produce unstable outputs in real world environments. In clinical settings, even rare anomalies can damage confidence quickly. A single visible failure can stall wider adoption.
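To make that sensitivity concrete, here is a deliberately simplified sketch, with entirely hypothetical numbers, of how a small and easily missed input shift, such as a change in reporting units between sites, can flip a risk model's outputs without anyone touching the model itself:

```python
def risk_flag(crp, threshold=10.0):
    """Flag a patient as high risk when C-reactive protein
    exceeds a threshold calibrated in mg/L."""
    return crp > threshold

# Site A reports CRP in mg/L, the unit the rule was calibrated on.
site_a = [4.0, 8.0, 12.0, 30.0]
# Site B reports the same patients in mg/dL, so every value is 10x smaller.
site_b = [v / 10 for v in site_a]

flags_a = [risk_flag(v) for v in site_a]  # [False, False, True, True]
flags_b = [risk_flag(v) for v in site_b]  # [False, False, False, False]
```

Nothing about the model changed, yet two genuinely high-risk patients are silently missed at the second site. Real-world drift is rarely this obvious, which is exactly why a single visible failure of this kind can erode confidence so quickly.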
Public trust in this space is especially fragile. Health data is deeply personal, and its use raises understandable concerns. The rollout of AI in healthcare is happening at a time when many people are already wary of surveillance, automation, and the growing commercialization of care. When communities suspect that opaque systems are shaping who gets access to services, pushback tends to rise, even in cases where the technology is meant to improve efficiency.
Many policy frameworks remain incomplete in this area. Regulatory conversations have focused heavily on safety classification and risk tiers. Much less attention has been given to social legitimacy, including participatory oversight, community communication, and visible accountability.
Slow Systems Are Holding Adoption Back
Public procurement systems add another layer of friction. Many approval processes were designed for static medical equipment rather than adaptive software. By the time AI tools pass through procurement cycles, models may already require updating. Maintenance and monitoring budgets are often separated from acquisition budgets, which creates lifecycle blind spots.
In many hospitals, AI is acknowledged as important but not treated as a strategic funding priority. This gap between stated importance and actual investment is a recurring pattern. It quietly sustains underuse.
What Successful Deployments Show
Programmes that worked invested early in regulatory alignment and built continuous clinician feedback into the rollout. For instance, AI-enabled tuberculosis screening tools used in several public health programmes in India gained traction because they were introduced in phases and validated alongside existing radiology workflows rather than imposed system-wide. Similarly, hospital early warning systems for sepsis in global health networks improved adoption when clinicians were involved in tuning alert thresholds and reviewing false positives on an ongoing basis.
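The threshold tuning described above can be pictured with a minimal sketch (the scores and outcomes here are invented for illustration): clinicians and data teams compare candidate alert thresholds against chart-reviewed outcomes, weighing true alerts against false ones.

```python
def alert_stats(scores, labels, threshold):
    """Count true and false alerts at a given alert threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    return tp, fp

scores = [0.2, 0.4, 0.5, 0.7, 0.8, 0.9]           # model risk scores
labels = [False, False, True, False, True, True]  # reviewed outcomes

for t in (0.3, 0.5, 0.7):
    tp, fp = alert_stats(scores, labels, t)
    print(f"threshold {t}: {tp} true alerts, {fp} false alerts")
```

In this toy data, raising the threshold from 0.3 to 0.5 removes a false alert without losing any true ones, while 0.7 starts missing real cases. Surfacing that trade-off, case by case, is precisely what an ongoing clinician review loop does.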
In both cases, modular deployment and steady iteration proved far more effective than big bang rollouts. These examples suggest that the decisive factor is implementation discipline, not the technology itself.
Reducing the hidden cost of AI in public health calls for a practical reset. Data systems must be treated as core infrastructure, with steady funding for interoperability and common standards, not one-off pilots. Workforce planning should move beyond short-term training to build everyday AI literacy across clinical and public health roles, with human oversight built in from the start.
Trust has to be earned through visible safeguards such as continuous performance tracking, independent audits, and clear public disclosure of where AI is used. Procurement and regulation must also become lifecycle aware, since these tools keep evolving after rollout.
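As one illustration of what continuous performance tracking can mean in practice, here is a hypothetical sketch: compare a deployed model's recent accuracy against its validated baseline and escalate for human review when the gap exceeds an agreed tolerance.

```python
def needs_review(baseline_acc, recent_correct, recent_total, tolerance=0.05):
    """Flag a deployed model for human review when recent accuracy
    falls more than `tolerance` below its validated baseline."""
    recent_acc = recent_correct / recent_total
    return recent_acc < baseline_acc - tolerance

# Model validated at 90% accuracy; the last 100 cases got 80 right.
needs_review(0.90, 80, 100)   # escalate for audit
needs_review(0.90, 88, 100)   # within tolerance, keep monitoring
```

The same logs that drive a check like this can feed independent audits and public disclosure, turning monitoring into a visible safeguard rather than an internal afterthought.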