Gartner warns misconfigured AI could trigger shutdown of national infrastructure by 2028

Gartner has warned that a misconfigured artificial intelligence system could shut down national critical infrastructure in a G20 country by 2028, underscoring growing risks as AI becomes deeply embedded in the cyber-physical systems that control essential services.

In a new prediction, the research firm said that by 2028, failures caused not by cyberattacks or natural disasters but by configuration errors in AI systems could lead to large-scale disruption of services such as power, manufacturing and industrial operations. These systems increasingly rely on AI-driven automation to sense, analyse and respond to real-world conditions in real time.

Gartner groups these technologies under cyber-physical systems (CPS), a category that includes operational technology, industrial control systems, industrial automation, industrial internet of things platforms, robots and drones. As AI takes on greater autonomy within these environments, even small errors in configuration could have disproportionate consequences.

“The next major infrastructure failure may not come from a hostile actor or a natural event, but from a well-intentioned engineer, a flawed update script or even a misplaced decimal point,” said Wam Voster, vice-president analyst at Gartner. “A secure override or ‘kill switch’, accessible only to authorised operators, is essential to protect national infrastructure from unintended shutdowns caused by AI misconfiguration.”

According to Gartner, misconfigured AI systems can misinterpret sensor data, autonomously shut down vital services or trigger unsafe actions without human intent. In environments such as power grids or manufacturing plants, this could result in physical damage, prolonged outages and serious threats to public safety and economic stability.

The firm pointed to modern electricity networks as an example. These systems increasingly depend on AI models to balance generation and consumption dynamically. A predictive model that incorrectly flags normal demand as instability could initiate unnecessary grid isolation or widespread load shedding, potentially affecting entire regions or countries.
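To make that failure mode concrete, here is a minimal sketch of how a single misplaced decimal point in a configuration value could turn routine demand fluctuation into a spurious instability signal. The controller, names and thresholds are all hypothetical, for illustration only, and are not drawn from any real grid-control system:

```python
# Hypothetical sketch: a grid stability check whose behaviour is driven
# entirely by an operator-supplied configuration value. All names and
# numbers are illustrative, not from any real system.

# Intended config: flag instability when demand deviates >15% from forecast.
INTENDED_CONFIG = {"deviation_threshold": 0.15}

# Misconfigured: a misplaced decimal point makes the threshold 1.5%,
# so ordinary fluctuations now look like instability.
DEPLOYED_CONFIG = {"deviation_threshold": 0.015}

def assess_grid(forecast_mw: float, actual_mw: float, config: dict) -> str:
    """Return 'STABLE' or 'ISOLATE' based on deviation from forecast."""
    deviation = abs(actual_mw - forecast_mw) / forecast_mw
    return "ISOLATE" if deviation > config["deviation_threshold"] else "STABLE"

# A normal 5% swing in demand:
print(assess_grid(10_000, 10_500, INTENDED_CONFIG))  # STABLE
print(assess_grid(10_000, 10_500, DEPLOYED_CONFIG))  # ISOLATE -> load shedding
```

The model itself is unchanged in both cases; only the configuration differs, which is precisely why this class of failure can pass unnoticed until it triggers at scale.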

The growing complexity of AI systems compounds the problem. “Many modern AI models operate as black boxes,” Voster said. “Even their developers cannot always predict how minor configuration changes will affect overall behaviour. As systems become more opaque, the risks associated with misconfiguration rise, making human intervention capabilities even more critical.”

To reduce these risks, Gartner is urging chief information security officers and infrastructure leaders to prioritise human control and resilience as AI adoption accelerates. Among its recommendations is the implementation of secure safe-override modes for all critical CPS environments, ensuring that authorised operators can intervene even when systems are running autonomously.
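What such an override layer might look like in software is sketched below. The operator registry, credential check and controller actions are hypothetical placeholders, not a real CPS API; a production system would rely on hardware interlocks and audited credentials rather than an in-memory set:

```python
# Hypothetical sketch of a safe-override gate: autonomous actions pass
# through a layer that an authorised human can force into manual mode.

AUTHORISED_OPERATORS = {"op-7f3a"}  # placeholder credential store

class OverrideGate:
    def __init__(self):
        self.manual_mode = False

    def engage_override(self, operator_id: str) -> None:
        """Switch to manual mode; only authorised operators may do so."""
        if operator_id not in AUTHORISED_OPERATORS:
            raise PermissionError("operator not authorised for override")
        self.manual_mode = True

    def execute(self, ai_action: str) -> str:
        """Block autonomous actions while the override is engaged."""
        if self.manual_mode:
            return f"BLOCKED (manual mode): {ai_action}"
        return f"EXECUTED: {ai_action}"

gate = OverrideGate()
print(gate.execute("shed_load(region='north')"))  # EXECUTED
gate.engage_override("op-7f3a")
print(gate.execute("shed_load(region='north')"))  # BLOCKED
```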

The firm also advises organisations to use full-scale digital twins of critical infrastructure to test updates and configuration changes under realistic conditions before deployment. In addition, Gartner recommends continuous, real-time monitoring of AI systems, supported by rollback mechanisms and the creation of national AI incident response teams to manage failures when they occur.
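A minimal sketch of that test-then-deploy loop follows, assuming a hypothetical replay_against_twin() harness standing in for whatever digital-twin simulation an operator actually runs; none of these functions are a real API:

```python
# Hypothetical sketch: validate a configuration change against a digital
# twin before deployment, and keep the previous version for rollback.

from copy import deepcopy

def replay_against_twin(config: dict) -> bool:
    """Placeholder: replay historical scenarios in simulation and report
    whether the candidate config produced any unsafe actions."""
    return config.get("deviation_threshold", 0) >= 0.05  # illustrative rule

def deploy(current: dict, candidate: dict) -> dict:
    """Deploy the candidate config only if the twin run passes; else roll back."""
    backup = deepcopy(current)  # retained for rollback
    if not replay_against_twin(candidate):
        print("twin run failed: rolling back to previous config")
        return backup
    print("twin run passed: deploying candidate config")
    return candidate

live = {"deviation_threshold": 0.15}
live = deploy(live, {"deviation_threshold": 0.015})  # the decimal-point error
print(live)  # {'deviation_threshold': 0.15} -- bad change never reached the grid
```

In this pattern the misconfiguration from the earlier grid example is caught in simulation, and the rollback path means a failed change never becomes the live configuration.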

As governments and operators increasingly rely on AI to run essential services, Gartner’s warning highlights a shift in risk thinking: from defending infrastructure against external threats to ensuring that complex, autonomous systems remain understandable, controllable and safe under all conditions.
