Who is accountable when AI goes rogue?

By Syed Ahmed, AVP & Global Head of the Responsible AI Office, Infosys

Every leap in technology brings a corresponding leap in risk. The sharper the tool, the greater the harm if it misfires. Artificial intelligence follows this pattern at unprecedented velocity. In its early days, basic safeguards such as data checks, bias reviews, and access controls were sufficient. With the rise of generative AI, those safeguards had to expand into broader toolkits: watermarking, red-teaming, and continuous monitoring. Now, as systems gain greater autonomy, even those measures prove insufficient. Risks are no longer linear; they are multiplying.

When instructions fail

The recent Replit “vibe coding” experiment illustrates the stakes. In this case, an AI coding assistant was explicitly told to freeze changes and not touch production data. It ignored the instructions. The assistant reportedly deleted a production database and then generated thousands of fabricated records and misleading test results. For affected users, this meant disruption and confusion. For Replit, it meant a reputational crisis significant enough that its CEO issued a public apology and promised structural fixes such as stronger separation between development and production environments.

It is important to note that not all data was irretrievably lost; some backups and recovery paths existed. But the incident underscores a fundamental shift: when AI systems can override human direction, the question is no longer about technical failure alone. It is about accountability.

Beyond more guardrails

The immediate response to such incidents is often technical: more monitoring, tighter permissions, better rollback mechanisms. These are necessary, but they do not answer the core issue. Accountability cannot be engineered away. In the eyes of regulators, investors, and the public, responsibility will always sit with people, not with machines.

The law already reflects this stance. Product liability doctrine does not let a manufacturer blame the hammer for an injury; it holds the maker and seller responsible. In the same way, “the AI did it” will not absolve companies of their obligations.

The governance gap

This exposes a troubling gap. Many organisations adopt AI rapidly but leave responsibility ambiguous. Does it lie with engineers who design the system? The product team that deploys it? The compliance officers tasked with oversight? Or the board that approved the initiative without mandating safeguards?

Such ambiguity is dangerous. It undermines internal discipline, weakens external trust, and guarantees reputational damage when failures become public. Without a clear answer, accountability risks being passed around like a hot potato in the aftermath of a crisis.

Boards cannot delegate this away

Boards must treat AI accountability as a fiduciary issue, not a technical one. Just as they are ultimately responsible for financial misstatements or cybersecurity breaches, so too will they be judged for AI failures. The Replit case serves as a warning, not an outlier. As AI systems manage increasingly sensitive operations, such as payments, logistics, medical triage, and even legal reasoning, the consequences of missteps will escalate from embarrassment to systemic risk.

The board’s role is not to design code. It is to ensure that governance frameworks exist, responsibilities are assigned, and escalation paths are clear. Financial scandals led to internal controls and audit committees; AI demands a similar maturation.

From guardrails to accountability frameworks

This evolution requires a step-change in governance:

– Board-level integration: AI risk must be part of enterprise risk management, with structured reporting to the board.

– Designated accountability: Every deployed AI system should have an identifiable executive owner, not just a diffuse team.

– Auditability: Documentation and explainability are not optional extras; they are the evidence base that allows leaders to stand behind AI-driven outcomes.

– Preparedness: Incident response must extend to AI, with tested playbooks and clear lines of authority for crisis management.

Accountability, done right, does not slow innovation. It ensures that innovation builds rather than erodes trust.

The unavoidable question

The Replit incident is a symptom of a larger reality: AI systems will fail, and they will do so in ways that are fast, public, and deeply damaging. Regulations will arrive too slowly to save an unprepared company. The burden of accountability will always fall on leadership.

For boards, this means the time for abstract debate is over. Accountability for AI must be explicit, owned, and enforced, rather than vague or aspirational. It should be embedded into risk management frameworks, backed by documented processes, and stress-tested like any other enterprise control.

When, not if, the next failure occurs, the world won’t ask what the AI was thinking. It will ask who was in charge. And the companies that have already answered that question will be the ones that survive with trust, legitimacy, and leadership intact.
