AI and Insurance: Navigating emerging risks while driving new opportunities

By Sukumar Sakthivel, Chief Technology Officer, EDME Insurance Broker

The insurance industry has always priced uncertainty, but AI is forcing it to reckon with a different kind of unknown altogether. This is not a technology upgrade at the margins. It is a structural re-architecture of how risk is assessed, priced, and managed, happening faster than most institutions are prepared to absorb. The same technology driving unprecedented operational gains is simultaneously generating risks the industry has no historical framework to price. That tension between opportunity and exposure is what defines this moment. For insurance leaders, the question is no longer whether to engage with AI, but whether they are moving with enough conviction to stay relevant in what is shaping up to be the industry’s most consequential transformation.

The numbers confirm the direction of travel. The AI in insurance market was valued at USD 8.63 billion in 2025 and is projected to reach USD 59.50 billion by 2033, growing at a CAGR of 27.32%. That kind of trajectory does not describe incremental change. It describes a sector in the middle of a fundamental re-architecture. Leadership across the industry recognises this. In a survey by EY, 99% of insurers reported they were either already investing in generative AI capabilities or planning to do so. The boardroom debate is settled. Execution is now the only variable that matters.
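The projection is internally consistent: compounding the 2025 base at the stated CAGR over eight years reproduces the 2033 figure to within rounding, as a quick check shows.

```python
# Compound-growth check of the market projection cited above:
# USD 8.63 billion (2025) growing at 27.32% CAGR through 2033.
base_2025 = 8.63          # market size, USD billions
cagr = 0.2732             # compound annual growth rate
years = 2033 - 2025       # eight compounding periods

projected_2033 = base_2025 * (1 + cagr) ** years
print(f"Projected 2033 market: USD {projected_2033:.2f} billion")
```

The result lands at roughly USD 59.6 billion, in line with the reported USD 59.50 billion figure.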

The Operational Transformation Is Already Underway
The clearest proof of AI’s value in insurance is not theoretical. It is operational, and it is happening now. Claims processing, historically the industry’s most friction-heavy function, has been the first major beneficiary. Insurers are using AI to automate claim intake, document review, fraud checks, and settlement workflows, reducing turnaround time and operational cost in ways that manual processes simply cannot match.

Underwriting tells an equally compelling story. The era of static actuarial tables built on population averages is giving way to dynamic, individual-level risk assessment that draws on behavioural data, telematics, satellite imagery, and IoT inputs simultaneously. What once took underwriters days of manual assessment can now be processed in minutes, with a depth of contextual insight that no human team could replicate at scale. This is not an upgrade to the existing model. It is a replacement of it. And for an industry that has priced risk the same way for generations, that shift carries consequences far beyond operational efficiency.
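The shape of that data fusion can be sketched in a few lines. The feature names, weights, and logistic scoring below are invented for illustration, not drawn from any production underwriting model; the point is only that heterogeneous live signals collapse into a single, continuously updatable risk score.

```python
import math

# Hypothetical illustration of multi-signal risk scoring.
# All feature names and weights are invented for this sketch.
WEIGHTS = {
    "harsh_braking_per_100km": 0.8,   # telematics feed
    "night_driving_ratio": 0.5,       # behavioural data
    "flood_zone_score": 1.2,          # satellite / geospatial input
    "iot_smoke_alarm_ok": -0.9,       # IoT mitigation signal (lowers risk)
}
BIAS = -2.0

def risk_score(features: dict) -> float:
    """Map weighted feature inputs to a risk score in (0, 1) via a logistic."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

applicant = {
    "harsh_braking_per_100km": 1.5,
    "night_driving_ratio": 0.3,
    "flood_zone_score": 0.2,
    "iot_smoke_alarm_ok": 1.0,
}
print(f"risk score: {risk_score(applicant):.3f}")
```

A real system would learn the weights from claims history rather than hand-set them, but the structural point stands: the score updates the moment any input feed does, which static actuarial tables cannot do.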

The financial case for AI is nowhere more stark than in fraud detection. At $308 billion in annual losses in the US alone, fraud remains one of the industry’s most stubborn drains, and AI-powered anomaly detection, operating across millions of transactions in real time, has become its most effective answer yet. But the numbers that should command leadership attention go beyond fraud. Among insurers who have deployed AI at scale, McKinsey’s research documents a 10–15% increase in premium growth, a 20–40% reduction in customer onboarding costs, and a 3–5% improvement in claims processing accuracy. Taken together, these are not efficiency metrics. They are the early markers of a competitive gap that will be very difficult to close once it opens.
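The core idea behind anomaly detection is simple enough to sketch: score each transaction against the statistical profile of the population and flag the outliers. The toy data and the z-score threshold below are illustrative; production systems score far richer feature sets with learned models rather than a single summary statistic.

```python
import statistics

# Minimal anomaly-detection sketch: flag claim amounts that sit more
# than 2.5 standard deviations above the population mean.
# Data and threshold are illustrative only.
claims = [1200, 950, 1100, 1300, 980, 1250, 1050, 15000, 1150, 990]

mean = statistics.mean(claims)
stdev = statistics.pstdev(claims)

anomalies = [c for c in claims if (c - mean) / stdev > 2.5]
print(f"flagged for review: {anomalies}")
```

Here the single inflated claim is surfaced automatically; at the scale of millions of transactions, that same pattern-deviation logic is what makes real-time screening economically viable.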

The Risks Are Real and Largely Underestimated
What makes this moment genuinely complex is that the same technology driving these gains is also generating risks the industry has no historical framework to price. The most structurally significant of these is opacity. Insurers increasingly rely on opaque AI models for risk assessment and pricing, raising serious ethical concerns about bias and exclusion. Regulatory frameworks are demanding explainable AI, yet most implementations remain black-box systems. In a sector whose entire legitimacy rests on fair and transparent decision-making, that gap between algorithmic capability and human accountability is not a technical footnote. It is an existential concern.

Algorithmic bias compounds the problem. AI models trained on historical data will reflect historical inequities, often without any intent or awareness on the part of the insurer deploying them. Regulatory scrutiny around data governance and AI bias has intensified sharply, and insurers are now investing in stronger governance frameworks and third-party model validation as a direct response. The direction of regulatory travel across the EU AI Act, IRDAI guidelines in India, and emerging frameworks across Asia-Pacific is unmistakably toward greater accountability. Insurers who are still treating governance as a compliance checkbox are misreading the room.
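One concrete check that model-validation teams commonly borrow for fairness audits is the disparate-impact ratio, with the "four-fifths" threshold from US employment guidelines as a rule of thumb. The data below is invented for illustration; a real audit would test many protected attributes and outcomes.

```python
# Illustrative fairness audit: compare approval rates across two
# applicant groups. Data and the 0.8 ("four-fifths") threshold are
# for demonstration only.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = approved
group_b = [1, 0, 0, 1, 0, 1, 0, 1]

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag for review: below the four-fifths threshold")
```

A failing ratio does not prove discrimination, but it is exactly the kind of auditable, explainable signal that emerging regulatory frameworks expect governance processes to produce.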

A New Risk Frontier: Insuring the Infrastructure of AI
There is a dimension to this story that has received far less attention than it deserves, and it sits at the intersection of AI investment and physical infrastructure. The data centers powering the global AI boom have become a stress test for insurers in their own right. Global spending on data centers could reach $7 trillion by 2030, and the scale of capital concentrated in these facilities is creating capacity challenges the insurance market has never encountered before.

These are not conventional commercial risks with established loss histories. A single AI-optimised data center campus concentrates extraordinary value, draws power at a scale that strains regional grids, and houses hardware whose obsolescence cycle runs far shorter than the financing structures built around it.

The severity of losses, when they occur, bears no resemblance to what standard property underwriting models were designed to absorb. The industry is being asked to price risks it has never seen before, on assets whose failure modes are still being discovered in real time. That is not a comfortable position for a sector whose pricing discipline depends on historical loss data, and it is a challenge that will only grow as the infrastructure buildout accelerates.

What Separates the Leaders
The insurers who will define the next decade are already visible. Not because they adopted AI first, but because they are deploying it with discipline. They are investing in explainability alongside capability. They are building governance structures before regulators force them to. They are treating data quality as a strategic asset rather than an operational afterthought.

The roles growing fastest in insurance today are experienced underwriters, compliance specialists, analytics professionals, and technologists. Positions defined by judgment, not processing. That signal matters. It tells us that the industry’s best performers understand something the technology evangelists sometimes miss: AI amplifies judgment, it does not replace it. The institutions that internalise that truth and build their AI strategies around it are the ones that will still be trusted, relevant, and profitable when this transformation cycle completes.
