Express Computer

In AI-driven fraud cases, synthetic identity creation is the biggest threat: Ankit Gupta, CTO, PolicyBazaar for Business


Artificial Intelligence is transforming the nature of cybercrime. Deepfake scams, synthetic identities, and AI-powered impersonation attacks are rapidly becoming systemic business risks, particularly for BFSI platforms operating at scale.

In this exclusive interaction with Express Computer, Ankit Gupta, CTO at PolicyBazaar for Business, explains why AI-driven fraud is no longer just a cybersecurity issue but a trust, reputation, and business continuity challenge. He also shares how his team is building defence-in-depth and zero-trust architectures, AI-led fraud detection, and identity intelligence, while preserving a frictionless customer experience and preparing for India’s evolving regulatory landscape.

What made deepfake scams and AI-powered cybercrime an urgent priority for you as a CTO?

In today’s AI era, deepfake fraud is not just a cybersecurity issue; it is a business risk.

Over the last year, we have seen a clear shift in the nature and scale of fraud. Attacks have expanded across multiple categories, affecting both retail customers and enterprise organisations. These scams evolve daily, and it is our responsibility to harden our systems so that risks such as deepfake-driven policy issuance or AI-powered cyber scams are controlled at every stage.

The first major impact is customer trust. When impersonation occurs and customer identities are misused, confidence erodes very quickly. That alone forced us to reassess our priorities.

The second aspect is reputational risk. Fraud of this nature directly impacts the organisation’s credibility.

The third is fraud exposure. Over the last year, fraud volumes have grown immensely and continue to evolve. This creates constant urgency for us, not only to protect our own platform but also to help enterprises understand these risks and select policies that safeguard them against modern cyber threats.

What types of deepfake-led attacks pose the greatest threat to digital onboarding and KYC integrity today?

In the AI-driven fraud landscape, synthetic identity creation is the biggest threat.

Here, attackers create identities using documents that closely resemble genuine ones, but the person does not exist. Traditional KYC checks often get bypassed because the identity itself is synthetic.

The second major threat comes from video and voice impersonation. Today, many people cannot distinguish between a real agent and an AI-generated voice. During video KYC or voice-based verification, systems must be powerful enough to detect whether the interaction involves a real person or a deepfake impersonation.

The third category is document deepfakes. Earlier, forged documents were easier to detect. With AI, documents can now be manipulated so convincingly that detecting alterations becomes extremely difficult.

Among all these, synthetic identity fraud is the most dangerous because it involves a non-existent individual, making traceability extremely difficult. That is why onboarding and KYC processes must evolve rapidly to address this threat.

What technology interventions are you using to counter manipulated content and synthetic identities without slowing business?

We approach this through what I call defence-in-depth (DiD).

We do not rely on a single control. Instead, we deploy multiple layers of protection — advanced liveness detection beyond traditional biometrics, behavioural analytics, and in-house AI models.

We analyse how genuine users behave versus how deepfake or agentic AI behaves. Behavioural differences are often subtle but measurable.

We also use device intelligence and session intelligence to understand which devices users operate, how sessions are created, and how interactions unfold over time.
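The device and session intelligence described above can be sketched roughly as deriving a stable fingerprint from observable device attributes, so repeat sessions from the same device can be linked. This is an illustrative sketch only; the attribute set and fingerprint length are hypothetical, and real systems use far richer, privacy-reviewed signals.

```python
# Illustrative device-fingerprinting sketch. Attribute names are hypothetical.
import hashlib
import json


def device_fingerprint(attrs: dict) -> str:
    """Derive a stable identifier from observable device attributes.

    Canonical JSON (sorted keys) makes the hash order-independent, so the
    same device yields the same fingerprint across sessions.
    """
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]


fp = device_fingerprint({
    "os": "Android 14",
    "model": "Pixel 8",
    "screen": "1080x2400",
    "locale": "en-IN",
})
print(len(fp))  # 16
```

Because the fingerprint is deterministic, it can be stored and compared across sessions without retaining raw device details.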

On top of this, we run AI-driven risk scoring, where every interaction is evaluated through a predictive scoring engine to determine whether it represents a genuine customer or a scam.
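A predictive scoring engine of this kind can be sketched, at its simplest, as a weighted combination of fraud signals compared against a review threshold. The signal names, weights, and threshold below are entirely hypothetical, not PolicyBazaar's actual model.

```python
# Minimal sketch of a risk-scoring step. Signals, weights, and the
# threshold are illustrative assumptions.

RISK_WEIGHTS = {
    "liveness_failure": 0.40,   # liveness check did not pass cleanly
    "device_mismatch": 0.25,    # device fingerprint differs from history
    "behaviour_anomaly": 0.25,  # typing/navigation deviates from baseline
    "document_tamper": 0.10,    # document-forensics model flagged edits
}

REVIEW_THRESHOLD = 0.5  # above this, route to manual review or step-up checks


def score_interaction(signals: dict[str, float]) -> float:
    """Combine per-signal scores (each in [0, 1]) into one weighted score."""
    return sum(RISK_WEIGHTS[name] * signals.get(name, 0.0) for name in RISK_WEIGHTS)


def classify(signals: dict[str, float]) -> str:
    return "review" if score_interaction(signals) >= REVIEW_THRESHOLD else "allow"


print(classify({"liveness_failure": 0.9, "device_mismatch": 0.8}))  # review
print(classify({"behaviour_anomaly": 0.2}))                          # allow
```

In production such weights would come from a trained model rather than hand-set constants, but the shape of the decision is the same: many weak signals combined into one score, gated by a threshold.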

Finally, zero-trust architecture is essential. Without zero trust, no system can truly be secured, whether for customers, partners, or vendors.

How are you using AI defensively for real-time fraud detection and anomaly identification? Have you seen measurable improvements since adopting AI-led approaches?

Absolutely. Before the AI era, these attacks were minimal and easier to manage. However, post-AI, fraud volumes have increased exponentially. Industry studies suggest a 700 per cent rise in such scams over the last 12–18 months, and that figure only reflects reported cases.

The reality is that fraud has grown many times over compared to earlier periods. AI has made sophisticated fraud accessible even to non-technical attackers.

This has forced us to build far more advanced systems, and while AI has amplified threats, it has also enabled us to detect and manage them at scale.

How are partnerships evolving to counter AI-driven cybercrime?

There is no single solution that can solve this problem.

Insurers, enterprises, and technology providers must all collaborate. Earlier, organisations worked in silos. Today, shared threat intelligence and collaborative defence models are essential.

We follow a federated defence approach, where learnings and patterns are shared without exposing sensitive or proprietary data. This collective intelligence helps all participants strengthen their defences while respecting privacy and competitive boundaries.
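One way to share fraud patterns without exposing sensitive data, in the spirit of the federated approach described above, is to exchange salted hashes of indicators so peers can match sightings without learning the underlying values. The salt and indicators below are hypothetical; real consortiums would use agreed exchange protocols (such as STIX/TAXII feeds) and stronger privacy controls.

```python
# Sketch of privacy-preserving indicator sharing. Salt and indicator
# values are illustrative assumptions.
import hashlib

SHARED_SALT = b"consortium-agreed-salt"  # hypothetical shared value


def fingerprint(indicator: str) -> str:
    """Hash an indicator (device ID, email, etc.) so peers can check for
    matches without seeing the raw value."""
    return hashlib.sha256(SHARED_SALT + indicator.encode()).hexdigest()


# Fingerprints received from consortium partners:
shared_blocklist = {fingerprint("device-abc-123"), fingerprint("fraud@example.com")}

# A participant checks its own observation against the shared set:
print(fingerprint("device-abc-123") in shared_blocklist)  # True
```

Each participant contributes hashes of confirmed-fraud indicators; no participant can reverse the hashes back to customer data, yet all can detect repeat offenders.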

Cyber insurance is increasingly viewed as a safeguard. How is demand evolving?

Cyber insurance is not the last line of defence; it is a strategic risk management tool.

Demand has increased dramatically, both for individuals and enterprises. Insurers are now offering richer products with add-ons designed specifically for modern AI-driven risks.

The cost of cyber insurance for individuals or families is often lower than everyday expenses, yet the protection it offers is immense. Recovery costs from a cyber incident are far higher than the cost of coverage.

I strongly recommend cyber insurance for individuals, families, and businesses alike.

How are you modernising your technology foundation to support resilience and scale?

Security must be built in from the first layer, not added at the end.

We follow a bottom-up approach, embedding security into APIs, cloud platforms, microservices, and automation workflows. Zero-trust access controls, encryption, masking, and privilege-based access are foundational.

Every system assumes no implicit trust. Access is always explicit and verified. This philosophy applies across our APIs, cloud infrastructure, and microservices architecture.
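The "no implicit trust, always explicit and verified" principle can be sketched as a per-request authorisation check where identity, device posture, and scope are all verified on every call. All names and checks here are illustrative assumptions, not a description of any specific system.

```python
# Hedged sketch of a per-request zero-trust authorisation check.
from dataclasses import dataclass


@dataclass
class Request:
    user_id: str
    token_valid: bool        # identity verified on this request
    device_compliant: bool   # device posture checked on this request
    scope: str               # action being attempted


ALLOWED_SCOPES = {"policy:read", "policy:quote"}  # hypothetical scopes


def authorise(req: Request) -> bool:
    """Every factor is re-verified on every request; nothing is assumed
    from network location or prior sessions."""
    return req.token_valid and req.device_compliant and req.scope in ALLOWED_SCOPES


print(authorise(Request("u1", True, True, "policy:read")))   # True
print(authorise(Request("u1", True, False, "policy:read")))  # False
```

The key design choice is that a failed check on any single factor denies the request, regardless of where it originated.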

Looking ahead, which future technologies will define the next wave?

The next leap will come from the convergence of cloud computing, quantum computing, and agentic AI.

When quantum and cloud combine with AI, technology velocity will increase dramatically, and so will risk. Organisations must be prepared for this velocity-risk paradox.

GenAI and agentic AI will transform product development and solution design over the next four to five years, but they must be implemented responsibly.

How do you balance strong security with a frictionless user experience?

We rely on adaptive controls and what I call invisible security.

Security checks trigger only when risk crosses defined thresholds. For most users, security operates silently in the background, preserving a smooth onboarding and transaction experience.
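This threshold-triggered, "invisible" model can be sketched as a tiered response: most sessions pass silently, a middle band triggers a visible step-up check, and only high-risk scores block outright. The thresholds and action names are hypothetical.

```python
# Sketch of adaptive, threshold-gated security responses.

STEP_UP_AT = 0.4  # ask for an extra verification step (e.g. repeat liveness)
BLOCK_AT = 0.8    # stop the transaction and route to investigation


def respond(risk_score: float) -> str:
    """Map a risk score in [0, 1] to a tiered response."""
    if risk_score >= BLOCK_AT:
        return "block"
    if risk_score >= STEP_UP_AT:
        return "step_up"
    return "allow_silently"  # the common path: no visible friction


print(respond(0.1))  # allow_silently
print(respond(0.5))  # step_up
print(respond(0.9))  # block
```

Tuning the two thresholds is the friction-versus-risk trade-off: lowering them catches more fraud but makes security visible to more genuine users.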

Security must never become a business blocker. The goal is to grow the business while continuously reducing risk.

What lessons stand out from your experience handling real-world incidents?

Even knowledgeable individuals can panic during high-pressure incidents.

Authority-based social engineering can override rational judgement. That is why emotional intelligence and awareness are just as important as technical controls.

Security must be implemented at the foundational level, supported by education and behavioural understanding. Building AI-resilient trust systems is critical.

As India moves into 2026 with DPDP Act enforcement, how are you preparing?

DPDP timelines may appear distant, but they are not.

We are already strengthening data handling, consent management, centralisation, and access controls. Precision in data security is non-negotiable.

From both regulatory and technology standpoints, our focus is on ensuring compliance while maintaining business agility.

Our top priorities are zero-trust adoption, identity-centric fraud detection, and building security as the first line of defence and not the last.

Risk will continue to evolve, but with the right architecture, intelligence, and collaboration, businesses can grow securely in an AI-first world.
