As GenAI scales, every data pipeline and API becomes a potential vulnerability: Sandeep Rai Sharma, Accenture

In an era where generative AI is reshaping business operations at breakneck speed, securing every layer of this digital transformation has become non-negotiable. Sandeep Rai Sharma, Lead – Security at Accenture’s Advanced Technology Centers Global Network, offers a frontline view of how AI is not just increasing the complexity of cyber threats but also becoming a critical ally in defending against them. In this in-depth conversation, Sharma unpacks insights from Accenture’s global ‘State of Cyber Resilience 2025’ report, outlines how Accenture platforms like mySecurity are integrating GenAI for predictive defense, and stresses the urgency of preparing for quantum risks and deepfake-driven attacks. He also underscores the need to cultivate a security-first culture, embedding cybersecurity into the DNA of organisations as they embrace AI at scale.

Why is it critical to embed security at every layer—data, models, infrastructure, and access controls—while adopting generative AI at scale?

As organisations scale generative AI and agentic AI, cybersecurity risks are escalating in speed, scale, and sophistication. Every touchpoint, from data pipelines to application programming interfaces (APIs), is becoming a potential vulnerability. Cybercriminals are leveraging dark LLMs and AI-powered deepfakes to launch advanced phishing and ransomware attacks, further amplifying these risks. 

Our recent ‘State of Cyber Resilience 2025’  research, which surveyed 2200+ CISOs and CIOs world-wide, including in India, found that 90% of organisations are not adequately prepared to secure their AI-driven future. However, the research also shows that companies with an adaptive and resilient cybersecurity posture are 69% less likely to face advanced attacks, 1.5 times more effective at blocking them, and enjoy a 15% boost in customer trust.

Adopting a trust-by-design approach is crucial, embedding cybersecurity into the digital core and across the GenAI adoption framework including data, AI model architecture, applications, and operating and governance playbooks. Key safeguards include shadow AI mitigation, real-time prompt monitoring and filtering, adversarial testing of AI models, deep-fake protection, and cyber recovery solutions, all supported by a fit-for-purpose cybersecurity governance framework for AI and a security-first organisational culture. 

How is GenAI transforming cybersecurity strategies? Can you share how platforms like Accenture’s mySecurity suite are enabling intelligent, predictive defense mechanisms?

Even as GenAI introduces new vulnerabilities, it also presents an enormous opportunity to proactively strengthen cyber defenses. GenAI-powered cyber defense technologies can help organisations efficiently automate cybersecurity processes to detect and prevent cyber threats in real time, reduce time taken to develop security-embedded infrastructure, develop predictive analytics to anticipate future attacks, and drive simulated attacks to test defenses.

For example, Accenture’s mySecurity suite of cybersecurity assets integrates GenAI and automation into core cybersecurity services, potentially enabling 40% faster security modernisation, 60% faster incident response, and 30% lower security operating costs for enterprises. It uses machine learning to analyse data, predict potential threats, and detect anomalies. The platform continuously incorporates the latest in threat intelligence from multiple sources and cybersecurity incidents to stay informed about current threats and refine its predictive capabilities over time, and uses automated response mechanisms to isolate affected systems, block malicious activity, and alert security personnel. 

With deepfakes becoming increasingly sophisticated, how can organisations build resilience and proactively detect and neutralise such threats?

Sophisticated deepfakes are using GenAI to create realistic deceptions via videos, audio, and texts, targeting customer contact centres and business videoconferences, as well as impersonating executives to manipulate individuals and organisations, and commit fraud. The surge in deepfake-related tool trading on dark web forums since late 2023 is a key indicator of this threat. 

To combat this, organisations must adopt a multi-layered strategy that combines technology, training, and governance. This includes investing in scalable AI-powered detection tools for real-time detection and validation of media content integrated into enterprise communication workflows. They must also enhance their Identity and Access Management systems with multi-factor authentication and zero-trust principles and undertake AI vulnerability testing. 

Employees at all levels should be trained to identify deep-fake tactics and respond to them. At Accenture, for instance, we leverage interactive e-learning simulations that mimic real-world scenarios to educate our people how to identify deep-fakes and flag suspicious content. We also keep them updated on the latest deepfake trends and verification tools to stay ahead of evolving threats.

What steps should enterprises take today to become quantum-safe? 

While quantum computing holds incredible promise for science and industry, it also threatens to put nearly 75% of current security encryption at risk, making it essential to adopt quantum-safe security solutions. Organisations must assess their cryptographic exposure, identify vulnerable systems, and transition to post-quantum cryptography. This transition requires careful consideration of regulatory requirements and the lifecycle of encrypted data, as well as testing and deploying quantum-resilient solutions in real-world conditions. Tools like Accenture’s Quantum Security Maturity Index, can help organisations benchmark their quantum security infrastructure and identify areas for improvement, supporting their transition to a more secure, post-quantum cryptography framework.

As GenAI becomes mainstream, how should organisations evolve their talent strategy to develop AI-native cybersecurity expertise and security-conscious workforce?

In a world where AI both empowers and endangers cybersecurity, human expertise continues to be the first line of defense. To stay ahead, organisations must upskill their cybersecurity professionals in AI-specific security concepts and instill a security-first mindset among data and AI practitioners. 

Moreover, comprehensive cybersecurity awareness training and clear AI security policies are essential for the entire workforce, not just AI and security practitioners, to combat the increasingly sophisticated AI-powered phishing and identity fraud threats.  Regular hands-on incident response drills and phishing simulations can enhance cyber readiness, while mandatory cybersecurity certifications can cultivate a security-conscious culture. Ultimately, organisations should treat AI agents with the same cybersecurity consideration as human employees and adopt a holistic zero trust security approach to ensure their secure deployment.

What are the top skills security professionals need to remain relevant and effective in the AI-first world?

To thrive in an AI-first landscape, cybersecurity professionals need hybrid skills that blend traditional cybersecurity expertise with AI fluency. This includes proficiency in AI, machine learning, data governance, and AI coding languages like Python. Moreover, practical expertise in zero-trust security architecture, threat intelligence and analysis, vulnerability testing, secure access service edge (SASE), authentication technologies, and AI forensics is key. Additionally,  knowledge of cloud security and automation is crucial as organisations deploy AI workloads across cloud-native environments.

AccentureAICybersecurityGenerative AI
Comments (0)
Add Comment