AI is used to detect threats by rapidly generating data that mimics realistic cyber threats: Bibhu Krishna, CISO, Policybazaar

Express Computer recently conducted an exclusive interview with Bibhu Krishna, the Chief Information Security Officer, Policybazaar. In this insightful conversation, he sheds light on the current state of information security at Policybazaar, outlining their strategies for building a robust cyber defence infrastructure and leveraging emerging technologies like AI and ML. From preventive measures against ransomware attacks to the integration of GenAI and the evolving landscape of pricing strategies, Bibhu Krishna provides valuable insights into the evolving realm of cybersecurity.

Can you provide an overview of your current information security system?

We are a regulated entity, adhering to policies set by IRDAI. Many of our guidelines are framed by the cyber security guidelines of IRDAI, which outline information security standards and protocols. With the growing challenges and threats, especially with AI coming into the picture, our systems and security solutions need to be more advanced and stay a step ahead of threat actors.

For instance, having 24/7 monitoring in place and implementing DLP are some measures we have integrated into our framework. We also conduct periodic reviews of our security standards with independent auditors, specifically empanelled by CERT-In. These audits provide us with a clear direction and highlight areas for improvement in our setup.

What steps have you taken to build a robust cyber defence infrastructure?

I think we work on all aspects of people, process, and technology. Humans are the most important aspect; we are a human-centric organisation with about 50,000 people. That’s one of the areas we heavily invest in, particularly in terms of security. Training is very important, and we provide role-based training. We also have an in-house development team, so technology training and operations training are conducted separately. We focus on dedicated training efforts.

Regarding technology, since we have an in-house development team, we customise many solutions for ourselves, in addition to using some plug-and-play solutions available in the market. For processes, given that we have multiple releases in a week’s time, testing everything at one point is impractical and requires a lot of manpower. Therefore, we have automated many of our processes. Automation plays a clear and decisive role.

AI is integral in this automation, helping us decide which systems need regular testing and at what intervals, depending on the type of release. So, when you look at people, process, and technology, all three are very clearly defined and developed.

Are you currently leveraging the benefits of AI and ML in your cybersecurity measures?

I think AI has been a buzzword for the last year and a half or two years now. Everybody talks about AI. In fact, last week, I was talking to one of my friends about what the last big buzzword was before AI. In my memory, it was the cloud, about 12 years ago. So, from cloud to AI, it seems like a natural progression. AI has been the most significant technological trend in the last couple of years. Generative AI is being used for enhanced threat detection.

We generate a lot of logs, and it’s impossible for humans, or even a group of people, to scan through all of them every time. AI is used to detect threats by rapidly generating data that mimics realistic cyber threats. This helps us simulate potential attack scenarios before they occur.
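
As a minimal sketch of this kind of automated log triage (illustrative only — the host names, the login-failure metric, and the threshold are assumptions, not Policybazaar’s actual tooling), an outlier rule can surface the handful of endpoints a human should look at first:

```python
import statistics

def flag_anomalous_hosts(failure_counts, threshold=3.5):
    """Flag hosts whose event count is a statistical outlier among
    the fleet, using the robust median/MAD (modified z-score) rule."""
    counts = list(failure_counts.values())
    median = statistics.median(counts)
    mad = statistics.median(abs(c - median) for c in counts)
    if mad == 0:
        return []  # no spread at all: nothing stands out
    return [host for host, n in failure_counts.items()
            if 0.6745 * (n - median) / mad > threshold]

# Hypothetical hourly login-failure counts per host
hourly_failures = {"web-01": 3, "web-02": 4, "web-03": 2,
                   "db-01": 5, "vpn-01": 240}
print(flag_anomalous_hosts(hourly_failures))  # → ['vpn-01']
```

The median/MAD rule is chosen here over a plain mean because a single compromised host can drag the average enough to hide itself.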

For organisations with a digital-first mindset, AI facilitates quick onboarding. It essentially tells them, “You focus on your business, and we’ll ensure you achieve the results you need in information security with minimal manpower.” This not only enhances customer experience but also automates several customer interactions.

What tools or platforms do you have in place to facilitate customer conversations, such as chatbots?

We’ve made significant strides in this area. We’ve implemented chatbots and voicebots, leveraging them extensively. Risk modelling is also a crucial aspect of our operations, particularly given the nature of our business in insurance, where risk assessment is paramount.

In terms of risk modelling, we delve into various factors such as customer profiles, payment behaviours, and other pertinent details. For instance, when assessing a customer’s insurance application, we consider factors like their payment history and demographic information. Given the potential discrepancies between an individual’s current appearance and their outdated ID proofs submitted during KYC procedures, we employ advanced techniques like retina scanning to ensure the authenticity of identity.

We also scrutinise payment gateways rigorously, understanding that they represent a critical aspect of our operations. This comprehensive approach to risk management is an ongoing effort, constantly evolving to adapt to emerging threats and technologies.

How do you maintain a balance between the role of AI in both preventing and causing risks, considering its evolving nature and its potential for both risk creation and mitigation?

So, when we talk about AI, it’s essential to understand its fundamental workings—it operates based on the data it’s fed. Hence, the data input is crucial; it needs to be properly curated. Firstly, ensuring anonymisation is key; live customer data should never be directly integrated into the model to comply with regulatory standards.
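
As a sketch of the anonymisation step described above (the field names and salt handling are hypothetical, not Policybazaar’s scheme), direct identifiers can be replaced with salted one-way hashes before any record reaches a training pipeline:

```python
import hashlib

SALT = b"example-salt"  # hypothetical: in practice, keep this in a secrets store

def pseudonymise(value: str) -> str:
    """Salted one-way hash: the same customer maps to the same token
    across records, but the raw identifier is never exposed."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def anonymise_record(record: dict, pii_fields=("name", "email", "phone")) -> dict:
    """Return a copy of the record with direct identifiers replaced,
    leaving non-PII fields (e.g. premium amounts) intact for modelling."""
    return {key: (pseudonymise(val) if key in pii_fields else val)
            for key, val in record.items()}

raw = {"name": "A. Kumar", "email": "a.kumar@example.com", "premium": 12000}
safe = anonymise_record(raw)
```

Because the hash is deterministic, the model can still correlate one customer’s records with each other without ever seeing live personal data.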

Secondly, regulatory compliance is paramount. We must ensure that the data we feed into the framework adheres to all relevant regulations.

Lastly, many organisations grapple with outdated legacy tech stacks. It’s essential to modernise and streamline these systems to align with the requirements of contemporary AI technology.

Also, mitigating bias in AI is crucial. Since the data we use is created by humans, biases can inadvertently seep into the algorithms. Addressing this issue requires careful consideration and proactive measures to ensure fairness and impartiality.

How can we mitigate bias and ensure transparency in AI systems, especially considering the data collection and human oversight involved in content development?

It’s important for people to be highly aware of biases and misconceptions surrounding AI. We need to be conscious of the potential biases in AI systems. Ultimately, the quality of the data used to train the model plays a crucial role in mitigating bias.

What specific cybersecurity challenges have you encountered with the increasing trend of remote work, especially in the post-COVID-19 era?

COVID-19 has definitely triggered significant changes, no doubt about it. It’s shifted mindsets across the board, from employees to employers and beyond. Nowadays, any company launching its venture must reckon with the necessity of exposing their application to the wider world. With the ubiquitous presence of the internet and the emphasis on consumer-centricity, keeping everything within your own four walls simply isn’t viable anymore. It’s become clear that today’s landscape demands a digital-first approach.

We’ve always been a digital-first company, ensuring that all our applications prioritise consumer experience. However, COVID-19 has accelerated what might have taken several years to unfold under normal circumstances. Starting from scratch today means considering these factors from the get-go. But along with this shift, new vulnerabilities have emerged, exposing us to performance issues and threats from malicious actors.

In the past, the development process involved creating an application, testing it internally, then gradually expanding its reach. But now, right from day one, there’s no escaping the need for compliance, security, performance, and scalability. These aspects are non-negotiable from the outset. While I’m certainly not grateful for the pandemic itself, I do appreciate the changes it has spurred and the heightened focus it’s brought to IT considerations.

How would you describe the pace of transformation your industry has undergone in recent years?

The recent developments are quite fascinating. I came across an article discussing how a century’s worth of technological advancement is now being compressed into just a decade. It’s surprising, isn’t it? This acceleration, coupled with the emergence of AI, is shortening the timeline even further. Time to market has become incredibly short. If we don’t act, someone else will seize the opportunity.

Therefore, it’s crucial to ensure that our product ticks all the boxes: it must be robust, secure, scalable, and performance-driven. And let’s not forget about customer-centricity; it should always be at the forefront of our minds.

Should there be regulations for the acceptable use of AI in organisations across industries from a security perspective?

It’s actually a mix of positives and negatives. I haven’t come across a compelling case where AI completely replaces humans. There’s still a long way to go. However, AI has indeed automated tasks and boosted individual productivity, possibly even facilitating reskilling. So, the big question remains: can AI truly replace humans? Yet, unquestionably, AI has brought about significant advancements.

Returning to your question, I believe AI is still in its early stages, very much so. We anticipate a significant commercialisation of AI and LLMs in the coming years. With their multimodal capabilities, the potential for GenAI to revolutionise businesses is enormous. We’re eagerly awaiting this transformation in the industry. The initial steps are promising, particularly concerning regulations. It’s crucial because, with tools like GenAI, there’s a risk of misuse, such as in deepfakes. Controlling such risks demands a regulatory framework akin to our existing IT policies, which govern the IT sector comprehensively. This framework should educate both consumers and professionals about the benefits and pitfalls of AI. With internet usage increasing in India, this training becomes paramount. People need to understand AI’s potential and its limitations.

What preventive measures does your company have in place to mitigate the risk of ransomware attacks?

I believe it’s crucial for people to understand why ransomware exists. The traditional approach to dealing with ransomware involves patching. By ensuring that your endpoint protection and systems are regularly updated and patched, many vulnerabilities can be addressed. It’s essential to have solutions and tools in place to maintain continuous operation. With the frequent release of patches, particularly for Windows systems since the advent of the internet, it’s imperative to stay updated.

In a large setup like ours, spanning across regions with numerous individuals working on systems, keeping everything updated is a significant undertaking. Regular checks are essential to ensure that tasks are completed thoroughly, leaving no room for errors. Even a 99% completion rate leaves room for potential issues.
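
The compliance check described here can be sketched as a simple fleet report (the host names and the 30-day patch window are assumptions for illustration, not the company’s actual policy):

```python
from datetime import date

def patch_compliance(last_patched, max_age_days=30, today=None):
    """Summarise fleet patch status: the compliance rate plus the hosts
    whose most recent patch is older than the allowed window."""
    today = today or date.today()
    stale = sorted(host for host, patched in last_patched.items()
                   if (today - patched).days > max_age_days)
    rate = 1 - len(stale) / len(last_patched)
    return rate, stale

# Hypothetical last-patch dates per endpoint
fleet = {"hr-laptop-17": date(2024, 1, 5),
         "dev-ws-03": date(2024, 3, 1),
         "kiosk-9": date(2023, 11, 20)}
rate, stale = patch_compliance(fleet, today=date(2024, 3, 10))
print(f"{rate:.0%} compliant; stale: {stale}")
```

Reporting the stale hosts by name, not just the percentage, matters for the point made above: the 1% that a summary figure hides is exactly where a ransomware operator gets in.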

Maintaining 100% compliance in endpoint protection is paramount. While I won’t endorse specific solutions, modern antivirus systems are evolving beyond signature-based approaches to trend-based methods, leveraging AI for better protection. These advancements are crucial in preventing ransomware attacks.

What emerging trends in cybersecurity do you believe will have the most impact, and how are you preparing for them?

Regarding emerging trends, I would say that GenAI is certainly one of them. Another significant trend from our standpoint revolves around governance and privacy, specifically data governance and privacy. These two are highly important trends that we anticipate. GenAI has been a topic of extensive discussion, highlighting its potential in streamlining various IT workloads, which would otherwise require significant time and manpower to manage. Additionally, it has implications for customer experience.

Concerning data privacy and governance, with the increase in data breaches and cyber attacks, there has been a notable surge in the focus on governance, data, and privacy within organisations. This entails not only involving more individuals but also establishing a robust privacy framework to bolster customer trust and ensure compliance with regulations and frameworks. These are two trends that I foresee shaping the industry moving forward.

Also, as mentioned earlier, we are still eagerly awaiting significant commercialisation with GenAI. I believe it will become a major factor in the coming years, given its substantial capabilities that can enhance both business performance and customer experience.

How do you anticipate the pricing strategies of other companies evolving in the next few years, and what impact do you expect this to have on market competitiveness?

As previously mentioned, it’s important to address bias and reduce its impact. It’s easy to be influenced by the output, but maintaining a rational and practical perspective is crucial when evaluating the approach. Consider the data used to train the model, the type of model employed, and the duration of its training on that data. It’s essential to refrain from using fabricated or manipulated data, thereby minimising AI hallucinations. This is important for ensuring the integrity of the AI’s functionality.
