CISOs Most Common Concerns with Generative AI

By Sven Krasser, SVP and Chief Scientist, CrowdStrike

Generative AI has taken centre stage within the security community. While employees may already use generative AI in their day-to-day work, whether it’s helping with emails or writing blogs (I promise this was written by a human), CISOs are apprehensive about incorporating generative AI into their tech stack. Their concerns are valid: CISOs need generative AI to be accurate, safe, and responsible. But does today’s technology meets their high standards?

Sven Krasser

CISOs and CIOs have heralded generative AI with both concern and excitement. They recognise the ability of generative AI to aid productivity or augment IT and security teams affected by the ongoing skills shortage. However, these benefits must be carefully weighed against the new risks of this transformative technology. Let’s take a look at some of the top security questions today’s leaders are asking before allowing generative AI in their environments, either as a tool for staff or as a product component.

New AI Tools Drive Greater Productivity
Let’s be realistic. Chances are, your staff is already using generative AI tools, which are well-known to be incredibly handy and simplify common tasks. Your Sales rep needs to get a well-written email out to a prospect. Done. Your Support team needs to write up explanations for your Knowledge Base. Also done. Does your Marketing team need an image for a brochure? It’s much faster to simply prompt an AI model than hunt for that perfect stock image. If a Software Engineer needs to quickly get some code written, there are models for just that, too. All of these use cases have one thing in common: they demonstrate the powerful appeal of generative AI to save time, boost productivity, and make everyday tasks more convenient for employees across all departments.

Where are the downsides? For starters, many of these tools are hosted online or rely on an online component. When your team submits proprietary data or customer data, the terms of service may offer very little in terms of confidentiality, security, or compliance. Furthermore, the submitted data could be used for AI training, meaning the names and contact information of your prospects are permanently stuck in the weights of the model. This means you need to vet generative AI tools in the same way you vet tools from other vendors.

Another top concern is AI models’ tendency to “hallucinate”, meaning they may confidently provide wrong information. Due to the training procedure of these models, they are conditioned to provide responses that seem accurate, not responses that are accurate.

Furthermore, there are various copyright concerns. For source code-generating models, there is a risk that the models inadvertently generate code that is subject to open source licenses, which may require you to also open source your parts of the code.

The Potential for More Powerful Products
Let’s say you want to use generative AI as part of your product. What do you need to consider? First, enforce your procurement process. If your engineers start trying vendors out on their own credit cards, you will run into the same confidentiality challenges outlined above. If you use any of the open models, you should ensure your legal team has a chance to review the license. Many generative AI models come with use case restrictions, both for how that model may be used and what you are allowed to do with the model’s output. While many such licenses look like open source at first blush, they are not, in fact, open source.

If you train your own models, which includes fine-tuning open models, you must consider what data you are using and if the data is appropriate for this use. What the model sees during training may come out again at inference time. Is that compliant with your data retention policies? Furthermore, if you train a model on data from Customer A and then Customer B uses that model for inference, then Customer B may see some data specific to Customer A. In other words, in the world of generative models, data may leak across the model.

Generative AI has its own attack surface. Your product security team will need to hunt for new types of attack vectors, such as indirect prompt injection attacks. If an attacker can control any input text provided to a generative large language model — for example, information the model is to summarise — then they can confuse the model into believing that text to be new instructions.

Lastly, you need to keep up with new regulations. Across the globe, new rules and frameworks are being developed to address the new challenges that generative AI poses. One thing is certain: generative AI is here to stay, and both your employees and customers are eager to tap into the technology’s potential. As security professionals, we have the opportunity to bring our healthy levels of concern to the table to drive responsible adoption so that the excitement we see now will not turn into regret tomorrow. CISOs and other business leaders should spend time to sit down and be thoughtful about the role AI plays in their enterprises and products. Approaching AI adoption in a thoughtful way can enable the business to accelerate and sustain with much lower risk well into a brighter future.

AIITtechnology
Comments (0)
Add Comment