Securing AI Platforms — Understanding Key Challenges and How to Address Them

By Prof. Sangeetha Viswanathan, Assistant Professor, BITS Pilani Work Integrated Learning Programmes (WILP)

Artificial Intelligence has become part of almost everyone's everyday life and is widely regarded as the defining technology of the Fourth Industrial Revolution. AI applications now span everything from smart homes to the data-intensive platforms of tech giants. The same convenience, however, also makes AI vulnerable to attack.

AI and security intersect in two ways: AI can be used to build new security solutions for business, but AI itself must also be secured. While it is essential to think about what needs to be protected, securing AI has become a dire necessity in its own right. AI tools are highly susceptible to data manipulation and privacy breaches, and a compromise of security or privacy at any scale can have severe consequences. From data poisoning, model theft, and inference attacks to polymorphic malware generated with modern-day Gen AI, the attack scenarios are many.

AI and its impact on privacy
AI-based solutions in everyday life range from virtual assistants such as Siri and Alexa, digital healthcare, task automation, and recommendation systems to Natural Language Processing solutions such as ChatGPT. Such diverse functionality requires data-intensive training and decision-making ability, which in turn demands large amounts of personal information and raises data privacy concerns.

Common privacy risks in AI implementations include cyber-attacks, lack of transparency, insider threats, and data mishandling. Beyond conventional cyber-attacks, AI systems are also exposed to adversarial machine learning attacks, which exploit fundamental vulnerabilities across the ML workflow, its data, hardware, and software. According to OWASP, a non-profit foundation focused on software security, the top 10 AI/ML security risks are input manipulation attacks, data poisoning attacks, model inversion attacks, membership inference attacks, model stealing, AI supply chain attacks, transfer learning attacks, model skewing, output integrity attacks, and model poisoning.
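To make the first of these risks concrete, here is a minimal, illustrative sketch of an input manipulation attack against a toy linear classifier, using a fast-gradient-sign-style perturbation; the weights, the input, and the epsilon value are invented for the example and are not drawn from any real system.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, class 1 if score > 0.
# The weights and the input below are illustrative values only.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.4, 0.1, 0.3])   # legitimate input, classified as class 1

def predict(x):
    return int(w @ x + b > 0)

# Input manipulation (FGSM-style): nudge each feature against the model's
# decision by epsilon in the direction of the score gradient's sign.
epsilon = 0.3
gradient = w                    # d(score)/dx for a linear model is just w
x_adv = x - epsilon * np.sign(gradient)

print("original prediction:", predict(x))        # 1
print("adversarial prediction:", predict(x_adv))  # flips to 0
```

Even this tiny example shows why models exposed to untrusted inputs need input validation and adversarially robust training rather than accuracy on clean data alone.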

The issues of bias, discrimination, and data abuse
Data is a major concern when dealing with AI systems and solutions. An AI system is precise and accurate only as long as it is trained on unbiased data; a system trained on biased data may have a narrowed threat-detection scope. Bias can also degrade the real-time monitoring and auditing capacity of AI systems, causing them to overlook critical risks and attacks. Data abuse and data breaches are further common concerns: since a great deal of personal information goes into training an AI model, there is a real risk of breach and misuse. It is therefore essential to ensure that data collection complies with regulations such as the General Data Protection Regulation (GDPR).

Technologies to help address security challenges in AI model deployment
As with any software development and testing effort, the security of AI systems can be evaluated at the software level, the learning level, and the distributed level. A system can also be tested against various adversarial examples to develop a more robust version. Some fundamental techniques that help ensure both data and model security are:
-Differential privacy (a minimal sketch follows this list)
-Federated learning
-Homomorphic encryption
-Adversarial training
-Distributed learning
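Here is a minimal sketch of the first technique, differential privacy, releasing the mean of a sensitive column with the Laplace mechanism; the example data, the clipping bounds, and the privacy budget epsilon are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def dp_mean(values, lower, upper, epsilon):
    """Release the mean of `values` with epsilon-differential privacy.

    Each record is clipped to [lower, upper]; under the "change one
    record" notion of neighbouring datasets, the mean then has
    sensitivity (upper - lower) / n, so Laplace noise with scale
    sensitivity / epsilon yields an epsilon-DP release.
    """
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

# Hypothetical sensitive column (e.g. patient ages) and privacy budget.
ages = [34, 41, 29, 55, 62, 47, 38, 51]
print("private mean age:", dp_mean(ages, lower=0, upper=100, epsilon=1.0))
```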
Encryption is a strong technique for ensuring data privacy, so organisations deploying AI models must enforce robust encryption to protect the sensitive data they collect. The rise of quantum computing also poses a serious threat to current encryption schemes, creating a need for more advanced, quantum-resistant techniques.
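As a small illustration of encrypting sensitive records before they are stored or fed into a pipeline, the sketch below uses symmetric encryption from the third-party Python `cryptography` package; the record contents are hypothetical, and in practice the key would be issued and held by a key management service rather than generated inline.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a symmetric key; in a real deployment this would come from a
# key management service, not be created ad hoc inside the pipeline.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical sensitive training record.
record = b'{"patient_id": "A-102", "age": 47, "diagnosis": "..."}'

token = fernet.encrypt(record)    # ciphertext, safe to store or transmit
restored = fernet.decrypt(token)  # only possible with the key

assert restored == record
print("ciphertext prefix:", token[:40])
```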

Decentralised technologies such as Blockchain have opened new possibilities, allowing an AI system to run on a decentralised network rather than on a central server. Healthcare and supply chains are promising applications, where AI models can perform analyses without compromising data privacy. Projects such as Ocean Protocol, DeepBrain Chain, and SingularityNet promote greater democratisation of and access to AI solutions, holding promise for the future of secure AI.

Secure enclaves, or trusted execution environments (TEEs), are at the core of confidential and secure computing at the hardware level. Implemented as a set of security instructions built into the CPU, they protect data while it is in use: enclave contents are decrypted only inside the CPU, and only for the code and data residing within the enclave. Enclaves thus provide a form of hardware-level, encrypted memory isolation. The vulnerable modules of an AI system can be embedded or run within an enclave to preserve data privacy and prevent model theft.

Key questions to ask to help secure AI
Besides considering the various technologies that help secure AI models, understanding what the future of securing AI will look like gives us a clearer direction for the work ahead.
Some of the questions one should ask when taking steps toward securing AI are:

-What does security mean to an AI learning system?
-How do I detect when an AI system has been compromised?
-What measures prevent the misuse of an AI model?
-What measures help build more robust and resilient AI systems?
-What guidelines and policies should organisations enforce to keep AI secure?

Guidelines and regulations for securing AI
Any organisation that builds AI solutions should uphold three principles for the future of AI: Security, Safety, and Trust. Making solutions safe and secure against cyber-attacks is what earns the public's trust.
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) have jointly published the Guidelines for Secure AI System Development. The guidelines apply to all types of AI solutions, not just frontier models, and their main emphasis is security as an integral foundation of AI model development. The document helps stakeholders make informed decisions across four areas: secure design, secure development, secure deployment, and secure operation and maintenance. The guidelines are expected to be reviewed periodically.

Create a directive in your organisation towards securing AI models
Organisations can create their own guidelines for responsible and secure AI usage. Key stakeholders should be involved when the guidelines are drawn up and when they are re-evaluated periodically. Some strategies that help businesses curate such guidelines are:
-Implement Role-Based Access Control (RBAC), as sketched after this list
-Update systems periodically
-Empower the workforce
-Assess model risk
-Create trusted execution environments and hardware enclaves
-Ensure data transparency
-Maintain the AI cybersecurity ecosystem
-Standardise management of the security requirements for data collection
-Keep humans in the loop: pair Human and AI rather than relying on AI alone
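As promised above, here is a minimal sketch of role-based access control for model operations; the roles, permissions, and the `deploy_model` function are hypothetical stand-ins, since real deployments would enforce this through the organisation's identity and access management platform.

```python
# Minimal role-based access control (RBAC) sketch for model operations.
# Roles and permissions are illustrative; a real deployment would rely on
# the organisation's IAM platform (e.g. cloud IAM or directory groups).

ROLE_PERMISSIONS = {
    "data_scientist": {"train_model", "evaluate_model"},
    "ml_engineer":    {"train_model", "evaluate_model", "deploy_model"},
    "analyst":        {"query_model"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role is permitted to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def deploy_model(user_role: str) -> None:
    if not is_allowed(user_role, "deploy_model"):
        raise PermissionError(f"role '{user_role}' may not deploy models")
    print("model deployed")  # placeholder for the real deployment step

deploy_model("ml_engineer")       # succeeds
try:
    deploy_model("analyst")       # blocked: analysts cannot deploy
except PermissionError as err:
    print("denied:", err)
```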

Security enables better AI that works for all
With many new solutions, frameworks, and models on the horizon, it is essential to integrate security from the outset and to continuously monitor that AI systems are working as intended. By prioritising the security of AI, businesses will also be able to maximise its benefits.

Securing AI solutions must be approached holistically, with the necessary guidelines and measures in place. Organisations should educate their professionals on current AI practices and on secure ways of using them. Working professionals, in turn, should upskill or reskill periodically, especially in areas such as AI threat modelling, secure use of Gen AI, privacy-preserving AI, and AI security.

Let’s ensure AI is safe and secure, not just because it is the right thing to do, but because it is perhaps the only option that assures a better path for a future in which we will rely on AI more than ever before.
