Unlocking 2024 trends: CrowdStrike’s forecast on AI blind spots and corporate risks

By Elia Zaitsev, CTO, CrowdStrike

The global technology landscape is constantly evolving, introducing unprecedented advancements and innovations that redefine industries and shape our lives. As we enter 2024, the following trends highlight the issues organisations are most likely to face in the year ahead.

Beating cloud adversaries will require a hardline focus on securing everything across the entire software development lifecycle: There has never been a more critical time for cloud security. As organisations manage remote and hybrid teams through an uncertain global economy, adversaries have become more sophisticated, relentless and damaging in their attacks. According to the CrowdStrike 2023 Global Threat Report, cloud exploitation increased by 95% and the number of cloud-conscious threat actors more than tripled over the past year.

At the same time, the growth of cloud computing, the pace of DevOps and the increased use of no-code and low-code development platforms have led to an explosion of applications and microservices running within cloud environments. The speed and dynamic nature of application development make it impossible for organisations to maintain a full picture of every application, microservice, database and associated dependency running in their environments – a massive risk profile that cloud-savvy adversaries continually look to exploit. In 2024, enterprises must focus on securing their entire cloud estate – from both an application and an infrastructure perspective – to win this battle.
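
One practical way to begin closing this visibility gap is with a basic automated inventory of what is actually running. Below is a minimal sketch, assuming an AWS environment and the boto3 SDK (both illustrative choices, not something CrowdStrike prescribes), that enumerates Lambda functions and ECS services into a single asset list; production tooling would cover far more resource types and map their dependencies.

```python
# Minimal cloud asset inventory sketch (assumes AWS credentials are
# configured and the boto3 SDK is installed: pip install boto3).
import boto3

def list_lambda_functions(region: str) -> list[dict]:
    """Enumerate Lambda functions with their runtimes, a common blind spot."""
    client = boto3.client("lambda", region_name=region)
    functions = []
    for page in client.get_paginator("list_functions").paginate():
        for fn in page["Functions"]:
            functions.append({
                "type": "lambda",
                "name": fn["FunctionName"],
                "runtime": fn.get("Runtime", "container-image"),
                "last_modified": fn["LastModified"],
            })
    return functions

def list_ecs_services(region: str) -> list[dict]:
    """Enumerate ECS services across all clusters in the region."""
    client = boto3.client("ecs", region_name=region)
    services = []
    for cluster_arn in client.list_clusters()["clusterArns"]:
        for page in client.get_paginator("list_services").paginate(cluster=cluster_arn):
            for service_arn in page["serviceArns"]:
                services.append({
                    "type": "ecs-service",
                    "cluster": cluster_arn,
                    "service": service_arn,
                })
    return services

if __name__ == "__main__":
    region = "us-east-1"  # hypothetical region; adjust for your estate
    inventory = list_lambda_functions(region) + list_ecs_services(region)
    for asset in inventory:
        print(asset)
```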

AI blind spots open the door to new corporate risks: In 2024, CrowdStrike expects threat actors to shift their attention to AI systems as the newest vector for targeting organisations – both through vulnerabilities in sanctioned AI deployments and through the blind spots created by employees’ unsanctioned use of AI tools.

After a year of explosive growth in AI use cases and adoption, security teams are still in the early stages of understanding the threat models around their AI deployments and tracking unsanctioned AI tools that have been introduced to their environments by employees. These blind spots and new technologies open the door to threat actors eager to infiltrate corporate networks or access sensitive data.

Critically, as employees use AI tools without oversight from their security teams, companies will be forced to grapple with new data protection risks. Corporate data entered into AI tools is not only at risk from threat actors exploiting vulnerabilities in those tools to extract it; it is also at risk of being leaked or shared with unauthorised parties if it is absorbed into the system’s training data.
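
A common first control for this risk is screening what leaves the network before it reaches an external AI tool. The sketch below is a hypothetical regex-based redactor – the pattern names and coverage are illustrative assumptions, not any specific product’s API – that strips obvious identifiers from a prompt before submission; real data loss prevention engines use far richer detection.

```python
# Minimal prompt-redaction sketch: strip obvious sensitive tokens from
# text before it is sent to an external AI tool. Pattern names and
# coverage here are illustrative, not a complete DLP policy.
import re

SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # Crude 13-16 digit match; real card detection would also Luhn-check.
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "AWS_ACCESS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each sensitive match with a labelled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact(raw))
    # -> Contact [REDACTED-EMAIL], card [REDACTED-CARD_NUMBER].
```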

2024 will be the year organisations look internally to understand where AI has already been introduced (through official and unofficial channels), assess their risk posture, and create strategic guidelines that ensure secure, auditable usage – minimising company risk and spend while maximising value.
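
Discovering that unofficial AI usage often starts with data the organisation already collects, such as web proxy or DNS logs. The sketch below assumes a simple tab-separated log format (user and destination domain per line) and a small illustrative watchlist of AI service domains – both assumptions made for the example – and counts which users are reaching which tools.

```python
# Shadow-AI discovery sketch: scan proxy logs for traffic to known AI
# tool domains. The log format (user<TAB>domain per line) and the
# watchlist are assumptions for illustration.
from collections import Counter

# Illustrative watchlist; a real one would be curated and much longer.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests per (user, domain) pair that hit the watchlist."""
    hits: Counter = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            try:
                user, domain = line.rstrip("\n").split("\t")
            except ValueError:
                continue  # skip malformed lines
            if domain in AI_DOMAINS:
                hits[(user, domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in find_shadow_ai("proxy.log").most_common():
        print(f"{user} -> {domain}: {count} requests")
```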

In addition, adversaries will see cloud-based AI resources as a lucrative opportunity. While many believe that AI will be a top trend in enterprise investment over the next few years, a recent study found that 47% of cybersecurity professionals admit to having minimal or no technical knowledge of AI. On top of that, AI presents new security challenges, as AI systems require access to large datasets often stored in the cloud. Securing this data and ensuring that AI models running in the cloud are not exploited for malicious purposes will be a growing concern, and in 2024, a comprehensive Cloud Native Application Protection Platform (CNAPP) will be more important than ever to fend off opportunistic adversaries.
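
As a concrete example of the data-exposure half of that concern: because training data so often sits in cloud object storage, a basic posture check is to confirm that the buckets holding it block public access and enforce encryption. The sketch below assumes AWS with boto3 and uses hypothetical bucket names; a CNAPP would run checks like these continuously and across a far greater breadth of resources.

```python
# Posture-check sketch for buckets holding AI training data (assumes
# AWS credentials and boto3; bucket names are hypothetical).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_blocks_public_access(bucket: str) -> bool:
    """True only if all four public-access-block settings are enabled."""
    try:
        config = s3.get_public_access_block(Bucket=bucket)
        return all(config["PublicAccessBlockConfiguration"].values())
    except ClientError:
        return False  # no configuration at all is treated as a failure

def bucket_is_encrypted(bucket: str) -> bool:
    """True if default server-side encryption is configured."""
    try:
        s3.get_bucket_encryption(Bucket=bucket)
        return True
    except ClientError:
        return False

if __name__ == "__main__":
    training_buckets = ["ml-training-data", "model-artifacts"]  # hypothetical
    for bucket in training_buckets:
        issues = []
        if not bucket_blocks_public_access(bucket):
            issues.append("public access not fully blocked")
        if not bucket_is_encrypted(bucket):
            issues.append("no default encryption")
        print(f"{bucket}: {'; '.join(issues) if issues else 'OK'}")
```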

CISOs and CIOs turn to platforms to drive the best security and IT outcomes: With CISOs and CIOs tasked to do more with less, 2024 will see an industry-wide shift as organisations move away from legacy point solutions and towards platforms that break down operational silos and reduce complexity and cost. Growing collaboration between CISOs and CIOs is driving demand for a platform that solves both of their problems – an AI-native platform that stops breaches and gives CIOs a cost-effective single point of control.

Generative AI’s potential to manipulate and impact the 2024 election cycle: As the 2024 Indian General Election approaches – expected between April and May 2024 – threat actors will likely target election systems, processes, and the broader (dis)information environment. Nation-state adversaries – such as Russia, China and Iran – have an established history of attempting to influence or subvert elections globally through cyber means and information operations.

These adversaries have leveraged blended operations that include elements of ‘hack-and-leak’ campaigns, the integration of modified or falsified content, and the amplification of particular materials or themes. Given recent progress in generative AI – spanning audio, images, video and text – threat actors will have additional tools, capabilities and approaches for creating malicious content, all of which could make it harder for voters to discern what is real. Stakeholders from across government, the AI field and the cybersecurity community at large will need to work together to monitor developments in this space.
