How AI will change the cybersecurity landscape in 2024

"The escalating use of AI-generated cyber attacks, exemplified by voice deepfakes and prompt injection, poses a significant threat to organisations, particularly those in the financial sector."
AI altering a woman's face to evade cybersecurity measures

By Deen Hans, Director of Security Operations at Deimos

Last year, cybercriminals dramatically escalated the use of AI-generated cyber attacks, unleashing tactics like voice deepfakes and prompt injection to target individuals and businesses. Anticipating a significant surge in these tactics throughout 2024, especially targeting businesses lacking robust cybersecurity investments, we foresee an intensified threat landscape. Financial institutions, in particular, are highly susceptible to voice deepfake attacks, especially phishing scams where callers pose as reputable sources to pilfer private information. At Deimos, our primary focus is collaborating with businesses tackling the escalating challenges in their industries. In this article, we highlight two of the most pressing issues we're helping companies combat:

  1. Verifying the authenticity of communication sources
  2. Validating LLM inputs to mitigate potential threats

While the benefits of AI for organisations are undeniable, prioritising its controlled and secure implementation within the organisational framework is crucial. Establishing comprehensive policies is paramount to ensuring AI operates within its intended parameters, and sensitive customer information is not processed by Language Model Models (LLMs) hosted outside a business’s systems. Businesses must meticulously scrutinise and evaluate AI tools for productivity and feature development, ensuring data privacy maintenance and compliance with regional data protection regulations. As cybercriminals increasingly target vulnerable entities, the imperative for businesses to secure assets, private information, and overall reputation becomes even more pronounced.

Deepfake 2 copy

Verifying the authenticity of communication sources

The challenge of verifying the authenticity of communication sources has become increasingly complex. Instances of real-time voice-modulated deepfakes for impersonation, pre-recorded video deepfakes mimicking individuals, AI-enabled fake accounts orchestrating social media campaigns with artificial engagement to disrupt the public image of an organisation or individual, and sophisticated email and messaging phishing attempts utilising AI-generated text are on the rise.

The solution

Organisations must adopt robust measures to ensure the authenticity of communications. One straightforward yet effective approach involves incorporating a second medium of verification to confirm the identity of an employee or client. An example of this SMS two-factor authentication is when you receive a text message containing a one-time password used to access a network, system, or application. Despite the simplicity and efficacy of this method in thwarting potential deepfakes, it often goes overlooked by many businesses.

A broader awareness of deepfake voices and their generation methods is essential. Organisations should take proactive steps to shield themselves from inadvertently providing voice training data that could be exploited for deepfake creation. Typically, scammers collect such data through unsolicited calls, manipulating victims into answering specific questions and unwittingly providing voice samples. Verizon reports that 74% of 5199 data breaches include the human element, with people being involved either via error, privilege misuse, use of stolen credentials or social engineering. To minimise this risk, organisations are encouraged to opt for video-calling interactions whenever possible to further verify the identity of individuals within the organisation, thereby reducing the likelihood of successful impersonation. In cases where a potential deepfake interaction is suspected, it is imperative to verify the communication over a second medium, preferably in person if feasible.

Deepfake 3 copy

Validating large language model inputs to mitigate potential threats

Another cybersecurity challenge which will be critical to mitigate against in 2024 is the increasing exploitation of input validation. With more businesses leveraging advanced LLM models like OpenAI's GPT-4, businesses must exercise caution in processing user input. Implementing moderation and employing security-focused language models are essential measures to detect malicious input and prevent prompt injection attempts.

Prompt injection attacks have gained prominence as a threat to AI-enabled applications and products. These attacks strategically aim to elicit unintended responses from large language-based tools, often by manipulating or injecting malicious content into prompts to exploit the system.

The solution

Customer-facing organisations should prioritise reinforcing their services with robust validation measures. Businesses utilising LLMs for customer interactions must ensure that their applications do not reveal internal details, such as specific training data and back-end information. Models need protection against attackers attempting to coerce them into switching personas or accepting unauthorised rules and instructions beyond their intended purpose.

Developing resilient system prompts, coupled with an additional layer of user input message verification and scrutiny of LLM outputs, can significantly minimise the impact of prompt injection. By implementing these measures, organisations can bolster their defences against evolving cyber threats.

Final thoughts

In the dynamic landscape of cybersecurity, the integration of AI into mainstream businesses has ushered in both unprecedented opportunities and challenges. The escalating use of AI-generated cyber attacks, exemplified by voice deepfakes and prompt injection, poses a significant threat to organisations, particularly those in the financial sector. As a cybersecurity firm operating in Africa, Deimos is at the forefront of addressing these challenges, collaborating with companies to fortify their defences and navigate the intricate terrain of AI security.

Looking ahead into 2024, the imperative for organisations lies in establishing controlled and secure AI adoption. Verifying communication sources, mitigating deepfake risks, and fortifying against prompt injection attacks are pivotal components of this strategy. By implementing vigilant policies, raising awareness, and embracing proactive measures, companies can safeguard their assets and maintain trust in an increasingly interconnected digital landscape.

Deen Hans is the Director of Security Operations at Deimos, he has been a driving force behind the company's success since December 2022. He is an engineer focused on Application and Infrastructure security with a strong background in architecting and building secure, scalable systems across various domains. 

Untitled design

FIND OUT HOW CYBERCRIME IS AFFECTING THE INTERNET BANKING LANDSCAPE IN THE JANUARY 2024 EDITION OF PUBLIC SECTOR LEADERS:

 

Leave a Comment

Get certified