The Road Ahead: What Awaits in the Era of AI-Powered Cyberthreats?
Artificial intelligence (AI) is rapidly infiltrating the business world and our daily lives. While revolutionizing how – and how efficiently – work gets done, it also introduces a new set of cybersecurity challenges. In response to the evolving, AI-shaped threat landscape, I foresee organizations adopting robust countermeasures.
Even Novice Cybercriminals Can Launch Sophisticated Attacks
The rise of AI in organizations goes hand in hand with a big leap in AI-powered attack vectors. The threats we already faced – think fake advertisements, highly tailored phishing lures, counterfeit social media profiles, and deceptive chatbots – are now even more advanced.
It should come as no surprise to see a sharp uptick in a new threat this coming year: ‘Fakes as a Service.’ We’ve already seen the emergence of Deepfakes-as-a-Service from Tencent Cloud. With just three minutes of live-action video and 100 spoken sentences, it creates a high-definition digital human for $145 fee.
As a commodified AI-driven deceit with low barriers to entry, ‘Fakes as a Service’ can empower even newbie threat actors to launch effective cyberattacks.
AI Supercharges Malicious Campaigns Tactics
Now let’s consider AI-fueled campaigns, which has elevated the impact of social-engineering attacks to newfound levels. Beyond creating deceptive content, AI can quickly and adeptly conduct detailed psychological profiling and leverage advanced social engineering tactics, such as using audiovisual deepfakes to impersonate someone’s close contacts. In real time, AI can even analyze campaign efficacy – and fine-tune it.
Already, threat actors are harnessing these tactics to worm their way into organizations by masquerading as a known third party. Using such tactics, they could launch a supply-chain attack. Or even conduct corporate espionage.
The potential risks are significant. And real.
The CEO of the world’s largest crypto exchange said scammers made a deepfake of him to trick contacts into taking meetings. In another case, criminals used AI voice cloning to trick a Hong Kong bank manager into transferring $35 million.
AI-Powered Disinformation Goes Mainstream
With major elections on the 2024 calendar around the globe, countries including the United States, United Kingdom, Russia, the European Union, Taiwan – and many nations across Africa and Asia – are on alert for AI-driven disinformation campaigns. A company offering digital avatars is already struggling to stop them from being used to spread misinformation.
Capable of manipulating public opinion, such campaigns pose a significant threat to election integrity and global stability. Even in the context of the 2024 Paris Olympic Games, AI-powered disinformation can cause disruption.
The potential scale and impact of these disinformation efforts reinforce the need for organizations and governments to urgently develop effective countermeasures, starting with awareness raising and prebunking.
Custom GPTs Are a Conduit into the Enterprise
The growing use of AI solutions within enterprises introduces yet another conduit for cybercriminals. Consider the rise in custom generative pre-trained transformers (GPTs). Already, 80% of Fortune 500 companies have adopted ChatGPT Enterprise, and OpenAI is further empowering them with custom GPTs.
Yet custom GPTs are especially vulnerable since they were designed to be used by those without programming or security expertise. These AI models are at risk of exploitation via prompt injection attacks that could potentially expose sensitive information or lead to model abuse and misuse.
The Rise of Proactive, Anticipatory Defense
In the face of the custom GPT vulnerability, we will see enterprises better define security requirements and develop a well-structured risk assessment methodology for their AI development tools and processes.
Even beyond that, they must find ways to avoid scrambling to clean up after an incident occurs. The best stance going forward will be underpinned by anticipation and prevention – something that is attainable by calling upon analytics and machine learning to predict and prevent threats before they materialize.
Backing this proactive stance will be the combined forces of Governance, Risk, and Compliance (GRC) and Resilience and Vulnerability Risk Management (VRM) teams. They will play a pivotal role identifying potential risks and vulnerabilities and implementing measures to mitigate them. In fact, these teams will become the cybersecurity rangers, training their eyes just over the horizon to enable an anticipatory defense.
At the same time, we will see soft skills emerge as a critical element in incident response. Cyberattacks that leverage AI, disinformation, and sophisticated social engineering tactics are more perceptual and psychologically nuanced. In other words, their impact now extends to people’s mental health. As a result, soft skills including empathy, communication, and psychological acumen will become essential. Watch for incident response training and preparedness exercises to increasingly include these soft skills so teams can better handle the human aspect of these attacks.
AI’s Security Quandary: Strategies for a Resilient and Protected Future
AI and cybersecurity are now interlinked. And that presents a puzzle for organizations, governments, and individuals. AI can significantly strengthen defense mechanisms. But it also positions cybercriminals and threat actors to launch sophisticated attacks.
As novel AI-driven threats unfold around us, businesses and governments must keep up to date and invest in fitting security strategies. Effective cybersecurity in an AI-dominated landscape is characterized by a combination of technology solutions and a deep understanding of cyber adversaries’ evolving tactics. Anticipating these challenges and proactively adapting cybersecurity postures means we can enter this era of new digital threats on our toes instead of on our heels.