How AI is Worsening the Threat of Phishing

AI is the new engine for cybercrime. Phishing attacks are now sophisticated, scalable, and highly effective. From deepfakes to voice cloning, distinguishing genuine communication from fraud is harder than ever. Learn how to fortify your defences against this new wave of sophisticated attacks.

Telecommunications technologies, from 5G and 6G to satellites and wireless networks, now run throughout every sector worldwide, and their rapid rise and continued adoption have dramatically expanded the attack surface. Artificial intelligence (AI) is fundamentally transforming industries, but its influence on cybercrime, particularly phishing and social engineering attacks, is deeply worrying. These attacks are calculated from the outset, and AI empowers methodical attackers to make them more sophisticated, more scalable and, worryingly, more effective.

For organisations operating in, or deploying technologies across, 5G, 6G, IoT and wireless ecosystems, understanding the pervasive threat of these AI-empowered attack vectors has become vital.

The Statistics of AI Phishing

Phishing is one of the most common forms of social engineering attack, and itself encompasses a broad range of techniques. The statistics are alarming: an average of 3.4 billion phishing emails are sent daily, and roughly 85% of UK businesses report experiencing an attack this year alone.

Up to 87% of organisations worldwide have experienced an AI-led attack this year, and 82.6% of all phishing emails now show signs of AI language models or generators, a 53.5% increase on last year. More worryingly, AI-generated phishing emails achieve higher click-through rates than their human-written counterparts.

The financial services sector is one of the most at-risk industries. It was recently reported that the Financial Conduct Authority (FCA) issued 80% more scam warnings than in the preceding period, with the figure rising by more than 300% over five years. While the word ‘scam’ can encompass a range of attack vectors, the escalating figures reflect how convincing AI-led social engineering has become. As a result, these attacks are increasingly challenging to detect, contain and defend against.

How AI Amplifies Traditional Threats

Historically, phishing emails were easy to identify, and they were often ridiculed as feeble attempts to deceive users. Grammatical errors, awkward phrasing, generic text and poorly concealed links were all telltale signs of an email that had, somehow, bypassed existing email security protocols and landed in a recipient’s inbox. A quick permanent delete would resolve the situation.

Now, however, AI has made these attacks far more convincing, with LLMs crafting exceptionally ‘legitimate’-sounding messages that reference real projects and incorporate personalised details and nuance. AI models can harvest publicly available and open-source data from company websites, social media platforms and government registers to craft significantly more believable messages, disguising malicious links and files with remarkable accuracy.

What’s more, deepfake technology has matured into a pivotal enabler of this type of personalised cyber threat. Early last year, a finance employee at the Hong Kong office of engineering firm Arup was deceived into transferring $25 million after joining what appeared to be a legitimate video conference with deepfakes of company executives. Beyond email, voice phishing (vishing) attacks surged 442% last year, with AI voice cloning making it easy for malicious actors to impersonate high-ranking officials, and even high-profile public figures, as recent incidents have highlighted.

These real-world incidents and worrying statistics illustrate the very real threat that AI-led social engineering presents. As investment in AI continues and its capabilities grow, distinguishing genuine communications from calculated phishing attempts will only become harder.

Defending Against AI Phishing Attempts

Organisations must be methodical in their approach to preventing AI-powered phishing from compromising their systems, networks and devices.

Strong passwords and multi-factor authentication (MFA) should be deployed as a minimum, with backup security keys, biometric verification procedures, and regular password reset processes enforced to fortify initial defence layers.
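
As a rough illustration, here is a minimal sketch of server-side TOTP verification in Python using the pyotp library (an assumption; any RFC 6238-compliant implementation works). Secret handling is deliberately simplified, and in practice the secret would be generated at enrolment and held in a secure store:

```python
# Minimal sketch of TOTP-based MFA verification (pyotp assumed installed).
# In production the secret lives in a secure store, never hard-coded.
import pyotp

secret = pyotp.random_base32()      # generated once, at enrolment
totp = pyotp.TOTP(secret)

# Shared with the user's authenticator app, usually rendered as a QR code
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCo"))

user_code = input("Enter the 6-digit code from your app: ")
# valid_window=1 tolerates one 30-second step of clock drift
if totp.verify(user_code, valid_window=1):
    print("MFA check passed")
else:
    print("MFA check failed")
```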

Leveraging machine learning algorithms that analyse email patterns, surface anomalies while minimising false positives, and identify AI-generated content and auto-generated HTML is a proactive solution. Behavioural analysis tools can flag unusual communication patterns that may signify compromise.
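
To make the idea concrete, the toy Python sketch below trains a text classifier on a handful of invented example emails (scikit-learn assumed installed). A production system would learn from a large labelled corpus and combine many more signals, such as headers, sender reputation, link targets and HTML structure:

```python
# Toy sketch: scoring email text for phishing likelihood.
# The four-message dataset is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account credentials within 24 hours",
    "Invoice attached, please confirm payment details immediately",
    "Agenda for Thursday's project review meeting",
    "Minutes from yesterday's standup are in the shared drive",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = "Please verify your payment details urgently"
score = model.predict_proba([incoming])[0][1]  # probability of class 1
print(f"Phishing probability: {score:.2f}")
```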

Out-of-band verification protocols should be established for handling sensitive data or financial transaction requests. Independent verification of video call requests should also be deployed, so that seemingly legitimate requests are not taken at face value.
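
A minimal sketch of what such a gate might look like in code follows, with hypothetical names throughout (KNOWN_CONTACTS, confirm_via_phone). The key property is that approval never travels over the same channel as the request:

```python
# Sketch of an out-of-band verification gate for payment requests.
from dataclasses import dataclass

KNOWN_CONTACTS = {  # directory maintained independently of email/chat
    "finance.director": "+44 20 7946 0000",  # fictitious number
}

@dataclass
class PaymentRequest:
    requester: str
    amount_gbp: float
    channel: str  # e.g. "email", "video_call"

def confirm_via_phone(contact_number: str) -> bool:
    """Placeholder: a staff member calls the registered number and
    confirms the request verbally, then records the outcome."""
    answer = input(f"Called {contact_number}. Request confirmed? (y/n) ")
    return answer.strip().lower() == "y"

def approve(request: PaymentRequest) -> bool:
    number = KNOWN_CONTACTS.get(request.requester)
    if number is None:
        return False  # unknown requester: reject outright
    # Verification deliberately happens on a different channel
    # from the one the request arrived on.
    return confirm_via_phone(number)
```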

Regular security awareness training should be reinforced with real-time phishing simulations and red-team engagements using AI-led ‘vectors’ that replicate genuine threat tactics. This will validate team preparedness and the effectiveness of response protocols.
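
The unique-link mechanic behind most phishing simulations is straightforward; the hypothetical Python sketch below illustrates it. The internal hostnames and addresses are assumptions, and real programmes run on dedicated, authorised platforms with consent, tracking and reporting:

```python
# Sketch of an internal phishing-simulation sender, assuming an
# authorised in-house mail relay (SMTP_HOST is hypothetical).
import smtplib
import uuid
from email.message import EmailMessage

SMTP_HOST = "mail.internal.example"  # assumption: internal relay

def send_simulation(recipient: str) -> str:
    token = uuid.uuid4().hex  # unique per recipient, so clicks are attributable
    msg = EmailMessage()
    msg["From"] = "it-support@internal.example"
    msg["To"] = recipient
    msg["Subject"] = "Action required: password policy update"
    msg.set_content(
        "Please review the updated policy: "
        f"https://training.internal.example/landing?t={token}"
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)
    return token  # stored so the awareness team can match clicks to users
```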

A Call to Arms

The National Cyber Security Centre emphasises that tried-and-tested cyber resilience barriers, while still effective, are no longer sufficient on their own. Cyber security must be treated as a company-wide responsibility, requiring serious investment and active oversight from board-level executives. For Cambridge Wireless members operating at the intersection of telecommunications and emerging technologies, the stakes are especially high. Those who see AI-powered cyber threats for what they are (real threats, not abstract anomalies) will be in a far better position long-term, with a much stronger cyber security posture than those who fail to act now.