AI cyber defence: the immovable object for tomorrow’s irresistible force

Written by Simon Rockman, on 1 Dec 2017

This article is from the CW Journal archive.

Companies such as Darktrace, Carbon Black, Trend Micro and Cylance are using AI to build systems that are better protected from cyber attack. CWJ's Simon Rockman looks at the state of play in machine learning for defence.

Cyber security is one of the greatest technical challenges of our day. Human security teams are routinely outpaced by sophisticated attacks, which propagate at machine speed and often remain undetected within networks for significant lengths of time. No security team of reasonable size could expect to secure everything on its network, even against known or foreseen threats: every printer, every personal device, every security camera, every smart meter, every lift, every ventilation appliance and every VoIP phone. That's before we even mention the computers and the clouds.

Then there's the issue of the unforeseen. Security practitioners tasked with keeping networks safe naturally ask themselves: "How do you defend networks against 'unknown unknowns'?"

The traditional approach to cyber defence rests on the assumption that you can look at yesterday's attacks, or attacks demonstrated by researchers, to define and pre-empt the attacks of tomorrow. The belief is that by truly understanding the nature of the threat we are trying to defend against, we can design security policies and measures to guard against future perils.

Pattern of life

Surveillance can detect insurgent activity in Afghanistan by watching for changes in human activity. AI can do the same by watching network traffic.

This approach has consistently fallen short in recent years in the face of sophisticated, stealthy cyber attacks spreading at machine speed – our hard-gathered intelligence quickly becomes obsolete. If we can't expect attacks to follow known patterns, then we can't define in advance what we are trying to defend against. Attacks driven by AI and machine learning are a major emerging threat. Polymorphic malware is being seen "in the wild", rewriting itself each time it propagates and hiding itself with encryption.

In addition to this, we are seeing novel variants of existing malware spreading at machine speed, such as the recent WannaCry attack, which affected more than 230,000 computers across 150 countries and crippled numerous NHS trusts. WannaCry innovatively combined commonly seen ransomware with a worm-type spreading mechanism. Malware such as this poses risks not only to the infiltrated networks, but also to those who come into contact with them.


It’s not intelligence-led, it’s Artificial Intelligence

One answer to these problems could be the use of AI. It's an approach which is being taken by a number of vendors of security software. Trend Micro has its XGen, for example.

"The name XGen was carefully chosen to indicate the cross-generational technology fuelling our approach," says Eva Chen, Trend Micro CEO. "We've been using machine learning for years as part of our global threat intelligence system, the Trend Micro Smart Protection Network. Now, we're raising the bar by infusing 'high-fidelity' machine learning into our blend of protection techniques specifically to tackle more advanced threats like ransomware."

Security software vendor Carbon Black sees the role of Machine Learning as doing the heavy lifting in analysing activities, while intervention only takes place automatically when there is direct recognition of a threat. Carbon Black's security strategist Rick McElroy, a former United States Marine, told us: "We find that most organisations are not ready for Machine Learning defence."

The Carbon Black approach is to use machine learning to focus on attackers' patterns, tactics and procedures, and on how to sift through large amounts of data. The vendor is working with IBM to use its Watson AI technology.

McElroy highlights the necessity for having good data, a common theme in most AI endeavours. "It starts with the data set," he says.

Other companies, including Cylance, build their data sets from "features" of software derived from attributes such as file size and entropy, as well as parsed sections of executables: for example, the names of each entry in the section table, or the base-2 logarithm of file size, can be used to build a data set and, perhaps, train AI systems to recognise never-before-seen malware as malware. Some features could be extracted conditionally based on other features; others could represent combinations. The space of possible features is very large.
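To make the idea concrete, here is a minimal sketch of the kind of feature extraction described above – file size, its base-2 logarithm, and Shannon entropy. This is an illustration of the general technique, not Cylance's actual pipeline; a real system would also parse executable headers (section-table names and so on), which is omitted here.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def extract_features(data: bytes) -> dict:
    """Toy feature vector: size, log2(size) and overall entropy.
    Real pipelines add many more features, e.g. per-section entropy."""
    return {
        "size": len(data),
        "log2_size": math.log2(len(data)) if data else 0.0,
        "entropy": shannon_entropy(data),
    }

# Packed or encrypted payloads tend to show near-maximal entropy,
# which is one reason entropy is a useful malware feature.
low = extract_features(b"A" * 1024)             # highly repetitive bytes
high = extract_features(bytes(range(256)) * 4)  # uniformly distributed bytes
print(low["entropy"], high["entropy"])          # near 0 vs exactly 8.0
```

Entropy alone does not identify malware – legitimate compressed files are also high-entropy – which is why such features are combined in large numbers before training.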

Embrace the Uncertainty

Unsupervised machine learning could potentially avoid the need for training with labelled data. Instead, it would be able to identify significant patterns and trends without the need for past or present human input. The advantage of unsupervised machine learning is that it avoids the great challenge of most AI projects: finding enough relevant, labelled data with which to train the system and demonstrate its reliability.

Darktrace uses this approach, saying that it deploys unsupervised machine learning algorithms to "embrace uncertainty". Instead of building on past threats to develop the ability to recognise new ones, Darktrace's AI monitors a client's network traffic and learns what normal behaviour looks like. Ordinary network traffic here takes the place of conventional training datasets: the AI learns to recognise unusual network activity – deviations from normal, legitimate activity, or from the learned "pattern of life", as both Darktrace and military surveillance operators like to say. Darktrace's systems then classify any detected deviation as more or less likely to be a serious threat, and thresholds can be adjusted to avoid too many false positives while alerting in real time to emerging problems.
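The "pattern of life" idea can be sketched very simply: learn a per-host baseline from observed traffic, then score new observations by how far they deviate from it. The class and the z-score method below are hypothetical illustrations of the general principle, not Darktrace's actual algorithm, which is far more sophisticated.

```python
import statistics

class TrafficBaseline:
    """Toy 'pattern of life' model: per-host mean and standard deviation
    of bytes-per-minute, with deviation measured as a z-score."""

    def __init__(self):
        self.history = {}  # host -> list of observed bytes/minute

    def observe(self, host: str, bytes_per_min: float) -> None:
        """Record one observation of normal traffic for a host."""
        self.history.setdefault(host, []).append(bytes_per_min)

    def anomaly_score(self, host: str, bytes_per_min: float) -> float:
        """How many standard deviations this observation sits from
        the host's learned baseline. 0.0 if we lack data to judge."""
        samples = self.history.get(host, [])
        if len(samples) < 2:
            return 0.0
        mean = statistics.fmean(samples)
        stdev = statistics.stdev(samples)
        if stdev == 0:
            return 0.0 if bytes_per_min == mean else float("inf")
        return abs(bytes_per_min - mean) / stdev

baseline = TrafficBaseline()
for v in [100, 110, 95, 105, 102]:          # a quiet device's normal traffic
    baseline.observe("printer-7", v)

print(baseline.anomaly_score("printer-7", 104))   # small: looks normal
print(baseline.anomaly_score("printer-7", 5000))  # huge: worth alerting on
```

Note that no labelled attack data was needed: the model is trained only on the device's own past behaviour, which is exactly what makes the unsupervised approach attractive.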

Darktrace also offers the ability to specify different grades of response to different levels of classification. A customer might choose to queue low-graded anomalies for examination by its security team as time became available, to alert the security team out of turn for middle-graded ones, and, in the case of the highest-graded apparent problems, to let the AI autonomously cut off network traffic to and from the affected nodes, taking the risk of disrupting legitimate activity while the matter is looked into.
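Graded response is, at heart, a mapping from anomaly score to action tier. The sketch below shows the shape of such a policy; the tier names and threshold values are invented for illustration and are not Darktrace's.

```python
from enum import Enum

class Response(Enum):
    QUEUE = "queue for later review"
    ALERT = "alert the security team immediately"
    ISOLATE = "autonomously cut off traffic to/from the node"

def grade_response(score: float,
                   alert_at: float = 3.0,
                   isolate_at: float = 6.0) -> Response:
    """Map an anomaly score to a response tier. The thresholds are the
    tuning knobs mentioned in the text: lowering them catches threats
    earlier but risks disrupting legitimate activity more often."""
    if score >= isolate_at:
        return Response.ISOLATE
    if score >= alert_at:
        return Response.ALERT
    return Response.QUEUE

print(grade_response(1.2))   # Response.QUEUE
print(grade_response(4.0))   # Response.ALERT
print(grade_response(9.5))   # Response.ISOLATE
```

The interesting operational question is where to set the thresholds: too low and autonomous isolation becomes a self-inflicted denial of service; too high and the fast-moving attacks described earlier have already spread by the time a human responds.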

Simon Rockman
Editor - CW Journal

Simon Rockman lives at both ends of the adoption bell curve. An experienced technology writer, he was editor of Personal Computer World in the late 1980s, went on to found What Mobile magazine, which he ran for ten years, and has reviewed over 300 handsets. As mobile correspondent for The Register he championed CW, writing a number of articles supporting the organisation. He has also held senior roles in telecoms, as Creative Experience Director at Motorola, where he looked at new uses for mobile, and Head of Requirements at Sony Ericsson, where he worked on innovative entry-level devices. He was Head of the Mobile Money Information Exchange at the GSMA and has launched Fuss Free Phones, an MVNO aimed at older users.
