In this article, we take a look at the growing use of artificial intelligence in cyber security as security professionals try to stay a step ahead of the constant barrage of threats and cyber-attacks.
Background – Artificial Intelligence in Cyber Security:
Evolving technologies and the growing numbers of “always on”, “always connected” devices, tools and commodities are giving the instigators of cyber-threats increased opportunities for access and interference.
Statistics suggest that attacks on individuals, corporations, and government bodies account for almost $400 billion in lost revenue annually, and that some 90% of companies admit to having been victim to some kind of attack – figures that translate to 18 individuals per second being affected by cyber-crime. Countering these threats is a real and ongoing concern for enterprises.
Security personnel are finding themselves overwhelmed by the multiplicity of attack vectors and tools available to cyber-criminals, and are increasingly looking to a new ally in the quest for cyber security: AI.
What is Artificial Intelligence?
AI is a sub-division of computer science dealing with the development of systems and software capable of acting intelligently, and doing things that would normally be done by people – equally as well, or sometimes better. AI refers to the science and methodology itself, and to the behaviour exhibited by the machines and programs which result from it.
The term was first introduced at the Dartmouth Conference of 1956 by John McCarthy, who later became a professor at Stanford University. In its practical applications since then, three distinct approaches to AI have evolved.
Strong AI
Machines and applications in this field are designed to simulate the functions of actual human intelligence – to think as we think. Such systems may also be able to explain why humans think the way they do. The “Holy Grail” of this approach is to create machines that are artificial simulations of human consciousness – a level we are still some way from reaching.
Weak AI
The products of this philosophy are functioning systems and software that do things that humans do – but not necessarily in the same way. Weak AI machines may behave like people on the surface, but they don’t have the capacity to reveal how humans think. An example of this would be the chess-playing capabilities of IBM’s Deep Blue.
AI “In Between”
The middle ground of Artificial Intelligence is the largest active field, and covers systems and software inspired or fuelled by human reasoning. Pattern recognition and machine learning techniques of the kind used by Google’s deep learning projects or IBM’s Watson fall into this category.
General or Narrow?
Systems designed with the ability to reason in general, but without application to a specific purpose, are classified as general AI. This contrasts with narrow AI, which includes machines and software designed for specific purposes. It’s these narrow systems that see application in the day-to-day running of computer systems.
How can the Cyber Security industry benefit from Artificial Intelligence?
Prevention of cyber-threats and the avoidance of attacks represents the ideal, but it’s almost inevitable that incidents will occur. And when they do, a rapid response is crucial, both in minimizing the damage caused by the assault, and in recovering from its effects.
With a “thinking machine”, rapid response could be written into its DNA. Algorithms dedicated to spotting potential threats could be implemented in real time, to give a moment by moment response to an attack.
Existing security software databases and algorithms have a limited scope, and are sometimes unable to keep pace with the rapid development and mutation of new threat vectors. Adaptive or machine learning algorithms designed into an intelligent security system have the potential to identify and respond to threats as they occur – even changing ones.
And these intelligent security devices could have the inherent ability to keep on learning: to study current pools of knowledge and extrapolate from them to anticipate future threats, and appropriate responses.
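As a simple illustration of the adaptive detection described above, the sketch below maintains a running statistical baseline of a traffic metric and flags values that deviate sharply from it, updating the baseline as new data arrives. This is a deliberately minimal, hypothetical example (the metric, thresholds, and data are invented for illustration), not a production detection system.

```python
class StreamingAnomalyDetector:
    """Flags metric values that deviate sharply from a running baseline.

    Uses Welford's online algorithm to maintain mean and variance, so the
    baseline adapts as new traffic is observed -- a toy stand-in for the
    adaptive learning the article describes.
    """

    def __init__(self, threshold_sigmas=3.0, warmup=10):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0            # running sum of squared deviations
        self.threshold = threshold_sigmas
        self.warmup = warmup     # observations required before alerts fire

    def observe(self, value):
        """Return True if `value` looks anomalous, then fold it into the baseline."""
        anomalous = False
        if self.n >= self.warmup:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(value - self.mean) > self.threshold * std:
                anomalous = True
        # Welford update: the baseline keeps learning from the stream.
        # (A real system would hold anomalies out of the baseline.)
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return anomalous


# Hypothetical stream of requests-per-second from a monitored host
detector = StreamingAnomalyDetector()
traffic = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 101, 5000]
alerts = [t for t in traffic if detector.observe(t)]
print(alerts)  # the 5000-request spike stands out against the baseline
```

A production system would track many metrics at once, discount or quarantine anomalous points rather than absorbing them into the baseline, and feed alerts into an automated response pipeline – but the core idea of a model that keeps learning from live data is the same.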
AI also has the potential to increase the scale of resistance that a system has to ongoing attacks. If an organization has a large volume of hardware (office computer systems, mobile devices, smart devices worn by its members, appliances hooked up to the Internet of Things, etc.), it’s possible for a cyber-assault to target any number of these. In response to such an assault, automated mechanisms backed by Artificial Intelligence could be deployed to meet each incoming threat as it presents itself, taking counter-measures in real time.
Current and Future Implementation of Artificial Intelligence in Cyber Security: Defensive and Offensive Measures
In traditional security set-ups, real-time responses to cyber-threats are often hampered by the speed and sometimes shifting nature of the attack itself, and the sheer volume of data that needs to be analyzed in order to formulate a response, and plot out a remediation strategy.
Human security analysts typically can’t handle this task alone, and require at least some degree of automation in their cyber-threat response mechanisms. Today’s AI systems, with their machine learning algorithms and real-time (or near real-time) counter-measures, represent an early stage in what promises to be an evolving security landscape.
The next logical level is the application of intelligent learning capabilities in security software, so that it becomes better able to dynamically react to a range of threats (which may be changing, themselves).
Proactive measures are also part of the mix. It’s widely believed that several governments are already at work on artificially intelligent components to existing weapons systems, capable of acting autonomously to evaluate potential threats and eliminate them by deploying pre-emptive cyber-attacks or espionage activities of their own.
From a security perspective, the future presents not only opportunities for deploying AI technology, but also the need to ask some searching questions – and to consider some of the operational difficulties of AI in use.
For example, since AI is in essence a technology like any other, future applications will require assured quality-control measures, so that software and systems behave as they were programmed to – and do not overstep their bounds. They must also be designed to resist cyber-attacks in their own right, with verification and validation measures in place to confirm their continued usability in the event of an assault.
There are also ethical and legal issues to consider, such as who bears responsibility for the actions of an autonomous or intelligent machine, in the field.