Is AI the Future of Cybersecurity?

15 December 2017

With so much personal data stored online, not to mention almost all corporations building their operations around contemporary technology, cybersecurity is now a leading consideration. But are human agents lagging in the fight against cyberattacks, and can Artificial Intelligence prove itself as the first line of defense?

Ethical Hackers

In November 2017, the UK’s National Health Service (NHS) announced that it would be using “ethical hackers” to expose vulnerabilities in the system’s protection. This comes six months after the May 12th WannaCry cyberattack, which saw at least 81 of the 236 NHS trusts affected either directly or indirectly by the ransomware.

AI Cybersecurity

To prevent similar malware from infiltrating the system again, the NHS will work with a Security Operations Centre, which will use human agents to probe for flaws. Looking for weak spots after a breach has occurred seems a natural response, but a new wave of security protocols is taking a different approach.

Traditional Cybersecurity Lagging Behind

Firewalls are intended to keep malicious software out, but what happens when something does get inside the system? Anti-virus programs scan for infections, but this often comes too late – once the malware is inside, it has already begun wreaking havoc by shutting down processes or stealing data. So, how are companies staying at the forefront of the war against cyberattacks? Industry leaders such as Google and Microsoft are developing AI software to drive forward their defense activities.
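
To see why signature-based scanning reacts too late, consider a minimal sketch of how such a scanner works (the hash value, file paths, and scan logic below are placeholders for illustration, not any real product’s data): a file is flagged only if its fingerprint already appears in a list of known samples, so a freshly modified variant slips through untouched.

```python
import hashlib
from pathlib import Path

# Placeholder digests standing in for a real signature database.
# Signature scanning only catches what is already on this list,
# which is why it reacts to known threats rather than novel ones.
KNOWN_BAD_HASHES = {
    "9f2c3a41d8e07b6655aa10c4f1b2d93e8c7a0f5412de6b98c3a17f40e5d2b801",
}

def scan_file(path: Path) -> bool:
    """Return True if the file's SHA-256 digest matches a known signature."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_BAD_HASHES

if __name__ == "__main__":
    for candidate in Path(".").glob("*.exe"):
        if scan_file(candidate):
            print(f"Known malware detected: {candidate}")
```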

Microsoft’s Collaboration with Hexadite

Microsoft’s 2017 acquisition of Hexadite, an AI-driven security company, heralds further innovation in developing more accurate and responsive cybersecurity. Building on its Windows Defender Advanced Threat Protection, Microsoft’s collaboration with Hexadite will see automatic, artificial intelligence-based responses to threats, rather than the traditional methods of blocking known harmful data or identifying a threat once it has already infected the system.

Hexadite functions with minimal input from human agents: it is designed to identify common threats and actual breaches and then handle the issue itself, leaving human technicians free to deal with more complex attacks. Companies are often inundated with constantly evolving cyberattacks from various sources, and AI frees up human technicians to concentrate on identifying vulnerabilities and improving security.
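
Hexadite’s internals are proprietary, so the following is only a rough sketch of the division of labour described above – routine alerts handled automatically, complex ones escalated to humans – with all names, alert types, and thresholds invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    kind: str          # e.g. "known_malware", "phishing_url", "lateral_movement"
    confidence: float  # detection confidence between 0 and 1
    host: str

# Alert types simple enough to remediate without a human in the loop.
AUTO_REMEDIABLE = {"known_malware", "phishing_url"}

def quarantine(host: str) -> None:
    # Placeholder: a real system would isolate the host on the network.
    print(f"Quarantining {host}")

def triage(alert: Alert) -> str:
    """Handle routine, high-confidence alerts automatically; escalate the rest."""
    if alert.kind in AUTO_REMEDIABLE and alert.confidence >= 0.9:
        quarantine(alert.host)
        return "auto-remediated"
    return "escalated to analyst"

print(triage(Alert(kind="known_malware", confidence=0.97, host="ws-042")))
print(triage(Alert(kind="lateral_movement", confidence=0.55, host="ws-017")))
```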

Google’s AI Competition

Rather than adopting Microsoft’s approach of acquiring an established security firm, Google has partnered with the data science platform Kaggle to hold AI “battles” that further develop defenses against malicious attacks.

As part of the 2017 Neural Information Processing Systems (NIPS) conference, AI systems will combat each other in attempts to either attack or defend data systems. The challenge consists of three rounds: confusing a machine-learning system into malfunctioning, forcing a system to misclassify data, and finally bolstering defenses against such attacks. With the number of AI-driven cyberattacks expected to rise, it makes sense to use AI in both capacities: to learn how it would attack, and how it might be used to defend.
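
Both attack rounds revolve around adversarial examples: inputs perturbed just enough to fool a classifier. As a sketch, the fast gradient sign method (FGSM) – a standard technique from the adversarial machine-learning literature, not something specific to the Kaggle contest – can flip a model’s prediction with a barely visible change. The PyTorch code below assumes a trained classifier and an illustrative epsilon:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Nudge each pixel by epsilon in the direction that most increases the
    # model's loss, then clamp back to a valid image range.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Usage, assuming `classifier`, `images`, and `labels` already exist:
# adversarial = fgsm_attack(classifier, images, labels)
```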

AI Advantages

AI does not get tired, and it does not make human errors. This alone makes it attractive in quickly developing industries. The amount of data that AI can examine compared to human agents is also staggering, which is particularly useful for large corporations processing vast quantities of information every hour. AI can also transform terabytes of raw data into clearer forms, such as logs of unusual events or behavioral profiles, in which suspicious activity is more easily identified.
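
At its simplest, turning raw event data into a behavioral baseline means flagging statistical outliers. A minimal sketch, with invented hourly login counts and an illustrative threshold:

```python
import statistics

def unusual_hours(hourly_counts, threshold=2.0):
    """Flag hours whose event count sits more than `threshold`
    standard deviations above the historical mean."""
    mean = statistics.mean(hourly_counts)
    stdev = statistics.stdev(hourly_counts)
    return [
        (hour, count)
        for hour, count in enumerate(hourly_counts)
        if stdev > 0 and (count - mean) / stdev > threshold
    ]

# A quiet stretch of hourly login counts, then a sudden burst.
counts = [12, 9, 11, 10, 13, 8, 10, 95]
print(unusual_hours(counts))  # [(7, 95)]
```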

More importantly, AI can rapidly recognize patterns, so even as cyberattacks constantly evolve, it can pick up on fragments of familiar, rather than identical, code and prevent similar attacks in the future.
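
One way to match familiar-but-not-identical code is fuzzy similarity over small fragments. The toy sketch below – an illustration of the general idea, not any vendor’s detection engine – compares overlapping byte n-grams between a known sample and a mutated variant:

```python
def ngrams(data: bytes, n: int = 4) -> set:
    """Split a byte sequence into overlapping n-byte chunks."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(sample: bytes, known: bytes) -> float:
    """Jaccard similarity between two samples' n-gram sets.

    A rewritten variant still shares many fragments with the original
    even when the files are no longer byte-for-byte identical."""
    a, b = ngrams(sample), ngrams(known)
    return len(a & b) / len(a | b) if a | b else 0.0

original = b"mov eax, 1; xor ebx, ebx; int 0x80"
variant  = b"mov eax, 1; xor ecx, ecx; int 0x80"
print(round(similarity(variant, original), 2))  # substantial overlap despite edits
```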

The Limitations of AI Cybersecurity

With 91% of cyberattacks beginning with a phishing email, viruses can be invited into a system by human error. It takes only one person opening a malicious link or attachment in a reputable-looking email for the entire system to become infected – with further opportunities to spread into contacts’ organizations.

Moreover, as reliance on AI grows, the technology becomes more accessible, and it can be adapted for covert cyberattacks. Traditional phishing emails use easy-to-find information such as names and email addresses, but AI-assisted phishing would be far more insidious, adopting users’ manner of speech, photos, and dates of birth to present the email as genuine correspondence from acquaintances and friends. Internet profiles could be scoured in seconds for such personal information.

The Dangers of Entrusting Security to AI

It seems that the natural foil to AI cyberattacks is AI protection, but it is not necessarily the foolproof defense that we would prefer it to be. It can be difficult to ascertain how much AI should be left to its own devices – it has, after all, been programmed to do a job and cannot choose to deviate from it – but handing over complete control to algorithms is a heady concept. People worry that the rise of AI will reduce human career opportunities, but it need not: human technicians can work with AI rather than being replaced by it, reaping its benefits while ensuring that a human mind remains in control.

In September 2016, representatives from Microsoft, IBM, Google, Amazon, and Facebook came together in the Partnership on Artificial Intelligence to Benefit People and Society in order to, amongst other things, develop best practices in ethics, fairness, and inclusivity in AI development.

In December 2016, the Institute of Electrical and Electronics Engineers (IEEE) presented a report titled ‘Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems’, indicating not only how far AI is expected to advance, but also how the institute feels it should be regulated.