Diamond IT Blog

The AI effect on Cybersecurity

Written by Samantha Cordell | June 24, 2019

 

We’re a long way from the Matrix or the Jetsons, but Artificial Intelligence (AI) is real and is quickly filtering through the systems around us in finance, social media, data systems and now cybersecurity.

While the Hollywood version of AI is about society-wide lurches into Utopia or Dystopia, what actually works is small, focused use of technology to improve efficiency, especially for mundane tasks.

While we mostly associate AI with corporations and governments, AI has also become part of the underworld's toolkit, assisting cybercriminals in their attempts to extract money and data from us.

On the positive side, the same technology is being used to fight cybercrime, to automate simple tasks without our intervention, and to help large organisations such as governments and corporations sort through enormous piles of data.

Artificial Intelligence for defence

AI is helping us defend against attacks on our privacy and security by intruders.

Microsoft Defender Antivirus

Microsoft’s Defender software uses Artificial Intelligence and Machine Learning (a subset of AI where the AI learns from experience) to defend against new threats.

In the case of the “Bad Rabbit” ransomware released in October of 2017, it took Microsoft’s Machine Learning-enabled AI just 14 minutes to determine that the new file was ransomware after the first person opened the malware. Worldwide, there were just ten infections in total before Microsoft’s systems had determined with 90.1% confidence that the package was malware, and it was blocked worldwide.

With earlier generations of anti-malware software, it would typically take two to three days for new malware to be identified and diagnosed, and for its signatures and hashes to be distributed to users through update systems.

In the WannaCry and NotPetya ransomware outbreaks from earlier in 2017, billions of dollars in damage were done before the malware could be stopped.

Microsoft Defender Advanced Threat Protection (ATP)

In earlier versions of Windows Server (those prior to Server 2019), Windows defended itself through layers of external protection, a reduced attack surface (features turned off by default to give exploits less opportunity) and detection of known threats. Windows Server 2019 is now armoured, protected by Microsoft Defender Advanced Threat Protection (ATP).

No longer just passively blocking threats, Server 2019 now monitors and reacts to perceived threats. ATP goes beyond just Machine Learning-driven protection, incorporating mitigation techniques, rapid-response actions and reporting, and enterprise-wide security management.

ATP's Artificial Intelligence system has already proved itself by discovering a fileless malware campaign, picked up because of a spike in WMIC (Windows Management Instrumentation Command-line) commands, which are usually used by programmers and technicians to check and change Windows configuration.
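The baseline-versus-spike reasoning that surfaced the unusual WMIC activity can be sketched in a few lines. This is an illustrative toy, not Microsoft's actual detection logic; the counts and threshold below are assumptions:

```python
from statistics import median

def find_spikes(counts, threshold=5.0):
    """Flag indices whose count sits far above the typical level.

    Uses the median and median absolute deviation (MAD) rather than
    the mean, so a single huge spike can't inflate the baseline itself.
    """
    med = median(counts)
    mad = median(abs(c - med) for c in counts) or 1.0  # avoid a zero-width cutoff
    cutoff = med + threshold * mad
    return [i for i, c in enumerate(counts) if c > cutoff]

# Hypothetical hourly counts of WMIC invocations on one host:
wmic_per_hour = [4, 6, 5, 7, 5, 6, 4, 180, 5, 6]
print(find_spikes(wmic_per_hour))  # → [7]
```

Real systems layer machine-learned models over signals like this, but the core idea is the same: a command that is normal in small doses becomes suspicious when its frequency departs sharply from the established baseline.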

The malware discovery was published by Microsoft's security team on 8 July 2019; see the ZDNet article for more information.

Cylance anti-malware

Cylance is an anti-malware security program with a difference – it’s entirely run on Artificial Intelligence.

Eschewing the traditional methods of keeping virus signatures and hashes, and blocking access to programs and folders, Cylance is entirely dependent on cloud-based artificial intelligence to protect its host.

Cylance gives its users the advantage of a tiny footprint (as there’s no need to keep signature files for thousands of malware variants) and reacts to threats dynamically as they happen.

There are compromises, however. The AI-based system may do an excellent job of detection, but it won’t recognise social engineering or other risky actions taken by the user, since it assumes the user is not a threat.

Artificial Intelligence on the attack

Many of the more serious recent security breaches have been linked to AI, though it’s very difficult to confirm that link with certainty.

One of the most likely uses of AI is in the development of credentials lists.

When hackers steal large lists of user credentials, they’re unlikely to use those credentials on the site they stole them from. Instead, the credentials will be tried on other sites, since many people reuse the same passwords across multiple sites.

Additionally, these password lists are run through Artificial Intelligence algorithms to learn the typical patterns humans follow when building passwords, and those same patterns are then used against us. Combined with password spraying and vulnerabilities such as exposed Windows RDP and the EternalBlue SMB exploit used by the WannaCry ransomware, this opens up the potential for a massive cybersecurity event.
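That same pattern knowledge can be turned around for defence. Below is a minimal sketch, with a small hypothetical list of the human habits just described (capitalised word plus year plus trailing symbol, and so on), that flags passwords a pattern-driven guessing tool would reach quickly:

```python
import re

# Hypothetical patterns describing common human password habits.
# Real attack wordlist generators model many more.
COMMON_PATTERNS = [
    re.compile(r"^[A-Z][a-z]+\d{1,4}[!@#$%]?$"),  # e.g. Summer2019!
    re.compile(r"^[a-z]+\d{1,4}$"),               # e.g. dragon123
    re.compile(r"^\d{4,8}$"),                     # e.g. a date of birth
]

def follows_common_pattern(password: str) -> bool:
    """Return True if the password matches a typical human structure,
    the kind of shape attackers generate candidate lists from."""
    return any(p.match(password) for p in COMMON_PATTERNS)
```

A password manager or signup form could use a check like this to warn users away from the very structures credential-stuffing tools try first.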

While it’s possible that hackers may have the resources to develop their own AI systems, it’s more likely that they will hack into existing systems to re-program them for their own uses.

It’s likely that many of the recent mega-breaches, such as the Marriott chain’s leak of 500 million customers’ data records, were accomplished with some assistance from AI.

Barriers to using AI in cybersecurity

Research carried out by Tech Pro Research in 2018 uncovered several concerns that company IT pros and C-suite executives (CEO, CIO, CFO, CISO) had with using AI in Cybersecurity.

The top three constraints identified in the research were the following:

  • Technology is not mature in the areas we require
  • Time and skilled resources to implement
  • Management commitment and budget

Seven risks were given in feedback to the research:

  • Loss of privacy due to the amount and type of data to be used
  • Over-reliance on the AI system in lieu of other systems
  • Lack of understanding of the limitations of the algorithms
  • Insufficient protection of data and metadata
  • Inadequate training solutions
  • Lack of visibility in decisions made through AI
  • The use of the wrong algorithms for a specific problem

Moving forward with AI and cybersecurity

Most of the AI we implement in our roles as IT professionals will be as part of the systems of large organisations such as Microsoft, Avast and our government and financial sectors.

At the coalface, the implementation of AI needs to be selective and goal-focused:

  • Understand your current needs and uses of your systems
  • Make informed decisions about using AI in Cybersecurity roles
  • Understand your data and its limitations
  • Develop strong relationships with your security experts and your company stakeholders
  • Allow time to train the AI systems and to review and improve the intelligence

Artificial Intelligence is like any resource: it can provide great benefits for those willing to put the time and effort into smart, measurable implementations of the technology, from the largest corporations to the smallest business providers.