This Is How Artificial Intelligence
Will Become Weaponized in Future Cyberattacks
Artificial intelligence has the potential to bring a select set of advanced techniques to the table when it comes to cyber offense, researchers say.
Last week, researchers from Darktrace said that the current threat landscape spans everything from opportunistic attacks by teen hackers to advanced, state-sponsored assaults, and attacks in the latter category continue to evolve.
However, for each sophisticated attack currently in use, there is the potential for further development through the future use of AI.
Within the report, the cybersecurity firm documented three active threats in the wild which have been detected within the past 12 months. Analysis of these attacks – and a little imagination – has led the team to create scenarios using AI which could one day become reality.
"We expect AI-driven malware to start mimicking behavior that is usually attributed to human operators by leveraging contextualization," said Max Heinemeyer, Director of Threat Hunting at Darktrace. "But we also anticipate the opposite; advanced human attacker groups utilizing AI-driven implants to improve their attacks and enable them to scale better."
Trickbot. The first attack relates to an employee at a law firm who fell victim to a phishing campaign leading to a Trickbot infection.
Trickbot is a financial Trojan which uses the Windows EternalBlue exploit in order to target banks and other institutions. The malware continues to evolve and is currently equipped with injectors, obfuscation, data-stealing modules, and locking mechanisms.
In this example, Trickbot was able to infect a further 20 devices on the network, leading to a costly clean-up process. Empire PowerShell modules were also uncovered, which are typically used in remote, keyboard-based infiltration post-infection.
AI's Future Role. Darktrace believes that in the future, malware bolstered through artificial intelligence will be able to self-propagate and use every vulnerability on offer to compromise a network.
"Imagine a worm-style attack, like WannaCry, which, instead of relying on one form of lateral movement (e.g., the EternalBlue exploit), could understand the target environment and choose lateral movement techniques accordingly," the company says.
If chosen vulnerabilities are patched, for example, the malware could then switch to brute-force attacks, keylogging, and other techniques which have proven to be successful in the past in similar target environments.
As the AI could sit, learn, and 'decide' on an attack technique, no traditional command-and-control (C2) servers would be necessary.
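The fallback logic Darktrace describes can be sketched conceptually. The snippet below is a hypothetical simulation, not real attack code: each "technique" is reduced to a stub feasibility check, and all function and key names are assumptions made for illustration. The point is only the decision structure, in which the malware model observes its environment and falls through an ordered list of options rather than phoning home to a C2 server.

```python
# Conceptual simulation of adaptive lateral-movement selection.
# All names are hypothetical; techniques are stub booleans, not exploits.

def pick_technique(environment: dict) -> str:
    """Return the first movement technique the environment permits."""
    # Ordered by preference: the exploit first, then credential attacks.
    techniques = [
        ("eternalblue_exploit", not environment.get("smb_patched", True)),
        ("brute_force", environment.get("weak_passwords", False)),
        ("keylogging", environment.get("interactive_users", False)),
    ]
    for name, feasible in techniques:
        if feasible:
            return name
    return "wait_and_observe"  # no viable path yet; keep learning

# A patched network with weak passwords falls back to brute force:
print(pick_technique({"smb_patched": True, "weak_passwords": True}))
# → brute_force
```

If every check fails, the model simply waits and keeps observing, which is exactly why no traditional C2 channel would be needed.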
AI's Future Role. It is possible that AI could be used to adapt further to its environment. In the same manner as before, contextualization can be used to blend in, but AI could also mimic trusted system elements, improving stealth.
"Instead of guessing during which times normal business operations are conducted, it will learn it," the report suggests. "Rather than guessing if an environment is using mostly Windows machines or Linux machines, or if Twitter or Instagram would be a better channel, it will be able to gain an understanding of what communication is dominant in the target's network and blend in with it."
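The "learning rather than guessing" idea in the quote above is, at its core, frequency analysis of observed traffic. A minimal sketch, assuming a hypothetical list of observed protocol labels as input:

```python
from collections import Counter

# Illustrative only: "learn" which channel dominates a network by
# counting observed flows, rather than guessing Windows vs. Linux
# or Twitter vs. Instagram as the report describes.

def dominant_channel(observed_flows: list[str]) -> str:
    """Return the most frequently seen protocol/service in the traffic."""
    channel, _count = Counter(observed_flows).most_common(1)[0]
    return channel

flows = ["smb", "https", "smb", "smb", "dns", "https", "smb"]
print(dominant_channel(flows))  # → smb; C2 traffic would mimic SMB here
```

Malicious traffic disguised as the dominant protocol is far harder to separate from the background than traffic on an unusual channel.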
Take It Slow. In the final example, Darktrace uncovered malware at a medical technology company. What made the findings notable was that data was being stolen so slowly, and in such small pieces, that it avoided triggering data-volume thresholds in security tools.
Multiple connections were made to an external IP address, but each connection contained less than 1MB. Despite the small individual transfers, it did not take long before over 15GB of information was stolen.
By fading into the background of daily network activity, the attackers behind the data breach were able to steal patient names, addresses, and medical histories.
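A short sketch makes clear why a per-connection volume threshold misses this pattern. The numbers below are assumptions chosen to match the report's figures (under 1MB per connection, over 15GB total); the threshold value and function names are hypothetical.

```python
PER_CONNECTION_THRESHOLD_MB = 5  # hypothetical alerting threshold

def flags_alert(transfers_mb: list[float]) -> bool:
    """Naive detector: alert only if a single transfer exceeds the limit."""
    return any(size > PER_CONNECTION_THRESHOLD_MB for size in transfers_mb)

# 16,000 transfers of 0.96MB each: roughly 15GB exfiltrated in total.
transfers = [0.96] * 16_000
print(flags_alert(transfers))             # → False: no single alert fires
print(f"{sum(transfers) / 1024:.1f} GB")  # → 15.0 GB total stolen
```

Every individual transfer sits comfortably below the limit, so the naive detector never fires even as gigabytes leave the network.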
AI's Future Role. AI could provide a conduit not only for incredibly fast attacks but also for "low and slow" assaults. It could also be used as a tool to learn which data transfer rates would raise suspicion in security solutions.
Instead of relying on a hard-coded threshold, for example, AI-driven malware would be able to dynamically adapt data theft rates and times to exfiltrate information without detection.
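The dynamic-adaptation idea can be sketched as a feedback loop: rather than using a hard-coded chunk size, the model keeps a margin below the smallest transfer size ever observed to trigger an alert. This is a hypothetical illustration of the concept; the function, parameter names, and default values are all assumptions.

```python
def adapted_chunk_mb(alerted_sizes_mb: list[float], margin: float = 0.8) -> float:
    """Stay a safety margin below the smallest transfer known to alert."""
    if not alerted_sizes_mb:
        return 1.0  # conservative default while nothing has alerted yet
    return min(alerted_sizes_mb) * margin

# Transfers of 5MB and 12MB drew alerts, so drop to 80% of the smaller:
print(adapted_chunk_mb([5.0, 12.0]))  # → 4.0
```

Each alert tightens the estimate, so the exfiltration rate converges toward the fastest pace that still goes unnoticed.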
"Defensive cyber AI is the only chance to prepare for the next paradigm shift in the threat landscape when AI-driven malware becomes a reality," the company added. "Once the genie is out of the bottle, it cannot be put back in again."