AI hacking: How IT security leaders can mitigate emerging cyber risks

Author: Phil Muncaster

In cyber security, artificial intelligence can be a double-edged sword. Cybercriminals are leveraging AI to boost success rates and return on investment; AI hacking is just one of the latest developments in a long-running cyber security arms race. But AI can also be deployed by IT security teams as an effective cyber security solution—a sword to counter a sword.

How AI works in attacks

AI—or, more accurately, its machine learning sub-discipline—is all about training machines to make intelligent decisions. Where cybercriminals can find value in AI is in its algorithms, which can analyze huge data sets and automate attacks on large numbers of users while personalizing each attack for maximum effectiveness. AI can also be used to impersonate people, probe systems for vulnerabilities and circumvent cyber defenses.

Artificial intelligence hacking examples include:

Brute-force attacks: In 2017, researchers developed an algorithm that cracked 27% of the passwords in a test set drawn from more than 43 million LinkedIn profiles.

Phishing: Attackers could deploy AI to eavesdrop on corporate emails; learn and impersonate users' writing styles and other behavioral traits; and insert personalized phishing emails into existing threads.

Hiding in plain sight: After infiltrating a corporate network, attackers could use AI to locate high-value data stores and analyze the best way to further penetrate the system and remain undetected.

Direct impersonation: AI algorithms can also be deployed to mimic a person's voice or appearance to carry out fraud or sabotage. In March 2019, a CEO was tricked into wiring $243,000 to scammers after they used AI to mimic his boss's voice over the phone.

Adversarial techniques: AI software can defeat CAPTCHAs designed to tell humans from bots and fool facial recognition systems.
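The brute-force example above rests on a simple idea: a model trained on leaked passwords learns which character patterns people actually use, then generates its likeliest guesses first. A minimal sketch of that idea—using a toy character-level Markov model and a hypothetical list of leaked passwords, not the researchers' actual algorithm—might look like this:

```python
import collections

def train(passwords):
    """Count which characters tend to follow which in leaked passwords."""
    counts = collections.defaultdict(collections.Counter)
    for pw in passwords:
        chars = "^" + pw + "$"  # start/end markers
        for i in range(len(chars) - 1):
            counts[chars[i]][chars[i + 1]] += 1
    return counts

def generate_guesses(model, max_len=12, top_k=2):
    """Greedily expand the top_k most probable next characters at each step."""
    beams = ["^"]
    guesses = set()
    for _ in range(max_len):
        next_beams = []
        for prefix in beams:
            for ch, _ in model[prefix[-1]].most_common(top_k):
                if ch == "$":
                    guesses.add(prefix[1:])  # a complete guess
                else:
                    next_beams.append(prefix + ch)
        beams = next_beams
    return guesses

# Hypothetical leaked-password sample for illustration only.
leaked = ["password1", "passw0rd", "pass123", "letmein1"]
model = train(leaked)
print(sorted(generate_guesses(model)))
```

Even this toy version recovers common passwords like "password" from the sample, which is why ML-driven guessing outperforms blind brute force: it spends its attempts where people actually put their passwords.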

Hitting home

AI hacking can support faster, more effective and larger-scale attacks. Every internet-connected part of the business is theoretically at risk, especially senior executives and users with elevated privileges.

It's not clear how widespread AI hacking is in 2020, but it's highly likely that rudimentary algorithms are already being used to automate bot attacks.

"A lot of it is, at the moment, theoretical," Europol EC3 head of strategy Philipp Amann told ZDNet in March 2020, "but that's not to say that it hasn't happened."

Fighting back

Remember, though, that AI is a double-edged sword—one that can be used to defend as well as attack. Cyber security teams can deploy around-the-clock AI tools to spot patterns that human eyes might miss. Automated penetration-testing tools, for example, can probe systems to seek out attack vectors and vulnerabilities and support target selection. Other tools can bolster defense efforts by:

  • Determining normal traffic patterns and, by extension, more accurately detecting advanced threat activity
  • Learning typical phishing and spam traits to block malicious emails
  • Analyzing existing malware to spot updated variants and zero-day attacks
  • Recommending nearly uncrackable passwords
  • Automating mundane security tasks to free up IT security teams to work on higher-value tasks
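The first item on the list—learning normal traffic patterns and flagging deviations—can be sketched in a few lines. This is an illustrative z-score baseline on hypothetical hourly request counts, not a production detector; real tools use far richer models, but the principle of "learn normal, alert on abnormal" is the same:

```python
import statistics

def flag_anomaly(history, current, threshold=3.0):
    """Flag 'current' if it sits more than 'threshold' standard
    deviations above the historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (current - mean) / stdev
    return z > threshold, round(z, 2)

# Hypothetical hourly request counts for one server during normal operation.
normal_hours = [980, 1010, 995, 1023, 990, 1005, 1001, 988]

is_attack, score = flag_anomaly(normal_hours, 5200)
print(is_attack, score)  # a sudden 5x traffic spike is flagged
```

The value of the baseline approach is that it needs no attack signatures: anything sufficiently unlike learned normal behavior gets surfaced for a human analyst to review, which is how such tools can catch advanced threats that signature-based defenses miss.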

Though AI has its vulnerabilities, it also creates opportunities for stronger cyber security protection.

Discover how Verizon's comprehensive security solutions can protect your organization.