AI, or more accurately its machine learning sub-discipline, is all about training machines to make intelligent decisions. Cybercriminals find value in AI's algorithms, which can analyze huge data sets and automate attacks against large numbers of users while personalizing each attack for maximum effect. AI can also be used to impersonate people, probe systems for vulnerabilities and circumvent cyber defenses.
Artificial intelligence hacking examples include:
Brute-force attacks: In 2017, researchers developed an algorithm that cracked 27% of the passwords in a data set of more than 43 million LinkedIn profiles.
Phishing: Attackers could deploy AI to eavesdrop on corporate emails, learn to impersonate users' writing styles and other behavioral traits, and insert personalized phishing emails into existing threads.
Hiding in plain sight: After infiltrating a corporate network, attackers could use AI to locate high-value data stores and analyze the best way to further penetrate the system and remain undetected.
Direct impersonation: AI algorithms can also be deployed to mimic a person's voice or appearance to carry out fraud or sabotage. In March 2019, a CEO was tricked into wiring $243,000 after scammers used AI to mimic his boss's voice over the phone.
Adversarial techniques: AI software can solve CAPTCHAs designed to separate humans from bots, and can confuse facial recognition systems.
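To make the brute-force example above concrete, here is a minimal, purely local sketch of automated password guessing: a small wordlist plus mechanical mutations stands in for the learned guessing model the researchers used, and the "leaked" hash is generated on the spot for illustration. The wordlist, suffixes and target password are all hypothetical; real attacks operate at vastly larger scale.

```python
import hashlib

def sha256(pw: str) -> str:
    # Hash a candidate password the way a (toy) leaked database might.
    return hashlib.sha256(pw.encode()).hexdigest()

# Hypothetical leaked hash, generated locally for this sketch only.
target_hash = sha256("Summer2017!")

# A tiny wordlist plus mechanical mutations stands in for a learned
# guessing model; real tools generate billions of such candidates.
base_words = ["password", "summer", "winter", "linkedin"]
suffixes = ["", "1", "123", "2017", "!", "2017!"]

def mutations(word: str):
    # Common human habits: capitalization plus a short suffix.
    for w in {word, word.capitalize(), word.upper()}:
        for s in suffixes:
            yield w + s

def crack(target: str):
    # Try every candidate until one hashes to the target.
    for word in base_words:
        for guess in mutations(word):
            if sha256(guess) == target:
                return guess
    return None

print(crack(target_hash))  # recovers "Summer2017!"
```

The point is not the toy wordlist but the loop: because hashing and comparison are cheap, guessing automates trivially, and a model trained on real leaked passwords simply generates far better candidates than these hand-written mutations.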
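The adversarial techniques above rest on a simple idea: nudge an input just enough to flip a model's decision. A minimal sketch on a hypothetical linear classifier (all weights and inputs invented for illustration) shows the gradient-sign-style step behind many evasion attacks on recognition systems:

```python
def predict(weights, x):
    # Toy linear classifier: positive score means one class,
    # negative score means the other.
    return sum(w * xi for w, xi in zip(weights, x))

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def evade(weights, x, eps):
    # Gradient-sign-style perturbation: push each feature against
    # the direction that contributes to the current score.
    return [xi - eps * sign(w) for xi, w in zip(x, weights)]

weights = [0.6, -0.4, 0.8]   # hypothetical trained weights
x = [1.0, 0.5, 1.2]          # original input, score 1.36 (positive class)
x_adv = evade(weights, x, eps=1.0)

print(predict(weights, x))      # 1.36
print(predict(weights, x_adv))  # -0.44: the decision flips
```

Against a deep model the same idea uses the network's actual gradients and a perturbation small enough to be imperceptible, which is why adversarially modified images can fool facial recognition while looking unchanged to a human.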