Is a computer the best detective?
AI is helping find criminals, but it still has a lot of learning to do.
Crime doesn’t look like it used to in movies. Gone are the days of masked robbers holding up bank tellers with menacing, hand-scribbled notes. Now, thugs can hack into an unsuspecting consumer’s account in the wee hours of the morning, launder money with cryptocurrency or use cracked passwords to embezzle.
Technology has dramatically changed how hustlers break laws, and the approaches to catching digitally savvy crooks have changed with it. In some cities, software also shapes where police target their efforts and how judges sentence criminals. Science fiction fans may be reminded of “Minority Report,” the story of a team of psychic humans who can predict “future crimes.” Like the implications raised by that prophetic tale, the striking capabilities of these applications have been met with critical studies and concerned experts.
So, what are the best uses of these programs?
Cybersecurity analysts regularly utilize artificial intelligence (AI) to detect insurance fraud and fight money laundering. Companies also use AI to prevent employee theft and track insider trading. Such programs can detect and interpret patterns across billions of pieces of data faster than any cigar-chomping, hard-nosed detective.
Financial institutions face three major types of digital crime
First are crimes that have been committed in the past, such as a hacker transferring small amounts of money to different accounts, scoring millions of dollars in 25-cent increments. Second are “known unknowns”: present-day fiscal crimes that may be based on past attacks. For example, a system may be able to detect when routine transfers to an account in one country are suddenly being rerouted toward an account in another country. Third are the “unknown unknowns”: attacks beyond the scope of anything an institution has seen before.
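To give a sense of scale for that first category, skimming in 25-cent increments only pays off at enormous transaction volume, which is exactly the kind of pattern a detection system can look for. A quick back-of-envelope calculation (the figures here are hypothetical, chosen only to illustrate the arithmetic):

```python
# Back-of-envelope arithmetic for a "salami slicing" scheme: how many
# 25-cent skims does it take to reach $1 million? (Figures hypothetical.)
increment = 0.25
target = 1_000_000
transfers_needed = int(target / increment)
print(f"{transfers_needed:,} transfers")  # 4,000,000 transfers
```

Four million tiny transfers is far outside any normal account's behavior, which is why volume itself becomes a detectable signal.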
Those unknown possibilities are “the most powerful and critical types of money laundering crimes because they can be completely catastrophic, posing legal and reputational risk to a firm,” says Steve Mann, chief marketing officer of the AI-based data analytics company ThetaRay.
ThetaRay’s “IntuitiveAI” algorithm focuses on the connections between data and can distinguish normal relationships from abnormal ones. For example, if a user who has been sending $150 every week to the same person at the same time of day suddenly starts sending that same amount to 10 different people, each in a different location, and eventually increases the amount to, say, $300, IntuitiveAI will flag such transactions as unusual.
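ThetaRay's actual algorithm is proprietary, but the baseline-versus-anomaly idea in the example above can be sketched in a few lines. This is a minimal illustration, assuming a simple per-user profile of typical amounts and known recipients; the function names and thresholds are invented for the example:

```python
# Hypothetical sketch: flag transfers that deviate from a user's established
# pattern (recipient set and typical amount). Illustrative only; not
# ThetaRay's actual method.

def build_profile(history):
    """Summarize past transfers: average amount and the set of usual recipients."""
    amounts = [t["amount"] for t in history]
    return {
        "typical_amount": sum(amounts) / len(amounts),
        "known_recipients": {t["recipient"] for t in history},
    }

def is_anomalous(profile, transfer, amount_tolerance=0.5):
    """Flag a transfer if the recipient is new or the amount jumps sharply."""
    new_recipient = transfer["recipient"] not in profile["known_recipients"]
    baseline = profile["typical_amount"]
    amount_jump = abs(transfer["amount"] - baseline) > amount_tolerance * baseline
    return new_recipient or amount_jump

# Eight weeks of $150 transfers to the same person establish the baseline.
history = [{"recipient": "alice", "amount": 150.0} for _ in range(8)]
profile = build_profile(history)

print(is_anomalous(profile, {"recipient": "alice", "amount": 150.0}))  # False: routine
print(is_anomalous(profile, {"recipient": "bob", "amount": 150.0}))    # True: new recipient
print(is_anomalous(profile, {"recipient": "alice", "amount": 300.0}))  # True: amount doubled
```

Real systems build far richer profiles (timing, geography, network structure), but the core move is the same: learn what normal looks like, then score deviations from it.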
Beyond ThetaRay’s techniques, financial crimes can also be detected with rule-based systems. These may target transfers that surpass a certain amount of money. Supervised machine learning can also train programs to identify familiar patterns.
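A rule-based screen of the kind described above can be sketched as fixed checks over a batch of transfers. The $10,000 figure below mirrors a common reporting threshold, but the rules, field names, and structure here are purely illustrative, not any institution's actual system:

```python
# Hypothetical rule-based screen: flag any single transfer over a fixed
# threshold, plus bursts of small transfers that together exceed it
# ("structuring"). Illustrative rules and field names only.

THRESHOLD = 10_000.0

def flag_transfers(transfers):
    flagged = []
    running_total = 0.0
    for t in transfers:
        if t["amount"] > THRESHOLD:
            flagged.append((t["id"], "over threshold"))
        running_total += t["amount"]
    # Many sub-threshold transfers summing past the limit is itself suspicious.
    if running_total > THRESHOLD and all(t["amount"] <= THRESHOLD for t in transfers):
        flagged.append(("batch", "structured: many small transfers sum over threshold"))
    return flagged

# Three $9,500 transfers: none trips the single-transfer rule,
# but together they exceed the threshold.
transfers = [{"id": i, "amount": 9_500.0} for i in range(3)]
print(flag_transfers(transfers))
```

The weakness of this approach is visible in the sketch itself: the rules only catch what their author anticipated, which is why such systems miss novel schemes and produce floods of false positives.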
However, these approaches have limited success, and cannot catch crimes beyond the scope of what has been done before. They also generate false positives at a rate of more than 95 percent, which requires substantial resources to evaluate properly.
Financial crimes often tie into other types of crime as well. Laundered money may be converted into cryptocurrency and used for nefarious purposes, including funding human and drug trafficking operations, as well as terrorist organizations.
AI’s ability to tabulate and analyze massive amounts of data makes it a natural partner for law enforcement. U.K. police departments utilize behavioral algorithms to assist in crime prevention. However, these methodologies are still developing—and some may result in false positives that adversely affect those who are low-income and people of color.
Computer-driven frameworks are also employed in sentencing decisions. The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) software program used by several U.S. courts, including in Broward County, Fla., considers factors such as criminal history, substance abuse, and residential stability to predict recidivism among offenders. Studies have shown that such risk assessment tools can result in unfair sentencing, particularly in communities of color. In 2018, researchers from Dartmouth College published a study suggesting COMPAS was no more effective at predicting repeat offenders than querying a random sampling of people online.
Cybersecurity expert Nikita Malik warns of the dire consequences of using computer-generated crime-fighting tools and previously suggested creating “an international commission on the regulation of AI, where countries and consumers can input into the development of systems and ensure that codes of conduct and laws meet the threshold set by international human rights standards.”
The use of AI-based crime-fighting tools is increasing across various settings—and the potential for using technology for the greater good is increasing along with it. ThetaRay’s CEO Mark Gazit acknowledges that technological advances often inspire trepidation, but argues that, such fears notwithstanding, the upside of AI-based strategies is substantial and must not be overlooked.
“In the world of financial crime, the perpetrators are using AI—and they’ve been using it for a while,” Gazit says. “As human beings, we have an obligation to create a system to protect ourselves.” Considering the other types of crimes money laundering is often associated with, he adds, “We can keep using [AI] to make a difference.”