Lawrence K. Zelvin: Fraudsters have artificial intelligence too

Soon, personal artificial intelligence agents will streamline and automate processes that range from buying your groceries to selling your home. You'll tell them what you want, and they will do the research and legwork, log into your personal accounts and execute transactions in milliseconds.

It is a technology with extraordinary potential, but also significant new dangers, including financial fraud. As Gail Ennis, the Social Security Administration’s inspector general, recently wrote: “Criminals will use AI to make fraudulent schemes easier and faster to execute, the deceptions more credible and realistic, and the fraud more profitable.”

The story of cyberfraud has always been a technological arms race between criminals and those they're trying to rob, each side striving to innovate faster than the other. In banking, AI's advent both supercharges that competition and raises its stakes.

When scammers used an AI-powered audio deepfake to convince the CEO of a British utility to transfer $243,000 to a Hungarian bank account in 2019, it was called "an unusual case" because it involved AI. It isn't unusual anymore.

Earlier this year, criminals made headlines when they used deepfake technology to pose as a multinational company’s chief financial officer and tricked one of the company’s employees in Hong Kong into paying the scammers $25 million.

Globally, 37% of businesses have experienced deepfake-audio fraud attempts, according to a 2022 survey by identity verification firm Regula, while 29% have encountered video deepfakes. And those figures don't include individuals who receive realistic-sounding calls purportedly from hospitalized or otherwise endangered family members seeking money.

As these AI-enabled fraud threats proliferate, financial institutions such as BMO, where I lead the financial crimes unit, are working to continually innovate and adapt to outpace and outsmart the criminals.

Fraud, which carried an estimated annual tab of $8.8 billion in 2022, was a festering problem even before the COVID-19 pandemic sparked a dramatic increase in online financial activity. According to TransUnion, instances of digital financial fraud increased by 80% globally from 2019 to 2022, and by 122% for U.S.-originating transactions. LexisNexis Risk Solutions calculated in 2022 that every dollar lost to fraud costs $4.36 in total, once associated expenses such as legal fees and the cost of recovering the stolen money are included.

Generative AI, by its very nature, doesn't require high-tech skills to use, a fact criminals are leveraging to find and exploit software and hardware vulnerabilities. They also use AI to tailor phishing attacks more precisely, mining social media and other publicly available information about their targets.

Then there's synthetic identity fraud, one of the fastest-growing categories of cyberfraud, in which criminals use AI to fabricate identities from a mix of real and invented details, then use them to open new credit accounts. In one instance, criminals created roughly 700 synthetic accounts to defraud a San Antonio bank of up to $25 million in COVID-19 relief funds. TransUnion last year estimated that synthetic account balances reached $4.6 billion in 2022, while an earlier Socure report projected the cost of this fraud would reach $5 billion this year.

We've been down this road before: rolling out new technology before security controls are firmly in place. When businesses rushed headlong to embrace the transformative power of cloud computing, security was a bolt-on they paid attention to only after suffering the sorts of massive data breaches that have since become all too frequent, such as those suffered by Yahoo in 2013, in which the personal data of 3 billion people was exposed; Equifax in 2017, 147 million; and Marriott in 2018, 500 million.

As the international affairs think tank the Carnegie Endowment for International Peace noted in 2020, "Despite various efforts to contain these risks over the past 25 years, the costs of cyber-attacks continue to increase, not decrease, and most organizations — governments and companies — struggle to effectively protect themselves."

The good news is that financial institutions are moving to combat AI fraud with the best tool available: AI. Nearly three-quarters of respondents to a 2022 Bank of England survey said that they were developing machine-learning models to fight financial fraud. Other next-generation defenses are also in the works: Passkeys are replacing passwords, and quantum key distribution is becoming more widespread.

It’s a good start, but it’s just that, a start.

Along with better technology and AI to protect information and funds, we need to lean back into the human element. Companies, financial institutions, regulators and consumers must collaborate to produce and adopt secure, resilient and robust controls for handling this threat. That means education, between institutions and consumers, and among families and friends. It means following protective online practices to keep access credentials secure. It means marshaling all of the tools available, online and offline, at the government, organizational and individual levels, to shore up our defenses like a shield.

The alternative, a patchwork of solutions, will have exploitable seams. And the problem will roll downhill, hitting small and midsize businesses and individuals the hardest, since they can't afford the sophisticated defenses available to multinational corporations.

Artificial intelligence is speeding everything up. We cannot afford to let this accelerated clock tick too long without developing a global, industrywide security standard to harden us against the coming fraud storm.

If we don’t act, the money we already have lost to fraud will seem like small change.

Lawrence K. Zelvin is the head of the financial crimes unit at BMO. 
