ChatGPT, DALL·E 2, Stable Diffusion, and other artificial intelligence products have captured imaginations over the past year by generating striking images never before drawn and writing stories never before read. A parallel renaissance has taken over fraud control.

For years, banks and credit unions have employed AI to reduce fraud, and they are increasingly doing so today. A recent survey by the identity security company Alloy found that 59% of financial services companies plan to invest in machine learning and AI-powered models to deter fraud over the coming 12 months — more than plan to invest in physical biometrics (46%), behavioral biometrics (44%) or open-source and social media data (43%).

Even as artificial intelligence improves, fraud schemes have grown more sophisticated. The result, according to federal data, is a rising overall cost of fraud, even as industry data shows the number of fraud incidents declining.

Alloy’s report indicated that most (90%) of the regional banks surveyed reported over $500,000 in fraud losses. The same was true of community banks and credit unions (61%) and national banks (69%). The report is based on a September survey of 251 decision-makers at financial services companies, conducted by the survey platform Qualtrics.

Previous iterations of AI models used to identify phony credit applications and questionable transactions mostly mimicked the decisions of their predecessor technology, rules-based fraud detection, some observers say. According to one expert on AI-based fraud control technology, these models have now matured past the “vaporware” stage and outperform rules-based systems.

“AI is less vaporware and a real technology that is powerful and better developed,” said Mike Sekits, co-founder and managing director of Btech Consortium, an investor in technology designed for community banks. “Once properly trained, AI can deliver on the promise as computing power allows for scale and efficiencies to review millions of transactions almost instantly.”

One of the most obvious areas of success for AI fraud control has been identifying possibly fraudulent credit card transactions, Sekits said. Credit card companies that employed AI to identify fraud initially flagged too many transactions as potentially fraudulent, but as the models were trained on more data, they improved at distinguishing true fraud from atypical but legitimate purchases.
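The trade-off Sekits describes — early models flagging too many legitimate transactions, then improving as they are tuned — can be illustrated with a toy sketch. This is not any card network's actual system; the scores and thresholds below are hypothetical, chosen only to show how adjusting an alert threshold reduces false alarms on unusual-but-legitimate purchases without missing fraud.

```python
# Illustrative sketch only (hypothetical scores, not a vendor's system):
# a fraud model assigns each transaction a risk score from 0 to 1, and
# the bank alerts on any score at or above a chosen threshold.

def alert_rates(scores_legit, scores_fraud, threshold):
    """Return (false_positive_rate, fraud_catch_rate) at a threshold."""
    fp = sum(s >= threshold for s in scores_legit) / len(scores_legit)
    tp = sum(s >= threshold for s in scores_fraud) / len(scores_fraud)
    return fp, tp

# Hypothetical model scores. Atypical legitimate purchases (0.55-0.65)
# score higher than routine ones, which is why an aggressive threshold
# flags too many real customers.
legit = [0.1, 0.2, 0.3, 0.55, 0.6, 0.65]
fraud = [0.7, 0.8, 0.9, 0.95]

fp_low, tp_low = alert_rates(legit, fraud, 0.5)    # aggressive threshold
fp_high, tp_high = alert_rates(legit, fraud, 0.7)  # tuned threshold

print(fp_low, tp_low)    # 0.5 1.0 — all fraud caught, half of legit flagged
print(fp_high, tp_high)  # 0.0 1.0 — same catch rate, no false alarms
```

In practice the tuning happens inside the model (via retraining on labeled outcomes) rather than by moving a single threshold, but the effect reported in the article is the same: fewer legitimate customers flagged for the same amount of fraud caught.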

“Despite being a prime space for technology innovation, the complexity and regulatory and compliance needs of financial services slowed venture capitalists’ ability to grasp the opportunity fully,” Sekits said. “Once they did, new terms like fintech, insurtech and regtech came into vogue as VCs raised billions around the financial technology sector opportunity.”

The same thing is happening with artificial intelligence and machine learning, Sekits said. The technology first developed without specific or notable use cases, and much of the early development was not truly AI but “smoke and mirrors” of database queries dressed up as AI.

“As AI technology has improved, companies have started looking for powerful use cases to derive value,” Sekits said. “Fraud is an excellent example.”

The improvement in AI fraud detection is the product of more fundamental advances — faster computer chips, better-connected hardware via cloud services and greater access to data via consortium contribution — according to Christopher Schnieper, senior director of fraud and identity at the data analytics company LexisNexis Risk Solutions.

“Anecdotally, I feel that AI/ML techniques have improved and will continue to be a cat and mouse game with [the technology] continually adapting to increasingly sophisticated fraud attacks,” Schnieper said. 

For tech investor Sekits, some of the evidence about how AI has fared against fraud is obvious — the improvement in credit card fraud prevention is one example — but much of it is harder to find.

“It’s hard to assess who is winning” between the fraudsters and the financial institutions “because of how underreported fraud is by financial institutions,” Sekits said. “Still, AI promises to keep the scammers working hard and forcing them to innovate, while the AI technology developed for financial institutions is becoming smarter, faster and cheaper.”