There’s an innovation race growing over digital identity. On one hand are banks and merchants that have traditionally relied on tools like passwords, one-time passcodes, biometric selfies and voice checks to power online authentication.
On the other are fraudsters using artificial intelligence (AI) to replicate those signals at scale through deepfakes, synthetic identities and cloned voices.
“AI is great. It’s exciting, it’s changing commerce, and we’re all embracing it,” James Mirfin, senior vice president, Global Head of Risk and Security Intelligence Solutions at Visa, told PYMNTS during a discussion for the March edition of the What’s Next In Payments (WNIP) series, “How Will AI Change Identity?”
“But the criminals and the bad actors have been using [AI] probably faster than some of the businesses that are trying to defend against it,” he said.
The result is that commerce is now entering an AI identity era, where trust must be established not just through credentials but through behavioral intelligence, cryptographic tokens and ecosystem-wide cooperation.
After all, the same technologies that are driving innovation are also expanding the attack surface.
How AI Is Forcing Payments to Reinvent Identity Verification
For years, financial institutions have invested heavily in identity verification tools designed to reduce fraud while maintaining a seamless user experience. These systems include document authentication, facial recognition with liveness detection and voice biometrics that allow customers to access accounts using spoken phrases.
However, the same technology that is strengthening those defenses is also advancing the attack: Synthetic identities can now be generated in seconds, while AI-driven voice models can replicate a person’s speech patterns with alarming accuracy. The result is a growing sense that static authentication methods, no matter how sophisticated, may no longer be sufficient on their own — a layered approach is needed, with behavioral identity included in the mix. Behavioral identity looks at how a person acts over time, rather than asking who someone claims to be.
Mirfin said there’s a danger, when building these defenses, of focusing on a single vector in the belief that it is the weakest link. In reality, companies have to be vigilant about all of them: fake voices, fake faces and fake behavior.
The result is a rethinking of the very architecture of fraud detection, which has relied heavily on verifying identity at a single moment, such as when a user logs in or creates an account. As AI grows more sophisticated, that approach is giving way to continuous monitoring that evaluates how someone behaves over time.
For large enterprises with dedicated fraud teams, adapting to this new landscape is already underway. But smaller merchants often lack the resources to build advanced security systems on their own.
Ensuring Security Across the Entire User Journey
The challenge will only intensify as “agentic commerce” emerges, where AI assistants and software agents begin shopping and transacting online on behalf of consumers. In that world, merchants and payment networks will need new ways to distinguish legitimate automated activity from malicious AI agents.
“If you go back 18 months or two years, most merchants would say: ‘Bot? Stop,’” Mirfin said. “Now you’ve got good bot behavior interacting with your website, and that changes the game.”
He added, “If a consumer chooses to use an agent to shop for them, the merchant needs to be ready to accept that and recognize that interaction.”
To that end, Visa and other industry players are developing standards to help merchants and consumers trust in agent-driven transactions. One example is the introduction of “trusted agent” protocols designed to confirm that an AI acting on behalf of a consumer has legitimate authorization.
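The article does not publish the details of these “trusted agent” protocols, but the core idea — a merchant verifying that an agent carries an unaltered authorization from the consumer — can be sketched with a simple signed payload. Everything here (the secret, field names, `sign_authorization`, `verify_agent`) is a hypothetical illustration, not Visa’s actual protocol:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret the consumer's wallet registered with the network.
SECRET = b"demo-secret"


def sign_authorization(auth: dict) -> str:
    """Consumer side: sign the scope the agent is allowed to act within."""
    payload = json.dumps(auth, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()


def verify_agent(auth: dict, signature: str) -> bool:
    """Merchant side: accept the agent only if its authorization is intact."""
    expected = sign_authorization(auth)
    return hmac.compare_digest(expected, signature)


auth = {"agent_id": "shopbot-1", "consumer": "alice", "max_spend": 200}
sig = sign_authorization(auth)

print(verify_agent(auth, sig))                     # True: untampered scope
print(verify_agent({**auth, "max_spend": 9999}, sig))  # False: scope altered
```

The point of the design is that the merchant never has to trust the agent itself, only the consumer’s signature over the limits the agent was granted.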
The core function of payment networks, Mirfin noted, has always been establishing trust between parties that do not know each other.
“The one that we’re spending time looking at, and it’s probably harder to detect, is fake behavior. These are bots that start to mimic human behavior or human interactions,” he said.
These illicit bot systems look less like traditional fraud and more like authentic customers navigating a website, because they can imitate the rhythms of human browsing patterns, typing speeds and purchasing habits.
Mirfin compared the concept of defending against these next-generation software threats to an old-school bank teller assessing customers in person.
“If you are sitting in a café or restaurant, people’s behavior typically is fairly similar,” he said. “But you can spot someone that looks a bit nervous or twitchy. It’s the same in banking. Historically, bank tellers were looking for anomalous behavior.”
In the digital world, those anomalous signals come from, among other things, transaction patterns, device data and location information. When analyzed together, they form a behavioral fingerprint that is far harder for fraudsters to replicate consistently.
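One common building block behind such behavioral fingerprints is simple anomaly scoring: compare a new observation against the user’s own history and flag large deviations. This is a minimal sketch of that idea, assuming a hypothetical signal such as typing interval in milliseconds; real systems combine many signals and far richer models:

```python
from statistics import mean, stdev

# Hypothetical per-user baseline for one behavioral signal,
# e.g. average typing interval in milliseconds.
baseline = [110, 120, 115, 125, 118, 122, 117]


def anomaly_score(observation: float, history: list) -> float:
    """How many standard deviations the observation sits from the user's norm."""
    mu, sigma = mean(history), stdev(history)
    return abs(observation - mu) / sigma


# A human continuing their own pattern scores low; a bot with machine-regular
# timing far outside the baseline scores high and gets flagged for review.
print(anomaly_score(119, baseline) < 2)  # True: typical for this user
print(anomaly_score(15, baseline) > 2)   # True: anomalous, flag it
```

The “over time” framing Mirfin uses maps directly onto this: the baseline keeps updating as legitimate behavior accumulates, so a score is a judgment about a whole history, not a single login moment.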
“You can’t just focus on account creation or setup,” Mirfin said. “It’s about identifying good behavior and good activity over time.”
What Today’s Landscape Reveals About the Next Frontier
Another foundational technology gaining renewed relevance in this AI-driven environment is tokenization.
“There’s no reason for raw credentials or identity data to move around the internet in plain form anymore,” Mirfin said. “Tokens can do more than protect payment information. They can define limits, parameters and permissions around a transaction.”
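Mirfin’s description of tokens that “define limits, parameters and permissions” can be illustrated with a constrained payment token: the raw credential never travels, and the token itself encodes the rules under which it may be used. The class and field names below are hypothetical, chosen only to show the shape of the idea:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


# Hypothetical constrained token: it stands in for the card number and
# carries its own usage rules.
@dataclass(frozen=True)
class PaymentToken:
    token_id: str
    merchant: str        # usable only at this merchant
    max_amount: float    # per-transaction ceiling
    expires: datetime    # hard expiry


def authorize(token: PaymentToken, merchant: str, amount: float) -> bool:
    """Approve only transactions inside the token's declared scope."""
    now = datetime.now(timezone.utc)
    return (merchant == token.merchant
            and amount <= token.max_amount
            and now < token.expires)


tok = PaymentToken("tok_123", "coffee-shop", 25.0,
                   datetime(2099, 1, 1, tzinfo=timezone.utc))

print(authorize(tok, "coffee-shop", 12.50))  # True: within scope
print(authorize(tok, "electronics", 12.50))  # False: wrong merchant
print(authorize(tok, "coffee-shop", 80.00))  # False: over the ceiling
```

Because the token is scoped, a stolen copy is worth far less than a stolen card number: it cannot be replayed at another merchant or above its ceiling.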
Yet technology alone may not be enough. One of the biggest structural challenges facing banks, Mirfin argued, lies inside the organizations themselves.
Fraud prevention, cybersecurity and identity verification are often handled by separate teams within financial institutions, each operating with its own tools and priorities. In an AI-driven threat environment, those silos could become a vulnerability. Gartner recently predicted structural convergence, forecasting that by 2031, 50 percent of large financial institutions will consolidate online fraud prevention, identity, and cybersecurity responsibilities under teams reporting to the CISO, driven by the identity-centric nature of modern fraud.
“Traditionally you’ve had cyber teams, fraud teams and identity teams operating separately,” Mirfin said. “I’d love to see the industry … [bring] those groups together.”
Better collaboration across those domains, he said, could help financial institutions stay ahead of rapidly evolving threats while also reducing friction for consumers.
“We all shop every day,” Mirfin said. “We want to buy things in the easiest way possible, but we also want to know we’re protected.”