In a wide-ranging interview, AU10TIX’s chief development officer looks at how wealth management KYC needs to change in the deepfake era.
We are nearing a deepfake crisis in the region, with the number of incidents up by an alarming 1,530 percent, as finews.asia recently indicated.
Still, KYC at most banks and with regulators remains stuck in a legacy ID-passport copy mindset. The pressing question now is: how safe and reliable will the IDV information held by banks actually remain?
To get a fuller answer to that question, we asked the AU10TIX chief development officer, Ofer Friedman, a global leader in digital IDV matters.
Mr Friedman, we previously wrote that private bankers are soon going to have to figure out the proper way to ask private clients for a selfie. How near are we to that reality?
Well, it’s going to be even more interesting than that. From a regulatory point of view, private clients are no different from any other client; all must comply with mandated fraud prevention measures. Private clients may appreciate less intrusive identity verification, but if told it is there to protect their assets, they may understand. However, technology is going to alleviate the burden. In the near future, your phone will also be your ID, driver’s license, and so on. Once your digital/mobile ID is encrypted on your smartphone, a selfie will be just one option for unlocking the credentials you need to share.
It’s reality. Digital/mobile IDs are already working. They are issued by governments all over the world, and by commercial platforms like Apple and Google. The European Digital Identity (EUDI) wallet will be among the first in 2025. In principle, it will create an ecosystem run by commercial companies, although the actual issuance of identities will remain with governments. It’s not a «tomorrow morning» thing. There are still questions about implementation and interoperability, but this train has left the station.
«The expectation for the foreseeable future is that people will hold two or more ID wallets.»
What about the US?
Actually, the US presents a microcosm of what it may look like everywhere: a minestrone of wallets and credentials. Many states already issue mobile driver’s licenses, and Apple, Google, and other commercial players are joining the party. On top of that, various companies issue their own wallets. Eventually, your digital wallet will look more like a Rolodex of personal credentials (metaphorically speaking).
In the US, as in Europe, there are enough people who would rather steer clear, as much as they can, of government-issued (and potentially monitored) ID facilities. You are likely to see this in borderline domains like gaming, gambling, and crypto. The expectation for the foreseeable future is that people will hold two or more ID wallets: one for government and banking purposes, and another for private purchases and things like blockchain.
All in all, it is going to become a kind of primordial soup that gradually evolves into survival of the fittest.
But how useful is it to introduce new tech as long as regulators require certified passport and ID copies?
No one is throwing anything away tomorrow morning. The rules already in practice start by offering digital IDs as an option. Once the option picks up and gains the trust of citizens and private clients, you are likely to see usage veer toward digital IDs, whether through enforcement or through advantages in convenience or trust ratings. What is increasingly troubling in this rosy scenario is that Godzilla is already on the horizon. Its popular name is Deepfake. You can deepfake faces and ID document images. It is becoming increasingly easy to fake being the CEO of a hijacked bank, for example, and such crimes will only become easier to commit. I imagine there will be cases where accounts are taken over and employees are tricked by a deepfake image of a senior executive into transferring millions of dollars to criminals.
«That is where Deepfake will hit.»
Once you have an eID, no one should ever need to verify your ID document again. However, you may have additional touchpoints with your account that do not require an ID document. That is where Deepfake will hit. The good news is that deepfake detection is developing in parallel, so that, beyond your credentials, your call or video call may also be monitored continuously in the background, similar to how anti-virus software works on your computer.
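To make the «anti-virus» analogy concrete, here is a minimal, purely illustrative Python sketch of what background monitoring of a video session might look like: frames are sampled, each is scored by a detector, and an alert fires when a rolling average of scores turns suspicious. The frame source, the score_frame detector, and all thresholds are hypothetical placeholders, not a description of any vendor’s actual product.

```python
# Illustrative sketch only: "anti-virus style" background monitoring of a video call.
# The detector (score_frame) and the alerting action are hypothetical stand-ins.
from collections import deque
from typing import Callable, Iterable


def monitor_stream(frames: Iterable[bytes],
                   score_frame: Callable[[bytes], float],  # hypothetical model: 0.0 = genuine, 1.0 = synthetic
                   window: int = 30,
                   alert_threshold: float = 0.7) -> None:
    """Score sampled frames and alert when the rolling average looks synthetic."""
    recent: deque[float] = deque(maxlen=window)
    for frame in frames:
        recent.append(score_frame(frame))
        rolling = sum(recent) / len(recent)
        if len(recent) == window and rolling >= alert_threshold:
            print(f"ALERT: possible deepfake in session (rolling score {rolling:.2f})")
            # In practice: pause the session, trigger step-up verification, notify compliance.
            break
```

The design point is simply that detection runs alongside the interaction rather than only at onboarding, which is what the anti-virus comparison implies.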
But is it possible to tamper with the soft-copy PDFs in a bank’s KYC files?
Oh yes. PDF is editable. You can obtain that capability from Adobe itself. But there are also plenty of sources, not even necessarily on the darknet, that will sell you perfect, editable templates of ID documents, driver’s licenses, and even proof-of-address documents. But who wants to work hard? There are enough «novelty» physical ID documents with stolen personal data that can be made to your liking. If you’re willing to invest a bit more, you can get these with proper holograms and security features.
But even that is old school. With the proper tools, you don’t actually need an image or a physical ID document (or a face, for that matter). You can have Gen-AI produce and transmit these images as if your mobile camera were capturing them live, in real time. I am tempted to send you a link that shows those kinds of things for sale.
«That’s why a paradigm shift is needed.»
Remote verification actually offers fraud detection capabilities that are not available with a physical document. Yet the level of fraud sophistication enabled by today’s AI and Gen-AI technologies is so challenging that it requires more than just keener analysis. Mind you, regulations put severe limitations on a vendor’s ability to find large-scale sources of samples for AI to learn from. That’s why a paradigm shift is needed. The standard way to verify identity documents and/or faces is Case-Level Analysis (CLA): analyzing the images submitted by the customer, under the assumption that manipulation is somehow visible. Increasingly, it is not. A two-layered paradigm is needed, in which Case-Level Analysis is complemented by Behavior-Level Analysis (BLA).
«The size of the bank doesn’t really matter.»
The latter, which we introduced a few years ago, tracks fraud-committing behavior itself. After a deep analysis of the submitted collateral, we employ our Serial Fraud Monitor system to look for anomalies that reflect the work of a professional, systematic attack. We dismantle the images into all the elements that professional fraudsters manipulate, so we can detect behavioral anomalies even if two documents feature completely unrelated details and images. This layer of defense has proven highly effective in detecting perfect fakes that would otherwise be overlooked by standard solutions.
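As a purely conceptual illustration of how a per-case score might be combined with cross-case behavioral monitoring, the Python sketch below flags submissions whose low-level document «fingerprints» recur across otherwise unrelated cases. The feature names, thresholds, and decision logic are invented for illustration and do not describe AU10TIX’s actual Serial Fraud Monitor.

```python
# Conceptual sketch: two-layer screening that pairs per-case analysis (CLA)
# with behavior-level analysis (BLA) across many submissions.
# All features, thresholds, and decisions here are hypothetical.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Submission:
    case_id: str
    font_hash: str      # fingerprint of typeface rendering on the document (illustrative)
    layout_hash: str    # fingerprint of field positions / background artifacts (illustrative)
    case_score: float   # 0..1 tampering likelihood from per-image analysis (CLA)


def behavior_level_flags(history: list[Submission], repeat_threshold: int = 5) -> set[str]:
    """Flag cases whose low-level artifacts recur across unrelated submissions (BLA)."""
    by_artifact: dict[tuple[str, str], list[str]] = defaultdict(list)
    for s in history:
        by_artifact[(s.font_hash, s.layout_hash)].append(s.case_id)
    flagged: set[str] = set()
    for cases in by_artifact.values():
        if len(cases) >= repeat_threshold:  # same "fingerprint" reused suspiciously often
            flagged.update(cases)
    return flagged


def decide(s: Submission, bla_flags: set[str], cla_threshold: float = 0.8) -> str:
    # Layer 1: case-level analysis of the submitted images themselves.
    if s.case_score >= cla_threshold:
        return "reject"
    # Layer 2: even a visually "perfect" fake gets escalated if it shares
    # artifacts with a serial, systematic attack pattern.
    if s.case_id in bla_flags:
        return "manual_review"
    return "accept"
```

The point of the second layer is that it does not depend on any single image looking wrong; it depends on the attacker’s production line leaving the same traces across many submissions.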
How feasible is it for smaller players to introduce new tech given they have limited spending capabilities?
Way more feasible than at any time in the past. First, the portfolio of fraud detection tools is modular, so you can choose a solution to match your risk appetite. The size of the bank doesn’t really matter. I believe it is much less a question of compromise when it comes to private clients. It’s a trust business with higher stakes for all parties. But any way you look at it, the cost per client in private banking is negligible.
What is the most common identity fraud affecting banks that you currently see?
Fraud actually varies by geography. Looking at it by region, Asia Pacific ranks high on the fraud list relative to other regions. It has to do with the presence of fraud rings, as well as with the financial activity of people in these regions. Take forex, for instance; APAC reigns supreme in terms of the number of active brokers. It also has to do with the level of market activity: markets that are «hot» attract more fraud. When it comes to private banking, you can imagine that more fraud activity takes place where the hubs are, but also where fraudsters assume regulations are more lenient or where banks may be less strongly protected.
The types of identity fraud affecting banks follow the path enabled by technology. You have probably read about cases where deepfake technology enabled fraud, and these are only the cases that hit the media. They certainly don’t include the cases that have not been made public, nor those that have not even been detected.
The bottom line is that anyone serious about trust in banking, let alone private banking, is advised to implement a multi-layered defense system that is updated on an ongoing basis and enhanced by close cooperation between the bank and the service provider. I’m not aware of any private banking client who wouldn’t appreciate that.
Ofer Friedman is chief business development officer for AU10TIX, the global technology leader in identity verification and ID management automation. He has 15 years of experience in the identity verification and compliance technology sector and has worked with household names such as PayPal, Google, Payoneer, Binance, eToro, Uber, Rapyd, and Saxo Bank. Ofer began his career in advertising/marketing, working for the BBDO and Leo Burnett agencies.