And World-Check became a massive success. How did that feel?

We became the gold standard so quickly, it was unbelievable. We were passionate about what we were doing, and we were meticulous with the data. The Swiss financial industry, in particular, backed us from the start, and that support helped us expand globally. In 2011, we sold the business to Thomson Reuters, which felt like a full-circle moment for me, as I had started my career with Thomson Financial Publishing, selling OFAC sanction lists.

«Compliance has grown into a monster, flagging every potential risk and swamping banks with false positives, while those truly involved in criminal networks are largely unaffected.»

Today, World-Check is still widely used, but you’ve expressed concerns about the direction it’s taken since you left.

Yes, and it’s complicated. Originally, our approach was quality over quantity. We didn’t aim to flag every minor offense. Banks wanted the serious risks—bribery, corruption, and terrorism, especially after 9/11. But now, the database has grown to over six million profiles. It’s no longer about precision, and this shift has created problems. There are serious concerns about whether these databases comply with data protection law, given that so many profiles go years without being updated.

Six million profiles sounds massive. What are the consequences of that scale?

Well, it’s created a nightmare of false positives. Banks get flooded with potential matches for people who aren’t really risks. This has led to skyrocketing compliance costs and forced banks to dedicate enormous resources just to rule out these false matches. Today, onboarding a client in private banking can take six to eighteen months. Compliance has become an incredible burden—and a huge cost.

So banks are facing a lot of challenges?

Absolutely. Banks are spending billions on remediation to deal with these false positives. Compliance teams are overwhelmed because every potential match must be investigated, even when it’s obvious that most of these matches are incorrect. This isn’t just a staffing issue; it’s a structural problem caused by the sheer size and lack of specificity in these databases. Compliance teams are now hundreds of people strong in some banks, yet the task is almost impossible to manage.

And the impact extends beyond financial cost, correct?

Yes, the personal consequences are significant. I get calls from people who’ve been mistakenly flagged, who find themselves suddenly «de-banked.» Imagine having your account closed, perhaps even without a clear explanation, simply because your name showed up in a compliance database. This is no small matter. If you have savings, mortgages, or business accounts tied to that bank, it’s a nightmare to unravel. Sometimes people don’t even find out until they apply for an account at another bank and are rejected there as well.

The recent case of Nigel Farage, which finews.com covered extensively (article in German), highlighted the issue of PEP de-banking.

It did. Farage’s case sparked outrage because it put a high-profile face to the problem. He’s a politically exposed person (PEP), but de-banking can happen to successful businesspeople too, sometimes for tenuous links or distant associations. Just being in the wrong room or on the same board as a PEP can be enough. People who aren’t involved in any criminal activity are suddenly shut out of the banking system, and in many cases, they have no idea why.

And the data used to flag these people isn’t always reliable?

That’s one of the biggest issues. The research that goes into these databases today is often rushed or poorly verified. An unverified article or baseless allegation can land someone on a database like World-Check, even if the claims are completely unfounded. When we created World-Check, we were careful to verify information because we understood the impact it could have. Today, with millions of profiles and so much emphasis on growth, there’s a lot of shallow data that makes its way into these databases. An article with baseless claims on a low-quality news site can easily end up causing real harm to someone’s financial life and their reputation.

So people can end up facing consequences based on incorrect or incomplete information. Is there a way to get names removed from these databases?

It’s complicated, but yes, there is. A whole business model has emerged around it. Some law firms now specialize in helping clients challenge and remove inaccurate or outdated information in these databases. The fees are staggering—I’ve seen cases where a single letter can cost upwards of 150,000 pounds. For billionaires, cases can cost millions. It’s ironic, really. Lawyers and reputation management firms have found an opportunity in helping people correct data that should never have been there in the first place.

«Criminal organizations are highly motivated to avoid detection… moving funds through complex corporate structures and international trade deals, where the money is layered and cleaned in ways that are far more difficult to trace.»

What happens when someone files a data subject access request, or DSAR as it’s known in the United Kingdom, to find out why they’re on a database? How do the database owners respond?

It’s an interesting process. When a DSAR is filed, the database providers are legally required to respond, at least in places like the UK, where World-Check, today owned by LSEG, is based. Typically, they have one month to provide a copy of the data. Sometimes they can extend it to three months if it’s a complex request. But here’s the thing: as soon as they receive a DSAR, there’s often a bit of panic. They might review and update the profile before sending it back, so the data looks cleaner and more accurate than what was actually in the system.

So the data you get back may have been revised just because you requested it?

Exactly. In some cases, I’ve seen profiles that were updated right before they were sent out in response to a DSAR. I’ve tested this, comparing profiles from before and after a DSAR, and sure enough, you’ll see the «last updated» date magically coincide with the request. They tidy up the data and send it over, making it look more reliable. It’s a bit of a loophole that allows them to sidestep accountability for the poor quality of the original data.

That actually reminds me—you mentioned you once had trouble boarding a flight. What happened?

Yes, that’s a story I’ll never forget. Last year, I was at London City Airport, first in line to board a British Airways flight. They scanned my boarding pass, and the attendant said, «I’m afraid we’re not allowing you to board today.» I thought she was joking, honestly—I even laughed. But then she asked me to step aside, saying security would escort me shortly. At that point, I realized she was serious.

What was going through your mind?

My immediate thought was that someone, somewhere, had flagged my name due to my association with World-Check. Over the years, my name has appeared in articles mentioning high-risk individuals, terrorist organizations, and financial crimes. AI systems or databases might see my name linked to these topics and flag it. It was a surreal moment and a stark reminder of the possible unintended consequences of these systems. In my case, the reason for not letting me board the flight turned out to be more innocent, but the databases are a force to be reckoned with. They are, after all, used by hundreds of government agencies too.

With AI and big data advancing, do you see these issues becoming more common?

Yes, and it’s worrying. When you feed outdated or inaccurate data into AI, it can lead to serious consequences. Imagine an AI system denying you access to banking or air travel because of a mistaken identity or an unverified connection to a high-risk individual. Once AI systems make determinations, it’s going to be nearly impossible to correct those mistakes. We’re already seeing data protection agencies struggling to handle data correction requests; add AI into the mix, and it’s a recipe for frustration.

«Our original purpose was to prevent corruption and serious crime—not to make life difficult for regular, law-abiding people.»

Given these challenges, what changes would you recommend to prevent these kinds of issues?

First, data protection agencies need more power and resources. AI and big data aren’t going away, so we need stronger oversight. Agencies need the authority to ensure that databases are kept accurate and up to date, especially as they scale. Second, regulators must enforce a strict definition of who qualifies as a PEP. Not everyone with a political connection should be on the list. We need to return to quality over quantity if compliance is to work effectively.

So, a return to the core principles of responsible compliance?

Absolutely. Our original purpose was to prevent corruption and serious crime—not to make life difficult for regular, law-abiding people. Now, the system is catching innocent people while still failing to stop large-scale money laundering. Compliance needs to refocus on meaningful data and avoid the «bigger is better» mentality that’s taken over. And we, as data subjects, need to know that data protection laws and requirements are being met, correctly and rigorously, by the KYC databases.


David Leppan, who founded World-Check in 2000, grew up in South Africa before attending university in Salzburg, where he studied political science, though he never completed his thesis. After World-Check, he went on to co-found WealthX and Captis Intelligence, and most recently he has launched Managing Reputational Risk (MRR), an endeavor to raise concerns about Big Data, data protection violations, and «dirty data», and to bring about industry-wide change.