How Financial Regulators View Biometric Liveness Technology
A research-level analysis of how financial regulators view biometric liveness technology in AML, KYC, PSD2, and remote onboarding oversight.

Financial regulators' expectations for biometric liveness have changed fast over the last few years. What used to look like a vendor-side product feature now shows up inside remote onboarding guidance, AML controls, fraud supervision, and strong customer authentication debates. For banks, fintechs, and identity platform teams, the message from regulators is not that liveness is an optional, nice-to-have piece of security polish. It is that remote identity systems need a credible way to prove a real person is present, especially when onboarding happens without staff intervention.
“Where a credit or financial institution uses biometric identification methods as part of the customer onboarding process in an unattended remote customer onboarding solution, it should ensure that the process includes liveness detection.” — European Banking Authority, Guidelines on the use of remote customer onboarding solutions, 2023
Why regulators now care so much about biometric liveness
Regulators did not suddenly become fascinated by face biometrics for their own sake. They are reacting to a practical problem: more customer onboarding has moved online at the same time that spoofing attacks, synthetic identity fraud, replay attacks, and deepfake-assisted account opening have become easier to run at scale.
That is why financial regulators usually frame biometric liveness technology in four buckets:
- customer due diligence and KYC reliability
- AML and counter-terrorist financing controls
- payment authentication and account security
- privacy and governance for sensitive biometric data
The tone is important. Regulators rarely say, “use this exact liveness model.” What they do say is that remote onboarding controls must be strong enough to confirm that the applicant is a genuine, physically present person and that institutions must understand the risks of impersonation, presentation attacks, and data misuse.
What major regulatory bodies are actually signaling
| Regulator or standards body | What they focus on | Why it matters for financial firms |
|---|---|---|
| European Banking Authority (EBA) | Remote onboarding controls, liveness in unattended flows, reliability of biometric checks | Makes liveness a practical expectation for EU-regulated onboarding programs |
| FATF | Reliable digital identity for AML/CFT, assurance levels, governance | Pushes firms to treat digital identity proofing as a risk-managed compliance control |
| NIST | Presentation attack detection evaluation, identity assurance, spoof-resistance | Gives procurement and risk teams a way to judge whether liveness claims are technically credible |
| Bank of Thailand | Governance, consent, process controls, communication with customers | Shows how national regulators combine fraud reduction with biometric data oversight |
| PSD2 / European payments supervision | Strong customer authentication and inherence factors | Forces payment providers to think beyond convenience and toward anti-spoofing resilience |
The broad pattern is consistent across jurisdictions. Regulators are comfortable with biometrics when they are embedded inside a controlled identity framework. They are much less comfortable when firms treat biometrics as a black-box shortcut.
The EBA view: liveness is part of remote onboarding credibility
The clearest regulatory signal in Europe came from the European Banking Authority's remote customer onboarding guidelines. In the 2023 guidance, the EBA said firms using biometric identification in unattended remote onboarding should include liveness detection. That is a pretty direct statement by regulatory standards.
What makes the EBA position notable is that it does not present liveness as marketing jargon. It treats liveness as a control that helps verify physical presence during onboarding. The EBA also ties biometric use to broader expectations around:
- the quality and integrity of identification data
- human oversight and escalation paths
- fraud monitoring and incident management
- testing of the technologies being used
- data protection and clear customer communication
In other words, the regulator's view is practical: biometric liveness is useful, but only if the institution can explain how it works inside the full onboarding process.
That same logic shows up in our earlier analysis of remote identity proofing for government services and passive liveness detection technology. The common thread is that trust comes from layered controls, not from face matching alone.
The FATF view: digital identity can support AML, but assurance matters
The Financial Action Task Force took a slightly broader route. In its digital identity guidance, FATF did not write a liveness-specific rulebook. Instead, it argued that reliable digital ID systems can support customer identification and verification for AML and counter-terrorist financing purposes when institutions understand assurance levels, governance, and fraud risks.
That matters because FATF shapes how many national regulators think about remote onboarding. Its framework pushes compliance teams to ask harder questions:
- How strong is the identity proofing process?
- Can the institution defend against impersonation?
- What evidence exists that the person is real and present?
- How are exceptions, attacks, and model weaknesses handled?
This is the part many product teams underestimate. From a regulator's perspective, liveness is not mainly a UX enhancement. It is evidence. It helps show that the institution did not simply trust a selfie and hope for the best.
NIST's influence: regulators and buyers want tested anti-spoofing, not vague claims
NIST is not a financial regulator in the same way the EBA or a central bank is, but its influence is hard to overstate. When financial institutions evaluate biometric liveness, NIST's identity guidance and its Face Analysis Technology Evaluation for presentation attack detection give risk teams a shared technical baseline.
That baseline matters because regulators keep pushing firms toward demonstrable control effectiveness. NIST's work makes it easier to distinguish between:
- face recognition performance
- liveness or presentation attack detection performance
- performance against known attacks versus unknown attacks
- security claims versus independently measured results
Researchers such as Ivana Chingovska, André Anjos, and Sébastien Marcel at the Idiap Research Institute helped establish the benchmarking culture behind modern face anti-spoofing back in 2012 with the REPLAY-ATTACK work. More recently, NIST's FATE PAD work has kept pressure on the market by showing that spoof resistance should be measured, not assumed.
For regulated financial firms, that changes procurement. A compliance team can no longer rely on broad vendor phrases like “AI-powered identity assurance.” They need to know what attack classes were tested and how often bona fide users were rejected.
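Those two questions map directly onto the standard PAD error metrics from ISO/IEC 30107-3 that evaluations like FATE PAD report: APCER (the share of attack presentations wrongly accepted) and BPCER (the share of bona fide users wrongly rejected). A minimal sketch of how a risk team might compute them per attack class, using invented scores purely for illustration:

```python
# Hypothetical PAD scores: higher = more likely bona fide.
# All data below is invented for illustration only.

def apcer(attack_scores, threshold):
    """Attack Presentation Classification Error Rate:
    fraction of attack presentations accepted at this threshold."""
    return sum(s >= threshold for s in attack_scores) / len(attack_scores)

def bpcer(bona_fide_scores, threshold):
    """Bona fide Presentation Classification Error Rate:
    fraction of genuine presentations rejected at this threshold."""
    return sum(s < threshold for s in bona_fide_scores) / len(bona_fide_scores)

# Scores grouped by attack class, as a PAD test report would break them out.
attacks = {
    "print": [0.10, 0.20, 0.35, 0.15],
    "replay": [0.30, 0.55, 0.25, 0.40],
}
bona_fide = [0.80, 0.90, 0.65, 0.95, 0.70, 0.85]

threshold = 0.50
per_class = {name: apcer(scores, threshold) for name, scores in attacks.items()}
print(per_class)                    # APCER broken out per attack class
print(bpcer(bona_fide, threshold))  # how often real users would be blocked
```

The point of breaking APCER out per attack class is exactly the procurement question above: a system can look strong against printed photos while failing badly against replays, and a single blended number hides that.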
How regulators balance fraud control against privacy risk
This is where things get more complicated. Regulators generally like the anti-fraud value of liveness, but they also treat biometric data as unusually sensitive.
Under GDPR, biometric data used for uniquely identifying a person falls into a special category. That brings stricter obligations around lawful basis, minimization, retention, access control, and transparency. Similar pressure exists outside Europe too, even where the legal language differs.
The Bank of Thailand's guidance on biometric technology adoption is useful here because it reflects a common regulatory instinct: if a bank wants to use biometrics, it needs proper governance, clear customer communication, secure collection processes, and controls around storage and use.
So the regulator's position is not simply “more biometrics equals more security.” It is closer to this:
- use biometrics where they materially reduce risk
- prove the process is reliable
- protect the biometric data as sensitive information
- explain the process clearly to customers
- maintain fallback and review processes when automation fails
That is a tougher standard than most vendor decks imply. Honestly, it should be.
Industry applications regulators are watching most closely
Remote account opening
This is the most obvious use case. Banks, lenders, exchanges, and fintech apps want customers to onboard remotely without branch visits. Regulators want those flows to resist print attacks, replay attacks, and impersonation attempts. Liveness has become one of the clearest signals that a selfie step is doing more than capturing an image.
High-risk transaction step-up
Financial firms are also using biometric liveness for account recovery, payee changes, large withdrawals, and other high-risk actions. Regulators tend to view this favorably when the control is part of a broader risk-based authentication framework rather than a standalone gimmick.
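What "part of a broader risk-based framework" means in practice is that liveness is one escalation step among several, triggered by action risk rather than applied uniformly. A simplified policy sketch, with action names and thresholds that are entirely hypothetical:

```python
# Hypothetical risk-based step-up policy. Action names, the 10,000 amount
# threshold, and control names are invented for illustration; they do not
# reflect any specific regulator's requirements.

HIGH_RISK_ACTIONS = {"account_recovery", "payee_change", "large_withdrawal"}

def required_controls(action: str, amount: float = 0.0,
                      device_trusted: bool = True) -> list[str]:
    """Return the controls this sketch policy would demand, escalating to
    biometric liveness only for higher-risk actions or amounts."""
    controls = ["password_or_passkey"]
    if action in HIGH_RISK_ACTIONS or amount >= 10_000:
        controls.append("biometric_match_with_liveness")
    if not device_trusted:
        controls.append("manual_review_queue")  # fallback when automation alone is weak
    return controls

print(required_controls("balance_check"))
print(required_controls("payee_change", device_trusted=False))
```

The design choice regulators tend to favor is visible in the shape of the function: liveness is invoked by risk signals and backed by a manual fallback, rather than bolted on as a universal gate.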
Cross-border digital identity and eKYC
As more onboarding crosses borders, firms have to map one biometric process onto multiple legal regimes. This is where FATF-style assurance language becomes useful. Liveness helps create a more defensible digital identity record, but only when the institution can show consistent governance across markets.
Current research and evidence
The research base behind liveness is much deeper than the average compliance memo suggests. Chingovska, Anjos, and Marcel's 2012 REPLAY-ATTACK paper at Idiap helped define how face anti-spoofing systems should be benchmarked. That mattered because early biometric deployments were often too easy to fool with printed photos or replayed screens.
Since then, the market has moved from basic anti-spoofing demos toward more formal PAD measurement. NIST's FATE PAD evaluations have kept attention on attack presentation classification error rates and bona fide rejection trade-offs. That technical language sounds dry, but it maps directly to regulatory concerns:
- how many attacks get through
- how many real users get blocked
- whether the system generalizes beyond familiar attack samples
- whether the institution can defend its controls to auditors and supervisors
The regulatory takeaway is simple: financial supervisors are more likely to trust liveness when firms can connect it to documented testing, standards language, and operational controls.
The future of biometric liveness in financial regulation
The next phase will probably be less about whether liveness is useful and more about what kind of liveness stands up to modern fraud.
A few changes look likely:
- regulators will ask tougher questions about deepfake and injection attacks, not just printed-photo spoofs
- more jurisdictions will treat liveness as a baseline component of remote onboarding rather than an advanced feature
- independent testing and auditability will matter more in procurement
- privacy regulators will push harder on storage, consent, and model governance for biometric data
There is also a strategic shift happening. Financial firms increasingly want passive liveness because it reduces friction compared with challenge-response flows. Regulators usually accept that direction if the control remains effective and well governed. If it becomes a conversion-first shortcut with weak spoof resistance, that acceptance will fade fast.
Frequently Asked Questions
Do financial regulators explicitly require liveness detection?
Sometimes yes, sometimes indirectly. The EBA's 2023 remote onboarding guidelines are unusually explicit in saying that unattended biometric onboarding should include liveness detection. In other jurisdictions, the requirement often appears indirectly through expectations around fraud prevention, customer verification, and strong authentication.
Is biometric liveness mainly an AML tool or an authentication tool?
Both. Regulators care about it during onboarding because it strengthens KYC and customer due diligence. They also care about it in payment and account-security contexts because it helps prevent spoofed inherence-based authentication.
Why isn't face matching alone enough for regulators?
Because face matching tells you whether the image resembles a customer, not whether the input came from a live human. Regulators worry about replay attacks, printed photos, masks, and increasingly deepfake-assisted fraud. Liveness helps close that gap.
What is the biggest regulatory risk when deploying liveness?
Two risks usually sit at the top: weak anti-spoofing performance and poor biometric data governance. A system that cannot defend against attacks creates fraud exposure. A system that mishandles biometric data creates privacy and compliance exposure.
Financial institutions are not being pushed toward biometric liveness because it sounds modern. They are being pushed there because remote onboarding needs stronger evidence of real-person presence than a static image can provide. If your team is evaluating where passive liveness and presentation attack detection fit in that stack, Circadify is building for exactly that direction. See the Integration guide → circadify.com/solutions/fraud-detection.
