The EU AI Act entered into force on 1 August 2024. Prohibitions on unacceptable-risk AI systems have applied since 2 February 2025. Obligations for high-risk AI systems listed in Annex III, the category most relevant to payment and credit providers, apply from 2 August 2026. That is not a distant deadline. For firms using AI in fraud detection, credit scoring, transaction risk assessment, or customer onboarding decisioning, meaningful compliance requires work that most teams have not yet started.
Which Payment and Credit Activities Are Covered
The AI Act's high-risk classification applies to AI systems used in specific regulated activities listed in Annex III. The most relevant for payment and credit firms are:
- Credit scoring and creditworthiness assessment (Annex III, point 5(b)): AI systems that evaluate the creditworthiness of natural persons or establish their credit score, including those used in BNPL decisioning, credit limit setting, and loan origination.
- Risk assessment and pricing in insurance (point 5(c)): AI systems used for risk assessment and pricing in relation to natural persons in life and health insurance. The point is limited to those lines of insurance, not financial products generally.
- Biometric identification (point 1(a)): remote biometric identification systems. Note the carve-out: the Act excludes biometric verification whose sole purpose is to confirm that a person is who they claim to be, which is how 1:1 facial comparison in most eKYC flows operates. Pure verification therefore generally falls outside this category, but eKYC systems that go beyond one-to-one matching against a claimed identity need careful classification.
Fraud detection systems that operate post-transaction, flagging completed transactions for review, are not automatically high-risk under Annex III; point 5(b) in fact expressly excepts AI systems used for the purpose of detecting financial fraud from the creditworthiness category. Real-time fraud decisioning systems that determine whether a transaction is authorised or declined are closer to the line, however, and whether they fall in or out of the high-risk classification depends on how the system is characterised and the extent of human oversight in the decisioning flow.
What High-Risk Classification Requires
Conformity Assessment
High-risk AI systems must undergo a conformity assessment before deployment. For most financial services AI systems, this is a self-assessment — an internal process that produces a documented evaluation of compliance with the Act's requirements. Third-party assessment by a notified body is required only for a narrower category of systems including those used in biometric identification. The self-assessment must be documented and the documentation retained for 10 years after the system is placed on the market or put into service.
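One way to make the self-assessment auditable is to keep it as a structured record alongside the system. A minimal sketch in Python, assuming an internal record-keeping convention; every field name here is our own illustration, not a schema prescribed by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConformityAssessmentRecord:
    """One self-assessment for one high-risk AI system (illustrative fields)."""
    system_name: str
    assessed_on: date
    placed_on_market: date
    requirements_reviewed: list[str]  # e.g. risk management, data governance
    outcome: str                      # e.g. "conforms" or "remediation required"

    def retain_until(self) -> date:
        # Documentation must be retained for 10 years after the system is
        # placed on the market or put into service.
        d = self.placed_on_market
        try:
            return d.replace(year=d.year + 10)
        except ValueError:  # 29 February landing in a non-leap year
            return d.replace(year=d.year + 10, day=28)
```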
Technical Documentation
The AI Act requires detailed technical documentation covering: the intended purpose of the system; the data used to train and test it, including data governance practices; the metrics used to evaluate performance; the human oversight measures built into the system; and the cybersecurity measures in place. This documentation must be maintained and updated throughout the system's lifecycle.
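As a concrete starting point, those categories map naturally onto a machine-readable record that can be versioned with the model. A sketch with illustrative keys and placeholder values; Annex IV sets out the required content, not a file format:

```python
# Skeleton technical documentation record; keys mirror the categories above.
# All names and values are illustrative placeholders.
technical_documentation = {
    "intended_purpose": "creditworthiness assessment for consumer BNPL",
    "training_and_test_data": {
        "sources": ["loan_book_2019_2024"],  # hypothetical dataset name
        "governance": "link to data-governance record",
    },
    "performance_metrics": ["AUC", "approval rate by segment", "drift checks"],
    "human_oversight": "adverse decisions routed to manual review queue",
    "cybersecurity": "model access controls, input validation, audit logging",
    "last_updated": "2025-06-01",  # must be maintained through the lifecycle
}
```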
Data Governance
Training, validation, and testing datasets must be subject to data governance practices covering data collection methodology, data preparation operations, identification and mitigation of possible biases, and gaps and shortcomings. For credit scoring models trained on historical lending data, this requirement directly addresses a long-standing regulatory concern: models that perpetuate past discrimination by denying credit to historically underserved segments not because of individual risk factors but because of correlated demographic characteristics in the training data.
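What bias identification looks like in practice varies, and the Act does not mandate a specific metric. One simple screen a team might run on historical lending data is the approval rate of each segment relative to the best-treated segment; a sketch in Python with pandas, using hypothetical data:

```python
import pandas as pd

def approval_rate_ratio(df: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """Approval rate of each segment relative to the best-treated segment."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Hypothetical historical lending records.
history = pd.DataFrame({
    "postcode_band": ["A", "A", "B", "B", "B", "C"],
    "approved":      [1,   1,   0,   1,   0,   0],
})
print(approval_rate_ratio(history, "postcode_band", "approved"))
```

A ratio well below 1.0 for a segment is not proof of discrimination, but it flags a disparity that the data governance record should explain or mitigate.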
Human Oversight
High-risk AI systems must be designed so that natural persons can effectively oversee them, including the ability to intervene or override the system's output. For automated credit decisioning or fraud scoring systems that currently operate with minimal human review, the human oversight requirement is operationally significant — it is not sufficient to have a theoretical override capability if the volume of decisions makes exercise of that capability impractical.
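One pattern for making the override capability real rather than theoretical is to automate only confident decisions and size the human review queue explicitly. A sketch, with thresholds and numbers that are illustrative only:

```python
AUTO_APPROVE_THRESHOLD = 0.90  # illustrative; would be set from validation data

def route_application(score: float) -> str:
    """Route a credit application given a model score in [0, 1].

    Confident approvals are automated; adverse and borderline outcomes go
    to a human reviewer who can override the model's output.
    """
    if score >= AUTO_APPROVE_THRESHOLD:
        return "auto_approve"
    return "human_review"

def review_capacity_ok(queued_per_day: int, reviewers: int,
                       cases_per_reviewer: int = 80) -> bool:
    # Oversight is only effective if the queue is actually workable:
    # a backlog that grows without bound is a theoretical override.
    return queued_per_day <= reviewers * cases_per_reviewer
```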
Transparency to Users
Where a high-risk AI system makes or contributes to decisions that affect individuals, those individuals must be informed that an AI system was involved. For credit decisions, existing requirements under the Consumer Credit Directive and the GDPR already mandate explanation rights; the AI Act builds on these rather than replacing them, extending notification beyond the GDPR's focus on solely automated decisions to decisions where an AI system merely contributes.
The GDPR Intersection
Article 22 of the GDPR already restricts automated decision-making that produces legal or similarly significant effects on individuals, permitting it only with explicit consent, contractual necessity, or authorisation under Union or Member State law. High-risk AI systems used in credit decisions or identity verification sit squarely in Article 22 territory. The AI Act's requirements layer on top of, rather than replace, GDPR obligations. Firms with robust GDPR automated decision-making documentation are better positioned, but the AI Act's documentation requirements are more detailed on model performance metrics and bias testing than GDPR practice has typically demanded.
What to Do Before August 2026
- Inventory your AI systems. Map every system where an AI model contributes to a decision that affects consumers or the firm's risk position, and assess each against the Annex III high-risk criteria (a sketch of one possible inventory record follows this list).
- Classify with legal support. The boundaries of the high-risk classification are genuinely ambiguous for some real-time fraud systems. Get a documented legal opinion on the classification of systems that sit near the line.
- Begin technical documentation. Most firms do not yet have the training data governance documentation, performance metrics, or bias testing records at the level of detail the AI Act requires. Building this retrospectively takes longer than expected.
- Review human oversight design. If critical AI systems currently operate with automated-only decisioning and no practical human review pathway, redesigning that workflow before August 2026 is a genuine implementation project, not a paper exercise.
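For the inventory itself, a structured record per system keeps the classification question and its supporting evidence in one place. A sketch (requires Python 3.10+ for the `|` type syntax); all system names and fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AISystemInventoryEntry:
    """One row of an AI system inventory; fields are illustrative."""
    name: str
    decision_affected: str           # what the system decides or influences
    annex_iii_point: str | None      # e.g. "5(b) creditworthiness", or None
    classification: str              # "high-risk" / "not high-risk" / "unclear"
    legal_opinion_ref: str | None = None  # for systems near the line
    documentation_status: str = "not started"

inventory = [
    AISystemInventoryEntry(
        name="bnpl-limit-model",  # hypothetical
        decision_affected="BNPL credit limit for consumers",
        annex_iii_point="5(b) creditworthiness",
        classification="high-risk",
    ),
    AISystemInventoryEntry(
        name="realtime-fraud-decisioner",  # hypothetical
        decision_affected="authorise or decline transactions",
        annex_iii_point=None,
        classification="unclear",  # documented legal opinion needed
    ),
]
```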