WebID

EU AI Act

What You Need to Know About the EU Regulation on the Use of Artificial Intelligence (AI)

The EU AI Act is the world’s first comprehensive law regulating the use of artificial intelligence (AI). It aims to build trust in AI solutions while ensuring security, fundamental rights, and European values.
The law applies in stages from 2025. It sets strict requirements for the use of AI in critical infrastructure, establishes transparency obligations for AI-generated content, and at the same time promotes innovation by leaving scope for use cases that pose lower risks.

Key Aspects of the EU AI Act

The EU AI Act regulates which AI systems are permitted and which are prohibited in the European Union. The regulation establishes uniform rules for all member states for the sale and use of AI systems in the EU.

EU AI Act Defines a Risk-Based Classification

In addition to the requirements for the use of artificial intelligence that are binding for all industries, the EU AI Act focuses on the risk of specific use cases. AI systems are classified according to their potential risk to health, safety, and fundamental rights in a four-level model:

  • Unacceptable risk
    The use of AI-based solutions that manipulate human behavior or use social scoring, such as emotion recognition systems in the workplace or in educational institutions, is prohibited.
  • High-risk AI
Systems that are used in critical areas and could pose a particular threat to fundamental rights must meet strict requirements in terms of data quality, transparency, infrastructure, human oversight, and conformity assessment. This applies, for example, to AI solutions used in healthcare, education, human resources management, law enforcement, border control, and access to social benefits.
  • Transparency requirements
AI systems that are subject to special transparency requirements because they could be used to manipulate people (such as chatbots or systems that artificially generate images and/or videos) fall into the third risk level. Such AI-generated content must always be clearly labeled, so that users can recognize that they are interacting with a machine or viewing machine-generated content.
  • Minimal risk and general-purpose AI models (GPAI)
    Companies that provide or use minimal-risk systems, for which there are no specific requirements, can adhere to voluntary codes of conduct. However, general-purpose AI models (such as those underlying ChatGPT) must ensure transparency regarding training data and copyright.
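The four-level model above can be sketched as a simple classification, for instance in Python. Note that the use-case-to-tier mapping below is purely illustrative, not a legal determination; the names are assumptions made for this sketch.

```python
from enum import Enum

# Illustrative sketch of the EU AI Act's four risk tiers.
# The example mapping is hypothetical, not a legal classification.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"             # e.g. social scoring, workplace emotion recognition
    HIGH = "strict requirements"            # e.g. healthcare, HR, law enforcement
    LIMITED = "transparency obligations"    # e.g. chatbots, AI-generated images/videos
    MINIMAL = "voluntary codes of conduct"  # most other applications

EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the illustrative regulatory consequence for a known use case."""
    return EXAMPLE_CLASSIFICATION[use_case].value

print(obligations("credit_scoring"))  # strict requirements
```

In practice, the tier depends on the concrete deployment context (the same model can be high-risk in one setting and minimal-risk in another), so a real inventory records the use case, not just the model.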

Who Must Comply with the EU AI Act?

The EU AI Act distinguishes between providers/developers (“providers”), users in their own business operations (“deployers”), and distributors of high-risk systems. Regulated companies such as banks, payment service providers, fintechs, or crypto players can take on multiple roles depending on their setup, for example, acting as a provider for in-house developed models and at the same time as a deployer for purchased SaaS solutions.

Providers based outside the EU must also comply with the AI Act if they place their AI solution on the EU market or if its output is used on persons in the EU.

Enforcement is backed by the threat of heavy fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher, for non-compliance with the requirements.

High-risk AI in Compliance & AML

For AI systems subject to the GwG/AML regulation, the EU AI Act stipulates, among other things, strict requirements for biometric identification procedures, but at the same time makes it clear that pure biometric verifications are not automatically treated as comprehensive identifications. Many AML/CFT applications, such as transaction monitoring, network analysis, fraud scoring, and others, are considered high-risk systems and must be classified and documented accordingly.

At the same time, the AI Act defines specific exceptions. For example, AI-based fraud and money laundering prevention solutions are explicitly recognized as a legitimate purpose, which is why these systems are not covered by the general prohibitions. However, here too, it must be ensured that the protection of fundamental rights, high data quality, and human controls are guaranteed.

Key EU AI Act Requirements for Obligated Companies

AI-based high-risk systems must undergo formal risk and quality management, including data quality checks, model testing, logging, technical robustness, and cybersecurity. Providers must also carry out conformity assessments, issue an EU declaration of conformity, and affix a CE mark.

Companies in the financial sector have additional obligations, such as ensuring appropriate use, training employees, monitoring ongoing operations, and reporting serious incidents. Existing governance rules from financial regulatory law are partially considered, but do not completely replace the AI Act obligations.

Interfaces to AML/KYC & Governance

Under the EU AI Act, obligated companies that use AI-based systems for transaction monitoring, screening, or KYC onboarding, for example, must not only meet regulatory expectations for effectiveness and traceability, but also systematically embed AI Act obligations such as data governance, documentation, and human oversight. This applies, for example, to model changes, threshold adjustments, new data sources, and the handling of false positives and bias risks.

AI & Model Risk Governance Framework

Financial institutions should ideally establish an AI & Model Risk Governance Framework, i.e., a structured system of policies, processes, responsibilities, and controls that can help identify, assess, minimize, and continuously monitor the risks associated with the use of artificial intelligence and other mathematical models. Possible steps could include the following:

  • Creation of an AI use case inventory and classification of high-risk use cases (scoring, monitoring, biometrics).
  • Conducting an interface analysis between AI Act obligations and existing AML, MaRisk, and GDPR controls to identify potential gaps.
  • Creation of adapted governance structures (roles, approval processes, monitoring, training) and definition of a roadmap that includes the effective implementation of the most important obligations (from 2026).
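The first two steps above, building a use case inventory and analyzing control gaps, can be sketched as a small data structure. The fields and entries below are hypothetical; a real inventory is defined by the institution's own governance framework.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AI use case inventory entry.
@dataclass
class AIUseCase:
    name: str
    purpose: str                  # e.g. "transaction monitoring"
    risk_tier: str                # e.g. "high-risk" per the AI Act classification
    owner: str                    # accountable role, not an individual
    related_controls: list = field(default_factory=list)  # e.g. AML, MaRisk, GDPR

inventory = [
    AIUseCase("tx-monitoring-v2", "transaction monitoring", "high-risk",
              "Head of AML", ["AML", "GDPR"]),
    AIUseCase("kyc-biometrics", "biometric identity verification", "high-risk",
              "Head of Onboarding", ["AML", "GDPR"]),
]

# Interface/gap analysis step: high-risk use cases with no mapped MaRisk control.
gaps = [u.name for u in inventory
        if u.risk_tier == "high-risk" and "MaRisk" not in u.related_controls]
print(gaps)  # ['tx-monitoring-v2', 'kyc-biometrics']
```

Keeping the inventory machine-readable makes the gap analysis repeatable, so it can be rerun whenever models change, thresholds are adjusted, or new data sources are added.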

The Rising Trend of Reusable Digital Identities

This white paper examines the emerging global trend of reusable digital identities, exploring their evolution, benefits, challenges, and the role they play in shaping the future of digital identity management.