
AI Act: Glossary of 50 Essential Terms for Compliance

July 6, 2025 · 18 min read · Yuliia Habriiel

Updated 6 July 2025 – 44 key terms essential for navigating upcoming obligations under the AI Act.

The EU AI Act is transforming how AI systems are developed, deployed, and governed across Europe and globally. Whether you’re an AI startup founder, compliance officer, lawyer, or data scientist, mastering these 44 key terms is crucial for navigating upcoming obligations and building AI systems that are ethical, trustworthy, and legally compliant.

From high-risk AI systems, risk classification, and general-purpose AI (GPAI) to conformity assessments, CE marking, and fundamental rights impact assessments, this glossary demystifies the regulation’s language:

  • Understand how providers, users, importers, distributors, and authorised representatives are defined, and what post-market monitoring, human oversight, and data governance mean under this new law.
  • Explore critical concepts like training data quality, bias mitigation, transparency obligations, technical documentation, market surveillance, and notified bodies.
  • Learn how harmonised standards, common specifications, and sandboxing support compliance, while terms such as social scoring, emotion recognition systems, and remote biometric identification outline prohibited or restricted practices.
  • Discover how the AI Office, national supervisory authorities, and regulatory sandboxes interact to enforce the Act, and why audit-ready outputs, corrective actions, withdrawals and recalls, and serious incident reporting are essential for operational readiness.

  • Grasp nuances around substantial modifications, systematic monitoring, third-party evaluation, deactivation requirements, data logging, and general-purpose AI foundation models.
  • Finally, understand strategic elements such as regulatory interpretation engines, AI compliance copilots, automated risk categorisation, real-time compliance monitoring, and custom legal LLMs that are emerging to support companies under the AI Act.

European AI Act Compliance Course: From Basics to Full Mastery

The EU AI Act is here—and compliance is now a must. This course gives you the tools to turn complex AI regulation into action. Learn the Act’s core principles, risk categories, and obligations, then put them into practice with ready-to-use templates and checklists.

€299

High-Risk AI Systems

High-risk AI systems are AI applications that pose significant risks to health, safety, or fundamental rights under the EU AI Act. Examples include AI used in medical devices, biometric identification, recruitment, and critical infrastructure management.

High-risk AI systems require strict compliance with risk management, documentation, transparency, and human oversight obligations. Companies deploying high-risk AI must implement rigorous controls to ensure their systems are safe, fair, and legally compliant.


Risk Classification

Risk classification under the AI Act determines the level of regulatory requirements an AI system must meet. AI systems are categorised into unacceptable risk (prohibited), high-risk (strict obligations), limited risk (transparency requirements), and minimal risk (no specific obligations).

Proper risk classification is the first step in compliance, as it defines whether an AI system must undergo conformity assessments and documentation. Companies must assess their AI systems against Annex III categories and use-case criteria to classify risk accurately.
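
For teams looking to automate this first step, a minimal sketch in Python might look like the following; the category names and groupings are illustrative assumptions, not the Act's legal wording:

```python
# Minimal risk-tier lookup. The category sets below are illustrative
# placeholders, not a reproduction of Article 5 or Annex III.
PROHIBITED = {"social_scoring", "real_time_remote_biometric_id"}
HIGH_RISK = {"medical_device", "recruitment", "credit_scoring", "critical_infrastructure"}
LIMITED_RISK = {"chatbot", "synthetic_media_generation"}

def classify_risk(use_case: str) -> str:
    """Return the AI Act risk tier for a declared use case (illustrative only)."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

print(classify_risk("recruitment"))   # -> high
```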


General-Purpose AI (GPAI)

General-purpose AI refers to AI systems designed to perform broadly applicable functions like text generation, image recognition, or language translation. These models can be integrated into various downstream applications across industries and tasks.

Under the AI Act, GPAI providers have specific transparency, documentation, and risk mitigation obligations to ensure safe deployment. Examples include large language models like GPT and image classification models used in multiple sectors.


Conformity Assessments

Conformity assessments are formal processes to verify that high-risk AI systems meet all regulatory requirements before being placed on the EU market. This involves evaluating risk management systems, data quality controls, technical documentation, and human oversight measures.

Conformity assessments can be conducted internally for some systems or require third-party notified body evaluation for others. Successfully completing this process results in CE marking and legal market access within the EU.


CE Marking

CE marking indicates that an AI system complies with the EU AI Act and other applicable European regulations. It acts as a declaration by the provider that the system meets safety, transparency, and compliance standards required for market access.

High-risk AI systems must undergo conformity assessment procedures before receiving CE marking. Displaying the CE mark allows AI products to be legally offered and used within the European Economic Area.


Fundamental Rights Impact Assessments

A fundamental rights impact assessment (FRIA) evaluates how a high-risk AI system may affect individuals’ fundamental rights, such as privacy, non-discrimination, and freedom of expression.

Under the AI Act, providers and users deploying high-risk AI must conduct these assessments as part of their compliance process. The FRIA identifies potential rights-related risks, mitigation measures, and documentation to demonstrate compliance. It is essential for ethical AI deployment and building trust with users, regulators, and society.


AI Providers

Providers are natural or legal persons (companies or individuals) who develop an AI system or have it developed and place it on the EU market under their own name or trademark.

Providers are responsible for ensuring their AI systems comply with all applicable AI Act requirements. This includes conformity assessments, technical documentation, and post-market monitoring. Providers play a critical role as the primary accountable entity under the AI Act framework.


Users

Users are individuals or organisations that deploy or use an AI system under their authority within professional or commercial activities. They are responsible for operating AI systems according to the provider’s instructions and compliance requirements.

Under the AI Act, users of high-risk AI systems also have obligations, such as monitoring performance and reporting serious incidents. Users are distinct from consumers who use AI in a personal, non-professional capacity.


AI Importers

Importers are entities established within the EU who place AI systems from non-EU countries onto the EU market. They must ensure the imported AI systems comply with the AI Act before distribution.

The responsibilities of AI importers include verifying CE marking, conformity assessments, and ensuring technical documentation is available. Importers act as compliance gatekeepers when AI systems cross into the European market from external jurisdictions.


AI Distributors

Distributors are companies or individuals who make AI systems available on the market without altering their properties. Their role is to ensure that the AI systems they distribute have the required CE marking and conformity declarations.

While the obligations of AI distributors are lighter than providers or importers, they must still act if they believe an AI system is non-compliant or poses a risk. Distributors facilitate market access while supporting regulatory enforcement.


Authorised Representatives

Authorised representatives are EU-based persons or organisations appointed by a non-EU AI provider to act on their behalf regarding AI Act obligations. They serve as the official contact point for market surveillance authorities.

Their responsibilities include ensuring the availability of technical documentation and cooperating on compliance actions. Authorised representatives enable non-EU companies to place AI systems on the EU market legally.


Post-Market Monitoring

Post-market monitoring involves continuous review of an AI system’s performance after it has been placed on the market. Providers must proactively collect and analyse operational data to detect risks, non-compliance, or performance issues.

This process supports rapid corrective actions if safety, rights, or compliance concerns emerge. Post-market monitoring is critical to maintain trust and legal alignment throughout an AI system’s lifecycle.
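
As a rough illustration of what post-market monitoring can look like in code, the sketch below compares a live error rate against the baseline recorded at conformity assessment; the tolerance value is an assumed example, not a figure from the Act:

```python
# Illustrative post-market check: compare the live error rate against the
# baseline recorded during conformity assessment and flag drift for review.
def needs_corrective_review(baseline_error: float, live_error: float,
                            tolerance: float = 0.02) -> bool:
    """Flag the system when live error exceeds the baseline by more than `tolerance`."""
    return (live_error - baseline_error) > tolerance

if needs_corrective_review(baseline_error=0.05, live_error=0.09):
    print("Performance drift detected: open a corrective-action ticket.")
```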


Human Oversight

Human oversight ensures that AI systems, especially high-risk applications, remain under meaningful human control. The AI Act requires systems to be designed and operated in a way that enables effective human intervention, supervision, and the ability to override outputs if necessary. This mitigates risks of automation bias, discrimination, or harm. Human oversight is a foundational safeguard for ethical and accountable AI deployment.
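
A hedged sketch of what such a human-in-the-loop gate could look like in practice is shown below; the confidence threshold and the notion of "sensitive" categories are illustrative assumptions:

```python
# Illustrative human-in-the-loop gate: automated outputs below a confidence
# threshold, or in assumed "sensitive" categories, are routed to a human
# reviewer who can confirm or override the proposed decision.
AUTO_THRESHOLD = 0.95                     # assumption: tuned per risk assessment
SENSITIVE_CATEGORIES = {"recruitment", "credit"}

def decide(score: float, category: str, human_review) -> str:
    if score < AUTO_THRESHOLD or category in SENSITIVE_CATEGORIES:
        return human_review(score, category)   # the human keeps the final say
    return "approved_automatically"

# Example: the human reviewer rejects the system's proposed decision.
print(decide(0.97, "recruitment", lambda s, c: "rejected_by_reviewer"))
```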


AI Compliance Copilots

AI compliance copilots are interactive AI assistants that guide users through compliance processes step by step. They combine natural language processing with regulatory knowledge bases to answer user questions, automate documentation drafting, and recommend next actions. Under the EU AI Act, copilots help legal, product, and engineering teams navigate requirements efficiently while ensuring accuracy and accountability (see EY, AI compliance enablement, 2023). Here’s how eyreACT approaches this with its own AI Compliance Assistant:

Ready to simplify your AI Act compliance?

eyreACT is the definitive EU AI Act compliance platform. From automated AI system classification to ongoing risk monitoring, we’re building the developer-friendly, business-friendly tools you need to confidently deploy AI within the European regulatory framework.

Training Data Quality

Training data quality refers to the relevance, accuracy, representativeness, and integrity of datasets used to train AI systems. Under the EU AI Act, providers of high-risk AI must ensure that training data is sufficiently broad and free from errors or biases to produce reliable outputs.

Poor data quality increases risks of discriminatory or unsafe AI behaviour, potentially leading to regulatory non-compliance. Source: EU AI Act, Article 10
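
As an illustration, a provider might run simple pre-training checks like the ones below; the thresholds are assumptions for demonstration, not values from Article 10:

```python
import pandas as pd

# Illustrative data-quality checks before training a high-risk system.
# The thresholds are assumptions for demonstration, not values from Article 10.
df = pd.DataFrame({
    "age": [34, 51, None, 29],
    "outcome": ["hire", "hire", "reject", "hire"],
})

missing_share = df.isna().mean()                        # share of missing values per column
class_balance = df["outcome"].value_counts(normalize=True)

print(missing_share[missing_share > 0.10])              # columns with too many gaps
print(class_balance[class_balance < 0.30])              # under-represented outcome classes
```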


Bias Mitigation

Bias mitigation includes techniques and processes to detect, assess, and reduce unfair or discriminatory biases within AI models and their outputs. The EU AI Act mandates providers to proactively implement bias mitigation measures, especially for high-risk systems affecting fundamental rights. This involves using balanced data, fairness testing, and appropriate model design interventions. Source: EU AI Act, Recital 44
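
One common fairness test is the demographic parity difference between groups; the sketch below illustrates the idea with an assumed threshold that is not prescribed by the Act:

```python
# Illustrative fairness test: demographic parity difference between two groups.
# The 0.10 threshold is an assumption, not a figure from the AI Act; a gap above
# it would trigger rebalancing, reweighting, or model-design changes.
def positive_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 0]   # selection decisions observed for group A
group_b = [1, 0, 0, 0, 1, 0]   # selection decisions observed for group B

gap = abs(positive_rate(group_a) - positive_rate(group_b))
if gap > 0.10:
    print(f"Demographic parity gap {gap:.2f} exceeds threshold; investigate bias.")
```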


Transparency Obligations

Transparency obligations require that AI systems, particularly high-risk ones, provide users with clear information on their capabilities, limitations, and intended uses.

Under the EU AI Act, providers must ensure users can interpret and appropriately use AI outputs, and inform individuals when they are interacting with AI systems. This enhances accountability, prevents misuse, and builds trust in AI deployments. Source: EU AI Act, Articles 13 and 52


Technical Documentation

Technical documentation under the AI Act is a comprehensive set of records demonstrating that an AI system complies with regulatory requirements. It includes system design, risk management procedures, data quality processes, performance testing results, and human oversight measures. Technical documentation must be available to market surveillance authorities for audits or conformity assessments. Source: EU AI Act, Annex IV
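
As a rough illustration, teams sometimes keep these records in a structured, machine-readable form; the field names below are a simplified assumption, not the regulation's exhaustive Annex IV list:

```python
from dataclasses import dataclass, asdict
import json

# Illustrative container for Annex IV-style documentation. The field names are
# a simplified assumption, not the regulation's exhaustive list.
@dataclass
class TechnicalDocumentation:
    system_name: str
    intended_purpose: str
    risk_management_summary: str
    data_governance_summary: str
    performance_metrics: dict
    human_oversight_measures: str

doc = TechnicalDocumentation(
    system_name="CV screening assistant",
    intended_purpose="Rank job applications for recruiter review",
    risk_management_summary="Quarterly risk review; bias testing per release",
    data_governance_summary="Representative EU applicant data; personal data minimised",
    performance_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
    human_oversight_measures="A recruiter approves every shortlist",
)
print(json.dumps(asdict(doc), indent=2))   # export for auditors or notified bodies
```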


Market Surveillance

Market surveillance refers to the monitoring and enforcement activities conducted by national authorities to ensure AI systems on the market comply with the EU AI Act. Authorities can request technical documentation, conduct inspections, and mandate corrective actions or recalls if non-compliance is detected.

Market surveillance protects public safety, rights, and maintains trust in AI deployments. Source: EU AI Act, Chapter VI


Notified Bodies

Notified bodies are independent organisations designated by EU Member States to assess the conformity of certain high-risk AI systems before they are placed on the market. They conduct audits, technical reviews, and testing to verify regulatory compliance. The assessment by notified bodies is required for AI systems involving complex or sensitive risks, ensuring systems meet the strict standards set by the regulation. Source: EU AI Act, Article 33


Harmonised Standards

Harmonised standards are technical specifications developed by European standardisation organisations to support regulatory compliance. Under the AI Act, adherence to harmonised standards creates a presumption of conformity with related requirements. This simplifies compliance for AI providers by offering clear technical pathways to meet legal obligations. Source: EU AI Act, Article 40


Common Specifications

Common specifications are legally binding technical requirements adopted by the European Commission when harmonised standards are insufficient or absent. They define precise compliance expectations to ensure safety, reliability, and rights protection for AI systems.

Providers must follow common specifications if no harmonised standard exists for their system (read the full text: EU AI Act, Article 41).


Sandboxing

Sandboxing refers to controlled environments created by regulatory authorities for testing innovative AI systems before full market deployment. Under the AI Act, Member States establish regulatory sandboxes that let providers validate products under supervision while ensuring compliance with safety and fundamental rights requirements.

Participation in a sandbox supports innovation and provides practical guidance on meeting AI Act obligations. Source: EU AI Act, Article 53


Social Scoring

Social scoring is the evaluation or classification of individuals based on social behaviour, socio-economic status, or personal characteristics to infer general trustworthiness or worth.

The EU AI Act prohibits social scoring by public authorities due to risks of discrimination, surveillance, and human dignity violations. This ensures AI is not used for mass behavioural control or unjust profiling. Source: EU AI Act, Article 5(1)(c)


Emotion Recognition Systems

Emotion recognition systems detect or infer human emotional states based on biometric data, facial expressions, voice, or physiological signals. Under the AI Act, their use is highly restricted in public spaces and regulated contexts due to privacy and fundamental rights risks. They are often categorised as high-risk systems needing strict safeguards. Source: EU AI Act, Article 5, Annex III


Remote Biometric Identification

Remote biometric identification uses AI to identify individuals at a distance based on biometric data such as facial images or gait analysis. The AI Act generally prohibits real-time remote biometric identification in public spaces for law enforcement, with limited exceptions (EU AI Act Art. 5(1)(d)). This restriction safeguards privacy and freedom of assembly, and prevents mass surveillance misuse. Source: EU AI Act, Article 5(1)(d)


National Supervisory Authorities

National supervisory authorities are designated by EU Member States to oversee AI Act compliance within their jurisdiction. They conduct market surveillance, enforce penalties, and review conformity assessments (EU AI Act Art. 59). These authorities work with the AI Office to ensure harmonised enforcement across the EU. Source: EU AI Act, Article 59


Audit-Ready Outputs

Audit-ready outputs are system-generated reports and documentation demonstrating AI Act compliance. They include risk assessments, technical documentation, and conformity evaluation evidence ready for inspection by authorities or notified bodies.

Having audit-ready outputs ensures operational readiness and reduces the risk of enforcement penalties. Source: EU AI Act, Annex IV and conformity assessment provisions


Corrective Actions

Corrective actions are measures taken by AI providers or users to address non-compliance, safety risks, or performance issues in AI systems. They may include software updates, algorithm adjustments, or operational changes (EU AI Act Art. 65).

Providers are required to implement corrective actions promptly upon identifying issues through post-market monitoring or regulatory notifications. Source: EU AI Act, Article 65


Withdrawals and Recalls

Withdrawals and recalls involve removing AI systems from the market or user environments if they pose serious risks or fail to meet compliance requirements.

Under the AI Act, providers must withdraw or recall systems proactively upon discovering non-conformities or if ordered by authorities (EU AI Act Art. 66). This ensures user safety and maintains trust in AI technologies. Source: EU AI Act, Article 66


Serious Incident Reporting

Serious incident reporting is the obligation for providers and users to notify national authorities of AI system incidents that lead to, or could lead to, death, serious health deterioration, or rights infringements.

Reports must be submitted without undue delay and include detailed incident descriptions. This enables authorities to take appropriate risk mitigation actions. Source: EU AI Act, Article 62
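
A provider might capture the facts of an incident in a structured internal record before notifying the authority; the fields below are illustrative assumptions, not a prescribed reporting template:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Illustrative internal record of a serious incident, prepared ahead of the
# notification to the competent national authority. Fields are assumptions.
@dataclass
class SeriousIncident:
    system_id: str
    occurred_at: str
    description: str
    harm_category: str          # e.g. "health", "safety", "fundamental_rights"
    immediate_measures: str

incident = SeriousIncident(
    system_id="triage-model-v3",
    occurred_at=datetime.now(timezone.utc).isoformat(),
    description="Triage model under-prioritised a critical patient case",
    harm_category="health",
    immediate_measures="Model rolled back; manual triage reinstated",
)
print(json.dumps(asdict(incident), indent=2))   # payload for the authority report
```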


Substantial Modifications

Substantial modifications are significant changes to an AI system’s design, purpose, or functionality that could affect its compliance with the AI Act. Providers must reassess conformity if such modifications are made post-certification. This ensures ongoing compliance despite system updates or repurposing. Source: EU AI Act, Article 3(23)


Systematic Monitoring

Systematic monitoring refers to continuous, structured observation of AI system performance and compliance throughout its lifecycle. It is part of post-market monitoring obligations under the AI Act to identify risks or deviations from intended performance. Effective monitoring supports prompt corrective actions and regulatory reporting. Source: EU AI Act, Article 61


Third-Party Evaluation

Third-party evaluation involves external conformity assessments conducted by notified bodies to verify that high-risk AI systems comply with the AI Act. This applies to certain high-risk categories where self-assessment is insufficient. It enhances trust and accountability in high-impact AI applications. Source: EU AI Act, Article 43


Deactivation Requirements

Deactivation requirements mandate that AI systems, especially high-risk or biometric identification tools, include functionality to be turned off or overridden by human operators. This is critical to maintain human oversight and control over automated decisions. Providers must implement these features as part of their risk management strategies. Source: EU AI Act, Recital 47


Data Logging

Data logging involves recording AI system operations, inputs, outputs, and decisions to ensure traceability and auditability. The AI Act requires high-risk systems to maintain logs that can demonstrate compliance and support investigations in case of incidents (EU AI Act Art. 12). Effective data logging is essential for transparency and regulatory accountability. Source: EU AI Act, Article 12
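
As an illustration, each automated decision can be written to an append-only structured log; the exact fields to retain should follow your own Article 12 analysis, and those shown here are assumptions:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative per-decision audit log entry for traceability. The fields shown
# here are assumptions; retain whatever your own Article 12 analysis requires.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system_audit")

def log_decision(system_id: str, inputs: dict, output: str, model_version: str) -> None:
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }))

log_decision("loan-scoring-v2", {"income": 42000, "term_months": 36}, "approve", "2.4.1")
```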


General-Purpose AI Foundation Models

General-purpose AI foundation models are large-scale AI systems trained on broad datasets to perform versatile tasks, such as language generation, translation, or image recognition. Under the AI Act, these models must meet transparency and risk mitigation obligations if integrated into downstream high-risk applications. Providers of GPAI models hold specific responsibilities to ensure safe deployment across contexts. Source: EU AI Act, Recital 60 & Article 52

Book a demo with eyreACT to simplify your AI Act compliance

The EU AI Act is more complex than the GDPR, but we help you nail it. From automated AI system classification to ongoing risk monitoring, we’re building the developer-friendly, business-friendly tools you need to confidently deploy AI within the European regulatory framework.

Regulatory Interpretation Engines

Regulatory interpretation engines are AI-powered tools designed to analyse and translate complex legal texts into actionable compliance steps. They process regulatory requirements like the EU AI Act, identify applicable obligations for a company’s specific use cases, and suggest workflows or documentation needs (Brkan & Bonnet, 2022).

AI Act interpretation engines, such as the one used by eyreACT, reduce reliance on manual legal interpretation, saving time and improving accuracy in compliance implementation. For more guidance, read Brkan & Bonnet, AI and the Law (2022).


Data Governance

Data governance under the AI Act refers to managing the quality, integrity, and appropriateness of data used to train, test, and validate AI systems. It includes ensuring datasets are relevant, representative, free from bias, and processed in compliance with data protection laws.

Strong data governance minimises risks of discrimination or unsafe AI outcomes. It is a mandatory requirement for all high-risk AI system providers under the regulation.


Automated Risk Categorisation

Automated risk categorisation uses AI to classify AI systems based on regulatory risk levels under frameworks like the EU AI Act. It analyses system functionality, use case, and deployment context to determine if it is unacceptable, high-risk, limited-risk, or minimal-risk (EU AI Act, Annex III). This automation speeds up compliance assessments, ensuring companies prioritise the correct obligations and avoid regulatory misclassification. Source: EU AI Act, Annex III
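
A simplified rule-based categoriser over declared system metadata might look like the sketch below; the rules and keywords are assumptions for demonstration, not Annex III itself:

```python
# Illustrative rule-based categoriser over declared system metadata. The rules
# and keywords are assumptions for demonstration, not Annex III itself.
def categorise(metadata: dict):
    reasons = []
    if metadata.get("purpose") == "social_scoring":
        return "unacceptable", ["Prohibited practice declared"]
    if metadata.get("domain") in {"employment", "education", "essential_services"}:
        reasons.append(f"Annex III-type domain: {metadata['domain']}")
    if metadata.get("safety_component"):
        reasons.append("Safety component of a regulated product")
    if reasons:
        return "high", reasons
    if metadata.get("interacts_with_humans"):
        return "limited", ["Transparency obligations likely apply"]
    return "minimal", []

tier, why = categorise({"domain": "employment", "interacts_with_humans": True})
print(tier, why)   # -> high ['Annex III-type domain: employment']
```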


Real-Time Compliance Monitoring

Real-time compliance monitoring involves continuous assessment of AI systems against regulatory requirements to detect non-conformities as they arise. These tools integrate with development and deployment pipelines to flag risks, missing documentation, or operational issues instantly (PwC, 2023). For the EU AI Act, real-time monitoring supports ongoing conformity, rapid corrective actions, and audit readiness. Source: PwC, AI Governance and Compliance (2023)
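
One lightweight pattern is a pre-deployment gate in the CI/CD pipeline that blocks a release when required compliance artefacts are missing; the artefact names below are assumptions for this sketch:

```python
# Illustrative pre-deployment gate: block a release when required compliance
# artefacts are missing. Artefact names are assumptions for this sketch.
REQUIRED_ARTEFACTS = {
    "technical_documentation.pdf",
    "risk_assessment.json",
    "bias_test_report.json",
    "human_oversight_plan.md",
}

def release_gate(available_artefacts: set) -> None:
    missing = REQUIRED_ARTEFACTS - available_artefacts
    if missing:
        raise SystemExit(f"Release blocked, missing artefacts: {sorted(missing)}")
    print("Compliance gate passed.")

release_gate({"technical_documentation.pdf", "risk_assessment.json"})
```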


Custom Legal LLMs

Custom legal LLMs (large language models) are specialised AI models trained on legal datasets and client-specific regulatory contexts to provide accurate compliance outputs with a reduced risk of hallucination. Unlike general LLMs, they incorporate domain-specific knowledge, legal terminology, and jurisdictional nuances (Stanford HAI, 2023).

For AI Act compliance, custom LLMs enable scalable regulatory interpretation, document drafting, and legal analysis with greater precision and trustworthiness. Source: Stanford HAI, Legal AI Models (2023)


AI Office

The AI Office is a central EU-level body established to support consistent implementation and enforcement of the AI Act across Member States. It coordinates national supervisory authorities, develops guidelines, and monitors emerging AI risks (EU AI Act Art. 64).

The AI Office also facilitates regulatory sandboxes and maintains public AI system registries. Source: EU AI Act, Article 64


For the latest EU AI Act updates and implementation guidance, consult official EU sources and qualified legal counsel. This glossary reflects understanding as of June 2025 and should be verified against current regulatory interpretations. For the latest updates and best practices, sign up to our AI Act Alert newsletter.

Frequently Asked Questions (FAQ)


Who must comply with the EU AI Act?
All organizations developing, deploying, or using AI systems in the EU must ensure compliance.

When do the EU AI Act's obligations apply?
Different provisions of the EU AI Act have varying timeline requirements, with full compliance required by August 2026.

How does eyreACT help with EU AI Act compliance?
eyreACT provides automated compliance tools, documentation systems, and expert guidance to ensure full EU AI Act compliance.

Ready to Start Your EU AI Act Compliance Journey?

Take our free 5-minute assessment to understand your compliance requirements and get a personalized roadmap.

