The European Union’s Artificial Intelligence Act (AI Act) introduces a comprehensive framework for AI governance, including a set of precise definitions for roles, processes, and technical concepts. Understanding these key terms – many of which are defined in Article 3 of the Act – is crucial for business, policy, and tech professionals aiming to comply with the regulation. Below, we break down important terminology in a clear and accessible way, grouped by theme. Each term is explained concisely and linked to its definition in the AI Act, with references to relevant articles and recitals for further context.
Actors and Roles
These terms describe the main actors in the AI value chain and their roles under the AI Act:
- AI system: The core subject of the AI Act, defined as a “machine-based system” that operates with varying levels of autonomy and can generate outputs (predictions, recommendations, decisions, content) influencing physical or virtual environments. In simpler terms, an AI system under the Act is software (often using techniques like machine learning or logic- and knowledge-based approaches) that infers from its input data how to produce outputs which can affect the world or its users. This broad definition means the Act covers a wide range of AI-powered software, from logic- and knowledge-based tools to complex machine learning models, provided they operate with some degree of autonomy in pursuing their objectives (adaptiveness after deployment may be present but is not required).
- Provider: The developer or supplier of an AI system, who takes responsibility for placing it on the market. The AI Act defines a provider as “a natural or legal person, public authority, agency or other body that develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.” In practice, this is the entity that brings an AI system to market – for example, the company that built the AI software or a vendor rebranding and distributing someone else’s AI system under their own trademark. Providers bear primary compliance obligations (especially for high-risk AI systems), such as ensuring the system meets the Act’s requirements before deployment.
- User (Deployer): The end-user organization or individual that uses an AI system in a professional capacity, referred to in the Act as the “deployer.” Under Article 3, a deployer is “a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.” In other words, this is the person or entity that deploys the AI system in the real world (for instance, a company using an AI tool in its operations). The term deployer is used to make clear that private, non-professional use is excluded – so a private individual casually using an AI app at home would not be a “deployer” under the regulation, but a business implementing the AI app for customers would. Deployers have their own set of obligations (e.g. using the system in accordance with its instructions for use and monitoring its operation, per the deployer obligations for high-risk AI systems in Article 26).
- Importer: If an AI system’s provider is based outside the EU, an importer is the entity in the EU that takes on responsibility for bringing that system into the EU market. Formally, an importer is “a natural or legal person in the Union that places on the market an AI system that bears the name or trademark of a person established in a third country.” In practice, this would be the EU-based company that imports an AI solution from abroad to sell or use in the EU. Importers must ensure the foreign AI system complies with the AI Act (much like importers under product safety law), and they inherit certain compliance duties – for example, verifying that the provider outside the EU has met requirements and that the AI system has the required conformity assessment and documentation.
- Distributor: A distributor in the AI Act context is any intermediary in the supply chain, other than the provider or importer, who makes an AI system available on the market. The Act defines it as “a natural or legal person in the supply chain, other than the provider or importer, that makes an AI system available on the Union market.” Distributors can include resellers, online marketplaces, or other entities that commercially distribute or facilitate the availability of an AI system. They have obligations to act with due care, for example not supplying AI systems that they know are non-compliant and cooperating with authorities if issues arise. In essence, distributors must ensure that the AI systems they offer maintain the compliance credentials (e.g. CE marking, documentation) provided by the upstream provider or importer. (An illustrative sketch of how an organization might map these roles follows this list.)
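The role definitions above can be easier to internalize when expressed as a simple decision aid. Below is a minimal, illustrative Python sketch; all names (OrgFacts, determine_roles, the individual fields) are invented for this example rather than taken from the Act, and a real role assessment of course requires reading the full Article 3 definitions and, usually, legal advice.

```python
from dataclasses import dataclass

@dataclass
class OrgFacts:
    """Facts about an organization's relationship to a specific AI system."""
    develops_system: bool               # develops the system, or has it developed
    markets_under_own_name: bool        # places it on the EU market / puts it into service under its own name or trademark
    established_in_eu: bool             # the organization is located or established in the EU
    system_bears_non_eu_brand: bool     # the system carries the name/trademark of a provider established outside the EU
    makes_available_on_eu_market: bool  # makes the system available on the EU market
    uses_under_own_authority: bool      # uses the system professionally, under its own authority

def determine_roles(facts: OrgFacts) -> set[str]:
    """Rough, illustrative mapping of the Article 3 role definitions."""
    roles: set[str] = set()
    if facts.develops_system and facts.markets_under_own_name:
        roles.add("provider")
    if (facts.established_in_eu and facts.makes_available_on_eu_market
            and facts.system_bears_non_eu_brand and "provider" not in roles):
        roles.add("importer")
    if facts.makes_available_on_eu_market and not roles:
        roles.add("distributor")  # supply-chain actor other than the provider or importer
    if facts.uses_under_own_authority:
        roles.add("deployer")
    return roles

# Example: an EU company buys a non-EU vendor's AI tool and uses it internally.
print(determine_roles(OrgFacts(
    develops_system=False,
    markets_under_own_name=False,
    established_in_eu=True,
    system_bears_non_eu_brand=True,
    makes_available_on_eu_market=False,
    uses_under_own_authority=True,
)))  # -> {'deployer'}
```

Note that one organization can hold several roles at once (for example, a company that builds an AI system and also uses it internally is both provider and deployer), which is why the sketch returns a set rather than a single label.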
Processes and Compliance Mechanisms
The AI Act introduces several processes and compliance mechanisms to ensure AI systems meet its standards. Key terms include:
- Placing on the market / Putting into service: These terms describe two moments when an AI system enters use, and they have specific meanings in EU law. Placing on the market means the first time an AI system (or model) is made available on the EU market, whether for payment or free of charge. It is essentially the act of introducing the product to the EU market – for example, when a provider first sells or distributes the AI system in the EU. Putting into service refers to the supply of an AI system for first use, directly to a deployer or for the provider’s own use, in the EU for its intended purpose. This covers cases where an AI system might not be sold as a product but is internally deployed or used. In short, an AI system is “placed on the market” when it is first made available to others, and “put into service” when it is first used in the EU. Both events carry regulatory significance: before either occurs, high-risk AI systems must undergo compliance steps like conformity assessment, and once on the market or in service, the system falls under post-market monitoring and incident reporting obligations.
- Conformity assessment: This is the process to verify that a high-risk AI system meets all the requirements of the AI Act before deployment. Article 3 defines a conformity assessment as “the process of demonstrating whether the requirements set out in [the Act] (Chapter III, Section 2) relating to a high-risk AI system have been fulfilled.” In practice, it is an evaluation procedure (which can include testing, auditing, or certification) to ensure the AI’s compliance with requirements such as risk management, data governance, transparency, human oversight, accuracy, and cybersecurity. Providers of high-risk AI systems must perform a conformity assessment before placing the system on the market or putting it into service. This can be done either through internal checks (for some systems) or by involving external notified bodies (independent certification organizations), depending on the risk and the conformity route defined in the Act. Successfully passing a conformity assessment is a prerequisite for obtaining the CE marking on a high-risk AI system. (An illustrative checklist sketch follows this list.)
- CE marking: In the EU, the “CE” marking is a familiar symbol on products indicating conformity with European health, safety, and environmental protection standards. Under the AI Act, high-risk AI systems will bear a CE marking to signal compliance with the Act’s requirements. The Act defines “CE marking” as a marking by which a provider indicates that an AI system is in conformity with the requirements set out in [the Act’s high-risk requirements] and other applicable Union harmonisation legislation. In other words, the CE mark on an AI system means the system has passed its conformity assessment and meets the legal standards of the AI Act (as well as any other applicable EU rules). For providers, affixing the CE marking is the final step after a successful conformity assessment, and it allows the AI system to be freely sold or used across the EU. From a practical standpoint, the CE mark becomes the “passport” for a high-risk AI system’s entry to the EU market, similar to how medical devices or machinery carry a CE mark to show compliance.
- Serious incident: The AI Act introduces a requirement to report “serious incidents” involving AI systems, especially high-risk ones. A serious incident is defined as an incident or malfunctioning of an AI system that leads (directly or indirectly) to significant harm. According to Article 3(49), this includes any AI-related incident that results in the death of a person or serious harm to a person’s health, a serious and irreversible disruption of critical infrastructure, an infringement of obligations under Union law intended to protect fundamental rights, or serious harm to property or the environment. In essence, if an AI system causes major harm or infringes legally protected rights, it is a “serious incident.” For example, if a faulty AI medical diagnosis tool led to a patient’s death, or an AI system in critical infrastructure caused a power grid outage, those would qualify. The AI Act requires that providers of high-risk AI (and deployers, in some cases) report serious incidents to the authorities within tight deadlines (at the latest 15 days after becoming aware of the incident, with shorter deadlines for the most severe cases). This reporting mechanism is part of post-market monitoring and is designed to ensure regulators are informed of major AI failures or harms, enabling oversight and potential corrective measures to prevent future occurrences. (An illustrative reporting sketch follows this list.)
- AI regulatory sandbox: To spur innovation while ensuring oversight, the Act encourages the use of “AI regulatory sandboxes”. An AI regulatory sandbox is a controlled experimentation environment set up by a competent authority, where companies can develop and test innovative AI systems under regulatory supervision and guidance. In the Act’s words, it is “a controlled framework… which offers providers or prospective providers the possibility to develop, train, validate and test… an innovative AI system, pursuant to a sandbox plan, for a limited time under regulatory supervision.” Practically, this means regulators (such as national authorities) can allow AI developers to try out new AI solutions (including potentially high-risk applications) in cooperation with the authority. During sandbox participation, the authority provides guidance on how to meet the Act’s requirements, and participants who follow that guidance in good faith benefit from protection against administrative fines for related infringements, while safety and fundamental rights remain protected during testing. The goal is to foster innovation by giving AI creators a space to fine-tune their systems with feedback and oversight, before full market launch. Businesses in a sandbox gain clarity on regulatory expectations, and regulators gain insight into emerging technologies – a win-win that the Act formalizes in its sandbox provisions (Articles 57–59).
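To make the conformity assessment entry above a little more concrete, here is a minimal Python sketch of an internal checklist covering the requirement areas of Chapter III, Section 2 (risk management, data and data governance, technical documentation, record-keeping, transparency and information to deployers, human oversight, and accuracy/robustness/cybersecurity). The class and function names are invented for illustration; the Act prescribes the requirements and the assessment procedures, not this data structure.

```python
from dataclasses import dataclass, field

# Requirement areas a conformity assessment checks against (Chapter III, Section 2).
REQUIREMENT_AREAS = [
    "risk management system",
    "data and data governance",
    "technical documentation",
    "record-keeping (logging)",
    "transparency and provision of information to deployers",
    "human oversight",
    "accuracy, robustness and cybersecurity",
]

@dataclass
class ConformityChecklist:
    system_name: str
    evidence: dict[str, str] = field(default_factory=dict)  # requirement area -> evidence reference

    def record(self, area: str, evidence_ref: str) -> None:
        """Attach a documentation reference to one requirement area."""
        if area not in REQUIREMENT_AREAS:
            raise ValueError(f"Unknown requirement area: {area}")
        self.evidence[area] = evidence_ref

    def missing(self) -> list[str]:
        """Areas still lacking documented evidence before the assessment can conclude."""
        return [a for a in REQUIREMENT_AREAS if a not in self.evidence]

checklist = ConformityChecklist("credit-scoring model v2")
checklist.record("risk management system", "risk file RM-2025-01")
checklist.record("human oversight", "oversight design doc HO-7")
print(checklist.missing())  # remaining areas to evidence
```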
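Similarly, for the serious incident entry above, the sketch below shows one way a provider’s internal tooling might capture the harm categories from the Act’s definition and track the reporting deadline. The IncidentReport structure and its field names are illustrative assumptions rather than an official reporting format, and the deadline logic is simplified (the Act sets shorter deadlines for certain categories of incident).

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class HarmCategory(Enum):
    # Harm categories drawn from the Act's "serious incident" definition.
    DEATH_OR_SERIOUS_HEALTH_HARM = "death of a person or serious harm to a person's health"
    CRITICAL_INFRASTRUCTURE_DISRUPTION = "serious and irreversible disruption of critical infrastructure"
    FUNDAMENTAL_RIGHTS_INFRINGEMENT = "infringement of obligations protecting fundamental rights"
    PROPERTY_OR_ENVIRONMENT_DAMAGE = "serious harm to property or the environment"

@dataclass
class IncidentReport:
    system_name: str
    date_of_awareness: date
    category: HarmCategory
    description: str

    def reporting_deadline(self) -> date:
        # Simplified: report without undue delay and at the latest 15 days after
        # becoming aware of the incident (shorter deadlines apply in some cases).
        return self.date_of_awareness + timedelta(days=15)

report = IncidentReport(
    system_name="triage-assistant v1",
    date_of_awareness=date(2025, 3, 3),
    category=HarmCategory.DEATH_OR_SERIOUS_HEALTH_HARM,
    description="Misclassification contributed to delayed treatment.",
)
print(report.reporting_deadline())  # 2025-03-18
```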
Technical and Risk Terms
The AI Act also uses specific terms related to technical concepts and risk management approaches. Key ones include:
- Transparency: In the context of the AI Act, “transparency” refers to the obligation to disclose certain information about AI systems to ensure that affected people are aware they are interacting with AI and understand its outputs. The Act imposes transparency requirements in several ways. For high-risk AI systems, providers must supply clear instructions and information to deployers and users (e.g. explaining the system’s intended purpose, how to use it, and any known limitations or risks). This allows organizations and individuals to use high-risk AI in an informed manner. Additionally, for AI systems that interact with humans or generate content, the Act mandates explicit disclosure: for example, if a chatbot or an AI assistant is interacting with a person, the person should be informed that they are dealing with an AI and not a human. Likewise, if an AI system creates synthetic or manipulated media (so-called “deepfakes”), it must be clearly labeled as AI-generated content. Systems that perform emotion recognition or biometric categorization (assigning people to categories based on biometric data) must also be designed to ensure transparency, meaning individuals subjected to these systems should be informed that such a system is being used. Overall, transparency under the AI Act is about making AI operations visible and understandable – whether through documentation, user notices, or explanatory statements – so that users, affected persons, and regulators are not kept in the dark about AI involvement. This enhances trust and supports human oversight and accountability, aligning with the Act’s emphasis on human-centric AI. (A small disclosure sketch follows this list.)
- Risk management: The AI Act takes a risk-based approach to regulation, requiring stronger controls for higher-risk AI. A cornerstone of this approach is the obligation for providers of high-risk AI systems to implement a risk management system (Article 9). Risk management here means a structured, ongoing process to identify, analyze, and mitigate risks associated with an AI system throughout its lifecycle. In practical terms, providers must proactively assess what could go wrong with their AI (e.g. safety issues, bias or discrimination, cybersecurity vulnerabilities, etc.), both in intended use and in reasonably foreseeable misuse. They then need to take steps to reduce or eliminate those risks – for instance, by refining the system’s design, putting safeguards in place, or providing usage guidelines. The risk management process is continuous and iterative, meaning it should be repeated and updated regularly as the AI system is developed, tested, deployed, and even after it’s on the market (incorporating lessons from real-world use and monitoring). This is akin to life-cycle risk management in product safety or quality management in other industries. By embedding risk management, the Act ensures that compliance for high-risk AI systems is not a one-time check: the systems are continually evaluated for potential harms to health, safety, and fundamental rights, and measures are kept in place to address those risks as effectively as possible. This requirement is central to AI Act compliance for providers, as it underpins many of the specific obligations (from data governance to transparency) and is subject to review during conformity assessments. (A risk-register sketch follows this list.)
- Biometric identification: Biometric identification refers to using biometric data to establish a person’s identity. Under the AI Act, the term is defined as “the automated recognition of physical, physiological, behavioral, or psychological features of a person for the purpose of establishing their identity, by comparing their biometric data to data in a database.” In simpler terms, an AI system does biometric identification when it takes something like a fingerprint, face image, voice, or other biometric trait and checks it against stored profiles to find out who that person is. A common example is an AI-driven facial recognition system matching a face from CCTV footage to a gallery of known faces. The AI Act treats biometric identification (especially when done without the person’s active involvement) as sensitive. If an AI system performs biometric identification in certain contexts, it can be considered high-risk (for instance, real-time biometric ID systems used by police are addressed separately as prohibited or tightly controlled – see below). It’s important for businesses to recognize if their AI system falls under this definition, as it triggers specific requirements. Notably, systems for biometric identification in public spaces can raise privacy and fundamental rights concerns, which is why the Act imposes strict rules on their use. (A minimal matching sketch follows this list.)
- Remote biometric identification: This term specifically means biometric identification at a distance, without the subject’s active participation – typically using cameras and AI to identify people in public or remote scenarios. The Act defines a remote biometric identification system as “an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance, by comparing a person’s biometric data with data in a reference database.” The classic example is live facial recognition in public spaces, where an AI system scans faces in a crowd and matches them to a watchlist database. “Remote” implies the person is not actively presenting their biometric data (unlike, say, unlocking a phone with your fingerprint, which the Act treats as one-to-one biometric verification with the user’s active involvement, not remote identification). This technology is among the most controversial covered by the AI Act due to its implications for privacy and civil liberties. In fact, the AI Act essentially bans the use of “real-time” remote biometric identification in publicly accessible spaces for law enforcement purposes, except in extreme cases like searching for missing persons or preventing a terrorist threat as explicitly allowed by law. This means police can’t broadly deploy live facial recognition in public places unless very strict conditions are met. For other contexts (e.g. private sector use of remote biometric ID), the Act classifies it as high-risk, subjecting it to strict compliance requirements if it’s deployed. The bottom line is that remote biometric identification is highly regulated: organizations must carefully assess if their AI system performs this function. If it does, they face either a prohibition (in law enforcement/public space scenarios) or heavy obligations as a high-risk system. This reflects the EU’s cautious stance on technologies that can covertly identify individuals at a distance, prioritizing the protection of fundamental rights like privacy and freedom of movement.
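To ground the transparency entry above, here is a tiny illustrative helper showing the two disclosure patterns it describes: telling people they are interacting with an AI system, and labeling AI-generated media. The function names and metadata keys are invented for this sketch; the Act specifies the disclosure outcomes, not any particular implementation.

```python
AI_INTERACTION_NOTICE = "You are interacting with an AI system, not a human."

def chatbot_reply(answer: str, first_turn: bool) -> str:
    # Disclose the AI nature of the system at the start of the interaction.
    return f"{AI_INTERACTION_NOTICE}\n\n{answer}" if first_turn else answer

def label_generated_media(metadata: dict) -> dict:
    # Mark synthetic content so viewers can see it is AI-generated or manipulated.
    return {**metadata,
            "ai_generated": True,
            "label": "This content was generated or manipulated by AI."}

print(chatbot_reply("Your parcel arrives on Tuesday.", first_turn=True))
print(label_generated_media({"title": "Product demo video"}))
```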
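The iterative risk management process described above can likewise be pictured as a living risk register that is revisited at each lifecycle stage. The Risk and RiskRegister classes, the 1 to 5 severity and likelihood scales, and the acceptability threshold below are illustrative assumptions rather than anything the Act prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str      # what could go wrong (e.g. biased outputs for a subgroup)
    severity: int         # 1 (negligible) .. 5 (critical), illustrative scale
    likelihood: int       # 1 (rare) .. 5 (frequent), illustrative scale
    mitigation: str = ""  # design change, safeguard, or usage guideline applied

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    """A living register: re-assessed at design, testing, deployment and post-market stages."""
    risks: list[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def open_risks(self, threshold: int = 8) -> list[Risk]:
        # Risks above the (illustrative) acceptability threshold that still lack mitigation.
        return [r for r in self.risks if r.score >= threshold and not r.mitigation]

register = RiskRegister()
register.add(Risk("Discriminatory outcomes for under-represented groups", severity=4, likelihood=3))
register.add(Risk("Model drift after deployment degrades accuracy", severity=3, likelihood=4,
                  mitigation="Post-market monitoring with quarterly re-validation"))
for risk in register.open_risks():
    print(f"Unmitigated: {risk.description} (score {risk.score})")
```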
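Finally, to illustrate the comparison against a reference database at the core of both biometric identification entries above, here is a minimal sketch of one common technical pattern: matching a probe feature vector (for example, a face embedding) against stored templates using cosine similarity and a decision threshold. The toy vectors, names, and threshold are invented for illustration; real systems use trained biometric models, and deploying one triggers the high-risk or prohibited-use rules described above.

```python
import math
from typing import Optional

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify(probe: list[float], reference_db: dict[str, list[float]],
             threshold: float = 0.85) -> Optional[str]:
    """Return the best-matching identity above the threshold, or None (no identification)."""
    best_id, best_score = None, 0.0
    for identity, template in reference_db.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None

# Toy reference database of (made-up) biometric templates.
reference_db = {
    "person_A": [0.90, 0.10, 0.30],
    "person_B": [0.20, 0.80, 0.50],
}
probe = [0.88, 0.15, 0.28]  # feature vector extracted from a new sample
print(identify(probe, reference_db))  # -> person_A
```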
Conclusion
The EU AI Act’s terminology might seem dense at first, but these definitions form the backbone of the regulation’s compliance structure. By clearly delineating who is responsible (providers, deployers, etc.), what an AI system entails, and how key processes (like conformity assessment or incident reporting) work, the Act creates a shared language for AI governance. For businesses and stakeholders, familiarizing oneself with these terms is more than a legal formality – it is essential to operationalizing the AI Act’s requirements. With the above explanations, professionals should have a practical understanding of the actors, processes, and technical concepts most relevant to AI Act compliance, enabling them to navigate the regulation with greater confidence and clarity.