Material Scope of the EU AI Act: Which AI Systems Are Covered?

The EU Artificial Intelligence Act (AI Act) is a comprehensive regulation that applies to a broad range of AI systems in the European Union. It uses a risk-based framework to categorize AI systems into four levels of risk – from those prohibited outright to those with minimal or no regulatory requirements. In simple terms, the Act governs any AI system placed on the EU market or used in the EU, with only a few exceptions (e.g. purely military or national-security uses, AI developed and used solely for scientific research, or purely personal, non-professional use). This article breaks down the material scope of application of the AI Act by explaining what types of AI systems fall under each of the four risk categories and what obligations apply. (Note: General-purpose AI models, such as large language models or foundation models, are addressed separately and not covered in this overview.)


Prohibited AI Systems (Unacceptable Risk)

Prohibited AI systems are those AI applications that pose an “unacceptable risk” to people’s safety or fundamental rights. Article 5 of the AI Act bans these outright. In practice, this means it is illegal to place such systems on the EU market, put them into service, or use them in the EU. The Act explicitly lists eight prohibited AI practices:

  • Subliminal or deceptive AI techniques that manipulate behavior – e.g. AI using imperceptible cues to substantially distort a person’s decisions in a way that causes or is likely to cause significant harm.
  • Exploitation of vulnerable groups via AI – systems that exploit vulnerabilities linked to age, disability, or a person’s social or economic situation, materially distorting their behavior and risking significant harm.
  • Social scoring systems – especially broad social credit systems that judge or rank people based on their behavior or personal characteristics, leading to unjustified or disproportionate unfavorable treatment.
  • Predictive policing or crime risk scoring – AI that profiles individuals (e.g. based on personality traits or demographics) to predict their likelihood of committing a crime. (Notably, only this profiling-based form is banned; systems that merely support a human assessment based on objective, verifiable facts directly linked to a criminal activity are excluded from the ban.)
  • Mass biometric data scraping for identification – for example, untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases (as done by systems like Clearview AI).
  • Emotion recognition in sensitive contexts – AI that detects or infers people’s emotions in workplaces or educational institutions, which is deemed intrusive and inappropriate (narrow exceptions exist for medical or safety reasons).
  • Biometric categorization by protected traits – AI that analyzes biometric data (like facial images) to infer sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation, which is largely prohibited (with limited carve-outs, e.g. for lawful filtering of datasets in law enforcement).
  • Real-time remote biometric identification in public by law enforcement – for instance, live facial recognition in publicly accessible spaces by police is banned by default due to its high risk to privacy (with only narrow exceptions for serious crime scenarios).

These prohibited practices are essentially the red lines under the AI Act – AI uses that are seen as a “clear threat” to people’s rights or safety are not allowed at all. The list of banned AI practices may be updated over time, and heavy penalties (fines of up to €35 million or 7% of global annual turnover, whichever is higher) can be imposed for violations. Several of the banned categories include specific exceptions or carve-outs – for example, certain law enforcement uses of otherwise banned techniques may be permitted under strict conditions – but these are limited and case-specific. For businesses, the key point is that if an AI system falls into one of these prohibited categories, it cannot be placed on the EU market or used in the EU at all.


High-Risk AI Systems

High-risk AI systems are the next tier in the AI Act’s scope. These are AI applications that aren’t banned outright but pose significant risks to health, safety, or fundamental rights – enough that they warrant strict oversight and controls. Article 6 of the Act defines which AI systems are classified as “high-risk.” In general, there are two ways an AI system can be high-risk:

  1. Safety components of regulated products: AI systems that serve as safety components in products already governed by EU safety legislation (listed in Annex I of the Act). For example, AI controlling a medical device, an autonomous driving feature in a vehicle, or a safety monitoring system in an industrial machine might fall here. These are high-risk if the product’s sectoral rules require a third-party conformity assessment before market release.
  2. Sensitive use-cases listed in Annex III: The Act’s Annex III enumerates eight critical areas of AI use that are deemed high-risk by nature. These include:
    • Biometric identification systems – e.g. AI used for facial recognition (non-real-time) or fingerprint matching, especially in public contexts.
    • Management of critical infrastructure – e.g. AI that controls power grids, water supply, or transportation networks, where failures endanger lives.
    • Education and vocational training – e.g. AI used in scoring exams or proctoring, which can affect access to education or certification.
    • Employment and human resources – e.g. resume-sifting algorithms, AI tools for hiring or for monitoring employees, which can impact someone’s job prospects.
    • Access to essential services – e.g. credit scoring systems used by banks, AI that decides eligibility for welfare benefits, or insurance risk assessment algorithms.
    • Law enforcement applications – e.g. AI used by police for evidence analysis, suspect profiling (within legal bounds), or evaluating the risk of re-offending.
    • Migration and border control – e.g. AI lie detectors at borders, visa application evaluation systems, or tools used in asylum decision-making.
    • Administration of justice and democratic processes – e.g. software used to assist judicial decisions or to filter legal documents, which could influence court outcomes or elections.

In essence, AI systems that can deeply affect people’s life opportunities, safety, or rights (from getting a job or a loan to being judged by a court) are likely to be classified as high-risk. The AI Act will keep Annex III under review, so new use-cases can be added if they prove to be high-risk in the future.
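To make these two classification routes easier to reason about, here is a minimal, hedged Python sketch that models them as a simple decision helper. The class names, field names, and the ANNEX_III_AREAS set are illustrative simplifications invented for this example, not an authoritative mapping of the legal text, which contains further conditions and exemptions (such as the derogation in Article 6(3)).

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Illustrative, simplified identifiers for the eight Annex III areas listed above.
ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "justice_democracy",
}

class RiskRoute(Enum):
    ANNEX_I_SAFETY_COMPONENT = auto()  # route 1: safety component of a regulated product
    ANNEX_III_USE_CASE = auto()        # route 2: sensitive use-case listed in Annex III
    NOT_HIGH_RISK = auto()

@dataclass
class AISystemProfile:
    # Hypothetical fields describing an AI system, just for this sketch.
    is_safety_component: bool            # safety function in a product covered by Annex I legislation
    needs_third_party_assessment: bool   # that product requires third-party conformity assessment
    use_case_area: Optional[str] = None  # one of ANNEX_III_AREAS, or None

def classify_high_risk_route(profile: AISystemProfile) -> RiskRoute:
    """Return which high-risk route, if any, the system falls under (simplified)."""
    if profile.is_safety_component and profile.needs_third_party_assessment:
        return RiskRoute.ANNEX_I_SAFETY_COMPONENT
    if profile.use_case_area in ANNEX_III_AREAS:
        return RiskRoute.ANNEX_III_USE_CASE
    return RiskRoute.NOT_HIGH_RISK

# Example: a CV-screening tool used for hiring falls under the employment area.
cv_screener = AISystemProfile(is_safety_component=False,
                              needs_third_party_assessment=False,
                              use_case_area="employment")
print(classify_high_risk_route(cv_screener))  # RiskRoute.ANNEX_III_USE_CASE
```

In practice, each branch of this helper would of course be replaced by the legal analysis described above rather than a boolean flag.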

Regulatory obligations for high-risk AI systems are extensive. Providers of high-risk AI must meet a detailed set of requirements before and after placing the system on the market. Key obligations include:
  • Strict risk management and testing: Providers must perform risk assessments and mitigate risks throughout the AI system’s lifecycle. For example, they need to identify potential harms (like bias or safety failures) and address them by design.
  • High-quality data and record-keeping: The data used to train and test the AI should be relevant, representative, and as free of harmful bias as possible. There must also be automatic logging and traceability of the AI system’s operations so that its decisions can be audited (see the audit-trail sketch after this list).
  • Transparency and information: Detailed technical documentation is required, explaining the AI system’s design, purpose, and performance, which can be reviewed by authorities. Users (deployers) of the AI must be provided with clear instructions and information about the system’s capabilities, limitations, and proper use.
  • Human oversight: High-risk AI systems should be designed to allow human intervention and monitoring. The idea is to prevent the AI from making unchecked decisions in critical matters – a human should be able to understand and, if needed, overrule or shut down the AI to avoid harm.
  • Robustness, safety and accuracy: The AI must meet standards of reliability, including cybersecurity against attacks, accuracy in its predictions or decisions, and resilience to errors. In other words, it should work as intended and remain safe even under reasonably foreseeable conditions of use or misuse.
  • Conformity assessment and CE marking: Before a high-risk AI system can be deployed or sold in the EU, the provider usually must undergo a conformity assessment (similar to product certification) to verify compliance with all these requirements. If it passes, the AI system receives a CE marking indicating it meets EU standards. High-risk AI systems must also be registered in an EU database for monitoring.

These obligations are meant to ensure that high-risk AI is trustworthy and under control before it impacts people. For companies, this means developing a high-risk AI system involves significant investment in compliance (documentation, testing, involving notified bodies for assessment, etc.). Deployers (users) of high-risk AI also have duties, such as using the system as intended and monitoring its performance. Failure to comply can lead to substantial fines or orders to withdraw the AI system from the market. In summary, if an AI system falls into the high-risk bucket, it is allowed in the EU but comes with a heavy compliance burden to safeguard the public.
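To make the record-keeping and traceability obligation mentioned above more concrete, here is a minimal Python sketch of an audit-trail wrapper around a prediction function, assuming a simple JSON log as the storage format. The function names, log fields, and the dummy credit-scoring model are hypothetical illustrations; actual conformity with the Act’s logging requirements involves considerably more than this.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Any, Callable

# Basic structured logger; a production system would feed a tamper-evident store.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_logger = logging.getLogger("ai_audit")

def with_audit_trail(model_fn: Callable[[dict], Any], system_id: str) -> Callable[[dict], Any]:
    """Wrap a prediction function so every decision is logged with enough
    context (inputs, output, system identifier, timestamp) to reconstruct it later."""
    def wrapped(features: dict) -> Any:
        decision = model_fn(features)
        audit_logger.info(json.dumps({
            "system_id": system_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "input_features": features,
            "decision": decision,
        }))
        return decision
    return wrapped

# Example with a dummy credit-scoring function (purely illustrative).
def dummy_credit_score(features: dict) -> str:
    return "approve" if features.get("income", 0) > 30000 else "refer_to_human"

scored = with_audit_trail(dummy_credit_score, system_id="credit-scoring-v1")
print(scored({"income": 42000}))  # logs a JSON audit record, then prints "approve"
```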


Limited-Risk AI Systems (Transparency Obligations)

The AI Act classifies a middle tier of AI systems as limited-risk, sometimes described as having “transparency risk.” These are AI systems that don’t have to meet the full gamut of requirements that high-risk systems do, but still warrant some transparency toward users because they can influence people in important ways. In other words, these AI systems are generally allowed with minimal friction, provided that people are kept informed about their AI nature or outputs.

What counts as limited-risk AI? The Act identifies a few specific types of AI activities that trigger transparency obligations (Article 50 of the adopted Act, numbered Article 52 in the original proposal). Common examples include:

  • AI systems that interact with humans: If a person might not realize they are engaging with an AI, the system should disclose itself. For instance, a chatbot or virtual assistant should clearly inform users that it is an AI and not a human. This prevents deception – users have the right to know when content or communication is coming from an algorithm rather than a person. (If it is obvious from the context that they are talking to a machine, no explicit notice is needed.)
  • AI-generated content (deepfakes): AI that produces synthetic content – whether images, video, audio or text – must be designed so that the output is identifiable as AI-generated. For example, a generative AI model that creates realistic human faces or voices should include a watermark or metadata tag indicating that the content is artificial. In particular, any “deepfake” (fake media impersonating real people) or AI-written text published to inform the public should be clearly labeled as such. This transparency measure aims to combat misinformation and manipulation through fake AI content.
  • Emotion recognition or biometric categorization systems: When AI is used to analyze a person’s emotions (e.g. via facial expressions or voice tone) or to categorize people by biometric traits (like classifying someone’s gender or ethnicity from a photo), the individuals being analyzed must be informed. For instance, if a store uses an AI camera system to gauge customers’ moods, or an online tool analyzes a video of you to guess your characteristics, you should be notified that this AI processing is happening. (Such practices may be high-risk or even prohibited in some contexts, but where they are used lawfully, transparency is required.)

Beyond these, any AI system that could mislead or affect people without them knowing AI is involved is likely to fall into this limited-risk category. The core idea is openness: people deserve to know when AI is playing a role in what they see or experience.

Regulatory obligations for limited-risk AI are lightweight compared to those for high-risk systems. There is no need for formal conformity assessments or audits. The main requirement is to implement the appropriate transparency or disclosure mechanisms described above. Providers and deployers of such AI should ensure these notices are clear and effectively reach users. For example, an AI chatbot’s interface might include a simple statement like “(Automated AI system)” whenever it replies, or a generative image tool might automatically embed an “AI-generated” marker in the images it creates. These measures are about building user trust and awareness, not restricting the technology’s use.
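As a rough illustration of such disclosure mechanisms, the Python sketch below prefixes chatbot replies with an AI notice and attaches an “AI-generated” marker to image metadata. The helper names and metadata keys are hypothetical, and a plain metadata flag is easy to strip, so real deployments would typically combine it with watermarking or a content-provenance standard such as C2PA.

```python
AI_DISCLOSURE = "(Automated AI system) "

def disclose_chatbot_reply(reply_text: str) -> str:
    """Prefix every chatbot reply with a clear notice that it comes from an AI."""
    return AI_DISCLOSURE + reply_text

def label_generated_image(image_bytes: bytes, metadata: dict) -> tuple[bytes, dict]:
    """Attach a machine-readable 'AI-generated' marker to the image's metadata.

    A plain metadata flag alone is weak; production systems would typically add
    watermarking or a provenance standard on top of it.
    """
    labeled_metadata = dict(metadata)
    labeled_metadata["ai_generated"] = True
    labeled_metadata["generator"] = "example-image-model"  # hypothetical identifier
    return image_bytes, labeled_metadata

print(disclose_chatbot_reply("Your order has shipped."))
# (Automated AI system) Your order has shipped.
```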

It’s worth noting that the European Commission can update the list of AI practices that require transparency (the Act mandates periodic reviews). So, in the future, new types of AI that pose similar risks of deception might also be added to this category. However, limited-risk AI systems face no bans or heavy compliance rules – they are simply required to be transparent so that humans know AI is involved. Businesses deploying such systems should integrate the required disclosures into their user experience. By doing so, they can freely use these AI tools while complying with the AI Act.


Minimal-Risk AI Systems

All other AI systems that do not fall into any of the above categories are considered minimal or no-risk AI systems. This is by far the largest category. In fact, the vast majority of AI applications today are minimal-risk in the eyes of the AI Act. These include most everyday and business AI solutions that have limited implications for users’ rights or safety. For example, AI features in video games, email spam filters, recommender systems for shopping or media, grammar and spell-checking tools, AI-driven analytics for market research, etc., would typically be regarded as minimal-risk. They might be useful and important, but their potential to cause harm or significant negative impact is low, so the law doesn’t single them out for special restrictions.

For minimal-risk AI, the AI Act imposes no new legal obligations. There are no specific compliance requirements under the Act for developing or using such systems. In other words, if your AI system isn’t prohibited, high-risk, or in the limited transparency category, it can be developed and used just as before – though of course general laws (like consumer protection, product liability, or data protection laws) still apply in the bigger picture. The AI Act deliberately leaves this low-risk segment largely unregulated to avoid stifling innovation for harmless or routine AI applications.

That said, the Act does encourage voluntary best practices for all AI. Providers of minimal-risk AI systems are encouraged to adopt codes of conduct and adhere to ethical or quality standards on a voluntary basis. For instance, an industry group might develop a code of conduct that applies some of the high-risk requirements (like transparency or fairness evaluations) to lower-risk AI systems as well, even though this is not mandated. Adopting such voluntary measures can improve an AI system’s trustworthiness and possibly serve as a market differentiator for companies (“we follow responsible AI practices”). However, these are optional. The legal bottom line is that minimal-risk AI systems are not subject to the AI Act’s specific requirements.


Conclusion

In summary, the material scope of the EU AI Act covers virtually all AI systems placed on the EU market or used in the EU, categorized by risk:

  • Unacceptable-risk AI (Prohibited) – Not allowed at all due to threats to safety or rights (e.g. coercive manipulation, social scoring, certain biometric surveillance).
  • High-risk AI – Allowed with strict conditions: subject to rigorous requirements (from design and testing to documentation and oversight) and assessment before use (e.g. AI in critical infrastructure, hiring, credit, law enforcement).
  • Limited-risk AI – Allowed with transparency: must inform users or subjects that AI is in use, to prevent deception (e.g. chatbots disclosing they are AI, AI-generated content labeled as such).
  • Minimal-risk AI – Freely allowed with no AI Act requirements: the default category for all other AI (e.g. most consumer and business AI tools that don’t significantly impact rights).

By clearly defining these categories and the obligations for each, the EU aims to ensure AI is developed and used in a way that balances innovation with safety and fundamental rights. Businesses and professionals can use this risk-based map to determine how their AI systems will be regulated: if an AI application falls under the high-risk or limited-risk lists, specific compliance steps are needed, whereas benign AI uses remain largely unrestricted. Understanding the material scope of the AI Act is crucial for navigating its requirements – it tells you which rules (if any) apply to your AI system. The Act’s approach strives to foster trust in AI by tackling the riskiest uses head-on, while letting low-risk AI flourish with minimal interference.
