The European Union’s Artificial Intelligence Act represents the world’s first comprehensive AI regulation, and at its core lies Article 5—a groundbreaking provision that draws clear red lines around AI practices deemed too dangerous for society.
These prohibitions took effect on February 2, 2025, marking a new era in AI governance where certain applications are simply off-limits, regardless of their potential benefits.
The Stakes: Understanding the Penalties
Before diving into what’s banned, it’s crucial to understand the severity of non-compliance. Companies violating these prohibited practices face penalties of up to €35 million or 7% of their total worldwide annual turnover—whichever is higher. These penalties represent the highest tier of fines under the AI Act, reflecting the EU’s determination to eliminate what it considers unacceptable AI risks.
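To make the “whichever is higher” rule concrete, here is a minimal sketch in Python. The function name and example turnover are illustrative; the €35 million and 7% thresholds are those set by the Act:

```python
def max_prohibited_practice_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited-practice violations:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Example: a company with EUR 2 billion in turnover faces up to EUR 140 million.
print(f"EUR {max_prohibited_practice_fine(2_000_000_000):,.0f}")
```

Note that the two thresholds cross at €500 million in turnover: above that, the percentage-based ceiling exceeds the flat €35 million floor.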
The Eight Prohibited Practices Under the AI Act
The EU AI Act categorically bans eight specific AI practices, each targeting fundamental threats to human dignity, autonomy, and rights:
1. Subliminal and Manipulative Techniques
The Act prohibits AI systems that use subliminal techniques or manipulative methods to distort behavior and impair informed decision-making, causing significant harm. This encompasses AI that operates below the threshold of human consciousness to influence decisions without people’s awareness.
Why it’s banned: Such systems fundamentally undermine human autonomy and informed consent, core principles of democratic society. They transform users from decision-makers into unwitting subjects of manipulation.
2. Exploitation of Vulnerabilities
AI systems that exploit vulnerabilities related to age, disability, or socio-economic circumstances to distort behavior and cause significant harm are strictly forbidden. This particularly protects children, elderly individuals, and economically disadvantaged populations.
Real-world impact: This prohibition would cover voice-activated toys that encourage dangerous behavior in children or predatory lending algorithms targeting financially vulnerable populations.
3. Social Scoring Systems
The EU AI Act bans AI systems that evaluate or classify individuals based on their social behavior or personal characteristics, leading to treatment that is unjustified or disproportionate to their actions. This directly targets China-style social credit systems.
Why it matters: Social scoring creates a surveillance state where citizens are constantly monitored and judged, fundamentally altering the relationship between individuals and society. It can lead to social exclusion and discrimination based on algorithmic assessments.
4. Predictive Policing Based on Profiling
AI systems that assess individuals’ risk of committing criminal offenses based solely on profiling or on assessments of their personality traits and characteristics are prohibited. The ban targets purely algorithmic predictions about criminal behavior; systems that merely support a human assessment already grounded in objective, verifiable facts fall outside it.
The concern: Such systems risk perpetuating and amplifying existing biases in criminal justice, potentially criminalising individuals before any wrongdoing occurs.
5. Untargeted Facial Image Scraping
The creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage is banned. This prohibition aims to prevent the indiscriminate collection of biometric data.
Privacy implications: Mass facial recognition databases pose unprecedented surveillance risks and violate privacy expectations in digital spaces.
6. Emotion Recognition in Workplace and Education
AI systems that infer emotions in workplace and educational settings are prohibited, except where they are intended for medical or safety reasons. The Act recognizes these as environments where power imbalances make such monitoring particularly problematic.
Why it’s problematic: Emotion recognition in these contexts can create oppressive environments where natural human expressions become subject to algorithmic interpretation and potential punishment. For this reason, our “mothership” platform, eyre.ai, never applies sentiment analysis to business meetings.
7. Biometric Categorisation for Protected Characteristics
The Act bans AI systems that categorise people based on biometric data to infer or deduce race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
Fundamental rights concern: This directly protects against algorithmic discrimination and the reduction of human complexity to biometric categories.
8. Real-Time Remote Biometric Identification in Public Spaces
Perhaps the most debated prohibition covers real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes. However, this ban includes specific exceptions for serious crimes and immediate threats.
Balancing act: While generally prohibited, exceptions exist for preventing terrorist attacks, searching for missing children, or pursuing suspects of serious crimes—but only with judicial authorization.
AI Act Limited Exceptions and Safeguards
The EU AI Act recognises that some prohibited practices may serve legitimate law enforcement needs. However, these exceptions are narrowly defined and heavily regulated.
Law enforcement agencies can use real-time biometric identification only in specific circumstances with proper authorisation and judicial oversight.
The Broader AI Act Context: Why These Bans Matter
These prohibitions reflect deeper European values about human dignity, privacy, and the role of technology in society. The EU has taken the position that certain AI applications are simply incompatible with fundamental rights, regardless of their potential efficiency or security benefits.
The prohibitions also serve a global function. As European companies must comply with these rules, and international companies serving European markets must adapt their practices, these standards are likely to influence AI development worldwide—a phenomenon known as the “Brussels Effect.”
AI Act Implementation and Compliance Challenges
While these prohibitions are now in effect, implementation presents significant challenges. Companies must audit their existing AI systems, modify practices that may fall within prohibited categories, and establish ongoing compliance mechanisms. The complexity of determining what constitutes “manipulation” or “significant harm” requires careful legal and technical analysis.
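To illustrate one possible first step in such an audit, the sketch below shows a hypothetical inventory screen in Python: each AI system in a company’s inventory carries purpose tags, and any overlap with the Article 5 categories is escalated for legal review. Every name and tag here is illustrative; actual classification requires case-by-case legal analysis.

```python
from dataclasses import dataclass

# The eight Article 5 categories, reduced to inventory tags (illustrative).
PROHIBITED_TAGS = {
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "social_scoring",
    "predictive_policing_profiling",
    "untargeted_facial_scraping",
    "workplace_emotion_recognition",
    "biometric_categorisation_protected_traits",
    "realtime_public_biometric_id",
}

@dataclass
class AISystem:
    name: str
    purpose_tags: set  # tags assigned during the AI-system inventory

def screen(system: AISystem) -> set:
    """Return any prohibited categories the system potentially touches."""
    return system.purpose_tags & PROHIBITED_TAGS

# Example: a meeting tool that was tagged with emotion inference at work.
tool = AISystem("meeting-insights", {"transcription", "workplace_emotion_recognition"})
flags = screen(tool)
if flags:
    print(f"Escalate '{tool.name}' to legal review: {sorted(flags)}")
```

A screen like this cannot decide compliance, but it makes the audit repeatable: every new system entering the inventory is checked against the same eight categories before deployment.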
For businesses operating in or serving European markets, the message is clear: the era of unregulated AI experimentation is over. The EU AI Act’s prohibited practices represent a fundamental shift toward human-centric AI governance, where technological capability must be balanced against human rights and societal values.
As enforcement mechanisms develop and case law emerges, these prohibitions will likely evolve in their interpretation and application. However, the core principle remains firm: in the European vision of AI governance, human dignity and autonomy are non-negotiable constraints on technological development.
Turn AI Act compliance from a challenge into an advantage
eyreACT is building the definitive EU AI Act compliance platform, designed by regulatory experts who understand the nuances of Articles 3, 6, and beyond. From automated AI system classification to ongoing risk monitoring, we’re creating the tools you need to confidently deploy AI within the regulatory framework.
Ready to Start Your EU AI Act Compliance Journey?
Take our free 5-minute assessment to understand your compliance requirements and get a personalized roadmap.
