
Initial Prohibitions Under EU AI Act Take Effect

July 03, 2025
Business Litigation Reports

On February 2, 2025, the European Union’s AI Act reached its first major milestone as prohibitions on “unacceptable risk” AI practices became legally binding across all 27 EU member states. These initial prohibitions target AI systems deemed to pose the greatest threats to fundamental rights, including subliminal manipulation techniques, exploitation of vulnerable groups, and social scoring systems.

The immediate implications are substantial: companies deploying AI systems with any EU nexus face potential penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for violations. Both EU-based companies and U.S. entities offering AI services to EU users must take immediate action to ensure compliance, as the Act’s extraterritorial reach extends to any AI system that affects individuals located in the EU.

The EU AI Act

The EU AI Act represents the world’s first comprehensive regulatory framework for artificial intelligence, and it has followed a trajectory analogous to that of GDPR for personal data protection. After nearly four years of legislative development, the Act was formally adopted in May 2024.

The core of the Act is a risk-based regulatory approach that categorizes AI systems into four tiers based on their use cases: unacceptable risk (such as social scoring systems), high risk (such as employment-related AI), limited risk (such as chatbots), and minimal risk (such as spam filters).
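To make the tiering concrete, the sketch below (Python, purely illustrative) arranges the four tiers and the example use cases named above into a small lookup table. The tier names come from the Act; the RiskTier enum, its descriptions, and the example mapping are hypothetical, and real classification turns on the Act’s detailed criteria and annexes rather than on labels like these.

    from enum import Enum

    class RiskTier(Enum):
        """The Act's four risk tiers; descriptions paraphrase the text above."""
        UNACCEPTABLE = "prohibited outright"
        HIGH = "permitted subject to strict obligations"
        LIMITED = "transparency obligations"
        MINIMAL = "no new obligations"

    # Hypothetical example mapping using the use cases named above; real
    # classification turns on the Act's detailed criteria, not keywords.
    EXAMPLE_TIERS = {
        "social scoring system": RiskTier.UNACCEPTABLE,
        "employment screening tool": RiskTier.HIGH,
        "customer service chatbot": RiskTier.LIMITED,
        "spam filter": RiskTier.MINIMAL,
    }

    for use_case, tier in EXAMPLE_TIERS.items():
        print(f"{use_case}: {tier.name} ({tier.value})")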

The Act’s phased implementation strategy reflects the complexity of regulating AI and the perceived urgency of addressing its most dangerous applications. Although the Act will become fully applicable only in 2026, its application has been fast-tracked for AI systems that pose “unacceptable risks” to fundamental rights and democratic values.

The risk-based approach mirrors familiar regulatory frameworks in product safety and financial services, but with a notable difference: the AI Act’s definitions of risk focus primarily on impacts on fundamental rights, rather than on physical or financial harm. An AI system falls into the “unacceptable risk” category not because it might malfunction, but because its design or intended use conflicts with values such as human dignity, non-discrimination, and democratic governance.

Prohibited AI Practices

The AI Act’s initial prohibitions target eight categories of AI systems, identified below. Save for narrow exceptions, these prohibitions are categorical and apply to both public and private entities. Moreover, Article 2(1) of the AI Act explicitly applies these rules to providers and deployers of AI systems regardless of their location, provided the output of the AI system is used within the EU. This means a U.S. company offering an AI service that EU residents can access faces the same compliance obligations as an EU-based competitor.
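At the cost of much nuance, the territorial trigger described above can be reduced to a short decision rule. The sketch below is a hypothetical simplification of the Article 2(1) scope test; the function name and parameters are invented for illustration, and the real analysis involves additional scope rules and exclusions.

    def act_applies(provider_in_eu: bool,
                    deployer_in_eu: bool,
                    output_used_in_eu: bool) -> bool:
        """Hypothetical simplification of the Article 2(1) scope test.

        The real analysis involves further scope rules and exclusions
        (e.g., for certain military, defense, and research uses).
        """
        # Establishment in the EU triggers the Act directly; for entities
        # located elsewhere, use of the system's output in the EU suffices.
        return provider_in_eu or deployer_in_eu or output_used_in_eu

    # A U.S. provider whose system's output reaches EU users is in scope:
    print(act_applies(provider_in_eu=False, deployer_in_eu=False,
                      output_used_in_eu=True))  # True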

Subliminal Manipulation Beyond Awareness – Article 5(1)(a): AI systems that deploy “subliminal techniques beyond a person’s consciousness” to distort behavior materially in ways that cause or are likely to cause physical or psychological harm.

Exploitation of Vulnerable Groups – Article 5(1)(b): AI systems that exploit vulnerabilities related to age, physical or mental disability, or social or economic situation.

Social Scoring – Article 5(1)(c): AI systems used to evaluate or classify people based on their social behavior or personal characteristics over time, leading to detrimental treatment that is either unrelated to the evaluated behavior or characteristics, or disproportionate to their behavior.

Risk Assessment Based on Profiling – Article 5(1)(d): AI systems that assess or predict the risk of a person committing a criminal offense based solely on profiling or personality traits, except when used to augment a human assessment of specific criminal conduct based on objective facts linked to criminal activity.

Facial Recognition Databases Through Untargeted Scraping – Article 5(1)(e): AI systems that create or expand facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage.

Emotion Recognition in Workplace and Educational Settings – Article 5(1)(f): AI systems that infer the emotions of individuals in the workplace or in educational institutions, except for medical or safety reasons.

Biometric Categorization Based on Sensitive Attributes – Article 5(1)(g): AI systems that use individuals’ biometric data to infer sensitive details such as race, political opinion, or religious beliefs. There is a carve-out in this article for labeling or filtering of lawfully acquired biometric data for law enforcement purposes.

Real-time Remote Biometric Identification in Publicly Accessible Spaces – Article 5(1)(h): AI systems for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, except in strictly limited cases.

Compliance Strategies and Enforcement Landscape

The enforcement architecture for the AI Act’s prohibitions combines centralized EU oversight with decentralized national implementation, creating multiple venues for potential regulatory action. Each EU member state must designate at least one national competent authority with powers to investigate violations, demand information, conduct audits, and impose penalties. These authorities coordinate through the European AI Board, which ensures consistent interpretation and application across the EU.

As noted, the penalty structure involves maximum fines of €35 million or 7% of total worldwide annual turnover, whichever is higher. This calculation method particularly impacts large technology companies, for which 7% of global revenue could translate into multi-billion-euro penalties.
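The “whichever is higher” mechanic is simple arithmetic and can be shown in a few lines. The figures (€35 million, 7%) come from the Act as described above; the function itself is a hypothetical illustration, not a statement of how a regulator would actually compute a fine.

    def max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
        """Ceiling for prohibited-practice fines: EUR 35 million or 7% of
        total worldwide annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

    # For a company with EUR 100 billion in global turnover, the 7% prong
    # governs and the ceiling reaches EUR 7 billion:
    print(f"EUR {max_penalty_eur(100e9):,.0f}")  # EUR 7,000,000,000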

Effective compliance is likely to require a systematic approach, beginning with an AI system inventory and risk assessment. Companies should catalog all AI systems that could affect EU residents, evaluate them against the prohibition categories, and document their analysis, ideally while taking care to maintain privilege over those evaluations. The evaluations should serve both as compliance tools and as a potential defense in enforcement proceedings.

Such reviews should focus on systems already in operation that could violate the new prohibitions. Red flags include systems that collect psychological or behavioral data, target specific demographic groups, make automated decisions affecting access to services, or process biometric information. Companies should thoroughly assess whether “borderline” systems fall within the prohibited categories and either modify or discontinue those systems as necessary to ensure compliance, or formulate clear legal reasoning as to why they are not prohibited.
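As a purely illustrative sketch of such a triage, the Python below encodes the red flags listed above and escalates any EU-facing system that exhibits one. Every name here (AISystemRecord, the inventory entries, the escalation rule) is hypothetical; a flagged characteristic signals the need for counsel review, not a conclusion that a system is prohibited.

    from dataclasses import dataclass, field

    # Red flags named in the text above; a flag signals the need for legal
    # review, not a conclusion that the system is prohibited.
    RED_FLAGS = {
        "collects psychological or behavioral data",
        "targets specific demographic groups",
        "automates decisions affecting access to services",
        "processes biometric information",
    }

    @dataclass
    class AISystemRecord:
        """Hypothetical inventory entry for one deployed AI system."""
        name: str
        eu_nexus: bool  # could the system affect EU residents?
        characteristics: set = field(default_factory=set)

        def flags(self) -> set:
            """Red flags this system exhibits."""
            return self.characteristics & RED_FLAGS

    inventory = [
        AISystemRecord("ad targeting engine", eu_nexus=True,
                       characteristics={"targets specific demographic groups"}),
        AISystemRecord("internal spam filter", eu_nexus=False),
    ]

    # Triage: EU-facing systems with any red flag are escalated to counsel.
    for system in inventory:
        if system.eu_nexus and system.flags():
            print(f"Escalate for legal review: {system.name} -> {sorted(system.flags())}")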

Ensuring compliance may also require technical and organizational innovation. On the technical side, this may include designing systems with built-in safeguards against prohibited uses, implementing access controls, and maintaining audit logs. Organizationally, companies need clear governance structures, training programs, and escalation procedures for potential violations. Appointing an AI compliance officer, although not explicitly required, may benefit companies with significant AI operations.
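On the audit-log point specifically, a minimal sketch of what a per-decision record might capture appears below. This assumes a simple JSON-over-standard-logging design of our own invention; a production system would add tamper-evident storage, retention policies, and access controls around the log itself.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    audit_log = logging.getLogger("ai_audit")

    def log_ai_decision(system_name: str, decision: str, user_region: str) -> None:
        """Append one structured audit record per AI-assisted decision."""
        # A production system would add tamper-evident storage, retention
        # policies, and access controls around the log itself.
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system_name,
            "decision": decision,
            "user_region": user_region,
        }))

    log_ai_decision("loan prescreening model", "referred to human reviewer", "EU")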

Anticipated Developments

The activation of the AI Act’s initial prohibitions marks the beginning, not the end, of the EU’s AI regulatory journey. The governance rules and obligations for general-purpose AI models become applicable on August 2, 2025, and on August 2, 2026, the Act will become generally applicable and its requirements for “high-risk AI systems” will take effect. High-risk systems include those used in law enforcement, healthcare, education, critical infrastructure, and more. The longest runway applies to high-risk AI systems embedded in certain regulated products, which benefit from a transition period running until August 2, 2027. Notably, the EU is under substantial pressure from both European AI businesses and the Trump administration to amend the AI Act or delay its rollout. Regardless of the Act’s final form or timetable, companies that establish robust compliance frameworks now will be better positioned to address its full implementation and any associated litigation.