
The EU’s Regulation on Artificial Intelligence

September 13, 2022
Firm Memoranda


In April 2021, the European Commission released a draft regulation on the use of artificial intelligence or AI (the “Draft Regulation”).  If adopted, the new regulation would be the first comprehensive regulatory scheme to focus solely on the development and use of AI.[1]  It would establish rules on the development, market placement, and use of AI systems across all sectors—both public and private—in the European Union (EU).[2] 

But the Draft Regulation would reach far beyond Europe’s borders.  It would apply to any providers or distributors of AI who place their services or products in the EU market.[3]  It would also apply to providers and users outside of the EU if “the output produced by the AI system is used in the EU.”[4]  The Draft Regulation thus has legal implications for all companies that develop, sell, or use AI-based products and services, particularly those categorized as posing a “high risk.” 

The exact implications, however, are still unknown, and some of the shortcomings identified by commentators are expected to be addressed as the Draft Regulation moves through the EU legislative process.  In this note, we describe the key provisions of the Draft Regulation, including its scope, compliance requirements, and penalties, as well as some of the changes we anticipate it will require of companies developing, selling, or using products with AI systems.

I. Background

The Draft Regulation is not the first piece of AI regulation in Europe.  For example, the General Data Protection Regulation (GDPR)—the EU’s data security and privacy law, which went into effect in 2018—has provisions that regulate automated decision-making and profiling, i.e., AI algorithms.[5]  In the U.S., different agencies have proposed industry-specific regulatory guidelines,[6] and several city and state governments have taken steps to ban or restrict law enforcement use of certain types of AI technologies, such as facial recognition.[7]  But the Draft Regulation is the first comprehensive attempt at regulating the development and use of AI, and it could become the new global standard.

II. Key Provisions

a. The Scope of the Regulation

Article 3(1) of the Draft Regulation defines AI systems as “software that is developed with one or more of the techniques and approaches listed in Annex 1” to the Draft Regulation (e.g., machine learning, logic- and knowledge-based approaches, and statistical approaches), which “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”[8]  This broad definition “aims to be as technology neutral and future proof as possible, taking into account the fast technological and market developments related to AI.”[9] 

The Draft Regulation is both industry- and sector-agnostic.  It covers providers and users of AI systems in public and private sectors “to ensure a level playing field.”[10]

The Draft Regulation also reaches non-EU providers and users (just as the GDPR does), suggesting the European Commission’s intention to set global standards.  Specifically, the Draft Regulation applies to: 

(1) providers who make AI systems available in the EU market or supply the AI systems for use by a user in the EU market;

(2) providers and users outside of the EU where the output of the AI system is used in the EU; and

(3) users of AI systems within the EU.[11]

In other words, the Draft Regulation seeks to regulate anyone who places AI systems on the EU market or uses the outputs of AI systems in a way that affects individuals in the EU.  The Draft Regulation does not apply to AI systems that are developed or used exclusively for military purposes, nor to use by public authorities in a third country or by international organizations under international agreements for law enforcement and judicial cooperation.[12]

b. The Risk-Based Approach

The Draft Regulation organizes and regulates AI systems by the level of risks such AI systems create:  (1) unacceptable-risk AI systems, which are banned; (2) high-risk AI systems, which are subject to extensive technical, monitoring, and compliance obligations; (3) limited-risk AI systems, which are subject to transparency obligations; and (4) minimal or no risk AI systems, which do not trigger any additional legal obligations.[13]  Most of the Draft Regulation is focused on defining and setting restrictions for high-risk applications.

Unacceptable Risk Use

Article 5 of the Draft Regulation prescribes a blanket ban on certain uses that are considered “a clear threat to the safety, livelihood and rights of people.”[14]  These include:

  • “dark-pattern AI” – technologies that deploy subliminal techniques to materially distort human behavior in a manner that is likely to cause physical or psychological harm, or that exploit vulnerabilities of persons due to their age or physical or mental disability.
  • “social scoring” by public or law enforcement authorities – evaluation or classification of the trustworthiness of individuals.
  • real-time remote biometric identification (e.g., facial recognition) in publicly accessible spaces for law enforcement purposes.[15]

High Risk Use

If an AI system creates a high risk to the “health and safety or fundamental rights of persons,”[16] it will be subject to a range of new obligations, including establishing and implementing a risk assessment and mitigation system; eliminating or reducing risks through design and development; maintaining detailed documentation; ensuring human oversight; disclosing information to users; and complying with requirements for training and testing data.[17]  Other provider responsibilities include ensuring that the products undergo an appropriate conformity assessment procedure; registering the system and affixing the CE (Conformité Européenne) marking; taking necessary corrective actions and informing the national competent authority of “serious incidents” or “malfunctioning”; and demonstrating conformity of the AI system with the new requirements upon request of a national competent authority.[18]

The Draft Regulation identifies two categories of high-risk AI systems:  (1) AI systems that are embedded in products and are already subject to a third-party assessment under the current EU framework (listed in Annex II); and (2) stand-alone AI systems that have been identified as high risk when they are used in certain areas (listed in Annex III).[19] 

These include: 

  • biometric identification and categorization of natural persons
  • management and operation of critical infrastructure (e.g., transportation)
  • education and vocational training (e.g., scoring of exams)
  • employment, workers management and access to self-employment (e.g., CV-sorting software for recruitment procedures)
  • access to and enjoyment of essential private services and public services and benefits (e.g., credit scoring)
  • law enforcement (e.g., evaluation of the reliability of evidence)
  • migration, asylum and border control management (e.g., verification of authenticity of travel documents)
  • administration of justice and democratic processes[20]

Title III of the Draft Regulation also establishes various obligations for importers (Article 26), distributors (Article 27), and users (Article 29).[21]

Limited Risk Use

AI systems falling under the “limited risk” category are subject to transparency obligations under Title VI.  These are applications that “(i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’).”[22]  By imposing disclosure requirements, the Draft Regulation aims to help users “make informed choices or step back from a given situation” when interacting with AI systems that recognize their emotions or other characteristics, or when using machine-generated content.[23]

Minimal or No Risk Use

Finally, the European Commission makes clear that the Draft Regulation permits the use of minimal-risk AI systems without additional obligations—i.e., existing AI systems that are not specifically addressed in the Draft Regulation.  Applications falling under this category cover “[t]he vast majority of AI systems currently used in the EU,” including AI-enabled video games and spam filters.[24] 

c. Enforcement and Penalties

A breach of the prohibitions on certain AI practices, or failure to put in place a compliant data governance program for high-risk AI systems, will trigger a fine of up to €30 million or 6 percent of the infringer company’s global annual revenue, whichever is higher.  This is significantly higher than the current maximum penalty of €20 million or 4 percent of global annual revenue under the GDPR.[25]

All other types of noncompliance with the new regulation will result in a fine of up to €20 million or 4 percent of global annual revenue, whichever is higher.  Supplying incorrect, incomplete, or false information to notified entities and national authorities could also result in fines of up to €10 million or 2 percent of global annual revenue, whichever is higher.[26]

Enforcement of the new regulation will occur through a “governance system” at the EU Member State level by one or more designated national competent authorities, and a “cooperation mechanism” at the EU level by a European Artificial Intelligence Board, composed of representatives from the national supervisory authorities, the European Data Protection Supervisor, and the European Commission.[27]  In addition, the Draft Regulation creates a framework for the creation of voluntary codes of conduct for non-high-risk uses[28] and regulatory sandboxes,[29] a controlled environment where companies (small and medium-sized companies and start-ups in particular) can “test innovative technologies for a limited time,” in order to facilitate innovation and ease the regulatory burden.[30]

III. Practical Implications

As discussed above, the Draft Regulation is focused on putting in place mechanisms for standardized regulation of high-risk AI systems.  But a wide range of “high-risk” AI systems are already being used in regulated sectors—such as aviation, cars, boats, elevators, medical devices, and industrial machinery—and are subject to existing conformity assessment processes performed by sectoral regulators, such as the EU Aviation Safety Agency for planes, or by a mix of approved third-party organizations and a central EU body in the case of medical devices.[31]  A report by the Brookings Institution claims that the Draft Regulation would thus “only change[] the specifics of this oversight but not its scope or process.”[32]  It will, however, affect how the private sector uses “stand-alone” AI systems—AI applications in hiring, employee management, access to education, and credit scoring.  The changes we can expect to see in these two categories of high-risk AI systems, as well as under the new transparency requirement, are described below.

d. High Risk AI Systems Already Covered by Product Safety Regulations

Although the Draft Regulation is not expected to impose additional substantive obligations on companies selling products that are already regulated in the EU, the changes to their procedural obligations will not be trivial.  Because the Draft Regulation sets a “new and higher floor for considering AI systems in products,” rather than focusing on the broader function of the product, companies will have to implement, for example, a new risk management and documentation process that assesses, documents, and monitors the AI systems within the regulated products.[33]  A study requested by the European Commission suggests that compliance costs associated with these new requirements could be close to €10,000, with an additional €30,000 for added services or staff.[34]  Once companies implement these costly changes (particularly those that manufacture products via large-scale industrialized processes), they will likely have a strong incentive to oppose any regulations anywhere else in the world that are inconsistent with the Draft Regulation’s requirements.[35]

From consumers’ perspective, however, such requirements amount only to an internal check-off with no significant public oversight.  Although the AI system providers must make available a “declaration of conformity” and are subject to relevant regulators’ review/audit, the public will not be able to review an audit report or the companies’ declarations.[36]  “Instead, the public gets a ‘mark’ affixed to the AI system indicating compliance with the rules.”[37]  Some of these shortcomings may be addressed through the lengthy legislative process in the upcoming months.[38]

e. “Stand-Alone” High Risk AI Systems

This second category of high-risk AI systems does not currently fall under the existing EU regulatory framework, but is identified by the Draft Regulation (and listed in Annex III) as posing high risks to human rights.  For private sector use (e.g., hiring, access to education, credit scoring, etc.),[39] the Draft Regulation will most significantly affect AI systems that are incorporated into geographically dispersed platforms rather than localized software.[40]  For example, under the Draft Regulation, an entirely interconnected platform with no geographic boundaries, such as LinkedIn, would have to change its “modeling process or function, as well as the documentation and analysis of the AI system.”[41]  This would also mean that any discrepancy between the Draft Regulation standards and regulations from another country would create an immediate issue for LinkedIn, “possibly requiring [it] to create different risk management processes, try to minimize varying metrics of bias, or record multiple different entries for record-keeping.”[42]  Microsoft, as a parent company, would thus have an incentive to resist any legislative requirements in other countries that are not consistent with the Draft Regulation.[43]

On the other hand, the Draft Regulation will not significantly change how companies operate stand-alone AI systems that are not connected to dispersed platforms and are built on local data.  For instance, a multinational company that outsources the development of AI hiring systems to analyze resumes and evaluate job candidates may build different AI systems for different job offerings in different geographic regions, and would have to comply with the new regulation only in EU countries.[44]  But if the software is built into an online platform, companies will have to evaluate whether it will be used outside of the EU, and may end up creating a uniform oversight and management process to enable universal use.[45] 

Further, commentators have noted that, unlike the first category of high-risk AI systems, the “stand-alone” AI systems require only industry self-assessment, which could create divergent approaches to compliance.  To address this concern, the European Economic and Social Committee (EESC), a consultative body to the European Commission, has recommended imposing third-party assessment obligations for all high-risk AI systems.[46]  This will, however, increase the compliance burden and costs for companies.

Finally, Annex III does not cover AI “algorithms used in social media, search, online retailing, app stores, mobile apps or mobile operating systems as high risk.”[47]  Thus, the Draft Regulation (so far) leaves untouched the major Big Tech companies that have been the focus of intense public criticism for their use of AI-driven algorithms in recent years.

f. Transparency Requirement

The Draft Regulation’s disclosure requirements for limited-risk applications appear to be minimal.  For example, chatbots used for websites and apps (if they are not categorized as high-risk because they are advising on eligibility for a loan or public service) will likely need to display only a small notification disclosing that the chatbot is an AI system.[48]  But the Draft Regulation is unclear on exactly what must be disclosed.  It requires that users be informed when they “interact with” an AI system, or when their emotions or other human characteristics (e.g., gender, race, ethnicity, sexual orientation, etc.) are being “recognized” by an AI system, but it does not appear to require disclosure when people are algorithmically sorted for public benefits, credit, education, or employment.[49]

g. Other Concerns

Commentators from around the world have also raised various other concerns regarding the scope and burden of compliance, particularly on small and medium enterprises (SMEs).  For example, a group of SMEs in the EU argues that the Draft Regulation’s standardized and highly regulated approach will push SMEs, which often provide tailor-made (rather than mass-produced) software for clients, out of the market, as they will not be able to comply with the conformity assessment requirement in advance.  They claim that this will stifle innovation and competition in the AI market.[50]  SMEs also argue that conformity assessment and compliance costs will be prohibitive in general.[51]  Some of these concerns may be addressed by the legislative process.

IV. What’s Next?

Since the unveiling of the regulation, the European Parliament has received over three thousand proposed amendments.[52]  The proposals range from the definition of AI to the expansion of the scope to include AI in the metaverse and a potential complete ban on facial recognition.[53]  The negotiations are led jointly by two European parliamentary committees—the Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees—which produced a report last April.[54]  After the debate and amendment of this joint report, the European Parliament is expected to engage in final negotiations with the EU member states later this year.[55]  Although it is hard to predict exactly when the regulation will become effective, it could be passed into law as early as the first half of 2023,[56] after which it will be subject to a two-year implementation period.[57]


If you have any questions about the issues addressed in this memorandum, or if you would like a copy of any of the materials mentioned in it, please do not hesitate to reach out to: 

John Quinn
Phone: 213-443-3200

Kate Vernon
Phone: +44 20 7653 2002

Linda Moon
Phone: 212-849-7655




[1]   Proposal for a Regulation laying down harmonized rules on artificial intelligence, European Commission (Apr. 21, 2021); Alex Engler, The EU AI Act will have global impact, but a limited Brussels Effect, Brookings Institution (June 8, 2022).

[2]   Explanatory Memorandum 3, 6 (Apr. 21, 2021),

[3]   Art. 2.

[4]   Art. 2.

[5]   James Alford, et al., GDPR and Artificial Intelligence, Regulatory Review (May 9, 2020),

[6]   Adam Satariano, Europe Proposes Strict Rules for Artificial Intelligence, N.Y. Times (Apr. 21, 2021).

[7]   Id.; Kashmir Hill, How One State Managed to Actually Write Rules on Facial Recognition, N.Y. Times (Feb. 27, 2021).

[8]   Art. 3(1).

[9]   Explanatory Memorandum, at 12.

[10]   Id. at 12.

[11]   Art. 2.

[12]   Explanatory Memorandum, at 39.

[13]   Explanatory Memorandum, at 12-15; Regulatory framework proposal on artificial intelligence, supra n.1.

[14]   Regulatory framework proposal on artificial intelligence, supra n.1.

[15]   But such use is permitted where “an express and specific authori[z]ation by a judicial authority or by an independent administrative authority of a Member State” is provided.  Explanatory Memorandum, at 22. 

[16]   Art. 6; Explanatory Memorandum, at 13.

[17]   Id. at 13-14, 30, 34, 46-52; Regulatory framework proposal on artificial intelligence, supra n.1.  See also Simmons & Simmons, Quick guide to the EU draft AI regulation 4 (Apr. 2021),

[18]   Explanatory Memorandum, at 52-56.

[19]   Id. at 13.

[20]   Annex III; Regulatory framework proposal on artificial intelligence, supra n. 1.

[21]   Explanatory Memorandum, at 56-69.

[22]   Explanatory Memorandum, at 14.

[23]   Id.

[24]   Regulatory framework proposal on artificial intelligence, supra n. 1.

[25]   Katie Collins, GDPR Fines: The Biggest Privacy Sanctions Handed Out So Far, CNET (Feb. 21, 2022).  For example, in 2021, Amazon was fined €756 million ($775 million) for breaching the GDPR, and Google was fined €4.3 billion ($4.5 billion) in 2018 for breaching the EU’s antitrust laws.  Melissa Heikkilä, A quick guide to the most important AI law you’ve never heard of, MIT Technology Review (May 13, 2022).

[26]   Explanatory Memorandum, at 82.

[27]   Id. at 3, 15, 72-74.

[28]   Id. at 16, 36, 80-81.

[29]   In June 2022, the European Commission and the Spanish government presented a pilot of the first regulatory sandbox on AI at an event held in Brussels.  European Commission, First regulatory sandbox on Artificial Intelligence presented (June 27, 2022).  The Commission states that the regulatory sandbox is “a way to connect innovators and regulators and provide a controlled environment for them to cooperate,” which would “facilitate the development, testing and validation of innovative AI system with a view to ensuring compliance with the requirements of the [Draft] regulation.”  Id.  The sandbox initiative is “expected to generate easy-to-follow, future-proof best practice guidelines and other supporting materials” and could help companies to successfully implement the new rules.  Id.  This initiative will remain open to all EU Member States.  Id.  The tests begin in October 2022, with a budget of approximately €4.3 million over about three years, and will result in a publication in the second half of 2023.  Id.

[30]   Explanatory Memorandum, at 3, 15, 34-35.

[31]   Engler, supra n. 1.

[32]   Id.

[33]   Id. (emphasis added).

[34]   Dimitar Lilkov, Regulating Artificial Intelligence in the EU: A Risky Game, 20(2) European View 166-174 (2021).

[35]   Engler, supra n. 1.

[36]   Mark MacCarthy & Kenneth Propp, Machines learn that Brussels writes the rules: The EU’s new AI regulation, Brookings Institution (May 4, 2021),

[37]   Id.

[38]   Id.

[39]   The Draft Regulation also covers certain government use of high-risk AI systems (e.g., judicial decision-making, border control, law enforcement, etc.), but such use is not likely to have a global impact.  Engler, supra n. 1.

[40]   Id.

[41]   Id.

[42]   Id.

[43]   Id.

[44]   Id.

[45]   Id.

[46]   Josh Meltzer & Aaron Tielemans, The European Union AI Act 4, Brookings Institution (May 2022),

[47]   MacCarthy & Propp, supra n. 36.

[48]   Engler, supra n. 1.

[49]   Id.

[50]   Fact Sheet: AI Act & SMEs, European Digital SME Alliance 1 (March 2022),

[51]   Id.

[52]   ECOMMERCE EUROPE, State of play on the Artificial Intelligence Act (June 30, 2022),

[53]   Luca Bertuzzi, AI regulation filled with thousands of amendments in the European Parliament, EURACTIV (June 2, 2022),

[54]   European Parliament, Draft Report (Apr. 20, 2022),

[55]   Kasper Peters, Artificial Intelligence: CCIA Welcomes European Parliament Progress on EU AI Act, Urges Further Improvements, Computer & Communications Industry Association (Apr. 22, 2022).  The negotiations among EU member states are ongoing on the European Council side as well.  See ECOMMERCE EUROPE, supra n. 52.

[56]   Expert explainer: The EU AI Act Proposal, Ada Lovelace Institute, (last accessed Aug. 18, 2022).  The GDPR was proposed in 2012, adopted in 2016, and went into effect in 2018.  Thus, it took the GDPR about four years to be finalized and another two to become binding.  Luciano Floridi, The European Legislation on AI: a Brief Analysis of its Philosophical Approach, Philos. Technol. 34, 215–222 (2021).

[57]   European Union directives (Mar. 16, 2022); Lori Witzel, 5 Things You Must Know Now About the Coming EU AI Regulation, Medium (Sept. 17, 2021).