In recent years, the European Union (“EU”) has intensified its focus on regulating artificial intelligence ("AI"), becoming the first jurisdiction to adopt a risk-based regulatory framework in the form of the Artificial Intelligence Act (“EU AI Act” or “Act”).[1],[2] As a result, companies operating within the AI ecosystem and offering products and services in the EU must gradually adapt their operations to comply with the Act’s requirements. This note focuses on the obligations pertaining to general-purpose artificial intelligence (“GPAI”) models — models capable of performing a wide array of tasks with limited human intervention and fine-tuning — examines the practical implications of their regulatory treatment, and offers a handful of predictions.
I. GPAI Models Subject to the EU AI Act
Article 3(63) of the EU AI Act defines a GPAI model as an AI model “that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.” In other words, a GPAI model, often developed by foundation model providers, is the result of training an algorithm on data. The resulting model is then integrated into AI systems (i.e., software and infrastructure capable of generating outputs) and placed on the market to accomplish varied objectives without needing to be rebuilt or extensively retrained for each new task. Models used solely for research, development or prototyping activities before being placed on the market do not qualify as GPAI models under the EU AI Act.[3]
According to the guidelines published by the European Commission (the “Commission”) on 18 July 2025,[4] a model is likely to qualify as a GPAI model if (i) the amount of computational resources used to train it exceeds 10²³ floating-point operations (“FLOP”); and (ii) it is capable of generating language (text or audio), text-to-image, or text-to-video outputs.[5] This threshold would therefore not only encompass well-known Large Language Models (“LLMs”) such as GPT-4, Claude Sonnet 4.5, Gemini 3 Pro or Llama 4, but also capture other specialised systems that companies may not initially have considered to be general-purpose. By contrast, models trained on at least 10²⁴ FLOP but exclusively for narrowly defined tasks (e.g., playing chess or transcribing speech to text) would not be considered sufficiently general to qualify as GPAI models.[6]
The question whether a given AI model is sufficiently general and versatile to qualify as a GPAI model under the EU AI Act is to be assessed on a case-by-case basis, and we expect it to be the subject of debate and litigation.
The EU AI Act distinguishes GPAI models based on the level of risk they present. A GPAI model will be classified as posing systemic risk either if it has “high-impact capabilities,”[7] or if the Commission designates it as such according to the specific criteria laid down in Annex XIII of the Act (e.g., the quality or size of the dataset, the number of registered users, or the number of parameters of the model).[8] The Commission will adopt delegated acts defining the “appropriate technical tools and methodologies, including indicators and benchmarks” for evaluating such “high-impact capabilities” of GPAI models, although AI models with a cumulative amount of computation used for training exceeding 10²⁵ FLOP are already presumed to possess such capabilities.[9]
Providers of GPAI models with high-impact capabilities must notify the Commission within two weeks after that condition is met or, if they can reasonably foresee that it will be met, prior to training the model.[10] Providers also have the right to contest the classification of their GPAI model as posing systemic risk by submitting evidence refuting that the model has “high-impact capabilities,” or by demonstrating that, despite having such capabilities, the model does not present any systemic risk.[11] Such arguments may be submitted simultaneously with the notification itself. By contrast, when the classification results from a designation decision adopted by the Commission pursuant to Article 51(1)(b) of the EU AI Act, the provider must wait at least six months before requesting the Commission to reassess that decision.[12] In both cases, the Commission retains broad discretion to accept or reject such rebuttals.[13] Moreover, these challenges do not suspend the application of the obligations referred to in Section II below.
The EU AI Act thus imposes onerous obligations on companies, which must continuously invest in accurate monitoring of floating-point operations across their training runs, self-assess if and when their GPAI model’s computational usage is likely to trigger notification obligations under the EU AI Act, maintain precise records for potential regulatory scrutiny, and monitor potential changes in the criteria used by the Commission to qualify a model as general-purpose.[14] It also remains to be seen whether the Commission, which has a statutory duty to update the thresholds classifying GPAI models as posing systemic risk,[15] will continue to use FLOP as the principal metric, particularly in light of examples such as DeepSeek, which reportedly built its LLM with more limited access to computing power. Similar questions arise in relation to a large number of models which, although they have exceeded or may in the future exceed the FLOP threshold, are not nearly as large as the well-known LLMs and are seeing their regulatory compliance costs increase significantly as a result of the EU AI Act, in turn hindering their competitiveness.
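By way of illustration only, the sketch below shows how a provider might run a first-pass screening of a model’s estimated training compute against the thresholds discussed above, using the commonly cited rule of thumb that training compute is roughly 6 × (number of parameters) × (number of training tokens). The approximation, the example figures and the function names are our own assumptions, not a methodology prescribed by the Act or the Commission’s guidelines, and compute alone is in any event not decisive.

```python
# Illustrative, non-authoritative sketch: first-pass screening of a model's estimated
# training compute against the EU AI Act's indicative thresholds. The 6 * params * tokens
# rule of thumb and all example figures are assumptions, not the Commission's methodology.

GPAI_INDICATIVE_THRESHOLD_FLOP = 1e23   # indicative GPAI criterion (Commission guidelines)
SYSTEMIC_RISK_PRESUMPTION_FLOP = 1e25   # presumption of high-impact capabilities (Art. 51(2))


def estimate_training_compute_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute in FLOP (not FLOP per second)."""
    return 6.0 * n_parameters * n_training_tokens


def screen_model(n_parameters: float, n_training_tokens: float) -> dict:
    compute = estimate_training_compute_flop(n_parameters, n_training_tokens)
    return {
        "estimated_training_compute_flop": compute,
        # Compute is only an indicative criterion: generality and output modalities
        # must also be assessed before concluding that a model is a GPAI model.
        "may_be_gpai_model": compute > GPAI_INDICATIVE_THRESHOLD_FLOP,
        # Exceeding 10^25 FLOP triggers the presumption of high-impact capabilities
        # and, with it, the notification obligation towards the Commission.
        "presumed_high_impact_capabilities": compute > SYSTEMIC_RISK_PRESUMPTION_FLOP,
    }


if __name__ == "__main__":
    # Hypothetical 70-billion-parameter model trained on 2 trillion tokens.
    print(screen_model(n_parameters=7e10, n_training_tokens=2e12))
```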
II. Obligations on GPAI Model Providers
GPAI model providers must comply with the obligations laid down in Articles 53 and 55 of the EU AI Act as soon as the model meets the conditions laid down in Article 51(1)(a), or when the provider is informed of the Commission’s designation decision.[16]
A “provider” is defined broadly as any natural or legal person, public authority, agency or other body that develops a GPAI model or that has a GPAI model developed, and places it on the market.[17] The notion of “placing on the market” is equally broad and refers to the first time the GPAI model is made available for commercial or non-commercial distribution in the EU through any means (e.g., via a software package, via an application programming interface, as a physical copy, or via a cloud computing service).[18] Entities active downstream that modify existing GPAI models also qualify as GPAI model providers if the modification leads to a “significant change in the model’s generality, capabilities or systemic risk.”[19] This will likely be the case when the training compute used for the modification is greater than one third of the training compute used to develop the original GPAI model.[20]
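As a rough illustration of that indicative one-third criterion, the short sketch below compares the compute used for a downstream modification (e.g., a substantial fine-tune) with the training compute of the original model; the figures and the helper’s name are hypothetical.

```python
# Illustrative sketch of the indicative one-third criterion for downstream modifiers
# (Guidelines on GPAI models, para. 63). All figures below are hypothetical.

def modification_makes_entity_a_provider(
    original_training_compute_flop: float,
    modification_compute_flop: float,
) -> bool:
    """True if the modification compute exceeds one third of the original training compute."""
    return modification_compute_flop > original_training_compute_flop / 3.0


# Example: original model trained with ~9e24 FLOP, fine-tune using ~4e24 FLOP.
print(modification_makes_entity_a_provider(9e24, 4e24))  # True: the modifier likely becomes a GPAI provider
```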
Given the complexity of such models and the variety of actors involved both during their development and marketing phases, there is little doubt that the Commission’s case-by-case identification of companies as GPAI model providers will be the subject-matter of dispute and litigation.[21]
GPAI model providers have an obligation to (i) draw up and maintain the technical documentation of their model (including on how they train and test it, as well as the results of its evaluation); (ii) make available certain information to providers of AI systems that plan to integrate the model into their own AI systems; (iii) establish a policy to comply with EU copyright law; and (iv) publish a detailed summary of the content used for training their model.[22]
The EU AI Act provides for exemptions from the first two obligations for “providers of GPAI models that are released under a free and open-source licence that allows for the access, usage, modification and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available.”[23] It follows that models provided under a dual-licensing model (e.g., requiring payment for commercial usage or for usage above a certain scale) or subject to distribution restrictions would not benefit from the exemption.
Providers of GPAI models that pose systemic risk must comply with an additional set of burdensome obligations, as set out in Article 55 of the EU AI Act, which include (i) the performance of model evaluations to assess the model’s capabilities, propensities, affordances and effects;[24] (ii) the assessment and mitigation of systemic risks; (iii) the reporting of serious incidents to relevant authorities; (iv) the adoption of corrective measures; and (v) the implementation of an appropriate level of cybersecurity for the model and its physical infrastructure.[25] Moreover, GPAI models that pose systemic risk cannot benefit from the “open-source” exemptions mentioned above.[26]
Obligations concerning GPAI models have applied since 2 August 2025. However, models that were placed on the market prior to that date have until 2 August 2027 to comply with Chapter V of the Regulation (i.e., the obligations for providers of GPAI models).[27] In addition, providers of GPAI models placed on the market before 2 August 2025 are not required to retrain or unlearn models where this would be unfeasible or represent a disproportionate burden, provided that such instances are disclosed and justified both in the copyright policy and in the summary of the content used for training.[28] Providers of models placed on the market on or after 2 August 2025 will need to swiftly assess their regulatory exposure, seek support from the AI Office, and implement appropriate compliance measures before the Commission’s enforcement powers become applicable in August 2026. The consequences of non-compliance may be severe, as penalties of up to EUR 15 million or 3% of the company’s total worldwide annual turnover in the preceding financial year, whichever is higher, can be imposed for breaches of the aforementioned obligations.
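Purely by way of illustration of how that cap operates, the snippet below computes the maximum fine as the higher of the two amounts; the turnover figure used is hypothetical.

```python
# Illustrative only: maximum fine for GPAI model providers under the EU AI Act,
# i.e. the higher of EUR 15 million or 3% of total worldwide annual turnover
# in the preceding financial year. The turnover figure below is hypothetical.

def max_gpai_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion worldwide turnover: cap of EUR 60 million.
print(max_gpai_fine_eur(2_000_000_000))  # 60000000.0
```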
III. The GPAI Code of Practice
To help companies comply with the obligations contained in Articles 53 and 55 of the EU AI Act concerning GPAI models, the Commission published on 10 July 2025 a GPAI Code of Practice. According to the Commission, voluntary adherence to the Code will reduce the administrative burden on businesses and, compared with demonstrating compliance through other methods, provide greater legal certainty and “trust from the Commission and other stakeholders.”[29] Moreover, commitments given in the context of the Code may be taken into account as mitigating factors in determining the amount of fines ensuing from potential future violations.[30]
The Code contains 12 commitments (each supported by corresponding measures) across three chapters: (i) “Transparency,” which introduces, inter alia, a standardised “Model Documentation Form” comprising relevant information to be shared automatically with business customers integrating the model into AI systems and with regulatory authorities upon formal request; (ii) “Copyright,” which requires, for instance, the use of web crawlers that follow standard instructions to respect rightsholders’ preferences when collecting data, or the implementation of safeguards against verbatim reproductions of protected works from the model’s training data; and (iii) “Safety and Security,” which applies only to providers of GPAI models with systemic risk and includes measures such as implementing technical safeguards or conducting ongoing post-market monitoring of the model’s use and potential unintended consequences. Nor can signatories pick and choose among chapters without consequence, as “[a]ny opt-out from chapters of the code of practice results in losing the benefits of facilitating the demonstration of compliance in that respect.”[31]
Several leading tech companies, including Amazon, Anthropic, Google, IBM, Microsoft, OpenAI, Aleph Alpha, and Mistral AI, have already signed up to the Code.[32] By contrast, Meta has indicated that it does not intend to sign the Code,[33] while xAI signed up only to the third chapter of the Code (“Safety and Security”). Given that companies choosing not to sign up to the Code, or opting out of certain chapters, must rely on other methods to demonstrate compliance, the Commission is likely to scrutinise them more rigorously.
IV. Predictions
Enforcement of the EU AI Act remains at a very nascent stage. Several of its provisions — particularly those governing the definition and classification thresholds for GPAI models — are likely to provoke ongoing debate and may evolve as technology and market conditions shift. The Commission is expected to refine its methodological tools and thresholds over time, especially as industry stakeholders raise concerns about the disproportionate burden the Act places on emerging players.
Against the backdrop of the EU’s broader digital competitiveness agenda, including the Digital Omnibus on AI proposal published on 19 November 2025, enforcement authorities — notably the AI Office, which stands to gain expanded powers — are unlikely to adopt an overly rigid posture in these initial stages. Instead, they are expected to favour a more collaborative approach with GPAI model providers, seeking to balance innovation, competitiveness, and systemic risk mitigation. Nevertheless, companies should remain vigilant, continuously self-assess their compliance, and closely monitor regulatory developments.
***
If you have any questions about the issues addressed in this memorandum, or if you would like a copy of any of the materials mentioned in it, please do not hesitate to reach out to:
Marixenia Davilla
Partner
Brussels
Marixeniadavilla@quinnemanuel.com Tel: +32 2 416 50 13
Nicolas Papageorges
Associate
Brussels
Nicolaspapageorges@quinnemanuel.com Tel: +32 2 415 50 15
END NOTES:
[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
[2] See also our previous alert: https://www.quinnemanuel.com/the-firm/publications/artificial-intelligence-eu-regulation-and-competition-law-enforcement-addressing-emerging-challenges/.
[3] EU AI Act, Art. 3(63).
[4] Annex to the Communication of the Commission – Approval of the content of the draft Communication from the Commission – Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act) (the “Guidelines on GPAI models”), C(2025)5045 final, 18 July 2025.
[5] Id., paras 16-20. While the Commission considers “training compute” to be the most suitable approach at present, it recognises that this remains an imperfect proxy for generality and capabilities and that it may change its approach as technology and the market evolve. It follows that the Commission may choose to depart from this threshold depending on the model’s capability to competently perform a wide range of distinct tasks.
[6] See id., paras 20-21 and related examples.
[7] EU AI Act, Art. 51(1)(a).
[8] Id., Art. 51(1)(b).
[9] Id., Arts 51(2) and (3).
[10] Id., Art. 52. See also Guidelines on GPAI models, paras 28-32.
[11] Para. 35 of the Guidelines on GPAI models specifies that “providers should include information available to them at the time of notification about the model’s achieved or anticipated capabilities, including in the form of actual or forecasted benchmark results (for example based on scaling analyses)” and that providers “are strongly advised to also include any other information that may have a bearing on the model’s capabilities, such as model architecture, number of parameters, number of training examples, data curation and processing techniques, training techniques, input and output modalities, expected tool use, and expected context length.”
[12] EU AI Act, Art. 52(5). In so doing, the providers should put forward “objective, detailed and new reasons that have arisen since the designation decision.”
[13] See Guidelines on GPAI models, paras 33-42 and 47.
[14] With respect to the training compute threshold, para. 16 of the Guidelines on GPAI models states that “the Commission’s approach may change in the future as technology and the market evolve” and that it “will continue to investigate the availability of other criteria that could be used to assess generality and capabilities with relative ease, especially for smaller actors.”
[15] According to Art. 51(3) of the EU AI Act, “[t]he Commission shall adopt delegated acts in accordance with Article 97 to amend the thresholds listed in paragraphs 1 and 2 of this Article, as well as to supplement benchmarks and indicators in light of evolving technological developments, such as algorithmic improvements or increased hardware efficiency, when necessary, for these thresholds to reflect the state of the art.” See also Guidelines on GPAI models, para. 29.
[16] Id., paras 41 and 45.
[17] EU AI Act, Art. 3(3).
[18] Guidelines on GPAI models, paras 49 et seq.
[19] Guidelines on GPAI models, para. 62.
[20] Id., para. 63.
[21] See, for instance, para. 51 of the Guidelines on GPAI models explaining that “[i]f a collaborative or consortium has a general-purpose AI model developed for it by different individuals or organisations and places the model on the market, then usually the coordinator of the collaborative or the consortium is the provider. Alternatively, the collaborative or the consortium might be the provider. This must be assessed on a case-by-case basis.”
[22] EU AI Act, Art. 53.
[23] Id., Art. 53(2). See also Guidelines on GPAI models, para. 72.
[24] The GPAI Code of Practice provides the following model evaluation methods: Q&A sets, task-based evaluations, benchmarks, red-teaming and other methods of adversarial testing, human uplift studies, model organisms, simulations, and/or proxy evaluations for classified materials.
[25] Appendix 4 to the GPAI Code of Practice specifies the security mitigation objectives to be met such as the prevention of unauthorised network access, the reduction of the risk of social engineering or the reduction of the risk of malware infection and malicious use of portable devices.
[26] EU AI Act, Art. 53(2).
[27] Id., Arts 111(3) and 113(b).
[28] See Guidelines on GPAI models, para. 111.
[29] See https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai. See also Guidelines on GPAI models, paras 94-100.
[30] Guidelines on GPAI models, paras 94-96.
[31] Id., para 94.
[32] See https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai.
[33] See Euronews, “Meta won’t sign EU’s AI Code, but who will?,” 23 July 2025, available at: https://www.euronews.com/my-europe/2025/07/23/meta-wont-sign-eus-ai-code-but-who-will.