
Artificial Intelligence, EU Regulation and Competition Law Enforcement: Addressing Emerging Challenges

June 02, 2025
Firm Memoranda

Artificial Intelligence (“AI”)[1] has emerged as a uniquely powerful tool that promises to revolutionise businesses and society alike in unprecedented ways. It is trite to observe that companies increasingly make use of AI to conduct business, and that consumers do so in their day-to-day lives. At the forefront of this technological revolution is generative AI,[2] which has garnered worldwide attention as a result of its use in AI chatbots that rely on natural language processing to create human-like conversational dialogue.

AI is rightly viewed as a colossal and unmissable investment opportunity by both private and public actors. Notably, the European Commission (the “Commission”) announced on 11 February 2025 its InvestAI initiative, which aims to secure EUR 200 billion for investments in AI, including a EUR 20 billion fund to invest in AI gigafactories.[3] Similarly, President Trump announced on 21 January 2025 a private sector investment of up to USD 500 billion in AI infrastructure. The project, called Stargate, will take the form of a joint venture between OpenAI, Oracle and Softbank.[4]

But because the immense power of AI can be used not only for good but also for ill, regulators around the world are in the process of adopting rules that seek to mitigate the risks arising from some of the most nebulous aspects of its deployment and use. And AI is also increasingly in the crosshairs of antitrust enforcers on both sides of the Atlantic. The present note discusses the key European Union (“EU”) regulations and competition law enforcement priorities in this rapidly evolving field.

I. EU Regulatory Framework Governing AI

(i) The EU AI Act

The EU has been a pioneer in AI regulation with the adoption on 13 June 2024 of the Artificial Intelligence Act (the “EU AI Act”), the first-ever comprehensive legal framework on AI worldwide.[5]

The EU AI Act imposes obligations on six categories of economic actors active in the AI sector, namely providers, importers, distributors, product manufacturers and deployers of AI systems, as well as appointed authorised representatives. These are collectively referred to as “operators”.[6] The regulation has extraterritorial effect. As such, it applies not only to businesses established in the EU, but also to those based outside the EU if their AI systems are placed on the EU market, used within the EU, or if their outputs are intended for use within the EU. Non-EU providers of both AI systems and general-purpose AI (“GPAI”) models are therefore required to appoint an authorised representative established in the Union to ensure compliance with the EU AI Act’s requirements.[7]

The EU AI Act adopts a risk-based approach that differentiates between AI systems placed on the market or deployed in the EU according to the level of risk they pose, namely: (i) unacceptable risk; (ii) high risk; (iii) limited risk; (iv) minimal risk; and (v) in the case of GPAI models, systemic risk.

The EU AI Act prohibits outright those AI practices that are considered as posing an “unacceptable risk”.[8] These include AI systems that manipulate people’s decisions or exploit their vulnerabilities,[9] that evaluate or classify people based on their social behaviour or personal characteristics (social scoring),[10] that predict a person’s risk of committing a criminal offence,[11] that create or expand facial recognition databases by scraping images from the internet or CCTV footage,[12] that infer emotions in the workplace or educational institutions (unless they serve a medical or safety purpose),[13] that categorise people based on their biometric data,[14] or that enable real-time remote biometric identification.[15] This prohibition of AI practices entailing an “unacceptable risk” came into force on 2 February 2025. Failure to comply with it will attract a fine of up to EUR 35 million or 7% of the operator’s total worldwide annual turnover in the preceding financial year, whichever is higher.[16] The EU AI Act does provide, however, for a limited exception to the outright prohibition of ‘real-time’ remote biometric identification systems: such systems are permissible if they are necessary for law enforcement purposes, and provided that proper safeguards, including prior judicial or administrative authorisation, are in place to ensure the protection of fundamental rights.[17]
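Expressed as a simple formula (a worked illustration only, not legal advice): under Article 99(3), the applicable ceiling is whichever of the two amounts is higher, while Article 99(6) applies the lower of the two in the case of SMEs. Denoting by T the operator’s total worldwide annual turnover in the preceding financial year:

    \[
    \text{maximum fine} = \max\bigl(\text{EUR } 35{,}000{,}000,\ 0.07 \times T\bigr)
    \]

For a hypothetical undertaking with T = EUR 2 billion, the turnover-based amount is EUR 140 million, which exceeds EUR 35 million, so EUR 140 million would be the applicable ceiling.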

“High-risk” AI systems are those considered to pose a significant risk of harm to health, safety or fundamental rights. They include AI systems that are used as a product, or as a safety component of a product, where the product in question is required to undergo a third-party conformity assessment under the legislation referenced in Annex I of the EU AI Act,[18] which includes, inter alia, legislation governing the safety of toys, medical devices and lifts.[19] Other AI systems, such as those intended to be used for biometric identification, biometric categorisation, emotion recognition, as safety components in the management and operation of critical infrastructure, or by law enforcement authorities,[20] also fall within the “high-risk” category.[21] All providers and deployers of such “high-risk” AI systems must comply with strict obligations, including the implementation of a risk management system, quality controls, record-keeping and transparency obligations, and a degree of human oversight.[22] These obligations will start applying as of 2 August 2026 for the AI systems referred to in Annex III, and as of 2 August 2027 for those referred to in Annex I of the EU AI Act.[23] Penalties of up to EUR 15 million or 3% of the company’s total worldwide annual turnover in the preceding financial year can be imposed in case of non-compliance.[24]

AI systems that do not fall within the prohibited or “high-risk” categories and are considered as presenting limited risks will only be subject to information and transparency obligations as of 2 August 2026.[25] These “limited-risk” AI systems include those intended to interact with natural persons or to generate content, which may pose specific risks of impersonation or deception. For instance, the regulation provides that users must be made aware that they are interacting with a chatbot. Similarly, deployers of AI systems that generate or manipulate image, audio or video content (i.e., deep fakes) must disclose that such content has been artificially generated or manipulated (with very limited exceptions, including instances in which such content is used to prevent a criminal offence).[26] Providers of AI systems that generate large quantities of synthetic content must implement sufficiently reliable, interoperable, effective and robust techniques and methods (e.g., watermarks) to ensure that it can be easily determined that the output has been generated or manipulated by an AI system and not by a human being. Moreover, employers must inform workers and their representatives of the deployment of AI systems in the workplace. Non-compliance with these transparency obligations can lead to fines similar to those applicable to “high-risk” AI systems.[27]
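Purely by way of illustration of the machine-readable marking idea, the sketch below (in Python) attaches a provenance record disclosing that a piece of content is AI-generated. The helper, its field names and the model identifier are hypothetical and are not drawn from the EU AI Act; production systems would rely on robust, standardised techniques (such as cryptographically signed provenance manifests or statistical watermarks) rather than a detachable metadata record.

    # Illustrative only: a toy machine-readable provenance record for AI-generated content.
    # Field names and this helper are hypothetical, not prescribed by the EU AI Act.
    import hashlib
    import json
    from datetime import datetime, timezone

    def label_ai_output(content: bytes, model_name: str) -> dict:
        """Attach a machine-readable record disclosing that content is AI-generated."""
        return {
            "content_sha256": hashlib.sha256(content).hexdigest(),  # ties the record to the content
            "generated_by_ai": True,                                # the core disclosure
            "model": model_name,                                    # hypothetical model identifier
            "created_utc": datetime.now(timezone.utc).isoformat(),
        }

    print(json.dumps(label_ai_output(b"synthetic image bytes", "example-model"), indent=2))

A real-world marking would need to survive copying and format conversion, which is precisely why the regulation insists that the techniques used be reliable, interoperable and robust.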

AI systems that present minimal risks for individuals (e.g., spam filters) are not subject to any additional obligations and are required simply to comply with other legislation already in force such as the General Data Protection Regulation (the “GDPR”).[28]

Finally, the EU AI Act contains special rules for GPAI[29] models and for GPAI models that pose systemic risks.[30] GPAI models are subject to transparency obligations (e.g., the obligation to make detailed summaries of training data sets publicly available) and to obligations under applicable EU copyright law.[31] In addition, the providers of GPAI models posing systemic risk are required constantly to assess and mitigate the risks they pose[32] and to ensure cybersecurity protection by, inter alia, documenting and reporting serious incidents (e.g., violations of fundamental rights) to the AI Office and implementing corrective measures.[33] Non-compliance with these obligations attracts the same level of penalties as those applicable to “high-risk” and “limited-risk” AI systems.[34]
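Schematically, the risk-based architecture described above can be summarised as a mapping from risk tier to the corresponding obligations. The Python sketch below is purely illustrative: the tier labels and one-line summaries paraphrase this note’s discussion and carry no legal weight.

    # Illustrative recap of the EU AI Act's risk tiers as summarised in this note.
    # This mapping paraphrases the discussion above; it is not a classification tool.
    RISK_TIERS = {
        "unacceptable": "prohibited outright (Article 5)",
        "high": "risk management, quality controls, record-keeping, transparency, human oversight",
        "limited": "information and transparency obligations (e.g., chatbot and deep-fake disclosures)",
        "minimal": "no additional obligations beyond existing law such as the GDPR",
        "gpai": "transparency and copyright obligations; systemic-risk models face further duties",
    }

    def obligations(tier: str) -> str:
        """Return this note's one-line summary of the obligations attached to a risk tier."""
        return RISK_TIERS[tier]

    print(obligations("high"))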

Since the enactment of the EU AI Act, the Commission has taken, or is preparing to take, several actions to clarify the text of the regulation and facilitate compliance:

  • On 4 February 2025, it published a set of Guidelines on prohibited practices[35] providing practical examples and further specifying the measures that may be taken to avoid offering or using AI systems in ways that are likely to be prohibited by Article 5 of the EU AI Act. Inter alia, the Guidelines clarify that emotion recognition systems that are not considered as posing an “unacceptable risk”[36] will nevertheless be considered “high-risk” AI systems.[37] The same applies to certain AI-based scoring systems, such as those used for credit-scoring or risk assessment by health and life insurance companies, which do not fulfil the conditions for outright prohibition provided by Article 5(1)(c) of the regulation.[38] The Guidelines also explain how, in the Commission’s view, the EU AI Act will interact with other related or overlapping EU legislation,[39] notably the GDPR, the Law Enforcement Directive,[40] and Regulation (EU) 2018/1725.[41]
  • On 11 March 2025, the AI Office published the third draft of the “General-Purpose AI Code of Practice”, with the final version expected in May 2025.[42] By outlining commitments and detailed implementation measures, the purpose of this code is to support providers of GPAI models in meeting their obligations under the EU AI Act. A dedicated AI Act Service Desk is expected to be launched in summer 2025 to accompany the progressive entry into force of the EU AI Act’s obligations. This platform will provide stakeholders with interactive tools to help them assess their legal obligations and compliance steps.[43] The creation of the AI Act Service Desk is one of the Commission’s strategic priorities set out in its AI Continent Action Plan, unveiled on 9 April 2025.[44]

It is clear that the EU AI Act has enormous practical implications, not least because it significantly increases the regulatory burden on businesses. Companies must now evaluate their current as well as future use of AI and undertake a thorough self-assessment to ensure compliance with the new regulation. They will also need regularly to monitor any updates, as the list of practices currently prohibited by the EU AI Act is likely to change over time. Providers should stay informed about the Commission’s upcoming post-market monitoring plan, due by 2 February 2026, and prepare their strategies accordingly. If already subject to post-market requirements, especially in regulated sectors like finance, companies should consider integrating compliance with the EU AI Act into their wider compliance programme.

In cases where a company’s product interacts directly with individuals or presents AI-generated content, transparency must be embedded from the outset by implementing clear user disclosures and developing mechanisms to label AI-generated material. As regards GPAI models, companies must prepare technical documentation in accordance with the EU AI Act’s requirements, considering how design choices, training data, and risk assessments may affect long-term compliance. They should also determine the appropriate timing for the system’s launch in the EU market and conduct a thorough legal assessment, particularly regarding copyright and data usage. Monitoring updates to systemic risk thresholds is equally important, as these may change through delegated acts.

Moreover, given the technical and legal complexity of the issues at stake, obtaining independent legal advice from experts in this area, strong governance and proper documentation, compliance training, and assigning compliance oversight to a dedicated AI officer or team within the business will be crucial. Notably, a robust risk management framework is critical, with regular reviews, mitigation strategies, and ongoing monitoring. Comprehensive documentation should be maintained for each AI system, outlining its function, design, and performance. Data must be regularly reviewed to eliminate biases that could result in discriminatory outcomes, as explicitly prohibited by the EU AI Act.

In sum, the EU AI Act is a highly ambitious regulation that has inevitably attracted serious criticism, including that it sets forth very stringent requirements that create barriers for non-EU companies, potentially discouraging market entry due to the cost and complexity of compliance, which could reduce competition and hinder innovation within the EU. At the same time, European SMEs and startups may find themselves at a disadvantage compared to rivals operating in less regulated regions. Compounding these challenges, the new regulation also raises enforcement issues, arising in particular from the lack of clarity of some of its provisions and the presence of seemingly contradictory ones. This gives rise to the risk that it will be enforced in legally defensible but unanticipated ways, increasing the likelihood of inconsistent implementation across EU Member States. Gaps in the EU AI Act’s coverage have also drawn criticism. Notably, in February 2025, fifteen cultural organisations sent a letter to the Commission arguing that the regulation provides insufficient protection to content creators such as writers and musicians. Others have argued that it fails to fully address copyright issues linked to generative AI, especially regarding the broad interpretation of text and data mining exemptions, which could enable large tech firms to exploit creative content at scale. This concern has already led to legal action from artists and authors.[45]

It remains to be seen whether, despite these shortcomings, the EU AI Act will succeed in its aim of shaping a transparent and ethical AI landscape in the EU without undermining Europe’s competitiveness.

(ii) Application of the Digital Markets Act and Digital Services Act to AI

In addition to the EU AI Act, the Commission may also seek to rely on the Digital Markets Act (the “DMA”)[46] and the Digital Services Act (the “DSA”)[47] to regulate the field of AI.

More specifically, the Commission may use the DMA to regulate AI where AI-related services are integrated or embedded into designated “core platform services”, or where it designates key inputs for AI applications as “core platform services”. To date, the Commission has not designated AI and cloud computing services[48] as “core platform services”.[49] However, this could change in the future: the new Commissioner for Competition, Ms Teresa Ribera, has vowed to pursue vigorous enforcement and targeted implementation of the DMA in 2025, including by expanding its scope. Moreover, certain Member States[50] and the European Parliament’s Working Group on DMA implementation have been pressuring the Commission to monitor AI and cloud services closely, and to designate them as “core platform services”.[51] We would thus expect the Commission to launch a market investigation under Article 19 of the DMA with a view to including AI and cloud computing services in the list of core platform services laid down in Article 2(2) of the DMA.

The DSA is equally set to play an important role in AI regulation, as shown by the steps already undertaken by the Commission (DG Connect) to that effect. Specifically, in March 2024, DG Connect sent requests for information to Microsoft (Bing), Google (Search and YouTube), Meta (Facebook and Instagram), Snapchat, ByteDance (TikTok) and X asking them about the measures they were adopting to mitigate systemic risks[52] linked to generative AI.[53] Similarly, in May 2024, the Commission sent an additional request for information to Microsoft concerning an alleged failure by the company to disclose certain documents related to risks stemming from the Bing search engine's generative AI features, namely, “Copilot in Bing” and “Image Creator by Designer”.[54] It has also been reported that the Commission is monitoring ChatGPT with a view to potentially designating it as a systemic platform in light of its online search functionality.[55]

(iii) AI and Product Liability

To address AI-specific issues, the EU has also modernised its (no-fault) product liability framework, pursuant to which consumers are entitled to claim damages caused by defective products without needing to prove negligence or fault on the part of the seller or supplier. The new Product Liability Directive explicitly applies to consumer-facing generative AI systems (including chatbots) and other AI tools, expanding the definition of “product” to include digital files, online platforms, and all types of software, including applications, operating systems, and AI systems.[56] Moreover, claimants will now be entitled to seek compensation if they suffer damages due to missing or insufficient software updates, weak cybersecurity protection, or the destruction or corruption of data.[57] The new Directive will apply to products placed on the market as of December 2026.[58]

By contrast, the EU recently abandoned a proposed directive on adapting non-contractual civil liability rules to AI.[59] The Commission had argued that the proposed directive would reduce the evidentiary burden on claimants, mitigating some of the challenges posed by the black-box nature of most AI models and algorithms, by enabling claimants with a plausible damages claim to request the disclosure of evidence about specific “high-risk” AI systems suspected of having caused damage, and by introducing a targeted rebuttable presumption of causality.

II. Antitrust and Competition Enforcement in the AI Sector

Competition authorities have traditionally focused on algorithmic collusion in cases in which AI tools are used to monitor and adjust prices, or to facilitate the sharing of competitively sensitive information. Most recently, however, competition concerns have been raised about the AI sector as a whole, accompanied by numerous statements emphasising the need for cooperation between regulators to ensure fair competition in the AI field.

Notably, the Commission, the U.K. Competition and Markets Authority (“CMA”), the U.S. Department of Justice (“DoJ”) and the U.S. Federal Trade Commission (“FTC”) issued a joint statement outlining their views on the issue.[60] Similarly, the competition authorities of Canada, France, Germany, Italy, Japan, the United Kingdom and the United States laid down guiding principles following the G7 summit held in October 2024.[61] These declarations identified three primary competition concerns: (i) the concentrated control of key components for developing AI foundation models, such as specialised chips, computing power, cloud capacity, large-scale data and specialist technical expertise; (ii) the entrenchment or extension of large incumbent digital firms’ market power in AI-related markets; and (iii) anti-competitive partnerships and arrangements involving key AI players. Fair dealing, interoperability, and choice among diverse products and business models were also identified as common principles for safeguarding competition in the AI sector.

AI and cloud computing also rank high on Ms Ribera’s agenda, and the EU is likely to become a key jurisdiction for antitrust enforcement in the AI space. Testament to this is the Competition Policy Brief on Generative AI published by the Commission in September 2024,[62] which reaffirms its avowed interest in investigating emerging market trends involving large digital players holding critical inputs for AI.

Albeit not novel, the main antitrust issues and theories of harm identified by the Commission’s Policy Brief in relation to AI are: (i) exclusivity arrangements leading to the exclusion or marginalisation of rivals; (ii) discrimination/preferential access arrangements; (iii) self-preferencing; (iv) refusal to supply; (v) tying or bundling; (vi) non-compete and lock-in strategies; (vii) margin squeeze by vertically integrated players; (viii) anticompetitive agreements between rivals in the AI space; and (ix) killer or reverse acquisitions.

The Commission indicates that it will examine any such issues taking into account the specific characteristics of the digital and AI space, including any barriers to entry resulting from the highly complex and technical nature of the sector, ecosystem dynamics,[63] network effects, and the need for highly specialised employees, in particular engineers.

In particular, the Commission will focus on vertical integration, exclusivity and/or preferential access arrangements between large incumbent digital players, which may currently enjoy preferential access to key components of generative AI (e.g., Graphics Processing Units (“GPUs”), supercomputing power, cloud capacity, data and specialised engineers), and certain third parties such as AI foundation model developers, where the dominant players could deprive rivals of access to such key components or reduce the quality or number of available components. The risk identified is that such partnerships may increase the degree of market concentration and dependency on a few dominant players, thus making access to critical inputs more difficult and increasing the likelihood of market foreclosure.[64]

For instance, the Commission sought to investigate the USD 13 billion partnership between Microsoft and OpenAI to build new Azure AI supercomputing technologies. Although the deal in question did not qualify as a notifiable concentration under the EU Merger Regulation[65] because it did not result in an acquisition of control on a lasting basis, the Commission expressed a willingness to investigate it under the abuse of dominance prohibition set out in Article 102 TFEU.[66] Similarly, in the UK, the CMA investigated Amazon’s and Google’s strategic collaboration with Anthropic, although – like the Commission – it eventually concluded that no relevant merger situation had been created.[67]

The Commission is also expected to carefully monitor the development of smaller AI foundation models capable of running on mobile devices and offline, such as the integration of Google’s Gemini Nano AI model in Samsung’s Galaxy S24 and S25 series.[68] The Commission will examine, in particular, whether such integration could raise anticompetitive concerns, taking the form of exclusivity agreements and default pre-installation on popular device brands that could lead to the anticompetitive foreclosure of rivals.[69]

Moreover, the hiring of highly skilled employees in the AI sector, which is critical for the development of AI but may be relatively difficult for small companies, has also attracted the attention of competition authorities. For example, Microsoft’s hiring of Inflection employees was the subject of a referral request to the Commission by several competition agencies of EU Member States under Article 22 of the EU Merger Regulation, although the request was subsequently withdrawn[70] following the Court of Justice’s ruling in Illumina.[71]

Finally, the Commission can be expected to scrutinise deals and conduct related to key components necessary for the development of AI foundation models, as it considers that these are most likely to result in increased barriers to entry or expansion, or to lead to anticompetitive foreclosure. In particular, the Commission considers that access to large, high-quality datasets may be hindered by the cost of data licensing agreements entered into between holders of such data and players active in the AI space. Obtaining the specialised chips that support AI neural networks, such as GPUs, Tensor Processing Units and other AI accelerators, may similarly be impeded by their cost and long lead times.[72] For instance, the Commission recently used Article 22 of the EU Merger Regulation to review NVIDIA’s acquisition of Run:ai, as both companies are active in the GPU industry.[73]

III. Conclusion

It is undeniable that AI is a groundbreaking technology with unprecedented power to change our world. Various concerns about the potential misuse of AI are legitimate, and some overarching measures ought to be taken to mitigate the attendant risks. Similarly, seeking to ensure that key AI technologies and products do not end up concentrated in the hands of just one or two very powerful players, and that competition in the AI sector thrives, is laudable. However, over-regulation of nascent technologies could hamper innovation and handicap European companies, including the SMEs and startups of which Europe is in dire need to spur economic growth. Enforcement of the sector-specific and antitrust rules already in place should not come at the cost of progress, and it would be wise for the Commission and national authorities to limit their regulatory and antitrust enforcement action to what is strictly necessary and proportionate to safeguard fundamental rights and promote competition.

***

If you have any questions about the issues addressed in this memorandum, or if you would like a copy of any of the materials mentioned in it, please do not hesitate to reach out to:

Marixenia Davilla
Email: marixeniadavilla@quinnemanuel.com
Phone: +32 2 416 50 13

Miguel Rato
Email: miguelrato@quinnemanuel.com
Phone: +32 2 416 50 04

Nicolas Papageorges
Email: nicolaspapageorges@quinnemanuel.com
Phone: +32 2 416 50 15

To view more memoranda, please visit www.quinnemanuel.com/the-firm/publications/

To update information or unsubscribe, please email updates@quinnemanuel.com

 

[1]      According to a report issued by the Commission, AI refers to “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals”. See “A definition of Artificial Intelligence: main capabilities and scientific disciplines”, 18 December 2018, available at https://digital-strategy.ec.europa.eu/fr/node/2226.

[2]      According to the Commission, generative AI refers to “neural networks that can generate high-quality text, images, and other forms of content based on the data they were trained on”. Competition Policy Brief, “Competition in Generative AI and Virtual Worlds”, September 2024, p. 1, available at https://competition-policy.ec.europa.eu/document/download/c86d461f-062e-4dde-a662-15228d6ca385_en (“Competition Policy Brief on Generative AI”).

[3]      See Press Release, “EU launches InvestAI initiative to mobilise €200 billion of investment in artificial intelligence”, 11 February 2025, available at https://ec.europa.eu/commission/presscorner/detail/en/ip_25_467.

[4]      See Reuters, “Trump announces private-sector $500 billion investment in AI infrastructure”, 22 January 2025, available at https://www.reuters.com/technology/artificial-intelligence/trump-announce-private-sector-ai-infrastructure-investment-cbs-reports-2025-01-21/.

[5]      Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).

[6]       Id., recitals 1-3, 6-8 and art. 1.

[7]       Id., recitals 9-11, 82 and arts 2, 22 and 54.

[8]      Id., art. 5.

[9]      Id., arts 5(1)(a) and (b).

[10]     Id., art. 5(1)(c).

[11]     Id., art. 5(1)(d).

[12]     Id., art. 5(1)(e).

[13]     Id., art. 5(1)(f). Recital 44 of the EU AI Act elaborates on the risks justifying the prohibition: “There are serious concerns about the scientific basis of AI systems aiming to identify or infer emotions, particularly as expression of emotions vary considerably across cultures and situations, and even within a single individual. Among the key shortcomings of such systems are the limited reliability, the lack of specificity and the limited generalisability. Therefore, AI systems identifying or inferring emotions or intentions of natural persons on the basis of their biometric data may lead to discriminatory outcomes and can be intrusive to the rights and freedoms of the concerned persons. Considering the imbalance of power in the context of work or education, combined with the intrusive nature of these systems, such systems could lead to detrimental or unfavourable treatment of certain natural persons or whole groups thereof”.

[14]     Id., art. 5(1)(g).

[15]     Id., art. 5(1)(h).

[16]     Id., art. 99(3).

[17]      For instance, art. 5(1)(h) of the EU AI Act provides that the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement is prohibited, unless and in so far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; and (iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for certain types of offences. See also, id., art. 5(2)-5(7).

[18]     Id., art. 6(1).

[19]     Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170, 30.6.2009, p. 1); Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC; and Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to lifts and safety components for lifts.

[20]     These include AI systems intended to be used: (i) to assess the risk of a natural person becoming the victim of criminal offences; (ii) as polygraphs or similar tools; (iii) to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences; (iv) for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups; and (v) for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences. See EU AI Act, Annex III.

[21]     Id., art. 6(2) and Annex III.

[22]     Id., chapter III.

[23]     Id., art. 113.

[24]     Id., art. 99(4). Art. 99(6) provides that such an amount should be lower in the case of small and medium sized enterprises (“SMEs”).

[25]     Id., art. 50.

[26]     Id., art. 50(4).

[27]     Id., art. 99.

[28]     Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

[29]     According to art. 3(63) of the EU AI Act, “‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.

[30]     According to arts 3(64) and 51, a GPAI model will be classified as posing systemic risk if it has, or is considered as having, high-impact capabilities (i.e., capabilities that match or exceed the capabilities recorded in the most advanced GPAI models).

[31]     Id., art. 53.

[32]     Art. 3(65) of the EU AI Act explains that “‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain”. See also id., recital 110, which further clarifies that systemic risks include “any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety; any actual or reasonably foreseeable negative effects on democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory content,” […] “chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use; offensive cyber capabilities, such as the ways in which vulnerability discovery, exploitation, or operational use can be enabled; the effects of interaction and tool use, including for example the capacity to control physical systems and interfere with critical infrastructure; risks from models of making copies of themselves or ‘self-replicating’ or training other models; the ways in which models can give rise to harmful bias and discrimination with risks to individuals, communities or societies; the facilitation of disinformation or harming privacy with threats to democratic values and human rights; risk that a particular event could lead to a chain reaction with considerable negative effects that could affect up to an entire city, an entire domain activity or an entire community”.

[33]     Id., art. 55.

[34]     Id., art. 101.

[35]     Annex to the Communication from the Commission - Approval of the content of the draft Communication from the Commission - Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act), C(2025) 884 final (“Guidelines on prohibited AI practices”).

[36]     EU AI Act, art. 5(1)(f).

[37]     Pursuant to art. 6(2) and Annex III, point (1)(c) of the EU AI Act.

[38]        EU AI Act, art. 5(1)(c). See also id., recitals 37-38, 58 and Annex III.

[39]     Guidelines on prohibited AI practices, paras 42-52. See also, id., paras 135-145, 178-183, 219-221, 238 and 287-288. The Commission has approved the draft guidelines, but not yet formally adopted them as of the date of publishing of the present note.

[40]     Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA.

[41]     Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC.

[42]     See Press Release, “Third Draft of the General-Purpose AI Code of Practice published, written by independent experts”, 11 March 2025, available at https://digital-strategy.ec.europa.eu/en/library/third-draft-general-purpose-ai-code-practice-published-written-independent-experts.

[43]     See Call for Tenders, “Commission launches a call for tender as part of the efforts to establish the AI Act Service Desk”, 16 April 2025, available at https://digital-strategy.ec.europa.eu/en/funding/commission-launches-call-tender-part-efforts-establish-ai-act-service-desk.

[44]     See Press Release, “The AI Continent Action Plan”, 9 April 2025, available at https://digital-strategy.ec.europa.eu/en/library/ai-continent-action-plan. The Commission’s strategic roadmap to advance AI development and adoption across the EU focuses on five areas: (i) building a large-scale AI computing infrastructure; (ii) expanding access to high-quality data; (iii) promoting AI deployment in strategic sectors; (iv) strengthening AI-related skills and talent; and (v) facilitating the implementation of the EU AI Act through the creation of the AI Act Service Desk.

[45]     See The Guardian, “EU accused of leaving ‘devastating’ copyright loophole in AI Act”, 19 February 2025, available at https://www.theguardian.com/technology/2025/feb/19/eu-accused-of-leaving-devastating-copyright-loophole-in-ai-act.

[46]     Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act).

[47]     Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act).

[48]     DMA, art. 2(i).

[49]     See, e.g., MLex, “AI, cloud competition risks must face DMA scrutiny, EU lawmakers say”, 23 January 2025, available at https://content.mlex.com/#/content/1626113/ai-cloud-competition-risks-must-face-dma-scrutiny-eu-lawmakers-say?referrer=search_linkclick.

[50]     See MLex, “Tech companies should see tighter AI competition rules, EU countries say”, 10 February 2025, available at https://content.mlex.com/#/content/1630052/tech-companies-should-see-tighter-ai-competition-rules-eu-countries-say?referrer=search_linkclick.

[51]     See MLex, “AI, cloud competition risks must face DMA scrutiny, EU lawmakers say”, 23 January 2025, available at https://content.mlex.com/#/content/1626113/ai-cloud-competition-risks-must-face-dma-scrutiny-eu-lawmakers-say?referrer=email_instantcontentset&paddleid=202&paddleaois=2000.

[52]     Under art. 35 of the DSA, “[p]roviders of very large online platforms and of very large online search engines shall put in place reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks identified pursuant to Article 34, with particular consideration to the impacts of such measures on fundamental rights”. These include, for instance, the adaptation of the design of the services, the terms and conditions, or the content moderation processes.

[53]     See Press Release, “Commission sends requests for information on generative AI risks to 6 Very Large Online Platforms and 2 Very Large Online Search Engines under the Digital Services Act”, 14 March 2024, available at https://digital-strategy.ec.europa.eu/en/news/commission-sends-requests-information-generative-ai-risks-6-very-large-online-platforms-and-2-very.

[54]     See Press Release, “Commission compels Microsoft to provide information under the Digital Services Act on generative AI risks on Bing”, 17 May 2024, available at https://digital-strategy.ec.europa.eu/en/news/commission-compels-microsoft-provide-information-under-digital-services-act-generative-ai-risks.

[55]     MLex, “ChatGPT faces possible designation as a systemic platform under EU digital law”, 30 April 2025, available at https://content.mlex.com/#/content/1650470/chatgpt-faces-possible-designation-as-a-systemic-platform-under-eu-digital-law?referrer=search_linkclick.

[56]     Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85/374/EEC (“Product Liability Directive”), art. 4.

[57]     Id., arts 7-8 and 11.

[58]     Id., art. 21.

[59]     Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), 2022/0303 (COD).

[60]     M. Vestager, S. Cardell, J. Kanter and L. Khan, “Joint Statement on Competition in Generative AI Foundation Models and AI Products”, 23 July 2024, available at https://competition-policy.ec.europa.eu/document/download/79948846-4605-4c3a-94a6-044e344acc33_en?filename=20240723_competition_in_generative_AI_joint_statement_COMP-CMA-DOJ-FTC.pdf.

[61]     G7 Competition Authorities and Policymakers’ Summit, “Digital Competition Communiqué”, 4 October 2024, available at https://en.agcm.it/dotcmsdoc/pressrelease/G7%202024%20-%20Digital%20Competition%20Communiqu%C3%A9.pdf.

[62]     Competition Policy Brief on Generative AI, fn. 2.

[63]     See, e.g., Commission Decision of 25 September 2023 in Case M.10615 – Booking Holdings/eTraveli Group, C(2023) 6376 final, paras 904 et seq. See also Competition Policy Brief on Generative AI, p. 8; French Competition Authority Opinion 24-A-05 of 28 June 2024 on the competitive functioning of the generative artificial intelligence sector, pp 58 et seq.; Portuguese Competition Authority Issues Paper of 6 November 2023 on competition and generative artificial intelligence, pp 33 et seq.; and CMA AI strategic update of 29 April 2024, available at https://www.gov.uk/government/publications/cma-ai-strategic-update/cma-ai-strategic-update.

[64]     Competition Policy Brief on Generative AI, p. 3.

[65]     Council Regulation (EC) No 139/2004 of 20 January 2004 on the control of concentrations between undertakings (the EC Merger Regulation), art. 3.

[66]     See Speech by EVP Margrethe Vestager at the European Commission workshop on “Competition in Virtual Worlds and Generative AI”, 28 June 2024, available at https://ec.europa.eu/commission/presscorner/detail/en/speech_24_3550.

[67]     CMA’s Decision on relevant merger situation, “Amazon.com Inc.’s partnership with Anthropic PBC”, ME/7100/24; and CMA’s Decision on relevant merger situation, “Alphabet Inc.’s partnership with Anthropic PBC”, ME/7108/24.

[68]     See https://www.pcmag.com/news/google-gemini-ai-assistant-samsung-galaxy-s25.

[69]     Competition Policy Brief on Generative AI, p. 4.

[70]     Press Release, “Commission takes note of the withdrawal of referral requests by Member States concerning the acquisition of certain assets of Inflection by Microsoft”, 18 September 2024, available at https://ec.europa.eu/commission/presscorner/detail/en/ip_24_4727.

[71]     Joined Cases C-611/22 P and C-625/22 P, Illumina v Commission, EU:C:2024:677.

[72]     Competition Policy Brief on Generative AI, p. 4.

[73]     Commission Decision of 20 December 2024 in Case M.11766 - NVIDIA/Run:ai, C(2024) 9365.