The AI Act: A Landmark in European Digital Innovation Regulation
Finally, it has happened. After a journey that began in 2021,[1] which seemed to have stalled only to regain momentum as Ursula von der Leyen’s term neared its then supposed end,[2] the so-called ‘AI Act,’ the first regulation on the governance of artificial intelligence (hereinafter ‘AI’) of the European Union (hereinafter ‘EU’), received final approval from the EU Council on May 21, 2024, and was then published in the Official Journal of the European Union on July 12, 2024, as Regulation (EU) 2024/1689.[3]
The AI Act came into force on August 1, 2024, starting the countdown for the entities within its scope to comply by the set deadline, or rather, deadlines. Indeed, similar to what has occurred with other EU regulations (e.g., the Digital Services Act and the Markets in Crypto-Assets Regulation, with which the AI Act forms a sort of trio of digital innovation regulation), the AI Act is highly complex and divided into a series of rule blocks that will become applicable at different stages. This phased approach ensures that the various stakeholders have adequate time to adjust and comply with the new rules.
Global Relevance: Territorial Scope of Application
Before focusing on the specific issues that the AI Act aims to regulate and the respective compliance deadlines, we should clarify why the AI Act matters not only for the European market but also for stakeholders worldwide. As seen with numerous EU Directives and Regulations, the territorial scope of application of this new act is based on the market targeted by operators, in addition to the location of a company’s headquarters.[4] The essence is: it does not matter whether you operate from Washington or Beijing; if you enter the EU market to perform any of the activities outlined in Article 2, then Brussels’ laws apply to you. While this concept is not new, its implications could be deeper than usual in the case of the AI Act. At this stage, we cannot rule out the possibility that the AI Act could be another instance of the ‘Brussels Effect,’ a term coined by Anu Bradford.[5] According to Bradford, in certain circumstances, when global companies must comply with stringent EU laws to operate in the EU market, they often choose to adopt these regulations globally rather than create different production or service lines for different markets. This leads to the EU indirectly influencing global norms and regulations. We have already witnessed a similar phenomenon with the GDPR, which has become a global standard both through this indirect influence and because certain countries have entered into international trade agreements with the EU. These agreements explicitly required the non-EU country to adopt internal rules that substantively adhere to GDPR principles, so that data transfers resulting from international trade do not suffer a reduction in protection when moving from one legal system to another. Examples of such treaties include the EU-Japan Economic Partnership Agreement and the post-Brexit EU-UK Trade and Cooperation Agreement.
There are several reasons to believe that the patterns described above could repeat with the AI Act, primarily because of the strong thematic connection between the GDPR and the AI Act. In fact, the operation of AI requires training algorithms on datasets that may include personal data. Furthermore, the final AI-based software product may also be used to process personal data. It is no coincidence that before the AI Act, in the absence of specific AI rules, provisions applicable to AI-based software (such as that used by HR managers in recruiting processes) were found in the GDPR. Such is the case of Article 22, which places significant limitations on automated decision-making processes, into which AI uses could fall. Moreover, such data processing activities always require the possibility of human intervention and the explainability of the algorithm (both outlined in Recital 71).
Operators and AI: Material Scope of Application
Based on these premises and preliminary reflections, the following lines will attempt to offer a concise overview of the rule sets established by the AI Act.
On a general level, the AI Act aims to enhance the internal market and promote trustworthy AI, ensuring health, safety, and fundamental rights protection while supporting innovation. It establishes harmonized rules for AI system market placement, high-risk AI requirements, transparency, market monitoring, and innovation support, especially for SMEs and start-ups.[6]
Given the context, the core of the material scope of application is based on: (i) categories of operators; and (ii) the types of AI involved in their operations. The main regulated operators are the providers and deployers, where ‘provider’ means anyone who develops or has AI developed to place it on the market or put it into service under their own name or trademark, whether for remuneration or free of charge; while ‘deployer’ means anyone who uses AI for purposes other than private non-professional activities.
Regarding the types of AI, the AI Act applies to the provision and deployment of:
- ‘AI systems,’ meaning any “machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;”[7]
- ‘general-purpose AI model’ (hereinafter ‘GPAI model’), meaning any “AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”[8]
To grasp the difference between these two definitions, it could be said that while AI systems are products ready for use by end-users, GPAI models “require the addition of further components, such as for example a user interface, to become AI systems.”[9] GPAI models “are typically integrated into and form part of AI systems,” even though they “may be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as direct downloads, or as physical copies.”[10] When GPAI models are integrated into AI systems, they are referred to as ‘GPAI systems’ within the AI Act.[11] On this basis, we could say that ‘ChatGPT’ is an AI system, while GPT-4 should qualify as a GPAI model. The AI Act specifies that GPAI systems may be used as high-risk AI systems or integrated into them, and mandates that GPAI system providers cooperate with such high-risk AI system providers to enable the latter’s compliance.[12]
It is worth noting that the following uses of AI are expressly excluded from the material scope of application of the AI Act:
- military, defense, and national security (Article 2.3);
- international cooperation between foreign public authorities or international organizations in law enforcement/judicial cooperation (Article 2.4);
- scientific research and development (Article 2.6);
- pre-market research and testing, excluding real-world testing (Article 2.8);
- purely personal non-professional activities (Article 2.10);
- AI systems released under free and open-source licenses, unless they qualify as ‘high-risk’ or fall under Article 5 or 50 (Article 2.12).
Categorization and Rule Sets Based on a Risk-Based Approach
Given all this, what are the obligations that apply to providers and deployers of AI systems or GPAI models?
Regarding AI systems, the AI Act establishes a risk-based regulatory approach,[13] leading to a categorization of AI, and of the respective applicable rules, into four risk categories. The EU Commission illustrates this ‘risk hierarchy’ as a pyramid, at the top of which sits ‘unacceptable risk,’ i.e., the explicitly prohibited AI practices.[14] The following is a general overview of the categories and their respective obligations.
Chapter II lists the explicitly prohibited AI practices, which include: manipulation of behaviors; exploitation of vulnerabilities (e.g., age or disability); social scoring; profiling individuals to predict the risk of committing crimes; untargeted scraping of facial images for facial recognition databases; emotion recognition in the workplace and in education; biometric categorization to infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; and real-time remote biometric identification, subject to certain exceptions.
Chapter III governs high-risk AI systems, meaning AI systems that may adversely affect the health, safety, or fundamental rights of natural persons; their use is permitted under the Regulation provided they comply with strict requirements and limits. Such systems fall into two categories. The first, referred to in Article 6.1, concerns AI systems considered high-risk because they are intended to be used as a safety component of a product, or are themselves products, covered by the harmonization legislation listed in Annex I and required to undergo a third-party conformity assessment before being placed on the market or put into service. The second category concerns the high-risk AI systems listed in Annex III, which includes, among others, systems used as safety components in the management and operation of critical infrastructure, systems that affect a person’s access to essential private and public services and benefits, and systems used by law enforcement. A provider who considers that an AI system referred to in Annex III is not high-risk must document its assessment before that system is placed on the market or put into service. If a system falls into the high-risk category, the compliance requirements are numerous and include the adoption of adequate risk management systems, appropriate training data and data governance procedures, human oversight, and advanced cybersecurity measures.[15] In addition, deployers are subject to the obligation set forth by Article 27: prior to deploying certain high-risk AI systems referred to in Article 6(2), they must perform an assessment of the impact that the use of such systems may have on fundamental rights (‘FRIA’). Echoing the GDPR connection discussed above, this rule closely mirrors the data protection impact assessment requirement provided under Article 35 of the GDPR (‘DPIA’).
Chapter IV is dedicated to limited-risk AI systems, meaning AI systems that do not raise particular concerns for the safety and rights of users and are therefore subject only to a limited set of transparency obligations imposed on providers and deployers. This category includes AI systems that interact with humans (e.g., chatbots) and systems that generate or manipulate images, audio, or video content (e.g., generative AI). Chapter IV is limited to Article 50, which contains these obligations. For example, end users must be informed that they are interacting with an AI system or that their emotions or characteristics are being recognized through automated tools. In the case of AI systems that generate or manipulate image, audio, or video content that appreciably resembles existing persons, places, or events and could falsely appear authentic (i.e., deep fakes), there is an obligation to disclose that the content has been artificially generated or manipulated, with certain exceptions for legitimate purposes.
Lastly, there are minimal-risk AI systems, a broad category into which most AI systems, such as AI-enabled recommender systems and spam filters, currently fall according to the EU Commission.[16] Such systems are intentionally left free of additional obligations under the AI Act and therefore remain subject only to pre-existing legislation. The EU Commission suggests that providers of such systems may voluntarily choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.[17]
Coming to GPAI, Chapter V provides that providers of GPAI models must: create technical documentation that includes training and testing processes and evaluation results; provide information to downstream providers intending to integrate the GPAI model into their AI systems, helping them understand the model’s capabilities and limitations so as to ensure compliance; put in place a policy to comply with Union copyright law, including the Copyright Directive; and draw up and make publicly available a sufficiently detailed summary of the content used for training the GPAI model.[18] GPAI models released under a free and open-source license, whose parameters, including weights, model architecture, and model usage information, are made publicly available, need only comply with the requirements to publish the training content summary and to establish a copyright policy, unless they pose ‘systemic risks.’
Systemic risks refer to risks of significant impact on the EU market due to a model’s reach, or to actual or reasonably foreseeable negative effects on public health, safety, security, fundamental rights, or society as a whole, which can propagate at scale across the value chain.[19] A GPAI model is classified as a GPAI model with systemic risk either when it has high-impact capabilities, which are presumed when the cumulative amount of computation used for its training, measured in floating point operations, is greater than 10²⁵, or when the Commission so decides, ex officio or following a qualified alert from the scientific panel provided for by the Regulation.[20] In addition to the obligations listed in Articles 53 and 54, providers of GPAI models with systemic risk must fulfill several additional requirements: (a) perform model evaluations using standardized protocols and tools that reflect the current state of the art; (b) assess and mitigate possible systemic risks at the Union level, including those arising from the development, marketing, or use of the models; (c) keep track of, document, and report any serious incidents and possible corrective measures to the AI Office and relevant national authorities without undue delay; (d) ensure an adequate level of cybersecurity protection for the model and its physical infrastructure. Providers may rely on codes of practice to demonstrate compliance with these obligations until harmonized European standards are published; compliance with such standards grants a presumption of conformity. Providers who do not adhere to an approved code of practice or harmonized standard must demonstrate alternative adequate means of compliance for assessment by the Commission. Additionally, any information or documentation obtained, including trade secrets, must be treated confidentially in accordance with Article 78.
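To give a concrete sense of the scale of the 10²⁵ FLOP threshold, the following minimal Python sketch, which is purely illustrative and not part of the Act, applies the commonly used engineering rule of thumb that training compute is roughly 6 × parameters × training tokens to a hypothetical model; the approximation, the function names, and the figures are assumptions introduced here for illustration only.

```python
# Illustrative only: a rough estimate of training compute measured against the
# AI Act's 10^25 FLOP presumption threshold (Article 51). The 6 * N * D rule of
# thumb (6 x parameters x training tokens) is a common engineering heuristic,
# not a legal test, and the model figures below are hypothetical.

THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the common 6*N*D approximation."""
    return 6 * parameters * training_tokens


# Hypothetical model: 500 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(parameters=5e11, training_tokens=1e13)

print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed to have high-impact capabilities:", flops > THRESHOLD_FLOPS)
# 6 * 5e11 * 1e13 = 3e25 FLOPs, above the 1e25 threshold in this hypothetical case.
```

Under these assumed figures, the hypothetical model would be presumed to have high-impact capabilities and would therefore fall under the systemic risk regime unless the presumption were rebutted.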
The overview proposed so far is not intended to be exhaustive, and its purpose is to merely facilitate a preliminary schematic understanding of the AI Act. Based on the above, the reader should have an indicative idea of the complexity that market operators and legal professionals will face. The remaining question is ‘when.’
Deadlines
The timeline of the AI Act is no less intricate than the rest of the Regulation, so an explanation of the deadlines indicated in Article 113 will be provided in the following lines.
Chapters I and II shall apply from 2 February 2025. Chapter I includes the rules on the purpose and scope of application, definitions, and a rule on AI literacy, under Article 4, according to which providers and deployers of AI systems must ensure that their staff and others involved in operating and using these systems have adequate knowledge and skills, considering their technical background, experience, education, training, and the context and individuals the AI systems will impact. Chapter II, as previously explained, contains the prohibitions on unacceptable-risk AI practices.
Chapter III Section 4, Chapter V, Chapter VII, Chapter XII, and Article 78 shall apply from 2 August 2025, with the exception of Article 101. Chapter III Section 4 pertains to the bodies involved in the application processes of the AI Act: it obliges Member States to designate the notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation, and notification of conformity assessment bodies (i.e., bodies independent of the provider of a high-risk AI system in relation to which they perform conformity assessment activities), and lays down the rules and requirements that these bodies must observe once notified. Chapter V, as already explained, provides the rules for GPAI models. Chapter VII concerns governance of the application of the AI Act at the EU level as well as the national competent authorities. Chapter XII concerns sanctions for infringements of the AI Act, which include penalties for operators, administrative fines on Union institutions, bodies, offices, and agencies, as well as a set of fines expressly meant for providers of GPAI models. Article 78, finally, addresses the confidentiality obligations that the Commission, market surveillance authorities, notified bodies, and any other natural or legal person involved in the application of the Regulation must respect concerning the information and data obtained in the performance of their tasks and activities. In short, this set of rules can be summarized as obligations for GPAI models and preparatory rules for the bureaucratic and sanctioning apparatus that will underpin the application of the AI Act.
All other rules of the AI Act shall apply from August 2, 2026, including those on high-risk AI systems and limited-risk AI systems, with the sole exception of Article 6.1, concerning high-risk AI systems considered as such based on the criteria expressed in that article, and the related obligations. This excluded set of rules will apply from August 2, 2027.
Speaking of deadlines, it is important to mention the AI Pact, an initiative by the EU Commission designed to help organizations prepare for the AI Act, which, as noted, came into force on August 1, 2024. The AI Pact essentially encourages early voluntary compliance by organizations that will be subject to the AI Act at different stages, with particular emphasis on those falling under the obligations related to high-risk AI systems. The Commission reports that over 550 organizations responded to the first call for interest in November 2023 and explains that the AI Pact is structured into two main components: Pillar I creates a network for sharing best practices and providing guidance on AI Act implementation, while Pillar II urges AI providers and deployers to disclose their compliance measures through pledges, detailing specific actions and timelines. The Commission’s expectation is that participants will benefit from understanding the AI Act, sharing knowledge, and enhancing the credibility and trust of their AI technologies.
Implications for Academia and Legal Practitioners
Due to the complexities presented so far, the AI Act could have profound impacts on legal professionals in the coming months, affecting both academics and practitioners from all over the world. On the one hand, academics should seriously consider how and to what extent the AI Act might impact both the EU and global markets, as well as evaluate whether there is a balance between protecting individual rights and fostering business development. Indeed, the AI Act raises significant human rights protection issues but also poses a potential threat to digital entrepreneurship in the EU, which struggles to compete against American and Chinese tech giants. Are we truly certain that individual rights are the only stakes? Are we confident that these hyper-regulatory processes will not inadvertently stifle EU startups with excessive legal compliance costs, rather than curbing the dominance of Western and Eastern giants?[21] Are the measures in support of innovation envisioned under Chapter VI, which include the establishment of regulatory sandboxes, enough to counterbalance the hyper-regulatory approach of Brussels?
On the other hand, practitioners across jurisdictions should ponder their approach to assisting AI projects that target the EU market. Currently, it seems unlikely that a practitioner qualified, for instance, in Istanbul would possess both local legal expertise and the extensive knowledge of EU law required to guide a Turkish company toward compliance with EU standards (it should also be considered to what extent domestic bar association ethics rules permit this). Similarly, an EU practitioner might face challenges advising a foreign company without considering the company’s need to comply with its domestic regulations. Therefore, it is crucial, as seen with previous EU regulations like the GDPR, for small, local boutique law firms specializing in new technologies to establish strong relationships with similar firms in other EU and non-EU jurisdictions. This collaboration is essential to prevent the emerging legal market related to digital technologies from becoming the exclusive domain of large international law firms. Obviously, there is also the issue of how to remain competitive in pricing when small firms cannot benefit from the economies of scale and processes of large international networks. One humbly suggested approach could be to develop a holistic approach to digital innovation compliance. For instance, the partially overlapping risk analysis processes required by the GDPR and the AI Act, which result respectively in two ‘impact assessment’ outputs (the DPIA under the GDPR and the FRIA under the AI Act), could be conducted in a synergistic and integrated manner. This would avoid the redundancies and wasted hours that could result from handling related issues separately, which would burden the client’s finances and potentially overlook important interactions between the provisions of different Regulations.
Last but not least, both academics and practitioners should consider what role they could play in contributing to the AI literacy required under Article 4.
Final Remarks
In light of the overview offered so far, it is evident that the AI Act represents a significant step in the regulation of AI within the European Union and potentially far beyond, given its global implications. Its comprehensive, risk-based approach underscores the EU’s commitment to safety, fundamental rights, and trustworthiness in AI technologies. However, the Act also raises important questions about its impact on innovation and competition, particularly for smaller market players. The phased implementation and the AI Pact demonstrate the EU’s pragmatic approach to regulatory enforcement, encouraging early voluntary compliance and fostering a collaborative environment. Additionally, the Act highlights the critical role of education and continuous learning in AI literacy.
In conclusion, the AI Act is a landmark regulatory framework with far-reaching implications for the development and governance of AI technologies. It challenges us to think critically about the intersection of regulation, innovation, and global influence. As we move forward, it will be essential for all stakeholders to engage deeply with the Act's provisions, embrace collaborative approaches to compliance, and invest in AI literacy to navigate this complex and evolving regulatory landscape successfully. The journey towards trustworthy AI is just beginning, and it will require concerted effort, thoughtful reflection, and proactive action from all corners of the AI ecosystem.
[1] The Proposal made by the EU Commission dates back to April 22, 2021. More information on the procedure and the initial draft is available at https://eur-lex.europa.eu/legal-content/EN/HIS/?uri=CELEX:32024R1689.
[2] President of the EU Commission from December 2019, re-elected in July 2024.
[3] The final version published in the Official Journal of the EU is available at https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689.
[4] See Article 2.1, “This Regulation applies to: (a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country; (b) deployers of AI systems that have their place of establishment or are located within the Union; (c) providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union; (d) importers and distributors of AI systems; (e) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark; (f) authorised representatives of providers, which are not established in the Union; (g) affected persons that are located in the Union.”
[5] See Anu Bradford, ‘The Brussels Effect’ (2012) Northwestern University Law Review, Vol. 107, No. 1, Columbia Law and Economics Working Paper No. 533, available at SSRN: https://ssrn.com/abstract=2770634; and, more recently, Anu Bradford, ‘The Brussels Effect: How the European Union Rules the World’ (New York, 2020; online edn, Oxford Academic, 19 Dec. 2019).
[6] See Article 1 of the AI Act.
[7] Given the breadth and sensitivity of the definition, it was preferred to faithfully reproduce the text of Article 3(1) of the AI Act in this case. Also, refer to Recital 12 of the AI Act.
[8] For the same reasons, the text of Article 3(63) is faithfully reproduced.
[9] See Recital 97 of the AI Act.