The Tortuous Path to AI Act Compliance: A Competitive Burden for Companies

Godefroy de Boiscuillé

Associate Professor (GREDEG/CNRS), University Côte d'Azur (Nice Sophia Antipolis)

The AI Act will influence market competition in the EU, bringing significant challenges and obligations for businesses. The regulatory requirements could pose a considerable burden and constitute regulatory barriers to market entry. This chapter points out the tortuous path to AI Act compliance. It examines the main legal barriers that could therefore impact companies operating in the internal market, permanently preventing firms from entering a market or delaying the arrival of new companies.


I. Ex-Ante Regulation: Regulatory Barriers

The AI Act promotes an ex-ante regulation. To comply with the regulation, any EU company will have to follow five main steps: (i) be identified as a provider, (ii) classify its AI system as high-risk or low-risk, (iii) if the system is high-risk, the provider or deployer must carry out a conformity assessment, (iv) it must ensure that stand-alone AI systems are registered in an EU database, and (v) it must sign a declaration of conformity1 and the AI system should bear the CE marking before being placed on the market.2 If significant changes occur in the AI system’s lifecycle, the company must return to step (ii) and repeat the risk analysis. Compliance with any one of these obligations is not sufficient; businesses must comply with all of them. Consequently, a company that carries out a conformity assessment on an AI model that has not been properly registered could expose itself to legal risks. This European regulatory severity3 creates regulatory barriers to market entry at many stages of the compliance journey. In our view, the main legal obstacles to market entry lie in steps (ii) and (iii) above.
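Purely for illustration, the stepwise logic described above can be sketched as a simple decision flow. The class, field names, and step labels below are editorial assumptions made for this sketch, not terminology taken from the regulation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the stepwise compliance flow described above.
# Field names are illustrative assumptions, not terms from the AI Act.

@dataclass
class AISystem:
    acts_as_provider: bool    # step (i): provider identification
    high_risk: bool           # step (ii): risk classification
    stand_alone: bool         # step (iv): stand-alone systems are registered
    significant_change: bool  # a lifecycle change sends the company back to (ii)

def compliance_steps(system: AISystem) -> list[str]:
    if not system.acts_as_provider:
        return ["not a provider: check deployer/importer/distributor obligations instead"]
    steps = ["(i) identified as provider", "(ii) classify risk"]
    if system.significant_change:
        steps.append("(ii) repeat risk analysis after significant change")
    if system.high_risk:
        steps.append("(iii) conformity assessment")
        if system.stand_alone:
            steps.append("(iv) register in EU database")
        steps.append("(v) sign declaration of conformity and affix CE marking")
    return steps

print(compliance_steps(AISystem(True, True, True, False)))
```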

1. Overestimated Regulatory Obstacle

On the one hand, to understand how the AI Act impacts EU based companies, it is crucial to answer two fundamental questions: first, who is mainly targeted by the regulation? Second, are the definitions of the key actors concerned so blurred that it is difficult for them to understand the scope of application?

Regarding the first question, the regulation gives a clear answer: it impacts key players within the AI value chain, consisting of the provider, deployer, product manufacturer,4 importer,5 distributor,6 and authorized representative.7 The most heavily regulated subjects under the AI Act are providers of AI systems.8 Concerning the second question, it has been said that AI law creates a blurred distinction between providers and deployers, which makes it difficult to analyze legal risk. In this author’s view, the debate seems overrated. The roles are clearly defined. A provider is a person or entity that develops an AI system and places it on the market.9 A deployer is a person or entity that uses an AI system.10 In short, companies will need to identify their precise role in order to determine their compliance obligations. Even when they do not fall into the provider “box”, companies must remain vigilant: extensive customization of an AI system by a company could lead to its reclassification as a provider.11 In any case, the European regulation clearly defines the role of each party, and companies can refer to it to assess their obligations.

2. Underestimated Regulatory Obstacles

On the other hand, the most important legal barrier for companies lies in the risk assessment of their AI system. All the other heavy obligations depend on this qualification.12 EU based companies must indeed analyze the risk of their AI system before implementing a conformity assessment, registering the system in the EU database, and signing a declaration of conformity. In contrast to traditional legal approaches, based on experience and a black-and-white analysis shaped by interpretation, risk is a fuzzy concept, meaning “the combination of the probability of an occurrence of harm and the severity of that harm.”13

II. Defining and Classifying Risks

First, the term “risk”14 lacks a clear definition in all the versions of the AI Act.15 The regulation seems to confuse the sensitive sector in question with the nature of the AI system used. AI systems are indeed classified as high-risk because of the sectors in which they are applied:16 biometrics-based systems, education, critical infrastructure, etc. The meaning of high-risk is debatable. AI systems should also be qualified as high-risk in terms of their autonomy, i.e., their ability to escape human control and prediction. The universal concern surrounding AI is the following: can it be smarter than humans? This is an obsessive question because intelligence has always been an instrument of domination in civilizations.17 As a consequence, European institutions require AI to be transparent. The underlying assumption is as follows: AI is not risky as long as humans can dominate it, i.e., control it, which leads to the next question: at what point does AI become high-risk? Following this reasoning, an AI system should be classified as high-risk not only by reference to the sector in which it is used, but by reference to its capacity to emancipate itself from human monitoring.

On this last point, the dichotomy dividing the world of AI is helpful. On the one hand, there is weak artificial intelligence, known as the expert system, which implies the presence of a human expert. The reasoning of such a system is perfectly transparent: it makes simple syllogisms, reasoning by deduction. This type of AI is not risky, as it is perfectly predictable. On the other hand, advanced artificial intelligence, such as machine learning, is based on reasoning by processing billions of pieces of data. This type of AI surpasses human intelligence in certain tasks; it can learn and generalize, reasoning not by deduction but by induction (a minimal contrast is sketched after this paragraph). Accordingly, the classification of high-risk AI systems is based on a caricatured view of risk. AI in the employment or education sector can be risk-free if the AI system is perfectly predictable and controlled. In contrast, a conversational agent (which seems to be classified as a low-risk AI system) may be high-risk if it starts conversing semi-autonomously and manipulating young people, for example. Classifying AI systems by sector is thus problematic. It could harm competition in certain sensitive markets where, in particular, small- and medium-sized enterprises (SMEs) would not have the financial capacity to meet all compliance obligations. For instance, a medium-sized company wishing to compete with Tesla in the autonomous car sector will be subject to a heavy regulatory burden, as an AI system developed in this field will be classified as high-risk AI.
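The deductive/inductive contrast described above can be illustrated with a minimal, purely editorial sketch; the loan rule, the toy data, and the model choice are invented for illustration and do not come from the regulation or from the chapter.

```python
# Hypothetical contrast between the two families of AI described above.

# Expert system: a hand-written rule applied by deduction. The reasoning is
# fully transparent and predictable, because it is the rule itself.
def expert_loan_rule(income: float, debt: float) -> bool:
    return income - debt > 20_000

# Machine learning: a decision *induced* from past examples. Even this tiny
# model's boundary depends on the training data rather than on an explicit rule.
from sklearn.tree import DecisionTreeClassifier

X = [[30_000, 5_000], [15_000, 9_000], [60_000, 20_000], [25_000, 24_000]]
y = [1, 0, 1, 0]  # past decisions the model generalizes from
model = DecisionTreeClassifier().fit(X, y)

print(expert_loan_rule(30_000, 5_000))            # deduction from an explicit rule
print(bool(model.predict([[30_000, 5_000]])[0]))  # induction from data
```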

1. Definition of AI system

Second, the risk is linked to AI systems, which are not clearly defined either. The definition of “AI systems”18 is too broad, which could complicate the risk assessment associated with the use of artificial intelligence. For instance, stakeholders in the healthcare sector raised the point that the definition of AI is not applied consistently in the EU and potentially includes all medical technologies with software components that are not necessarily considered AI.19 This observation could apply to many other technology sectors. Divergent interpretations of the definition of AI systems may lead to fragmentation of the internal market and may decrease legal certainty for companies that develop AI systems, thus harming innovation and hence competition.

2. Amendment to the List of High-Risk Use Cases

Third, the risk of stand-alone AI systems can be assessed via a list that is potentially flawed and that can be amended by the Commission under the delegated act procedure.20 The European Parliament’s wording gives the Commission the possibility of frequently amending the list. Indeed, AI systems that pose a significant risk of harm “to health and safety, or an adverse impact on fundamental rights” could be added to the current AI Act Annex III.21 In other words, a company that develops an AI system that is not listed in Annex III could still be subjected ex post to compliance obligations if the list is amended after the product is marketed. Immigration prediction tools are an example of a use case not listed in the Commission’s AI proposal which was added in other versions of the text. On the one hand, the delegated act procedure is necessary to anticipate rapid technological developments that would quickly render the Commission’s list obsolete. On the other hand, the apparent legal certainty for companies of referring to a list22 is in fact contradicted by the Commission’s ability to amend the list according to broad criteria such as the risk of harm to safety or fundamental rights. In other words, one can legitimately question the usefulness of this list, given that the Commission can easily modify it in the light of the broad criteria that empower it to do so. Broad delegation powers can create legal uncertainty for companies.

Companies would have to anticipate, at the time they innovate, whether the product in question could fall onto the high-risk list in the near future. Several AI systems, such as chatbots, while not on the list of high-risk AI systems, could pose a risk of harm or an adverse impact on fundamental rights. Consequently, the list of high-risk AI systems does not create the legal certainty the Commission claims. On the contrary, the probable and frequent amendment of this list creates legal uncertainty. Uncertainty could have ambivalent effects on competition and market structure. According to Ashford, “although excessive regulatory uncertainty may cause inaction on the part of the industry, too much certainty will stimulate only minimum compliance technology. Similarly, too frequent change of regulatory requirements may frustrate technological development.”23 This is precisely the risk at issue: too frequent changes to the list of high-risk AI systems may discourage new entrants into the market, and a lack of stability in the regulatory framework could hinder innovation.

3. Risk of Harm

Fourth, AI systems also appear on the high-risk list because of the risk of harm in several sectors.24 EU based companies must assess the risk in terms of (i) severity, (ii) intensity, (iii) probability of occurrence, and (iv) duration, considered in combination, and (v) they must also determine whether the risk may affect an individual, a plurality of people, or a particular group of people. Risk assessment forces companies to predict the future, and it could also result in conflicting forecasts. The implementation of an AI system may, for instance, present high severity but a low probability of occurrence; it may also affect a group of people with low intensity over a long period. The problem is not the risk assessment itself. These types of assessments already exist in many sectors, such as medical devices, food, and drugs, and they work well in sectors that benefit from long-established practices and quantifiable results. Risk assessment for AI is clearly more challenging for EU based companies for a simple reason: the AI Act extends risk analysis beyond health and safety to assess impacts on fundamental rights. The regulation adopts a human-centric approach to protect fundamental rights and democracy. As a result, companies developing AI systems will need to predict their system’s impact on a wide range of factors on the basis of very complex, fragmented, and evolving case law. The interpretation of fundamental rights is notoriously vague and controversial, varying across legal systems within the EU. The European Court of Human Rights (ECtHR) and the Court of Justice of the European Union (CJEU) are regularly criticized for delivering contradictory and insufficiently reasoned judgments. Decisions are often based on assertions that do not allow the litigant to understand the reasons that led the courts to make these choices.25 In a nutshell, how can a company implement fundamental rights risk thresholds without the expertise or the authority to interpret legislation? The author wonders whether this assessment is reasonable for companies, especially for new entrants in a competitive market, who will not have the resources or expertise to master all these combinations and assess the damage.
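To make the point about conflicting forecasts concrete, the following sketch combines the criteria listed above into a single score. The weights, scales, and the two scenarios are invented for this example; the AI Act prescribes no such formula.

```python
# Hypothetical weighted combination of the assessment criteria named above.
# All scales (0-1), weights, and the example scenarios are editorial assumptions.

def risk_score(severity: float, intensity: float, probability: float,
               duration: float, scope: float) -> float:
    """Combine the five criteria; 'scope' stands for the share of a group affected."""
    return severity * probability * (0.5 * intensity + 0.3 * duration + 0.2 * scope)

# High severity but low probability of occurrence ...
scenario_a = risk_score(severity=0.9, intensity=0.4, probability=0.1,
                        duration=0.3, scope=0.2)
# ... versus low intensity affecting a group over a long period.
scenario_b = risk_score(severity=0.3, intensity=0.2, probability=0.8,
                        duration=0.9, scope=0.7)

print(f"scenario A: {scenario_a:.3f}  scenario B: {scenario_b:.3f}")
# Which of the two counts as "high-risk" depends entirely on the chosen
# weights and threshold, which illustrates the fuzziness discussed above.
```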

4. Product Covered by Union Harmonization Legislation

Fifth, apart from the list in Annex III of the AI Act, some forms of AI are classed as high-risk where they are intended to be used (i) as a product covered by the Union harmonization legislation listed in Annex II, or (ii) as a safety component of such a product.26 This implies that EU based companies need to reconcile possibly conflicting definitions of “safety component” in the AI Act and in the harmonization legislation. The meaning of “safety component” under the relevant harmonization legislation (the machinery regulation, for instance27) will sometimes differ from its meaning in the AI Act. One might point out that the relevant harmonization legislation prevails. Nonetheless, there is a clear issue of inconsistent interpretation, resulting in a lack of legal certainty between AI law and other EU legislation.

5. Utopia of Compliance with Certain Rules

Sixth, high-risk AI systems activate compliance rules. Numerous obligations will be difficult to comply with,28 among which are the obligations relating to data governance, which have been the subject of much criticism. Article 10 of the AI Act states that: “Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose29 ( … ). Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system.”30 These practices entail several necessary precautions, such as an assessment of the availability, quantity, and suitability of the data sets that are needed (a purely illustrative sketch of such checks follows this paragraph). Ebers et al. have pointed out the impossibility of enforcing the regulation in practice, as the data quantity requirement is not realistic from a technical perspective: it is not possible to precisely quantify the amount of data required to train an AI system.31 The obligation to maintain technical documentation is another standard that illustrates the paradox of the EU’s digital single market. The core philosophy of the single market is to remove barriers that limit freedom of all kinds (services, goods, people, capital). The obligation to maintain technical documentation is a good example of an obstacle that can discourage innovation and the circulation of goods. This documentation seems extremely cumbersome32 and could lead to a distortion of competition between companies that can afford to maintain it and those that cannot.33
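As a purely illustrative sketch of the kind of data-governance checks Article 10 gestures at (completeness, representativeness, and freedom from errors), the snippet below computes a few rough dataset statistics. The column names, the protected attribute, and the choice of metrics are assumptions made for the example, not requirements drawn from the regulation.

```python
import pandas as pd

# Editorial sketch of basic data-governance checks; not an official checklist.

def dataset_report(df: pd.DataFrame, protected_col: str = "gender") -> dict:
    return {
        # "complete ... to the best extent possible": overall share of missing values
        "missing_ratio": float(df.isna().mean().mean()),
        # rough proxy for representativeness: distribution of a protected attribute
        "group_shares": df[protected_col].value_counts(normalize=True).to_dict(),
        # crude error check: duplicated records
        "duplicate_rows": int(df.duplicated().sum()),
    }

df = pd.DataFrame({
    "income": [42_000, 55_000, None, 61_000],
    "gender": ["F", "M", "M", "M"],
    "label":  [1, 0, 0, 1],
})
print(dataset_report(df))
```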

6. Overlap in Legislation

Seventh, the AI Act poses the more general problem of its relationship with other legislation. AI-related legislation intertwines and overlaps to the point of creating a mass of rules imposing a disproportionate burden on providers, who must comply with numerous regulatory requirements at once. For instance, a large platform creating an AI system to generate all kinds of content could be simultaneously subject to the Digital Services Act, the Digital Markets Act, the AI Act, and the General Data Protection Regulation (GDPR). In the same way, a company that develops AI systems to provide new medical devices (MD) or in-vitro diagnostic medical devices (IVD) will have to comply with the GDPR, the AI Act, the Medical Devices Regulation (MDR), and the In-vitro Diagnostic Medical Devices Regulation (IVDR). Moreover, the conformity assessment carried out by one company may have to be repeated several times by other companies in some sectors. For example, an AI system provider that enhances machine safety will have to implement a conformity analysis, which will then have to be repeated by the machine manufacturer under the AI Act and the Machinery Regulation. In short, as soon as the AI Act calls for a conformity analysis of AI systems embedded in products, conformity assessments will be duplicated between the AI Act and all the regulations concerning the products at issue (AI Act vs Machinery Regulation, AI Act vs Medical Devices Regulation, etc.).

Moreover, the AI Act does not solve the issue of damages caused by AI systems. The interplay between the regulation and the AI Liability Directive is not clear. The regulation aims solely to prevent the risk of damage, without addressing the question of the civil liability regime applicable where harm is caused by AI systems. There is still legal uncertainty as to how damage caused to specific businesses could be appropriately compensated. The AI Liability Directive34 is still at the proposal stage and suffers from a number of shortcomings: (i) the proposal seems to favor a fault-based liability regime that is not suited to liability claims for damage caused by AI-enabled products,35 (ii) the rebuttable presumption intended to facilitate proof of damage is admitted only under excessively strict conditions,36 and (iii) the relationship between the AI Liability Directive and the revised Product Liability Directive creates overlaps and needs to be clarified.37 One can question the consequences for competition of this lack of clarity on the civil liability model by underlining at least three potential effects. The first effect is to decrease the effectiveness of private enforcement of competition law in relation to damage caused by AI systems. In a stand-alone action, companies would have to prove the competitor is at fault, which would be very difficult to do given that AI is sometimes considered a “black box”. Questions arise, such as: how can one prove what is opaque? How can one counter the “black box effect” of algorithms? Assuming this is possible, will victims have the means to do so? If so, at what cost? The second effect is to reduce access to the law. Indeed, the numerous reasons for AI systems to malfunction,38 combined with the opacity of algorithms,39 will increase the cost of obtaining evidence, thus reducing access to the law due to the cost of civil litigation. Small- and medium-sized businesses will not have the financial resources to hire specialist firms to assess their losses. The final effect is to encourage large companies to cause damage to small businesses. The moment private enforcement of competition law becomes non-deterrent, the incentive to infringe market rules becomes stronger.40 The low risk, for large companies developing AI, of being exposed to liability and civil action from small- and medium-sized enterprises decreases their incentives to comply with the regulation.

III. Cost of Compliance

In more general terms, apart from the specific problems mentioned above, all companies will face the same issue: the fragmentation of sources of EU law. This will significantly increase the cost of compliance for EU based companies. The cost of compliance could be a strong economic barrier for all small- and medium-sized companies, reducing access to the single market.41 The problem is not new, as similar criticisms have been raised regarding the GDPR.42 AI startups are much more financially vulnerable than large companies.43 According to the European Commission’s impact assessment of the AI Act, compliance costs are estimated at between €193k and €330k.44 The Commission has simply assessed the compliance cost of deploying high-risk AI systems; the cost of compliance due to the overlap with all the other legislation is not included in its estimate. Also, dual or triple regulatory compliance (GDPR, AI Act, DMA, etc.) could lead to an accumulation of fines. The lack of legal clarity and the resulting compliance costs could permanently prevent companies from entering a market or delay the arrival of new companies.

However, it could be argued that the cost of compliance is not necessarily too high; it must only be proportionate to the objective pursued. The proportionality principle requires an assessment of whether EU measures (i) are suitable for achieving the desired aim, (ii) are necessary to achieve that aim, and (iii) impose an excessive burden on individuals or companies in relation to the objective to be achieved.45 On the first two points, AI law is suitable for monitoring high-risk AI systems and necessary to protect citizens and fundamental rights. On the third point, however, only the application of the AI regulation over time will tell whether the regulatory burden on companies is too high. This may well prove an unsolvable question. In principle, the burden imposed on companies is assessed against the specific objective to be achieved, but the AI Act pursues many objectives.46 Reading paragraph 1 of the AI regulation is dizzying, given the number of objectives it lists.47 The difficulty is therefore as follows: it is necessary to assess the proportionality of the AI Act not only in relation to one objective (the free movement of goods, for example), but in relation to several objectives (the free movement of goods, but also fundamental rights, security, ethical principles, etc.). The more objectives there are in the internal market, the more conflicts there will be between them. The greater the regulatory burden to protect fundamental rights, the greater the risk of limiting innovation and therefore the EU’s economic prosperity, which is the general aim of the Common Market. Litigation in European law speaks for itself. The Charter of Fundamental Rights is full of principles that coexist and ultimately clash: the principle of equality versus the principle of freedom; the principle of freedom versus security;48 freedom of expression versus the free movement of goods49 or the right to privacy; freedom of movement versus the right to strike;50 and so on. This leads to several observations suggesting that the cost of regulation will increase. First, the cohabitation of fundamental rights, and the need to respect them under the AI Act, will cause conflicts of rights. Second, by dint of enshrining multiple objectives in the name of diverse fundamental principles, the law will become less and less comprehensible, and the cost of regulation increasingly high. Third, the AI Act will cause an increasing overlap of fundamental rights in the case law of the CJEU. Fourth, as the regulation puts various fundamental rights, freedoms, and principles on the same level, as essential objectives to be reached, the Court of Justice of the European Union will have to favor certain rights and freedoms to the detriment of others. Last, litigation before the CJEU will give rise to judgments where fundamental rights prevail over fundamental freedoms. Indeed, while the AI regulation appears to be in line with the objectives of the Common Market, the digital single market introduces a philosophy that is less liberal than that of the internal market: the main focus is on citizens’ fundamental rights, security and respect for ethical principles, with freedom taking a back seat.

IV. Conclusion

The AI Act increases the cost of compliance significantly for EU based companies, and the legal obstacles to compliance with the regulation are numerous. This chapter has pointed out the winding path to AI Act compliance by underlining at least seven legal barriers for businesses that significantly increase the cost of compliance: (i) the risk-based approach, which is subject to differing interpretations due to the broad and vague notion of risk, which is not a legal concept; (ii) the definition of AI, which is just as vague, raising the question of how to define risk on the basis of an ill-defined object; (iii) the legal uncertainty surrounding the list of high-risk AI systems, which can be modified by the Commission under the delegated act procedure; (iv) the risk assessment, which involves predicting violations of fundamental rights on the basis of inconsistent and fragmented case law; (v) the lack of legal certainty due to possibly conflicting definitions of safety components in the AI Act and in harmonization legislation; (vi) unrealistic compliance rules that undermine incentives for innovation and distort competition in the internal market; and (vii) the duplication of conformity assessments between the AI Act and product-specific regulations, together with the interaction with multiple pieces of legislation, which creates uncertainty, overlap, and collision. These defects can be corrected over time, but, as always with the single market, it is a question of striking the right balance between protection for citizens and innovation for businesses.

(1) YourEurope, europa.eu, https://europa.eu/youreurope/business/product-requirements/compliance/technical-documentation-conformity/index_en.html (last visited Sept. 1, 2024): “An EU declaration of conformity (DoC) is a mandatory document that you as a manufacturer or your authorised representative need to sign to declare that your products comply with the EU requirements. By signing the DoC you take full responsibility for your product’s compliance with the applicable EU law.”

(2) YourEurope, europa.eu, https://europa.eu/youreurope/business/product-requirements/labels-markings/ce-marking/index_en.html (last visited Sept. 1, 2024). CE Marking: “Many products require CE marking before they can be sold in the EU. CE marking indicates that a product has been assessed by the manufacturer and deemed to meet EU safety, health and environmental protection requirements. It is required for products manufactured anywhere in the world that are then marketed in the EU.”

(3) VAGELIS PAPAKONSTANTINOU & PAUL DE HERT, THE REGULATION OF DIGITAL TECHNOLOGIES IN THE EU, ACT-IFICATION, GDPR MIMESIS AND EU LAW BRUTALITY AT PLAY, 48–60 (2024), p. 56.

(4) Persons that provide, distribute, or use AI systems in the EU with their products under their own name or trademark.

(5) EU persons that release AI systems bearing a non-EU based provider’s name and mark.

(6) Persons that make AI systems available on the EU market.

(7) EU persons appointed by a provider to perform obligations under the EU AI Act.

(8) Commission Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) 300/2008, (EU) 167/2013, (EU) 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), art. 16.

(9) Commission Regulation (EU) 2024/1689 (Artificial Intelligence Act), art. 3: “provider” means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

(10) Example: A bank that uses an AI system developed by a third party to make loan determinations is a deployer. Also, an organization can act as both a developer and a deployer. Example: a company develops AI software that monitors customer transactions and then uses it on its own platforms. The company is both a developer and a deployer.

(11) Example: tailoring an AI system could tip companies into the provider category if the original AI system is high-risk and its tailoring results in an AI system different from the original (i.e., it is a "substantial modification"), but the system remains high-risk.

(12) If it is high-risk, the provider or deployer must carry out a conformity assessment; it must ensure that stand-alone AI systems are registered in an EU database; and it must sign an EU declaration of conformity.

(13) Commission Regulation (EU) 2024/1689 (Artificial Intelligence Act), art. 3(2).

(14) Id. art. 3(2).

(15) In fact, as mentioned, the latest version, Commission Regulation (EU) 2024/1689 (Artificial Intelligence Act), defines risk as “the combination of the probability of an occurrence of harm and the severity of that harm.” This seems to be less a definition of risk than a method for the quantitative estimation of risk probability in a risk assessment.

(16) Id. Annex III.

(17) Nuclear weapons and vaccines, for example, are inventions that were made possible by human intelligence.

(18) Commission Regulation (EU) 2024/1689 (Artificial Intelligence Act), “AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

(19) Stakeholder Joint Statement on Access to Innovative Healthcare under the Artificial Intelligence Act (AI Act), MedTeCh Europe (June 15, 2023), https://www.medtecheurope.org/news-and-events/news/stakeholder-jointstatement-on-access-to-innovative-healthcare-under-the-artificial-intelligence-act-ai-act/.

(20) Once a European law has been adopted, it may need to be updated to reflect developments in a particular sector or to ensure its correct implementation. The EU Parliament and the Council may authorize the Commission to adopt delegated or implementing acts. The Commission can adopt a delegated act on the basis of a delegation granted in the text of an EU law, in this case the AI Act. Under the AI Act, the Commission can amend the list of high-risk AI systems, via delegated acts, to take into account rapid technological development as well as potential changes in the use of AI systems (Commission Regulation (EU) 2024/1689 (Artificial Intelligence Act), § 52). The Commission’s power to adopt delegated acts is subject to strict limits (for instance, a delegated act cannot change the essential elements of the legislative act, but the difficulty is often to determine precisely what the essential elements of the legislative act are). The Commission prepares and adopts delegated acts after consulting expert groups, composed of representatives from each EU country, which meet on a regular or occasional basis.

(21) Id. art. 7: “The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend Annex III by adding or modifying use-cases of high-risk AI systems where both of the following conditions are fulfilled: (a) the AI systems are intended to be used in any of the areas listed in Annex III; (b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.”

(22) The list is indeed useful to reassure companies that can refer to it to assess the risk of their AI system.

(23) Nicholas Ashford et al., Using Regulation to Change the Market for Innovation, 9 Harv. Env’t L. Rev. 419–466 (1985). See also Nicholas Ashford & George R. Heaton, Jr., Regulation and Technological Innovation in the Chemical Industry, Law & Contemp. Probs. 109–157 (1983).

(24) AI Act, Regulation (EU) 2024/1689, Annex III. The Annex contains eight fixed areas. These are: (i) Biometric and biometrics-based systems; (ii) Management and operation of critical infrastructure; (iii) Education and vocational training; (iv) Employment, workers management and access to self-employment; (v) Access to and enjoyment of essential private services and public services and benefits; (vi) Law enforcement; (vii) Migration, asylum and border control management; and (viii) Administration of justice and democratic processes.

(25) Hanneke C.K. Senden, Interpretation of fundamental rights in a multilevel legal system: an analysis of the European Court of Human Rights and the Court of Justice of the European Union (Nov. 8, 2011) (Doctoral thesis, School of Human Rights Research Series, Intersentia, Antwerp), https://scholarlypublications.universiteitleiden.nl/handle/1887/18033.

(26) The malfunction of the AI system embedded in a product could pose a danger to safety and health of persons.

(27) Regulation (EU) 2023/1230 of the European Parliament and of the Council of 14 June 2023 on machinery and repealing Directive 2006/42/EC of the European Parliament and of the Council and Council Directive 73/361/EEC (Machinery Regulation), 2023, O.J. (L 165/1).

(28) Quality of data sets used to train, validate and test the AI systems (Commission Regulation (EU) 2024/1689 (Artificial Intelligence Act), art. 10); technical documentation (Recital 46, art. 11, and Annex IV); record-keeping in the form of automatic recording of events (art. 12); transparency and the provision of information to users (Recital 47, and art. 13); human oversight (Recital 48, and art. 14); robustness, accuracy and cybersecurity (Recitals 49 to 51, and art. 15).

(29) Id. art. 10(3).

(30) Id. art. 10(2).

(31) Martin Ebers et al., The European Commission’s Proposal for an Artificial Intelligence Act – a Critical Assessment by Members of the Robotics and AI Law Society (RAILS), 4 MDPI 589–603 (2021), https://doi.org/10.3390/j4040043.

(32) The technical documentation requires companies to detail the design specifications of the AI system (see Annex IV of the European Commission’s AI Act), including the general logic of the algorithm and documented rationales and assumptions made during the design (e.g., the main classification choices, the parameters it is optimized for, descriptions of outputs, etc.). The technical documentation must also assess whether the functioning of the AI system complies with several EU fundamental rights covered in ch. 2 of the AI Act.

(33) Thibault Schrepel, Decoding the AI Act: A Critical Guide for Competition Experts 4–5 (Amsterdam L. & Tech. Inst., Working Paper No. 3-2023, 2023).

(34) Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM (2022) 496 final (Sept. 28, 2022).

(35) Indeed, how will victims be able to demonstrate the failure of the algorithm that caused the damage? The Commission itself compares artificial intelligence to a black box. Proving the company is at fault (in other words, its negligence) by demonstrating that it is indeed responsible for the algorithm’s failure implies proving the defect in the artificial intelligence system, which is a particularly difficult evidential burden to bear for the claimant. See Proposal for an AI Liability Directive, COM (2022) 496 final, art. 4, “The fault of the defendant has to be proven by the claimant according to the applicable Union or national rules. Such fault can be established, for example, for non-compliance with a duty of care pursuant to the AI Act.” See also art. 1, indicating the subject matter and scope of the directive: “it applies to non-contractual civil law claims for damages caused by an AI system, where such claims are brought under fault-based liability regimes.”

(36) In principle, the AI Liability Directive will make it easier to prove a causal link between a relevant party’s fault and the output of an AI system that causes the damage through rebuttable presumptions. In fact, “such a presumption should only apply when it can be considered reasonably likely, from the circumstances in which the damage occurred, that such fault has influenced the output produced by the AI system or the failure of the AI system to produce an output that gave rise to the damage.” (Proposal for an AI Liability Directive, COM (2022) 496 final, § 25).

(37) Indeed, as the proposals stand, it is highly likely that certain losses could give rise to an action both on the basis of the revised Defective Products Directive and on the basis of the AI Liability Directive. It would then be up to the claimant to make the right procedural choice, at the potential risk of seeing their action declared inadmissible because of the option chosen. COM(2022) 495 - Proposal for a directive of the European Parliament and of the Council on liability for defective products. Also, the European Parliament, on 12 March 2024, formally adopted the new Product Liability Directive. The Council of the EU still needs to formally adopt the directive, following which it will be published in the Official Journal of the EU and then enter into force 20 days after its publication. The new rules will apply to products placed on the market 24 months after entry into force.

(38) Errors in data selection or labeling, choice of non-representative data, erroneous choice of algorithm, human bias, etc.

(39) Jenna Burrell, How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms, 3 Big Data & Soc’y 1–12 (2016).

(40) Godefroy de Boiscuillé, La faute lucrative en droit de la concurrence (Concurrences, Sept. 2022). See also Godefroy de Boiscuillé, Relevance and Shortcomings of Behavioral Economics in Antitrust, 11 J. Eur. Competition L. & Prac. 228–237 (2020).

(41) Id.

(42) According to the PwC report, the cost of maintaining GDPR compliance amounts to more than $1 million (about €900,000). The report mentions cases where this figure could be significantly higher. See PwC report, https://uk.insight.com/content/dam/insight/EMEA/blog/2017/06/GDPR-Infographic-design-final.pdf (last visited Sept. 1, 2024).

(43) Weiyue Wu & Shaoshan Liu, Why Compliance Costs of AI Commercialization May Be Holding Start-Ups Back, Harv. Kennedy Sch. Rev. (May 5, 2023): “Based on the OECD Regulatory Compliance Cost Assessment Guidance, we quantitatively compare the financial vulnerability of tech giants versus AI startups. We found that start-ups’ operating margins are significantly impacted by compliance costs, in contrast to tech giants (…) When the fixed compliance cost increases by 200%, the operating margin of the startup changes from 13% to -7%, causing the firm to lose money. In contrast, such a change only causes a slight dip in the operating margin for tech giants.”

(44) EUROPEAN COMMISSION, STUDY SUPPORTING THE IMPACT ASSESSMENT OF THE AI REGULATION (Apr. 21, 2021), https://digital-strategy.ec.europa.eu/en/library/study-supporting-impact-assessment-ai-regulation. See also CECIMO Paper on Artificial Intelligence, CECIMO (Nov. 1, 2023), https://www.cecimo.eu/wp-content/uploads/2022/10/CECIMO-Paper-on-the-Artificial-Intelligence-Act.pdf.

(45) The principle of proportionality is laid down in the Treaty on European Union (TEU), art. 5(4).

(46) The AI regulation appears to be in line with the objectives of the creation of a digital single market. The digital single market introduces a philosophy that is less liberal than the internal market. The main focus is on citizens’ fundamental rights, security and respect for ethical principles, with freedom taking a back seat.

(47) Commission Regulation (EU) 2024/1689 (Artificial Intelligence Act), § 1: “The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.”

(48) Examples from everyday life bear this out, from seatbelt laws to food safety regulations.

(49) See, Case C-368/95, Vereinigte Familiapress Zeitungsverlags- und Vertriebs GmbH v. Bauer Verlag, ECLI:EU:C:1997:325. The case concerned Austria’s ban on the sale of newspapers containing games of chance. The Court ruled that maintaining the diversity of the press and thus safeguarding the freedom of expression constituted “an overriding requirement justifying a restriction on the free movement of goods.”

(50) See also, Case C-112/00, Eugen Schmidberger Internationale Transporte Planzüge v. Austria, ECLI:EU:C:2003:333. This case illustrates a clash between the free movement of goods and the right of expression and the right of assembly.
