Why the European Parliament should reject (or substantially amend) the Commission's proposal on EU Information Security ("INFOSEC"). (1) The issue of "classified information"

By Emilio De Capitani

1. Setting the scene: the EU legal framework on access to documents and confidential information before the Lisbon Treaty

To better understand why the Commission's draft "INFOSEC" legislative proposal (2022/0084(COD)) on information security should be substantially amended, let us recall the EU legal framework on access to documents, and notably on EU classified information, as it stood before the Lisbon Treaty and the Charter. With the entry into force of the Amsterdam Treaty in May 1999, the EP and the Council came under the obligation (art. 255 TEC) to adopt, within two years, new EU rules framing the individual right of access to documents while establishing at the same time the "general principles and limits on grounds of public or private interest" which may restrict such right of access (emphasis added).

Notwithstanding a rather prudent Commission legislative proposal, the EP strongly advocated a stronger legal framework for access to documents, for legislative transparency, and even for the treatment at EU level of information which, because of its content, should be handled confidentially (so-called "sensitive" or "classified" information).

Needless to say, "sensitive" or "classified" information at Member State level is deemed to protect "essential interests" of the State and, by law, is subject to a special parliamentary and judicial oversight regime.[1] As a consequence, at EU level, even after Lisbon, national classified information is considered an essential aspect of national security, which "remains the sole responsibility of each Member State" (art. 4.2 TEU), and "no Member State shall be obliged to supply information the disclosure of which it considers contrary to the essential interests of its security" (art. 346.1(a) TFEU).

However, when national classified information is shared at EU level, as is the case for EU internal or external security policies, it must be treated, as for any other EU policy, in compliance with EU rules. The question is on what legal basis these rules should be founded. This issue came to the fore already in 2000, when the newly appointed Council Secretary General Javier Solana negotiated with NATO a first interim agreement on the exchange of classified information. The agreement, which mirrored at EU level the NATO classification standards ("Confidential", "Secret" and "Top Secret"), was founded on the Council's internal organisational power, but this "administrative" approach was immediately challenged before the Court of Justice by a Member State (the Netherlands) [2] and by the European Parliament itself [3], which considered that the correct legal basis should have been the new legislation on access to documents foreseen by art. 255 TEC, at the time under negotiation. The Council at last acknowledged that art. 255 TEC on access to documents was the right legal basis; a specific article (art. 9 [4]) was inserted in Regulation 1049/01 implementing art. 255 TEC, and the EP and the Netherlands withdrew their applications before the CJEU [5].

The point is that art. 9 of Regulation 1049/01 still covers only possible access by EU citizens, and such access may be vetoed by the "originator" of the classified information. Unlike national legislation on classified information, art. 9 unfortunately did not solve, for lack of time, the issue of democratic and judicial control over EUCI by the European Parliament and the Court of Justice. Art. 9(7) of Regulation 1049/01 makes only a generic reference to the fact that "The Commission and the Council shall inform the European Parliament regarding sensitive documents in accordance with arrangements agreed between the institutions." A transitional and partial solution was then found by negotiating interinstitutional agreements between the EP and the Council in 2002 [6] and 2014 [7], and between the EP and the European Commission in 2010 [8].

The point is that interinstitutional agreements, even if they may be binding (art. 295 TFEU), can only "facilitate" the implementation of EU law which, as described above, in the case of democratic and judicial control of classified information, still does not exist. Not surprisingly, both the Council and the Commission interinstitutional agreements consider that the "originator" principle should also be binding on the other EU institutions, such as the European Parliament and the Court of Justice.

This situation is clearly unacceptable in an EU deemed to be democratic and bound by the rule of law, as it creates zones to which not only EU citizens but also their representatives may have no access because of the "originator's" veto. As a result, in these situations the EU is no longer governed by the rule of law but only by the "goodwill" of the originator.

To make things even worse, the Council's established practice is to negotiate with third countries and international organisations agreements [9] covering the exchange of confidential information while declaring that the other EU institutions (such as the EP and the Court of Justice) should be considered "third parties", subject in turn to the "originator" principle.

Such a situation has become Kafkaesque with the entry into force of the Lisbon Treaty, which now recognises at primary law level the EP's right to be "immediately and fully" informed, also as regards classified information exchanged during the negotiation of an international agreement [10]. Inexplicably, fourteen years after the entry into force of the Treaty, the European Parliament has not yet challenged these clearly unlawful agreements before the Court of Justice.

That institutional problem aside, the fact remains that, until the presentation of the draft INFOSEC proposal, no one challenged the idea that in the EU the correct legal basis for the treatment of classified information should be the same as for access to documents, which, after the entry into force of the Lisbon Treaty, is now art. 15.3 TFEU [11].

2. Why the Commission's choice of art. 298 TFEU as the legal basis for the INFOSEC proposal is highly questionable [12]

After the entry into force of the Lisbon Treaty and of the Charter, the relation between the fundamental right of access to documents and the corresponding obligation of the EU administration to grant administrative transparency and to decide whether or not to disclose its information/documents has been strengthened, also because of art. 52 of the EU Charter.

In an EU bound by the rule of law and by democratic principles, openness and the fundamental right of access should be the general rule, and "limits" to such rights should be an exception framed only "by law". As described above, the correct legal basis for such a "law" is art. 15 TFEU which, like the former art. 255 TEC, states that "general principles and limits on grounds of public or private interest" may limit the right of access and the obligation to disclose EU internal information/documents. Also from a systemic point of view, "limits" to disclosure and to access are now covered by the same Treaty article which frames (in much stronger words than art. 255 before Lisbon) the principles of "good governance" (par. 1), legislative transparency (par. 2) and administrative transparency (par. 3).

Such a general "transparency" rule is worded as follows: "1. In order to promote good governance and ensure the participation of civil society, the Union institutions, bodies, offices and agencies shall conduct their work as openly as possible. (..) Each institution, body, office or agency shall ensure that its proceedings are transparent and shall elaborate in its own Rules of Procedure specific provisions regarding access to its documents, in accordance with the regulations referred to in the second subparagraph."

Bizarrely, the European Commission has chosen for the INFOSEC Regulation art. 298 TFEU, on an open, independent and efficient EU administration, simply ignoring art. 15 TFEU and making an ambiguous reference to the fact that INFOSEC should be implemented "without prejudice" to the pre-Lisbon Regulation 1049/01 dealing with access to documents and administrative transparency. How such prejudice can be avoided when the two Regulations overlap and the INFOSEC Regulation upgrades the Council's internal security rules to legislative level is a challenging question.

It is indeed self-evident that both the INFOSEC Regulation and Regulation 1049/01 deal with the authorised/unauthorised "disclosure" of EU internal information/documents.

The overlap between the two Regulations is even more striking for the treatment of EU classified information (EUCI), as this information is covered both by art. 9 of Regulation 1049/01 and now by articles 18 to 58 and annexes II to VI of the INFOSEC Regulation.

As described above, art. 255 TEC has since Lisbon been replaced and strengthened by art. 15 TFEU, so the Commission's proposal to replace it with art. 298 TFEU looks like a "détournement de procédure" which may be challenged before the Court for almost the same reasons already raised in 2000 by the EP and the Netherlands. It would have been sensible to relaunch the negotiations on the revision of Regulation 1049/01 in the new post-Lisbon perspective, but the Commission decided this year to withdraw the relevant legislative procedure. Submitting a legislative proposal such as INFOSEC, which promotes overall confidentiality, while at the same time withdrawing a legislative proposal promoting transparency, sends a rather strong message from the Commission to the public.

3. Does the INFOSEC proposal grant true security for EU internal information?

The point is that European administrative transparency is now a fundamental right of the individual enshrined in the Charter (Article 42). The protection of administrative data is one aspect of the "duty" of good administration enshrined in Article 41 of the Charter, which stipulates that every person has the right of access to their file, "with due regard for the legitimate interests of confidentiality and professional and business secrecy."

However, art. 298 TFEU is not the legal basis framing professional secrecy. It is only a provision on the functioning of the institutions and bodies which, "in carrying out their tasks … [must rely on] an 'open' European administration" [13], and is not an article intended to ensure the protection of administrative documents.

This objective is better served by other legal bases in the Treaties.

First of all, protecting the archives of EU institutions and bodies from outside interference is, even before being a legitimate interest, an imperative condition laid down by the Treaties and the related 1965 Protocol on the Privileges and Immunities of the Union, adopted on the basis of the current Article 343 TFEU. Articles 1 and 2 of that Protocol stipulate that the premises and buildings of the Union, as well as its archives, "shall be inviolable."

Furthermore, in order to ensure that officials protect the documents of their institutions in the performance of their duties, Article 17 of the Staff Regulations stipulates that:

1. Officials shall refrain from any unauthorized disclosure of information coming to their knowledge in the course of their duties, unless such information has already been made public or is accessible to the public.

Again (as for Regulation 1049/01), the INFOSEC Regulation states that it should apply "without prejudice" to the Staff Regulations, thereby mirroring the second paragraph of art. 298 TFEU, which itself states that it should be implemented "in accordance with the Staff Regulations and the rules adopted on the basis of Article 336." So, also from this second perspective, the correct legal basis for INFOSEC could be Articles 339 (on professional secrecy) and 336 TFEU, with the consequent amendment of the Staff Regulations by means of a legislative regulation of the Parliament and the Council.

By proposing a legislative regulation on the basis of Article 298, the Commission therefore circumvents the obligations imposed by Articles 336 and 339 (on professional secrecy) and, more importantly, by Article 15(3) TFEU, according to which each institution or body "shall ensure" (i.e., must ensure) "the transparency of its proceedings [and therefore also their protection from external interference] and shall lay down in its rules of procedure specific provisions concerning access to its documents [and therefore also concerning their protection], in accordance with the regulations referred to in the second subparagraph" (editor's note: currently Regulation 1049/01).

The objectives set out in Article 298 cannot therefore override the requirements of protecting the fundamental right of access to documents, nor those of Article 15 TFEU, which could be considered the "centre of gravity" when several legal bases are competing [14].

The same applies to compliance with the regulation laying down the Staff Regulations and, in particular, with Article 17 thereof, cited above.

Ultimately, the provisions on the legislative procedure for Union legislative acts are not at the disposal of the Commission, given that administrative transparency is a fundamental right and the protection of documents is a corollary thereof and not a means of functioning of the institutions. Administrative transparency is a fundamental right of every person; the protection of administrative data is a legitimate interest of every administration.

A "public" interest that can certainly limit the right of access, but only under the conditions established by the legislator under art. 15 TFEU, and only by the latter.

4. Conclusions

If a recommendation may be made now to the co-legislators, it is to avoid illusory shortcuts such as the current Commission proposal, whose real impact on the EU administrative "bubble" is far from clear [15]. Since the entry into force of the Lisbon Treaty more than fourteen years ago, the EU legislator has faced much more pressing problems.

What is most needed is not inventing several layers of illusory "protection" of EU information but framing the administrative procedures by law, as suggested several times by the European Parliament and by the multiannual endeavour of brilliant scholars focusing on EU administrative law [16].

What matters is that the management of, and access to, EU information should be framed by law and not depend on the goodwill of the administrative author or the receiver, as proposed by the INFOSEC Regulation. Nor is information security strengthened by transforming each of the 64 EU "entities" covered by the INFOSEC Regulation [17] into sandboxes where information is shared only with the people who, according to the "originator", have a "need to know" rather than a "right to know".

Moreover, the EU should limit, and not generalise, the power of each of the 64 EU entities to create "classified" information (EUCI). In this perspective, art. 9 of Regulation 1049/01 does indeed need a true revision, but in the light of the new EU constitutional framework and of the new institutional balance arising from the Lisbon Treaty and the Charter.

Fourteen years after Lisbon, the democratic oversight of the European Parliament and the judicial control of the Court of Justice over classified documents should be granted by EU law, as is the case in most EU countries, and not by interinstitutional agreements which maintain the "originator" veto against these institutions, in violation of the rule of law principle as well as of the EU institutional balance.

Can it still be acceptable, fourteen years after the entry into force of the Lisbon Treaty, that the European Parliament and the Court of Justice are not taken into account in the dozens of international agreements by which the Council frames the exchange of EUCI with third countries and international organisations?

Instead of dealing with these fundamental issues, the European Commission, in its 67-page proposal, makes no reference to 24 years of experience in the treatment of classified information and prefers dragging the co-legislators into Kafkaesque debates on "sensitive but not classified" information, or on the strange idea that documents should be marked "public" by purpose and not by nature (thereby crossing the line separating public transparency from public propaganda).

But, all that being said, it is not the Commission which will be responsible before the citizens (and the European Court) for badly drafted legislation. It is the European Parliament and the Council which must now take their responsibility. They cannot hide behind the Commission's unwillingness to deal with substantive issues (as well as with other aspects of legislative and administrative transparency); if the Council too prefers to keep things as they were before Lisbon, it is up to the European Parliament to take the lead, establish a frank discussion with the other co-legislator, and verify whether there is a will to fix the real and growing shortcomings in the EU administrative "bubble".

Continuing the negotiations on the current version of the INFOSEC proposal, notably on the complex issue of classified information, paves the way to even bigger problems which (sooner rather than later) risk being brought, as in 2000, to the CJEU's table.


[1] According to the Venice Commission “.. at International and national level access to classified documents is restricted by law to a particular group of persons. A formal security clearance is required to handle classified documents or access classified data. Such restrictions on the fundamental right of access to information are permissible only when disclosure will result in substantial harm to a protected interest and the resulting harm is greater than the public interest in disclosure.  Danger is that if authorities engage in human rights violations and declare those activities state secrets and thus avoid any judicial oversight and accountability. Giving bureaucrats new powers to classify even more information will have a chilling effect on freedom of information – the touchstone freedom for all other rights and democracy – and it may also hinder the strive towards transparent and democratic governance as foreseen since Lisbon by art.15.1 of TFEU (emphasis added) The basic fear is that secrecy bills will be abused by authorities and that they lead to wide classification of information which ought to be publicly accessible for the sake of democratic accountability.  Unreasonable secrecy is thus seen as acting against national security as “it shields incompetence and inaction, at a time that competence and action are both badly needed”. (…) Authorities must provide reasons for any refusal to provide access to information.  The ways the laws are crafted and applied must be in a manner that conforms to the strict requirements provided for in the restriction clauses of the freedom of information provisions in the ECHR and the ICCPR.” 

[2] Action brought on 9 October 2000 by the Kingdom of the Netherlands against the Council of the European Union (Case C-369/00) (2000/C 316/37)

[3] Action brought on 23 October 2000 by the European Parliament against the Council of the European Union (Case C-387/00) (2000/C 355/31). Link: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:C2000/355/31

[4] Regulation 1049/01, Article 9 "Treatment of sensitive documents":

1. Sensitive documents are documents originating from the institutions or the agencies established by them, from Member States, third countries or International Organisations, classified as “TRÈS SECRET/TOP SECRET”, “SECRET” or “CONFIDENTIEL” in accordance with the rules of the institution concerned, which protect essential interests of the European Union or of one or more of its Member States in the areas covered by Article 4(1)(a), notably public security, defence and military matters.

2. Applications for access to sensitive documents under the procedures laid down in Articles 7 and 8 shall be handled only by those persons who have a right to acquaint themselves with those documents. These persons shall also, without prejudice to Article 11(2), assess which references to sensitive documents could be made in the public register.

3. Sensitive documents shall be recorded in the register or released only with the consent of the originator.

4. An institution which decides to refuse access to a sensitive document shall give the reasons for its decision in a manner which does not harm the interests protected in Article 4.

5. Member States shall take appropriate measures to ensure that when handling applications for sensitive documents the principles in this Article and Article 4 are respected.

6. The rules of the institutions concerning sensitive documents shall be made public.

7. The Commission and the Council shall inform the European Parliament regarding sensitive documents in accordance with arrangements agreed between the institutions.

[5] Notice for the OJ: removal from the register of Case C-387/00. By order of 22 March 2002 the President of the Court of Justice of the European Communities ordered the removal from the register of Case C-387/00: European Parliament v Council of the European Union. OJ C 355 of 09.12.2000.

[6] Interinstitutional Agreement of 20 November 2002 between the European Parliament and the Council concerning access by the European Parliament to sensitive information of the Council in the field of security and defence policy (OJ C 298, 30.11.2002, p. 1).

[7] According to the Interinstitutional Agreement of 12 March 2014 between the European Parliament and the Council concerning the forwarding to and handling by the European Parliament of classified information held by the Council on matters other than those in the area of the common foreign and security policy (OJ C 95, 1.4.2014, pp. 1–7): "4. The Council may grant the European Parliament access to classified information which originates in other Union institutions, bodies, offices or agencies, or in Member States, third States or international organisations only with the prior written consent of the originator."

[8] According to annex III point 5 of the Framework Agreement on relations between the European Parliament and the European Commission (OJ L 304, 20.11.2010, pp. 47–62): "In the case of international agreements the conclusion of which requires Parliament's consent, the Commission shall provide to Parliament during the negotiation process all relevant information that it also provides to the Council (or to the special committee appointed by the Council). This shall include draft amendments to adopted negotiating directives, draft negotiating texts, agreed articles, the agreed date for initialling the agreement and the text of the agreement to be initialled. The Commission shall also transmit to Parliament, as it does to the Council (or to the special committee appointed by the Council), any relevant documents received from third parties, subject to the originator's consent. The Commission shall keep the responsible parliamentary committee informed about developments in the negotiations and, in particular, explain how Parliament's views have been taken into account."

[9] See: Agreements on the security of classified information. Link: https://eur-lex.europa.eu/EN/legal-content/summary/agreements-on-the-security-of-classified-information.html

[10] Article 218.10 TFEU states clearly that "The European Parliament shall be immediately and fully informed at all stages of the procedure" when the EU is negotiating international agreements, even when the agreement "relates exclusively or principally to the common foreign and security policy" (art. 218.3 TFEU).

[11] Interestingly, reference to art. 15 TFEU is also made in the EP–Council 2014 Interinstitutional Agreement on access to classified information (not dealing with external defence). See point 15: "This Agreement is without prejudice to existing and future rules on access to documents adopted in accordance with Article 15(3) TFEU; rules on the protection of personal data adopted in accordance with Article 16(2) TFEU; rules on the European Parliament's right of inquiry adopted in accordance with the third paragraph of Article 226 TFEU; and relevant provisions relating to the European Anti-Fraud Office (OLAF)."

[12] However, this legal basis was fit for another legislative proposal, of a more technical nature, which has now become Regulation (EU) 2023/2841 laying down measures for a high common level of cybersecurity for the institutions, bodies, offices and agencies of the Union. That Regulation applies at EU administrative level the principles established for the EU Member States by Directive (EU) 2022/2555, improving the cyber resilience and incident response capacities of public and private entities. It created an Interinstitutional Cybersecurity Board (IICB) and a Computer Emergency Response Team (CERT) which operationalises the standards defined by the IICB and interacts with the other EU agencies (such as ENISA, the EU agency dealing with cybersecurity), the corresponding structures in the EU Member States and even the NATO structures. It may be too early to evaluate whether that Regulation is fit for purpose, but the general impression is that its new common and cooperative system of alert and mutual support between the EU institutions, agencies and bodies may comply with the letter and spirit of art. 298 TFEU.

[13] Quite bizarrely, this "open" attribute is not cited in the INFOSEC proposal and, even more strangely, none of the EU institutions has so far consulted the EU Ombudsman and/or the Fundamental Rights Agency.

[14] See Case C-338/01, Commission of the European Communities v Council of the European Union (Directive 2001/44/EC – Choice of legal basis): "The choice of the legal basis for a Community measure must rest on objective factors amenable to judicial review, which include in particular the aim and the content of the measure. If examination of a Community measure reveals that it pursues a twofold purpose or that it has a twofold component and if one of these is identifiable as the main or predominant purpose or component whereas the other is merely incidental, the act must be based on a single legal basis, namely that required by the main or predominant purpose or component. By way of exception, if it is established that the measure simultaneously pursues several objectives which are inseparably linked without one being secondary and indirect in relation to the other, the measure must be founded on the corresponding legal bases…"

[15] Suffice it to cite the following legal disclaimer: "This Regulation is without prejudice to Regulation (Euratom) No 3/1958, Regulation No 31 (EEC), 11 (EAEC), laying down the Staff Regulations of Officials and the Conditions of Employment of other servants of the European Economic Community and the European Atomic Energy Community, Regulation (EC) 1049/2001 of the European Parliament and of the Council, Regulation (EU) 2018/1725 of the European Parliament and of the Council, Council Regulation (EEC, Euratom) No 354/83, Regulation (EU, Euratom) 2018/1046 of the European Parliament and of the Council, Regulation (EU) 2021/697 of the European Parliament and of the Council, Regulation (EU) [2023/2841] of the European Parliament and of the Council laying down measures for a high common level of cybersecurity at the institutions, bodies, offices and agencies of the Union."

[16] See the ReNEUAL Model Rules on EU Administrative Procedure. The ReNEUAL working groups have developed a set of model rules designed as a draft proposal for binding legislation, identifying – on the basis of comparative research – best practices in different specific policies of the EU, in order to reinforce general principles of EU law.

[17] The Council has listed no fewer than 64 EU entities (EU institutions, agencies and bodies – EUIBAs) in document WK8535/2023.

AI liability rules: a blocked horizon?

By Michèle Dubrocard[1]

February 2025

Today, no one challenges the potential benefits offered by AI for individuals and society in general, nor the existence of serious risks, some of them already identified, others likely to emerge. Let us bear in mind the conclusions of the first International AI Safety Report [2] which, focusing on general-purpose AI, recognises that "there is a wide range of possible outcomes even in the near future, including both very positive and very negative ones, as well as anything in between".

So, when the European Commission issued, on 28 September 2022, its Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive – AILD), it raised a lot of hope among all those concerned about the potentially harmful consequences of the use of AI systems. These hopes were confirmed by the objective expressed in the explanatory memorandum of the Proposal, namely "ensuring victims of damage caused by AI obtain equivalent protection to victims of damage caused by products in general"[3].

More specifically, the Commission seemed determined to take into due consideration the imbalance between the providers and deployers on the one hand, and the affected persons on the other. Indeed, referring to Member States’ general fault-based liability rules, Recital 3 of the Proposal recognizes that ‘when AI is interposed between the act or omission of a person and the damage, the specific characteristics of certain AI systems, such as opacity, autonomous behaviour and complexity, may make it excessively difficult, if not impossible, for the injured person to meet this burden of proof’.

Alas, the rules proposed by the Commission did not meet the expectations raised by the announced objective (I). Even worse, the Commission seems to have definitively shelved its project (II), leaving the door open to what it itself had criticized: the co-existence within the EU of ‘27 different liability regimes, leading to different levels of protection and distorted competition among businesses from different Member States’[4].

I- A disappointing Proposal

The Proposal of the Commission did not challenge the choice of a fault-based regime, but instead mainly focused on two rules aimed at alleviating the burden of proof, which remains on the victim. What are these two rules?

– The disclosure of evidence:

According to Article 3(1) of the Proposal, a court may order the disclosure of relevant evidence about specific high-risk AI systems that are suspected of having caused damage. However, the requests should be supported by ‘facts and evidence sufficient to establish the plausibility of the contemplated claim for damages’ and the requested evidence should be at the addressees’ disposal. Article 3(3) provides that the preservation of such evidence may also be ordered by the court.

However, a court may order disclosure only of "that which is necessary and proportionate to support a potential claim or a claim for damages and the preservation to that which is necessary and proportionate to support such a claim for damages". Article 3(4) specifies that "the legitimate interests of all parties" must be considered by the court when determining whether an order for the disclosure or preservation of evidence is proportionate. Moreover, the person who has been ordered to disclose or to preserve the evidence must benefit from appropriate procedural remedies in response to such orders.

Article 3(5) introduces a presumption of non-compliance with a duty of care: when, in a claim for damages, the defendant fails to comply with an order by a national court to disclose or to preserve evidence at its disposal, the national court shall presume the defendant’s non-compliance with a relevant duty of care. That presumption remains rebuttable.

– The presumption of causal link in the case of fault:

Article 4 of the Proposal provides, under certain conditions, a presumption of a causal link between the fault of the defendant and the output produced by the AI system or the failure of the AI system to produce an output, that gave rise to the relevant damage.

However, the claimant has to prove the fault of the defendant, consisting in the non-compliance with a duty of care laid down in Union or national law directly intended to protect against the damage that occurred. He/she also has to prove that the AI system gave rise to the damage. There is another condition, related to the likelihood, based on the circumstances of the case, of the fault’s influence on the output produced by the AI system or the failure of the AI system to produce an output.

Moreover, the presumption shall not be applied if the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link. Finally, in the case of a claim for damages concerning an AI system that is not a high-risk AI system, the presumption shall only apply where the national court considers it excessively difficult for the claimant to prove the causal link. Here also, the presumption is rebuttable.

The limitations of the rules:

It follows from these provisions that the impact of the two rules laid down in the Proposal is limited by numerous conditions. In particular, the new mechanism of disclosure of evidence would be limited only to high-risk AI systems. Similarly, the presumption of causal link would mainly apply to high-risk AI systems, except where, according to the national judges, it would be excessively difficult for the claimant to prove the causal link.

In any case, the Proposal is based on a fault-based liability regime, which means that victims would still have to prove the fault or negligence of the AI system provider, or deployer. As noted by the EDPS in its own-initiative opinion[5] of 11 October 2023, ‘meeting such a requirement may be particularly difficult in the context of AI systems, where risks of manipulation, discrimination, and arbitrary decisions will be certainly occurring’, even when the providers and deployers have prima facie complied with their duty of care as defined by the AI Act. 

In order to overcome these proof-related difficulties, several solutions have been proposed. BEUC, the European Consumer Organisation, has recommended introducing a reversal of the burden of proof [6], so that consumers would only have to prove the damage they suffered and the involvement of an AI system. A more nuanced approach has been suggested by an expert, differentiating between AI systems, whether high-risk or not, and general-purpose AI systems: providers and deployers of high-risk AI systems would be subjected to "truly strict liability", while SMEs and non-high-risk AI systems should only be subjected to rebuttable presumptions of fault and causality [7]. In the same vein, the European Parliament considered in 2020 that it seemed "reasonable to set up a common strict liability regime for (…) high-risk autonomous AI-systems". As regards other AI systems, the European Parliament also considered that "affected persons should nevertheless benefit from a presumption of fault on the part of the operator who should be able to exculpate itself by proving it has abided by its duty of care"[8].

The Commission itself acknowledged, in its impact assessment report, that "the specific characteristics of the AI-system could make the victim's burden of proof prohibitively difficult or even impossible to meet", and mentioned different approaches, among which the reversal of the burden of proof. As a sign of its hesitation, the Commission introduced in the Proposal the possibility to review the directive five years after the end of the transposition period, in particular in order to "evaluate the appropriateness of no-fault liability rules for claims against the operators of certain AI systems, as long as not already covered by other Union liability rules, and the need for insurance coverage, while taking into account the effect and impact on the roll-out and uptake of AI systems, especially for SMEs"[9].

II- The withdrawal of the Proposal

On 11 February 2025, the Commission decided to withdraw the Proposal, on the grounds that there was ‘no foreseeable agreement’, and that the Commission would ‘assess whether another proposal should be tabled or another type of approach should be chosen’[10].

This decision caught the European Parliament's rapporteur on the Proposal, Axel Voss (EPP), by surprise; he stated that the scrapping of the rules would mean "legal uncertainty, corporate power imbalances, and a Wild West approach to AI liability that benefits only Big Tech"[11].

On the other hand, the decision of the Commission is reported to have satisfied both the Council and the private sector. In particular, France's Permanent Representation reportedly indicated that it saw no reason to impose additional liability requirements on AI providers [12].

How can such a situation be explained?

It is true that the AI liability initiative launched by the Commission on 28 September 2022 also comprised another Proposal, aimed at updating the Directive on liability for defective products (PLD). The new directive, which now includes software and digital manufacturing files within the definition of product, and expands the notion of compensable damage to include the destruction or corruption of data, came into force on 8 December 2024.

However, the scope of the revised PLD is limited: it only provides compensation for material losses resulting from death, personal injury, damage to property and loss or corruption of data (Article 6 PLD). In particular, damage stemming from a violation of a fundamental right without any material loss is not covered by this directive, but would have been covered by the AI liability directive. The draft AILD aimed at covering "national liability claims mainly based on the fault of any person with a view of compensating any type of damage and any type of victim"[13].

The loopholes of the PLD have also been underlined by the complementary impact assessment required by the JURI Committee, to which the file had been attributed in the European Parliament. The study[14], published on 19 September 2024, underlines: ‘However, the PLD presents notable gaps, especially in areas such as protection against discrimination, personality rights, and coverage for professionally used property. It also lacks measures for addressing pure economic loss and sustainability harms, as well as damage caused by consumers, which are contingent on Member State laws. These limitations underscore the necessity for adopting the AILD (…)’.

Thus, in the light of the complementary impact assessment, it appears that the recent adoption of the revised PLD cannot compensate for the withdrawal of the proposed AILD. Moreover, as stressed by the first International AI Safety Report, the specific characteristics of general-purpose AI systems make legal liability hard to determine:

"The fact that general-purpose AI systems can act in ways that were not explicitly programmed or intended by their developers or users raises questions about who should be held liable for resulting harm"[15].

Conclusion:

Against this background, European citizens are today left with a "fragmented patchwork of 27 different national legal systems"[16], most of them relying on a fault-based regime, which is not able to respond to all the challenges posed by AI systems, and in particular by general-purpose AI systems.

The withdrawal of the proposed AILD is only one element of the Commission's plan aimed at "simplifying rules and effective implementation"[17], which lists 37 withdrawn proposals in total.

The fact that the final 2025 work programme of the Commission (with the addition of the withdrawal of the AILD) was published just after the AI Action Summit, held in Paris on 10-11 February, may be a simple coincidence. However, it should be noted that the Statement [18] issued after the AI Summit refers neither to the issue of liability nor to the risks of AI systems, except in the context of information.

As observed by Anupriya Datta and Théophane Hartmann in Euractiv, ‘In this context, withdrawing the AI liability directive can be understood as a strategic manoeuvre by the EU to present an image of openness to capital and innovation, to show it prioritises competitiveness and show goodwill to the new US administration’[19].

The final word may not yet have been spoken. On 18 February, the Members of the European Parliament's Internal Market and Consumer Protection Committee (IMCO) voted to keep working on liability rules for artificial intelligence products, despite the European Commission's intention to withdraw the proposal [20].


[1] The opinions expressed in this article are the author’s own and do not necessarily represent the views of the EDPS

[2] International Scientific Report on the Safety of Advanced AI, January 2025.

[3] COM(2022) 496 final, page 2

[4] COM(2022) 496 final, page 6

[5] EDPS Opinion 42/2023 on the Proposals for two Directives on AI liability rules, 11 October 2023, par. 33

[6] Proposal for an AI liability Directive, BEUC position paper, page 12.

[7] The European AI liability directives – Critique of a half-hearted approach and lessons for the future, Philipp Hacker, page 49.

[8] European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), par. 14 and 20.

[9] Article 5 of the Proposal.

[10] Annexes to the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – Commission work programme 2025, page 26.

[11] Euractiv ‘Commission plans to withdraw AI Liability Directive draw mixed reactions’, 12 February 2025.

[12] Ibidem.

[13] COM(2022) 496 final, page 3.

[14] Proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence – Complementary impact assessment.

[15] International Scientific Report on the Safety of Advanced AI, page 179.

[16] Euractiv, ‘Commission plans to withdraw AI Liability Directive draw mixed reactions’, Anupriya Datta, 12 February 2025

[17] Commission work programme 2025, page 11

[18] Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet: ‘We will keep addressing the risks of AI to information integrity and continue the work on AI transparency.’

[19] Euractiv ‘Commission withdraws AI liability directive after Vance attack on regulation’, 11 February 2025

[20] Euronews ‘Lawmakers reject Commission decision to scrap planned AI liability rules’, Cynthia Kroet, 18/02/2025

The Commission proposal to withdraw the draft AI Liability Directive: a "strategic error" from both an institutional and a content perspective? (1)

by Emilio DE CAPITANI

Maybe it is pure coincidence, but the Commission proposal to withdraw the draft AI Liability Directive seems to be the immediate answer to US Vice-President J.D. Vance's request at the Artificial Intelligence Action Summit in Paris that the EU (and its Member States) should avoid any regulation deemed (by the US) too "aggressive" against the American technology giants. Such a suspicion is confirmed by Commission Vice-President Šefčovič's justifications, according to which the Commission was withdrawing the text because of the "lack of progress" in the legislative process. This justification is simply unfounded, because both the Council and the Parliament are currently actively working on the issue: the Council is debating the reactions of the Member States to the Commission's proposal (see here and here) and the European Parliament held a hearing on this subject no more than two weeks ago, following which the EP Rapporteur has already announced a draft report in the coming months.

So, the Commission's justification for withdrawing the AI Liability Directive proposal because of lack of progress is factually unfounded and even legally questionable (see my other general post here). Suffice it to remember that, according to the CJEU, "where an amendment planned by the Parliament and the Council distorts the proposal for a legislative act in a manner which prevents achievement of the objectives pursued by the proposal and which, therefore, deprives it of its raison d'être, the Commission is entitled to withdraw it. It may however do so only after having had due regard to Parliament's and Council's concerns behind their wish to amend the proposal" (C‑409/13, para. 83). Again, this seems a common-sense interpretation. Until the Treaty recognises a full right of legislative initiative for the EP and the Council, these institutions may only ask the Commission to submit a legislative proposal (art. 225 TFEU for the EP).

However, until now no formal amendment depriving the AI Liability proposal of its raison d'être has been tabled by the EP or by the Council, so the co-legislators may well continue to work on the current legislative proposal. Needless to say, in case of a formal withdrawal of the text by the Commission, the co-legislators, or even just one of them, may well bring that institution before the Court for infringement of the principle of conferral and of institutional balance.

(continue)

The COE Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Is the Council of Europe losing its compass?

by Emilio DE CAPITANI

When the Committee of Ministers of the Council of Europe decided at the end of 2021 to establish the Committee on Artificial Intelligence (CAI), with the mandate to elaborate a legally binding instrument of a transversal character in the field of artificial intelligence (AI), this initiative created a lot of hopes and expectations. For the first time, an international convention "based on the Council of Europe's standards on human rights, democracy and the rule of law and other relevant international standards" would regulate activities developed in the area of AI.

The mandate of the CAI was supposed to further build upon the work of the Ad Hoc Committee on Artificial Intelligence (CAHAI), which adopted its last report in December 2021, presenting  ‘possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law’. In this document, the CAHAI underlined the need for the future instrument to ‘focus on preventing and/or mitigating risks emanating from applications of AI systems with the potential to interfere with the enjoyment of human rights, the functioning of democracy and the observance of the rule of law, all the while promoting socially beneficial AI applications’. In particular, the CAHAI considered that the instrument should be applicable to the development, design and application of artificial intelligence (AI) systems, ‘irrespective of whether these activities are undertaken by public or private actors’, and that it should be underpinned by a risk-based approach. The risk classification should include ‘a number of categories (e.g., “low risk”, “high risk”, “unacceptable risk”), based on a risk assessment in relation to the enjoyment of human rights, the functioning of democracy and the observance of the rule of law’. According to the CAHAI, the instrument should also include ‘a provision aimed at ensuring the necessary level of human oversight over AI systems and their effects, throughout their lifecycles’.

So, a lot of hopes and expectations: some experts expressed the wish to see this new instrument as a way to complement, at least in the European Union, the future AI Act, seen as a regulation for the digital single market that sets aside the rights of the persons affected by the use of AI systems [1]. In its Opinion 20/2022 on the Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for this Council of Europe convention, the EDPS considered that it represented "an important opportunity to complement the proposed AI Act by strengthening the protection of fundamental rights of all persons affected by AI systems". The EDPS advocated that the convention should provide "clear and strong safeguards for the persons affected by the use of AI systems".

Alas, those hopes and expectations were quickly dampened by the way the negotiations were organised, and, above all, by the content of the future instrument itself.

1- The organisation of the negotiations: the non-member States leading, civil society out

The objective of opening the future instrument to States which are not members of the Council of Europe was without doubt an excellent initiative, considering the borderless character of AI and the need to regulate this technology worldwide. Indeed, as noted by the CAHAI in its above-mentioned report, "The various legal issues raised by the application of AI systems are not specific to the member States of the Council of Europe, but are, due to the many global actors involved and the global effects they engender, transnational in nature". The CAHAI therefore recommended that the instrument, "though obviously based on Council of Europe standards, be drafted in such a way that it facilitates accession by States outside of the region that share the aforementioned standards". So, yes to a global reach, but provided that the standards of the Council of Europe are fully respected.

However, the conditions under which those non-member States have participated in the negotiations deserve a closer look: not only have they been part of the drafting group sessions, unlike the representatives of civil society, but it seems that from the start they have played a decisive role in the conduct of the negotiations. According to a report published in Euractiv in January 2023 [2], the US delegation opposed the publication of the first draft of the Convention (the "zero draft"), refusing to disclose its negotiating positions publicly to non-country representatives.

At the same time, the organisation of the negotiations has set aside the civil society groups, who were only allowed to intervene in the plenary sessions of the meetings, while the text was discussed and modified in the drafting sessions. The next, and in principle last, plenary meeting, from 11 to 14 March, should start with a drafting session and end with the plenary session, which implies that civil society representatives will have less than 24 hours to look at the revised version of the convention (if they receive it on time) and make their last comments, assuming that their voices were really heard during the negotiations.

Yet, representatives of civil society and human rights institutions have done their utmost to play an active part in the negotiations. In an email to the participating States, they recalled that the decision to exclude them from the drafting group went "against the examples of good practice from the Council of Europe, the prior practice of the drafting of Convention 108+, and the CoE's own standards on civil participation in political decision-making"[3]. During the 3rd Plenary meeting of 11-13 January 2023, they insisted on being part of the drafting sessions, but the Chair refused, as indicated in the list of decisions:

‘(…) –Take note of and consider the concerns raised by some Observers regarding the decision taken by the Committee at the occasion of its 2nd Plenary meeting to establish a Drafting Group to prepare the draft [Framework] Convention, composed of potential Parties to the [Framework] Convention and reporting to the Plenary.

– Not to revise the aforesaid decision, while underlining the need to ensure an inclusive and transparent negotiation process involving all Members, Participants and Observers and endorsing the Chair’s proposal for working methods in this regard’.[4]

Despite this commitment, the need for an "inclusive and transparent negotiation process" has not been met, in the light of the civil society statement of the 4th of July 2023, where again the authors "deeply regret(ted) that the negotiating States have chosen to exclude both civil society observers and Council of Europe member participants from the formal and informal meetings of the drafting group of the Convention. This undermines the transparency and accountability of the Council of Europe and is contrary to the established Council of Europe practice and the Committee on AI (CAI) own Terms of Reference which instructs the CAI to 'contribute[…] to strengthening the role and meaningful participation of civil society in its work'."[5]

The influence of non-member States has not been limited to the organisation of meetings. As detailed below, the American and Canadian delegations, among others, threw their full weight behind the choice of systematically watering down the substance of the Convention.

2- A convention with no specific rights and very limited obligations

How should the mandate of the CAI be understood? According to the terms of reference, the Committee is instructed to ‘establish an international negotiation process and conduct work to finalise an appropriate legal framework on the development, design, use and decommissioning of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law and other relevant international standards, and conducive to innovation, which can be composed of a binding legal instrument of a transversal character, including notably general common principles (…)[6].

The objective of including "general common principles" in the convention has been interpreted literally by the Chair, who considered that "the AI Convention will offer an underlying baseline of principles in how to handle the technology, on top of which individual governments can then build their own legislation to meet their own specific needs"[7]. Indeed, the last publicly available version of the draft Convention (dated 18 December 2023) only refers to "principles" and not to specific rights [8], even those already existing in the framework of the Council of Europe and beyond. In the context of AI, though, one could have hoped for the recognition of certain rights, such as the right to human oversight and the right to an explanation for AI-based decisions.

Such a choice has been criticised by civil society representatives. In a public statement of the 4th of July 2023, they recalled that "while including general common principles for AI regulation as indicated in the CAI Terms of Reference, the Convention should respect the rights established by other Conventions and not reformulate them as mere principles"[9].

Unfortunately, the Convention, at least in the version of the 18th of December 2023, does not even expressly include the right to privacy and the right to the protection of personal data. Yet, if data are, as the Chair himself put it, "the oil of the XXIst century"[10], the need to protect our rights in this area is critical.

If one compares the successive publicly accessible versions of the Convention, from the zero draft [11] to the version of the 18th of December, one can only deplore the constant watering down of its content. What about the "prohibited artificial intelligence practices" referred to in Article 14 of the zero draft? What about the definitions, which in the zero draft included the notion of "artificial intelligence subject", defined as "any natural or legal person whose human rights and fundamental freedoms, legal rights or interests are impacted by decisions made or substantially informed by the use of an artificial intelligence system"? What about a clear presentation of the risk-based approach, with a differentiation of measures to be applied in respect of artificial intelligence systems posing significant and unacceptable levels of risk (see articles 12 and 13 of the zero draft)?

Moreover, in the version of the 18th of December 2023, a number of obligations in principle imposed on Parties might become simple obligations of means, since the possible (or already accepted) wording would be that each party should "seek to ensure" that adequate measures are in place. This is in particular the case in the article dedicated to the "integrity of democratic processes and respect for rule of law", as well as in the article on "accountability and responsibility" and even in the article on procedural safeguards, when persons are interacting with an artificial intelligence system without knowing it.

According to an article published in Euractiv on 31 January 2024 and updated on 15 February 2024, even the version of the 18th of December 2023 seems to have been watered down further: ‘Entire provisions, such as protecting health and the environment, measures promoting trust in AI systems, and the requirement to provide human oversight for AI-driven decisions affecting people’s human rights, have been scrapped’[12].

3- The worst to come?

One crucial element of the Convention still needs to be discussed: its scope. Since the beginning of the negotiations, the USA and Canada, but also Japan and Israel, none of them members of the Council of Europe, have clearly indicated their wish to limit the scope of the instrument to activities within the lifecycle of artificial intelligence systems undertaken by public authorities only[13]. Moreover, in their view, national security and defence should also be excluded from the scope of the Convention. The version of the 18th of December includes several alternative wordings on this point, reflecting different levels of national security exemption.

The issue of the scope has led civil society representatives to draft an open letter[14], signed by an impressive number of organisations, calling on the EU and the State Parties negotiating the text of the Convention to cover the public and private sectors equally and to unequivocally reject blanket exemptions for national security and defence.

Today no one knows what the result of the last round of negotiations will be: it seems that the EU is determined to maintain its position in favour of including the private sector in the scope of the Convention, while the Americans and Canadians might use their signature of the Convention as leverage to ensure the exclusion of the private sector.

4- Who gains?

From the perspective of the Council of Europe, an organisation founded on the values of human rights, democracy and the rule of law, the first question that comes to mind is what the expected results of the ongoing negotiations are. Can the obsession with seeing the Americans sign the Convention justify such a weakened text, even with the private sector in its scope? What would the Council of Europe and its member States gain by accepting a Convention which looks like a simple Declaration, not very far, in fact, from the Organisation for Economic Co-operation and Development’s Principles on AI[15]?

At this stage, it seems that neither the Americans nor the Canadians are ready to sign the Convention if it includes the private sector, even if an opt-out clause were inserted in the text. The gamble of the Chair and the Secretariat to keep these two observer States on board at the price of excessive compromises might be lost at the end of the day. One should not forget that these States do not have voting rights in the Committee of Ministers.

The second question that comes to mind is why the Chair and the Secretariat of the CAI and, above them, those who lead the Council of Europe have made such a choice. Does it have a link with internal decisions to be taken in the near future, as regards the post of Secretary General of the organisation, as well as the post of Director General of Human Rights and Rule of Law? Does the nationality of the Chair have a role to play in this game? In any case, the future Convention might look like an empty shell, which might have more adverse effects than appears prima facie, by legitimizing practices around the world which would be considered incompatible with European standards.

NOTES


[1] See in particular ‘The Council of Europe’s road towards an AI Convention: taking stock’ by Peggy Valcke and Victoria Hendrickx, 9 February 2023: ‘Whereas the AI Act focuses on the digital single market and does not create new rights for individuals, the Convention might fill these gaps by being the first legally binding treaty on AI that focuses on democracy, human rights and the rule of law’. https://www.law.kuleuven.be/citip/blog/the-council-of-europes-road-towards-an-ai-convention-taking-stock/

[2] https://www.euractiv.com/section/digital/news/us-obtains-exclusion-of-ngos-from-drafting-ai-treaty/

[3] same article

[4] https://rm.coe.int/cai-2023-03-list-of-decisions/1680a9cc4f

[5] https://ecnl.org/sites/default/files/2023-07/CSO-COE-Statement_07042023_Website.pdf

[6] https://rm.coe.int/terms-of-reference-of-the-committee-on-artificial-intelligence-cai-/1680ade00f

[7] https://www.politico.eu/newsletter/digital-bridge/one-treaty-to-rule-ai-global-politico-transatlantic-data-deal/

[8] with the exception of ‘rights of persons with disabilities and of children’ in Article 18

[9] https://ecnl.org/sites/default/files/2023-07/CSO-COE-Statement_07042023_Website.pdf

[10] https://www.linkedin.com/pulse/data-oil-21st-century-ai-systems-engines-digital-thomas-schneider/

[11] https://www.statewatch.org/news/2023/january/council-of-europe-convention-on-artificial-intelligence-zero-draft-and-member-state-submissions/

[12] https://www.euractiv.com/section/artificial-intelligence/news/tug-of-war-continues-on-international-ai-treaty-as-text-gets-softened-further/

[13] same article

[14] https://docs.google.com/document/d/19pwQg0r7g5Dm6_OlRvTAgBPGXaufZrNW/edit

[15] https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449

Artificial intelligence in the EU: promoting the economy at the expense of the rights of the individual?

by Emilio DE CAPITANI (*)[1]

“The advent of artificial intelligence (‘AI’) systems is a very important step in the evolution of technologies and in the way humans interact with them. AI is a set of key technologies that will profoundly alter our daily lives, be it on a societal or an economic standpoint. In the next few years, decisive decisions are expected for AI as it helps us overcome some of the biggest challenges we face in many areas today, ranging from health to mobility, or from public administration to education. However, these promised advances do not come without risks. Indeed, the risks are very relevant considering that the individual and societal effects of AI systems are, to a large extent, unexperienced…”[2]

Foreword

1. According to the European Commission, the recent proposal for a Regulation on Artificial Intelligence is consistent with the EU Charter of Fundamental Rights and with secondary EU legislation on data protection, consumer protection, non-discrimination and gender equality. Notably, it “complements” the General Data Protection Regulation (Regulation (EU) 2016/679) and the Law Enforcement Directive (Directive (EU) 2016/680) by setting “..harmonised rules applicable to the design, development and use of certain high-risk AI systems and restrictions on certain uses of remote biometric identification systems”.

Is this true, or is the text mainly economically oriented, failing to place the rights of those who will be subject to such AI systems at the heart of its reflection?

2. First of all, it is worth noting that, while some commentators may have considered this new proposal to be the equivalent of the General Data Protection Regulation for AI, its general scheme is much closer to that of Regulation (EU) 2019/1020 of 20 June 2019 on market surveillance and product compliance, the objective of which is to improve the internal market by strengthening market surveillance of products covered by Union legislation, rather than to protect or promote fundamental rights. The Commission’s proposal is essentially aimed at holding companies producing and marketing AI systems accountable, which is in itself a positive element in the context of the establishment of a European normative framework on artificial intelligence. According to the Commission’s proposal, AI systems must meet a number of criteria and undergo conformity assessment procedures, which are more or less stringent depending on the risks involved (see Articles 8 to 51 and Annexes IV to VIII)[3].

3. However, it is quite surprising that the proposal focuses only on a “product” (a “software” developed from techniques and approaches listed in an annex) and does not address the general notions of “algorithms” and “big data”, which are the main features of artificial intelligence (AI) applications: such applications need huge amounts of data to be trained and, in return, make it possible to process those same data. By not referring directly to the nature of algorithms or to the notion of big data, the Commission avoids placing AI applications within the general framework of fundamental rights and data protection. Needless to say, a “rights-based” approach is the mirror image of a “duty” to protect those rights incumbent on other individuals or on the public administration. Take the case of Regulation 2016/679 or of Directive 2016/680, where the “rights of the data subject” are detailed in a specific chapter, whereas there is no similar provision in the AI proposal. Similarly, while the proposal defines AI system providers (“providers”), users (“users”), importers (“importers”) and distributors (“distributors”), it makes no reference at any time to the persons who are subject to such systems. Moreover, nothing is said about the possible avenues of recourse for individuals challenging the use of an AI system.

By choosing a market-centric approach, the Commission is undermining the aim of placing the individual at the core of EU policies, as declared in the preamble to the EU Charter.

I- Definitions and classifications

4. The proposal is built on a risk-based approach, but the classification of systems as posing an unacceptable, high or low risk is not clear:

– Article 6 on the classification of high-risk systems is a simple description of the systems falling within this category, without any justification of the reasons for the choices made,

– Article 7, on “amendments to Annex III” (the annex listing the systems considered to be high-risk), does, however, contain a number of criteria which the Commission will have to take into account in order to add other systems in the future, if necessary. However, the terms chosen lack precision: the systems referred to are those likely to harm health or safety, or to have a negative effect on fundamental rights (“risk of adverse impact on fundamental rights”). But how should the concept of a negative effect be understood in this context?

5. The dividing line between the systems to be prohibited and those considered high-risk is not further explained: why, for example, prohibit real-time remote facial recognition in public places for law-enforcement purposes, but authorize, as merely high-risk, systems which, in the context of criminal prosecution or the management of migration, asylum and border control, aim to detect the emotional state of a person? Similarly, what about systems that generate or manipulate audio or video content or images which then appear falsely authentic, and which can be used in criminal proceedings without informing the persons concerned (Article 52)?

6. Above all, this approach suggests that respect for fundamental rights may be a matter of variable geometry, even though fundamental rights are not negotiable and must be guaranteed regardless of the level of risk presented by the AI system in question.[4]

II- Articulation with data protection 

7. In this proposal, the Commission’s position on the European data protection framework is characterized by its ambiguity:

– Article 16 TFEU is one of the two legal bases of the proposal, alongside Article 114 TFEU. However, in its explanatory memorandum, the Commission is careful to point out that Article 16 serves as the basis only for those provisions restricting the use of AI systems for remote biometric identification in places accessible to the public for the purposes of criminal proceedings (point 2.1; see also recital 2 of the proposal). However, the protection of individuals with regard to the processing of their personal data cannot be limited to this single hypothesis, given the way AI systems operate: as indicated above, they are based on massive data collections, not all of which are non-personal or anonymized. In addition, anonymized data may in some cases be re-identified, and an interlaced set of non-personal data may identify individuals. Moreover, anonymized data can be used to build profiles, with a direct impact on the privacy of individuals and a potential for discrimination.

– Recital 41 states that the new Regulation should not be understood as constituting a legal basis for the processing of personal data, including special categories of data. Nevertheless, under the same recital, the classification of an AI system as high-risk does not imply that its use is necessarily lawful under other European legislation, in particular that relating to the protection of personal data or to the use of polygraphs and similar tools or other systems to detect the emotional state of individuals. The recital specifies to that end that such use should continue to occur only in accordance with the applicable requirements resulting from the Charter and Union law. It therefore seems to follow that certain provisions of this proposal may prove to be incompatible with other provisions of European law: far from “complementing” the legislative framework on data protection, the future Regulation may, on the contrary, open the way to conflicts of laws.

– On the other hand, recital 72 states that the Regulation should provide the legal basis for the use of personal data collected for other purposes in order to develop certain AI systems in the public interest within AI regulatory “sandboxes”. Yet, as recalled above, the Commission also states in its explanatory memorandum that the proposal is without prejudice to, and complements, the General Data Protection Regulation 2016/679 and the Law Enforcement Directive 2016/680 (point 1.2).

8. Furthermore, if certain AI systems authorized by this proposal cannot in fact be approved because they would infringe the provisions of the Charter and European data protection law, this raises the question of the relevance of the proposed classification, since it legitimizes systems contrary to fundamental rights in general, and to data protection in particular. But who will decide, at EU and national level, which rule should prevail between the Data Protection and AI Regulations? The establishment of a new body, the European Artificial Intelligence Board, and the creation of national authorities responsible for ensuring the application of the proposal (Articles 56 to 59) risk producing a structure in conflict with the parallel decentralized structure for data protection, built around the European Data Protection Board and the EDPS[5].

III- Prohibitions and their limits

9. In a very symbolic way, the proposal opens, after a first title on general provisions, with a title entitled “prohibited artificial intelligence practices”, which in reality contains only a single article, while the following title on high-risk systems comprises 46 articles.

Four types of systems are considered unacceptable:

–  systems deploying subliminal techniques to distort a person’s behavior in a manner that causes or is likely to cause physical or psychological harm to the person or to another person;

– systems exploiting the vulnerabilities of a specific group of people due to their age or physical or mental disability, in order to distort the behavior of a person belonging to that group in a manner that causes or is likely to cause physical or psychological harm;

– systems used by public authorities for the evaluation or classification of the trustworthiness of individuals over a period of time based on their social behaviour or known or predicted personal or personality characteristics, with the establishment of a social score leading to one or both of the following: detrimental or unfavourable treatment of persons in social contexts unrelated to the contexts in which the data were originally generated or collected; and/or detrimental or unfavourable treatment of persons that is unjustified or disproportionate to their social behaviour or its gravity;

– ‘real-time’ remote biometric identification systems used in public spaces in a criminal context, unless and to the extent that such use is strictly necessary for one of the following purposes: the targeted search for potential victims of an offence; the prevention of a specific, serious and imminent threat to the life or safety of persons, or of a terrorist attack; or the detection, localisation, identification or prosecution of a perpetrator or suspect of an offence punishable by a maximum penalty of at least three years.

10. It follows from this list that the prohibitions mentioned are subject to several limitations and conditions:

– in the case of the first two prohibitions, both presuppose at least the possibility of physical or psychological harm; yet, with regard to vulnerable persons, demonstrating the existence of a possibility of harm may prove difficult,

– with regard to the prohibition of social scoring, it applies only to the extent that the score is established by public authorities (and not by private entities) and leads to unfavourable treatment in a context unrelated to that in which the data were collected, or to treatment which appears disproportionate. A reading of these conditions reveals that in reality social scoring is not prohibited as such. This analysis is confirmed by a review of Annex III, which lists several high-risk AI systems, among them systems used to evaluate the creditworthiness of individuals or establish their credit score in the context of access to and use of essential public and private services,

– finally, remote biometric identification systems are prohibited only where they aim at “real-time” identification, in public spaces and in criminal proceedings.

11. These limitations leave the field open to a posteriori identification, whether by private entities or by public authorities not acting in a law-enforcement context. It should also be noted that, despite its regulatory form, the proposal leaves Member States considerable room for manoeuvre to decide whether or not to use real-time remote biometric identification systems.

IV- Uses in criminal matters

12. In addition to the exceptions to the aforementioned prohibition on real-time remote biometric identification, the proposal allows the use of AI systems in criminal matters[6].

Annex III, which lists the high-risk systems referred to in Article 6(2), includes the following systems:

– systems intended to be used to assess the risk of a person committing or re-committing an offence, or to assess risks for potential victims of offences,

– systems intended to be used as polygraphs and similar tools or to detect the emotional state of a natural person,

– systems intended to be used to detect ‘deep fakes’ as referred to in Article 52(3),

– systems intended for use in assessing the reliability of evidence during an investigation or criminal prosecution,

– systems intended to be used to predict the occurrence or reoccurrence of an actual or potential criminal offence, on the basis of the profiling of natural persons referred to in Article 3(4) of Directive 2016/680 or the assessment of personality traits and characteristics or past criminal behaviour of persons or groups,

– systems intended to be used for the profiling of natural persons referred to in Article 3(4) of Directive 2016/680 in the course of the detection, investigation or prosecution of criminal offences,

– AI systems for use in the analysis of crime involving natural persons, enabling law enforcement authorities to search large datasets available in different data sources or data formats, in order to identify unknown patterns or discover hidden relationships in the data.

13. Furthermore, a certain number of guarantees are limited or even excluded in the context of the use of AI systems in criminal matters:

– prior authorisation by a judicial or independent administrative authority for the use of real-time remote biometric identification may be postponed in urgent cases,

– Article 52, which seeks to impose an obligation to inform persons subject to certain systems, whether they are high-risk or not, excludes this obligation in criminal matters. This applies in particular to systems of emotional recognition or biometric categorisation, as well as those generating or manipulating audio, video or image content, which then appear to be falsely authentic,

– finally, Article 43, on conformity assessment of systems, provides for an assessment limited to internal control for all systems considered to be high-risk, with the exception of those relating to biometric identification and the categorization of persons.

14. The framework proposed by the Commission paves the way for highly controversial practices, particularly in predictive policing. Legal scholarship is deeply divided on the added value of AI systems in assessing the future behaviour of offenders, highlighting the risks of discrimination inherent in the functioning of algorithms[7].

It is worth recalling that this practice has unfortunately already been authorized by the EU in its anti-money laundering legislation[8] and notably in the infamous EU Directive on the use of Passenger Name Record data[9]. On the latter practice the CJEU has already adopted a very interesting Opinion (1/15)[10] dealing with a draft EU-Canada PNR Agreement, and is now again seized of this subject because of several preliminary ruling requests challenging the Directive’s compliance with Articles 7 and 8 of the EU Charter as well as with the principles of necessity and proportionality[11].

15. The possible use of lie detectors (“polygraphs”) also generates debate, and there is no consensus on their use in criminal matters. It should also be pointed out that the Commission allows the use of polygraphs in the field of migration, asylum and the management of external borders, thus building on the experiments currently carried out under the “iBorderCtrl” project.

Similarly, the possible uses of a posteriori biometric recognition systems are also the subject of criticism among legal scholars and civil society. Thus, on May 27, 2021, the NGO Privacy International announced the filing of several complaints in Europe against the American company Clearview AI[12], which specializes in facial recognition and in the sale of the data it collects to law enforcement authorities.

Conclusion

The European Commission may have missed the opportunity here to ensure full respect for European values in the context of the ‘collective digital transformation dimension of our society’. Beyond the question of whether the AI proposal is fully compatible with European data protection legislation and the requirements of the EU Charter, it is clear that when decisions are taken on the basis of AI applications, individuals should have the right to specific explanations, and collective rights should also be strengthened, as is already the case in other domains of wide societal impact (as with the Aarhus framework in environmental legislation).

Negotiations on the European Commission proposal are currently underway in the European Parliament[13] and in the Council of the EU[14]. Once their respective positions are established, the interinstitutional dialogue will start. In the meantime, it is worth noting that in October the EP already adopted a non-legislative resolution calling for curbs on the use of AI techniques for such activities as facial surveillance and predictive policing[15].

It remains to be seen whether this “non-legislative” resolution will be mirrored in the coming months in the legislative trilogue between the EP, the Commission and the Council, where the pressure of interior ministers in favour of surveillance measures is likely to remain rather strong.

NOTES


[1] I hereby thank Ms Michelle DUBROCARD of the European Data Protection Supervisor’s Office for her invaluable contribution and comments in the drafting of the present article.

[2] EDPS and EDPB Joint Opinion 5/2021, recalling also that “…in line with the jurisprudence of the Court of Justice of the EU (CJEU), Article 16 TFEU provides an appropriate legal basis in cases where the protection of personal data is one of the essential aims or components of the rules adopted by the EU legislature. The application of Article 16 TFEU also entails the need to ensure independent oversight for compliance with the requirements regarding the processing of personal data, as is also required by Article 8 of the Charter of Fundamental Rights of the EU.”

[3] It is also likely that all these new obligations, which will have to be placed on the shoulders of companies, will not fail to revive the debate on the cumbersome nature of European legislation.

[4] Consistently with this approach the EDPS and the EDPB in their Joint Opinion 5/2021 “…call for a general ban on any use of AI for an automated recognition of human features in publicly accessible spaces – such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals – in any context. A ban is equally recommended on AI systems categorizing individuals from biometrics into clusters according to ethnicity, gender, as well as political or sexual orientation, or other grounds for discrimination under Article 21 of the Charter. Furthermore, the EDPB and the EDPS consider that the use of AI to infer emotions of a natural person is highly undesirable and should be prohibited.”

[5] To avoid these risks, the future AI Regulation should clearly establish the independence of the supervisory authorities in the performance of their supervision and enforcement tasks. According to the EDPB/EDPS Joint Opinion cited above, “..The designation of data protection authorities (DPAs) as the national supervisory authorities would ensure a more harmonized regulatory approach, and contribute to the consistent interpretation of data processing provisions and avoid contradictions in its enforcement among Member States.”

[6] Furthermore, according to the EDPS/EDPB Joint Opinion 5/2021, “..the exclusion of international law enforcement cooperation from the scope of the Proposal raises serious concerns for the EDPB and EDPS, as such exclusion creates a significant risk of circumvention (e.g., third countries or international organisations operating high-risk applications relied on by public authorities in the EU)”.

[7] Literature on the risks of “predictive criminal policy” is growing day by day. As rightly stated by A. Rolland in “Ethics, Artificial Intelligence and Predictive Policing”: “First, the data can be subject to error: law enforcers may incorrectly enter it into the system or overlook it, especially as criminal data is known to be partial and unreliable by nature, distorting the analysis. The data may be incomplete and biased, with certain areas and criminal populations being over-represented. It may also come from periods when the police engaged in discriminatory practices against certain communities, thereby unnecessarily or incorrectly classifying certain areas as ‘high risk’. These implicit biases in historical data sets have enormous consequences for targeted communities today. As a result, the use of AI in predictive policing can exacerbate biased analyses and has been associated with racial profiling”.

[8] The fight against money laundering and terrorist financing (AML/CFT) at EU level is governed by a number of instruments which provide for rules affecting both public authorities and the private actors who constitute the obliged entities: supervision, exchange of information and intelligence, investigation and cross-border cooperation on the one side, and obligations such as reporting or customer due diligence on the other. For this reason, the relevant instruments are based on a number of different legal bases, spanning from economic policy and the internal market to police and judicial cooperation. On 20 July 2021, the Commission proposed a legislative package that should enhance many of the above rules. The package consists of: 1) a Regulation establishing a new EU AML/CFT Authority; 2) a Regulation on AML/CFT, containing directly applicable rules; 3) a sixth Directive on AML/CFT (“AMLD6”), replacing the existing Directive 2015/849/EU (the fourth AML Directive as amended by the fifth AML Directive); 4) a revision of the 2015 Regulation on Transfers of Funds to trace transfers of crypto-assets (Regulation 2015/847/EU); 5) a revision of the Directive on the use of financial information (2019/1153/EU), which is not presented as part of the package but is closely related to it.

[9] Directive (EU) 2016/681 of the European Parliament and of the Council of 27 April 2016 on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime.

[10] Opinion 1/15 pursuant to Article 218(11) TFEU — Draft agreement between Canada and the European Union — Transfer of Passenger Name Record data from the European Union to Canada.

[11] The leading case, C-817/19, was referred by the Belgian Constitutional Court and will give the CJEU the opportunity to decide whether the indiscriminate collection of passenger data and their scoring for security purposes through secret algorithms (as currently done also in some third countries) is compatible with the EU Charter and with the ECHR, and whether it amounts to a kind of general surveillance incompatible with a democratic society.

[12] In June 2020, the European Data Protection Board expressed its doubts about the existence of a European legal basis for the use of a service such as that proposed by Clearview AI.

[13] See the current state of legislative preparatory works here : https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-regulation-on-artificial-intelligence

[14] See the State of the play diffused by the Council Presidency here: https://data.consilium.europa.eu/doc/document/ST-9674-2021-INIT/en/pdf

[15] See the report Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters.