The Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law: perhaps a global reach, but an absence of harmonisation for sure

by Michèle DUBROCARD (*)

On 15 March 2024, Ms Marija Pejčinović Burić, the Secretary General of the Council of Europe, made a statement on the occasion of the finalisation of the Convention on Artificial Intelligence (AI), Human Rights, Democracy and the Rule of Law. She welcomed what she described as an ‘extraordinary achievement’, namely the setting out of a legal framework covering AI systems throughout their entire lifecycle. She also stressed the global nature of the instrument, ‘open to the world’.

Is it really so? The analysis of the scope, as well as of the obligations set forth in the Convention, raises doubts about the connection between the stated intent and the finalised text. However, this text still needs to be formally adopted by the Ministers of Foreign Affairs of the Council of Europe Member States on the occasion of the 133rd Ministerial Session of the Committee of Ministers on 17 May 2024, after the issuing of the opinion of the Parliamentary Assembly of the Council of Europe (PACE)[1].

I- The scope of the Convention

It is no secret that the definition of the scope of the Convention created a lot of controversy among the negotiators[2]. In brief, a number of States, a majority of which are not members of the Council of Europe[3] but participated in the discussions as observers, essentially opposed the European Union in seeking to limit the scope of the Convention to activities related to AI systems undertaken by public authorities only, and to exclude the private sector.

Those observer States achieved their goal, presumably with the help of the Chair[4] and the Secretariat of the Committee on Artificial Intelligence (CAI), but they did it in a roundabout way, with ambiguous wording. Indeed, the reading of both Article 1.1 and Article 3.1(a) of the Convention may lead one to think prima facie that the scope of the Convention is really ‘transversal’[5], irrespective of whether activities linked to AI systems are undertaken by private or public actors:

– according to Article 1.1, ‘the provisions of this Convention aim to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law’.

– according to Article 3.1(a), ‘the scope of this Convention covers the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and rule of law as follows’.

This impression is confirmed by the explanatory report, which states in par. 15 that ‘the Drafters aim to cover any and all activities from the design of an artificial intelligence system to its retirement, no matter which actor is involved in them’.

However, the rest of Article 3 annihilates such wishful thinking: as regards activities undertaken by private actors, the application of the Convention will depend on the goodwill of States. Worse still, a Party may choose not to apply the principles and obligations set forth in the Convention to the activities of private actors and nevertheless be regarded as compliant with the Convention, as long as it takes ‘appropriate measures’ to fulfil the obligation of addressing risks and impacts arising from those activities:

‘Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors to the extent not covered in subparagraph (a) in a manner conforming with the object and purpose of the Convention.

Each Party shall specify in a declaration submitted to the Secretary General of the Council of Europe at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession how it intends to implement this obligation, either by applying the principles and obligations set forth in Chapters II to VI of the Framework Convention to activities of private actors or by taking other appropriate measures to fulfil the obligation set out in this paragraph. Parties may, at any time and in the same manner, amend their declarations’.

How should one interpret such a provision? It seems to allow Parties to submit a reservation on the private sector but, at the same time, it is not worded as a reservation per se. On the contrary, it establishes a sort of equivalence between the principles and obligations laid down in the Convention and ‘other appropriate measures’ to be taken by the Parties when addressing risks and impacts arising from activities related to AI systems undertaken by private actors. In other words, the Convention organizes the modalities of circumvention of the very principles and obligations that constitute its core object.

The result of such a provision is not only a depreciation of the principles and obligations set forth in the Convention, since it is possible to derogate from them for the activities of private actors without derogating from the Convention itself; it also creates fragmentation in the implementation of the instrument. The uncertainty stemming from these declarations is aggravated by the possibility for each Party to amend its declaration at any time. Since there is no other specification, one could even imagine a situation where a Party could, in the first instance, agree to apply the principles and obligations set forth in the Convention to the private sector, but then, at a later stage, reconsider its initial decision and limit such application to the public sector only.

Instead of establishing a level playing field among the Parties, the Convention legitimizes uncertainty as regards its implementation, in space and time.

On the other hand, Article 3.2 clearly authorizes an exemption, requested this time by the European Union[6], for activities within the lifecycle of AI systems related to the protection of the national security interests of Parties. However, according to the provision, such activities should be ‘conducted in a manner consistent with applicable international law, including international human rights law obligations, and with respect for its democratic institutions and processes’. In the framework of the Council of Europe, such an exemption is particularly surprising in the light of the case law of the European Court of Human Rights, which has clearly interpreted the concept of ‘national security’[7]. Exempting from the scope of the Convention activities of AI systems related to the protection of national security interests therefore seems at best useless, if not in conflict with the obligations stemming from the European Convention on Human Rights.

In addition to national security interests, Article 3 foresees two more exemptions, namely research and development activities and national defence. Concerning research and development activities regarding AI systems not yet made available for use, Article 3.3 also includes what seems to be a safeguard, since the Convention should nevertheless apply when ‘testing or similar activities are undertaken in such a way that they have the potential to interfere with human rights, democracy and the rule of law’. However, there is no indication of how and by whom this potential to interfere could be assessed. The explanatory report is of no help on this point, since it limits itself to paraphrasing the provision of the article[8].

As regards matters related to national defence, the explanatory report[9] refers to the Statute of the Council of Europe, which excludes them from the scope of the organisation. One can however wonder whether the rules of the Statute are sufficient to justify such a blanket exemption, especially in the light of the ‘global reach’ that the Convention is supposed to have[10]. Moreover, contrary to the explanations related to ‘national security interests’, the explanatory report does not mention activities regarding ‘dual use’ AI systems, which should fall within the scope of the Convention insofar as these activities are intended to be used for purposes not related to national defence.

II- Principles and obligations set forth in the Convention

According to the explanatory report, the Convention ‘creates various obligations in relation to the activities within the lifecycle of artificial intelligence systems’[11].

When reading Chapters II to VI of the Convention, one may seriously doubt whether the Convention really ‘creates’ obligations, or rather simply recalls principles and obligations already recognized by previous international instruments. Moreover, the binding character of such obligations seems quite questionable.

II-A Principles and obligations previously recognized

A number of principles and obligations enshrined in the Convention refer to human rights already protected as such by the European Convention on Human Rights, but also by other international human rights instruments. Apart from Article 4, which recalls the need to protect human rights in general, Article 5 is dedicated to the integrity of democratic processes and respect for the rule of law[12], Article 10 is about equality and non-discrimination[13], Article 11 refers to privacy and personal data protection[14], and Articles 14 and 15 recall the right to an effective remedy[15].

Other principles are more directly related to AI, such as individual autonomy in Article 7, transparency and oversight in Article 8, accountability and responsibility in Article 9, and reliability in Article 12, but once again these principles are not new. In particular, they were already identified in the Organisation for Economic Co-operation and Development (OECD) Recommendation on AI, adopted on 19 May 2019[16].

This feeling of déjà vu is reinforced by the wording of the Convention: in most articles, each Party shall ‘adopt or maintain measures’ to ensure the respect of those principles and obligations. As duly noted in the explanatory report, ‘in using “adopt or maintain”, the Drafters wished to provide flexibility for Parties to fulfil their obligations by adopting new measures or by applying existing measures such as legislation and mechanisms that existed prior to the entry into force of the Framework Convention’[17].

The question that inevitably comes to mind is what the added value of this new instrument can be, if it only recalls internationally recognized principles and obligations, some of them already constituting justiciable rights.

Indeed, the mere fact that this new instrument deals with the activities related to AI systems does not change the obligations imposed on States to protect human rights, as enshrined in applicable international law and domestic laws. The evolution of the case law of the European Court of Human Rights is very significant in this regard. As we know, the Court has considered, on many occasions, that the European Convention on Human Rights is to be seen as ‘a living instrument which must be interpreted in the light of present-day conditions’[18]. Without much risk one can predict that in the future the Court will have to deal with an increasing number of cases involving the use of AI systems[19].

II-B A declaratory approach

One could try to advocate for this new Convention by emphasizing the introduction of some principles and measures which have not yet been encapsulated in a binding instrument. Such is the case, for instance, of the concepts of transparency and oversight, to be linked to those of accountability, responsibility and reliability, and of the measures to be taken to assess and mitigate the risks and adverse impacts of AI systems.

However, the way these principles and measures have been defined and, above all, how their implementation is foreseen reveal a declaratory approach rather than the intention to establish a genuinely binding instrument, uniformly applicable to all.

Moreover, the successive versions of the Convention, from the zero draft to the last version of March 2024, reveal a constant watering down of its content: the provisions on the need to protect health and the environment have been moved to the Preamble, while those aiming at the protection of whistleblowers have been removed.

In the light of the EU Artificial Intelligence Act[20], the current situation is almost ironic, since the Convention does not create any new individual right, contrary to the EU regulation, which clearly recognizes, for instance, the right to human oversight as well as the right to an explanation of individual decision-making. And yet, the general scheme of the AI Act is based on market surveillance and product conformity considerations, while the Council of Europe Convention on AI is supposed to focus on human rights, democracy and the rule of law[21].

So, what is this Convention about? Essentially obligations of means and total flexibility as regards the means to fulfil them.

– obligations of means:

A number of obligations imposed in principle on Parties are in fact mere obligations of means, since each Party is only requested to ‘seek to ensure’ that adequate measures are in place. This is the case in Article 5, dedicated to the ‘integrity of democratic processes and respect for rule of law’. It is also the case in Article 15 on procedural safeguards, when persons interact with an artificial intelligence system without knowing it, in Article 16.3 in relation to the need to ensure that adverse impacts of AI systems are adequately addressed, and in Article 19 on public consultation.

In the same vein, other articles include formulations which leave States with considerable room for manoeuvre in applying the obligations: as regards reliability, each Party shall take measures ‘as appropriate’ to promote this principle[22]. As regards digital literacy and skills, each Party shall ‘encourage and promote’ them[23]. Similarly, Parties are ‘encouraged’ to strengthen cooperation to prevent and mitigate risks and adverse impacts in the context of AI systems[24].

More importantly, it will be up to the Parties to ‘assess the need for a moratorium or ban’ of AI systems posing unacceptable risks[25]. One can only deplore the removal of former Article 14 of the zero draft, which provided for a ban on the use by public authorities of AI systems using biometrics to identify or categorise individuals or to infer their emotions, as well as on the use of those systems for social scoring to determine access to essential services. Here again, the Convention falls below the standards defined by the AI Act[26].

– the choice of the measures to be adopted:

First, one should note that from the very first article of the Convention, flexibility is offered to the Parties as regards the nature of the measures to be adopted. Article 1.2 provides the possibility for each Party ‘to adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention’.

Consequently, Parties might consider that their domestic system is fully compliant with this Convention without any change in their regulations. They could even consider that simple recommendations to public or private actors might be sufficient to fulfil their obligations under the Convention.

The wide leeway given to the States also explains the constant reference to ‘domestic law’[27] or to the domestic legal system[28] throughout the Convention. In particular Article 6, which constitutes a chapeau for the whole of Chapter III, states that the principles included in this Chapter shall be implemented by Parties ‘in a manner appropriate to its domestic legal system and the other obligations of this Convention’. Such wording is not free from a certain ambiguity, since it might be interpreted as requiring, as part of their implementation, an adaptation of the principles set forth in the Convention to pre-existing domestic law, and not the opposite.

Here again, with this constant reference to domestic laws intrinsically linked to the ‘flexibility’ given to the Parties, one can only deplore the lack of harmonisation of the ‘measures’ which might be adopted in accordance with the Convention.

– the absence of an international oversight mechanism:

It is true that Article 26 of the Convention lays down the obligation for each Party to establish or designate one or more effective mechanisms to oversee compliance with the obligations of the Convention. However, once again, Parties are free to choose how they will implement such mechanisms, without any supervisory control at the international level. The Conference of the Parties, composed of representatives of the Parties and established by Article 23 of the Convention, will not have any monitoring powers. The only obligation foreseen, in Article 24, is a reporting obligation to the Conference of the Parties within the first two years after the State concerned has become a Party. But after this first report, there is no indication of the periodicity of the reporting obligation.

Conclusion

Despite the continuous pressure from civil society[29] and the interventions of the highest authorities in the field of human rights and data protection[30], the final outcome of the negotiations is a weak text, based on very general principles and obligations. Some of them even fall below the standards recognized in the framework of the Council of Europe, in the light of the European Convention on Human Rights and the case law of the European Court of Human Rights, as well as of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data. Moreover, their application will not be consistent among the Parties, due to a variable-geometry scope and the considerable margin of manoeuvre left to the Parties to implement the Convention.

Why so many concessions, in the context of negotiations held under the umbrella of the Council of Europe, which presents itself as the ‘continent’s leading human rights organisation’? The answer of the Council of Europe representatives is: ‘global reach’. But should the hope of seeing States which are not members of the Council of Europe ratify the Convention justify such a lack of ambition?

Yet it is not the first time that a binding international instrument negotiated in the framework of the Council of Europe allows for a fragmented application of its provisions: the Second Additional Protocol to the Convention on Cybercrime[31] already provided a sort of ‘pick and choose’ mechanism in several articles. However, what could be understood in the light of the fight against cybercrime is more difficult to accept in the framework of a Convention aiming at protecting human rights, democracy and the rule of law in the context of artificial intelligence systems.

It is possible that the negotiators could not achieve a better result, in view of the positions expressed in particular by the United States, Canada, Japan and Israel. In that case, the Council of Europe would have been better advised either to be less ambitious and drop the aim of a ‘global reach’, or to wait a few more years until minds have matured.

(*) EDPS official. This text is the sole responsibility of the author and does not represent the official position of the EDPS.

NOTES


[1] The Opinion adopted by the PACE on 18 April 2024 includes several proposals to improve the text. See https://pace.coe.int/en/files/33441/html

[2] See an article published in Euractiv on 31 January 2024 and updated on 15 February 2024: https://www.euractiv.com/section/artificial-intelligence/news/tug-of-war-continues-on-international-ai-treaty-as-text-gets-softened-further/

See also the open letter of the representatives of the civil society:

https://docs.google.com/document/d/19pwQg0r7g5Dm6_OlRvTAgBPGXaufZrNW/edit, and an article by Emilio de Capitani: The COE Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Is the Council of Europe losing its compass? https://free-group.eu/2024/03/04/the-coe-convention-on-artificial-intelligence-human-rights-democracy-and-the-rule-of-law-is-the-council-of-europe-losing-its-compass/

[3] USA, Canada, Japan, Israel.

[4] See an article issued in swissinfo.ch – https://www.swissinfo.ch/eng/foreign-affairs/ai-regulation-is-swiss-negotiator-a-us-stooge/73480128

[5] The terms of reference of the CAI explicitly refer to the establishment of a ‘binding legal instrument of a transversal character’.

[6] See, for instance, an article in Euractiv ‘EU prepares to push back on private sector carve-out from international AI treaty’https://www.euractiv.com/section/artificial-intelligence/news/eu-prepares-to-push-back-on-private-sector-carve-out-from-international-ai-treaty/

[7] National security and European case-law: Research Division of the European Court of Human Rights- https://rm.coe.int/168067d214

[8] Paragraph 33 of the explanatory report: ‘As regards paragraph 3, the wording reflects the intent of the Drafters to exempt research and development activities from the scope of the Framework Convention under certain conditions, namely that the artificial intelligence systems in question have not been made available for use, and that the testing and other similar activities do not pose a potential for interference with human rights, democracy and the rule of law. Such activities excluded from the scope of the Framework Convention should in any case be carried out in accordance with applicable human rights and domestic law as well as recognised ethical and professional standards for scientific research’.

[9] Paragraph 36 of the explanatory report.

[10] In its opinion of 18 April 2024 the PACE suggested to only envisage a restriction. See above note 1.

[11] Paragraph 14 of the explanatory report.

[12] These principles are closely linked to freedom of expression and the right to free elections: see in particular Article 10 of the European Convention on Human Rights and Article 3 of Protocol 1.

[13] See in particular Article 14 of the European Convention on Human Rights and Protocol 12.

[14] See in particular Article 8 of the European Convention on Human Rights and the case law of the European Court of Human Rights, as well as Article 1 of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.

[15] See in particular Article 13 of the European Convention on Human Rights.

[16] https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449#mainText

[17] Paragraph 17 of the explanatory report.

[18] See Tyrer v United Kingdom 2 EHRR 1, para. 31.

[19] On 4 July 2023, the Third Section of the European Court of Human Rights delivered the first judgment on the compatibility of facial recognition technology with human rights in Glukhin v. Russia:

https://hudoc.echr.coe.int/eng#%22display%22:%5B2%5D,%22itemid%22:%5B%22001-225655%22%5D

[20] See Articles 14 and 86 of the AI Act – https://artificialintelligenceact.eu/the-act/

[21] ‘The Council of Europe’s road towards an AI Convention: taking stock’ by Peggy Valcke and Victoria Hendrickx, 9 February 2023: ‘Whereas the AI Act focuses on the digital single market and does not create new rights for individuals, the Convention might fill these gaps by being the first legally binding treaty on AI that focuses on democracy, human rights and the rule of law’. https://www.law.kuleuven.be/citip/blog/the-council-of-europes-road-towards-an-ai-convention-taking-stock/

[22] Article 12 of the Convention.

[23] Article 20 of the Convention.

[24] Article 25 of the Convention.

[25] Article 16.4 of the Convention.

[26] See Chapter II of the AI Act – https://artificialintelligenceact.eu/the-act/

[27] See Articles 4, 10, 11 and 15.

[28] See Articles 6 and 14.

[29] See in particular the open letter of 5 March 2024:

https://docs.google.com/document/d/19pwQg0r7g5Dm6_OlRvTAgBPGXaufZrNW/edit

[30] See the statement of the Council of Europe Commissioner for Human Rights:

https://www.coe.int/en/web/commissioner/-/ai-instrument-of-the-council-of-europe-should-be-firmly-based-on-human-rights

See also the EDPS statement in view of the 10th and last Plenary Meeting of the Committee on Artificial Intelligence (CAI) of the Council of Europe drafting the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law: https://www.edps.europa.eu/press-publications/press-news/press-releases/2024/edps-statement-view-10th-and-last-plenary-meeting-committee-artificial-intelligence-cai-council-europe-drafting-framework-convention-artificial_en

[31] Second Additional Protocol to the Convention on Cybercrime on enhanced co-operation and disclosure of electronic evidence- https://rm.coe.int/1680a49dab

The COE Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Is the Council of Europe losing its compass?

by Emilio DE CAPITANI

When the Committee of Ministers of the Council of Europe decided at the end of 2021 to establish the Committee on Artificial Intelligence (CAI), with the mandate to elaborate a legally binding instrument of a transversal character in the field of artificial intelligence (AI), this initiative created a lot of hopes and expectations. For the first time, an international convention ‘based on the Council of Europe’s standards on human rights, democracy and the rule of law and other relevant international standards’ would regulate activities developed in the area of AI.

The mandate of the CAI was supposed to further build upon the work of the Ad Hoc Committee on Artificial Intelligence (CAHAI), which adopted its last report in December 2021, presenting  ‘possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law’. In this document, the CAHAI underlined the need for the future instrument to ‘focus on preventing and/or mitigating risks emanating from applications of AI systems with the potential to interfere with the enjoyment of human rights, the functioning of democracy and the observance of the rule of law, all the while promoting socially beneficial AI applications’. In particular, the CAHAI considered that the instrument should be applicable to the development, design and application of artificial intelligence (AI) systems, ‘irrespective of whether these activities are undertaken by public or private actors’, and that it should be underpinned by a risk-based approach. The risk classification should include ‘a number of categories (e.g., “low risk”, “high risk”, “unacceptable risk”), based on a risk assessment in relation to the enjoyment of human rights, the functioning of democracy and the observance of the rule of law’. According to the CAHAI, the instrument should also include ‘a provision aimed at ensuring the necessary level of human oversight over AI systems and their effects, throughout their lifecycles’.

So, a lot of hopes and expectations: some experts expressed the wish to see this new instrument as a way to complement, at least in the European Union, the future AI Act, seen as a regulation for the digital single market, setting aside the rights of the persons affected by the use of AI  systems[1]. In its opinion of 20/2022 on the Recommendation for a Council Decision authorising the opening of negotiations on behalf of the European Union for this Council of Europe convention, the EDPS considered that it represented ‘an important opportunity to complement the proposed AI Act by strengthening the protection of fundamental rights of all persons affected by AI systems’. The EDPS advocated that the convention should provide ‘clear and strong safeguards for the persons affected by the use of AI systems’.

Alas, those hopes and expectations were quickly dampened by the way the negotiations were organised, and, above all, by the content of the future instrument itself.

1- the organisation of the negotiations: the non-member States leading, the civil society out

The objective of opening the future instrument to States which are not members of the Council of Europe was without doubt an excellent initiative, considering the borderless character of AI and the need to regulate this technology worldwide. Indeed, as noted by the CAHAI in its above-mentioned report, ‘The various legal issues raised by the application of AI systems are not specific to the member States of the Council of Europe, but are, due to the many global actors involved and the global effects they engender, transnational in nature’. The CAHAI therefore recommended that the instrument, ‘though obviously based on Council of Europe standards, be drafted in such a way that it facilitates accession by States outside of the region that share the aforementioned standards’. So, yes to a global reach, but provided that the standards of the Council of Europe are fully respected.

However, the conditions under which those non-member States have participated in the negotiations need to be examined more closely: not only have they been part of the drafting group sessions, unlike the representatives of civil society, but it seems that from the start they have played a decisive role in the conduct of the negotiations. According to a report published in Euractiv in January 2023[2], the US delegation opposed the publication of the first draft of the Convention (the ‘zero draft’), refusing to disclose its negotiating positions publicly to non-country representatives.

At the same time, the organisation of the negotiations has set aside civil society groups, who were only allowed to intervene in the plenary sessions of the meetings, while the text was discussed and modified in the drafting sessions. The next and, in principle, last plenary meeting, from 11 to 14 March, should start with a drafting session and end with the plenary session, which implies that civil society representatives will have less than 24 hours to look at the revised version of the Convention – if they receive it on time – and make their last comments, assuming that their voices were really heard during the negotiations.

Yet, representatives of civil society and human rights institutions have done their utmost to play an active part in the negotiations. In an email to the participating States, they recalled that the decision to exclude them from the drafting group went ‘against the examples of good practice from the Council of Europe, the prior practice of the drafting of Convention 108+, and the CoE’s own standards on civil participation in political decision-making’[3]. During the 3rd Plenary meeting of 11-13 January 2023, they insisted on being part of the drafting sessions, but the Chair refused, as indicated in the list of decisions:

‘(…) –Take note of and consider the concerns raised by some Observers regarding the decision taken by the Committee at the occasion of its 2nd Plenary meeting to establish a Drafting Group to prepare the draft [Framework] Convention, composed of potential Parties to the [Framework] Convention and reporting to the Plenary.

– Not to revise the aforesaid decision, while underlining the need to ensure an inclusive and transparent negotiation process involving all Members, Participants and Observers and endorsing the Chair’s proposal for working methods in this regard’.[4]

Despite this commitment, the need for an ‘inclusive and transparent negotiation process’ has not been met, in the light of the civil society statement of 4 July 2023, in which the authors again ‘deeply regret(ted) that the negotiating States have chosen to exclude both civil society observers and Council of Europe member participants from the formal and informal meetings of the drafting group of the Convention. This undermines the transparency and accountability of the Council of Europe and is contrary to the established Council of Europe practice and the Committee on AI (CAI) own Terms of Reference which instructs the CAI to “contribute[…] to strengthening the role and meaningful participation of civil society in its work”.’[5]

The influence of non-member States has not been limited to the organisation of meetings. As detailed below, the American and Canadian delegations, among others, threw their full weight behind the systematic watering down of the substance of the Convention.

2- A convention with no specific rights and very limited obligations

How should the mandate of the CAI be understood? According to its terms of reference, the Committee is instructed to ‘establish an international negotiation process and conduct work to finalise an appropriate legal framework on the development, design, use and decommissioning of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law and other relevant international standards, and conducive to innovation, which can be composed of a binding legal instrument of a transversal character, including notably general common principles (…)’[6].

The objective of including ‘general common principles’ in the Convention has been interpreted literally by the Chair, who considered that ‘the AI Convention will offer an underlying baseline of principles in how to handle the technology, on top of which individual governments can then build their own legislation to meet their own specific needs’[7]. Indeed, the last publicly available version of the draft Convention, dated 18 December 2023, only refers to ‘principles’ and not to specific rights[8], even those already existing in the framework of the Council of Europe and beyond. In the context of AI, though, one could have hoped for the recognition of certain rights, such as the right to human oversight and the right to an explanation of AI-based decisions.

Such a choice has been criticised by civil society representatives. In a public statement of 4 July 2023, they recalled that ‘while including general common principles for AI regulation as indicated in the CAI Terms of Reference, the Convention should respect the rights established by other Conventions and not reformulate them as mere principles’[9].

Unfortunately, the Convention, at least in the version of 18 December 2023, does not even expressly include the right to privacy and the right to the protection of personal data. Yet, if data are, as the Chair himself put it, ‘the oil of the 21st century’[10], the need to protect our rights in this area is critical.

If one compares the successive publicly accessible versions of the Convention, from the zero draft[11] to the version of 18 December, one can only deplore the constant watering down of its content. What about the ‘prohibited artificial intelligence practices’ referred to in Article 14 of the zero draft? What about the definitions, which in the zero draft included the notion of ‘artificial intelligence subject’, defined as ‘any natural or legal person whose human rights and fundamental freedoms, legal rights or interests are impacted by decisions made or substantially informed by the use of an artificial intelligence system’? What about a clear presentation of the risk-based approach, with a differentiation of the measures to be applied to artificial intelligence systems posing significant and unacceptable levels of risk (see Articles 12 and 13 of the zero draft)?

Moreover, in the version of 18 December 2023, a number of obligations in principle imposed on Parties might become mere obligations of means, since the possible -or already accepted- wording would be that each Party should ‘seek to ensure’ that adequate measures are in place. This is the case in particular in the article dedicated to the ‘integrity of democratic processes and respect for the rule of law’, as well as in the article on ‘accountability and responsibility’ and even in the article on procedural safeguards, where persons interact with an artificial intelligence system without knowing it.

According to an article published in Euractiv on 31 January 2024 and updated on 15 February 2024, even the version of 18 December 2023 seems to have been watered down further: ‘Entire provisions, such as protecting health and the environment, measures promoting trust in AI systems, and the requirement to provide human oversight for AI-driven decisions affecting people’s human rights, have been scrapped’[12].

3- The worst to come?

One crucial element of the Convention still needs to be discussed: its scope. Since the beginning of the negotiations, the USA and Canada, but also Japan and Israel, none of them members of the Council of Europe, have clearly indicated their wish to limit the scope of the instrument to activities within the lifecycle of artificial intelligence systems undertaken by public authorities only[13]. In their view, national security and defence should also fall outside the scope of the Convention. The version of 18 December includes several wordings regarding the national security exemption, reflecting different levels of exemption.

The issue of the scope has led civil society representatives to draft an open letter[14], signed by an impressive number of organisations, calling on the EU and the State Parties negotiating the text of the Convention to cover the public and private sectors equally and to unequivocally reject blanket exemptions for national security and defence.

Today no one knows what the result of the last round of negotiations will be: the EU seems determined to maintain its position in favour of including the private sector in the scope of the Convention, while the Americans and Canadians might use the prospect of their signature as blackmail to ensure the exclusion of the private sector.

4- Who gains?

From the perspective of the Council of Europe, an organisation founded on the values of human rights, democracy and the rule of law, the first question that comes to mind is what the expected results of the ongoing negotiations are. Can the obsession with seeing the Americans sign the Convention justify such a weakened text, even with the private sector in its scope? What would the Council of Europe and its member States gain from accepting a Convention which looks like a simple Declaration, not very far, in fact, from the Organisation for Economic Co-operation and Development’s Principles on AI[15]?

At this stage, it seems that neither the Americans nor the Canadians are ready to sign the Convention if it includes the private sector, even if an opt-out clause were inserted in the text. The gamble of the Chair and the Secretariat to keep these two observer States on board at the price of excessive compromises might ultimately be lost. One should not forget that these States do not have voting rights in the Committee of Ministers.

The second question that comes to mind is why the Chair and the Secretariat of the CAI and, above them, those who lead the Council of Europe have made such a choice. Does it have a link with internal decisions to be taken in the near future as regards the posts of Secretary General of the organisation and of Director General of Human Rights and Rule of Law? Does the nationality of the Chair have a role to play in this game? In any case, the future Convention might look like an empty shell, which might have more adverse effects than appears prima facie, by legitimising practices around the world that would be considered incompatible with European standards.

NOTES


[1] See in particular ‘The Council of Europe’s road towards an AI Convention: taking stock’ by Peggy Valcke and Victoria Hendrickx, 9 February 2023: ‘Whereas the AI Act focuses on the digital single market and does not create new rights for individuals, the Convention might fill these gaps by being the first legally binding treaty on AI that focuses on democracy, human rights and the rule of law’. https://www.law.kuleuven.be/citip/blog/the-council-of-europes-road-towards-an-ai-convention-taking-stock/

[2] https://www.euractiv.com/section/digital/news/us-obtains-exclusion-of-ngos-from-drafting-ai-treaty/

[3] Ibid.

[4] https://rm.coe.int/cai-2023-03-list-of-decisions/1680a9cc4f

[5] https://ecnl.org/sites/default/files/2023-07/CSO-COE-Statement_07042023_Website.pdf

[6] https://rm.coe.int/terms-of-reference-of-the-committee-on-artificial-intelligence-cai-/1680ade00f

[7] https://www.politico.eu/newsletter/digital-bridge/one-treaty-to-rule-ai-global-politico-transatlantic-data-deal/

[8] with the exception of ‘rights of persons with disabilities and of children’ in Article 18

[9] https://ecnl.org/sites/default/files/2023-07/CSO-COE-Statement_07042023_Website.pdf

[10] https://www.linkedin.com/pulse/data-oil-21st-century-ai-systems-engines-digital-thomas-schneider/

[11] https://www.statewatch.org/news/2023/january/council-of-europe-convention-on-artificial-intelligence-zero-draft-and-member-state-submissions/

[12] https://www.euractiv.com/section/artificial-intelligence/news/tug-of-war-continues-on-international-ai-treaty-as-text-gets-softened-further/

[13] Ibid.

[14] https://docs.google.com/document/d/19pwQg0r7g5Dm6_OlRvTAgBPGXaufZrNW/edit

[15] https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449