The Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law: perhaps a global reach, but an absence of harmonisation for sure

by Michèle DUBROCARD (*)

On 15 March 2024, Ms Marija Pejčinović Burić, the Secretary General of the Council of Europe, made a statement on the occasion of the finalisation of the Convention on Artificial Intelligence (AI), Human Rights, Democracy and the Rule of Law. She welcomed what she described as an ‘extraordinary achievement’, namely the setting out of a legal framework covering AI systems throughout their entire lifecycle. She also stressed the global nature of the instrument, ‘open to the world’.

Is it really so? The analysis of the scope, as well as of the obligations set forth in the Convention, raises doubts about the connection between the stated intent and the finalised text. However, this text still needs to be formally adopted by the Ministers of Foreign Affairs of the Council of Europe Member States on the occasion of the 133rd Ministerial Session of the Committee of Ministers on 17 May 2024, after the issuing of the opinion of the Parliamentary Assembly of the Council of Europe (PACE)[1].

I- The scope of the Convention

It is no secret that the definition of the scope of the Convention created a lot of controversy among the negotiators[2]. In brief, a number of States, a majority of which are not members of the Council of Europe[3] but participated in the discussions as observers, essentially opposed the European Union in seeking to limit the scope of the Convention to activities related to AI systems undertaken by public authorities only, and to exclude the private sector.

Those observer States achieved their goal, presumably with the help of the Chair[4] and the Secretariat of the Committee on Artificial Intelligence (CAI), but they did it in a roundabout way, with an ambiguous wording. Indeed, the reading of both Article 1.1 and Article 3.1(a) of the Convention may lead one to think prima facie that the scope of the Convention is really ‘transversal’[5], irrespective of whether activities linked to AI systems are undertaken by private or public actors:

– according to Article 1.1, ‘the provisions of this Convention aim to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law’.

– according to Article 3.1(a), ‘the scope of this Convention covers the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and rule of law as follows’.

This impression is confirmed by the explanatory report, which states in par. 15 that ‘the Drafters aim to cover any and all activities from the design of an artificial intelligence system to its retirement, no matter which actor is involved in them’.

However, the rest of Article 3 annihilates such wishful thinking: as regards activities undertaken by private actors, the application of the Convention will depend on the goodwill of States. Worse still, a Party may choose not to apply the principles and obligations set forth in the Convention to the activities of private actors, and nevertheless be regarded as compliant with the Convention, as long as it takes ‘appropriate measures’ to fulfil the obligation of addressing risks and impacts arising from those activities:

‘Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors to the extent not covered in subparagraph (a) in a manner conforming with the object and purpose of the Convention.

Each Party shall specify in a declaration submitted to the Secretary General of the Council of Europe at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession how it intends to implement this obligation, either by applying the principles and obligations set forth in Chapters II to VI of the Framework Convention to activities of private actors or by taking other appropriate measures to fulfil the obligation set out in this paragraph. Parties may, at any time and in the same manner, amend their declarations’.

How should one interpret such a provision? It seems to allow Parties to submit a reservation on the private sector but, at the same time, it is not worded as a reservation per se. On the contrary, it establishes a sort of equivalence between the principles and obligations laid down in the Convention and the ‘other appropriate measures’ to be taken by the Parties when addressing risks and impacts arising from activities related to AI systems undertaken by private actors. In other words, the Convention itself organizes the circumvention of the principles and obligations that nevertheless constitute the core of its very object.

The result of such a provision is not only a depreciation of the principles and obligations set forth in the Convention, since it is possible to derogate from them for the activities of private actors without derogating from the Convention itself; it also creates fragmentation in the implementation of the instrument. The uncertainty stemming from these declarations is aggravated by the possibility for each Party to amend its declaration at any time. In the absence of any further specification, one could even imagine a situation where a Party would, in the first instance, agree to apply the principles and obligations set forth in the Convention to the private sector, but then, at a later stage, reconsider its initial decision and limit such application to the public sector only.

Instead of establishing a level playing field among the Parties, the Convention legitimizes uncertainty as regards its implementation, in space and time.

On the other hand, Article 3.2 clearly authorizes an exemption, requested this time by the European Union[6], for activities within the lifecycle of AI systems related to the protection of the national security interests of Parties. According to the provision, however, such activities should be ‘conducted in a manner consistent with applicable international law, including international human rights law obligations, and with respect for its democratic institutions and processes’. In the framework of the Council of Europe, such an exemption is particularly surprising in the light of the case-law of the European Court of Human Rights, which has clearly interpreted the concept of ‘national security’[7]. Exempting from the scope of the Convention activities of AI systems related to the protection of national security interests therefore seems at best useless, if not conflicting with the obligations stemming from the European Convention on Human Rights.

In addition to national security interests, Article 3 provides for two further exemptions, namely research and development activities and national defence. Concerning research and development activities regarding AI systems not yet made available for use, Article 3.3 also includes what seems to be a safeguard, since the Convention should nevertheless apply when ‘testing or similar activities are undertaken in such a way that they have the potential to interfere with human rights, democracy and the rule of law’. However, there is no indication of how, and by whom, this potential to interfere could be assessed. The explanatory report is of no help on this point, since it limits itself to paraphrasing the provision of the article[8].

As regards matters related to national defence, the explanatory report[9] refers to the Statute of the Council of Europe, which excludes such matters from the scope of the Organisation. One may however wonder whether the rules of the Statute are sufficient to justify such a blanket exemption, especially in the light of the ‘global reach’ that the Convention is supposed to have[10]. Moreover, contrary to the explanations related to ‘national security interests’, the explanatory report does not mention activities regarding ‘dual use’ AI systems, which should fall within the scope of the Convention insofar as those systems are intended to be used for purposes other than national defence.

II- Principles and obligations set forth in the Convention

According to the explanatory report, the Convention ‘creates various obligations in relation to the activities within the lifecycle of artificial intelligence systems’[11].

When reading Chapters II to VI of the Convention, one can seriously doubt whether the Convention really ‘creates’ obligations, or whether it simply recalls principles and obligations already recognized by previous international instruments. Moreover, the binding character of such obligations seems quite questionable.

II-A Principles and obligations previously recognized

A number of principles and obligations enshrined in the Convention refer to human rights already protected as such by the European Convention on Human Rights, but also by other international human rights instruments. Apart from Article 4, which recalls the need to protect human rights in general, Article 5 is dedicated to the integrity of democratic processes and respect for the rule of law[12], Article 10 to equality and non-discrimination[13], Article 11 to privacy and personal data protection[14], while Articles 14 and 15 recall the right to an effective remedy[15].

Other principles are more directly related to AI, such as individual autonomy in Article 7, transparency and oversight in Article 8, accountability and responsibility in Article 9, and reliability in Article 12, but once again these principles are not new. In particular, they were already identified in the Organisation for Economic Co-operation and Development (OECD) Recommendation on AI, adopted on 19 May 2019[16].

This feeling of déjà vu is reinforced by the wording of the Convention: in most articles, each Party shall ‘adopt or maintain measures’ to ensure respect for those principles and obligations. As duly noted in the explanatory report, ‘in using “adopt or maintain”, the Drafters wished to provide flexibility for Parties to fulfil their obligations by adopting new measures or by applying existing measures such as legislation and mechanisms that existed prior to the entry into force of the Framework Convention’[17].

The question that inevitably comes to mind is what the added value of this new instrument can be, if it only recalls internationally recognized principles and obligations, some of them already constituting justiciable rights.

Indeed, the mere fact that this new instrument deals with activities related to AI systems does not change the obligations imposed on States to protect human rights, as enshrined in applicable international law and domestic laws. The evolution of the case law of the European Court of Human Rights is very significant in this regard. As we know, the Court has considered, on many occasions, that the European Convention on Human Rights is to be seen as ‘a living instrument which must be interpreted in the light of present-day conditions’[18]. One can predict without much risk that the Court will have to deal with an increasing number of cases involving the use of AI systems in the future[19].

II-B A declaratory approach

One could try to advocate for this new Convention by emphasizing the introduction of some principles and measures which have not yet been enshrined in a binding instrument. Such is the case, for instance, of the concepts of transparency and oversight, to be linked to those of accountability and responsibility, and reliability, and of the measures to be taken to assess and mitigate the risks and adverse impacts of AI systems.

However, the way these principles and measures have been defined and, above all, the way their implementation is foreseen reveal a declaratory approach, rather than an intention to establish a truly binding instrument, uniformly applicable to all.

Moreover, the successive versions of the Convention, from the zero draft to the last version of March 2024, reveal a constant watering down of its content: the provisions on the need to protect health and the environment have been moved to the Preamble, while those aiming at the protection of whistleblowers have been removed altogether.

In the light of the EU Artificial Intelligence Act[20], the current situation is almost ironic: the Convention does not create any new individual right, contrary to the EU regulation, which clearly recognizes, for instance, human oversight as well as the right to an explanation of individual decision-making. And yet, the overall scheme of the AI Act is based on market surveillance and product conformity considerations, while the Council of Europe Convention on AI is supposed to focus on human rights, democracy and the rule of law[21].

So, what is this Convention about? Essentially, obligations of means and total flexibility as regards the means to fulfil them.

– obligations of means:

A number of obligations imposed in principle on Parties are in fact mere obligations of means, since each Party is only requested to ‘seek to ensure’ that adequate measures are in place. This is the case in Article 5, dedicated to the ‘integrity of democratic processes and respect for rule of law’. It is also the case in Article 15 on procedural safeguards, when persons interact with an artificial intelligence system without knowing it, in Article 16.3 in relation to the need to ensure that adverse impacts of AI systems are adequately addressed, and in Article 19 on public consultation.

In the same vein, other articles include formulations which leave States with considerable room for manoeuvre in applying the obligations: as regards reliability, each Party shall take measures ‘as appropriate’ to promote this principle[22]. As regards digital literacy and skills, each Party shall ‘encourage and promote’ them[23]. Similarly, Parties are merely ‘encouraged’ to strengthen cooperation to prevent and mitigate risks and adverse impacts in the context of AI systems[24].

More importantly, it will be up to the Parties to ‘assess the need for a moratorium or ban’ on AI systems posing unacceptable risks[25]. One can only deplore the removal of former Article 14 of the zero draft, which provided for a ban on the use by public authorities of AI systems relying on biometrics to identify, categorise or infer the emotions of individuals, as well as on the use of those systems for social scoring to determine access to essential services. Here again, the Convention falls below the standards defined by the AI Act[26].

– the choice of the measures to be adopted:

First, one should note that, from the very first article of the Convention, flexibility is offered to the Parties as regards the nature of the measures to be adopted. Article 1.2 provides the possibility for each Party ‘to adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention’.

Consequently, Parties might consider that their domestic system is fully compliant with the Convention without making any change to their regulations. They could even consider that mere recommendations addressed to public or private actors are sufficient to fulfil their obligations under the Convention.

The wide leeway given to the States also explains the constant reference to ‘domestic law’[27] or to the domestic legal system[28] throughout the Convention. In particular, Article 6, which constitutes a chapeau for the whole of Chapter III, states that the principles included in this Chapter shall be implemented by each Party ‘in a manner appropriate to its domestic legal system and the other obligations of this Convention’. Such a wording is not free from ambiguity, since it might be interpreted as requiring, as part of their implementation, an adaptation of the principles set forth in the Convention to pre-existing domestic law, and not the opposite.

Here again, with this constant reference to domestic laws intrinsically linked to the ‘flexibility’ given to the Parties, one can only deplore the lack of harmonisation of the ‘measures’ which might be adopted in accordance with the Convention.

– the absence of an international oversight mechanism:

It is true that Article 26 of the Convention lays down the obligation for each Party to establish or designate one or more effective mechanisms to oversee compliance with the obligations of the Convention. However, once again, Parties are free to choose how they will implement such mechanisms, without any supervisory control at the international level. The Conference of the Parties, composed of representatives of the Parties and established by Article 23 of the Convention, won’t have any monitoring powers. The only obligation foreseen – in Article 24 – is a reporting obligation to the Conference of the Parties, within the first two years after the State concerned has become a Party. But after this first report, there is no indication of the periodicity of the reporting obligation.

Conclusion

Despite continuous pressure from civil society[29] and the interventions of the highest authorities in the field of human rights and data protection[30], the final outcome of the negotiations is a weak text, based on very general principles and obligations. Some of them even fall below the standards recognized in the framework of the Council of Europe, in the light of the European Convention on Human Rights and the case law of the European Court of Human Rights, as well as of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data. Moreover, their application won’t be consistent among the Parties, due to a variable-geometry scope and the considerable margin of manoeuvre left to the Parties to implement the Convention.

Why so many concessions, in the context of negotiations held under the umbrella of the Council of Europe, which presents itself as the ‘continent’s leading human rights organisation’? The answer given by Council of Europe representatives is: ‘global reach’. But should the hope of seeing States which are not members of the Council of Europe ratify the Convention justify such a lack of ambition?

Yet it is not the first time that an international binding instrument negotiated in the framework of the Council of Europe allows for a fragmented application of its provisions: the Second Additional Protocol to the Convention on Cybercrime[31] already provided a sort of ‘pick and choose’ mechanism in several articles. However, what could be understood in the light of the fight against cybercrime is more difficult to accept in the framework of a Convention aiming at protecting human rights, democracy and the rule of law in the context of artificial intelligence systems.

It is possible that the negotiators could not achieve a better result, in view of the positions expressed in particular by the United States, Canada, Japan and Israel. In that case, the Council of Europe would have been better advised either to be less ambitious and drop the aim of a ‘global reach’, or to wait a few more years until minds have matured.

(*)  EDPS official: This text is the sole responsibility of the author, and does not represent the official position of the EDPS

NOTES


[1] The Opinion adopted by the PACE on 18 April 2024 includes several proposals to improve the text. See https://pace.coe.int/en/files/33441/html

[2] See an article published in Euractiv on 31 January 2024, updated on 15 February 2024: https://www.euractiv.com/section/artificial-intelligence/news/tug-of-war-continues-on-international-ai-treaty-as-text-gets-softened-further/

See also the open letter of the representatives of the civil society:

https://docs.google.com/document/d/19pwQg0r7g5Dm6_OlRvTAgBPGXaufZrNW/edit, and an article by Emilio De Capitani: ‘The COE Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Is the Council of Europe losing its compass?’ https://free-group.eu/2024/03/04/the-coe-convention-on-artificial-intelligence-human-rights-democracy-and-the-rule-of-law-is-the-council-of-europe-losing-its-compass/

[3] USA, Canada, Japan, Israel.

[4] See an article issued in swissinfo.ch – https://www.swissinfo.ch/eng/foreign-affairs/ai-regulation-is-swiss-negotiator-a-us-stooge/73480128

[5] The terms of reference of the CAI explicitly refer to the establishment of a ‘binding legal instrument of a transversal character’.

[6] See, for instance, an article in Euractiv, ‘EU prepares to push back on private sector carve-out from international AI treaty’: https://www.euractiv.com/section/artificial-intelligence/news/eu-prepares-to-push-back-on-private-sector-carve-out-from-international-ai-treaty/

[7] ‘National security and European case-law’, Research Division of the European Court of Human Rights – https://rm.coe.int/168067d214

[8] Paragraph 33 of the explanatory report: ‘As regards paragraph 3, the wording reflects the intent of the Drafters to exempt research and development activities from the scope of the Framework Convention under certain conditions, namely that the artificial intelligence systems in question have not been made available for use, and that the testing and other similar activities do not pose a potential for interference with human rights, democracy and the rule of law. Such activities excluded from the scope of the Framework Convention should in any case be carried out in accordance with applicable human rights and domestic law as well as recognised ethical and professional standards for scientific research’.

[9] Paragraph 36 of the explanatory report.

[10] In its opinion of 18 April 2024 the PACE suggested envisaging only a restriction. See note 1 above.

[11] Paragraph 14 of the explanatory report.

[12] These principles are closely linked to freedom of expression and the right to free elections: see in particular Article 10 of the European Convention on Human Rights and Article 3 of Protocol 1.

[13] See in particular Article 14 of the European Convention on Human Rights and Protocol 12.

[14] See in particular Article 8 of the European Convention on Human Rights and the case law of the European Court of Human Rights, as well as Article 1 of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.

[15] See in particular Article 13 of the European Convention on Human Rights.

[16] https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449#mainText

[17] Paragraph 17 of the explanatory report.

[18] See Tyrer v. United Kingdom, 2 EHRR 1, para. 31.

[19] On 4 July 2023, the Third Section of the European Court of Human Rights delivered the first judgment on the compatibility of facial recognition technology with human rights in Glukhin v. Russia:

https://hudoc.echr.coe.int/eng#%22display%22:%5B2%5D,%22itemid%22:%5B%22001-225655%22%5D

[20] See Articles 14 and 86 of the AI Act – https://artificialintelligenceact.eu/the-act/

[21] ‘The Council of Europe’s road towards an AI Convention: taking stock’ by Peggy Valcke and Victoria Hendrickx, 9 February 2023: ‘Whereas the AI Act focuses on the digital single market and does not create new rights for individuals, the Convention might fill these gaps by being the first legally binding treaty on AI that focuses on democracy, human rights and the rule of law’. https://www.law.kuleuven.be/citip/blog/the-council-of-europes-road-towards-an-ai-convention-taking-stock/

[22] Article 12 of the Convention.

[23] Article 20 of the Convention.

[24] Article 25 of the Convention.

[25] Article 16.4 of the Convention.

[26] See Chapter II of the AI Act – https://artificialintelligenceact.eu/the-act/

[27] See Articles 4, 10, 11 and 15.

[28] See Articles 6 and 14.

[29] See in particular the open letter of 5 March 2024:

https://docs.google.com/document/d/19pwQg0r7g5Dm6_OlRvTAgBPGXaufZrNW/edit

[30] See the statement of the Council of Europe Commissioner for Human Rights:

https://www.coe.int/en/web/commissioner/-/ai-instrument-of-the-council-of-europe-should-be-firmly-based-on-human-rights

See also the EDPS statement in view of the 10th and last Plenary Meeting of the Committee on Artificial Intelligence (CAI) of the Council of Europe drafting the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law: https://www.edps.europa.eu/press-publications/press-news/press-releases/2024/edps-statement-view-10th-and-last-plenary-meeting-committee-artificial-intelligence-cai-council-europe-drafting-framework-convention-artificial_en

[31] Second Additional Protocol to the Convention on Cybercrime on enhanced co-operation and disclosure of electronic evidence- https://rm.coe.int/1680a49dab
