The Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law: perhaps a global reach, but an absence of harmonisation for sure

by Michèle DUBROCARD (*)

On 15 March 2024, Ms Marija Pejčinović Burić, the Secretary General of the Council of Europe, made a statement on the occasion of the finalisation of the Convention on Artificial Intelligence (AI), Human Rights, Democracy and the Rule of Law. She welcomed what she described as an ‘extraordinary achievement’, namely the setting out of a legal framework that covers AI systems throughout their entire lifecycle. She also stressed the global nature of the instrument, ‘open to the world’.

Is it really so? The analysis of the scope, as well as of the obligations set forth in the Convention, raises doubts about the connection between the stated intent and the finalised text. However, this text still needs to be formally adopted by the Ministers of Foreign Affairs of the Council of Europe Member States on the occasion of the 133rd Ministerial Session of the Committee of Ministers on 17 May 2024, after the issuing of the opinion of the Parliamentary Assembly of the Council of Europe (PACE)[1].

I- The scope of the Convention

It is no secret that the definition of the scope of the Convention created a lot of controversy among the negotiators[2]. In brief, a number of States, a majority of which are not members of the Council of Europe[3] but participated in the discussions as observers, opposed the European Union in order to limit the scope of the Convention to activities related to AI systems undertaken by public authorities only, and to exclude the private sector.

Those observer States achieved their goal, presumably with the help of the Chair[4] and the Secretariat of the Committee on Artificial Intelligence (CAI), but they did it in a roundabout way, with ambiguous wording. Indeed, the reading of both Article 1.1 and Article 3.1(a) of the Convention may lead one to think prima facie that the scope of the Convention is really ‘transversal’[5], irrespective of whether activities linked to AI systems are undertaken by private or public actors:

– according to Article 1.1, ‘the provisions of this Convention aim to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law’;

– according to Article 3.1(a), ‘the scope of this Convention covers the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and the rule of law as follows’.

This impression is confirmed by the explanatory report, which states in par. 15 that ‘the Drafters aim to cover any and all activities from the design of an artificial intelligence system to its retirement, no matter which actor is involved in them’.

However, the rest of Article 3 annihilates such wishful thinking: as regards activities undertaken by private actors, the application of the Convention will depend on the goodwill of States. Better still, a Party may choose not to apply the principles and obligations set forth in the Convention to the activities of private actors, and nevertheless be seen as compliant with the Convention, as long as it takes ‘appropriate measures’ to fulfil the obligation of addressing risks and impacts arising from those activities:

‘Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors to the extent not covered in subparagraph (a) in a manner conforming with the object and purpose of the Convention.

Each Party shall specify in a declaration submitted to the Secretary General of the Council of Europe at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession how it intends to implement this obligation, either by applying the principles and obligations set forth in Chapters II to VI of the Framework Convention to activities of private actors or by taking other appropriate measures to fulfil the obligation set out in this paragraph. Parties may, at any time and in the same manner, amend their declarations’.

How should one interpret such a provision? It seems to allow Parties to submit a reservation on the private sector but, at the same time, it is not worded as a reservation per se. On the contrary, it establishes a sort of equivalence between the principles and obligations laid down in the Convention and ‘other appropriate measures’ to be taken by the Parties when addressing risks and impacts arising from activities related to AI systems undertaken by private actors. In other words, the Convention organizes the circumvention of the very principles and obligations that constitute the core of its object.

The result of such a provision is not only a depreciation of the principles and obligations set forth in the Convention, since it is possible to derogate from them for activities of private actors without derogating from the Convention itself; it also creates fragmentation in the implementation of the instrument. The uncertainty stemming from these declarations is aggravated by the possibility, for each Party, to amend its declaration at any time. Since there is no other specification, one could even imagine a situation where a Party could, in the first instance, agree to apply the principles and obligations set forth in the Convention to the private sector, but then, at a later stage, reconsider its initial decision and limit such application to the public sector only.

Instead of establishing a level playing field among the Parties, the Convention legitimizes uncertainty as regards its implementation, in space and time.

On the other hand, Article 3.2 clearly authorizes an exemption, requested this time by the European Union[6], for activities within the lifecycle of AI systems related to the protection of the national security interests of Parties. However, according to the provision, such activities should be ‘conducted in a manner consistent with applicable international law, including international human rights law obligations, and with respect for its democratic institutions and processes’. In the framework of the Council of Europe, such an exemption is particularly surprising in the light of the case law of the European Court of Human Rights, which has clearly interpreted the concept of ‘national security’[7]. Exempting from the scope of the Convention activities of AI systems related to the protection of national security interests therefore seems at best useless, if not conflicting with the obligations stemming from the European Convention on Human Rights.

In addition to national security interests, Article 3 foresees two more exemptions, namely research and development activities and national defence. Concerning research and development activities regarding AI systems not yet made available for use, Article 3.3 also includes what seems to be a safeguard, since the Convention should nevertheless apply when ‘testing or similar activities are undertaken in such a way that they have the potential to interfere with human rights, democracy and the rule of law’. However, there is no indication of how and by whom this potential to interfere could be assessed. The explanatory report is of no help on this point, since it limits itself to paraphrasing the provision of the article[8].

As regards matters related to national defence, the explanatory report[9] refers to the Statute of the Council of Europe, which excludes them from the scope of the organisation. One can however wonder whether the rules of the Statute are sufficient to justify such a blanket exemption, especially in the light of the ‘global reach’ that the Convention is supposed to have[10]. Moreover, contrary to the explanations related to ‘national security interests’, the explanatory report does not mention activities regarding ‘dual use’ AI systems, which should fall within the scope of the Convention insofar as these activities are intended for purposes not related to national defence.

II- Principles and obligations set forth in the Convention

According to the explanatory report, the Convention ‘creates various obligations in relation to the activities within the lifecycle of artificial intelligence systems’[11].

When reading Chapters II to VI of the Convention, one can seriously doubt whether the Convention really ‘creates’ obligations or rather simply recalls principles and obligations already recognized by previous international instruments. Moreover, the binding character of such obligations seems quite questionable.

II-A Principles and obligations previously recognized

A number of principles and obligations enshrined in the Convention refer to human rights already protected as such by the European Convention on Human Rights, but also by other international human rights instruments. Apart from Article 4, which recalls the need to protect human rights in general, Article 5 is dedicated to the integrity of democratic processes and respect for the rule of law[12], Article 10 is about equality and non-discrimination[13], Article 11 refers to privacy and personal data protection[14], and Articles 14 and 15 recall the right to an effective remedy[15].

Other principles are more directly related to AI, such as individual autonomy in Article 7, transparency and oversight in Article 8, accountability and responsibility in Article 9, and reliability in Article 12, but once again these principles are not new. In particular, they were already identified in the Organisation for Economic Co-operation and Development (OECD) Recommendation on AI, adopted on 19 May 2019[16].

This feeling of déjà vu is reinforced by the wording of the Convention: in most articles, each Party shall ‘adopt or maintain measures’ to ensure respect for those principles and obligations. As duly noted in the explanatory report, ‘in using “adopt or maintain”, the Drafters wished to provide flexibility for Parties to fulfil their obligations by adopting new measures or by applying existing measures such as legislation and mechanisms that existed prior to the entry into force of the Framework Convention’[17].

The question that inevitably comes to mind is what the added value of this new instrument can be, if it only recalls internationally recognized principles and obligations, some of them already constituting justiciable rights.

Indeed, the mere fact that this new instrument deals with activities related to AI systems does not change the obligations imposed on States to protect human rights, as enshrined in applicable international law and domestic laws. The evolution of the case law of the European Court of Human Rights is very significant in this regard. As we know, the Court has considered, on many occasions, that the European Convention on Human Rights is to be seen as ‘a living instrument which must be interpreted in the light of present-day conditions’[18]. One can predict without much risk that in the future the Court will have to deal with an increasing number of cases involving the use of AI systems[19].

II-B A declaratory approach

One could try to advocate for this new Convention by emphasizing the introduction of some principles and measures which have not yet been enshrined in a binding instrument. Such is the case, for instance, of the concepts of transparency and oversight, linked to those of accountability, responsibility and reliability, and of the measures to be taken to assess and mitigate the risks and adverse impacts of AI systems.

However, the way these principles and measures have been defined and, above all, how their implementation is foreseen reveal a declaratory approach, rather than the intention to establish a truly binding instrument, uniformly applicable to all.

Moreover, the successive versions of the Convention, from the zero draft to the last version of March 2024, reveal a constant watering down of its content: the provisions on the need to protect health and the environment have been moved to the Preamble, while those aiming at the protection of whistleblowers have been removed.

In the light of the EU Artificial Intelligence Act[20], the current situation is almost ironic, since the Convention does not create any new individual right, contrary to the EU regulation, which clearly recognizes, for instance, human oversight as well as the right to an explanation of individual decision-making. And yet, the overall scheme of the AI Act is based on market surveillance and product conformity considerations, while the Council of Europe Convention on AI is supposed to focus on human rights, democracy, and the rule of law[21].

So, what is this Convention about? Essentially obligations of means and total flexibility as regards the means to fulfil them.

– obligations of means:

A number of obligations in principle imposed on Parties are in fact mere obligations of means, since each Party is requested to ‘seek to ensure’ that adequate measures are in place. This is the case in Article 5, dedicated to the ‘integrity of democratic processes and respect for the rule of law’. It is also the case in Article 15 on procedural safeguards, where persons interact with an artificial intelligence system without knowing it, in Article 16.3 in relation to the need to ensure that adverse impacts of AI systems are adequately addressed, and in Article 19 on public consultation.

In the same vein, other articles include formulations which leave States with considerable room for manoeuvre in applying the obligations: as regards reliability, each Party shall take measures ‘as appropriate’ to promote this principle[22]. As regards digital literacy and skills, each Party shall ‘encourage and promote’ them[23]. Similarly, Parties are ‘encouraged’ to strengthen cooperation to prevent and mitigate risks and adverse impacts in the context of AI systems[24].

More importantly, it will be up to Parties to ‘assess the need for a moratorium or ban’ on AI systems posing unacceptable risks[25]. One can only deplore the removal of former Article 14 of the zero draft, which provided for a ban on the use by public authorities of AI systems using biometrics to identify, categorise or infer the emotions of individuals, as well as on the use of those systems for social scoring to determine access to essential services. Here again, the Convention falls below the standards defined by the AI Act[26].

– the choice of the measures to be adopted:

First, one should note that from the very first article of the Convention, flexibility is offered to the Parties as regards the nature of the measures to be adopted. Article 1.2 provides the possibility for each Party ‘to adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention’.

Consequently, Parties might consider that their domestic system is fully compliant with this Convention without any change in their regulations. They could even consider that simple recommendations to public or private actors might be sufficient to fulfil their obligations under the Convention.

The wide leeway given to the States also explains the constant reference to ‘domestic law’[27] or to the domestic legal system[28] throughout the Convention. In particular, Article 6, which constitutes a chapeau for the whole of Chapter III, states that the principles included in this Chapter shall be implemented by Parties ‘in a manner appropriate to its domestic legal system and the other obligations of this Convention’. Such wording is not free from a certain ambiguity, since it might be interpreted as requiring, as part of their implementation, an adaptation of the principles set forth in the Convention to pre-existing domestic law, and not the opposite.

Here again, with this constant reference to domestic laws intrinsically linked to the ‘flexibility’ given to the Parties, one can only deplore the lack of harmonisation of the ‘measures’ which might be adopted in accordance with the Convention.

– the absence of an international oversight mechanism:

It is true that Article 26 of the Convention lays down the obligation for each Party to establish or designate one or more effective mechanisms to oversee compliance with the obligations of the Convention. However, once again, Parties are free to choose how they will implement such mechanisms, without any supervisory control at the international level. The Conference of the Parties, composed of representatives of the Parties and established by Article 23 of the Convention, won’t have any monitoring powers. The only obligation foreseen – in Article 24 – is a reporting obligation to the Conference of the Parties, within the first two years after the State concerned has become a Party. But after this first report, there is no indication of the periodicity of the reporting obligation.

Conclusion

Despite the continuous pressure from civil society[29] and the interventions of the highest authorities in the field of human rights and data protection[30], the final outcome of the negotiations is a weak text, based on very general principles and obligations. Some of them even fall below the standards recognized in the framework of the Council of Europe, in the light of the European Convention on Human Rights and the case law of the European Court of Human Rights, as well as of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data. Moreover, their application won’t be consistent among the Parties, due to a variable-geometry scope and the considerable margin of manoeuvre left to the Parties to implement the Convention.

Why so many concessions, in the context of negotiations held under the umbrella of the Council of Europe, which presents itself as the ‘continent’s leading human rights organisation’? The answer of the Council of Europe representatives is: ‘global reach’. So, should the hope of seeing States which are not members of the Council of Europe ratify the Convention justify such a lack of ambition?

Yet it is not the first time that an international binding instrument negotiated in the framework of the Council of Europe allows for a fragmented application of its provisions: the Second Additional Protocol to the Convention on Cybercrime[31] already provided some sort of ‘pick and choose’ mechanism in several articles. However, what could be understood in the light of the fight against cybercrime is more difficult to accept in the framework of a Convention aiming at protecting human rights, democracy and the rule of law in the context of artificial intelligence systems.

It is possible that the negotiators could not achieve a better result, in view of the positions expressed in particular by the United States, Canada, Japan and Israel. In that case, the Council of Europe would have been better advised either to be less ambitious and drop the aim of a ‘global reach’, or to wait a few more years until minds had matured.

(*) EDPS official. This text is the sole responsibility of the author and does not represent the official position of the EDPS.

NOTES


[1] The Opinion adopted by the PACE on 18 April 2024 includes several proposals to improve the text. See https://pace.coe.int/en/files/33441/html

[2] See an article published in Euractiv on 31 Jan 2024 and updated on 15 Feb 2024: https://www.euractiv.com/section/artificial-intelligence/news/tug-of-war-continues-on-international-ai-treaty-as-text-gets-softened-further/

See also the open letter of the representatives of the civil society:

https://docs.google.com/document/d/19pwQg0r7g5Dm6_OlRvTAgBPGXaufZrNW/edit, and an article by Emilio de Capitani, ‘The COE Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Is the Council of Europe losing its compass?’: https://free-group.eu/2024/03/04/the-coe-convention-on-artificial-intelligence-human-rights-democracy-and-the-rule-of-law-is-the-council-of-europe-losing-its-compass/

[3] USA, Canada, Japan, Israel.

[4] See an article published on swissinfo.ch – https://www.swissinfo.ch/eng/foreign-affairs/ai-regulation-is-swiss-negotiator-a-us-stooge/73480128

[5] The terms of reference of the CAI explicitly refer to the establishment of a ‘binding legal instrument of a transversal character’.

[6] See, for instance, an article in Euractiv, ‘EU prepares to push back on private sector carve-out from international AI treaty’: https://www.euractiv.com/section/artificial-intelligence/news/eu-prepares-to-push-back-on-private-sector-carve-out-from-international-ai-treaty/

[7] National security and European case-law: Research Division of the European Court of Human Rights- https://rm.coe.int/168067d214

[8] Paragraph 33 of the explanatory report: ‘As regards paragraph 3, the wording reflects the intent of the Drafters to exempt research and development activities from the scope of the Framework Convention under certain conditions, namely that the artificial intelligence systems in question have not been made available for use, and that the testing and other similar activities do not pose a potential for interference with human rights, democracy and the rule of law. Such activities excluded from the scope of the Framework Convention should in any case be carried out in accordance with applicable human rights and domestic law as well as recognised ethical and professional standards for scientific research’.

[9] Paragraph 36 of the explanatory report.

[10] In its opinion of 18 April 2024 the PACE suggested envisaging only a restriction. See note 1 above.

[11] Paragraph 14 of the explanatory report.

[12] These principles are closely linked to freedom of expression and the right to free elections: see in particular Article 10 of the European Convention on Human Rights and Article 3 of Protocol 1.

[13] See in particular Article 14 of the European Convention on Human Rights and Protocol 12.

[14] See in particular Article 8 of the European Convention on Human Rights and the case law of the European Court of Human Rights, as well as Article 1 of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.

[15] See in particular Article 13 of the European Convention on Human Rights.

[16] https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449#mainText

[17] Paragraph 17 of the explanatory report.

[18] See Tyrer v. United Kingdom, 2 EHRR 1, at para. 31.

[19] On 4 July 2023, the Third Section of the European Court of Human Rights delivered the first judgment on the compatibility of facial recognition technology with human rights in Glukhin v. Russia:

https://hudoc.echr.coe.int/eng#%22display%22:%5B2%5D,%22itemid%22:%5B%22001-225655%22%5D

[20] See Articles 14 and 86 of the AI Act – https://artificialintelligenceact.eu/the-act/

[21] ‘The Council of Europe’s road towards an AI Convention: taking stock’ by Peggy Valcke and Victoria Hendrickx, 9 February 2023: ‘Whereas the AI Act focuses on the digital single market and does not create new rights for individuals, the Convention might fill these gaps by being the first legally binding treaty on AI that focuses on democracy, human rights and the rule of law’. https://www.law.kuleuven.be/citip/blog/the-council-of-europes-road-towards-an-ai-convention-taking-stock/

[22] Article 12 of the Convention.

[23] Article 20 of the Convention.

[24] Article 25 of the Convention.

[25] Article 16.4 of the Convention.

[26] See Chapter II of the AI Act – https://artificialintelligenceact.eu/the-act/

[27] See Articles 4, 10, 11 and 15.

[28] See Articles 6 and 14.

[29] See in particular the open letter of 5 March 2024:

https://docs.google.com/document/d/19pwQg0r7g5Dm6_OlRvTAgBPGXaufZrNW/edit

[30] See the statement of the Council of Europe Commissioner for Human Rights:

https://www.coe.int/en/web/commissioner/-/ai-instrument-of-the-council-of-europe-should-be-firmly-based-on-human-rights

See also the EDPS statement in view of the 10th and last Plenary Meeting of the Committee on Artificial Intelligence (CAI) of the Council of Europe drafting the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law: https://www.edps.europa.eu/press-publications/press-news/press-releases/2024/edps-statement-view-10th-and-last-plenary-meeting-committee-artificial-intelligence-cai-council-europe-drafting-framework-convention-artificial_en

[31] Second Additional Protocol to the Convention on Cybercrime on enhanced co-operation and disclosure of electronic evidence- https://rm.coe.int/1680a49dab

(EUROPEAN LAW BLOG) EU/US Adequacy Negotiations and the Redress Challenge: How to Create an Independent Authority with Effective Remedy Powers (2)

16 FEBRUARY 2022 / BY THEODORE CHRISTAKIS, KENNETH PROPP AND PETER SWIRE

Can the U.S. Government create, by non-statutory means, an independent redress authority capable of providing an effective remedy for a European person who believes that her or his rights have been infringed by an intelligence service? In this article we put forward a novel non-statutory solution that could resolve the “redress” problem in the EU/US adequacy negotiations. This solution is based on three “building blocks” inspired by methods utilized in U.S. administrative law. First, the U.S. Department of Justice should issue a binding regulation creating within that executive agency an independent “Foreign Intelligence Redress Authority” (FIRA). Second, the President should issue a separate Executive Order providing the necessary investigative powers and giving FIRA’s decisions binding effect across the intelligence agencies and other components of the U.S. government. Finally, European individuals could obtain judicial review of an independent redress decision by using the existing Administrative Procedure Act.

Our first article, published on January 31, concentrated on whether the U.S. Congress would necessarily have to enact a new statute in order to create an adequate redress mechanism. We examined political, practical, and U.S. constitutional difficulties in enacting such a statute. Based on careful attention to EU law, we concluded that relying on a non-statutory solution could be compatible with the “essential equivalence” requirements of Article 45 of the EU’s General Data Protection Regulation (GDPR), if the requisite substantive protections for redress were put into place.

This article examines, from both a U.S. and a European law perspective, measures that could address the substantive requirements, notably the deficiencies highlighted by the Court of Justice of the European Union (CJEU) in its Schrems II judgment: independence of the redress body; its ability to substantively review the requests; and its authority to issue decisions that are binding on the intelligence agencies. We discuss only the redress issues highlighted by the CJEU. We do not address here the other deficiency cited by the Court — whether U.S. surveillance statutes and procedures sufficiently incorporate principles of “necessity and proportionality” also required under EU law.

Part I of this article explains how the U.S. executive branch could create an independent administrative institution to review redress requests and complaints. The institution, which we call “FIRA”, would be similar in important ways to what in Europe is considered as an independent administrative authority, such as the several surveillance oversight/redress bodies operating in Europe and listed in the EU Agency for Fundamental Rights’ (FRA) 2017 comparative study on surveillance (p. 115 – in France, for example, the National Commission for Control of Intelligence Techniques, CNCTR). We submit that, in the U.S., such an institution could be based on a binding regulation adopted by the Department of Justice (DOJ). Despite being created by the executive branch, the independence of FIRA will be guaranteed, since leading U.S. Supreme Court precedent considers such a regulation to have binding effect and to protect members of the redress authority from interference by the President or the Attorney General. 

Next, Part II of this article assesses how the U.S. executive branch could provide the necessary investigatory powers for FIRA to review European requests and complaints and to adopt decisions binding upon intelligence agencies. This could be done through a Presidential Executive Order that the President may use to limit executive discretion. 

Finally, Part III of this article discusses the important question of whether the ultimate availability of judicial redress is necessary under EU law and whether there is a path under U.S. law to achieve it, despite the 2021 Supreme Court decision in the TransUnion case limiting standing in some privacy cases. We examine reasons why judicial review of decisions by the independent FIRA may not be required under EU law. Nonetheless, we describe a potential path to U.S. judicial review based on the existing Administrative Procedure Act.  

I. Creating an Independent Redress Authority

Based on our discussions with stakeholders, the most difficult intellectual challenge has been how a redress authority can be created within the executive branch yet have the necessary independence from it. We first present the EU criticisms of the Privacy Shield Ombudsperson approach, and then explain how a binding regulation issued by DOJ can address those criticisms satisfactorily. 

1. Identifying the problems of independence with the previous Privacy Shield mechanism

Four criteria for independence of the redress body have been identified by EU authorities in their critiques of the Ombudsperson approach included in the 2016 Privacy Shield. 

a) Protection against dismissal or revocation of the members of the redress body

A crucial measure of independence under EU law is protection against the removal of any member of the independent body. In Schrems II, the CJEU noted there was “nothing in [the Privacy Shield Decision] to indicate that the dismissal or revocation of the appointment of the Ombudsperson is accompanied by any particular guarantees” (§195), a point previously made in 2016 by the Article 29 Working Party (WP29) when it observed “the relative ease with which political appointees can be dismissed” (here, p. 51). Protection against removal is also recognized under U.S. law as a key indicator of independence.(1)

b) Independence as protection against external intervention or pressure

Protection against external intervention is a major requirement for a redress authority, as stated by the Advocate General in his 2019 Schrems II Opinion:

“The concept of independence has a first aspect, which is external and presumes that the body concerned is protected against external intervention or pressure liable to jeopardise the independent judgment of its members as regards proceedings before them” (note 213).  

By contrast, the Ombudsperson in the original Privacy Shield was “presented as being independent of the ‘intelligence community’, [but] (…) not independent of the executive” (§ 337). 

c) Impartiality

In the same opinion, Advocate General Saugmandsgaard Øe stressed (and the CJEU endorsed), the importance of impartiality: “The second aspect of [independence], which is internal, is linked to impartiality and seeks to ensure a level playing field for the parties to the proceedings and their respective interests with regard to the subject matter of those proceedings” (note 213, emphasis added). 

d)  Relationship to the intelligence community 

In its 2015 study on surveillance, FRA noted that there is a “Goldilocks” challenge concerning the ties between redress bodies and intelligence agencies: “While ties that are too close may lead to a conflict of interest, too much separation might result in oversight bodies that, while independent, are very poorly informed” (p. 71).  In 2016, the WP29 found that the Privacy Shield solution did not appropriately respond to this challenge:

“The Under Secretary is nominated by the U.S. President, directed by the Secretary of State as the Ombudsperson, and confirmed by the U.S. Senate in her role as Under Secretary. As the letter and the Memorandum representations stress, the Ombudsperson is ‘independent from the U.S. Intelligence community’. The WP29 however questions if the Ombudsperson is created within the most suitable department. Some knowledge and understanding of the workings of the intelligence community seems to be required in order to effectively fulfil the Ombudsperson’s role, while at the same time indeed sufficient distance from the intelligence community is required to be able to act independently.” (p.49)

2. How the creation of FIRA by DOJ Regulation could fix these problems 

To date, despite insightful discussions of the challenges, we have not seen any detailed public proposals for how the U.S. executive branch might create a redress institution to meet the strict EU requirements for independence.(2) One innovation, which we understand the parties might now be considering, could be a binding U.S. regulation, issued by an agency pursuant to existing statutory authority, to create and govern FIRA. Crucially, leading U.S. Supreme Court cases have given binding effect to a comparable regulation, even in the face of objections by the President or Attorney General.

a) Binding DOJ regulation to ensure independence of the FIRA 

The Department of Justice could issue a regulation to create FIRA and guarantee its independent functioning. Such a regulation could guarantee the independence of the members of FIRA, including protections against removal.

Under the U.S. legal system, such an agency regulation has the force of law, making it suitable for defining the procedures for review of redress requests and complaints. DOJ regularly issues such regulations, under existing statutory authorities, and pursuant to established and public procedures. To protect against arbitrary or sudden change, modifying or repealing the regulation would require following the same public procedural steps as enacting it in the first place. In Motor Vehicle Manufacturers Association v. State Farm Mutual Automobile Insurance Co., the Supreme Court held that since a federal agency had the discretion to issue a regulation initially, it would have to utilize the same administrative procedures to repeal it.

In an EU/U.S. framework for a new Privacy Shield, the U.S. Government unilaterally could commit to maintain this DOJ regulation in force, and the European Commission could reference the U.S. commitment as a condition of its adequacy decision. This would provide both to the EU and to members of FIRA a guarantee against revocation of the regulation ensuring that the authority would act independently. 

b) Supreme Court precedents protect against external intervention or pressure 

During the Watergate scandal involving then-President Richard Nixon, the Department of Justice issued a regulation creating an independent “special prosecutor” (also called “independent counsel”) within that department. The special prosecutor was designed to be independent from Presidential control, with the regulation stipulating that he could not be removed except with involvement by designated members of Congress. 

Acting within the powers defined in the regulation, the special prosecutor issued a subpoena for audio tapes held by the White House. The President, acting through the Attorney General, objected to the subpoena. In a unanimous 1974 decision, United States v. Nixon, the Supreme Court held that the special prosecutor’s decision to issue the subpoena had the force of law, despite the Attorney General’s objection. The Court noted that although the Attorney General has general authority to oversee criminal prosecutions, including by issuing a subpoena, the fact that the special prosecutor had acted pursuant to a binding DOJ regulation deprived the Attorney General of his otherwise plenary power over subpoenas.

The Supreme Court observed that “[t]he regulation gives the Special Prosecutor explicit power” to conduct the investigation and issue subpoenas, and that “[s]o long as this regulation is extant, it has the force of law” (emphasis added).  The Court concluded: 

“It is theoretically possible for the Attorney General to amend or revoke the regulation defining the Special Prosecutor’s authority. But he has not done so. So long as this regulation remains in force, the Executive Branch is bound by it, and indeed the United States, as the sovereign composed of the three branches, is bound to respect and to enforce it.”

In sum, as supported by clear Supreme Court precedent, a DOJ regulation can create a mechanism within the executive branch, so that the members of the administration must comply with its terms, even in the face of contrary instructions from the President or Attorney General. And, as stated earlier, the lasting character of the DOJ regulation creating FIRA could be guaranteed by the US Government in the EU/US agreement and be identified by the European Commission in its subsequent adequacy decision as a condition for maintaining this decision in force.

c) Impartiality

We are not aware of significant U.S. constitutional obstacles to ensuring impartiality in FIRA. DOJ appoints Administrative Law Judges (ALJs), such as for deciding immigration matters, and “[t]he ALJ position functions, and is classified, as a judge under the Administrative Procedure Act.”

U.S. law concerning ALJs, including those located in DOJ, states that they are “independent impartial triers of fact in formal proceedings”.(3) In Nixon, the Supreme Court reaffirmed the lawfulness of an independent adjudicatory function located within the DOJ.(4) A DOJ FIRA regulation could similarly offer guarantees in terms of the impartiality and expertise of its members.

d) Relationship to the intelligence community 

Furthermore, the DOJ appears to be the executive agency best-suited to resolve the “Goldilocks” problem, mentioned above, by combining knowledge and understanding of the intelligence agencies with sufficient distance to judge their conduct independently. 

As noted, EU bodies questioned whether the Department of State, a diplomatic agency, was a “suitable department” for the redress role. The DOJ is more suitable in part because of its experience with the Watergate independent counsel and, for instance, with Immigration Judges as independent triers of fact. 

At the same time, a FIRA located within the DOJ would be well-placed to have knowledge about the intelligence community. The DOJ provides extensive oversight of intelligence activities through its National Security Division, including by issuing regular reports concerning classified activities of the Foreign Intelligence Surveillance Court. Other DOJ components, such as the Office of Privacy and Civil Liberties, also have access to classified information including Top Secret information about intelligence agency activities. In addition, an Executive Order could empower the DOJ to enlist other agencies, such as the Office of the Director of National Intelligence, to gain information from the intelligence community.

II. Creating Effective Powers for the Independent Redress Authority

A DOJ regulation creating an independent redress authority within that executive department must be accompanied by additional government-wide steps for effectively investigating redress requests and for issuing decisions that are binding on the entire intelligence community. The DOJ-issued regulation would define the interaction of FIRA with other parts of that Department.  For the overall mechanism to be effective in other parts of the U.S. government, however, the key legal instrument would be a separate Executive Order issued by the President. In issuing an EO, the President would act within the scope of his overall executive power to define legal limits, such as by requiring intelligence agencies to be bound by FIRA decisions. 

1. Identifying the problems of effectiveness concerning the previous Privacy Shield mechanism

To meet the EU requirement of effective remedial powers, the new redress system would need to have two types of effective powers that the Privacy Shield Ombudsperson lacked. 

a) Investigative Powers 

The WP29 wrote in 2016: 

“concerns remain regarding the powers of the Ombudsperson to exercise effective and continuous control. Based on the available information (…), the WP29 cannot come to the conclusion that the Ombudsperson will at all times have direct access to all information, files and IT systems required to make his own assessment” (p. 51).

In 2019, the European Data Protection Board (EDPB) likewise stated: 

“[T]he EDPB is not in a position to conclude that the Ombudsperson is vested with sufficient powers to access information and to remedy non-compliance, (…)” (§103). 

b) Decisional Powers 

In Schrems II, the CJEU stated:  

“Similarly, (…) although recital 120 of the Privacy Shield Decision refers to a commitment from the US Government that the relevant component of the intelligence services is required to correct any violation of the applicable rules detected by the Privacy Shield Ombudsperson, there is nothing in that decision to indicate that that ombudsperson has the power to adopt decisions that are binding on those intelligence services and does not mention any legal safeguards that would accompany that political commitment on which data subjects could rely” (§196).

The EDPB similarly concluded in 2019:

“Based on the available information, the EDPB still doubts that the powers to remedy non-compliance vis-à-vis the intelligence authorities are sufficient, as the ‘power’ of the Ombudsperson seems to be limited to decide not to confirm compliance towards the petitioner. In the understanding of the EDPB, the (acting) Ombudsperson is not vested with powers, which courts or other similarly independent bodies would usually be granted to fulfil their role” (§102).

2. How a Presidential Executive Order Could Confer These Powers upon FIRA 

These passages describe key EU legal requirements for a new redress system. President Biden could satisfy them by issuance of an Executive Order (EO).  The American Bar Association has published a useful overview explaining that an EO  is a “signed, written, and published directive from the President of the United States that manages operations of the federal government.” EOs “have the force of law, much like regulations issued by federal agencies.”  Once in place, only “a sitting U.S. President may overturn an existing executive order by issuing another executive order to that effect.”

As a general matter, the President has broad authority under Article II of the Constitution to direct the executive branch. In addition, the Constitution names the President as Commander-in-Chief of the armed forces, conferring additional responsibilities and powers with respect to national security. The President’s powers in some instances may be limited by a properly enacted statute, but we are not aware of any such limits relevant to redress.

Not only does the President enjoy broad executive powers, but he or she also may decide to limit how he or she exercises such powers through an EO which, under the law, would govern until and unless withdrawn or revised. Thus, the President would appear to have considerable discretion to instruct the intelligence community, by means of an EO, to cooperate in investigations and to comply with binding rulings concerning redress.

As with the DOJ regulation, the U.S. Government could commit in the EU/US adequacy arrangement to maintain this EO in force. But how could the EU and the general public have confidence that the EO is actually being followed by intelligence agencies? First, FIRA will be able to assess whether this is the case, backed by a possible provision in the Presidential EO fixing penalties for non-compliance with its orders (in the same way that legislation in European countries fixes penalties for failure to comply with the orders of equivalent redress bodies – for an example, see art. L 833-3 of the French surveillance law). Furthermore, U.S. intelligence agencies are already subject to parliamentary oversight, including on classified matters, by the Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence. Oversight might also be performed by other governmental actors that have access to classified materials, such as an agency official called the Inspector General or the Civil Liberties and Privacy Office, or by the independent Privacy and Civil Liberties Oversight Board (whose new Director, Sharon Bradford Franklin, recently confirmed by the Senate, is known for her commitment to strong surveillance safeguards and oversight). Oversight may be performed at the Top Secret or other classification level, with unclassified summaries released to the public.

III. Creating Judicial Review of the Decisions of the Independent Redress Authority

Finally, we turn to whether and how decisions of FIRA may be reviewed judicially. We first explain why judicial review in these circumstances may not be required under EU law.  Nonetheless, to minimize the risk of invalidation by the CJEU, we set forth possible paths for creating U.S. judicial review.

1. Reasons that judicial redress is not necessarily required 

There are at least four reasons to believe that EU law does not necessarily require judicial redress if FIRA is independent and capable of exercising the quasi-judicial functions described above by adopting decisions binding on intelligence agencies.

First, as explained in our earlier article, Article 13 of the European Convention on Human Rights (ECHR) may be the appropriate legal standard for the European Commission to use in deciding upon the “essential equivalence” of third countries for international data transfer purposes. Article 13 only requires an independent “national authority”; thus a non-judicial body could suffice.

Second, the Advocate General in Schrems II seemed to give the impression that judicial review should only be required in a case where the redress body itself is not independent: 

“in accordance with the case-law, respect for the right guaranteed by Article 47 of the Charter thus assumes that a decision of an administrative authority that does not itself satisfy the condition of independence must be subject to subsequent control by a judicial body with jurisdiction to consider all the relevant issues. However, according to the indications provided in the ‘privacy shield’ decision, the decisions of the Ombudsperson are not the subject of independent judicial review.” (§340, emphasis added)

Since FIRA, unlike the Ombudsperson, will not only enjoy independence but also will exercise quasi-judicial functions by adopting decisions binding on intelligence agencies, separate judicial redress may not be required.

Third, this is exactly what seems to be happening in practice in EU Member States themselves. FRA noted in its 2017 comparative study on surveillance that, in most European countries, redress bodies are non-judicial bodies. It also observed that such non-judicial remedies appear better than judicial ones, because their procedural rules are less strict, proceedings are faster and cheaper, and non-judicial avenues generally offer greater expertise than judicial mechanisms. Furthermore, FRA found that “across the EU only in a few cases can decisions of non-judicial bodies be reviewed by a judge” (ibid., p.114 – and table pp.115-116). Requiring the U.S. to provide judicial redress would thus be more than what exists in many Member States.(5) 

Fourth, these observations are even more relevant when one focuses on international surveillance. In France, for instance, an individual may file complaints with the Supreme Administrative Court (Conseil d’Etat) on the basis of the domestic surveillance law of July 2015. There is no possibility to do so under the international surveillance law of November 2015, however, since that law gives only the CNCTR, an administrative authority, the power to initiate (under some conditions) proceedings in the Conseil d’Etat – but does not confer this right directly upon an individual.(6)

Of course, actual practice under Member States law does not necessarily mean that a third country’s similar practices meet the “essential equivalence” standard of EU fundamental rights law, since the relevant comparator seems to be European Law standards – not Member States’ practices which do not always necessarily meet these standards.(7) Nonetheless, demanding from the U.S. a much more elaborate process than what already exists for international surveillance in most EU Member States might be complicated, particularly if there is an effective independent administrative regime in the U.S. exercising quasi-judicial functions.

2. Ultimate judicial redress will however help ensure meeting CJEU requirements

Despite these indications that European law may not require judicial redress, we acknowledge that the position of the CJEU on this point remains ambiguous.  

As indicated in our first article, the CJEU in Schrems II expressly used the term “body,” giving the impression that an independent national administrative authority (in conformity with the requirements of Art. 13 ECHR) could be enough to fulfill the adjudicatory function. As we explained, this is how the EDPB seems to have read Schrems II in its 2020 European Essential Guarantees Recommendations. Long-time EU data protection official Christopher Docksey concurs as well. 

However, it is also true that the Schrems II judgment contains multiple references to judicial redress. It refers to “the premiss [sic] that data subjects must have the possibility of bringing legal action before an independent and impartial court” (§194); “the right to judicial protection” (ibid.); “data subject rights actionable in the courts against the US authorities” (§192); “the judicial protection of persons whose personal data is transferred to that third country” (§190); and “the existence of such a lacuna in judicial protection in respect of interferences with intelligence programmes” (§191). It is not clear whether these statements should also apply (following the Advocate General’s logic) to an independent redress body such as FIRA capable of exercising quasi-judicial functions, in contrast to the Ombudsperson examined by the CJEU. Nevertheless, the CJEU judgment might be read as requiring at least some form of ultimate judicial control of a redress authority’s decisions. This also appears to be the interpretation of a senior Commission official.

In light of these statements, it would be prudent for the U.S. to provide for some form of ultimate judicial review of FIRA decisions, to increase the likelihood of passing the CJEU test in an eventual Schrems III case.  

3. A path to ultimate judicial review of FIRA decisions

As we explained in our first article, the U.S. constitutional doctrine of standing poses a major hurdle in creating a pathway to judicial redress. In the 2021 TransUnion case, the Supreme Court held that plaintiffs incorrectly identified by a credit reporting agency as being on a government terrorism watch list had not shown the required “injury in fact”. This lack of injury in fact, and thus lack of standing, existed even though the underlying statute appeared to confer the right to sue. While one might find this U.S. constitutional jurisprudence unduly restrictive, any new Privacy Shield agreement must take it into account.

There might be, however, another way to provide an individual with judicial redress. An unsatisfied individual could appeal to a federal court an administrative disposition of a redress petition on the grounds that FIRA has failed to follow the law. In such a case an individual would not be challenging the surveillance actions of intelligence agencies (for which injury in fact may be impossible to satisfy) as such; instead, the suit would allege the failure of an independent administrative body (FIRA) to take the actions required by law.  

As Propp and Swire have written previously, one useful precedent is the U.S. Freedom of Information Act (FOIA), under which any individual can request an agency to produce documents, without first having to demonstrate that he or she has suffered particular “injury in fact”. The agency is then required to conduct an effective investigation and to explain any decision not to supply the documents. After the agency responds, the individual may appeal the decision to federal court. The judge then examines the quality of the agency’s investigation to ensure compliance with law, and the judge can order changes in the event of mistakes by the agency.

Analogously, a European individual, unsatisfied by FIRA’s investigation and decision, could bring a challenge in court. Taking into consideration that FOIA concerns a distinct question, the appeal against FIRA’s decisions would be based upon the umbrella U.S. Administrative Procedure Act (APA). The APA provides generally for judicial review of an agency action that is “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.” Since both a regulation and an Executive Order have the force of law, an APA-based appeal could examine whether the FIRA decision and its implementation were “in accordance with law.” Since the APA applies generally, it could operate in these circumstances without need for an additional federal statute. In addition, U.S. federal courts deciding APA-based appeals already have methods for handling classified national security information. For instance, they access classified information under the Classified Information Procedures Act (CIPA).

Including judicial review under the APA would be a good faith effort by the U.S. government to respond to ultimate EU law concerns. However, since the FIRA approach has not been judicially tested, some legal uncertainty concerning standing to bring the APA suit in federal court would remain. FOIA practice provides a good legal basis for meeting the standing requirement through challenging agency action itself, but TransUnion highlighted the level of privacy injuries which must be shown to enable a decision in federal court.  

Conclusion

In these two articles, we have sought to examine rigorously and fully the requirements of EU law with respect to redress. We also have examined U.S. constitutional law, explaining both the difficulties surrounding some solutions (for instance the problem of standing for judicial redress) and the opportunities created by some precedents (such as the protection offered to independent investigative bodies by decisions of the U.S. Supreme Court).

We are not aware of any other published proposal that wrestles in such detail with the complexity of EU and U.S. law requirements for foreign intelligence redress. We hope that our contribution helps fill this gap and presents a promising path permitting resolution of the “redress challenge” in the EU/US adequacy negotiations.

Much will depend on the details of construction and implementation for this protective mechanism. What our articles contribute is the identification of three fundamental building blocks on which a solid and long-lasting transatlantic adequacy agreement could stand. We have shown that there is a promising way to create, by non-statutory means, an independent redress authority and to provide the necessary investigative and decisional powers to respond to redress requests by European persons. We also suggest a way to successfully address the problem of standing and thereby to provide for an ultimate possibility of judicial control. Using these building blocks to create an effective redress mechanism could enable the U.S. and the EU not only to establish a solid transatlantic adequacy regime capable of resisting CJEU scrutiny but also to advance human rights more broadly.

Notes

(1) In 2020, as discussed here, the Supreme Court addressed the President’s removal power in the Seila Law LLC case, finding unconstitutional Congress’ establishment of independence for an agency head. At the same time, the Court reaffirmed that protections against removal can exist for “inferior officers” (roughly, officials appointed through a civil service process rather than by the President) and for multi-member bodies. Either or both of these categories may apply to FIRA members. In 2021, the Supreme Court, in U.S. v. Arthrex, struck down a system of independent Administrative Patent Judges. The approach in our article would be different since the President here issues an executive order, and thus the President serves as the “politically accountable officer” required by the Supreme Court in Arthrex.

(2) More specifically, there have been proposals for providing redress for surveillance conducted pursuant to Section 702 FISA, such as here and here. However, an additional “thorny issue is whether international surveillance, conducted by US intelligence agencies outside the territory of the US on the basis of Executive Order 12333 (EO 12333) should be (or not) part of the adequacy assessment.” Although arguments exist under EU law that redress for EO 12333 surveillance might be excluded from the assessment, this article proceeds on the understanding that the current negotiations will only succeed if EO 12333 surveillance is covered as well. We are not aware of any published proposal that would do so, and seek in this article to present such an approach. For example, the proposal here would apply to requests for redress concerning surveillance conducted under EO 12333, such as programs recently declassified by the U.S. government.

(3) It appears that terms such as “adjudication” and “court” may be understood somewhat differently in the U.S. compared with the EU, creating a risk of confusion in proposals concerning redress. Under U.S. law, many federal agencies, including the Federal Trade Commission and Department of Justice, routinely conduct what is called “adjudication.” Many federal agencies have Administrative Law Judges, defined by the U.S. government as “independent impartial triers of fact in formal proceedings.” By contrast, in Europe, “courts” and “judges” generally exist outside of the Executive. Therefore, our discussion of FIRA avoids words such as “adjudication” that may be understood differently in different legal systems.

(4) In the 1954 case, Accardi v. Shaughnessy, the Attorney General by regulation had delegated certain of his discretionary powers to the Board of Immigration Appeals. The regulation required the Board to exercise its own discretion on appeals for deportation cases. As noted in U.S. v. Nixon, the Supreme Court in Accardi had held that, “so long as the Attorney General’s regulations remained operative, he denied himself the authority to exercise the discretion delegated to the Board even though the original authority was his and he could reassert it by amending the regulations.”

(5) For a recent description of the German system, see here by Daniel Felz.

(6) This finding was confirmed in a June 2018 decision of the Conseil d’Etat, following a request brought before that court by Member of the European Parliament Sophie In ’t Veld (analysis here). The court also rejected the possibility for the claimant to challenge indirectly an alleged misuse of power resulting from the failure of the chairman of the CNCTR to refer the matter to the Conseil d’Etat. However, as stated by the CNCTR (here, at 46), this is one of the points raised in the (no fewer than) 14 challenges currently pending before the ECtHR against the French surveillance laws.

(7) See for instance this study by I. Brown and D. Korff arguing that “the EU institutions should stand up for the rule of law and demand the member states and third countries bring their practices in line with those standards” (at 111).

(EUROPEAN LAW BLOG) EU/US Adequacy Negotiations and the Redress Challenge: Whether a New U.S. Statute is Necessary to Produce an “Essentially Equivalent” Solution (1)

31 JANUARY 2022 / BY THEODORE CHRISTAKIS, KENNETH PROPP AND PETER SWIRE

Must the U.S. Congress change statutory law to solve the major issue of “redress” in the EU-US adequacy negotiations? This is a crucial question, especially since a series of political, pragmatic and even legal/constitutional difficulties mean that the U.S. might not be able to come up with a short-term statutory solution for redress. In this article we analyse this question for the first time in detail, and argue that, provided the U.S. is able to address the deficiencies highlighted by the Court of Justice of the European Union (CJEU) in its Schrems II judgment (independence of the redress body; ability to substantively review the requests; and authority to issue decisions that are binding on the intelligence agencies), then relying on a non-statutory solution could be compatible with the “essential equivalence” requirements of Article 45 of the EU’s General Data Protection Regulation (GDPR). In a second, forthcoming article, we set forth specific elements of a novel non-statutory solution and assess whether it would meet the substantive European legal requirements for redress.

The CJEU issued its Schrems II judgment in July 2020, invalidating the EU/U.S. Privacy Shield and creating uncertainty about the use of Standard Contractual Clauses (SCCs) for transfers of personal data to all third countries (see analysis here, here, here, here and here). In light of the legal uncertainty and the increasing tensions concerning transatlantic data transfers resulting from the intensification of enforcement actions by European data protection authorities (DPAs) since Schrems II (such as this and this), there is both strong reason to reach a new EU/U.S. agreement and a stated willingness on both sides to do so. The European Commission, understandably, has emphasized though that there is no “quick fix” and that any new agreement must meet the full requirements of EU law.

This article focuses on one of the two deficiencies highlighted by the CJEU: the need for the U.S. legal system to provide a redress avenue accessible to all EU data subjects. We do not address here the other deficiency – whether U.S. surveillance statutes and procedures sufficiently incorporate principles of ‘necessity and proportionality’, also required under EU law.

We concentrate our inquiry, from both a U.S. and a European law perspective, on whether the U.S. Congress would necessarily have to enact a new statute in order to create an adequate redress mechanism. Part I of this article explains the pragmatic and political reasons why it would be difficult to adopt a new U.S. statute, and especially to do so quickly. Part II examines the U.S. constitutional requirements for “standing”, and explains the legal difficulties and uncertainty concerning proposals, such as the one advanced by the American Civil Liberties Union (ACLU), to provide redress through an individual action in U.S. federal courts. Part III then addresses European law concerning whether a statute is necessary, concluding that the substance of the protections of fundamental rights and respect of the essence of the right to an effective remedy are the key considerations, rather than the form by which an independent and effective redress mechanism would be created.

This article will be followed by a second article exploring whether a non-statutory solution for redress is capable of satisfying the strict substantive standards required by EU law.

I. Political Difficulties of an Immediate Statutory Approach to Redress

There are important advantages to enacting a new U.S. statute to provide redress:

  • There is greater democratic legitimacy if the legislature passes a statute.
  • A law can set limits on Executive discretion that may only be changed by a subsequent statute.
  • A law can fix in a stable, permanent and objective way the rules and procedures for the appointment of the members of the redress body, the duration of their mandate, and guarantees concerning their independence.

However, there are strong pragmatic and political reasons why it would be difficult to enact a new statute in the short term to create a new redress mechanism.

  • First, it is no secret that the U.S. Congress currently finds it difficult to pass legislation generally, with partisan battles and procedural obstacles slowing passage of even essential legislation. As Politico recently reported, “it is increasingly unlikely that Congress will pass any digital-focused bills before lawmakers shut down ahead of November’s midterms”.
  • Second, legislative reform of U.S. surveillance laws is a particularly complex and contentious issue. The national security community in the U.S. has little appetite for sweeping reforms, and even a strong push from the White House may not be sufficient to move such legislation through Congress. In Europe as well, substantial reform of surveillance laws requires a lot of time to seek the necessary political consensus (see for instance this).[i]
  • Third, the international dimensions of a redress reform make legislation even more difficult. If a new redress mechanism benefits only EU data subjects, then it is hard to explain to Congress why they should get greater rights than Americans. On the other hand, if redress rights were also to be conferred on U.S. data subjects, then a novel and complex set of institutional changes to the overall U.S. surveillance system would be needed.
  • Fourth, it would be difficult for U.S. legislators to vote for a statute without knowing in advance whether the CJEU will accept it as good enough.
  • Fifth, Congress historically has been reluctant to regulate in great detail how the President conducts foreign policy and protects national security. For instance, Congress has adopted detailed statutes (such as the Foreign Intelligence Surveillance Act, FISA) concerning “compelled access”, e.g. how intelligence agencies can request data from service providers. By contrast, it has rarely enacted any statute that applies to “direct” surveillance conducted outside of the U.S. under the standards of Executive Order (EO) 12,333. Furthermore, specific actions under that Executive Order have never, so far as we know, been subject to review by federal judges.

For these reasons, we believe at a pragmatic level that it would be extremely difficult for Congress to promptly pass legislation to provide redress to EU persons. By contrast, if an adequate fix to the redress problem can be created at least in large part without new legislation, then it would be considerably easier for Congress subsequently to enact a targeted statute ratifying the new mechanism, perhaps adding other provisions to perfect an initial non-statutory approach. That sort of legislation is far easier to enact than a law written in Congress from a blank page.

II. Constitutional Difficulties for a U.S. Statutory Approach to Redress: The Problem of Standing

These political and pragmatic reasons alone would justify U.S. government and European Commission negotiators seeking to address the redress deficiencies highlighted in Schrems II through a non-statutory solution. But, in addition, there is a constitutional dimension. The U.S. Constitution establishes a “standing” requirement as a prerequisite to a case being heard before judges in the federal court system. Any new U.S. redress mechanism must be consistent with the U.S. Constitution, just as it must meet EU fundamental rights requirements.

U.S. standing doctrine derives from Article III of the U.S. Constitution, which governs the federal court system. The federal judicial power extends only to “cases” and “controversies” – meaning that there has to be an “injury in fact” in order to have a case heard. A related doctrine is the ban on issuance of “advisory opinions” by federal judges, a position of the Supreme Court dating back to the first President, George Washington, and defined most clearly in Muskrat v. United States. In sum, a statute that creates a cause of action in the federal courts is unconstitutional unless it meets the requirements of standing and injury in fact, and does not violate the prohibition on advisory opinions.

The ACLU in 2020 called for a “standing fix” to enable suit in federal court “where a person takes objectively reasonable protective measures in response to a good-faith belief that she is subject to surveillance.” However, since the right to redress under European law also exists for individuals who did not take protective measures, the proposal seems too narrow to meet the CJEU requirements.

A second difficulty with the ACLU approach is that the Supreme Court made standing related to privacy injuries even more difficult to establish in its TransUnion LLC v. Ramirez decision of June 2021. As discussed here, the majority in that case made it significantly more difficult for privacy plaintiffs henceforth to sue in federal court. The Court restated its 2016 Spokeo holding that a plaintiff does not automatically satisfy “the injury-in-fact requirement whenever a statute grants a person a statutory right and purports to authorize that person to sue to vindicate that right.” More bluntly, the Court stated: “An injury in law is not an injury in fact”. [ii] The majority in TransUnion found “concrete harm” for some plaintiffs but not others. Even some individuals whose credit histories were badly mistaken – wrongly flagging them as “potential terrorists” on a government list – could not rely on a right of action created by statute. In sum, there would be substantial legal uncertainty surrounding a U.S. statute conferring upon EU data subjects the right to go straight to U.S. courts to obtain redress (for a similar conclusion see here).

The standing objection applies only to direct access to federal courts, and not to an independent non-judicial redress authority. However, Congress might be reluctant to intervene ex nihilo in a field such as “direct” foreign surveillance conducted under EO 12,333, which traditionally belongs to the Executive power under the U.S. Constitution. Congress might be more willing to act and endorse by statute an effective redress mechanism if, as a first step, the Executive branch itself had created such an independent non-judicial redress authority within the Executive branch. In any case, such a statute does not appear to be a necessary precondition under U.S. law for creating a redress system.

III. Is a Non-Statutory Approach to Redress Compatible with European Law?

Since the U.S. government might not be able to produce a short-term statutory solution for redress, the question then arises as to whether a non-statutory approach would be acceptable under EU law. In order for the European Commission to be able to issue an adequacy decision under Article 45 of the GDPR, the U.S. must ensure an “adequate” level of protection.

If the U.S. is able to address by non-statutory means the deficiencies highlighted by the CJEU in Schrems II (mentioned above), then such a solution could be compatible with the “essential equivalence” requirements of Article 45 of the GDPR. We defer for now the question of whether a non-statutory path would indeed be able to address these substantive issues, instead focusing only on whether a non-statutory approach in principle is compatible with European law.

A. The Starting Point: The Right to Effective Remedy Under European Human Rights Law

What we call “redress” in the context of transatlantic adequacy negotiations corresponds to the “right to effective remedy” under European law. Article 47(1) of the Charter of Fundamental Rights of the European Union (“Charter”) states that:

“Everyone whose rights and freedoms guaranteed by the law of the Union are violated has the right to an effective remedy before a tribunal in compliance with the conditions laid down in this Article.”

The official explanations of Article 47 make clear that this article is “based on Article 13 of the European Convention on Human Rights” (ECHR), according to which:

“Everyone whose rights and freedoms as set forth in this Convention are violated shall have an effective remedy before a national authority notwithstanding that the violation has been committed by persons acting in an official capacity.”

A comparison of the two articles reveals that in EU law the protection is more extensive than in ECHR law, since the former guarantees the right to an effective remedy before a “tribunal” while the latter only refers to a “national authority”. The term “tribunal” seems to refer to a judicial body, as the official explanation suggests. This is confirmed by reference to non-English language versions of Article 47(1), which translate the word “tribunal” as “court” (e.g. “Gericht” in German and “Gerecht” in Dutch). It is also evident that neither Article 47(1) of the Charter nor Article 13 of the ECHR require that a redress body be created by statute.

However, Article 47(2) of the Charter adds further, complicating requirements:

“Everyone is entitled to a fair and public hearing within a reasonable time by an independent and impartial tribunal previously established by law. Everyone shall have the possibility of being advised, defended and represented”.

As the official explanations point out, this second paragraph “corresponds to Article 6(1) of the ECHR”, which reads as follows:

“In the determination of his civil rights and obligations or of any criminal charge against him, everyone is entitled to a fair and public hearing within a reasonable time by an independent and impartial tribunal established by law. Judgment shall be pronounced publicly but the press and public may be excluded from all or part of the trial in the interests of morals, public order or national security in a democratic society, where the interests of juveniles or the protection of the private life of the parties so require, or to the extent strictly necessary in the opinion of the court in special circumstances where publicity would prejudice the interests of justice”.

Both Article 47(2) of the Charter and Article 6(1) of the ECHR thus require “an independent and impartial tribunal established by law”. Yet, what is the exact relationship between the provisions on “effective remedy” (Article 47(1) of the Charter and Article 13 of the ECHR), and those on “a fair and public hearing by independent and impartial tribunals established by law” (Article 47(2) of the Charter and Article 6(1) of the ECHR)?

A restrictive analysis would regard the two sets of articles as entirely interlinked, in which case redress bodies would always have to be “established by law”.

A second, more flexible and plausible interpretation would maintain that this latter set of requirements constitutes lex specialis in relation to the former; in other words, the “right to effective remedy” (“redress”) is broader than the “right to a fair trial”. This interpretation finds support in the ECHR, which textually separates the two sets of rights and requirements (Articles 13 and 6(1)). It is also confirmed by the official guide to Article 13, which states that “Article 6 § 1 of the Convention is lex specialis in relation to Article 13” (here, at 41), and by the fact that Article 6(1) is limited in scope to civil rights and criminal charges. It would therefore be difficult to merge the obligation of states to put in place an “effective remedy” with the “established by law” requirement, as the latter requirement only concerns the right to a fair trial before a “tribunal” under Article 6(1) – and not the broader right of redress before a “national authority” under Article 13. It seems then that, at least under the ECHR, a redress body need not always be a judicial body nor be “established by law”, provided that it satisfies the substantive requirements of the “right to effective remedy”. As we will see, the standards of the ECHR have always been particularly relevant for the European Data Protection Board (EDPB) in assessing the “essential equivalence” of “redress” mechanisms under Article 45 of the GDPR.

B. Flexibility Introduced by the “Essentially Equivalent” Standard of EU Data Protection Law

A flexible interpretation of the “effective remedy” requirement is also supported by the “essential equivalence” standard of the GDPR for third countries.

In Schrems I, the Court clearly acknowledged that “the means to which [a] third country has recourse, […] for the purpose of ensuring such a level of protection may differ from those employed within the European Union, […] those means must nevertheless prove, in practice, effective in order to ensure protection essentially equivalent to that guaranteed within the European Union” (§74 of the October 6, 2015 judgment, emphasis added).

The CJEU Advocate General emphasised in his 2019 Schrems II Opinion that the “essentially equivalent” standard “does not mean that the level of protection must be ‘identical’ to that required in the Union”. He explained that:

“It also follows from that judgment, in my view, that the law of the third State of destination may reflect its own scale of values according to which the respective weight of the various interests involved may diverge from that attributed to them in the EU legal order. Moreover, the protection of personal data that prevails within the European Union meets a particularly high standard by comparison with the level of protection in force in the rest of the world. The ‘essential equivalence’ test should therefore in my view be applied in such a way as to preserve a certain flexibility in order to take the various legal and cultural traditions into account” (§§ 248-249, emphasis added).

European data protection authorities had previously endorsed this flexible interpretation of the elements for adequacy. In its 2016 Opinion on Privacy Shield, for instance, the EDPB’s predecessor, the Article 29 Working Party (WP29), emphasised that:

“the WP29 does not expect the Privacy Shield to be a mere and exhaustive copy of the EU legal framework […]. The Court has underlined that the term ‘adequate level of protection’, although not requiring the third country to ensure a level of protection identical to that guaranteed in the EU legal order, must be understood as requiring the third country in fact to ensure, by reason of its domestic law or its international commitments, a level of protection of fundamental rights and freedoms that is essentially equivalent to that guaranteed within the European Union […]” (p. 3).

It is precisely this flexible approach that allowed EU authorities to set aside the requirement that a redress body should be a “tribunal” – despite clear terms to the contrary in Article 47(1) of the Charter. As the EDPB noted in its Recommendations 02/2020 on the European Essential Guarantees for surveillance measures of November 10, 2020 (§47): “an effective judicial protection against such interferences can be ensured not only by a court, but also by a body which offers guarantees essentially equivalent to those required by Article 47 of the Charter” (emphasis added). The EDPB noted that the CJEU itself “expressly” used the word “body” in §197 of Schrems II. Indeed, in all its positions to date on U.S. redress mechanisms, the EDPB has recognised that the applicable standards equate with those in Article 13 of the ECHR, which “only obliges Members States to ensure that everyone whose rights and freedoms are violated shall have an effective remedy before a national authority, which does not necessarily need to be a judicial authority” (ibid., §46, emphasis added).

Therefore, provided that the U.S. redress mechanism meets the substantive requirements of Article 13 ECHR as cited in Schrems II and the EDPB opinions, a judicial body will not necessarily be required, and an “established by law” standard need not be applied in order to meet the “essential equivalence” test. As the astute European legal observer Chris Docksey concluded:

“This could be an opportunity for the CJEU to give meaning to the difference between essential equivalence and absolute equivalence mentioned above when deciding on the standard of individual redress to be applied in the specific case of international transfers. If the content of the right under Article 47 is ensured, then the form should not be an obstacle” (emphasis added).

C. Interpreting “Law” in a Substantive, Not Formal, Sense

European human rights law seems, in fact, to prioritise substance over form even in situations that go beyond an “essential equivalence” assessment. This can be shown by examining interpretations of the “in accordance with the law” requirement found in the ECHR, the Charter and several fundamental EU data protection sources of law, including the GDPR.

ECHR articles concerning human rights, including Article 8 (right to privacy), stipulate that some restrictions to these rights may be acceptable provided they are “in accordance with the law” and “necessary in a democratic society” in order to protect certain legitimate interests (such as national security, public safety, or the prevention of disorder or crime). Similarly, Article 52 of the Charter requires that: “Any limitation on the exercise of the rights and freedoms recognised by this Charter must be provided for by law (…)”.

Under both the Convention and the Charter, however, the term “law” is interpreted in a flexible way. The ECtHR, for instance, has emphasised on multiple occasions that:

“[A]s regards the words “in accordance with the law” and “prescribed by law” which appear in Articles 8 to 11 of the Convention, the Court observes that it has always understood the term “law” in its “substantive” sense, not its “formal” one; it has included both “written law”, encompassing enactments of lower ranking statutes and regulatory measures (…), and unwritten law” (Sanoma Uitgevers B.V. v. the Netherlands, 2010, § 83, emphasis added). See also Sunday Times (No. 1) v. the United Kingdom, 1979, §47.

Similarly, in EU data protection law, both the Law Enforcement Data Protection Directive (LED) and the GDPR also understand the term “law” in its substantive sense. According to Recital 33 of the LED, for instance:

“Where this Directive refers to Member State law, a legal basis or a legislative measure, this does not necessarily require a legislative act adopted by a parliament, without prejudice to requirements pursuant to the constitutional order of the Member State concerned (…)” (emphasis added).

Further, Recital 41 of the GDPR provides:

“Where this Regulation refers to a legal basis or a legislative measure, this does not necessarily require a legislative act adopted by a parliament, without prejudice to requirements pursuant to the constitutional order of the Member State concerned. However, such a legal basis or legislative measure should be clear and precise and its application should be foreseeable to persons subject to it, in accordance with the case-law of the [CJEU] and the European Court of Human Rights” (emphasis added).

This flexible interpretation of the term “law” in the data protection context for assessing the incursion of state interests on fundamental rights is formally separate from the requirement in Article 47(2) of the Charter that a tribunal be “previously established by law”. However, this analytic flexibility is consistent with how EU bodies have interpreted the “essentially equivalent” standard, including in the context of the Privacy Shield. It therefore supports the conclusion that a U.S. decision to put in place an independent and effective redress mechanism for surveillance would satisfy the requirements of European law even if it does not involve the adoption of a statute. This conclusion is also supported by the European DPAs’ previous positions concerning the Privacy Shield Ombudsperson.

D. The CJEU and EU DPAs Did Not Object to Non-Statutory Redress

The fact that the Privacy Shield Ombudsperson was not created by statute did not seem to be a primary concern for either the CJEU or the EDPB in assessing whether this mechanism offered “essentially equivalent” protection to European law.

In Schrems II the Court did not identify as a deficiency that the Ombudsperson mechanism was not created by statute. Rather, the problems detected were that there was “nothing in [the Privacy Shield Decision] to indicate that the dismissal or revocation of the appointment of the Ombudsperson is accompanied by any particular guarantees” and, also, that there was “nothing in that decision to indicate that the ombudsperson has the power to adopt decisions that are binding on those intelligence services (…)” (§§ 195-196). Thus, provided there is a way to fix these deficiencies by non-statutory means, the new redress solution could pass the “essential equivalence” test.

The EDPB also seems to support this argument. In its 2016 Opinion on Privacy Shield, the WP29 began by stating that:

“in addition to the question whether the Ombudsperson can be considered a ‘tribunal’, the application of Article 47 (2) Charter implies an additional challenge, since it provides that the tribunal has to be ‘established by law’. It is doubtful however whether a Memorandum which sets forth the workings of a new mechanism can be considered ‘law’” (p. 47).

The WP29 therefore seemed to link Articles 47(1) and 47(2). However, it did not appear to consider the legal form by which the Ombudsperson was created as an insuperable obstacle. It stated:

“As a consequence – with the principle of essential equivalency in mind – rather than assessing whether an Ombudsperson can formally be considered a tribunal established by law, the Working Party decided to elaborate further the nuances of the case law as regards the specific requirements necessary to consider ‘legal remedies’ and ‘legal redress’ compliant with the fundamental rights of Articles 7, 8 and 47 Charter and Article 8 (and 13) ECHR” (ibid., emphasis added).

The WP29 then went on to analyse the requirements of European law concerning the “right to effective remedy”, focusing primarily on the case law of the ECtHR, and concluded that the Ombudsperson did not meet these requirements, essentially for the same reasons mentioned by the CJEU in the Schrems II Judgment.

In their subsequent assessments of Privacy Shield, the WP29 and the EDPB arrived at the same conclusion. They did not consider that the means by which the Ombudsperson was created represented an obstacle to passing the “essentially equivalent” test. On the contrary, the EDPB “welcomed the establishment of an Ombudsperson mechanism as a new redress mechanism” (see for instance here, §99) and repeated that “having analysed the jurisprudence of the ECtHR in particular”, it “favored an approach which took into account the powers of the Ombudsperson” (see here, p.19).

Similarly, the European Data Protection Supervisor (EDPS) did not oppose the creation of the Ombudsperson on the grounds that it was done in a non-statutory way. On the contrary, he argued that “in order to improve the redress mechanism proposed in the national security area, the role of the Ombudsperson should also be further developed, so that she is able to act independently not only from the intelligence community but also from any other authority” (here, at 8, emphasis added).

Conclusion

In sum, European law is flexible in interpreting whether the United States must adopt a new statute to meet redress requirements, especially when the question is viewed through the “essential equivalence” prism of data protection. Substance prevails over form. It remains true that a statutory approach would in abstracto be the easiest way for the United States to establish a permanent and independent redress body for effectively reviewing complaints and adopting decisions that bind intelligence services. However, when one takes into consideration the political, practical and constitutional difficulties confronting negotiators, it makes sense to achieve the same results in a different way.

In a second article, to be published shortly, we will detail specific elements of a non-statutory solution and assess whether it would meet the substantive European requirements on redress.

[i] As this report shows, even in a country like Germany, which is particularly sensitive to intelligence law questions, the major Signals Intelligence (SIGINT) reform did not provide any judicial redress options for non-Germans: “There is no legally defined path for foreign individuals, such as journalists abroad, who want to find out if their communications have been collected in SIGINT operations and, if so, to verify whether the collection and processing of their data was lawful. What is more, the legislators opted to explicitly waive notification rights for foreigners regarding the bulk collection of their personal data.” (p. 63)

[ii] The European Court of Human Rights has developed jurisprudence that is more flexible than U.S. standing law in terms of who may bring a suit. European human rights law has accepted since Klass and Others v. Germany (1978) that an individual may, under certain conditions, claim to be the victim of a violation occasioned by the mere existence of legislation permitting secret measures of surveillance, without having to allege that such measures were in fact applied to him or that he has been subject to a concrete measure of surveillance (the famous theory of the “potential victim” of a human rights violation; see here, paras 34-38 and here, p. 15 for an updated analysis). Notwithstanding this greater flexibility in European law, we reiterate that the limits on U.S. standing are a matter of U.S. constitutional law, which cannot be overruled by a statute enacted by Congress.

Worth Reading: “Understanding EU data protection policy”

European Parliamentary Research Service (EPRS): Policy Briefing

Summary: The datafication of everyday life and data scandals have made the protection of personal information an increasingly important social, legal and political matter for the EU. In recent years, awareness of data rights and expectations for EU action in this area have both grown considerably. The right to privacy and the right to protection of personal data are both enshrined in the Charter of Fundamental Rights of the EU and in the EU Treaties. The entry into force of the Lisbon Treaty in 2009 gave the Charter the same legal value as the Treaties and abolished the pillar structure, providing a stronger basis for a more effective and comprehensive EU data protection regime.

In 2012, the European Commission launched an ambitious reform to modernise the EU data protection framework. In 2016, the co-legislators adopted the EU’s most prominent data protection legislation – the General Data Protection Regulation (GDPR) – and the Law Enforcement Directive. The framework overhaul also included adopting an updated Regulation on Data Protection in the EU institutions and reforming the e-Privacy Directive, which is currently the subject of negotiation between the co-legislators. The European Parliament has played a key role in these reforms, both as co-legislator and author of own-initiative reports and resolutions seeking to guarantee a high level of data protection for EU citizens. The European Court of Justice plays a crucial role in developing the EU data protection framework through case law. In the coming years, challenges in the area of data protection will include balancing compliance and data needs of emerging technologies, equipping data protection authorities with sufficient resources to fulfil their tasks, mitigating compliance burdens for small and medium-sized enterprises, taming digital surveillance and further clarifying requirements of valid consent. (This is an updated edition of a briefing written by Sofija Voronova in May 2020.)

LINK TO THE FULL TEXT

VERFASSUNGSBLOG: A cautious green light for technology-driven mass surveillance

The Advocate General’s Opinion on the PNR Directive

by Christian Thönnes

Yesterday, on 27 January 2022, Advocate General (AG) Pitruzzella published his Opinion (“OP”) in the Court of Justice of the European Union’s (CJEU) preliminary ruling procedure C-817/19. The questions in this case pertain to Directive (EU) 2016/681 of 27 April 2016 on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime (in short: PNR Directive) and its compatibility with EU primary law.

In his Opinion (which, besides the Press Release (“PR”), was only available in French at the time of writing), the AG, while criticizing the PNR Directive’s overbroad data retention period and its lack of clarity and precision in certain points, generally considers the PNR Directive to be “compatible with the fundamental rights to respect for private life and to the protection of personal data” (PR). His arguments are not convincing.

Certainly, much more can and will be written about this case in general and the Opinion in particular. This entry can only shine a light on some of the AG’s major arguments. In so doing, it shall point out why, in my opinion, the CJEU would do well not to follow the AG’s recommendations. Instead, I believe the PNR Directive is incompatible with Articles 7 and 8 of the EU Charter of Fundamental Rights (CFR). Consequently, it ought to be invalidated.

What the AG has to say about the PNR Directive

The PNR Directive obliges EU Member States to require air carriers to transmit a set of data for each passenger to national security authorities, where they are subjected to automated processing against pre-existing databases (Art. 6 § 3 letter a) and “pre-determined criteria” (Art. 6 § 3 letter b), which contain (allegedly) suspicious flight behaviors (such as a mismatch between luggage, length of stay and destination; see the Commission’s Evaluation Report, point 5.1), in order to identify potential perpetrators of serious crimes or acts of terrorism (a more detailed description of the Directive’s workings can be found in paras 9-18 of the AG’s Opinion or here).
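
To make the mechanics concrete, the following is a minimal, purely illustrative sketch of what rule-based matching against “pre-determined criteria” of this kind might look like. Every field name, threshold and rule below is hypothetical: the Directive prescribes no concrete rule set, and the rules actually used by national Passenger Information Units are not public.

```python
# A minimal, purely illustrative sketch of rule-based screening against
# "pre-determined criteria". All field names, thresholds and rules are
# hypothetical; real PIU systems are not public.
from dataclasses import dataclass

@dataclass
class PassengerRecord:
    name: str
    checked_bags: int
    stay_days: int        # length of stay at the destination, in days
    one_way_ticket: bool

def matches_criteria(p: PassengerRecord) -> bool:
    """Crude 'suspicious behaviour' rule: a long stay with no checked
    luggage, travelling on a one-way ticket."""
    return p.one_way_ticket and p.stay_days >= 14 and p.checked_bags == 0

record = PassengerRecord("J. Doe", checked_bags=0, stay_days=21, one_way_ticket=True)
if matches_criteria(record):
    print(f"initial 'hit': {record.name} -> manual review by PIU staff")
```

Even this toy version shows the design choice at issue: perfectly ordinary travel patterns are recast as indicators of suspicion, and every passenger is run through the filter.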

The AG points to certain (limited) problems with the Directive’s wording. Firstly, he contends that point 12 of Annex I, enabling “General Remarks” to be included in PNR data sets, fails to “satisfy the conditions of clarity and precision laid down by the Charter” (PR, also para 150 OP). He also considers the Directive’s five-year retention period for PNR data excessive and proposes that this period be limited to cases where “a connection is established, on the basis of objective criteria, between those data and the fight against terrorism or serious crime” (PR, also para 245 OP). In addition, he provides clarifying criteria for the relevance of databases under Art. 6 § 3 letter a (para 219 OP), regarding the applicability of the GDPR (para 53 OP) as well as collisions with the Schengen Borders Code (para 283 OP). He also demands that, due to their lack of transparency, (at least some) “machine-learning artificial intelligence systems” (PR) should not be used to formulate pre-determined criteria (para 228 OP).

The most resounding message of his Opinion, however, is that the PNR Directive’s mass retention and processing regime is “relevant, adequate and not excessive in relation to the objectives pursued” (PR) and thus compatible with Articles 7 and 8 CFR. He therefore recommends letting it stand, albeit with some interpretative limitations (para 254 OP).

Incompatibility with Digital Rights Ireland and its successors

The AG’s reasoning in support of the PNR Directive’s proportionality relies on his central finding that “the Court’s case-law on data retention and access in the electronic communications sector is not transposable to the system laid down by the PNR Directive” (PR). He is referring to decisions like Digital Rights Ireland, Tele2 Sverige and Quadrature du Net, in which the CJEU had laid down strict limits on governments’ power to collect and process telecommunications data. Notably, it posited that “the fight against serious crime […] and terrorism […] cannot in itself justify that national legislation providing for the general and indiscriminate retention of all traffic and location data should be considered to be necessary for the purposes of that fight” (Tele2 Sverige, para 103; also Digital Rights Ireland, para 51). Instead, the CJEU required that in order to be considered “limited to what is strictly necessary […] the retention of data must continue nonetheless to meet objective criteria, that establish a connection between the data to be retained and the objective pursued” (Tele2 Sverige, para 110).

Evidently, the PNR Directive would clash with these criteria – were they found to be applicable. The collection and automated processing of PNR data is completely indiscriminate. Given Member States’ universal extension of the Directive to intra-EU flights, it affects all European flight passengers, regardless of their personal histories and independently of a potential increased domestic threat situation (this is proposed as a possible criterion in Quadrature du Net, para 168). The use of pre-determined criteria is not, like the comparison against existing databases, aimed at recognizing known suspects, but at conjuring up new suspicions (see EU Commission PNR Directive Proposal, SEC(2011) 132, p. 12). Also, taking a flight is a perfectly ordinary form of human behavior. There is no empirically demonstrated connection to the perpetration of serious crimes or acts of terrorism (in para 203, the AG presupposes such a “lien objectif” (objective link) without providing any evidence exceeding anecdotal intuitions about terrorism and human trafficking), and the PNR Directive, given its broad catalogue of targeted crimes, is not limited to dangers caused by air traffic. What behavior will be targeted next? Visiting a museum? Going to a rock concert? Belgium, for example, has already expanded the PNR Directive’s scope to international trains, busses and ferries (Doc. parl., Chambre, 2015-2016, DOC 54-2069/001, p. 7).

Good reasons for applicability

It thus is quite clear: should Digital Rights Ireland and its successors apply, the PNR Directive is in trouble. Now, why wouldn’t their criteria be transposable? The AG’s arguments mainly turn on a perceived difference in sensitivity between PNR data and telecommunications meta-data. The latter, the AG explains, contain intimate information about users’ private lives (paras 195, 196) and are almost uncontrollable in their scope and processing, because everyone uses telecommunications (paras 196, 198). Moreover, because they are used for communication, telecommunications data, unlike PNR data, have an intrinsic connection to fundamental democratic freedoms (para 197). PNR data, on the other hand, he opines, are limited to a delineated life domain and to narrower target groups, because fewer people use planes than telecommunications (paras 196, 198).

Under closer examination, this comparison falls apart. Firstly, PNR data contain very sensitive information, too. As the CJEU pointed out in its Opinion 1/15 regarding the once-envisaged EU-Canada PNR Agreement, “taken as a whole, the data may, inter alia, reveal a complete travel itinerary, travel habits, relationships existing between air passengers and the financial situation of air passengers, their dietary habits or state of health” (para 128). Unlike the AG (see para 195 of his Opinion), I can find no remarks in Opinion 1/15 that would relegate PNR data to a diminished place compared to telecommunications data. But secondly, and more importantly, the AG fails to consider other factors weighing on the severity of the PNR Directive’s data processing when compared with the processing under Directive 2006/24/EC and its siblings: the method and breadth of processing, and the locus of storage.

Only a small minority of telecommunication datasets, upon government requests in specific cases (see Articles 4 and 8 of Directive 2006/24/EC), underwent closer scrutiny, while the vast majority remained untouched. Under the PNR Directive, however, all passengers, without exception, are subjected to automated processing. In so doing, the comparison against pre-determined criteria, as the AG points out himself (para 228 OP), can be seen as inviting Member States to use self-learning algorithms to establish suspicious movement patterns. Other EU law provisions, like Art. 22 GDPR or Art. 11 of Directive (EU) 2016/680, as well as comparable decisions by national constitutional courts (BVerfG, Beschluss des Ersten Senats vom 10. November 2020 – 1 BvR 3214/15 –, para 109), are inspired by an understanding that such automated processing methods greatly increase the severity of the respective interferences with fundamental rights. Moreover, while telecommunications data were stored on the servers of telecommunication service providers (to whom users had entrusted these data), PNR data are all transferred from air carriers to government entities and then stored there.

Hence, there are good reasons to assume that the data processing at hand causes even more severe interferences with Articles 7 and 8 CFR than Directive 2006/24/EC did. It thus follows that the case law of Digital Rights Ireland should apply a fortiori.

An inaccurate conception of automated algorithmic profiling and base rate fallacy

There are other problems with the AG’s reasoning; completely untangling all of them would exceed the space available here. Broadly speaking, however, the AG seems to underestimate the intrinsic pitfalls of unleashing predictive self-learning algorithms on data pools like these. The AG claims that the PNR Directive contains sufficient safeguards against false positives and discriminatory results (para 176 OP).

Firstly, it is unclear what these safeguards are supposed to be. The Directive does not enunciate clear standards for human review. Secondly, even if there were more specific safeguards, it is hard to see how they could remedy the Directive’s central inefficiency. That inefficiency does not reside in the text; it resides in the math – and it is called the ‘base rate fallacy’. The Directive forces law enforcement to look for the needle in a haystack: even if their algorithms were extremely accurate, false positives would most likely exceed true positives. Statistics provided by Member States showing extremely high false-positive rates support this observation. The Opinion barely even discusses false positives as a problem (only in an aside in para 226 OP). Also, it is unclear how the antidiscrimination principle of Art. 6 § 4 is supposed to work. While the algorithms in question may be programmed in a way that does not process explicit data points on race, religion, health etc., indirect discrimination is a well-established problem of antidiscrimination law: both humans and algorithms may simply use the next-best proxy trait (see for example Tischbirek, Artificial Intelligence and Discrimination).
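
The base rate problem can be made concrete with a toy calculation. All figures below are assumptions chosen only to illustrate the order of magnitude; they are deliberately generous to the screening system and are not official statistics.

```python
# Toy base-rate calculation; every number is an assumption for illustration.
passengers = 500_000_000   # assumed annual passengers screened EU-wide
true_targets = 1_000       # assumed number of actual serious offenders among them
sensitivity = 0.99         # assumed: 99% of true targets are flagged
specificity = 0.999        # assumed: 99.9% of innocent passengers are NOT flagged

true_positives = true_targets * sensitivity
false_positives = (passengers - true_targets) * (1 - specificity)
precision = true_positives / (true_positives + false_positives)

print(f"true positives:  {true_positives:,.0f}")    # ~990
print(f"false positives: {false_positives:,.0f}")   # ~500,000
print(f"share of flagged passengers who are targets: {precision:.2%}")  # ~0.20%
```

Even under these implausibly favourable accuracy assumptions, roughly five hundred innocent passengers would be flagged for every genuine target – the structural point the Opinion does not engage with.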

Now, the AG attempts to circumvent these problems by reading the PNR Directive in a way that prohibits the use of self-learning algorithms (para 228 OP). But that interpretation, which is vaguely based on some “système de garanties” (“system of guarantees”, para 228 OP), is both implausible – it lacks textual support, and the pile of PNR data is amassed precisely to create a use case for AI at EU borders – and insufficient to alleviate this surveillance tool’s inherent statistical inefficiency.

This cursory analysis sheds light on some of the AG’s Opinion’s shortcomings. It thus follows that the CJEU should deviate from Pitruzzella’s recommendations. The PNR Directive, due to the severity of its effects and its inherent inefficiency in fulfilling its stated purpose, produces disproportionate interferences with Articles 7 and 8 CFR. It ought to be invalidated.

Between 2017 and 2021, the author worked for the German NGO “Gesellschaft für Freiheitsrechte”, among other things, on a similar case (C-148/20 to C-150/20) directed against the PNR Directive.

Does the EU PNR Directive pave the way to mass surveillance in the EU? (soon to be decided by the CJEU…)

Fundamental Rights European Experts Group

(FREE-Group)

Opinion on the broader and core issues arising in the PNR Case currently before the CJEU (Case C-817/19)

by Douwe Korff (Emeritus Professor of International Law, London Metropolitan University; Associate, Oxford Martin School, University of Oxford)

(LINK TO THE FULL VERSION 148 Pages)

EXECUTIVE SUMMARY

(with a one-page “at a glance” overview of the main findings and conclusions)

Main findings and conclusions at a glance

In my opinion, the appropriate tests to be applied to mass surveillance measures such as are carried out under the PNR Directive (and were carried out under the Data Retention Directive, and are still carried out under the national data retention laws of the EU Member States that continue to apply in spite of the CJEU case-law) are:

Have the entities that apply the mass surveillance measure – i.e., in the case of the PNR Directive (and the DRD), the European Commission and the EU Member States – produced reliable, verifiable evidence:

  • that those measures have actually, demonstrably contributed significantly to the stated purpose of the measures, i.e., in relation to the PNR Directive, to the fight against PNR-relevant crimes (and in relation to the DRD, to the fight against “serious crime as defined by national law”); and
  • that those measures have demonstrably not seriously negatively affected the interests and fundamental rights of the persons to whom they were applied?

If the mass surveillance measures do not demonstrably pass both these tests, they are fundamentally incompatible with European human rights and fundamental rights law and the Charter of Fundamental Rights; this means the measures must be justified, by the entities that apply them, on the basis of hard, verifiable, peer-reviewable data.

The conclusion reached by the European Commission and the Dutch Minister of Justice – that, overall, the PNR Directive and the Dutch PNR law respectively had been “effective” because the EU Member States said so (Commission) or because PNR data were quite widely used and the competent authorities said so (Dutch Minister) – is fundamentally flawed, given that it was reached in the absence of any real supporting data. Rather, my analyses show that:

  • Full PNR data are disproportionate to the purpose of basic identity checks;
  • The necessity of the PNR checks against Interpol’s Stolen and Lost Travel Document database is questionable;
  • The matches against unspecified national databases and “repositories” are not based on foreseeable legal rules and are therefore not based on “law”;
  • The necessity and proportionality of matches against various simple, supposedly “suspicious” elements (tickets bought from a “suspicious” travel agent; “suspicious” travel route; etc.) is highly questionable; and
  • The matches against more complex “pre-determined criteria” and profiles are inherently and irredeemably flawed and lead to tens, perhaps hundreds of thousands of innocent travellers wrongly being labelled to be a person who “may be” involved in terrorism or serious crime, and are therefore unsuited (D: ungeeignet) to the purpose of fighting terrorism and serious crime.

The hope must be that the Court will stand up for the rights of individuals, enforce the Charter of Fundamental Rights, and declare the PNR Directive (like the Data Retention Directive) to be fundamentally in breach of the Charter.

– o – O – o –

Executive Summary

This document summarises the analyses and findings in the full Opinion on the broader and core issues arising in the PNR Case currently before the CJEU (Case C-817/19), using the same headings and heading numbers. Please see the full opinion for the full analyses and extensive references. A one-page “at a glance” overview of the main findings and conclusions is also provided.

The opinion drew in particular on the following three documents, also mentioned in this Executive Summary:

– o – O – o –

  1. Introduction

In the opinion, after explaining, at 2, the broader context in which personal data are being processed under the PNR Directive, I try to assess whether the processing that the PNR Directive requires or allows is suitable, effective and proportionate to the aims of the directive. In making those assessments, I base myself on the relevant European human rights and data protection standards, summarised at 3.

NB: The opinion focusses on the system as it is designed and intended to operate, and on what it allows (even if not everything that may be allowed is [yet] implemented in all Member States), and less on the somewhat slow implementation of the directive in the Member States and on the technical aspects that the Commission report and the staff working document often focussed on. It notes in particular a number of elements or aspects of the directive and the system it establishes that are problematic, either conceptually or in the way they are supposed to operate or to be evaluated.

2. PNR in context

Following in the footsteps of the US and UK intelligence services (as revealed by Snowden), the EU Member States’ law enforcement agencies are increasingly using their access to bulk data – bulk e-communications data, financial data, PNR data, etc. – to “mine” these big data sets by means of sophisticated, self-learning algorithms and Artificial Intelligence (AI).

The European Union Agency for Law Enforcement Cooperation, Europol, has become increasingly involved in algorithm/AI-based data analysis (or at least in the research underpinning those technologies), and last year the Commission proposed to significantly further expand this role.

The processing of PNR data under the PNR Directive must be seen in these wider contexts: the clear and strengthening trend towards more “proactive”, “preventive” policing by means of analyses and algorithm/AI-based data mining of (especially) large private-sector data sets and databases; the increasingly central role played by Europol in this (and the proposal to expand that role yet further); the focusing on “persons of interest” against whom there is (as yet) insufficient evidence for action under the criminal law (including, in relation to Europol, persons against whom there is an “Article 36 alert” in its SIS II database); and the still increasing intertwining of law enforcement and national security “intelligence” operations in those regards.

Notably, “Article 36 SIS alerts” have been increasing, and in the Netherlands, in 2020, 82.4% of all PNR “hits” against the Schengen Information System, confirmed by the Dutch Passenger Information Unit established under the PNR Directive, were “hits” against “Article 36 alerts”.

Human rights-, digital rights- and broader civil society NGOs have strongly criticised these developments and warned of the serious negative consequences. Those concerns should be taken seriously, and be properly responded to.

3 Legal standards

General fundamental rights standards stipulate that all interferences with fundamental rights must be based on a “law” that meets the European “quality of law” standards: the law must be public, clear and specific, and foreseeable in its application; the interferences must be limited to what is “necessary” and “proportionate” to serve a “legitimate aim” in a democratic society; the relevant limitations must be set out in the law itself (and not left to the discretion of states or state authorities); and those affected by the interferences must be able to challenge them and have a remedy in a court of law. Generalised, indiscriminate surveillance of whole populations (such as all air passengers flying to or from the EU) violates the EU Charter of Fundamental Rights. A special exception to this prohibition accepted by the EU Court of Justice in the La Quadrature du Net case, which allows EU Member States to respond to “serious”, “genuine and present or foreseeable” threats to “the essential functions of the State and the fundamental interests of society” must be strictly limited in time and place: it cannot form the basis for continuous surveillance of large populations (such as all air passengers) generally, on a continuous, indefinite basis: that would turn the (exceptional) exception into the rule. Yet that is precisely what the PNR Directive provides for.

European data protection law expands on the above general principles in relation to the processing of personal data. The (strict) case-law of the CJEU and the European Court of Human Rights on data protection generally and generalised surveillance in particular are reflected in the European Data Protection Board’s European Essential Guarantees for surveillance (EEGs).

Processing of information on a person suggesting that that person “may be” involved in criminal activities is subject to especially strict tests of legitimacy, necessity and proportionality.

Contrary to assertions by the European Commission and representatives of EU Member States (inter alia, at the hearing in the PNR case in July 2021) that the processing under the PNR Directive has little or no effect on the rights and interests of the data subjects, the processing under the directive must under EU data protection law be classified as posing “high risks” to the fundamental rights and interests of hundreds of millions of airline passengers.

Under the Law Enforcement Directive (as under the GDPR), this means that the processing should be subject to careful evaluation of the risks and the taking of remedial action to prevent, as far as possible, any negative consequences of the processing – such as the creation of “false positives” (cases in which a person is wrongly labelled to be a person who “may be” involved in terrorism or serious crime). It also means that if it is not possible to avoid excessive negative consequences, the processing is “not fit for purpose” and should not be used.

Under the proposed Artificial Intelligence Act that is currently under consideration, similar duties of assessment and remedial action – or abandoning of systems – are to apply to AI-based processes.

4 The PNR Directive

4.1 Introduction

4.2 The system

Under the PNR Directive, special “Passenger Information Units” (PIUs) in each EU Member State match the data contained in so-called passenger name records (PNRs), which airlines flying into or from the EU have to provide to those units, against supposedly relevant lists and databases. The purpose is both to identify already “known” formally wanted persons or already “known” “persons of interest” who “may be” involved in terrorism or other serious crime, and to “identify” (i.e., label) previously “unknown” persons who “may be” involved in such activities by means of “risk analyses” and the identification of “patterns” and “profiles” based on the identified patterns (see below, at 4.8).
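In very schematic terms, the two kinds of assessment can be sketched as follows. This is a minimal, purely illustrative sketch: all names, data fields and the threshold are hypothetical assumptions, not the PIUs’ actual software.

```python
# Purely illustrative sketch of the two kinds of PIU assessment described
# above: (1) matching identities against lists of "known" persons, and
# (2) "risk analysis" against pre-determined criteria ("profiles").
# All names, fields and thresholds are hypothetical assumptions.

WANTED = {("DOE", "JOHN", "1980-01-02")}               # e.g. SIS II "Article 26" alerts
PERSONS_OF_INTEREST = {("ROE", "JANE", "1975-06-30")}  # e.g. SIS II "Article 36" alerts

def assess(pnr: dict) -> str | None:
    """Return the kind of initial 'hit' (if any) for one passenger record."""
    identity = (pnr.get("surname"), pnr.get("given_name"), pnr.get("dob"))

    # (1) Identity matching against lists of "known" persons
    if identity in WANTED:
        return "hit: formally wanted person"
    if identity in PERSONS_OF_INTEREST:
        return "hit: known person of interest"

    # (2) Labelling previously "unknown" persons via pre-determined criteria
    score = 0
    score += pnr.get("travel_agent") == "AGENT-X"      # "suspicious" agent
    score += pnr.get("route") == ("AAA", "BBB")        # "suspicious" route
    if score >= 1:                                     # hypothetical threshold
        return "labelled: 'may be' involved (profile hit)"

    return None  # no initial hit - the vast majority of passengers

# Any initial automated hit is manually reviewed by PIU staff before being
# passed on to "competent authorities" for "further examination".
```

The crucial point, elaborated below, is that the second kind of “hit” rests on probabilities, not evidence.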

The opinion analyses and assesses all major elements of the system in turn.

4.3 The aims of the PNR Directive

In simple terms, the overall aim of the PNR Directive is to facilitate the apprehension of individuals involved in terrorism or other serious transnational crime, including in particular international drug and people trafficking.

However, the first aim of the checking of the PNR data by the PIUs is more limited than the aims of the directive overall, namely: to identify persons who require further examination by the competent authorities [see below, at 4.5], and, where relevant, by Europol [see below, at 4.10], in view of the fact [?] that such persons may be involved in a terrorist offence or serious crime. (Article 6(1)(a))

When there is a match of PNR data against the various lists, i.e., a “hit” (see below, at 4.8), the PIU passes this “hit” on to certain “competent authorities” (see below, at 4.5) for “further examination”; if the initial “hit” was generated by automated means, this is only done after a manual review by PIU staff. In practice, about 80% of initial “hits” are discarded (see below, at 4.8).

It is one of the main points of the opinion that the suitability, effectiveness and proportionality of the PNR Directive cannot and should not be assessed by reference to the number of initial “hits” noted by the PIUs, compared to the number of cases passed on for “further examination” to the competent authorities, but rather, with reference to more concrete outcomes (as is done in section 5.2).

4.4 The Legal Basis of the PNR Directive

It appears obvious from the Court of Justice opinion on the Draft EU-Canada Agreement that the PNR Directive, like that draft agreement, should have been based on Articles 16 and 87(2)(a) TFEU, and not on Article 82(1) TFEU. It follows that the PNR Directive, too, appears not to have been adopted in accordance with the properly applicable procedure. That could lead to the directive being declared invalid on that ground alone.

4.5 The Competent Authorities

Although most competent authorities (authorities authorised to receive PNR data and the results of processing of PNR data from the PIUs) in the EU Member States are law enforcement agencies, “many Member States [have designated] intelligence services, including military intelligence services, as authorities competent to receive and request PNR data from the Passenger Information Unit”, and “in some Member States the PIUs are actually “embedded in … [the] state security agenc[ies]”.

Given the increasingly close cooperation between law enforcement agencies (and border agencies) and intelligence agencies, in particular in relation to the mining of large data sets and the development of ever more sophisticated AI-based data mining technologies by the agencies working together (and in future especially also with and through Europol), this involvement of the intelligence agencies (and in future, Europol) in PNR data mining must be seen as a matter of major concern.

4.6 The crimes covered (“PNR-relevant offences”)

The PNR Directive stipulates that PNR data and the results of processing of PNR data may only be used for a range of terrorist and other serious offences, as defined in Directive 2017/541 and in an annex to the PNR Directive, respectively (so-called “PNR-relevant offences”).

The processing under the PNR Directive aims to single out quite different categories of data subjects from this enormous base of air passengers: on the one hand, it seeks to identify already “known” formally wanted persons (i.e., persons formally designated suspects under criminal [procedure] law, persons formally charged with or indicted for, or indeed already convicted of PNR-relevant offences) and already “known” “persons of interest” (but who are not yet formally wanted) by checking basic identity data in the PNRs against the corresponding data in “wanted” lists (such as “Article 26 alerts” in SIS II); and on the other hand, it seeks to “identify” previously “unknown” persons as possibly being terrorists or serious criminals, or “of interest”, on the basis of vague indications and probability scores. In the latter case, the term “identifying” means no more than labelling a person as a possible suspect or “person of interest” on the basis of a probability.

The opinion argues that any assessment of the suitability, effectiveness and proportionality of the processing must make a fundamental distinction between these different categories of data subjects (as is done in section 5).

4.7 The categories of personal data processed

An annex to the PNR Directive lists the specific categories of data that airlines must send to the database of the PIU of the Member State on the territory of which the flight will land or from the territory of which the flight will depart. This obligation is stipulated with regard to extra-EU flights but can be extended by each Member State to apply also to intra-EU flights – and all but one Member State have done so. The list of PNR data is much longer than the Advance Passenger Information (API) data that airlines must already send to the Member States under the API Directive, and includes information on travel agents used, travel routes, email addresses, payment (card) details, luggage, and fellow travellers. On the other hand, some basic details (such as date of birth) are often not included in the PNRs.

The use of sensitive data

The PNR Directive prohibits the processing of sensitive data, i.e., “data revealing a person’s race or ethnic origin, political opinions, religion or philosophical beliefs, trade union membership, health, sexual life or sexual orientation”. In the event that PNR data revealing such information are received by a PIU, they must be deleted immediately. Moreover, competent authorities may not take “any decision that produces an adverse legal effect on a person or significantly affects a person” on the basis of such data. However, PNR data can be matched against national lists and data “repositories” that may well contain sensitive data. Moreover, as noted at 4.8(f), below, the provisions in the PNR Directive do not really protect against discriminatory outcomes of the profiling that it encourages.

4.8 The different kinds of matches

(a) Matching of basic identity data in PNRs against the identity data of “known” formally wanted persons

PNR data are matched against SIS II alerts on “known” formally wanted persons (including “Article 26 alerts”) and against “relevant” national lists of “known” formally wanted persons.

This is usually done by automated means, followed by a manual review. The Commission reports that approximately 81% of all initial matches are rejected – and not passed on to competent authorities for further examination. Notably:

– the quality of the PNR data as received by the PIUs, including even of the basic identity data, is apparently poor and often “too limited”; this is almost certainly the reason for the vast majority of the 81% rejections;

– most of the long lists of PNR data are not needed for basic identity checks: full names, date of birth, gender and citizenship/nationality should suffice – and a passport or identity card number would make the match more reliable still. All those data are included in the API data, and all are included in optical character recognition format in the machine-readable travel documents (MRTD) that have been in wide use since the 1980s.

In other words, paradoxically, PNR data are both excessive for the purpose of basic identity checks (by containing extensive data that are not needed for such checks), and insufficient (“too limited”), in particular in relation to intra-Schengen flights (by not [always] including the dates of birth of the passengers).

– the lists against which the PNR data are compared, including in particular the SIS alerts and the EAW lists, but also many national lists, relate to many more crimes than are subject to the PNR Directive (“PNR-relevant offences”) – but in several Member States “hits” against not-PNR-relevant suspects (etc.) are still passed on to competent authorities, in clear breach of the purpose-limitation principle underpinning the directive.

In that respect, it should be noted that the Commission staff working document claims that in relation to situations in which the PNR data is “too limited” (typically, by not including date of birth), “[t]he individual manual review provided for in Article 6.5 of the PNR Directive protects individuals against the adverse impact of potential ‘false positives’” – but this is simply untrue: While a confirmed matching of identity data in relation to a person who is formally wanted in relation to PNR-relevant offences can be regarded as a “positive” result of the identity check, a “hit” in relation to a person who is wanted for not-PNR-relevant offences should of course not be regarded as a positive result under the PNR Directive.

(b) Matching of basic identity data in PNRs against the identity data of “known” “persons of interest”

In principle, the matching of basic identity data from PNRs against lists of basic identity data of “persons of interest” listed in the SIS system (and comparable categories in national law enforcement repositories), like the matching of data on formally wanted persons, should be fairly straightforward.

However, the PNRs in this regard first of all suffer from the same two deficiencies as were discussed in relation to matches for formally wanted persons at (a), above: PNR data are both excessive for the purpose of basic identity checks (by containing extensive data that are not needed for such checks), and insufficient (“too limited”), in particular in relation to intra-Schengen flights (by not [always] including the dates of birth of the passengers). The third issue identified in the previous sub-section – that SIS alerts (and similar alerts in national law enforcement repositories) can relate to many more criminal offences than those that are “PNR-relevant” – also applies: many persons labelled “person of interest” will be so labelled in relation to “non-PNR-relevant” offences.

In my opinion, while a confirmed matching of identity data in relation to persons who are formally wanted in relation to (formally suspected of, charged with, or convicted of) PNR-relevant offences can be regarded as a “positive” result of an identity check, a “hit” in relation to persons who are labelled “person of interest” should not be regarded as a positive result under the PNR Directive – certainly not if they are so labelled in relation to non-PNR-relevant offences, but also not if nothing shows them to be in any way culpable of PNR-relevant offences.

In my opinion, even confirmed “hits” confirming the identity of already listed “persons of interest” should not be regarded as “positive” results under the PNR Directive unless they result in those persons subsequently being formally declared to be formal suspects in relation to terrorist or other serious, PNR-relevant criminal offences.

(c) Matching of PNR Data against data on lost/stolen/fake credit cards and lost/stolen/fake identity or travel documents

The staff working document makes clear that PNR data are checked by “a large majority of PIUs” against Interpol’s Stolen and Lost Travel Document database as one “relevant database”. However, this is something of a residual check, because that database is already made available to airlines through Interpol’s “I-Checkit” facility. Moreover:

Even leaving the issue of purpose-limitation aside, a “hit” against a listed lost/stolen/fake credit card or a lost/stolen/fake identity or travel document should still only be considered a “positive result” in terms of the PNR Directive if it results in a person subsequently being formally declared to be (at least) a formal suspect in relation to terrorist or other serious, PNR-relevant criminal offences.

(d) Matching of PNR data against other, unspecified, supposedly relevant (in particular national) databases

It is far from clear what databases can be – and in practice, in the different Member States, what databases actually are – regarded as “relevant databases” in terms of the PNR Directive: this is left to the Member States. At the July 2021 Court hearing, the representative of the Commission said that the data of Facebook, Amazon and Google could not be regarded as “relevant”, and that law enforcement databases (des bases policières) would be the most obvious “relevant” databases. But the Commission did not exclude matches against other databases with relatively “hard” data, such as databases with financial data (credit card data?) or telecommunications data (location data?).

The vagueness of the phrase “relevant databases” in Article 6(3)(a) and the apparently wide discretion granted to Member States to allow matching against all sorts of unspecified data sets is incompatible with the Charter of Fundamental Rights and the European Convention on Human Rights. It means that the application of the law is not clear or foreseeable to those affected – i.e., the provision is not “law” in the sense of the Charter and the Convention (and EU law generally) – and that the laws can be applied in a disproportionate manner.

In other words, even in relation to the basic checks on the basis of lists of “simple selectors”, the PNR Directive does not ensure that those checks are based on clear, precise, and in their application foreseeable Member State laws, or that those laws are only applied in a proportionate manner. In the terminology of the European Court of Human Rights, the directive does not protect individuals against arbitrary interferences with the rights to privacy and protection of personal data.

(e) Matching of PNR data against lists of “suspicious travel agents”, “suspicious routes”, etc.

The staff working document repeatedly refers to checks of PNR data against “patterns” such as tickets being bought from “suspicious” travel agents; the use of “suspicious” travel routes; passengers carrying “suspicious” amounts of luggage (and the Dutch evaluation report even mentions that a person wearing a suit and hastening through customs [while being black] was regarded by customs authorities as fitting a “suspicious” pattern). No proper prosecuting or judicial authority could declare a traveller a formal suspect – let alone charge, prosecute or convict a traveller – on the basis of a match against such simple “suspicious” elements alone. In my opinion:

For the purpose of evaluating the suitability, effectiveness and proportionality of the PNR Directive (and of the practices under the directive), a simple “hit” against these vague and far-from-conclusive factors or “criteria” should not be regarded as a “positive” result. Rather, a “hit” against such vague “criteria” as the purchase of an air ticket from a “suspicious” travel agent, or the using of a “suspicious” route, or the carrying of a “suspicious” amount of luggage – let alone “walking fast in a suit (while being black)” – should again only be considered a “positive result” in terms of the PNR Directive if it results in a person subsequently being formally declared to be (at least) a formal suspect in relation to terrorist or other serious, PNR-relevant criminal offences.

(f) Matching of data in the PNRs against more complex “pre-determined criteria” or profiles

(fa)      Introduction

Under the PNR Directive, PIUs may, in the course of carrying out their assessment of whether passengers “may be involved in a terrorist offence or [other] serious crime”, “process PNR data against pre-determined criteria”. As also noted by the EDPS, it is clear that the PNR data can be matched against “patterns” discerned in previous data and against “profiles” of possible terrorists and serious criminals created on the basis of those patterns – profiles that are more complex than the simple patterns discussed at (e), above. This is also undoubtedly the direction in which searches for terrorists and other serious criminals are moving.

(fb)      The nature of the “pre-determined criteria”/“profiles”

EU and EU Member State agencies are applying, or are poised to apply, increasingly sophisticated data mining technologies such as those already used by the UK (and US) agencies. This involves self-learning, AI-based algorithms that are constantly dynamically re-generated and refined through loops linking back to earlier analyses. The software creates constantly self-improving and refining profiles against which it matches the massive amounts of data – and in the end, it produces lists of individuals that the algorithm suggests may (possibly or probably) be terrorists, or associates of terrorists or other serious criminals. It is the stated policy of the EU to accelerate the development and deployment of these sophisticated technologies, under the guidance of Europol.

Whatever the current level of use of such sophisticated techniques in law enforcement and national security contexts in the Member States (as discussed at (fd), below), if the PNR Directive is upheld as valid in its current terms, nothing will stand in the way of the ever-greater deployment of these more sophisticated (but flawed) technologies in relation to air passengers. That would also pave the way to yet further use of such (dangerous) data mining and profiling in relation to other large population sets (such as all users of electronic communications, or of bank cards).

(fc)      The creation of the “pre-determined criteria”/“profiles”

Given (a) the increasingly sophisticated surveillance and data analysis/data mining/risk assessment technologies developed by the intelligence services of the EU Member States (often drawing on US and UK experience), and now also by law enforcement agencies, and (b) the clear role assigned to Europol in this respect, it appears clear that a cadre of data mining specialists is being developed in the EU – and that the PNR data are one of the focus areas for this work. In other words, the “pre-determined criteria” – or AI-based algorithms – that are to be used in the mining of the PNR data are being developed not solely by or within the PIUs, but by this broader cadre, which draws in particular on intelligence experts (some of whom may be embedded in the PIUs). Between them, the PNR databases are (also) a test laboratory for data mining/profiling technologies. And (c) there is nothing in the PNR Directive that stands in the way of using data other than PNR data in the creation of “pre-determined criteria”, or indeed in the way of using profiles developed by other agencies (including intelligence agencies) as “pre-determined criteria” in the PIU analyses.

(fd)      The application of the more complex “pre-determined criteria”/“profiles” in practice

It would appear that to date, few Member States are as yet using data mining in relation to PNR data in as sophisticated a way as described in sub-section (fb), above (or at least acknowledge such uses).

However, in a range of EU Member States algorithm/AI-based profiling is already in use in relation to broader law enforcement (and especially crime prevention). Moreover, the aim of the Commission and the Member States is expressly to significantly expand this use, with the help of Europol and its Travel Intelligence Task Force, and through “training on the development of pre-determined criteria” in “an ongoing EU-funded project, financed under the ISF-Police Union Actions.”

This merely underlines the point I made in the previous sub-sections: that the PNR database is being used as a test laboratory for advanced data mining technologies, and that if the PNR Directive is upheld as valid in its current terms, nothing will stand in the way of the ever-greater deployment of these more sophisticated (but flawed) technologies in relation to air passengers, and others. The fact that sophisticated data mining and profiling is said not yet to be in widespread operational use in most Member States should not be a reason for ignoring this issue – on the contrary: it is the expressly intended destination of these developments.

(fe)      The limitations of and flaws in the technologies

There are three main problems with algorithmic data mining-based detection of rare phenomena (such as terrorists and serious criminals in a general population):

– The base-rate fallacy and its effect on false positives:

In very simple layperson’s terms, the base-rate fallacy means that if you are looking for very rare instances or phenomena in a very large dataset, you will inevitably obtain a very high percentage of false positives in particular – and this cannot be remedied by adding more or somehow “better” data: by adding hay to a haystack.

A very rough guess would be that, on average, the 1 billion passengers counted by Eurostat as flying to or from the EU each year correspond to some 500 million distinct individuals. In other words, the base population for the PNR checks can reasonably be assumed to be in the region of 500 million people.

The Commission reports that there are initial “hits” in relation to 0.59% of all PNRs, while 0.11% of all PNRs are passed on as confirmed “hits” to competent authorities for “further examination”. The Commission report and the staff working document appear to imply – and certainly do nothing to refute – that the 0.11% of all confirmed “hits” that are passed on to competent authorities are all “true positives”. However, that glaringly fails to take account of the base rate, and its impact on results.

Even if the PNR checks had a failure rate of just 0.1% (meaning that (1) in relation to persons who are actually terrorists or serious criminals, the PIUs will rightly confirm this as a proper “hit” 99.9% of the time, and fail to do so 0.1% of the time and (2) in relation to persons who are not terrorists, the PIUs will rightly not generate a confirmed “hit” 99.9% of the time, but wrongly register the innocent person as a confirmed “hit” 0.1% of the time) the probability that a person flagged by this system is actually a terrorist would still be closer to 1% than to 99%. In any case, even if the accuracy rate of the PNR checks were to be as high as this assumed 99.9% (which of course is unrealistic), that would still lead to some 500,000 false positives each year.

Yet the Commission documentation is silent about this.
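The underlying arithmetic can nonetheless be checked with a simple back-of-the-envelope computation. In the sketch below, the 500 million base population and the assumed 99.9% accuracy are the figures used above; the number of genuine targets in the population is of course unknown and is set, purely for illustration, at 5,000:

```python
# Back-of-the-envelope check of the base-rate effect described above.
# The 500 million base and the 99.9% accuracy come from the text; the
# 5,000 genuine targets are a purely illustrative assumption.

population = 500_000_000        # distinct individuals flying to/from the EU
actual_targets = 5_000          # ASSUMED number of genuine terrorists/criminals
sensitivity = 0.999             # flags 99.9% of genuine targets
false_positive_rate = 0.001     # wrongly flags 0.1% of innocent passengers

true_positives = actual_targets * sensitivity
false_positives = (population - actual_targets) * false_positive_rate

precision = true_positives / (true_positives + false_positives)

print(f"false positives per year: {false_positives:,.0f}")   # ~500,000
print(f"P(genuine target | flagged): {precision:.2%}")       # ~1%, not ~99%
```

Whatever number of genuine targets one plugs in, as long as it is small relative to the 500 million base, the false positives swamp the true ones.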

– Built-in biases:

The Commission staff working document claims that, because the “pre-determined criteria” that are used in algorithmic profiling may not be based on sensitive data, “the assessment cannot be carried out in a discriminatory manner” and that “[t]his limits the risk that discriminatory profiling will be carried out by the authorities.” This is simply wrong.

In simple terms: since “intimate part[s] of [a person’s] private life” can be deduced, or at least inferred, from seemingly innocuous information – such as data included in PNRs (in particular if matched against other data) – those “intimate aspects” are not “fully protected by the processing operations provided for in the PNR Directive”. Indeed, in a way, the claim to the contrary is absurd: the whole point of “risk analysis” based on “pre-determined criteria” is to discover unknown, indeed hidden matters about the individuals who are being profiled: inferring from the data on those people, on the basis of the application of those criteria, that they are persons who “may be” involved in terrorism or other serious crimes surely is a deduction of an “intimate aspect” of those persons (even if it is not specifically or necessarily a sensitive datum in the GDPR sense – although if the inference was that a person “might be” an Islamist terrorist, that would be a [tentatively] sensitive datum in the strict sense). Moreover, even without specifically using or revealing sensitive information, the outcomes of algorithmic analyses and processing, and the application of “abstract”, algorithm/AI-based criteria to “real” people can still lead to discrimination.

The PNR Directive stipulates that the assessment[s] of passengers prior to their scheduled arrival in or departure from the Member State, carried out with the aim of identifying persons who require further examination by the competent authorities, “shall be carried out in a non-discriminatory manner”. However, this falls considerably short of stipulating: (i) that the “pre-determined criteria” (the outputs of the algorithms) must not be biased in some way; and (ii) that measures must be taken to ensure that the outcomes of the assessments are not discriminatory. It is important to address both those issues (as explained in a recent EDRi/TU Delft report; a minimal illustration of an outcome test is sketched at the end of this sub-section).

Given that profile-based matches to detect terrorists and other serious criminals are inherently “high risk” (as noted at 3, above, and further discussed at 5, below), such processing requires an in-depth Data Protection Impact Assessment under EU data protection law, and indeed a broader human rights impact assessment. The need for serious pre-evaluation of algorithms to be used in data mining, and for continuous re-evaluation throughout their use, is also stressed in various paragraphs of the recent Council of Europe recommendation on profiling. The proposed AI Act also requires this.

However, no serious efforts have been made by the European Commission or the EU Member States to fulfil these duties. Neither has ensured that the full, appropriate basic information required for such serious ex ante and ex post evaluations is even sought or recorded.

In sum: the European Commission and the EU Member States have not ensured that in practice the processing of the PNR data, and the linking of those data to other data (databases and lists), does not have discriminatory outcomes. The mere stipulation that outputs of algorithmic/AI-based profiling should not be “solely based on” sensitive aspects of the data subjects (the airline passengers) falls far short of ensuring compliance with the prohibition of discrimination.
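As a purely illustrative indication of what testing the outcomes for discrimination could involve – the data, the grouping variable and the 80% “four-fifths” threshold below are hypothetical assumptions, not anything prescribed by the directive or by the EDRi/TU Delft report – such a test would compare confirmed-“hit” rates across groups of passengers:

```python
# Minimal illustration of an *outcome* test for disparate impact:
# compare "hit" rates across passenger groups, using the common
# "four-fifths" rule of thumb as threshold. All data are hypothetical.

from collections import Counter

flags = [("group_a", True), ("group_a", False), ("group_b", True),
         ("group_b", True), ("group_a", False), ("group_b", False)]

totals, hits = Counter(), Counter()
for group, flagged in flags:
    totals[group] += 1
    hits[group] += flagged

rates = {g: hits[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)
print(f"disparate-impact ratio: {ratio:.2f}")  # warrants scrutiny if < 0.8
```

Nothing in the directive requires the PIUs, the Commission or the Member States to carry out even so elementary a check on the outcomes of the profiling.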

– Opacity and unchallengeability of decisions:

In the more developed “artificial intelligence” or “expert” systems, the computers operating the relevant programmes create feedback loops that continuously improve the underlying algorithms – with almost no-one in the end being able to explain the results: the analyses are based on underlying code that cannot be properly understood by many of those who rely on it, or even be expressed in plain language. This makes it extremely difficult to provide for serious accountability in relation to, and redress against, algorithm-based decisions generally. Profiling thus poses a serious threat of a Kafkaesque world in which powerful agencies take decisions that significantly affect individuals, without those decision-makers being able or willing to explain the underlying reasoning for those decisions, and in which those subjects are denied any effective individual or collective remedies.

That is how serious the issue of profiling is: it poses a fundamental threat to the most basic principles of the Rule of Law and the relationship between the powerful and the people in a democratic society. Specifically in relation to PNR:

– PIU staff cannot challenge algorithm-based computer outputs;

– The staff of the competent authorities are also unlikely (or indeed also effectively unable) to challenge the computer output; and

– Supervisory bodies cannot properly assess the systems.

External supervisory bodies such as Member States’ data protection supervisory authorities will generally not be given access to the underlying data, cannot review the algorithms at the design stage or at regular intervals after deployment and in any case do not have the expertise. Internal bodies are unlikely to be critical and may involve the very people who design the system (who write the code that provides the [dynamic] algorithm). The report on the evaluation of the Dutch PNR Law noted that under that law (under which the algorithms/profiles are supposed to be checked by a special commission):

The rules [on the creation of the pre-determined criteria] do not require the weighing [of the elements] or the threshold value [for regarding a “hit” against those criteria to be a valid one] to meet objective scientific standards.

This is quite astonishing: it amounts to an acknowledgement that the algorithm/AI-based profiles are essentially unscientific. In my opinion, this fatally undermines the way the pre-determined criteria are created and “tested” in the Netherlands. Yet at the same time, the Dutch system, with this “special commission”, is probably better than what is in place in most other EU Member States. This surely is a matter that should be taken into account in any assessment of the PNR system EU-wide – including the assessment that is shortly to be made by the Luxembourg Court.

In sum:

– because the base population for the PNR data mining is so large (in the region of 500 million people) and the incidence of terrorists and serious criminals within this population so relatively low, algorithm/AI-based profiling is likely to result in tens of thousands of “false positives”: individual air passengers who are wrongly labelled as persons who “may be” involved in terrorism or other serious crime;

– the provisions in the PNR Directive that stipulate that no sensitive data may be processed, and that individual decisions and matches may not be “solely based on” sensitive aspects of the individuals concerned do not protect those individuals from discriminatory outcomes of the profiling;

– the algorithm/AI-based outcomes of the processing are almost impossible to challenge because those algorithms are constantly dynamically changed (“improved” through self-learning) and therefore in effect impossible to fully comprehend even by those carrying out the analyses/risk assessments; and

– the outputs and outcomes of the algorithm/AI-based profiling and data mining and matching are not subject to proper scientific testing or auditing, and are extremely unlikely to be made subject to such testing and auditing.

4.9 Direct access to PNR data by EU Member States’ intelligence agencies

It appears that, at least in the Netherlands, the national intelligence agencies are granted direct access to the bulk PNR database, without having to go through the PIU (or at least without this being properly recorded). If the Dutch authorities were to argue that such direct access to the data by the Dutch intelligence agencies is outside EU law, they would be wrong. Specifically, in its LQDN judgment, the CJEU held that personal data processing operations by entities that are, in that processing, subject to EU data protection law (in that case, providers of electronic communication services, who are subject to the e-Privacy Directive) – including processing operations resulting from obligations imposed on those entities (under the law) by Member States’ public authorities (in that case, for national security purposes) – can be assessed for their compatibility with the relevant EU data protection instrument and the Charter of Fundamental Rights.

In my opinion, if the Dutch intelligence and security agencies do indeed have direct access to the PNR database, without having to go through the Dutch PIU (the Pi-NL), or without that being recorded – as appears to be pretty obviously the case – that is in direct breach of the PNR Directive, of the EU data protection instruments, and of the EU Charter of Fundamental Rights.

Whether the EU data protection instruments and the PNR Directive are similarly circumvented in other EU Member States, I do not know. Let me just recall that in several Member States, the PIU is “embedded in … [the] state security agenc[ies]”. However, the Dutch example shows how dangerous, in a democratic society, the accruing of such bulk databases is.

4.10 Dissemination and subsequent use of the data and purpose-limitation

(a) Spontaneous provision of PNR data and information on (confirmed) “hits”

In principle, subject only to a “relevant and necessary” requirement in relation to transmissions to the other PIUs, confirmed “hits” can be very widely shared across all the EU Member States, both between the PIUs but also, via the PIUs, with any “competent authority” in any Member State (including intelligence agencies where those are designated as such: see at 4.5, above).

(aa)     Spontaneous provision of information to domestic competent authorities on the basis of matches against lists and databases (including SIS II)

The Commission staff working report gives no insight into the actual scope of spontaneous dissemination of PNR data or “results of the processing” of PNR data by the PIUs on the basis of (confirmed) “hits” to competent authorities in the PIUs’ own countries.

The report on the evaluation of the Dutch PNR Law suggests that, in that country, spontaneous provisions of PNR to Dutch authorities “for further examination” are still effectively limited to (confirmed) matches against the SIS II database, and indeed to matches against the alerts listed in Articles 26 and 36 of the Council Decision establishing that database (respectively, alerts for persons wanted for arrest for extradition, and alerts relating to people or vehicles requiring discreet checks). The Dutch SIS II matches amounted to roughly 10 in every 100,000 passengers (2:100,000 “Article 26” matches and 8:100,000 “Article 36” matches).

If the Dutch statistics of 10:100,000 and 82.4% are representative of the overall situation in the EU, this would mean that, each year, out of the 500 million passengers on whom PNR data are collected, approximately 50,000 are subjected to “further examination” on the basis of a SIS II match, some 40,000 of them on the basis of “Article 36 alerts”, i.e., as “persons of interest” who are not (yet) formally wanted in relation to any crime (let alone a PNR-relevant one).
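The extrapolation itself is simple arithmetic, as the following sketch shows (it assumes, as the text does, that the Dutch rates – 10 SIS II matches per 100,000 passengers, split roughly 2:8 between Article 26 and Article 36 alerts – are representative of the EU as a whole):

```python
# Extrapolating the Dutch SIS II match rates to the EU as a whole,
# assuming (as in the text) that those rates are representative.

passengers = 500_000_000               # individuals with PNR data per year
sis_match_rate = 10 / 100_000          # confirmed SIS II matches (NL, 2020)
art36_share = 8 / 10                   # 8 of every 10 matches are "Article 36"

further_examined = passengers * sis_match_rate      # 50,000 per year
art36_hits = further_examined * art36_share         # 40,000 per year

print(f"{further_examined:,.0f} passengers examined further per year,")
print(f"of whom {art36_hits:,.0f} on the basis of 'Article 36' alerts")
```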

But of course, there are also (confirmed) “hits” on other bases (including on the basis of “pre-determined criteria” and matches resulting from requests for information) – and other countries may also match against more than just Article 26 and Article 36 alerts on SIS II.

(ab)     Spontaneous provision of information to other PIUs on the basis of matches against lists and databases (including SIS II)

It would appear that, until now, in practice, information – including information on matches against SIS II alerts – is only rarely spontaneously shared between PIUs.

However, the clear aim of the Commission is to significantly increase the number of spontaneous transmissions of PNR data and of information on (confirmed) “hits” against SIS II (or against pre-determined criteria: see below) between PIUs, and via PIUs to competent authorities in other EU Member States (again including intelligence agencies in Member States where those are designated as such).

(ac)     Spontaneous provision of information to domestic competent authorities and to other PIUs on the basis of matches against pre-determined criteria

It would appear that matching of PNR data against pre-determined criteria – and consequently also the spontaneous informing of competent authorities of (confirmed) “hits” against such criteria – is still extremely rare in the EU Member States. However, the aim is for the use of such criteria to be greatly expanded.

(ad)     Spontaneous provision of “results of processing” of PNR data other than information on matches against list or databases (such as SIS II) or pre-determined criteria

The spontaneous sharing of new or improved criteria is more likely to occur within the data mining cadre that is being formed (see above, at 4.8(fc)) than through exchanges between PIUs. But that of course does not mean that it will not occur – on the contrary, the aim is clearly to extend the use of pre-determined criteria, and for the EU Member States to cooperate much more closely in the development and sharing of those criteria, specifically through a much-enhanced role for Europol.

(b) Provision of PNR data and analysis data to competent authorities, other PIUs or Europol on request

(ba)     Provision of information to domestic competent authorities at the request of such authorities

In relation to the provision of information by the PIUs to their domestic competent authorities at the latter’s request, the relevant national rules apply. The Commission staff working document provides no information whatsoever on the extent to which this option is used beyond saying that the numbers are increasing. In the Netherlands, some procedural safeguards are established to seek to ensure that requests are only made in appropriate cases, and in particular only in relation to PNR-relevant offences. Whether other Member States impose procedural safeguards such as prior authorisation of requests from certain senior officials, I do not know. The PNR Directive does not require them (it leaves this to the laws of the Member States) and the Commission staff working report does not mention them.

(bb)     Provision of information to competent authorities of other EU Member States at the request of such authorities

The Commission claims that provision of PNR data at the request of competent authorities of other EU Member States is one part of the PNR system that operates well. However, the Commission staff working report suggests that there are problems, in particular in relation to compliance with the purpose-limitation principle underpinning the PNR Directive: see below, at (d).

Moreover, if the Dutch data are anything to go by, it would appear that the vast majority of requests for PNR data come from the national authorities of the PIU’s own country: in the Netherlands, in 2019-20, there were 3,130 requests from national authorities, against just 375 requests from other PIUs and authorities in other EU Member States. This rather qualifies the Commission claim that “the exchange of data between the Member States based on requests functions in an effective manner” and that “[t]he number of requests has grown consistently”. Both statements could be true, but the actual total numbers of requests from other Member States may still be extremely low (for now), at least in comparison with the number of requests the PIUs receive from their own national authorities.

(bc)     Provision of information to Europol at the latter’s request

The Commission staff working document does not provide any information on the number of requests made by Europol, or on the responses to such requests from the PIUs. The report on the evaluation of the Dutch PNR law notes that within Europol there appear to be no procedural conditions or safeguards relating to the making of requests (such as the safeguard that requests from Dutch authorities must be checked by a Dutch prosecutor (OvJ)).

If the Dutch data are anything to go by, it would appear that there are in fact very few requests for information from Europol: in that country, the PIU received only 32 such requests between June 2019 and the end of 2020, i.e., fewer than two a month. But if Europol is to be given a much more central role in the processing of PNR data, especially in the matching of those data against more sophisticated pre-determined criteria (with Europol playing the central role in the development of those more sophisticated criteria, as planned), the cooperation between the Member States’ PIUs and Europol, and the sharing of PNR data and data on “hits”, is certain to greatly expand.

(c) Transfer of PNR data to third countries on a case-by-case basis.

The transfer of PNR data by the Member States to countries outside the EU is only allowed on a case-by-case basis and only when necessary for fighting terrorism and serious crime, and PNR data may be shared only with public authorities that are competent for combating PNR-relevant offences. Moreover, the DPO of the relevant PIU must be informed of all such transfers.

However, the Commission reports that four Member States have failed to fully transpose other conditions provided for by the Directive relating to the purposes for which the data can be transferred or the authorities competent to receive it, and two do not require the informing of the DPO.

It is seriously worrying that several Member States do not adhere to the conditions and safeguards relating to transfers of PNR data (and of “the results of processing” of PNR data – which can include the fact that there was a “hit” against lists or criteria) to third countries that may not have adequate data protection rules (or other relevant rules conforming to the rule of law) in place. Some of the (unnamed) Member States that do not comply with the PNR Directive in this regard are likely to pass on such data in breach of the Directive (in particular, without ensuring that the data are only used in the fight against terrorism and serious crime) to close security and political allies such as the ones that make up the “Five Eyes” intelligence group: the USA, the UK, Australia, Canada and New Zealand.

This concern is especially aggravated in relation to the USA, which the Court of Justice has now held several times to not provide adequate protection to personal data transferred to it from the EU, specifically because of its excessive mass surveillance (and there are similar concerns in relation to the UK, in spite of the Commission having issued an adequacy decision in respect of that country).

Moreover, neither the Commission staff working document nor the Dutch report provides any information on how it is – or indeed can be – guaranteed that data provided in response to a request from a third country are really only used by that third country in relation to PNR-relevant offences, or how this is – or indeed can be – monitored.

For instance, if data are provided to the US Federal Bureau of Investigation (FBI) in relation to an investigation into suspected terrorist activity, those data will also become available to the US National Security Agency (NSA), which may use them in relation to much broader “foreign intelligence purposes”. That issue of course arises in relation to provision of information from any EU Member State to any third country that has excessive surveillance laws.

Furthermore, if I am right to believe that the Dutch intelligence agencies have secret, unrecorded direct access to the PNR database (see above, at 4.9), they may also be sharing data from that database more directly with intelligence partners in other countries, including third countries, bypassing the whole PNR Directive system. Neither the Commission staff working document nor the report on the evaluation of the Dutch PNR law addresses this issue. And that issue, too, may well arise in relation to other EU Member States.

(d) Subsequent use of the data and purpose-limitation

In principle, any information provided by the PIUs to any other entities, at home or abroad, or to Europol, is to be used by any recipient only for the prevention, detection, investigation and prosecution of terrorist offences and serious crime, more specifically for the prevention, detection, investigation and prosecution of PNR-relevant offences.

But it has become clear that this is far from assured in practice:

– because of the dilemma faced by PIUs in some EU Member States caused by the duty of any agency to pursue any offence that comes to their attention, the PIUs in some Member States pass on information also on (confirmed) “hits” relating to not-PNR-relevant offences (both spontaneously and in response to requests), and those data are then used in relation to the prevention, detection, investigation and prosecution of those not-PNR-relevant offences;

– in the Netherlands (and probably other Member States), once information is provided to a domestic competent authority, those data enter the databases of that authority (e.g., the general police databases) and will be subject to the legal regime that applies to the relevant database – which means that there is no guarantee that their subsequent use is in practice limited to PNR-relevant offences;

– when PNR data are provided by a PIU of one Member State to a PIU of another Member State (or to several or all of the other PIUs), they are provided subject to the purpose-limitation principle of the PNR Directive – but if those data are then provided by the recipient PIU(s) to competent authorities in their own countries, the same problems arise as noted in the previous indents;

– Member States take rather different views of what constitute PNR-relevant offences, and some make “broad and unspecified requests to many (or even all Passenger Information Units)” – suggesting that in this regard, too, the purpose-limitation principle is not always fully adhered to;

– within Europol there appear to be no procedural conditions or safeguards relating to the making of requests for PNR data from PIUs (such as the safeguard that requests from Dutch authorities must be checked by a Dutch prosecutor), and the Commission staff report does not indicate whether all the PIUs check whether Europol requests are strictly limited to PNR-relevant offences (or, if they do, how strict and effective those checks are);

– “four Member States have failed to fully transpose … [the] conditions provided for by the Directive relating to the purposes for which [PNR data] can be transferred [to third countries] or [relating to] the authorities competent to receive [such data]”;

– neither the Commission staff working document nor the Dutch report provides any information on how it is – or indeed can be – guaranteed that data provided in response to a request from a third country are really only used by that third country in relation to PNR-relevant offences, or how this is – or indeed can be – monitored;

and

– if I am right to believe that the Dutch intelligence agencies have secret, unrecorded direct access to the PNR database, they may also be sharing data from that database more directly with intelligence partners in other countries, including third countries, bypassing the whole PNR Directive system. Neither the Commission staff working document nor the report on the evaluation of the Dutch PNR law addresses this issue. And that issue, too, may well arise in relation to other EU Member States.

In sum: There are major deficiencies in the system as concerns compliance, by the EU Member States, by Europol, and by third countries that may receive PNR data on a case-by-case-basis, with the fundamental purpose-limitation principle underpinning the PNR Directive, i.e., with the rule that any PNR data (or data resulting from the processing of PNR data) may only be used – not just by the PIUs, but also by any other entities that may receive those data – for the purposes of the prevention, detection, investigation and prosecution of PNR-relevant offences. In simple terms: in this respect, the PNR system leaks like a sieve.

4.11 The consequences of a “match”

It is quite clear from the available information that confirmed “hits” and the associated PNR data on at the very least tens of thousands and most probably several hundred thousand innocent people are passed on to law enforcement (and in many cases, intelligence agencies) of EU Member States and to Europol – and in some cases to law enforcement and intelligence agencies of third countries – for “further examination”. Many of those data – many of those individuals – will end up in miscellaneous national databases as data on “persons of interest”, and/or in the SIS II database as “Article 36 alerts”. They may even end up in similar databases or lists of third countries.

In terms of European human rights and data protection law, even the supposedly not-very-intrusive measures such as “only” being made the object of “discreet checks” constitute serious interferences with the fundamental rights of the individuals concerned – something that the European Commission and several Member States studiously avoided acknowledging at the Court hearing. More intrusive measures such as being detained and questioned or barred from flying of course constitute even more serious interferences. Both kinds require significant justification in terms of suitability, effectiveness and proportionality – with the onus of proof lying squarely on those who want to impose or justify those interferences, i.e., in casu, the European Commission and the Member States.

Moreover, in practice “watch lists” often become “black lists”. History shows that people – innocent people – will suffer if there are lists of “suspicious”, “perhaps not reliable”, “not one of us” people lying around, and not just in dictatorships.

That is yet another reason why those who argue in favour of such lists – and that includes “Article 36 alerts” and other lists of “persons of interest” “identified” on the basis of flimsy or complex criteria or profiles – bear a heavy onus to prove that those lists are absolutely necessary in a democratic society, and that the strongest possible measures are in place to prevent such further slippery uses of the lists.

5. The suitability, effectiveness and proportionality of the processing

5.1 The lack of data and of proof of effectiveness of the PNR Directive

Neither the European Commission’s review nor the Dutch evaluation has come up with serious, measurable data showing that the PNR Directive and the PNR law are effective in the fight against terrorism or serious crime.

The Dutch researchers at least tried to find hard data, but found that in many crucial respects no records were kept that could provide such data. At most, some suggestions for better recording were made, and some ideas for obtaining better data are under consideration (although the researchers also noted that some law enforcement practitioners thought it would be too much effort).

To date, neither the Commission nor the Member States (including the Netherlands) have seriously tried to design suitable, scientifically valid methods and methodologies of data capture (geeignete Formen der Datenerfassung) in this context. Given that the onus is clearly on them to demonstrate – properly, scientifically demonstrate, in a peer-reviewable manner – that the serious interferences with privacy and data protection they insist on perpetrating are effective, this is a manifest dereliction of duty.
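Purely by way of illustration – every field and category below is a hypothetical assumption, not anything prescribed by the directive – such “suitable forms of data capture” would, at a minimum, record each confirmed “hit” together with its eventual outcome, so that the tests set out at 5.2(a), below, could actually be applied:

```python
# Illustrative sketch of the minimal record-keeping that would make the
# effectiveness tests discussed below applicable. All field names and
# categories are hypothetical assumptions, not anything in the directive.

from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    DISCARDED = auto()        # initial hit not confirmed on manual review
    NO_FOLLOW_UP = auto()     # passed on, but no further action taken
    FORMAL_SUSPECT = auto()   # person formally declared a suspect
    CHARGED = auto()
    CONVICTED = auto()

@dataclass
class HitRecord:
    hit_id: str
    basis: str                  # e.g. "SIS II Art. 26" or "pre-determined criteria"
    pnr_relevant_offence: bool  # was the underlying offence PNR-relevant?
    outcome: Outcome            # what ultimately happened to the person

def is_positive(record: HitRecord) -> bool:
    """A 'positive' result in the sense argued in this opinion."""
    return record.pnr_relevant_offence and record.outcome in {
        Outcome.FORMAL_SUSPECT, Outcome.CHARGED, Outcome.CONVICTED}
```

No such outcome-linked records are currently kept in any systematic, comparable way.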

The excuse for not doing this essential work – that it would be too costly or demanding of law enforcement time and staff – is utterly unconvincing, given the many millions of euros that are being devoted to developing the “high risk” intrusive technologies themselves.

5.2 An attempt at an assessment

(a) The appropriate tests to be applied

(aa)     The general tests

In my opinion, the appropriate tests to be applied to mass surveillance measures such as those carried out under the PNR Directive (and were carried out under the Data Retention Directive, and are still carried out under the national data retention laws of the EU Member States that continue to apply in spite of the CJEU case-law) are:

Have the entities that apply the mass surveillance measure – i.e., in the case of the PNR Directive (and the DRD), the European Commission and the EU Member States – produced reliable, verifiable evidence:

(i) that those measures have actually, demonstrably contributed significantly to the stated purpose of the measures, i.e., in relation to the PNR Directive, to the fight against PNR-relevant crimes (and in relation to the DRD, to the fight against “serious crime as defined by national law”); and

(ii) that those measures have demonstrably not seriously negatively affected the interests and fundamental rights of the persons to whom they were applied?

If the mass surveillance measures do not demonstrably pass both these tests, they are fundamentally incompatible with European human rights and fundamental rights law.

This means the measures must be justified, by the entities that apply them, on the basis of hard, verifiable, peer-reviewable data.

(ab)     When a (confirmed) “hit” can be said to constitute a “positive” result (and when not)

In the context of collecting and assessing data, it is important to clarify when a (confirmed) “hit” can be said to constitute a “positive” result (and when not).

In my opinion, confirmed “hits” confirming the identity of “known” “persons of interest”/subjects of “Article 36 alerts” and the “identification” (labelling) of previously “unknown” persons by the PIUs as “persons who may be involved in terrorism or serious crime” can only be regarded as “positive” results under the PNR Directive if they result in those persons subsequently being formally declared to be formal suspects in relation to terrorist or other serious, PNR-relevant criminal offences.

(b) The failure of the European Commission (and the Dutch government) to meet the appropriate test

The conclusion reached by the European Commission and the Dutch Minister of Justice – that, overall, the PNR Directive and the Dutch PNR law, respectively, had been “effective”, because the EU Member States said so (Commission) or because PNR data were quite widely used and the competent authorities said so (Dutch Minister) – is fundamentally flawed, given that it was reached in the absence of any real supporting data.

It is equivalent to a snake oil salesman claiming that the effectiveness of his snake oil is proven by the fact that his franchise holders agree with him that the product is effective, or by the fact that many gullible people bought the stuff.

Or, to use the example of Covid vaccines, invoked by the judge-rapporteur: it is equivalent to claiming that a vaccine is effective because interested parties say it is, or because many people have been vaccinated with it – without any data on how many people were protected from infection or, perhaps worse, on how many people suffered serious side-effects.

At the very least, the competent authorities in the EU Member States should have been required to collect, in a systematic and comparable way, reliable information on the outcomes of the passing on of (confirmed) “hits”. Given that they have not done so – and that the Commission and the Member States have not even tried to establish reliable systems for this – there is no insight into how many of the (confirmed) “hits” actually, concretely contributed to the fight against PNR-relevant offences.

(c) An attempt to apply the tests to the different types of matches

As just explained, confirmed “hits” – whether confirming the identity of “known” “persons of interest”/subjects of “Article 36 alerts” or “identifying” (labelling) previously “unknown” persons as “persons who may be involved in terrorism or serious crime” – can only be regarded as “positive” results under the PNR Directive if they result in those persons subsequently being formally declared suspects in relation to PNR-relevant offences; and since neither the Commission nor the Member States have collected (or even tried to collect) reliable information on such outcomes, there is no insight into how many of the (confirmed) “hits” actually, concretely contributed to the fight against PNR-relevant offences.

However, the following can still usefully be observed as regards the lawfulness, suitability, effectiveness and proportionality of the different kinds of matches:

– Full PNR data are disproportionate to the purpose of basic identity checks;

– The necessity of the PNR checks against Interpol’s Stolen and Lost Travel Document database is questionable;

– The matches against unspecified national databases and “repositories” are not based on foreseeable legal rules and are therefore not based on “law”;

– The necessity and proportionality of matches against various simple, supposedly “suspicious” elements (tickets bought from a “suspicious” travel agent; “suspicious” travel route; etc.) is highly questionable; and

– The matches against more complex “pre-determined criteria” and profiles are inherently and irredeemably flawed: they lead to tens and possibly hundreds of thousands of innocent travellers being wrongly labelled as persons who “may be” involved in terrorism or serious crime, and they are therefore unsuited (D: ungeeignet) to the purpose of fighting terrorism and serious crime.

5.3 Overall conclusions

The PNR Directive and the generalised, indiscriminate collection of personal data on an enormous population – all persons flying to or from, and the vast majority of people flying within, the EU – that it facilitates (and is intended to facilitate) are part of a wider attempt by the European Union and the EU Member States to create means of mass surveillance that, in my opinion, fly in the face of the case-law of the Court of Justice of the EU.

In trying to justify the directive and the processing of personal data on hundreds of millions of individuals, the vast majority of whom are indisputably entirely innocent, the European Commission and the Member States not only fail to produce relevant, measurable and peer-reviewable data; they do not even attempt to provide the means to obtain such data. Rather, they apply “measures” of effectiveness that do not deserve the name: the wide use of the data, and the “belief” of those using them that they are useful.

If proper tests are applied (as set out in sub-section 5.2(a), above), the disingenuousness of the “justifications” becomes clear: the claims of effectiveness of the PNR Directive (and the Dutch PNR law) are built on sand; in fact, as the Dutch researchers rightly noted:

“There are no quantitative data on the way in which [and the extent to which] PNR data have contributed to the prevention, detection, investigation and prosecution of terrorist offences and serious crime.”

The Commission and the Member States also ignore the “high risks” entailed by the tools used to “identify” individuals who “may be” terrorists or serious criminals. This applies in particular to the use of algorithm/AI-based data mining, and of profiles based on such data mining, which they want to increase massively.

If the Court of Justice were to uphold the PNR Directive, it would not only endorse the mass surveillance under the directive as currently practised – it would also give the green light to a massive extension of (so far less used) sophisticated data mining and profiling technologies applied to PNR data, without regard for their mathematically inevitable serious negative consequences for tens and possibly hundreds of thousands of individuals.
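The scale of those “mathematically inevitable” consequences follows from simple base-rate arithmetic, and can be illustrated with a short, purely hypothetical Python sketch. All the numbers below are my own illustrative assumptions (roughly 500 million PNR-covered passengers a year, a few thousand genuine targets among them, and a profiling system granted an implausibly generous 99% sensitivity and 99.9% specificity); none of them are figures from the opinion or from the directive.

    # Base-rate sketch: even a near-perfect profiling system mislabels
    # very large numbers of innocent travellers. All inputs are assumptions.
    passengers = 500_000_000   # assumed yearly PNR-covered travellers
    true_targets = 5_000       # assumed genuine targets among them
    sensitivity = 0.99         # assumed: 99% of real targets are flagged
    specificity = 0.999        # assumed: 99.9% of innocents are not flagged

    innocents = passengers - true_targets
    true_pos = sensitivity * true_targets          # about 4,950
    false_pos = (1 - specificity) * innocents      # about 500,000

    print(f"true positives:  {true_pos:,.0f}")
    print(f"false positives: {false_pos:,.0f}")
    print(f"innocent share of all flags: {false_pos / (true_pos + false_pos):.1%}")

Even under these deliberately favourable assumptions, around half a million innocent travellers would be wrongly flagged each year, and roughly 99% of all “hits” would be false – which is the arithmetic underlying the conclusion that such profiling is unsuited to its stated purpose.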

What is more, that would pave the way for yet further use of such (dangerous) data mining and profiling technologies in relation to other large population sets (such as all users of electronic communications, or of bank cards). Given that the Commission has stubbornly refused to enforce the Digital Rights Ireland judgment against Member States that continue to mandate retention of communications data – and is in fact colluding with those Member States in seeking to re-introduce mandatory communications data retention EU-wide through the e-Privacy Regulation currently in the legislative process – this is a clear and imminent danger.

The hope must be that the Court will stand up for the rights of individuals, enforce the Charter of Fundamental Rights, and declare the PNR Directive (like the Data Retention Directive) to be fundamentally in breach of the Charter.

– o – O – o –

Douwe Korff (Prof.)

Cambridge (UK)

November 2021

1.1 The categories of personal data processed

An annex to the PNR Directive lists the specific categories of data that airlines must send to the database of the PIU of the Member State on whose territory the flight will land or from whose territory it will depart. This obligation is stipulated with regard to extra-EU flights but can be extended by each Member State to apply also to intra-EU flights – and all but one Member State have done so. The list of PNR data is much longer than the Advance Passenger Information (API) data that airlines must already send to the Member States under the API Directive, and includes information on travel agents used, travel routes, email addresses, payment (card) details, luggage and fellow travellers. On the other hand, some basic details (such as date of birth) are often not included in the API data.

NB: The opinion focusses on the system as it is designed and intended to operate, and on what it allows (even if not everything that may be allowed is [yet] implemented in all Member States), and less on the somewhat slow implementation of the directive in the Member States and on the technical aspects that the Commission report and the staff working document often focussed on. It notes in particular a number of elements or aspects of the directive and the system it establishes that are problematic, either conceptually or in the way they are supposed to operate or to be evaluated.

Worth reading: the final report by the EU High Level Expert Group on Information Systems and Interoperability (HLEG)

NB: The full version (PDF) of the Report is accessible HERE

On May 8th the EU High Level Expert Group on Information Systems and Interoperability (HLEG), which was set up in June 2016 following the Commission Communication on “Stronger and Smarter Information Systems for Borders and Security”, published its long-awaited 56-page report on information systems and interoperability.

Members of the HLEG were the EU Member States (plus Norway, Switzerland and Liechtenstein), the EU agencies (the Fundamental Rights Agency, Frontex, the European Asylum Support Office, Europol and eu-LISA, the agency for the operational management of large-scale IT systems), as well as representatives of the Commission, the European Data Protection Supervisor (EDPS) and the EU Counter-Terrorism Coordinator (a high-level Council General Secretariat official designated by the European Council).

Three statements, respectively from the EU Fundamental Rights Agency, the European Data Protection Supervisor and the EU Counter-Terrorism Coordinator (CTC), are attached. The first two can be considered partially dissenting opinions, while the CTC statement is quite obviously in full support of the recommendations set out in the report, as the report embodies for the first time at EU level the “availability principle” established by the European Council as early as 2004. According to that principle, if a Member State (or the EU) holds security-related information which can be useful to another Member State, it has to make it available to the authorities of that other Member State. It looks like a common-sense principle, going hand in hand with the principle of sincere cooperation between EU Member States and between them and the EU institutions.

The little detail is that when information is collected for security purposes, national and European legislation sets very strict criteria to avoid possible abuses by EU and national law enforcement authorities. This is the core of data protection legislation and of Articles 6, 7 and 8 of the EU Charter of Fundamental Rights, which prevent the EU and its Member States from becoming a sort of Big Brother “state of surveillance”. Moreover, at least until now, these principles have guided the post-Lisbon jurisprudence of the European Court of Justice in this domain, and it is quite appalling that the report makes no reference to the Luxembourg Court’s rulings dealing notably with “profiling” and “data retention” (“Digital Rights”, “Schrems”, “Tele2-Watson”…).

Needless to say, implementing all the HLEG recommendations will require several legislative measures, as well as the definition of a legally founded EU security strategy to be adopted under the responsibility of the EU co-legislators. Without a strong, legally founded EU security strategy, not only will the European Parliament continue to be out of the game, but the Court of Justice’s control of the necessity and proportionality of existing and planned EU legislative measures will also be weakened. Overall, the HLEG report is mainly focused on security-related objectives, and its references to fundamental rights and data protection read more as an “excusatio non petita” than as clearly explained reasoning (see the Fundamental Rights Agency statement). As to the content of the perceived “threats” to be countered by this new approach, it remains to be seen whether some of them (such as the conflation of irregular migration with terrorism) are not imaginary and whether, on the contrary, real threats have been overlooked.

At least the report is now public. It would be naive to consider it purely “technical”: it is highly political and will be used to justify several EU legislative measures. It will be pointless for the European Parliament to wake up only when the formal legislative proposals are submitted. If it has an alternative vision, it has to show it NOW, rather than waiting until the report has, quite likely, been “endorsed” by the Council and the European Council.

Emilio De Capitani

TEXT OF THE REPORT (NB: figures have not been imported.)


Legal Frameworks for Hacking by Law Enforcement: Identification, Evaluation and Comparison of Practices

EXECUTIVE SUMMARY OF A STUDY FOR THE EP LIBE COMMITTEE.

FULL TEXT ACCESSIBLE HERE

by Mirja GUTHEIL, Quentin LIGER, Aurélie HEETMAN, James EAGER, Max CRAWFORD (Optimity Advisors)

Hacking by law enforcement is a relatively new phenomenon within the framework of the longstanding public policy problem of balancing security and privacy. On the one hand, law enforcement agencies assert that the use of hacking techniques brings security, stating that it represents a part of the solution to the law enforcement challenge of encryption and ‘Going Dark’ without systematically weakening encryption through the introduction of ‘backdoors’ or similar techniques. On the other hand, civil society actors argue that hacking is extremely invasive and significantly restricts the fundamental right to privacy. Furthermore, the use of hacking practices pits security against cybersecurity, as the exploitation of cybersecurity vulnerabilities to provide law enforcement with access to certain data can have significant implications for the security of the internet.

Against this backdrop, the present study provides the LIBE Committee with relevant, actionable insight into the legal frameworks and practices for hacking by law enforcement. Firstly, the study examines the international and EU-level debates on the topic of hacking by law enforcement (Chapter 2), before analysing the possible legal bases for EU intervention in the field (Chapter 3). These chapters set the scene for the primary focus of the study: the comparative analysis of legal frameworks and practices for hacking by law enforcement across six selected Member States (France, Germany, Italy, the Netherlands, Poland and the UK), with further illustrative examples from three non-EU countries (Australia, Israel and the US) (Chapter 4). Based on these analyses, the study concludes (Chapter 5) and presents concrete recommendations and policy proposals for EU action in the field (Chapter 6).

The international and EU-level debates on the use of hacking techniques by law enforcement primarily evolve from the law enforcement challenge posed by encryption – i.e. the ‘Going Dark’ issue.

Going Dark is a term used to describe [the] decreasing ability [of law enforcement agencies] to lawfully access and examine evidence at rest on devices and evidence in motion across communications networks.1

According to the International Association of Chiefs of Police (IACP), law enforcement agencies are not able to investigate illegal activity and prosecute criminals without this evidence. Encryption technologies are cited as one of the major barriers to this access. Although recent political statements from several countries (including France, Germany, the UK and the US) seemingly call for ‘backdoors’ to encryption technologies, support for strong encryption at international and EU fora remains strong. As such, law enforcement agencies across the world started to use hacking techniques to bypass encryption. Although the term ‘hacking’ is not used by law enforcement agencies, these practices essentially mirror the techniques used by hackers (i.e. exploiting any possible vulnerabilities – including technical, system and/or human vulnerabilities – within an information technology (IT) system).

Law enforcement representatives, such as the IACP and Europol, report that access to encrypted and other data through such hacking techniques brings significant investigative benefits. However, it is not the only possible law enforcement solution to the ‘Going Dark’ issue. Outside of the scope of this study, the other options include: requiring users to provide their password or decrypt their data; requiring technology vendors and service providers to bypass the security of their own products and services; and the systematic weakening of encryption through the mandated introduction of ‘backdoors’ and/or weakened standards for encryption.

With the benefits of hacking established, a 2016 Joint Statement published by the European Union Agency for Network and Information Security (ENISA) and Europol2 noted that the use of hacking techniques also brings several key risks.

The primary risk relates to the fundamental right to privacy and freedom of expression and information, as enshrined in international, EU and national-level law. Hacking techniques are extremely invasive, particularly when compared with traditionally intrusive investigative tools (e.g. wiretapping, house searches, etc.). Through hacking, law enforcement can gain access to all data stored on or in transit from a device; this represents a significant amount of data (e.g. a recent investigation by Dutch law enforcement collected seven terabytes of data, which translates into around 86 million pages of Microsoft Word documents3), as well as extremely sensitive data (e.g. a person’s location and movements, all communications, all stored data, etc.). Consequently, the use of hacking techniques will inherently restrict the fundamental right to privacy.
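As a quick sanity check on the scale cited above – a back-of-the-envelope calculation of my own, not a figure from the study – the cited conversion implies roughly 80 KiB per page:

    # Rough check of the "7 terabytes = ~86 million Word pages" conversion.
    data_bytes = 7 * 10**12     # 7 TB, as cited in the study
    pages = 86_000_000          # Word pages, as cited
    print(f"implied size per page: {data_bytes / pages / 1024:.0f} KiB")  # ~79 KiB

That order of magnitude is plausible for formatted word-processing documents, and underlines how much material a single device-level operation can yield.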

Therefore, current debates at international and EU fora focus on assessing, and providing recommendations on, the current legal balances and safeguards for the restriction of the right to privacy by hacking techniques. However, these debates have assumed that hacking practices are necessary for law enforcement and simply require governing laws; they have not discussed whether the use of hacking techniques by law enforcement is necessary and proportional. The law enforcement assertions regarding the necessity of these invasive tools have not been challenged.

The second key risk relates to the security of the internet. Law enforcement use of hacking techniques has the potential to significantly weaken the security of the internet by “[increasing] the attack surface for malicious abuse”4. Given that critical infrastructure and defence organisations, as well as law enforcement agencies themselves, use the technologies targeted and potentially weakened by law enforcement hacking, the potential ramifications reach far beyond the intended target.

As such, debates at international and EU fora focus on the appropriate balances between security and privacy, as well as between security and cybersecurity. Regarding security v. privacy, the debates to date have assessed and provided recommendations on the legislative safeguards required to ensure that hacking techniques are only permitted in situations where a restriction of the fundamental right to privacy is valid in line with EU legislation (i.e. legal, necessary and proportional). Regarding security v. cybersecurity, the debates have been limited and primarily centre around the use and/or reporting of zero-day vulnerabilities discovered by law enforcement agencies.

Further risks not discussed in the Joint Statement but covered by this study include: the risks to territorial sovereignty – as law enforcement agencies may not know the physical location of the target data; and the risks related to the supply and use of commercially-developed hacking tools by governments with poor consideration for human rights.

Alongside the analysis of international and EU debates, the study presents hypotheses on the legal bases for EU intervention in the field. Although possibilities for EU legal intervention in several areas are discussed, including mutual admissibility of evidence (Art. 82(2) TFEU), common investigative techniques (Art. 87(2)(c) TFEU), operational cooperation (Art. 87(3) TFEU) and data protection (Art. 16 TFEU, Art. 7 & 8 EU Charter), the onus regarding the development of legislation in the field lies with the Member States. As such, the management of the risks associated with law enforcement hacking activities is governed at the Member State level.

As suggested by the focus of the international and EU discussions, concrete measures need to be stipulated at national-level to manage these risks. This study presents a comparative analysis of the legal frameworks for hacking by law enforcement across six Member States, as well as certain practical aspects of hacking by law enforcement, thereby providing an overview of the primary Member State mechanisms for the management of these risks. Further illustrative examples are provided from research conducted in three non-EU countries.

More specifically, the study examines the legal and practical balances and safeguards implemented at national level to ensure: i) the legality, necessity and proportionality of restrictions to the fundamental right to privacy; and ii) the security of the internet.

Regarding restrictions to the right to privacy, the study first examines the existence of specific legal frameworks for hacking by law enforcement, before exploring the ex-ante and ex-post conditions and mechanisms stipulated to govern restrictions of the right to privacy and ensure they are legal, necessary and proportional.

It is found that hacking practices are evidently deemed necessary across all Member States examined: four Member States (France, Germany, Poland and the UK) have adopted specific legislative provisions, and the remaining two are in the legislative process. For all Member States except Germany, the adoption of specific legislative provisions occurred in 2016 (France, Poland and the UK) or will occur later (Italy, the Netherlands). This confirms the novelty of these investigative techniques.

Additionally, law enforcement agencies in all Member States examined have used, or still use, hacking techniques in the absence of specific legislative provisions, under so-called ‘grey area’ legal provisions. Given the invasiveness of hacking techniques, these grey area provisions are considered insufficient to adequately protect the right to privacy.

Where specific legal provisions have been adopted, all stakeholders agree that a restriction of the right to privacy requires the implementation of certain safeguards. The current or proposed legal frameworks of all six Member States comprise a suite of ex-ante conditions and ex-post mechanisms that aim to ensure the use of hacking techniques is proportionate and necessary. As recommended by various UN bodies, the provisions of primary importance include judicial authorisation of hacking practices, safeguards related to the nature, scope and duration of possible measures (e.g. limitations to crimes of a certain gravity, limits on the duration of the hack, etc.) and independent oversight.

Although many of these types of recommended conditions are common across the Member States examined – as demonstrated in the table below – their implementation parameters differ. For instance, both German and Polish law permit law enforcement hacking without judicial authorisation in exigent circumstances, provided judicial authorisation is obtained within a specified timeframe. However, the timeframe differs (three days in Germany compared with five days in Poland). Such differences matter: the Polish timeframe was criticised by the Council of Europe’s Venice Commission for being too long.5

Furthermore, the Member States examined all accompany these common types of ex-ante and ex-post conditions with different, less common conditions. This is particularly true for ex-post oversight mechanisms. For instance, in Poland, the Minister for internal affairs provides macro-level information to the lower (Sejm) and upper (Senat) chambers of Parliament;6 and in the UK, oversight is provided by the Investigatory Powers Commissioner, who reviews all cases of hacking by law enforcement, and the Investigatory Powers Tribunal, which considers disputes or complaints surrounding law enforcement hacking.7

Key ex-ante considerations

Judicial authorisation: The legal provisions of all six Member States require ex-ante judicial authorisation for law enforcement hacking. The information to be provided in these requests differs. Select Member States (e.g. Germany, Poland, the UK) also provide for hacking without prior judicial authorisation in exigent circumstances, if judicial authorisation is subsequently provided. The timeframes for ex-post authorisation differ.

Limitation by crime and duration: All six Member States restrict the use of hacking tools based on the gravity of crimes. In some Member States, the legislation presents a specific list of crimes for which hacking is permitted; in others, the limit is set at crimes that carry a maximum custodial sentence of more than a certain number of years. The lists and numbers of years required differ by Member State. Many Member States also restrict the duration for which hacking may be used. This restriction ranges from a maximum of 1 month (France, the Netherlands) to a maximum of 6 months (UK), although extensions are permitted under the same conditions in all Member States.

Key ex-post considerations

Notification and effective remedy: Most Member States provide for the notification of targets of hacking practices, and for a remedy in cases of unlawful hacking.

Reporting and oversight: Primarily, Member States report at a micro-level through logging hacking activities and reporting them in case files. However, some Member States (e.g. Germany, Poland and the UK) have macro-level review and oversight mechanisms.

Furthermore, as regards the issue of territoriality (i.e. the difficulty law enforcement agencies face in establishing the location of the data to be collected using hacking techniques), only one Member State, the Netherlands, legally permits the hacking of devices whose location is unknown. If the device turns out to be in another jurisdiction, Dutch law enforcement must apply for Mutual Legal Assistance.

As such, when aggregated, these provisions strongly mirror Article 8 of the European Convention on Human Rights, as well as the UN recommendations and paragraph 95 of the ECtHR judgment in Weber and Saravia v. Germany. However, there are many, and varied, criticisms when the Member State conditions are examined in isolation. Some of the provisions criticised include: the limits based on the gravity of crimes (e.g. the Netherlands, France and Poland); the provisions for notification and effective remedy (e.g. Italy and the Netherlands); the process for screening and deleting non-relevant data (Germany); the definition of devices that can be targeted (e.g. the Netherlands); the duration permitted for hacking (e.g. Poland); and a lack of knowledge amongst the judiciary (e.g. France, Germany, Italy and the Netherlands). With this said, certain elements, taken in isolation, can be called good practices. Such examples are presented below.

Select good practice: Member State legislative frameworks

Germany: Although the provisions for the screening and deletion of data related to the core area of private life were deemed unconstitutional in a 2016 ruling, they are a positive step. If the provisions are amended, as stipulated in the ruling, to ensure screening by an independent body, they would provide strong protection for the targeted individual’s private data.

Italy: The 2017 draft Italian law includes a range of provisions related to the development and monitoring of the continued use of hacking tools. As such, one academic stakeholder remarked that the drafting of the law must have been driven by technicians. However, these provisions bring significant benefits in terms of supervision and oversight of the use of hacking tools. Furthermore, the Italian draft law takes great care to separate the functionalities of the hacking tools, thus protecting against the overuse or abuse of a hacking tool’s extensive capabilities.

Netherlands: The Dutch Computer Crime III Bill stipulates the need to conduct a formal proportionality assessment for each hacking request, with the assistance of a dedicated Central Review Commission (Centrale Toetsings Commissie). Also, the law requires rules to be laid down on the authorisation and expertise of the investigation officers that can perform hacking.

With these findings in mind, the study concludes that the specific national-level legal provisions examined provide for the use of hacking techniques in a wide array of circumstances. The varied combinations of requirements – including those related to the gravity of crimes, the duration and purpose of operations, and oversight – result in a situation where the law does not impose much stricter conditions on hacking than on less intrusive investigative activities such as interception.

Based on the study findings, relevant and actionable policy proposals and recommendations have been developed under the two key elements: i) the fundamental right to privacy; and ii) the security of the internet.

Recommendations and policy proposals: Fundamental right to privacy

The study concludes that the use of ‘grey area’ legal provisions is not sufficient to protect the fundamental right to privacy. This is primarily because existing legal provisions do not provide for the more invasive nature of hacking techniques and do not offer the legislative precision and clarity required under the Charter and the ECHR.

Furthermore, many of these provisions have only recently been enacted. As such, there is a need for robust evidence-based monitoring and evaluation of the practical application of these provisions. It is therefore recommended that the application of these new legal provisions is evaluated regularly at national level, and that the results of these evaluations are assessed at EU level.

If specific legislative provisions are deemed necessary, the study recommends a range of good-practice, specific ex-ante and ex-post provisions governing the use of hacking practices by law enforcement agencies. These are detailed in Chapter 6.

Policy proposal 1: The European Parliament should pass a resolution calling on Member States to conduct a Privacy Impact Assessment when new laws are proposed to permit and govern the use of hacking techniques by law enforcement agencies. This Privacy Impact Assessment should focus on the necessity and proportionality of the use of hacking tools and should require input from national data protection authorities.

Policy proposal 2: The European Parliament should reaffirm the need for Member States to adopt a clear and precise legal basis if law enforcement agencies are to use hacking techniques.

Policy proposal 3: The European Parliament should commission more research, or encourage the European Commission or other bodies to conduct more research, on the topic. In response to the Snowden revelations, the European Parliament called on the EU Agency for Fundamental Rights (FRA) to thoroughly research fundamental rights protection in the context of surveillance. A similar brief related to the legal frameworks governing the use of hacking techniques by law enforcement across all EU Member States would be an invaluable piece of research.

Policy proposal 4: The European Parliament should encourage Member States to undertake evaluation and monitoring activities on the practical application of the new legislative provisions that permit hacking by law enforcement agencies.

Policy proposal 5: The European Parliament should call on the EU Agency for Fundamental Rights (FRA) to develop a practitioner handbook related to the governing of hacking by law enforcement. This handbook should be intended for lawyers, judges, prosecutors, law enforcement officers and others working with national authorities, as well as non-governmental organisations and other bodies confronted with legal questions in the areas set out by the handbook. These areas should cover the invasive nature of hacking techniques and relevant safeguards as per international and EU law and case law, as well as appropriate mechanisms for supervision and oversight.

Policy proposal 6: The European Parliament should call on EU bodies, such as the FRA, CEPOL and Eurojust, to provide training for national-level members of the judiciary and data protection authorities, in conjunction with the abovementioned handbook, on the technical means for hacking in use across the Member States, their potential for invasiveness and the principles of necessity and proportionality in relation to these technical means.

Recommendations and policy proposals: Security of the internet

The primary recommendation related to the security of the internet is that the position of the EU against the implementation of ‘backdoors’ and similar techniques, and in support of strong encryption standards, should be reaffirmed, given the prominent role encryption plays in our society and its importance to the EU’s Digital Agenda. To support this position, the EU should ensure continued engagement with global experts in computer science, as well as with civil society privacy and digital rights groups.

The actual impacts of hacking by law enforcement on the security of the internet are as yet unknown. More work should be done at the Member State level to assess the potential impacts, so that these data can feed into overarching discussions on the necessity and proportionality of law enforcement hacking. Furthermore, beyond understanding the risks to the security of the internet, more work should be done to educate those involved in the authorisation and use of hacking techniques by law enforcement.

At present, the steps taken to safeguard the security of the internet against the potential risks of hacking are not widespread. As such, the specific legislative provisions governing the use of hacking techniques by law enforcement, if deemed necessary, should safeguard the security of the internet and of the targeted device, including by requiring that the vulnerabilities used to gain access to a device be reported to the appropriate technology vendor or service provider, and that the software or hardware used be fully removed from the targeted device.

Policy proposal 7: The European Parliament should pass a resolution calling on Member States to conduct an Impact Assessment to examine the impact of new or existing laws governing the use of hacking techniques by law enforcement on the security of the internet.

Policy proposal 8: The European Parliament, through enhanced cooperation with Europol and the European Union Agency for Network and Information Security (ENISA), should reaffirm its commitment to strong encryption in light of the discussions on the topic of hacking by law enforcement. In addition, the Parliament should reaffirm its opposition to the implementation of backdoors and similar techniques in information technology infrastructures or services.

Policy proposal 9: Given the lack of discussion around handling zero-day vulnerabilities, the European Parliament should support the efforts made under the cybersecurity contractual Public-Private Partnership (PPP) to develop appropriate responses to handling zero-day vulnerabilities, taking into consideration the risks related to fundamental rights and the security of the internet.

Policy proposal 10: Extending policy proposal 5, above, the proposed FRA handbook should also cover the risks posed to the security of the internet by the use of hacking techniques.

Policy proposal 11: Extending policy proposal 6, training provided to the judiciary by EU bodies such as the FRA, CEPOL and Eurojust should also educate these individuals on the risks posed to the security of the internet by hacking techniques.

Policy proposal 12: Given the lack of discussion around the risks posed to the security of the internet by hacking practices, the European Parliament should encourage debates at the appropriate fora specifically aimed at understanding these risks and the approaches to managing them. Law enforcement representatives should be encouraged to take part in such discussions.

Parliamentary Tracker: the EP incoming resolution on the EU-USA (so-called) “Privacy Shield”…

NOTA BENE: Below is the text that will be submitted to the vote at the next EP plenary. As on previous occasions, the text is well drafted and legally precise, and it confirms the high level of competence that the European Parliament (and its LIBE committee) has developed over the last 17 years, from the first inquiry on Echelon (2000) and the Safe Harbour decision (2000), to the EU-USA agreements on PNR (since 2003, a thirteen-year-long saga…) and the SWIFT agreement (2006)…

What is puzzling are the criticisms raised against the so-called “adequacy finding” mechanism, which empowers the European Commission to decide whether a third country “adequately” protects EU citizens’ personal data. The Commission’s weakness vis-à-vis our strongest transatlantic ally was already very well known when parliamentarians recently reformed the European legal framework on data protection in view of the new legal bases foreseen by the Treaties and by Articles 7 and 8 of the EU Charter. However, the EP did not try to strengthen the “adequacy” mechanism by transforming it at least into a “delegated” function (so that it would have been possible for the EP to block a decision that could have weakened our standards).

Now the US Congress is weakening the (already poor) US data protection regime, and the new US administration will probably go in the same direction. It seems to me too easy to complain now about something that one recently had the chance to fix…

Let’s now hope that the Court of Justice, in answering the request for an opinion on the EU-Canada PNR agreement, will give the EU legislator some additional recommendations; but as an EU citizen I would have preferred stronger EU legislation to being ruled by European or national judges…

Emilio De Capitani

B8‑0235/2017 European Parliament resolution on the adequacy of the protection afforded by the EU-US Privacy Shield (2016/3018(RSP))

The European Parliament,

– having regard to the Treaty on European Union (TEU), the Treaty on the Functioning of the European Union (TFEU) and Articles 6, 7, 8, 11, 16, 47 and 52 of the Charter of Fundamental Rights of the European Union,

– having regard to Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (Data Protection Directive)[1],

– having regard to Council Framework Decision 2008/977/JHA of 27 November 2008 on the protection of personal data processed in the framework of police and judicial cooperation in criminal matters[2],

– having regard to Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)[3], and to Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA[4],

– having regard to the judgment of the Court of Justice of the European Union of 6 October 2015 in Case C-362/14 Maximillian Schrems v Data Protection Commissioner[5],

– having regard to the Commission communication to the European Parliament and the Council of 6 November 2015 on the transfer of personal data from the EU to the United States of America under Directive 95/46/EC following the judgment by the Court of Justice in Case C-362/14 (Schrems) (COM(2015)0566),

– having regard to the Commission communication to the European Parliament and the Council of 10 January 2017 on Exchanging and Protecting Personal Data in a Globalised World (COM(2017)0007),

– having regard to the judgment of the Court of Justice of the European Union of 21 December 2016 in Cases C-203/15 Tele2 Sverige AB v Post- och telestyrelsen and C-698/15 Secretary of State for the Home Department v Tom Watson and Others[6],

– having regard to Commission Implementing Decision (EU) 2016/1250 of 12 July 2016 pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequacy of the protection provided by the EU-US Privacy Shield[7],

– having regard to Opinion 4/2016 of the European Data Protection Supervisor (EDPS) on the EU-US Privacy Shield draft adequacy decision[8],

– having regard to the Opinion of the Article 29 Data Protection Working Party of 13 April 2016 on the EU-US Privacy Shield draft adequacy decision[9] and its Statement of 26 July 2016[10],

– having regard to its resolution of 26 May 2016 on transatlantic data flows[11],

– having regard to Rule 123(2) of its Rules of Procedure,

  1. whereas the Court of Justice of the European Union (CJEU) in its judgment of 6 October 2015 in Case C-362/14 Maximillian Schrems v Data Protection Commissioner invalidated the Safe Harbour decision and clarified that an adequate level of protection in a third country must be understood to be ‘essentially equivalent’ to that guaranteed within the European Union by virtue of Directive 95/46/EC read in the light of the Charter of Fundamental Rights of the European Union (hereinafter ‘the EU Charter’), prompting the need to conclude negotiations on a new arrangement so as to ensure legal certainty on how personal data should be transferred from the EU to the US;
  2. whereas, when examining the level of protection afforded by a third country, the Commission is obliged to assess the content of the rules applicable in that country deriving from its domestic law or its international commitments, as well as the practice designed to ensure compliance with those rules, since it must, under Article 25(2) of Directive 95/46/EC, take account of all the circumstances surrounding a transfer of personal data to a third country; whereas this assessment must not only refer to legislation and practices relating to the protection of personal data for commercial and private purposes, but must also cover all aspects of the framework applicable to that country or sector, in particular, but not limited to, law enforcement, national security and respect for fundamental rights;
  3. whereas transfers of personal data between commercial organisations of the EU and the US are an important element for the transatlantic relationships; whereas these transfers should be carried out in full respect of the right to the protection of personal data and the right to privacy; whereas one of the fundamental objectives of the EU is the protection of fundamental rights, as enshrined in the EU Charter;
  4. whereas in its Opinion 4/2016 the EDPS raised several concerns on the draft Privacy Shield; whereas the EDPS welcomes in the same opinion the efforts made by all parties to find a solution for transfers of personal data from the EU to the US for commercial purposes under a system of self-certification;
  5. whereas in its Opinion 01/2016 on the EU-US Privacy Shield draft adequacy decision the Article 29 Working Party welcomed the significant improvements brought about by the Privacy Shield compared with the Safe Harbour decision whilst also raising strong concerns about both the commercial aspects and access by public authorities to data transferred under the Privacy Shield;
  6. whereas on 12 July 2016, after further discussions with the US administration, the Commission adopted its Implementing Decision (EU) 2016/1250, declaring the adequate level of protection for personal data transferred from the Union to organisations in the United States under the EU-US Privacy Shield;
  7. whereas the EU-US Privacy Shield is accompanied by several letters and unilateral statements from the US administration explaining, inter alia, the data protection principles, the functioning of oversight, enforcement and redress and the protections and safeguards under which security agencies can access and process personal data;
  8. whereas in its statement of 26 July 2016, the Article 29 Working Party welcomes the improvements brought by the EU-US Privacy Shield mechanism compared with Safe Harbour and commended the Commission and the US authorities for having taken into consideration its concerns; whereas the Article 29 Working Party indicates, nevertheless, that a number of its concerns remain, regarding both the commercial aspects and the access by US public authorities to data transferred from the EU, such as the lack of specific rules on automated decisions and of a general right to object, the need for stricter guarantees on the independence and powers of the Ombudsperson mechanism, and the lack of concrete assurances of not conducting mass and indiscriminate collection of personal data (bulk collection);
  9. Welcomes the efforts made by both the Commission and the US administration to address the concerns raised by the CJEU, the Member States, the European Parliament, data protection authorities (DPAs) and stakeholders, so as to enable the Commission to adopt the implementing decision declaring the adequacy of the EU-US Privacy Shield;
  10. Acknowledges that the EU-US Privacy Shield contains significant improvements regarding the clarity of standards compared with the former EU-US Safe Harbour and that US organisations self-certifying adherence to the EU-US Privacy Shield will have to comply with clearer data protection standards than under Safe Harbour;
  11. Takes note that as at 23 March 2017, 1 893 US organisations have joined the EU-US Privacy Shield; regrets that the Privacy Shield is based on voluntary self-certification and therefore applies only to US organisations which have voluntarily signed up to it, which means that many companies are not covered by the scheme;
  12. Acknowledges that the EU-US Privacy Shield facilitates data transfers from SMEs and businesses in the Union to the US;
  13. Notes that, in line with the ruling of the CJEU in the Schrems case, the powers of the European DPAs remain unaffected by the adequacy decision and they can, therefore, exercise them, including the suspension or the ban of data transfers to an organisation registered with the EU-US Privacy Shield; welcomes in this regard the prominent role given by the Privacy Shield Framework to Member State DPAs to examine and investigate claims related to the protection of the rights to privacy and family life under the EU Charter and to suspend transfers of data, as well as the obligation placed upon the US Department of Commerce to resolve such complaints;
  14. Notes with satisfaction that under the Privacy Shield Framework, EU data subjects have several means available to them to pursue legal remedies in the US: first, complaints can be lodged either directly with the company or through the Department of Commerce following a referral by a DPA, or with an independent dispute resolution body, secondly, with regard to interferences with fundamental rights for the purpose of national security, a civil claim can be brought before the US court and similar complaints can also be addressed by the newly created independent Ombudsperson, and finally, complaints about interferences with fundamental rights for the purposes of law enforcement and the public interest can be dealt with by motions challenging subpoenas; encourages further guidance from the Commission and DPAs to make those legal remedies all the more easily accessible and available;
  15. Acknowledges the clear commitment of the US Department of Commerce to closely monitor the compliance of US organisations with the EU-US Privacy Shield Principles and their intention to take enforcement actions against entities failing to comply;
  16. Reiterates its call on the Commission to seek clarification on the legal status of the ‘written assurances’ provided by the US and to ensure that any commitment or arrangement foreseen under the Privacy Shield is maintained following the taking up of office of a new administration in the United States;
  17. Considers that, despite the commitments and assurances made by the US Government by means of the letters attached to the Privacy Shield arrangement, important questions remain as regards certain commercial aspects, national security and law enforcement;
  18. Specifically notes the significant difference between the protection provided by Article 7 of Directive 95/46/EC and the ‘notice and choice’ principle of the Privacy Shield arrangement, as well as the considerable differences between Article 6 of Directive 95/46/EC and the ‘data integrity and purpose limitation’ principle of the Privacy Shield arrangement; points out that instead of the need for a legal basis (such as consent or contract) that applies to all processing operations, the data subject rights under the Privacy Shield Principles only apply to two narrow processing operations (disclosure and change of purpose) and only provide for a right to object (‘opt-out’);
  19. Takes the view that these numerous concerns could lead to a fresh challenge to the decision on the adequacy of the protection being brought before the courts in the future; emphasises the harmful consequences as regards both respect for fundamental rights and the necessary legal certainty for stakeholders;
  20. Notes, amongst other things, the lack of specific rules on automated decision-making and on a general right to object, and the lack of clear principles on how the Privacy Shield Principles apply to processors (agents);
  21. Notes that, while individuals have the possibility to object vis-à-vis the EU controller to any transfer of their personal data to the US, and to the further processing of those data in the US where the Privacy Shield company acts as a processor on behalf of the EU controller, the Privacy Shield lacks specific rules on a general right to object vis-à-vis the US self-certified company;
  22. Notes that only a fraction of the US organisations that have joined the Privacy Shield have chosen to use an EU DPA for the dispute resolution mechanism; is concerned that this constitutes a disadvantage for EU citizens when trying to enforce their rights;
  23. Notes the lack of explicit principles on how the Privacy Shield Principles apply to processors (agents), while recognising that all principles apply to the processing of personal data by any US self-certified company ‘[u]nless otherwise stated’ and that the transfer for processing purposes always requires a contract with the EU controller which will determine the purposes and means of processing, including whether the processor is authorised to carry out onward transfers (e.g. for sub-processing);
  24. Stresses that, as regards national security and surveillance, notwithstanding the clarifications brought by the Office of the Director of National Intelligence (ODNI) in the letters attached to the Privacy Shield framework, ‘bulk surveillance’, despite the different terminology used by the US authorities, remains possible; regrets the lack of a uniform definition of the concept of bulk surveillance and the adoption of the American terminology, and therefore calls for a uniform definition of bulk surveillance linked to the European understanding of the term, where evaluation is not made dependent on selection; stresses that any kind of mass surveillance is in breach of the EU Charter;
  25. Recalls that Annex VI (letter from Robert S. Litt, ODNI) clarifies that under Presidential Policy Directive 28 (hereinafter ‘PPD-28’), bulk collection of personal data and communications of non-US persons is still permitted in six cases; points out that such bulk collection only has to be ‘as tailored as feasible’ and ‘reasonable’, which does not meet the stricter criteria of necessity and proportionality as laid down in the EU Charter;
  26. Deplores the fact that the EU-US Privacy Shield does not prohibit the collection of bulk data for law enforcement purposes;
  27. Stresses that in its judgment of 21 December 2016, the CJEU clarified that the EU Charter ‘must be interpreted as precluding national legislation which, for the purpose of fighting crime, provides for the general and indiscriminate retention of all traffic and location data of all subscribers and registered users relating to all means of electronic communication’; points out that the bulk surveillance in the US therefore does not provide for an essentially equivalent level of the protection of personal data and communications;
  28. Is alarmed by the recent revelations about surveillance activities conducted by a US electronic communications service provider on all emails reaching its servers, upon request of the National Security Agency (NSA) and the FBI, as late as 2015, i.e. one year after Presidential Policy Directive 28 was adopted and during the negotiation of the EU-US Privacy Shield; insists that the Commission seek full clarification from the US authorities and make the answers provided available to the Council, Parliament and national DPAs; sees this as a reason to strongly doubt the assurances brought by the ODNI; is aware that the EU-US Privacy Shield rests on PPD-28, which was issued by the President and can also be repealed by any future President without Congress’s consent;
  29. Expresses great concerns at the issuance of the ‘Procedures for the Availability or Dissemination of Raw Signals Intelligence Information by the National Security Agency under Section 2.3 of Executive Order 12333’, approved by the Attorney General on 3 January 2017, allowing the NSA to share vast amounts of private data gathered without warrants, court orders or congressional authorisation with 16 other agencies, including the FBI, the Drug Enforcement Agency and the Department of Homeland Security; calls on the Commission to immediately assess the compatibility of these new rules with the commitments made by the US authorities under the Privacy Shield, as well as their impact on the level of personal data protection in the United States;
  30. Deplores the fact that neither the Privacy Shield Principles nor the letters of the US administration providing clarifications and assurances demonstrate the existence of effective judicial redress rights for individuals in the EU whose personal data are transferred to a US organisation under the Privacy Shield Principles and further accessed and processed by US public authorities for law enforcement and public interest purposes, which were emphasised by the CJEU in its judgment of 6 October 2015 as the essence of the fundamental right in Article 47 of the EU Charter;
  31. Recalls its resolution of 26 May 2016 stating that the Ombudsperson mechanism set up by the US Department of State is not sufficiently independent and is not vested with sufficient effective powers to carry out its duties and provide effective redress to EU individuals; notes that according to the representations and assurances provided by the US Government the Office of the Ombudsperson is independent from the US intelligence services, free from any improper influence that could affect its function and moreover works together with other independent oversight bodies with effective powers of supervision over the US Intelligence Community; is generally concerned that an individual affected by a breach of the rules can apply only for information and for the data to be deleted and/or for a stop to further processing, but has no right to compensation;
  32. Regrets that the procedure of adoption of an adequacy decision does not provide for a formal consultation of relevant stakeholders such as companies, and in particular SMEs’ representation organisations;
  33. Regrets that the Commission followed the procedure for adoption of the Commission implementing decision in a practical manner that de facto has not enabled Parliament to exercise its right of scrutiny on the draft implementing act in an effective manner;
  34. Calls on the Commission to take all the necessary measures to ensure that the Privacy Shield will fully comply with Regulation (EU) 2016/679, to be applied as from 16 May 2018, and with the EU Charter;
  35. Calls on the Commission to ensure, in particular, that personal data that has been transferred to the US under the Privacy Shield can only be transferred to another third country if that transfer is compatible with the purpose for which the data was originally collected, and if the same rules of specific and targeted access for law enforcement apply in the third country;
  36. Calls on the Commission to monitor whether personal data which is no longer necessary for the purpose for which it had been originally collected is deleted, including by law enforcement agencies;
  37. Calls on the Commission to closely monitor whether the Privacy Shield allows for the DPAs to fully exercise all their powers, and if not, to identify the provisions that result in a hindrance to the DPAs’ exercise of powers;
  38. Calls on the Commission to conduct, during the first joint annual review, a thorough and in-depth examination of all the shortcomings and weaknesses referred to in this resolution and in its resolution of 26 May 2016 on transatlantic data flows, and those identified by the Article 29 Working Party, the EDPS and the stakeholders, and to demonstrate how they have been addressed so as to ensure compliance with the EU Charter and Union law, and to evaluate meticulously whether the mechanisms and safeguards indicated in the assurances and clarifications by the US administration are effective and feasible;
  39. Calls on the Commission to ensure that when conducting the joint annual review, all the members of the team have full and unrestricted access to all documents and premises necessary for the performance of their tasks, including elements allowing a proper evaluation of the necessity and proportionality of the collection and access to data transferred by public authorities, for either law enforcement or national security purposes;
  40. Stresses that all members of the joint review team must be ensured independence in the performance of their tasks and must be entitled to express their own dissenting opinions in the final report of the joint review, which will be public and annexed to the joint report;
  41. Calls on the Union DPAs to monitor the functioning of the EU-US Privacy Shield and to exercise their powers, including the suspension or definitive ban of personal data transfers to an organisation in the EU-US Privacy Shield if they consider that the fundamental rights to privacy and the protection of personal data of the Union’s data subjects are not ensured;
  42. Stresses that Parliament should have full access to any relevant document related to the joint annual review;
  43. Instructs its President to forward this resolution to the Commission, the Council, the governments and national parliaments of the Member States and the US Government and Congress.

NOTES
[1] OJ L 281, 23.11.1995, p. 31.
[2] OJ L 350, 30.12.2008, p. 60.
[3] OJ L 119, 4.5.2016, p. 1.
[4] OJ L 119, 4.5.2016, p. 89.
[5] ECLI:EU:C:2015:650.
[6] ECLI:EU:C:2016:970.
[7] OJ L 207, 1.8.2016, p. 1.
[8] OJ C 257, 15.7.2016, p. 8.
[9] http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2016/wp238_en.pdf
[10] http://ec.europa.eu/justice/data-protection/article-29/press-material/press-release/art29_press_material/2016/20160726_wp29_wp_statement_eu_us_privacy_shield_en.pdf
[11] Texts adopted, P8_TA(2016)0233.

The Meijers Committee on the inter-parliamentary scrutiny of Europol

ORIGINALLY PUBLISHED ON THE MEIJERS COMMITTEE (*) PAGE HERE

  1. Introduction

Article 88 TFEU provides for a unique form of scrutiny of the functioning of Europol. It lays down that the regulations governing Europol ‘shall also lay down the procedures for scrutiny of Europol’s activities by the European Parliament, together with national Parliaments’.

Such a procedure is now laid down in Article 51 of the Europol Regulation (Regulation (EU) 2016/794), which provides for the establishment of a ‘specialised Joint Parliamentary Scrutiny Group’ (JPSG) that will play the central role in ensuring this scrutiny. The Europol Regulation applies from 1 May 2017.

Article 51 of the Europol Regulation also closely relates to Protocol No 1 to the Lisbon Treaty on the role of national parliaments in the EU. Article 9 of that Protocol provides: “The European Parliament and national Parliaments shall together determine the organisation and promotion of effective and regular inter-parliamentary cooperation within the Union.”

Article 51(2) not only lays down the basis for the political monitoring of Europol’s activities (the democratic perspective), but also stipulates that, “in fulfilling its mission”, the JPSG should pay attention to the impact of Europol’s activities on the fundamental rights and freedoms of natural persons (the rule of law perspective).

The Meijers Committee takes the view that improving the inter-parliamentary scrutiny of Europol, with appropriate involvement of both the national and the European levels, will in itself enhance the attention paid by Europol to the perspectives of democracy and the rule of law, and in particular to the protection of fundamental rights. It will make Europol more alert to these perspectives.

Moreover, the scrutiny mechanism could pay specific attention to the protection of fundamental rights within Europol. This is particularly important in view of the large amounts of – often sensitive – personal data processed by Europol and exchanged with national police authorities of Member States, as well as with authorities of third countries.

The practical implementation of Article 51 is currently being debated, for example in the inter-parliamentary committee of the European Parliament and national parliaments.[1] As specified by Article 51(1) of the Europol Regulation, the organisation and the rules of procedure of the JPSG are still to be determined.

The Meijers Committee wishes to engage in this debate and, in this note, makes recommendations on the organisation and the rules of procedure of the JPSG.
