Why the European Parliament should reject (or substantially amend) the Commission's proposal on EU Information Security ("INFOSEC"). (1) The issue of "classified information"

By Emilio De Capitani

1. Setting the scene: the EU legal framework on access to documents and to confidential information before the Lisbon Treaty

To better understand why the Commission's "INFOSEC" draft legislative proposal (2022/0084(COD)) on information security should be substantially amended, let us recall the EU legal framework on access to documents, and notably on EU classified information, as it stood before the Lisbon Treaty and the Charter. With the entry into force of the Amsterdam Treaty in May 1999, the EP and the Council came under the obligation (art.255 TEC) to adopt, within two years, new EU rules framing the individual right of access to documents and establishing at the same time the "general principles and limits on grounds of public or private interest" which may limit such right of access (emphasis added).

Notwithstanding a rather cautious Commission legislative proposal, the EP strongly advocated a stronger legal framework for access to documents, for legislative transparency and even for the treatment at EU level of information which, because of its content, should be treated confidentially (so-called "sensitive" or "classified" information).

Needless to say, at Member State level "sensitive" or "classified" information is deemed to protect the "essential interests" of the State and is, by law, subject to a special parliamentary and judicial oversight regime.[1] As a consequence, at EU level, even after Lisbon, national classified information is considered an essential aspect of national security, which "remains the sole responsibility of each Member State" (art. 4.2 TEU), and "no Member State shall be obliged to supply information the disclosure of which it considers contrary to the essential interests of its security" (art. 346.1(a) TFEU).

However, if national classified information is shared at EU level, as is the case for EU internal or external security policies, it must be treated, as in any other EU policy, in compliance with EU rules. The question is on what legal basis those rules should be founded. This issue came to the fore already in 2000, when the newly appointed Council Secretary General Javier SOLANA negotiated with NATO a first interim agreement on the exchange of classified information. The agreement, which mirrored at EU level the NATO classification standards ("Confidential", "Secret" and "Top Secret"), was founded on the Council's internal organisational powers, but this "administrative" approach was immediately challenged before the Court of Justice by a Member State (the Netherlands)[2] and by the European Parliament itself[3], which considered that the correct legal basis should have been the new legislation on access to documents foreseen by art.255 TEC, which was at the time under negotiation. The Council at last acknowledged that art.255 TEC on access to documents was the right legal basis; a specific article (art.9[4]) was inserted in Regulation 1049/01 implementing art.255 TEC, and the EP and the Netherlands withdrew their applications before the CJEU[5].

The point is that art.9 of Regulation 1049/01 still covers only possible access by EU citizens, and such access may be vetoed by the "originator" of the classified information. Unlike national legislation on classified information, art.9 unfortunately did not solve, for lack of time, the issue of democratic and judicial control over EUCI by the European Parliament and by the Court of Justice. Art.9(7) of Regulation 1049/01 makes only a generic reference to the fact that "The Commission and the Council shall inform the European Parliament regarding sensitive documents in accordance with arrangements agreed between the institutions." A transitional and partial solution was then found by negotiating Interinstitutional Agreements between the Council and the EP in 2002[6] and in 2014[7], and with the European Commission in 2010[8].

The point is that interinstitutional agreements, even if they may be binding (art.295 TFEU), can only "facilitate" the implementation of EU law which, as described above, in the case of democratic and judicial control of classified information still does not exist. Not surprisingly, both the Council and the Commission interinstitutional agreements consider that the "originator" principle should also bind the other EU institutions, such as the European Parliament and the Court of Justice.

This situation is clearly unacceptable in an EU deemed to be democratic and bound by the rule of law, as it creates zones to which not only EU citizens but also their representatives may have no access because of the "originator's" veto. As a result, in these situations the EU is no longer governed by the rule of law but only by the "goodwill" of the originator.

To make things even worse, the Council's established practice is to negotiate with third countries and international organisations agreements[9] covering the exchange of confidential information while declaring that the other EU institutions (such as the EP and the Court of Justice) should be considered "third parties", and therefore subject to the "originator" principle.

This situation has become Kafkaesque with the entry into force of the Lisbon Treaty, which now recognises at primary law level the EP's right to be "immediately and fully" informed, also on classified information exchanged during the negotiation of an international agreement[10]. Inexplicably, fourteen years after the entry into force of the Treaty, the European Parliament has not yet challenged these clearly unlawful agreements before the Court of Justice.

That institutional problem apart, the fact remains that, until the presentation of the draft INFOSEC proposal, no one challenged the idea that in the EU the correct legal basis for the treatment of classified information should be the same as for access to documents, which after the entry into force of the Lisbon Treaty is now art.15.3 TFEU[11].

2. Why the Commission's choice of art.298 TFEU as the legal basis for the INFOSEC proposal is highly questionable[12]

After the entry into force of the Lisbon Treaty and of the Charter, the relationship between the fundamental right of access to documents and the corresponding obligation of the EU administration to grant administrative transparency and to disclose (or not) its information/documents has been further strengthened, also because of art.52 of the EU Charter.

In an EU bound by the rule of law and by democratic principles, openness and the fundamental right of access should be the general rule, and "limits" to such rights should be an exception framed only "by law". As described above, the correct legal basis for such a "law" is art.15 TFEU which, like the former art.255 TEC, states that "general principles and limits on grounds of public or private interest" may limit the right of access and the obligation of disclosing EU internal information/documents. Also from a systemic point of view, "limits" to disclosure and to access are now covered by the same Treaty article which frames (in much stronger words than art.255 before Lisbon) the principles of "good governance" (par. 1), of legislative transparency (par. 2) and of administrative transparency (par. 3).

This general "transparency" rule is worded as follows: "1. In order to promote good governance and ensure the participation of civil society, the Union institutions, bodies, offices and agencies shall conduct their work as openly as possible. (..) Each institution, body, office or agency shall ensure that its proceedings are transparent and shall elaborate in its own Rules of Procedure specific provisions regarding access to its documents, in accordance with the regulations referred to in the second subparagraph."

Bizarrely, the European Commission has chosen for the INFOSEC Regulation art.298 TFEU, on an open, independent and efficient EU administration, simply ignoring art.15 TFEU and making an ambiguous reference to the fact that INFOSEC should be implemented "without prejudice" to the pre-Lisbon Regulation 1049/01 dealing with access to documents and administrative transparency. How such "prejudice" could be absent when the two Regulations overlap and the INFOSEC Regulation upgrades the Council's internal security rules to legislative level is a challenging question.

It is indeed self-evident that both the INFOSEC Regulation and Regulation 1049/01 deal with the authorised/unauthorised "disclosure" of EU internal information/documents.

The overlap between the two Regulations is even more striking for the treatment of EU classified information (EUCI), as this information is covered both by art.9 of Regulation 1049/01 and now by articles 18 to 58 and annexes II to VI of the INFOSEC Regulation.

As described above, art.255 TEC has since Lisbon been replaced and strengthened by art.15 TFEU, so that the Commission's proposal to replace it with art.298 TFEU looks like a "détournement de procédure" which may be challenged before the Court for almost the same reasons already raised in 2000 by the EP and by the Netherlands. It would therefore have been sensible to relaunch the negotiations on the revision of Regulation 1049/01 in the new post-Lisbon perspective, but the Commission decided this year to withdraw the relevant legislative procedure. Submitting a legislative proposal such as INFOSEC, which promotes overall confidentiality, while at the same time withdrawing a legislative proposal promoting transparency, sends a rather strong message from the Commission to the public.

3. Does the INFOSEC proposal grant true security for EU internal information?

The point is that European administrative transparency is now a fundamental right of the individual enshrined in the Charter (Article 42). The protection of administrative data is one aspect of the "duty" of good administration enshrined in Article 41 of the Charter, which stipulates that every person has the right of access to his or her file, "with due regard for the legitimate interests of confidentiality and professional and business secrecy."

However, art.298 TFEU is not the legal basis framing professional secrecy. It is only a provision on the functioning of the institutions and bodies which, "in carrying out their tasks … [must be based] on an "open" European administration"[13]; it is not an article intended to ensure the protection of administrative documents.

This objective is better served by other legal bases in the Treaties.

First of all, protecting the archives of EU institutions and bodies from outside interference is, even before being a legitimate interest, an imperative condition laid down by the Treaties and by the related 1965 Protocol on the Privileges and Immunities of the Union, adopted on the basis of the current Article 343 TFEU. Articles 1 and 2 of that Protocol stipulate that the premises and buildings of the Union, as well as its archives, "shall be inviolable."

Furthermore, in order to ensure that, in the performance of their duties, officials are obliged to protect the documents of their institutions, Article 17 of the Staff Regulations stipulates that

1. Officials shall refrain from any unauthorized disclosure of information coming to their knowledge in the course of their duties, unless such information has already been made public or is accessible to the public.

Again (as for Regulation 1049/01), the INFOSEC Regulation states that it applies "without prejudice" to the Staff Regulations, thereby mirroring the second paragraph of art.298 TFEU, which itself states that it should be implemented "in accordance with the Staff Regulations and the rules adopted on the basis of Article 336." So, also from this second perspective, the correct legal basis for INFOSEC could be Articles 339 (on professional secrecy) and 336 TFEU, with the consequent amendment of the Staff Regulations by means of a legislative regulation of the Parliament and the Council.

By proposing a legislative regulation on the basis of Article 298, the Commission therefore circumvents both the obligations imposed by Articles 336 and 339 (on professional secrecy) and, more importantly, those of Article 15(3) TFEU, according to which each institution or body "shall ensure" (i.e., must ensure) "that its proceedings are transparent" [and therefore also their protection from external interference] "and shall elaborate in its own Rules of Procedure specific provisions regarding access to its documents" [and therefore also concerning their protection], "in accordance with the regulations referred to in the second subparagraph" (editor's note: currently Regulation 1049/01).

The objectives set out in Article 298 cannot therefore override the requirements of protecting the fundamental right of access to documents, nor those of Article 15 TFEU, which could be considered the "centre of gravity" when several legal bases are competing[14].

The same applies to compliance with the Staff Regulations and, in particular, with Article 17 thereof, cited above.

Ultimately, the provisions on the legislative procedure for Union legislative acts are not at the disposal of the Commission, given that administrative transparency is a fundamental right and the protection of documents is a corollary thereof, not a means of functioning of the institutions. Administrative transparency is a fundamental right of every person; the protection of administrative data is a legitimate interest of every administration.

Such a "public" interest can certainly limit the right of access, but only under the conditions established by the legislator under art.15 TFEU, and only by the latter.

4. Conclusions

If a recommendation may be made now to the co-legislators, it is to avoid illusory shortcuts such as the current Commission proposal, whose real impact on the EU administrative "bubble" is far from clear[15]. Since the entry into force of the Lisbon Treaty more than fourteen years ago, the EU legislator has faced much more pressing problems.

What is most needed is not the invention of several layers of illusory "protection" of EU information, but the framing of administrative procedures by law, as suggested several times by the European Parliament and by the multiannual endeavour of brilliant scholars focusing on EU administrative law[16].

What matters is that the management of, and access to, EU information should be framed by law and should not depend on the goodwill of the administrative author or recipient, as proposed by the INFOSEC Regulation. Nor is information security strengthened by transforming each of the 64 EU "entities" covered by the INFOSEC Regulation[17] into sandboxes where information is shared only with the people who, according to the "originator", have a "need to know" rather than a "right to know".

Moreover, the EU should limit, not generalise, the power of each of the 64 EU entities to create "classified" information (EUCI). In this perspective art.9 of Regulation 1049/01 indeed needs a true revision, but one in view of the new EU constitutional framework and of the new institutional balance arising from the Lisbon Treaty and the Charter.

Fourteen years after Lisbon, the democratic oversight of the European Parliament and the judicial control of the Court of Justice over classified documents should be granted by EU law, as is the case in most EU countries, and not by interinstitutional agreements which enforce the "originator" principle against these institutions, in violation of the rule of law principle as well as of the EU institutional balance.

Can it still be acceptable, fourteen years after the entry into force of the Lisbon Treaty, that the European Parliament and the Court of Justice are not taken into account in the dozens of international agreements by which the Council frames the exchange of EUCI with third countries and international organisations?

Instead of dealing with these fundamental issues, the European Commission, in its 67-page proposal, makes no reference to 24 years of experience in the treatment of classified information and prefers to drag the co-legislators into Kafkaesque debates on "sensitive but not classified information" or on the strange idea that documents should be marked "public" on purpose and not by their nature (thereby crossing the line separating public transparency from public propaganda).

All that being said, it is not the Commission which will be responsible before the citizens (and the European Court) for badly drafted legislation. It is the European Parliament and the Council which must now take their responsibility. They cannot hide behind the Commission's unwillingness to deal with substantive issues (as well as with other aspects of legislative and administrative transparency); if the Council also prefers to keep things as they were before Lisbon, it is up to the European Parliament to take the lead, establish a frank discussion with the other co-legislator and verify whether there is the will to fix the real and growing shortcomings in the EU administrative "bubble".

Continuing the negotiations on the current version of the INFOSEC proposal, notably on the complex issue of classified information, paves the way to even bigger problems which, sooner rather than later, risk being brought, as in 2000, to the CJEU's table.


[1] According to the Venice Commission “.. at International and national level access to classified documents is restricted by law to a particular group of persons. A formal security clearance is required to handle classified documents or access classified data. Such restrictions on the fundamental right of access to information are permissible only when disclosure will result in substantial harm to a protected interest and the resulting harm is greater than the public interest in disclosure.  Danger is that if authorities engage in human rights violations and declare those activities state secrets and thus avoid any judicial oversight and accountability. Giving bureaucrats new powers to classify even more information will have a chilling effect on freedom of information – the touchstone freedom for all other rights and democracy – and it may also hinder the strive towards transparent and democratic governance as foreseen since Lisbon by art.15.1 of TFEU (emphasis added) The basic fear is that secrecy bills will be abused by authorities and that they lead to wide classification of information which ought to be publicly accessible for the sake of democratic accountability.  Unreasonable secrecy is thus seen as acting against national security as “it shields incompetence and inaction, at a time that competence and action are both badly needed”. (…) Authorities must provide reasons for any refusal to provide access to information.  The ways the laws are crafted and applied must be in a manner that conforms to the strict requirements provided for in the restriction clauses of the freedom of information provisions in the ECHR and the ICCPR.” 

[2] Action brought on 9 October 2000 by the Kingdom of the Netherlands against the Council of the European Union (Case C-369/00) (2000/C 316/37)

[3] Action brought on 23 October 2000 by the European Parliament against the Council of the European Union (Case C-387/00) (2000/C 355/31). Link: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:C2000/355/31

[4] Regulation 1049/01, Article 9 "Treatment of sensitive documents":

1. Sensitive documents are documents originating from the institutions or the agencies established by them, from Member States, third countries or International Organisations, classified as “TRÈS SECRET/TOP SECRET”, “SECRET” or “CONFIDENTIEL” in accordance with the rules of the institution concerned, which protect essential interests of the European Union or of one or more of its Member States in the areas covered by Article 4(1)(a), notably public security, defence and military matters.

2. Applications for access to sensitive documents under the procedures laid down in Articles 7 and 8 shall be handled only by those persons who have a right to acquaint themselves with those documents. These persons shall also, without prejudice to Article 11(2), assess which references to sensitive documents could be made in the public register.

3. Sensitive documents shall be recorded in the register or released only with the consent of the originator.

4. An institution which decides to refuse access to a sensitive document shall give the reasons for its decision in a manner which does not harm the interests protected in Article 4.

5. Member States shall take appropriate measures to ensure that when handling applications for sensitive documents the principles in this Article and Article 4 are respected.

6. The rules of the institutions concerning sensitive documents shall be made public.

7. The Commission and the Council shall inform the European Parliament regarding sensitive documents in accordance with arrangements agreed between the institutions.

[5] Notice for the OJ: removal from the register of Case C-387/00. By order of 22 March 2002 the President of the Court of Justice of the European Communities ordered the removal from the register of Case C-387/00: European Parliament v Council of the European Union. OJ C 355 of 09.12.2000.

[6] Interinstitutional Agreement of 20 November 2002 between the European Parliament and the Council concerning access by the European Parliament to sensitive information of the Council in the field of security and defence policy (OJ C 298, 30.11.2002, p. 1).

[7] According to the Interinstitutional Agreement of 12 March 2014 between the European Parliament and the Council concerning the forwarding to and handling by the European Parliament of classified information held by the Council on matters other than those in the area of the common foreign and security policy (OJ C 95, 1.4.2014, pp. 1–7): "4. The Council may grant the European Parliament access to classified information which originates in other Union institutions, bodies, offices or agencies, or in Member States, third States or international organisations only with the prior written consent of the originator."

[8] According to annex III, point 5, of the Framework Agreement on relations between the European Parliament and the European Commission (OJ L 304, 20.11.2010, pp. 47–62): "In the case of international agreements the conclusion of which requires Parliament’s consent, the Commission shall provide to Parliament during the negotiation process all relevant information that it also provides to the Council (or to the special committee appointed by the Council). This shall include draft amendments to adopted negotiating directives, draft negotiating texts, agreed articles, the agreed date for initialling the agreement and the text of the agreement to be initialled. The Commission shall also transmit to Parliament, as it does to the Council (or to the special committee appointed by the Council), any relevant documents received from third parties, subject to the originator’s consent. The Commission shall keep the responsible parliamentary committee informed about developments in the negotiations and, in particular, explain how Parliament’s views have been taken into account."

[9] See: Agreements on the security of classified information. Link: https://eur-lex.europa.eu/EN/legal-content/summary/agreements-on-the-security-of-classified-information.html

[10] Article 218.10 TFEU states clearly that "The European Parliament shall be immediately and fully informed at all stages of the procedure" when the EU is negotiating international agreements, even when the agreement "relates exclusively or principally to the common foreign and security policy" (art.218.3 TFEU).

[11] Interestingly, reference to art.15 TFEU is also made in the EP-Council 2014 Interinstitutional Agreement on access to classified information (not dealing with external defence). See point 15: "This Agreement is without prejudice to existing and future rules on access to documents adopted in accordance with Article 15(3) TFEU; rules on the protection of personal data adopted in accordance with Article 16(2) TFEU; rules on the European Parliament’s right of inquiry adopted in accordance with the third paragraph of Article 226 TFEU; and relevant provisions relating to the European Anti-Fraud Office (OLAF)".

[12] However, this legal basis was fit for another legislative proposal, of a more technical nature, which has now become Regulation (EU) 2023/2841 laying down measures for a high common level of cybersecurity for the institutions, bodies, offices and agencies of the Union. That Regulation applies at EU administrative level the principles established for the EU Member States by Directive (EU) 2022/2555, improving the cyber resilience and incident response capacities of public and private entities. It created an Interinstitutional Cybersecurity Board (IICB) and a Computer Emergency Response Team (CERT) which operationalises the standards defined by the IICB and interacts with the other EU agencies (such as ENISA, the EU agency dealing with information security), the corresponding structures in the EU Member States and even the NATO structures. It may be too early to evaluate whether the Regulation is fit for purpose, but the general impression is that its new common and cooperative system of alert and mutual support between the EU institutions, agencies and bodies may comply with the letter and spirit of art.298 TFEU.

[13] Quite bizarrely, this "open" attribute is not cited in the INFOSEC proposal and, even more strangely, none of the EU institutions has so far consulted the EU Ombudsman and/or the Fundamental Rights Agency.

[14] See Case C-338/01, Commission of the European Communities v Council of the European Union (Directive 2001/44/EC – Choice of legal basis): "The choice of the legal basis for a Community measure must rest on objective factors amenable to judicial review, which include in particular the aim and the content of the measure. If examination of a Community measure reveals that it pursues a twofold purpose or that it has a twofold component and if one of these is identifiable as the main or predominant purpose or component whereas the other is merely incidental, the act must be based on a single legal basis, namely that required by the main or predominant purpose or component. By way of exception, if it is established that the measure simultaneously pursues several objectives which are inseparably linked without one being secondary and indirect in relation to the other, the measure must be founded on the corresponding legal bases…"

[15] Suffice it to cite the following legal disclaimer: "This Regulation is without prejudice to Regulation (Euratom) No 3/1958, Regulation No 31 (EEC), 11 (EAEC), laying down the Staff Regulations of Officials and the Conditions of Employment of other servants of the European Economic Community and the European Atomic Energy Community, Regulation (EC) 1049/2001 of the European Parliament and of the Council, Regulation (EU) 2018/1725 of the European Parliament and of the Council, Council Regulation (EEC, EURATOM) No 354/83, Regulation (EU, Euratom) 2018/1046 of the European Parliament and of the Council, Regulation (EU) 2021/697 of the European Parliament and of the Council, Regulation (EU) [2023/2841] of the European Parliament and of the Council laying down measures for a high common level of cybersecurity at the institutions, bodies, offices and agencies of the Union."

[16] See the ReNEUAL Model Rules on EU Administrative Procedure. The ReNEUAL working groups have developed a set of model rules designed as a draft proposal for binding legislation, identifying – on the basis of comparative research – best practices in different specific EU policies, in order to reinforce the general principles of EU law.

[17] The Council has listed no fewer than 64 EU entities (EU institutions, agencies and bodies – EUIBAs) in document WK8535/2023.

EU Transparency and participative democracy in the EU institutions after Lisbon: "Everything must change for everything to remain the same"?

by Emilio DE CAPITANI *[1]

Foreword

In the famous Italian novel "The Leopard", which describes a key moment of regime change in Sicily, the young protagonist, Tancredi, addresses the old Prince of Salina, suggesting that the best strategy to maintain the old privileges is to adapt, at least in appearance, to the new situation.

This also seems to be the strategy chosen by the European institutions after the entry into force of the Treaty of Lisbon when dealing with the openness and transparency of their decision-making process.

This Treaty marks a radical change from the previous situation, notably because it makes visible and strengthens the interrelation between the principles of the rule of law, democracy, mutual trust and transparency in the EU. This relation was already implicit before the Treaty, but it has become more evident at primary law level with the definition of the EU founding values (art.2 TEU), the binding nature of the EU Charter of Fundamental Rights and the establishment in the Treaties of clear legal bases transforming these principles into reality within the EU institutional framework and in relation with the EU Member States.

From this perspective, several articles of the EU Charter become relevant when dealing with the principles of openness and transparency in the EU, such as art.11 on freedom of expression and information and articles 41 and 42 on the right to good administration and of access to EU documents. These rights should be granted and promoted not only by the EU institutions, agencies and bodies, but also by the Member States when implementing EU law. If a decision-making process should be transparent at EU level, the same transparency should be granted when EU measures are transposed at national level[2].

Openness and Transparency as corollaries of EU democracy

Furthermore, the Lisbon Treaty has also endorsed several ambitious institutional innovations negotiated at the time of the draft Constitutional Treaty, which now have a direct or indirect impact on the EU notions of rule of law, mutual trust, democracy and transparency.

First of all, the Treaty makes clear the democratic nature of the EU not only by strengthening representative democracy ("The functioning of the Union shall be founded on representative democracy.", art.10.1 TEU) but also by recognising the principle of participative democracy[3] ("Every citizen shall have the right to participate in the democratic life of the Union. Decisions shall be taken as openly and as closely as possible to the citizen", art.10.3 TEU).

Participative democracy is further strengthened by the recognition of the role of civil society in art.11 TEU, according to which: "1. The institutions shall, by appropriate means, give citizens and representative associations the opportunity to make known and publicly exchange their views in all areas of Union action. 2. The institutions shall maintain an open, transparent and regular dialogue with representative associations and civil society."

Moreover, the Lisbon Treaty confirms the principle of openness when it states that "(EU) decisions are taken as openly as possible and as closely as possible to the citizen" (art.1, second subparagraph, TEU). This provision was already present before Lisbon, but since then the notion of what may be considered "possible" has evolved from both a technical and a political point of view. From a technical perspective, over the last twenty years the digital transformation has already triggered, also at EU level, the notions of e-government[4] and of re-use of public data[5]. In the Google era, efficient communication techniques that involve and empower citizens now make it possible to engage citizens in public decision-making processes.[6]

From a political perspective, the new Treaty emphasises that "In all its activities, the Union shall observe the principle of the equality of its citizens, who shall receive equal attention from its institutions, bodies, offices and agencies." (art. 9 TEU). When translated into transparency policies, this principle requires that, once in the public domain, information be accessible by means and procedures which are not directly or indirectly discriminatory [7].

(EU) Preaching "transparency by design"…

The Lisbon Treaty not only proclaimed the democratic principles on which the EU is founded and should be promoted (arts. 9-12 TEU) but also confirmed the principles of openness and of participative democracy, according to which "(EU) decisions are taken as openly as possible and as closely as possible to the citizen" (art. 1.2 TEU) and "[e]very citizen shall have the right to participate in the democratic life of the Union. Decisions shall be taken as openly and as closely as possible to the citizen." (art. 10.3 TEU).

Moreover, EU legislative acts [8] are now defined at primary law level (art. 289 TFEU), and the obligation to grant 'legislative transparency' is now foreseen by Article 15(2) TFEU, according to which "The European Parliament shall meet in public, as shall the Council when considering and voting on a draft legislative act." As a consequence, granting legislative transparency has become a self-standing constitutional obligation which cannot be jeopardised by measures of EU secondary law or, even less, by internal practices of the EU institutions. In other words, the mandatory principle of 'legislative transparency' established by Article 15(2) TFEU and Article 16(8) TEU should no longer be conflated with the 'transparency on demand' approach of the pre-Lisbon era, when the scope of legislative transparency was often linked to the aleatory condition of whether a citizen happened to request access to a legislative preparatory document.

…but framing "confidentiality by design".

Unfortunately, even today, fifteen years after the entry into force of the Lisbon Treaty, the legislative preparatory documents made proactively public by the EU legislators pursuant to art. 15.2 TFEU are still only a fraction of the documents prepared and debated by the Commission, the Council and even the European Parliament in the course of a legislative procedure.

The Council is the most appalling case of hiding legislative preparatory documents.

Even today, the Council's internal Rules of Procedure [9] consider that confidentiality should be the rule and transparency the exception. According to the Council's internal guidelines, the transparency of Council meetings when debating legislative procedures (as required by Article 16(8) TEU) applies only to "formal" meetings at ministerial level. Citizens' access is thereby excluded not only from "informal" ministerial meetings but also from all Coreper and working party meetings, even though, from a more general perspective, the Council is a single legal entity and its preparatory bodies should not be considered apart [10]. Proof that the Council's main inspiration is "confidentiality by design" rather than "transparency by design" is the reorganisation of its internal document management, operational since 2015 [11]. Its 130/150 internal working parties have been transformed into 'virtual communities', which are de facto also virtual 'sandboxes' where working (WK) documents, covering also legislative preparatory work (including at 'trilogue' level), are shared only among the community members [12].

By doing so, the Council of the European Union has for years routinely prevented the access and democratic participation of EU citizens and of civil society, made the work of journalists unduly difficult, prevented the national parliaments from checking respect for the principle of subsidiarity and, last but not least, hidden essential information from the other co-legislator, the European Parliament.

The EU "Catch 22": promoting confidentiality to protect… transparency

To justify this behaviour, the Council still today refers to the exceptions set out in arts. 4 and 9 of the pre-Lisbon Regulation 1049/2001 [13], and notably to the need to 'protect its decision-making process' as foreseen by art. 4.3 of that Regulation. According to that provision, "Access to a document, drawn up by an institution for internal use or received by an institution, which relates to a matter where the decision has not been taken by the institution, shall be refused if disclosure of the document would seriously undermine the institution's decision-making process, unless there is an overriding public interest in disclosure". Suffice it to note that, if transposed to legislative preparatory work, this provision might justify, for instance, the confidentiality of the work of the parliamentary committees, but this would clash with art. 15.2 TFEU, which imposes the publicity of the meetings of the EP and of the Council when acting as legislators (and this covers also their preparatory bodies, as the EP and the Council each have a single institutional identity). Moreover, such use of a generic exception by an institution in its own interest would clash with the interinstitutional nature of the EU legislative process as described by art. 294 TFEU.

To overcome the clash between the current provisions of the Treaty and the exception in art. 4.3 of the pre-Lisbon Regulation 1049/2001, there are only two possibilities: either one considers that this exception is not relevant for legislative procedures, or one considers that, when legislation is at stake, the "overriding public interest" is directly foreseen by the Treaty and no exception can be raised. Behaving as the Council does when acting as legislator creates a "Catch 22" situation where confidentiality is invoked to "protect" a procedure which should be… transparent.

Needless to say, this Council behaviour has been denounced on several occasions, not only by the other co-legislator, the EP, but also by the EU Ombudsman, not to speak of the Court of Justice. The latter has, in several rulings, framed in stricter terms the scope of the exceptions in Regulation 1049/2001, even before the entry into force of the Lisbon Treaty and of art. 15.2 TFEU. It is then quite appalling that the impact of the EP's pressure, the Ombudsman's recommendations and the CJEU's jurisprudence on the Council's practice has been very limited and anecdotal. [14]

To overcome all these legal inconsistencies, the European Parliament voted on 15 December 2011 [15] several ambitious amendments aligning Regulation 1049/2001 with the new post-Lisbon constitutional framework. The EP Plenary not only considered that legislative debates should not be covered by the pre-Lisbon exceptions listed in art. 4, but also voted a legislative framework for classified documents (art. 9) and paved the way for the implementation of the principle of good administration by the EU institutions, agencies and bodies. In the same perspective, it also adopted two resolutions framing the principle of good administration by the EU institutions, agencies and bodies [16].

Unfortunately, the EP position on the alignment of Regulation 1049/2001 with the Lisbon Treaty has now been formally pending for thirteen years and has been endorsed neither by the European Commission nor by the Council, so that the EU and its citizens are still confronted with a piece of secondary law (Regulation 1049/2001), and a widespread practice of the EU institutions, agencies and bodies, not complying with the new post-Lisbon constitutional framework.

In a direction quite opposite to the EP recommendations on the revision of Regulation 1049/2001 and on the establishment of an EU code of good administration founded on art. 298 TFEU (an open, independent and efficient EU public administration), the European Commission submitted in 2022, on the same legal basis (and without consulting the EU Ombudsman), a legislative proposal [17] dealing with information security in the institutions, bodies, offices and agencies of the Union.

The so-called 'INFOSEC' proposal, if adopted as it stands, may even pave the way for the transformation of the 'EU bubble' into a sort of (administrative) fortress and substitute the principle of 'transparency by design' arising from art. 1.2 TEU with a principle of 'confidentiality by design' [18] for all EU institutions, agencies and bodies. It does so by redefining the conditions of treatment, access and sharing of all kinds of information and documents handled by the EU institutions, agencies and bodies, thereby overlapping with, and turning upside down, Regulation 1049/2001 and the letter and spirit of the Treaty.

Whereas the principle of Regulation 1049/2001 is to frame the right to know of EU citizens by ensuring that everything is public unless a specific exception applies, the logic of the new Commission proposal is that almost all internal documents should be protected and shared only with people with a recognised 'need to know', unless the document is marked as 'public'. This would generalise to all the EU institutions, agencies and bodies the current Council practice of limiting access to internal documents, in clear clash with art. 1 TEU, which requires that the EU institutions act as openly as possible, and with art. 298 TFEU, which requires that the EU administration be not only independent and efficient but also "open".

With the newly proposed legal regime, the Commission, by endorsing and widening in a legislative measure the current Council internal security rules, proposes to go back to the pre-Maastricht era, when it was up to the EU institutions to decide whether or not to give access to their internal documents [19]. But since the Amsterdam Treaty (Article 255 TEC) and, even more, since the Lisbon Treaty, this practice is no longer compatible with an EU bound by the rule of law.

The core of the proposed INFOSEC Regulation is the creation and management of EU classified information (EUCI). In doing so, it substantially amends Article 9 of Regulation 1049/2001, which deals with so-called 'sensitive documents'. Yet it does not regulate how information should be classified and declassified in the interests of the EU, as opposed to the interests of the originator (be it a Member State, an EU institution, agency or body). It is worth recalling that Article 9 of Regulation 1049/2001 recognises the so-called 'originator privilege' only in the domain of 'sensitive' documents and information, mainly covered by the EU external and defence policy (the former second 'pillar'). As such, it is an exception to the general philosophy of Regulation 1049/2001, according to which the EU institutions may be bound only by law and not by the will of an 'author', even if that author were an EU Member State. [20]

How the EP risks slowly turning to intergovernmental practices

The EP has been, since its first direct election, the institution most supportive of the transparency of the EU decision-making process, both in the interest of EU citizens and of its own constitutional role. For decades it has challenged the Council's and the Commission's reluctance to share relevant information on what was happening on the ground inside or outside the EU. The Court of Justice has recognised in several cases that the EP's right to relevant information is explicitly recognised by the Treaty, notably for international agreements (Article 218(10) TFEU).

Unfortunately, instead of pushing the Council towards an open, 'parliamentary' approach to legislation, the EP has followed the Council's 'diplomatic' approach, notably in the crucial phase of interinstitutional negotiations ('trilogues'), even when, as is normally the case, these negotiations take place during the first parliamentary 'reading'.

Although the CJEU considers the documents shared within trilogue meetings to be 'legislative' [21], the European Parliament has published these documents only since March 2023, and then only after specific requests for access by EU citizens and after a considerable delay, so that the information becomes available only once the agreements have been reached.

This practice does not fit with Article 15(2) TFEU nor with the CJEU jurisprudence according to which ‘[i]n a system based on the principle of democratic legitimacy, co-legislators must be answerable for their actions to the public and if citizens are to be able to exercise their democratic rights they must be in a position to follow in detail the decision-making process within the institutions taking part in the legislative procedures and to have access to all relevant information.’[22]



[1] Affiliate to the Scuola Superiore S.Anna (Pisa)

[2] In this perspective it is quite bizarre that the Council evokes the notion of sincere cooperation by the Member States in order not to debate publicly at national level the EU legislative preparatory documents (coded as LIMITE), notably through the national parliaments.

[3] This emphasis on participative democracy is now also echoed at UN level by the 2030 Agenda for Sustainable Development, whose Goal 16 foresees, notably, to "Develop effective, accountable and transparent institutions at all levels" (16.6), to "Ensure responsive, inclusive, participatory and representative decision-making at all levels" (16.7) and to "Ensure public access to information and protect fundamental freedoms, in accordance with national legislation and international agreements" (16.10).

[4] See the European Commission communication  https://commission.europa.eu/business-economy-euro/egovernment_en

[5] See Directive (EU) 2019/1024 of the European Parliament and of the Council of 20 June 2019 on open data and the re-use of public sector information, which may be a clear reference also for comparable initiatives of the EU institutions, agencies and bodies.

[6] See the recent Council Conclusions on the EU’s ambition to play a leading role globally in the digital transformation and digital governance that respects, promotes and protects universal human rights, democracy and sustainable development, and puts people and their universal human rights at the centre, in line with the international law and the EU Declaration on Digital Rights and Principles. (Doc 9957/24 of 21st of May 2024)

[7] This issue is relevant not only in cases of proactive publication but also when information is disclosed following a citizen's request. If the information or document deals with legislative procedures, it should be accessible in the public domain to everyone without further requests for access.

[8] It should be noted that the concepts of draft legislative act and legislative act referred to in Article 15(2) TFEU do not correspond to the concepts of legislative documents and legislative procedures referred to in the pre-Lisbon Regulation 1049/2001. While Article 15(2) TFEU refers to the draft and adopted legislative acts defined in Article 289 TFEU (i.e. the joint adoption of legislative acts by the Council and the European Parliament), the Regulation, which pre-dates the entry into force of Article 289 TFEU, refers to "documents drawn up or received in the course of procedures for the adoption of acts which are legally binding". Now, according for instance to the new Article 290 TFEU, Commission delegated acts which were "legislative" before Lisbon are now "non-legislative acts" (see also Article 16(8) TEU as to the "non-legislative activities" of the Council).

[9] Council Decision of 1 December 2009 adopting the Council’s Rules of Procedure Link : https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:32009D0937

[10] Indeed, Article 5(1) of the Council Rules of Procedure (CRP) provides that, unless deliberating or voting on legislative acts, Council meetings must not be public, and Article 6(1) CRP stipulates that 'Without prejudice to Articles 7, 8 and 9 and to provisions on public access to documents, the deliberations of the Council shall be covered by the obligation of professional secrecy …'. Moreover, on page 54 of the commentary on the CRP it is explicitly stated that: "This rule also applies to the preparatory work for Council meetings, that is, all the Council's preparatory bodies (Coreper, committees and working parties). However, legislative work in preparatory bodies is not public." (emphasis added)

[11] See Council public document 7385/16 of 2 May 2016, "Delegates Portal: a new Community Approach to document distribution". The reorganisation of the production and diffusion of the Council's internal documents had been endorsed by Coreper in public document 6704/13 ("CIS 5 work on COCOON (Council Collaboration Online)"). The system was generalised to all working parties in 2015. See https://data.consilium.europa.eu/doc/document/ST-7385-2016-INIT/en/pdf.

[12] Meijers Committee, ‘Working Documents’ in the Council of the EU cause a worrying increase in secrecy in the legislative process, CM2107 June 2021 https://www.commissie-meijers.nl/wp-content/uploads/2021/09/2107_en.pdf.


[13] Regulation (EC) No 1049/2001 of the European Parliament and of the Council of 30 May 2001 regarding public access to European Parliament, Council and Commission documents.

[14] The European Ombudsman openly stated for the first time, in a decision of March 2024, that the EU institutions are not giving effect to the case law on public access to legislative documents. See European Ombudsman, Case OI/4/2023/MIK, 'How the European Parliament, the Council of the EU and the European Commission deal with requests for public access to legislative documents', https://www.ombudsman.europa.eu/en/case/en/64321. Cited by the EP study "Regulation 1049/2001 on the right of access to documents, including the digital context", https://www.europarl.europa.eu/RegData/etudes/STUD/2024/762890/IPOL_STU(2024)762890_EN.pdf.

[15] See Legislative Procedure 2008/0090(COD): https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?lang=en&reference=2008/0090(COD)

[16] With the aim of guaranteeing the right to good administration and ensuring an open, efficient and independent EU civil service, on 15 January 2013 the European Parliament adopted a first resolution (rapporteur: Luigi Berlinguer, S&D, Italy) presenting detailed recommendations to the Commission on a Law of Administrative Procedure of the EU under the new legal basis of Article 298 of the Treaty on the Functioning of the European Union (TFEU). A second resolution, for an open, efficient and independent European Union administration (rapporteur: Heidi Hautala, Greens/EFA, Finland), followed in June 2016 (2016/2610(RSP)).

[17] See Legislative Procedure 2022/0084(COD) Proposal for a Regulation of the European Parliament and of the Council on information security in the institutions, bodies, offices and agencies of the Union Link : https://oeil.secure.europarl.europa.eu/oeil/popups/ficheprocedure.do?reference=2022/0084(COD)&l=en

[18] In principle, the objective announced in the title of the proposal is legitimate: granting a comparable level of protection, in all the EU institutions, agencies and bodies, to information and documents which, according to the law, should be protected. To do so, a wide interinstitutional coordination group is proposed, as well as a network of security officials in all the EU entities, and a secured IT network (TEMPEST) is foreseen.

[19] By replacing the 'right to know' foreseen by the Treaty with a 'need to know' mechanism, the proposed Regulation turns the EU openness and transparency principle upside down.

[20] What the INFOSEC proposal does is transform the exception of the 'originator principle' into a rule, against the provisions of Regulation 1049/2001. It does not foresee judicial oversight of classified information. It does not solve the problem of the sharing of 'sensitive information' between entities that have a legitimate 'need to know'. Last but not least, it threatens the EP's oversight role over the EU security agreements with third countries and international organisations on the exchange of classified information.

[21] See Case T-540/15 De Capitani v European Parliament

[22] Case T-163/21 De Capitani v Council EU:T:2023:15.

Verfassungsblog : Why an EU Country under the Surveillance Procedure (Article 7.1 TEU) Should not Chair the Council Presidency

by Virgilio DASTOLI and Emilio DE CAPITANI

In accordance with the Council Decision on the exercise of the Presidency of the Council of the European Union,1) from July 1 of this year the office is to be held by Hungary. This will mark the first time that the Presidency is held by a Member State subject to the "surveillance" procedure of Article 7(1) of the Treaty on European Union, a procedure launched against it by the European Parliament in September 2018.

As the Court of Justice has recognised,2) by adopting its Resolution, the EP has already triggered the legal consequences foreseen by Protocol 24.

‘[A]s long as the Council or the European Council has not taken a decision in respect of the Member State concerned, a Member State may, by way of derogation from the general rule laid down in that single article, take into consideration or declare admissible to be examined any asylum application lodged by a national of the Member State that is the subject of that procedure.’

Simply put, this means that Hungary is no longer to be considered a "safe country": should the need arise, a Hungarian national may request asylum in another EU country. In other words, the general presumption that fundamental rights and values are respected in that Member State is no longer absolute, and precautions should be taken where the fundamental rights of individuals are concerned (as is the case in relation to the European Arrest Warrant). In a more general sense, and in relations with other Member States or EU institutions, the principle of mutual trust that is the bedrock of intra-EU cooperation is not "blind trust" and cannot be taken for granted.

Within this perspective, it would be sensible to assume that a Member State that does not enjoy the full confidence of the other Member States should not be entrusted with a key coordinating role, as is the case when holding the Council Presidency. As a matter of fact, holding the Council Presidency is anything but a protocolar task. The Presidency plans, coordinates and chairs meetings of the Council and of most of the Council's preparatory bodies, i.e. working parties and committees. It suggests compromise solutions with a view to reaching agreement between the members of the Council (as an 'honest broker'). The Presidency should be, by definition, neutral and impartial. It is the moderator of discussions and cannot, therefore, favour either its own preferences or those of a particular Member State.

But holding the Council Presidency also has an essential interinstitutional dimension, because it is the Presidency that represents the Council in its relations with the European Parliament (EP) and negotiates on behalf of the Council to reach agreements on legislative files by protecting and promoting together the EU values that Hungary is openly challenging.

It is not surprising that the European Parliament (which originally triggered the Article 7(1) TEU procedure against Hungary) already one year ago3) sent a Resolution to the Council and the Commission underlining

'the important role of the presidency of the Council in driving forward the Council's work on EU legislation, ensuring the continuity of the EU agenda and representing the Council in relations with the other EU institutions', but also questioning '…how Hungary will be able to credibly fulfil this task in 2024, in view of its non-compliance with EU law and the values enshrined in Article 2 TEU, as well as the principle of sincere cooperation'.

Surprisingly, neither the Commission nor the Council has to date furnished any response. Perhaps the reason was that these two institutions were expecting a positive development prior to the end of the legislative term, as apparently occurred with Poland (the only other European country subjected to the Article 7(1) TEU procedure). Yet, unfortunately, in the case of Hungary the situation has in the meantime rather worsened, to the extent that the European Parliament adopted two new Resolutions, the first on January 18 of this year4) and the second on April 24.5)

These highly detailed texts summarise and update the already formidable list of all Hungarian infringements of the rule of law and of the budgetary conditionality mechanism. The most recent text voices in even stronger words the same concerns as to the suitability of Hungary as President of the Council and declares the EP's readiness 'to take measures to defend the credibility of the Union with respect to the values enshrined in Article 2 TEU as regards cooperation with the Council'.

It remains to be seen if the two most recent EP texts will once again fall on deaf ears on the Council side. However, from a constitutional point of view, the assessment of the EP appears well founded and should have received much greater attention from the Council, notably because by maintaining the Hungarian Presidency the Council is threatening the smooth functioning of the EU in its essential legislative and budgetary functions as envisaged in the post-Lisbon Treaty framework: these functions now fall within the joint responsibility of the European Parliament and of the Council (Article 14(1) and 16(1) TEU), and this co-responsibility requires a great deal more than loyal cooperation between the two institutions (Article 13 TEU).

It would now be both prudent and sensible for the Council to modify its 2016 Decision by qualified majority, as already suggested in legal doctrine,6) by foreseeing explicitly that the Council Presidency should not be held by a country under the Article 7 procedure. As a consequence, the Hungarian Presidency would be delayed until the Article 7(1) TEU surveillance procedure has been successfully concluded. It should be noted that a postponement would not amount to a sanction against Hungary, but rather to a simple precautionary measure to preserve the smooth functioning of the European Union and to avoid a period of interinstitutional bickering between the EU co-legislators, particularly at such a decisive moment for the EU legislature from both an internal and an international point of view. Moreover, it would not be the first time that a Council Presidency has been postponed, and previously for much less serious reasons. As rightly noted by the Meijers Committee,

‘changes in the previously agreed order of Presidencies have not been uncommon.  They occurred on six occasions, for different reasons: three times after the accession of new Member States, in 1995, in 2005 and in 2007; in 2002 at the request of Germany because general elections were scheduled during its upcoming Presidency; in 2009 because of the Treaty of Lisbon; and in 2016 after accession of Croatia and the Brexit Referendum with regard to the UK Presidency, which was scheduled to start in 11 months’ time, as of July 2017. Therefore, it is established legal and political practice to reconsider the order of the Presidency in case of relevant circumstances, even if relatively close to the date that the rotation is scheduled to start’.

It is finally also worth noting that an urgent appeal to postpone the Hungarian Presidency has very recently been submitted to the EU institutions by the European Movement (IT, ES, FR branches).7) The European Commission President, Ursula von der Leyen, has shared it with the competent Members of the College, notably with Vice-President Maroš Šefčovič, who is responsible for interinstitutional relations. The time remaining until July 1 is rapidly diminishing, and on June 18 the General Affairs Council will decide on a reasoned proposal from the Commission on closing the Article 7(1) TEU procedure against Poland.8) Will this also be the occasion to discuss the issue of the incoming Hungarian Presidency? If so, the point could also be submitted for final decision at the European Council meeting on June 27/28 under the chapter on institutional issues (as the general responsibility for the issue of Council Presidencies falls under the competence of the European Council, Article 236 TFEU).

We, the undersigned scholars, experts and citizens, support this call for the postponement of the Hungarian Presidency.

Those who wish to support this initiative can send their contact details here.

Prof. Gábor Halmai, European University Institute, Florence

Prof. Sergio Fabbrini, Luiss University, Rome

Prof. Petra Bard, Radboud University

Prof. Tomasz Tadeusz Koncewicz, University of Gdańsk, Department of European and Comparative Law

Prof. Laurent Pech, University College Dublin

Prof. Paul Craig, University of Oxford

Prof. Kim Lane Scheppele, Princeton University

Prof. Catherine Dupré, University of Exeter Law School

Prof. Maria Bergström, Uppsala University, Faculty of Law

Prof. Marie-Laure Basilien-Gainche University Jean Moulin Lyon 3, Institut Universitaire de France

Prof. Henri de Waele, Radboud University Nijmegen and University of Antwerp

Prof. Elspeth Guild, Queen Mary University of London

Prof. Olivier Costa, CNRS, CEVIPOF, Sciences Po, Paris

Dr. Marta Lasek-Markey, Trinity College Dublin

Prof. Stephen Skinner, University of Exeter

Dr. Christine Bicknell, Human Rights and Democracy Forum, University of Exeter Law School

Dr. Carlotta Garofalo, University of Graz

Ounia N. Doukoure, Paris 1 University, Institut Convergences Migrations ; Lille Catholic University

Prof. Marc Valéri, University of Exeter

Prof. Federico Fabbrini, Dublin City University

Prof. Dominique Custos, Caen Normandie University

Prof. Dino G. Rinoldi, Catholic University of the Sacred Heart of Milan

Prof. Nicoletta Parisi, Catholic University of the Sacred Heart of Milan

Prof. Douwe Korff, University of Oxford

Prof. Susanna Cafaro, University of Salento

Prof. Laurence Burgorgue-Larsen, Paris 1 University

Prof. Fred Constant, University of the Antilles

Prof. Jean-Manuel Larralde, Caen Normandie University

Prof. Maria Castillo, Caen Normandie University

Prof. Maciej Bernatt, University of Warsaw

Prof. Yves Poullet, University of Namur

Prof. Antonio Da Re, University of Padova

Prof. Luciano Corradini, Roma Tre University

Prof. Massimiliano Guderzo, University of Siena

Prof. Massimo Fragola, Università della Calabria

This is a pre-peer reviewed version of an article submitted for publication in the European Law Journal.

References

↑1Council Decision (EU) 2016/1316 of 26 July 2016 amending Decision 2009/908/EU (OJ L 208, 2.8.2016, p. 42) : https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016D1316.
↑2See paras. 39 and 40 of Case C-650/18, Hungary v European Parliament, judgment of 3 June 2021, EU:C:2021:426: '39 In the present case, it should be noted that the adoption of the contested resolution initiates the procedure laid down in Article 7(1) TEU. Under point (b) of the sole article of Protocol (No 24), once that procedure is initiated and as long as the Council or the European Council has not taken a decision in respect of the Member State concerned, a Member State may, by way of derogation from the general rule laid down in that single article, take into consideration or declare admissible to be examined any asylum application lodged by a national of the Member State that is the subject of that procedure. 40 It follows that the adoption of the contested resolution has the immediate effect of lifting the prohibition, which is in principle imposed on the Member States, on taking into consideration or declaring admissible to be examined an asylum application made by a Hungarian national. That resolution thus changes, in relations between Member States, the position of Hungary in the field of asylum.'
↑3European Parliament resolution of 1 June 2023 on the breaches of the Rule of Law and fundamental rights in Hungary and frozen EU funds (2023/2691(RSP)), OJ C, C/2023/1223, 21.12.2023, ELI: http://data.europa.eu/eli/C/2023/1223/oj
↑4See point 8 of the Resolution “Situation in Hungary and frozen EU funds”  questioning again “..if the Hungarian Government will be able to credibly fulfil this task in 2024, in view of its non-compliance with EU law and the values enshrined in Article 2 TEU, as well as the principle of sincere cooperation;” and  “asking the Council to find proper solutions to mitigate these risks as soon as possible”,  https://www.europarl.europa.eu/doceo/document/TA-9-2024-0053_EN.html.
↑5See Resolution Ongoing hearings under Article 7(1) TEU regarding Hungary to strengthen Rule of Law and its budgetary implications where it regretted ‘that the Council has not yet found a solution to this problem, and that representatives of the Hungarian Government would chair the Council’s meetings concerning democracy, the rule of law and fundamental rights, including meetings related to protecting the EU’s financial interests and budget; underscores that this challenge comes at the crucial moment of the European elections and the formation of the Commission; deplores the failure to find a solution and reiterates its readiness to take measures to defend the credibility of the Union with respect to the values enshrined in Article 2 TEU as regards cooperation with the Council;’  https://www.europarl.europa.eu/doceo/document/TA-9-2024-0367_EN.html.
↑6See the Mejiers Committee “Comment on the exercise and order of the Presidency of the Council of the EU”, published on 19 May 2023, https://www.commissie-meijers.nl/comment/comment-on-the-exercise-and-order-of-the-presidency-of-the-council-of-the-eu/.
↑7Available at:  https://www.movimentoeuropeo.it/images/documenti/VIKTOR_ORB%C3%81N_MUST_NOT_CHAIR_THE_COUNCIL_OF_THE_EUROPEAN_UNION_MEIT-FR-ES.pdf.
↑8Available at: https://data.consilium.europa.eu/doc/document/ST-10716-2024-INIT/en/pdf.

The Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law: perhaps a global reach, but an absence of harmonisation for sure

by Michèle DUBROCARD (*)

On 15 March 2024, Ms Marija Pejčinović Burić, the Secretary General of the Council of Europe, made a statement on the occasion of the finalisation of the Convention on Artificial Intelligence (AI), Human Rights, Democracy and the Rule of Law. She welcomed what she described as an ‘extraordinary achievement’, namely the setting out of a legal framework that covers AI systems throughout their lifecycles, from start to end. She also stressed the global nature of the instrument, ‘open to the world’.

Is it really so? The analysis of the scope, as well as of the obligations set forth in the Convention, raises doubts about the connection between the stated intent and the finalised text. However, this text still needs to be formally adopted by the Ministers of Foreign Affairs of the Council of Europe Member States on the occasion of the 133rd Ministerial Session of the Committee of Ministers on 17 May 2024, after the issuing of the opinion of the Parliamentary Assembly of the Council of Europe (PACE)[1].

I- The scope of the Convention

It is no secret that the definition of the scope of the Convention created a lot of controversy among the negotiators[2]. In brief, a number of States, a majority of which are not members of the Council of Europe[3] but participated in the discussions as observers, essentially opposed the European Union in seeking to limit the scope of the Convention to activities related to AI systems undertaken by public authorities only, and to exclude the private sector.

Those observer States achieved their goal, presumably with the help of the Chair[4] and the Secretariat of the Committee on Artificial Intelligence (CAI), but they did it in a roundabout way, with an ambiguous wording. Indeed, the reading of both Article 1.1 and Article 3.1(a) of the Convention may lead to think prima facie that the scope of the Convention is really ‘transversal’[5], irrespective of whether activities linked to AI systems are undertaken by private or public actors:

– according to Article 1.1, ‘the provisions of this Convention aim to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law’;

– according to Article 3.1(a),‘the scope of this Convention covers the activities within the lifecycle of artificial intelligence systems that have the potential to interfere with human rights, democracy and rule of law as follows’.

This impression is confirmed by the explanatory report, which states in par. 15 that ‘the Drafters aim to cover any and all activities from the design of an artificial intelligence system to its retirement, no matter which actor is involved in them’.

However, the rest of Article 3 annihilates such wishful thinking: as regards activities undertaken by private actors, the application of the Convention will depend on the goodwill of States. Better still, a Party may choose not to apply the principles and obligations set forth in the Convention to activities of private actors, and nevertheless be regarded as compliant with the Convention, as long as it takes ‘appropriate measures’ to fulfil the obligation of addressing risks and impacts arising from those activities:

‘Each Party shall address risks and impacts arising from activities within the lifecycle of artificial intelligence systems by private actors to the extent not covered in subparagraph (a) in a manner conforming with the object and purpose of the Convention.

Each Party shall specify in a declaration submitted to the Secretary General of the Council of Europe at the time of signature or when depositing its instrument of ratification, acceptance, approval or accession how it intends to implement this obligation, either by applying the principles and obligations set forth in Chapters II to VI of the Framework Convention to activities of private actors or by taking other appropriate measures to fulfil the obligation set out in this paragraph. Parties may, at any time and in the same manner, amend their declarations’.

How should one interpret such a provision? It seems to allow Parties to submit a reservation on the private sector but, at the same time, it is not worded as a reservation per se. On the contrary, it establishes a sort of equivalence between the principles and obligations laid down in the Convention and ‘other appropriate measures’ to be taken by the Parties when addressing risks and impacts arising from activities related to AI systems undertaken by private actors. In other words, the Convention organizes the modalities of circumvention of the principles and obligations that nonetheless constitute the very core of its object.

The result of such a provision is not only a depreciation of the principles and obligations set forth in the Convention, since it is possible to derogate from them for activities of private actors without derogating from the Convention itself; it also creates a fragmentation in the implementation of the instrument. The uncertainty stemming from these declarations is aggravated by the possibility, for each Party, to amend its declaration at any time. Since there is no other specification, one could even imagine a situation where a Party could, in the first instance, agree to apply the principles and obligations set forth in the Convention to the private sector, but then, at a later stage, reconsider its initial decision and limit such application to the public sector only.

Instead of establishing a level playing field among the Parties, the Convention legitimizes uncertainty as regards its implementation, in space and time.

On the other hand, Article 3.2 clearly authorizes an exemption, requested this time by the European Union[6], for activities within the lifecycle of AI systems related to the protection of national security interests of Parties. However, according to the provision, such activities should be ‘conducted in a manner consistent with applicable international law, including international human rights law obligations, and with respect for its democratic institutions and processes’.  In the framework of the Council of Europe, such an exemption is particularly surprising in the light of the case-law of the European Court of Human Rights, which has clearly interpreted the concept of ‘national security’[7]. Exempting from the scope of the Convention activities of AI systems related to the protection of national security interests seems therefore at best useless, if not conflicting with the obligations stemming from the European Convention on Human Rights.

In addition to national security interests, Article 3 foresees two more exemptions, namely research and development activities and national defence. Concerning research and development activities regarding AI systems not yet made available for use, Article 3.3 also includes what seems to be a safeguard, since the Convention should nevertheless apply when ‘testing or similar activities are undertaken in such a way that they have the potential to interfere with human rights, democracy and the rule of law’. However, there is no indication of how and by whom this potential to interfere could be assessed. The explanatory report is of no help on this point, since it limits itself to paraphrasing the provision of the article[8].

As regards matters related to national defence, the explanatory report[9] refers to the Statute of the Council of Europe, which excludes them from the scope of the organisation. One can however wonder whether the rules of the Statute are sufficient to justify such a blanket exemption, especially in the light of the ‘global reach’ that the Convention is supposed to have[10]. Moreover, contrary to the explanations related to ‘national security interests’, the explanatory report does not mention activities regarding ‘dual use’ AI systems, which should be under the scope of the Convention insofar as these activities are intended to be used for other purposes not related to national defence.

II- Principles and obligations set forth in the Convention

According to the explanatory report, the Convention ‘creates various obligations in relation to the activities within the lifecycle of artificial intelligence systems’[11].

When reading Chapters II to VI of the Convention, one can seriously doubt whether the Convention really ‘creates’ obligations or rather simply recalls principles and obligations already recognized by previous international instruments. Moreover, the binding character of such obligations seems quite questionable.

II-A Principles and obligations previously recognized

A number of principles and obligations enshrined in the Convention refer to human rights already protected as such by the European Convention on Human Rights, but also by other international human rights instruments. Apart from Article 4, which recalls the need to protect human rights in general, Article 5 is dedicated to the integrity of democratic processes and respect for the rule of law[12], Article 10 is about equality and non-discrimination[13], Article 11 refers to privacy and personal data protection[14], and Articles 14 and 15 recall the right to an effective remedy[15].

Other principles are more directly related to AI, such as individual autonomy in Article 7, transparency and oversight in Article 8, accountability and responsibility in Article 9, and reliability in Article 12, but once again these principles are not new. In particular, they were already identified in the Organisation for Economic Co-operation and Development (OECD) Recommendation on AI, adopted on 19 May 2019[16].

This feeling of déjà vu is reinforced by the wording of the Convention: in most articles, each Party shall ‘adopt or maintain measures’ to ensure the respect of those principles and obligations. As duly noted in the explanatory report, ‘in using “adopt or maintain”, the Drafters wished to provide flexibility for Parties to fulfil their obligations by adopting new measures or by applying existing measures such as legislation and mechanisms that existed prior to the entry into force of the Framework Convention’[17].

The question that inevitably comes to mind is what the added value of this new instrument can be, if it only recalls internationally recognized principles and obligations, some of them already constituting justiciable rights.

Indeed, the mere fact that this new instrument deals with the activities related to AI systems does not change the obligations imposed on States to protect human rights, as enshrined in applicable international law and domestic laws. The evolution of the case law of the European Court of Human Rights is very significant in this regard. As we know, the Court has considered, on many occasions, that the European Convention on Human Rights is to be seen as ‘a living instrument which must be interpreted in the light of present-day conditions’[18]. Without much risk one can predict that in the future the Court will have to deal with an increasing number of cases involving the use of AI systems[19].

II-B A declaratory approach

One could try to advocate for this new Convention by emphasizing the introduction of some principles and measures which have not yet been encapsulated in a binding instrument. Such is the case, for instance, of the concepts of transparency and oversight, to be linked to those of accountability and responsibility, and reliability, and of the measures to be taken to assess and mitigate the risks and adverse impacts of AI systems.

However, the way these principles and measures have been defined and, above all, how their implementation is foreseen, reveal a declaratory approach, rather than the intention to establish a real binding instrument, uniformly applicable to all.

Moreover, the successive versions of the Convention, from the zero draft, to the last version of March 2024, reveal a constant watering down of its content: the provisions on the need to protect health and environment have been moved to the Preamble, while those aiming at the protection of whistleblowers have been removed.

In the light of the EU Artificial Intelligence Act[20], the current situation is almost ironic, since the Convention does not create any new individual right, contrary to the EU regulation, which clearly recognizes, for instance, human oversight as well as the right to explanation of individual decision-making. And yet, the overall scheme of the AI Act is based on market surveillance and product conformity considerations, while the Council of Europe Convention on AI is supposed to focus on human rights, democracy, and the rule of law[21].

So, what is this Convention about? Essentially obligations of means and total flexibility as regards the means to fulfil them.

– obligations of means:

A number of obligations in principle imposed on Parties are in fact simple obligations of means, since each Party is requested to ‘seek to ensure’ that adequate measures are in place. This is the case in Article 5, dedicated to the ‘integrity of democratic processes and respect for rule of law’. It is also the case in Article 15 on procedural safeguards, when persons are interacting with an artificial intelligence system without knowing it, in Article 16.3 in relation to the need to ensure that adverse impacts of AI systems are adequately addressed, and in Article 19 on public consultation.

In the same vein, other articles include formulations which leave States with considerable room for manoeuvre in applying the obligations: as regards reliability, each Party shall take ‘as appropriate’ measures to promote this principle[22]. As regards digital literacy and skills, each Party shall ‘encourage and promote’ them[23]. Similarly, Parties are ‘encouraged’ to strengthen cooperation to prevent and mitigate risks and adverse impacts in the contexts of AI systems[24].

More importantly, it will be up to Parties to ‘assess the need for a moratorium or ban’ on AI systems posing unacceptable risks[25]. One can only deplore the removal of former Article 14 of the zero draft, which provided for a ban on the use by public authorities of AI systems using biometrics to identify, categorise or infer emotions of individuals, as well as on the use of those systems for social scoring to determine access to essential services. Here again, the Convention falls below the standards defined by the AI Act[26].

– the choice of the measures to be adopted:

First, one should note that from the first article of the Convention, flexibility is offered to the Parties as regards the nature of the measures to be adopted, if appropriate. Article 1.2 provides the possibility for each Party ‘to adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention’.

Consequently, Parties might consider that their domestic system is fully compliant with this Convention without any change in their regulations. They could even consider that simple recommendations to public or private actors might be sufficient to fulfil their obligations under the Convention.

The wide leeway given to the States also explains the constant reference to ‘domestic law’[27] or to the domestic legal system[28] throughout the Convention. In particular Article 6, which constitutes a chapeau for the whole of Chapter III, states that the principles included in this Chapter shall be implemented by Parties ‘in a manner appropriate to its domestic legal system and the other obligations of this Convention’. Such a wording is not free from a certain ambiguity, since it might be interpreted as requiring, as part of their implementation, an adaptation of the principles set forth in the Convention to the pre-existing domestic law, and not the opposite.

Here again, with this constant reference to domestic laws intrinsically linked to the ‘flexibility’ given to the Parties, one can only deplore the lack of harmonisation of the ‘measures’ which might be adopted in accordance with the Convention.

– the absence of an international oversight mechanism:

It is true that Article 26 of the Convention lays down the obligation for each Party to establish or designate one or more effective mechanisms to oversee compliance with the obligations of the Convention. However, once again, Parties are free to choose how they will implement such mechanisms, without any supervisory control at the international level. The Conference of the Parties, composed of representatives of the Parties and established by Article 23 of the Convention, will not have any monitoring powers. The only obligation foreseen, in Article 24, is a reporting obligation to the Conference of the Parties within the first two years after the State concerned has become a Party. But after this first report, there is no indication of the periodicity of the reporting obligation.

Conclusion

Despite the continuous pressure from civil society[29] and the interventions of the highest authorities in the field of human rights and data protection[30], the final outcome of the negotiations is a weak text, based on very general principles and obligations. Some of them even fall below the standards recognized in the framework of the Council of Europe, in the light of the European Convention on Human Rights and the case law of the European Court of Human Rights, as well as of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data. Moreover, their application will not be consistent among the Parties, due to a variable-geometry scope and a considerable margin of manoeuvre left to the Parties to implement the Convention.

Why so many concessions, in the context of negotiations held under the umbrella of the Council of Europe, which presents itself as the ‘continent’s leading human rights organisation’? The answer of the Council of Europe representatives is: ‘global reach’. So, should the hope to see States which are not members of the Council of Europe ratify the Convention justify such a lack of ambition?

Yet it is not the first time that an international binding instrument negotiated in the framework of the Council of Europe allows for a fragmented application of its provisions: the Second Additional Protocol to the Convention on Cybercrime[31] already provided some sort of ‘pick and choose’ mechanism in several articles. However, what could be understood in the light of the fight against cybercrime, is more difficult to accept in the framework of a Convention aiming at protecting human rights, democracy and the rule of law in the context of artificial intelligence systems.

It is possible that the negotiators could not achieve a better result, in view of the positions expressed in particular by the United States, Canada, Japan and Israel. In that case, the Council of Europe would have been better advised either to be less ambitious and drop the aim of a ‘global reach’, or to wait a few more years until minds had matured.

(*) EDPS official: this text is the sole responsibility of the author and does not represent the official position of the EDPS.

NOTES


[1] The Opinion adopted by the PACE on 18 April 2024 includes several proposals to improve the text. See https://pace.coe.int/en/files/33441/html

[2] See an article published in Euractiv on 31 Jan 2024 and updated on 15 Feb 2024: https://www.euractiv.com/section/artificial-intelligence/news/tug-of-war-continues-on-international-ai-treaty-as-text-gets-softened-further/

See also the open letter of the representatives of the civil society:

 https://docs.google.com/document/d/19pwQg0r7g5Dm6_OlRvTAgBPGXaufZrNW/edit, and an article of M. Emilio de Capitani: The COE Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Is the Council of Europe losing its compass? https://free-group.eu/2024/03/04/the-coe-convention-on-artificial-intelligence-human-rights-democracy-and-the-rule-of-law-is-the-council-of-europe-losing-its-compass/

[3] USA, Canada, Japan, Israel.

[4] See an article issued in swissinfo.ch – https://www.swissinfo.ch/eng/foreign-affairs/ai-regulation-is-swiss-negotiator-a-us-stooge/73480128

[5] The terms of reference of the CAI explicitly refers to the establishment of a ‘binding legal instrument of a transversal character’.

[6] See, for instance, an article in Euractiv ‘EU prepares to push back on private sector carve-out from international AI treaty’https://www.euractiv.com/section/artificial-intelligence/news/eu-prepares-to-push-back-on-private-sector-carve-out-from-international-ai-treaty/

[7] National security and European case-law: Research Division of the European Court of Human Rights- https://rm.coe.int/168067d214

[8] Paragraph 33 of the explanatory report : ‘As regards paragraph 3, the wording reflects the intent of the Drafters to exempt research and development activities from the scope of the Framework Convention under certain conditions, namely that the artificial intelligence systems in question have not been made available for use, and that the testing and other similar activities do not pose a potential for interference with human rights, democracy and the rule of law. Such activities excluded from the scope of the Framework Convention should in any case be carried out in accordance with applicable human rights and domestic law as well as recognised ethical and professional standards for scientific research’.

[9] Paragraph 36 of the explanatory report.

[10] In its opinion of 18 April 2024 the PACE suggested to only envisage a restriction. See above note 1.

[11] Paragraph 14 of the explanatory report

[12] These principles are closely linked to freedom of expression and the right to free elections: see in particular Article 10 of the European Convention on Human Rights and Article 3 of Protocol 1.

[13] See in particular Article 14 of the European Convention on Human Rights and Protocol 12.

[14] See in particular Article 8 of the European Convention on Human Rights and the case law of the European Court of Human Rights, as well as Article 1 of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.

[15] See in particular Article 13 of the European Convention on Human Rights.

[16] https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449#mainText

[17] Paragraph 17 of the explanatory report.

[18] See Tyrer v United Kingdom 2 EHRR 1 at para. 31

[19] On 4 July 2023, the Third Section of the European Court of Human Rights delivered the first judgment on the compatibility of facial recognition technology with human rights in Glukhin v. Russia:

https://hudoc.echr.coe.int/eng#%22display%22:%5B2%5D,%22itemid%22:%5B%22001-225655%22%5D

[20] See Articles 14 and 86 of the AI Act – https://artificialintelligenceact.eu/the-act/

[21] ‘The Council of Europe’s road towards an AI Convention: taking stock’ by Peggy Valcke and Victoria Hendrickx, 9 February 2023: ‘Whereas the AI Act focuses on the digital single market and does not create new rights for individuals, the Convention might fill these gaps by being the first legally binding treaty on AI that focuses on democracy, human rights and the rule of law’. https://www.law.kuleuven.be/citip/blog/the-council-of-europes-road-towards-an-ai-convention-taking-stock/

[22] Article 12 of the Convention.

[23] Article 20 of the Convention.

[24] Article 25 of the Convention.

[25] Article 16.4 of the Convention.

[26] See Chapter II of the AI Act – https://artificialintelligenceact.eu/the-act/

[27] See Articles 4, 10, 11 and 15.

[28] See Articles 6 and 14.

[29] See in particular the open letter of 5 March 2024:

https://docs.google.com/document/d/19pwQg0r7g5Dm6_OlRvTAgBPGXaufZrNW/edit

[30] See the statement of the Council of Europe Commissioner for Human Rights:

https://www.coe.int/en/web/commissioner/-/ai-instrument-of-the-council-of-europe-should-be-firmly-based-on-human-rights

See also the EDPS statement in view of the 10th and last Plenary Meeting of the Committee on Artificial Intelligence (CAI) of the Council of Europe drafting the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law: https://www.edps.europa.eu/press-publications/press-news/press-releases/2024/edps-statement-view-10th-and-last-plenary-meeting-committee-artificial-intelligence-cai-council-europe-drafting-framework-convention-artificial_en

[31] Second Additional Protocol to the Convention on Cybercrime on enhanced co-operation and disclosure of electronic evidence- https://rm.coe.int/1680a49dab

The new proposal on the security of EU information: transforming the EU “Bubble” into an EU “Fortress”? (3)

3. How the INFOSEC proposal builds a wider, but still incomplete, legal framework for EU classified information (EUCI)

The core of the proposed Regulation on the security of EU information (hereafter the INFOSEC proposal) concerns the creation and management of EU classified information (EUCI). In doing so, it substantially modifies Article 9 of Regulation 1049/2001, which deals with public access (or not) to so-called “sensitive documents”.

According to that article:

“Sensitive documents are documents originating from the institutions or the agencies established by them, from Member States, third countries or International Organizations, classified as ‘TRÈS SECRET/TOP SECRET’, ‘SECRET’ or ‘CONFIDENTIEL’ in accordance with the rules of the institution concerned, which protect essential interests of the European Union or of one or more of its Member States in the areas covered by Article 4(1)(a), notably public security, defense and military matters.”

Paragraph 3 of the same article also makes clear that: “Sensitive documents shall be recorded in the register or released only with the consent of the originator.”

Paragraph 7 says: “The Commission and the Council shall inform the European Parliament regarding sensitive documents in accordance with arrangements agreed between the institutions.”

It should be noted that Article 9 of Regulation 1049/2001 was a “fast and dirty” solution for a problem which arose in July 2000: Javier Solana, newly appointed Secretary General of the Council, negotiated with the new NATO Secretary General, Mr Robertson, an administrative arrangement with NATO on the exchange of classified information with the Council of the EU. However, that arrangement was challenged before the Court by the European Parliament (EP) and the Dutch government, because they considered that it limited a citizen’s fundamental right of access to documents, and exceptions to such fundamental right should have been framed by law.

At the time, the negotiation of Regulation 1049/01 was under the pressure of a deadline established in the Treaty. The reference to “sensitive” documents was added at the end of the legislative procedure and, because of this, the EP and the Dutch government withdrew their case before the Court.

Unfortunately, it was a Pyrrhic victory – it soon became clear that Article 9 of Regulation 1049/2001 was (and still is) a rather elusive and patchy framework for EU classified information.

A number of points can be made in this regard:

a) It does not regulate how information should be classified and declassified in the interest of the EU, as opposed to the interest of the originator (be it a Member State, EU institution, agency or body). Quite the contrary: by leaving the definition of these aspects to the internal security rules of each institution, it paved the way to divergent standards and to the well-known risk of over-classification.

b) It foresees a very weak framework for parliamentary oversight. By referring to interinstitutional agreements instead of codifying in secondary law the EP’s constitutional right to oversee classified information, it places the institution in an ancillary position. It is unfortunate that the EP has not fought, until now, to obtain treatment comparable to that reserved for national parliaments with regard to their governments.

The solutions may be different, and special procedures and perhaps even special parliamentary bodies may be needed, but a stronger EP role is more than necessary because this lack of oversight will not be covered at national level – governments will declare that they are barred from revealing the information because it is classified at “European” level! Moreover, the instrument of an “interinstitutional agreement/arrangement” as currently foreseen by Article 295 of the Lisbon Treaty has strong constitutional limitations. As the Council Legal Service itself recognized in 2018: “The wording of the provision (editor’s note: Article 295 TFEU), and notably the use of the term ‘arrangements’, points to the fact that IIAs are instruments for regulating the modalities of cooperation and not for the regulation of substantive policy areas.”

It is thus quite surprising that, since the first Interinstitutional Agreement in 2002, the European Parliament has not asked for a sturdier legal basis for its oversight power.

With the adoption of the INFOSEC Regulation the situation will become even worse, because the EP will be obliged to negotiate interinstitutional agreements with all the other EU institutions, agencies and bodies whenever access to classified information is necessary for fulfilling its own constitutional role. From the outside, 21 years after the first interinstitutional agreement, the fact that the EP is still negotiating the revision of the 2002 interinstitutional agreement on access to classified information in the Common Security and Defence Policy (CSDP) area, instead of creating a true legislative legal basis for its oversight, may look to some like a form of Stockholm syndrome. To exit from such an impasse, would it not be wise for the European Parliament to study the most suitable model by looking at the experience of the major EU Member States, and even of the USA?

c) Article 9 recognises, albeit only in the domain of “sensitive” documents and information, the so-called “originator privilege” or “author rule.” This is an exception to the general philosophy of Regulation 1049/2001, as made clear in Article 4(5):

“A Member State may request the institution not to disclose a document originating from that Member State without its prior agreement.” The point was, and still is, that the EU institutions may only be bound by law and not by the will of an “author”, even if that author were an EU Member State, a point confirmed in the jurisprudence of the Court of Justice of the EU.

What the INFOSEC proposal does is to transform the exception of the “originator principle” into a rule. But, while recognizing the power of each EU Institution, Agency and Body to classify information in the interest of the EU, it does not establish a mechanism to verify whether the EU interest is adequately protected by the classification or whether the classification has been abusively established. For instance, an oversight power could be conferred on the European Commission or on the Ombudsman to decide whether a document or piece of information created by an EU Agency should be declassified.

Clear rules on this point at INFOSEC level could prevent other “incidents” from happening, such as the one which occurred in 2015 between Europol, the Ombudsman and the Commission, when the Ombudsman asked to inspect the report of Europol’s Joint Supervisory Body (JSB) on the implementation of the EU-US Terrorist Finance Tracking Programme Agreement (see https://www.ombudsman.europa.eu/fr/case/en/42114).

d) It does not foresee judicial oversight of classified information. Today it is still up to the originator to decide whether or not to give the Court of Justice access to classified information. This is not a merely theoretical concern: it has already happened that the Council did not respond positively to a Court of Justice request for access to classified information. As Deirdre Curtin reminds us in her essay Top Secret Europe: “…in the OMPI case (*) on the blacklisting of terrorists by the UN and within the EU context, the Court said clearly that the Council could not base its decision on information that is not revealed to the Court.” (Case T-248/08, People’s Mojahedin Organization of Iran v Council (OMPI III), para 73). It is worth recalling that in some Countries such as the USA

e) It does not solve the problem of sharing “sensitive information” between entities which have a legitimate “need to know.” Since Article 9 is focused on the security of each author of “sensitive information” and does not refer to common legislative standards, this sharing has until now been organized by the Council. That institution remains the main creator and exchanger of classified information, and has imposed, via bilateral agreements with all the other EU institutions, agencies and bodies, its internal security rules which, in turn, mirror the NATO standards. It is because of the legal fragility of this “de facto harmonisation” that the Commission has decided to launch a legislative initiative establishing at secondary law level the principles to be respected in this domain inside the EU.

However, the solution envisaged in the INFOSEC proposal still addresses neither the main weaknesses of Article 9 of Regulation 1049/2001 nor those of the Council Internal Security Rules which are proposed to become the common EU standard. In fact, in some cases it makes the situation even worse.

A useful example can be seen in the EU security agreements with third countries and international organizations on the exchange of classified information foreseen by articles 55-68 of the INFOSEC proposal.

The proposal requires, as a rule, that these agreements be negotiated and concluded according to Article 218 of the Lisbon Treaty, which will finally give the EP the possibility to give its consent and to be fully and timely informed of the agreements’ content. But INFOSEC also foresees the possibility of continuing with “executive” arrangements which can be negotiated not only by the Council but also by other EU Institutions, agencies and bodies without associating the EP. Unfortunately, that exclusion of the EP has been the rule until now: dozens of international agreements have been negotiated by the Council using Article 13 of its internal security rules as a legal basis.

Now, if the INFOSEC proposal is adopted, not only the Council but also all the other EU Institutions, Agencies and bodies will have a legal basis for negotiating and concluding these executive “arrangements”. It would be wise to make clear in the INFOSEC proposal that these arrangements shall foresee that, because of the EU’s constitutional framework, no veto can be exercised over the transmission of classified information to the EP and to the CJEU.

4. Summing up: by endorsing the INFOSEC legislative proposal, is the EP shooting itself in the foot?

(EUROPEAN LAW BLOG) EU/US Adequacy Negotiations and the Redress Challenge: How to Create an Independent Authority with Effective Remedy Powers (2)

16 FEBRUARY 2022 / BY THEODORE CHRISTAKIS, KENNETH PROPP AND PETER SWIRE

Can the U.S. Government create, by non-statutory means, an independent redress authority capable of providing an effective remedy for a European person who believes that her or his rights have been infringed by an intelligence service? In this article we put forward a novel non-statutory solution that could resolve the “redress” problem in the EU/US adequacy negotiations. This solution is based on three “building blocks” inspired by methods utilized in U.S. administrative law. First, the U.S. Department of Justice should issue a binding regulation creating within that executive agency an independent “Foreign Intelligence Redress Authority” (FIRA). Second, the President should issue a separate Executive Order providing the necessary investigative powers and giving FIRA’s decisions binding effect across the intelligence agencies and other components of the U.S. government. Finally, European individuals could obtain judicial review of an independent redress decision by using the existing Administrative Procedure Act.

Our first article, published on January 31, concentrated on whether the U.S. Congress would necessarily have to enact a new statute in order to create an adequate redress mechanism. We examined political, practical, and U.S. constitutional difficulties in enacting such a statute. Based on careful attention to EU law, we concluded that relying on a non-statutory solution could be compatible with the “essential equivalence” requirements of Article 45 of the EU’s General Data Protection Regulation (GDPR), if the requisite substantive protections for redress were put into place.

This article examines, from both a U.S. and a European law perspective, measures that could address the substantive requirements, notably the deficiencies highlighted by the Court of Justice of the European Union (CJEU) in its Schrems II judgment: independence of the redress body; its ability to substantively review the requests; and its authority to issue decisions that are binding on the intelligence agencies. We discuss only the redress issues highlighted by the CJEU. We do not address here the other deficiency cited by the Court — whether U.S. surveillance statutes and procedures sufficiently incorporate principles of “necessity and proportionality” also required under EU law.

Part I of this article explains how the U.S. executive branch could create an independent administrative institution to review redress requests and complaints. The institution, which we call “FIRA”, would be similar in important ways to what in Europe is considered an independent administrative authority, such as the several surveillance oversight/redress bodies operating in Europe and listed in the EU Agency for Fundamental Rights’ (FRA) 2017 comparative study on surveillance (p. 115 – in France, for example, the National Commission for Control of Intelligence Techniques, CNCTR). We submit that, in the U.S., such an institution could be based on a binding regulation adopted by the Department of Justice (DOJ). Despite being created by the executive branch, the independence of FIRA would be guaranteed, since leading U.S. Supreme Court precedent considers such a regulation to have binding effect and to protect members of the redress authority from interference by the President or the Attorney General.

Next, Part II of this article assesses how the U.S. executive branch could provide the necessary investigatory powers for FIRA to review European requests and complaints and to adopt decisions binding upon intelligence agencies. This could be done through a Presidential Executive Order that the President may use to limit executive discretion. 

Finally, Part III of this article discusses the important question of whether the ultimate availability of judicial redress is necessary under EU law and whether there is a path under U.S. law to achieve it, despite the 2021 Supreme Court decision in the TransUnion case limiting standing in some privacy cases. We examine reasons why judicial review of decisions by the independent FIRA may not be required under EU law. Nonetheless, we describe a potential path to U.S. judicial review based on the existing Administrative Procedure Act.  

I. Creating an Independent Redress Authority

Based on our discussions with stakeholders, the most difficult intellectual challenge has been how a redress authority can be created within the executive branch yet have the necessary independence from it. We first present the EU criticisms of the Privacy Shield Ombudsperson approach, and then explain how a binding regulation issued by DOJ can address those criticisms satisfactorily. 

1. Identifying the problems of independence with the previous Privacy Shield mechanism

Four criteria for independence of the redress body have been identified by EU authorities in their critiques of the Ombudsperson approach included in the 2016 Privacy Shield. 

a) Protection against dismissal or revocation of the members of the redress body

A crucial measure of independence under EU law is protection against removal of any member of the independent body. In Schrems II, the CJEU noted there was “nothing in [the Privacy Shield Decision] to indicate that the dismissal or revocation of the appointment of the Ombudsperson is accompanied by any particular guarantees” (§195), a point previously made in 2016 by the Article 29 Working Party (WP29) when it observed “the relative ease with which political appointees can be dismissed” (here, p. 51). Protection against removal is also recognized under U.S. law as a key indicator of independence.(1)

b) Independence as protection against external intervention or pressure

Protection against external intervention is a major requirement for a redress authority, as stated by the Advocate General in his 2019 Schrems II Opinion:

“The concept of independence has a first aspect, which is external and presumes that the body concerned is protected against external intervention or pressure liable to jeopardise the independent judgment of its members as regards proceedings before them” (note 213).  

By contrast, the Ombudsperson in the original Privacy Shield was “presented as being independent of the ‘intelligence community’, [but] (…) not independent of the executive” (§ 337). 

c) Impartiality

In the same opinion, Advocate General Saugmandsgaard Øe stressed (and the CJEU endorsed) the importance of impartiality: “The second aspect of [independence], which is internal, is linked to impartiality and seeks to ensure a level playing field for the parties to the proceedings and their respective interests with regard to the subject matter of those proceedings” (note 213, emphasis added).

d)  Relationship to the intelligence community 

In its 2015 study on surveillance, FRA noted that there is a “Goldilocks” challenge concerning the ties between redress bodies and intelligence agencies: “While ties that are too close may lead to a conflict of interest, too much separation might result in oversight bodies that, while independent, are very poorly informed” (p. 71).  In 2016, the WP29 found that the Privacy Shield solution did not appropriately respond to this challenge:

“The Under Secretary is nominated by the U.S. President, directed by the Secretary of State as the Ombudsperson, and confirmed by the U.S. Senate in her role as Under Secretary. As the letter and the Memorandum representations stress, the Ombudsperson is ‘independent from the U.S. Intelligence community’. The WP29 however questions if the Ombudsperson is created within the most suitable department. Some knowledge and understanding of the workings of the intelligence community seems to be required in order to effectively fulfil the Ombudsperson’s role, while at the same time indeed sufficient distance from the intelligence community is required to be able to act independently.” (p.49)

2. How the creation of FIRA by DOJ Regulation could fix these problems 

To date, despite insightful discussions of the challenges, we have not seen any detailed public proposals for how the U.S. executive branch might create a redress institution to meet the strict EU requirements for independence.(2) One innovation, which we understand that the parties might now be considering, could be a binding U.S. regulation, issued by an agency pursuant to existing statutory authority, to create and govern FIRA. Crucially, leading U.S. Supreme Court cases have given binding effect to a comparable regulation, even in the face of objections by the President or Attorney General.

a) Binding DOJ regulation to ensure independence of the FIRA 

The Department of Justice could issue a regulation to create FIRA and guarantee its independent functioning, including protections against removal for the members of FIRA.

Under the U.S. legal system, such an agency regulation has the force of law, making it suitable for defining the procedures for review of redress requests and complaints. DOJ regularly issues such regulations, under existing statutory authorities and pursuant to established and public procedures. To protect against arbitrary or sudden change, modifying or repealing the regulation would require following the same public procedural steps as enacting it in the first place. In Motor Vehicle Manufacturers Association v. State Farm Mutual Automobile Insurance Co., the Supreme Court held that since a federal agency had the discretion to issue a regulation initially, it would have to utilize the same administrative procedures to repeal it.

In an EU/U.S. framework for a new Privacy Shield, the U.S. Government unilaterally could commit to maintain this DOJ regulation in force, and the European Commission could reference the U.S. commitment as a condition of its adequacy decision. This would provide both to the EU and to members of FIRA a guarantee against revocation of the regulation ensuring that the authority would act independently. 

b) Supreme Court precedents protect against external intervention or pressure 

During the Watergate scandal involving then-President Richard Nixon, the Department of Justice issued a regulation creating an independent “special prosecutor” (also called “independent counsel”) within that department. The special prosecutor was designed to be independent from Presidential control, with the regulation stipulating that he could not be removed except with involvement by designated members of Congress. 

Acting within the powers defined in the regulation, the special prosecutor issued a subpoena for audio tapes held by the White House. The President, acting through the Attorney General, objected to the subpoena.  In a unanimous 1974 Supreme Court decision, United States v. Nixon, it was held that the special prosecutor’s decision to issue the subpoena had the force of law, despite the Attorney General’s objection.  The Court noted that although the Attorney General has general authority to oversee criminal prosecutions, including by issuing a subpoena, the fact that the special prosecutor had acted pursuant to a binding DOJ regulation deprived the Attorney General of his otherwise plenary power over subpoenas. 

The Supreme Court observed that “[t]he regulation gives the Special Prosecutor explicit power” to conduct the investigation and issue subpoenas, and that “[s]o long as this regulation is extant, it has the force of law” (emphasis added).  The Court concluded: 

“It is theoretically possible for the Attorney General to amend or revoke the regulation defining the Special Prosecutor’s authority. But he has not done so. So long as this regulation remains in force, the Executive Branch is bound by it, and indeed the United States, as the sovereign composed of the three branches, is bound to respect and to enforce it.”

In sum, as supported by clear Supreme Court precedent, a DOJ regulation can create a mechanism within the executive branch, so that the members of the administration must comply with its terms, even in the face of contrary instructions from the President or Attorney General. And, as stated earlier, the lasting character of the DOJ regulation creating FIRA could be guaranteed by the US Government in the EU/US agreement and be identified by the European Commission in its subsequent adequacy decision as a condition for maintaining this decision in force.

c) Impartiality

We are not aware of significant U.S. constitutional obstacles to ensuring impartiality in FIRA. DOJ appoints Administrative Law Judges (ALJ), such as for deciding immigration matters, and “[t]he ALJ position functions, and is classified, as a judge under the Administrative Procedure Act.” 

U.S. law concerning ALJs, including those located in the DOJ, states that they are “independent impartial triers of fact in formal proceedings”.(3) In Nixon the Supreme Court reaffirmed the lawfulness of an independent adjudicatory function located within the DOJ.(4) A DOJ FIRA regulation could similarly offer guarantees in terms of the impartiality and expertise of its members.

d) Relationship to the intelligence community 

Furthermore, the DOJ appears to be the executive agency best-suited to resolve the “Goldilocks” problem, mentioned above, by combining knowledge and understanding of the intelligence agencies with sufficient distance to judge their conduct independently. 

As noted, EU bodies questioned whether the Department of State, a diplomatic agency, was a “suitable department” for the redress role. The DOJ is more suitable in part because of its experience with the Watergate independent counsel and, for instance, with Immigration Judges as independent triers of fact. 

At the same time, a FIRA located within the DOJ would be well-placed to have knowledge about the intelligence community. The DOJ provides extensive oversight of intelligence activities through its National Security Division, including by issuing regular reports concerning classified activities of the Foreign Intelligence Surveillance Court. Other DOJ components, such as the Office of Privacy and Civil Liberties, also have access to classified information including Top Secret information about intelligence agency activities. In addition, an Executive Order could empower the DOJ to enlist other agencies, such as the Office of the Director of National Intelligence, to gain information from the intelligence community.

II. Creating Effective Powers for the Independent Redress Authority

A DOJ regulation creating an independent redress authority within that executive department must be accompanied by additional government-wide steps for effectively investigating redress requests and for issuing decisions that are binding on the entire intelligence community. The DOJ-issued regulation would define the interaction of FIRA with other parts of that Department.  For the overall mechanism to be effective in other parts of the U.S. government, however, the key legal instrument would be a separate Executive Order issued by the President. In issuing an EO, the President would act within the scope of his overall executive power to define legal limits, such as by requiring intelligence agencies to be bound by FIRA decisions. 

1. Identifying the problems of effectiveness concerning the previous Privacy Shield mechanism

To meet the EU requirement of effective remedial powers, the new redress system would need to have two types of effective powers that the Privacy Shield Ombudsperson lacked. 

a) Investigative Powers 

The WP29 wrote in 2016: 

“concerns remain regarding the powers of the Ombudsperson to exercise effective and continuous control. Based on the available information (…), the WP29 cannot come to the conclusion that the Ombudsperson will at all times have direct access to all information, files and IT systems required to make his own assessment” (p. 51).

In 2019, the European Data Protection Board (EDPB) likewise stated: 

“[T]he EDPB is not in a position to conclude that the Ombudsperson is vested with sufficient powers to access information and to remedy non-compliance, (…)” (§103). 

b) Decisional Powers 

In Schrems II, the CJEU stated:  

“Similarly, (…) although recital 120 of the Privacy Shield Decision refers to a commitment from the US Government that the relevant component of the intelligence services is required to correct any violation of the applicable rules detected by the Privacy Shield Ombudsperson, there is nothing in that decision to indicate that that ombudsperson has the power to adopt decisions that are binding on those intelligence services and does not mention any legal safeguards that would accompany that political commitment on which data subjects could rely” (§196).

The EDPB similarly concluded in 2019:

“Based on the available information, the EDPB still doubts that the powers to remedy non-compliance vis-à-vis the intelligence authorities are sufficient, as the ‘power’ of the Ombudsperson seems to be limited to decide not to confirm compliance towards the petitioner. In the understanding of the EDPB, the (acting) Ombudsperson is not vested with powers, which courts or other similarly independent bodies would usually be granted to fulfil their role” (§102).

2. How a Presidential Executive Order Could Confer These Powers upon FIRA 

These passages describe key EU legal requirements for a new redress system. President Biden could satisfy them by issuance of an Executive Order (EO).  The American Bar Association has published a useful overview explaining that an EO  is a “signed, written, and published directive from the President of the United States that manages operations of the federal government.” EOs “have the force of law, much like regulations issued by federal agencies.”  Once in place, only “a sitting U.S. President may overturn an existing executive order by issuing another executive order to that effect.”

As a general matter, the President has broad authority under Article II of the Constitution to direct the executive branch. In addition, the Constitution names the President as Commander-in-Chief of the armed forces, conferring additional responsibilities and powers with respect to national security. The President’s powers in some instances may be limited by a properly enacted statute, but we are not aware of any such limits relevant to redress.

Not only does the President enjoy broad executive powers, but he or she also may decide to limit how he or she exercises such powers through an EO which, under the law, would govern until and unless withdrawn or revised. Thus, the President would appear to have considerable discretion to instruct the intelligence community, by means of an EO, to cooperate in investigations and to comply with binding rulings concerning redress.

As with the DOJ regulation, the U.S. Government could commit in the EU/US adequacy arrangement to maintain this EO in force. But how could the EU and the general public have confidence that the EO is actually being followed by intelligence agencies? First, FIRA will be able to assess whether this is the case, backed by a possible provision in the Presidential EO fixing penalties for non-compliance with its orders (just as legislation in European countries fixes penalties for failure to comply with the orders of equivalent redress bodies – for an example see art. L 833-3 of the French surveillance law). Furthermore, U.S. intelligence agencies are already subject to parliamentary oversight, including on classified matters, by the Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence. Oversight might also be performed by other governmental actors that have access to classified materials, such as an agency official called the Inspector General or the Civil Liberties and Privacy Office, or by the independent Privacy and Civil Liberties Oversight Board (whose new Director, Sharon Bradford Franklin, recently confirmed by the Senate, is known for her commitment to strong surveillance safeguards and oversight). Oversight may be performed at the Top Secret or other classification level, with unclassified summaries released to the public.

III. Creating Judicial Review of the Decisions of the Independent Redress Authority

Finally, we turn to whether and how decisions of FIRA may be reviewed judicially. We first explain why judicial review in these circumstances may not be required under EU law.  Nonetheless, to minimize the risk of invalidation by the CJEU, we set forth possible paths for creating U.S. judicial review.

1. Reasons that judicial redress is not necessarily required 

There are at least four reasons to believe that EU law does not necessarily require judicial redress if FIRA is independent and capable of exercising the quasi-judicial functions described above by adopting decisions binding on intelligence agencies.

First, as explained in our earlier article, Article 13 of the European Convention on Human Rights (ECHR) may be the appropriate legal standard for the European Commission to use in deciding upon the “essential equivalence” of third countries for international data transfer purposes.  Article 13 only requires an independent “national authority,” thus a non-judicial body could suffice.

Second, the Advocate General in Schrems II seemed to give the impression that judicial review should only be required in a case where the redress body itself is not independent: 

“in accordance with the case-law, respect for the right guaranteed by Article 47 of the Charter thus assumes that a decision of an administrative authority that does not itself satisfy the condition of independence must be subject to subsequent control by a judicial body with jurisdiction to consider all the relevant issues. However, according to the indications provided in the ‘privacy shield’ decision, the decisions of the Ombudsperson are not the subject of independent judicial review.” (§340, emphasis added)

Since FIRA, unlike the Ombudsperson, will not only enjoy independence but also will exercise quasi-judicial functions by adopting decisions binding on intelligence agencies, separate judicial redress may not be required.

Third, this is exactly what seems to be happening in practice in EU Member States themselves. FRA noted in its 2017 comparative study on surveillance that, in most European countries, redress bodies are non-judicial bodies. It also observed that such non-judicial remedies appear better than judicial ones, because their procedural rules are less strict, proceedings are faster and cheaper, and non-judicial avenues generally offer greater expertise than judicial mechanisms. Furthermore, FRA found that “across the EU only in a few cases can decisions of non-judicial bodies be reviewed by a judge” (ibid., p.114 – and table pp.115-116). Requiring the U.S. to provide judicial redress would thus be more than what exists in many Member States.(5) 

Fourth, these observations are even more relevant when one focuses on international surveillance. In France, for instance, an individual may file complaints with the Supreme Administrative Court (Conseil d’Etat) on the basis of the domestic surveillance law of July 2015. There is no possibility to do so under the international surveillance law of November 2015, however, since that law gives only the CNCTR, an administrative authority, the power to initiate (under some conditions) proceedings in the Conseil d’Etat – but does not confer this right directly upon an individual.(6)

Of course, actual practice under Member States law does not necessarily mean that a third country’s similar practices meet the “essential equivalence” standard of EU fundamental rights law, since the relevant comparator seems to be European Law standards – not Member States’ practices which do not always necessarily meet these standards.(7) Nonetheless, demanding from the U.S. a much more elaborate process than what already exists for international surveillance in most EU Member States might be complicated, particularly if there is an effective independent administrative regime in the U.S. exercising quasi-judicial functions.

2. Ultimate judicial redress will however help ensure meeting CJEU requirements

Despite these indications that European law may not require judicial redress, we acknowledge that the position of the CJEU on this point remains ambiguous.  

As indicated in our first article, the CJEU in Schrems II expressly used the term “body,” giving the impression that an independent national administrative authority (in conformity with the requirements of Art. 13 ECHR) could be enough to fulfill the adjudicatory function. As we explained, this is how the EDPB seems to have read Schrems II in its 2020 European Essential Guarantees Recommendations. Long-time EU data protection official Christopher Docksey concurs as well. 

However, it is also true that the Schrems II judgment contains multiple references to judicial redress. It refers to “ the premiss [sic] that data subjects must have the possibility of bringing legal action before an independent and impartial court ” (§194); “the right to judicial protection” (ibid.); “data subject rights actionable in the courts against the US authorities” (§192); “the judicial protection of persons whose personal data is transferred to that third country” (§190); and “the existence of such a lacuna in judicial protection in respect of interferences with intelligence programmes” (§191). It is not clear whether these statements should also apply (following the Advocate General’s logic) to an independent redress body such as FIRA capable of exercising quasi-judicial functions, in contrast to the Ombudsperson examined by the CJEU. Nevertheless, the CJEU judgment might be read as requiring at least some form of ultimate judicial control of a redress authority’s decisions. This also appears to be the interpretation of a senior Commission official. 

In light of these statements, it would be prudent for the U.S. to provide for some form of ultimate judicial review of FIRA decisions, to increase the likelihood of passing the CJEU test in an eventual Schrems III case.  

3. A path to ultimate judicial review of FIRA decisions

As we explained in our first article, the U.S. constitutional doctrine of standing poses a major hurdle in creating a pathway to judicial redress. In the 2021 TransUnion case, the Supreme Court held that plaintiffs incorrectly identified by a credit reporting agency as being on a government terrorism watch list had not shown the required “injury in fact”. This lack of injury in fact, and thus lack of standing, existed even though the underlying statute appeared to confer the right to sue. While one might find this U.S. constitutional jurisprudence unduly restrictive, any new Privacy Shield agreement must take it into account.

There might be, however, another way to provide an individual with judicial redress. An unsatisfied individual could appeal to a federal court an administrative disposition of a redress petition on the grounds that FIRA has failed to follow the law. In such a case an individual would not be challenging the surveillance actions of intelligence agencies (for which injury in fact may be impossible to satisfy) as such; instead, the suit would allege the failure of an independent administrative body (FIRA) to take the actions required by law.  

As Propp and Swire have written previously, one useful precedent is the U.S. Freedom of Information Act (FOIA), under which any individual can request an agency to produce documents, without first having to demonstrate that he or she has suffered particular “injury in fact”. The agency is then required to conduct an effective investigation and to explain any decision not to supply the documents. After the agency responds, the individual may appeal the decision to federal court. The judge then examines the quality of the agency’s investigation to ensure compliance with law, and the judge can order changes in the event of mistakes by the agency.

Analogously, a European individual, unsatisfied by FIRA’s investigation and decision, could bring a challenge in court. Since FOIA concerns a distinct subject matter, the appeal against FIRA’s decisions would instead be based upon the umbrella U.S. Administrative Procedure Act (APA). The APA provides generally for judicial review of agency action that is “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.” Since both a regulation and an Executive Order have the force of law, an APA-based appeal could examine whether the FIRA decision and its implementation were “in accordance with law.” Because the APA applies generally, it could operate in these circumstances without need for an additional federal statute. In addition, U.S. federal courts deciding APA-based appeals already have methods for handling classified national security information, for instance under the Classified Information Procedures Act (CIPA).

Including judicial review under the APA would be a good faith effort by the U.S. government to respond to ultimate EU law concerns. However, since the FIRA approach has not been judicially tested, some legal uncertainty concerning standing to bring the APA suit in federal court would remain. FOIA practice provides a good legal basis for meeting the standing requirement by challenging the agency action itself, but TransUnion highlighted the level of privacy injury that must be shown before a federal court can decide a case.

Conclusion

In these two articles, we have sought to examine rigorously and fully the requirements of EU law with respect to redress. We also have examined U.S. constitutional law, explaining both the difficulties surrounding some solutions (for instance the problem of standing for judicial redress) and the opportunities created by some precedents (such as the protection offered to independent investigative bodies by decisions of the U.S. Supreme Court).

We are not aware of any other published proposal that wrestles in such detail with the complexity of EU and U.S. law requirements for foreign intelligence redress. We hope that our contribution helps fill this gap and presents a promising path permitting resolution of the “redress challenge” in the EU/US adequacy negotiations.

Much will depend on the details of construction and implementation for this protective mechanism. What our articles contribute is the identification of three fundamental building blocks on which a solid and long-lasting transatlantic adequacy agreement could stand. We have shown that there is a promising way to create, by non-statutory means, an independent redress authority and to provide the necessary investigative and decisional powers to respond to redress requests by European persons. We also suggest a way to successfully address the problem of standing and thereby to provide for an ultimate possibility of judicial control. Using these building blocks to create an effective redress mechanism could enable the U.S. and the EU not only to establish a solid transatlantic adequacy regime capable of resisting CJEU scrutiny but also to advance human rights more broadly.

Notes

(1) In 2020, as discussed here, the Supreme Court addressed the President’s removal power in the Seila Law LLC case, finding unconstitutional Congress’ establishment of independence for an agency head. At the same time, the Court reaffirmed that protections against removal can exist for “inferior officers” (roughly, officials appointed through a civil service process rather than by the President) and for multi-member bodies. Either or both of these categories may apply to FIRA members. In 2021, the Supreme Court, in U.S. v. Arthrex, struck down a system of independent Administrative Patent Judges. The approach in our article would be different since the President here issues an executive order, and thus the President serves as the “politically accountable officer” required by the Supreme Court in Arthrex.

(2) More specifically, there have been proposals for providing redress for surveillance conducted pursuant to Section 702 FISA, such as here and here. However, an additional “thorny issue is whether international surveillance, conducted by US intelligence agencies outside the territory of the US on the basis of Executive Order 12333 (EO 12333) should be (or not) part of the adequacy assessment.” Although arguments exist under EU law that redress for EO 12333 surveillance might be excluded from the assessment, this article proceeds on the understanding that the current negotiations will only succeed if EO 12333 surveillance is covered as well. We are not aware of any published proposal that would do so, and seek in this article to present such an approach. For example, the proposal here would apply to requests for redress concerning surveillance conducted under EO 12333, such as programs recently declassified by the U.S. government.

(3) It appears that terms such as “adjudication” and “court” may be understood somewhat differently in the U.S. compared with the EU, creating a risk of confusion in proposals concerning redress. Under U.S. law, many federal agencies, including the Federal Trade Commission and Department of Justice, routinely conduct what is called “adjudication.” Many federal agencies have Administrative Law Judges, defined by the U.S. government as “independent impartial triers of fact in formal proceedings.”  By contrast, in Europe, “courts” and “judges” generally exist outside of the Executive. Therefore, our discussion of FIRA avoids words such as “adjudication” that may be understood differently in different legal systems.

(4) In the 1954 case, Accardi v. Shaughnessy, the Attorney General by regulation had delegated certain of his discretionary powers to the Board of Immigration Appeals. The regulation required the Board to exercise its own discretion on appeals for deportation cases. As noted in U.S. v. Nixon, the Supreme Court in Accardi had held that, “so long as the Attorney General’s regulations remained operative, he denied himself the authority to exercise the discretion delegated to the Board even though the original authority was his and he could reassert it by amending the regulations.”

(5) For a recent description of the German system, see here by Daniel Felz.

(6) This finding was confirmed in a June 2018 decision by the Conseil d’Etat following a request introduced in this court by the Member of the European Parliament Sophie In ’t Veld (analysis here). The Court also rejected the possibility for the claimant to challenge indirectly an alleged misuse of power resulting from the failure of the chairman of the CNCTR to refer the matter to the Council of State. However, as stated by the CNCTR (here, at 46) this is one of the points appearing in the (no less than) 14 challenges currently pending at the ECHR against the French surveillance laws.

(7) See for instance this study by I. Brown and D. Korff arguing that “the EU institutions should stand up for the rule of law and demand the member states and third countries bring their practices in line with those standards”  (at 111).

(EUROPEAN LAW BLOG) EU/US Adequacy Negotiations and the Redress Challenge: Whether a New U.S. Statute is Necessary to Produce an “Essentially Equivalent” Solution (1)

31 JANUARY 2022 / BY THEODORE CHRISTAKIS, KENNETH PROPP AND PETER SWIRE

Must the U.S. Congress change statutory law to solve the major issue of “redress” in the EU-US adequacy negotiations? This is a crucial question, especially since a series of political, pragmatic and even legal/constitutional difficulties mean that the U.S. might not be able to come up with a short-term statutory solution for redress. In this article we analyse this question for the first time in detail, and argue that, provided the U.S. is able to address the deficiencies highlighted by the Court of Justice of the European Union (CJEU) in its Schrems II judgment (independence of the redress body; ability to substantively review the requests; and authority to issue decisions that are binding on the intelligence agencies), then relying on a non-statutory solution could be compatible with the “essential equivalence” requirements of Article 45 of the EU’s General Data Protection Regulation (GDPR). In a second, forthcoming article, we set forth specific elements of a novel non-statutory solution and assess whether it would meet the substantive European legal requirements for redress.

The CJEU issued its Schrems II judgment in July 2020, invalidating the EU/U.S. Privacy Shield and creating uncertainty about the use of Standard Contractual Clauses (SCCs) for transfers of personal data to all third countries (see analysis here, here, here, here and here). In light of the legal uncertainty and the increasing tensions concerning transatlantic data transfers resulting from the intensification of enforcement actions by European data protection authorities (DPAs) since Schrems II (such as this and this), there is both strong reason to reach a new EU/U.S. agreement and also a stated willingness of both sides to do so. The European Commission, understandably, has emphasized though that there is no “quick fix” and that any new agreement must meet the full requirements of EU law.

This article focuses on one of the two deficiencies highlighted by the CJEU: the need for the U.S. legal system to provide a redress avenue accessible to all EU data subjects. We do not address here the other deficiency – whether U.S. surveillance statutes and procedures sufficiently incorporate the principles of ‘necessity and proportionality’ also required under EU law.

We concentrate our inquiry, from both a U.S. and a European law perspective, on whether the U.S. Congress would necessarily have to enact a new statute in order to create an adequate redress mechanism. Part I of this article explains the pragmatic and political reasons why it would be difficult to adopt a new U.S. statute, and especially to do so quickly. Part II examines the U.S. constitutional requirements for “standing”, and explains the legal difficulties and uncertainty concerning proposals, such as the one advanced by the American Civil Liberties Union (ACLU), to provide redress through an individual action in U.S. federal courts. Part III then addresses European law concerning whether a statute is necessary, concluding that the substance of the protections of fundamental rights and respect of the essence of the right to an effective remedy are the key considerations, rather than the form by which an independent and effective redress mechanism would be created.

This article will be followed by a second article exploring whether a non-statutory solution for redress is capable of satisfying the strict substantive standards required by EU law.

I. Political Difficulties of an Immediate Statutory Approach to Redress

There are important advantages to enacting a new U.S. statute to provide redress:

  • There is greater democratic legitimacy if the legislature passes a statute.
  • A law can set limits on Executive discretion that only may be changed by a subsequent statute.
  • A law can fix in a stable, permanent and objective way the rules and procedures for the appointment of the members of the redress body, the duration of their mandate, and guarantees concerning their independence.

However, there are strong pragmatic and political reasons why it would be difficult to enact a new statute in the short term to create a new redress mechanism.

  • First, it is no secret that the U.S. Congress currently finds it difficult to pass legislation generally, with partisan battles and procedural obstacles slowing passage of even essential legislation. As Politico recently reported, “it is increasingly unlikely that Congress will pass any digital-focused bills before lawmakers shut down ahead of November’s midterms”.
  • Second, legislative reform of U.S. surveillance laws is a particularly complex and contentious issue. The national security community in the U.S. has little appetite for sweeping reforms, and even a strong push from the White House may not be sufficient to move such legislation through Congress. In Europe as well, substantial reform of surveillance laws requires a lot of time to seek the necessary political consensus (see for instance this).[i]
  • Third, the international dimensions of a redress reform make legislation even more difficult. If a new redress mechanism benefits only EU data subjects, then it is hard to explain to Congress why they should get greater rights than Americans. On the other hand, if redress rights were also to be conferred on U.S. data subjects, then a novel and complex set of institutional changes to the overall U.S. surveillance system would be needed.
  • Fourth, it would be difficult for U.S. legislators to vote for a statute without knowing in advance whether the CJEU will accept it as good enough.
  • Fifth, Congress historically has been reluctant to regulate in great detail how the President conducts foreign policy and protects national security. For instance, Congress has adopted detailed statutes (such as the Foreign Intelligence Surveillance Act, FISA) concerning “compelled access”, e.g. how intelligence agencies can request data from service providers. By contrast, it has rarely enacted any statute that applies to “direct” surveillance conducted outside of the U.S. under the standards of Executive Order (EO) 12,333. Furthermore, specific actions under that Executive Order have never, so far as we know, been subject to review by federal judges.

For these reasons, we believe at a pragmatic level that it would be extremely difficult for Congress to promptly pass legislation to provide redress to EU persons. By contrast, if an adequate fix to the redress problem can be created at least in large part without new legislation, then it would be considerably easier for Congress subsequently to enact a targeted statute ratifying the new mechanism, perhaps adding other provisions to perfect an initial non-statutory approach. That sort of legislation is far easier to enact than writing a law in Congress from a blank page.

II. Constitutional Difficulties for a U.S. Statutory Approach to Redress: The Problem of Standing

These political and pragmatic reasons alone would justify U.S. government and European Commission negotiators seeking to address the redress deficiencies highlighted in Schrems II through a non-statutory solution. But, in addition, there is a constitutional dimension. The U.S. Constitution establishes a “standing” requirement as a prerequisite to a case being heard before judges in the federal court system. Any new U.S. redress mechanism must be consistent with the U.S. Constitution, just as it must meet EU fundamental rights requirements.

U.S. standing doctrine derives from Article III of the U.S. Constitution, which governs the federal court system. The federal judicial power extends only to “cases” and “controversies” – meaning that there has to be an “injury in fact” in order to have a case heard. A related doctrine is the ban on issuance of “advisory opinions” by federal judges, a position of the Supreme Court dating back to the first President, George Washington, and defined most clearly in Muskrat v. United States. In sum, a statute that creates a cause of action in the federal courts is unconstitutional unless it meets the requirements of standing and injury in fact, and does not violate the prohibition on advisory opinions.

The ACLU in 2020 called for a “standing fix” to enable suit in federal court “where a person takes objectively reasonable protective measures in response to a good-faith belief that she is subject to surveillance.” However, since the right to redress under European law also exists for individuals who did not take protective measures, the proposal seems too narrow to meet the CJEU requirements.

A second difficulty with the ACLU approach is that the Supreme Court made standing related to privacy injuries even more difficult to establish in its TransUnion LLC v. Ramirez decision in June 2021. As discussed here, the majority in that case made it significantly more difficult for privacy plaintiffs henceforth to sue in federal court. The Court restated its holding from the 2016 Spokeo case that a plaintiff does not automatically satisfy “the injury-in-fact requirement whenever a statute grants a person a statutory right and purports to authorize that person to sue to vindicate that right.” More bluntly, the Court stated: “An injury in law is not an injury in fact”. [ii] The majority in TransUnion found “concrete harm” for some plaintiffs but not others. Even individuals whose credit histories were badly mistaken – stating they were on a government list as “potential terrorists” – could not sue solely on the basis of a right of action created by statute. In sum, there would be substantial legal uncertainty surrounding a U.S. statute conferring upon EU data subjects the right to go straight to U.S. courts to get redress (for a similar conclusion see here).

The standing objection applies only to direct access to federal courts, and not to an independent non-judicial redress authority. However, Congress might be reluctant to intervene ex nihilo in a field such as “direct” foreign surveillance conducted under EO 12,333, which traditionally belongs to the Executive power under the U.S. Constitution. Congress might be more willing to act and endorse by statute an effective redress mechanism if, as a first step, the Executive branch itself had first created such an independent non-judicial redress authority within the Executive branch. In any case, such a statute does not appear to be a necessary precondition under U.S. law for creating a redress system.

III. Is a Non-Statutory Approach to Redress Compatible with European Law?

Since the U.S. government might not be able to produce a short-term statutory solution for redress, the question then arises as to whether a non-statutory approach would be acceptable under EU law. In order for the European Commission to be able to issue an adequacy decision under Article 45 of the GDPR, the U.S. must ensure an “adequate” level of protection.

If the U.S. is able to address by non-statutory means the deficiencies highlighted by the CJEU in Schrems II (mentioned above), then such a solution could be compatible with the “essential equivalence” requirements of Article 45 of the GDPR. We defer for now the question of whether a non-statutory path would indeed be able to address these substantive issues, instead focusing only on whether a non-statutory approach in principle is compatible with European law.

A. The Starting Point: The Right to Effective Remedy Under European Human Rights Law

What we call “redress” in the context of transatlantic adequacy negotiations corresponds to the “right to effective remedy” under European law. Article 47(1) of the Charter of Fundamental Rights of the European Union (“Charter”) states that:

“Everyone whose rights and freedoms guaranteed by the law of the Union are violated has the right to an effective remedy before a tribunal in compliance with the conditions laid down in this Article.”

The official explanations of Article 47 make clear that this article is “based on Article 13 of the European Convention of Human Rights” (ECHR), according to which:

“Everyone whose rights and freedoms as set forth in this Convention are violated shall have an effective remedy before a national authority notwithstanding that the violation has been committed by persons acting in an official capacity.”

A comparison of the two articles reveals that in EU law the protection is more extensive than in ECHR law, since the former guarantees the right to an effective remedy before a “tribunal” while the latter only refers to a “national authority”. The term “tribunal” seems to refer to a judicial body, as the official explanation suggests. This is confirmed by reference to non-English language versions of Article 47(1), which translate the word “tribunal” as “court” (e.g.“Gericht” in German and “Gerecht” in Dutch). It is also evident that neither Article 47(1) of the Charter nor Article 13 of the ECHR require that a redress body be created by statute.

However, Article 47(2) of the Charter adds additional, complicating requirements:

“Everyone is entitled to a fair and public hearing within a reasonable time by an independent and impartial tribunal previously established by law. Everyone shall have the possibility of being advised, defended and represented”.

As the official explanations point out, this second paragraph “corresponds to Article 6(1) of the ECHR”, which reads as follows:

“In the determination of his civil rights and obligations or of any criminal charge against him, everyone is entitled to a fair and public hearing within a reasonable time by an independent and impartial tribunal established by law. Judgment shall be pronounced publicly but the press and public may be excluded from all or part of the trial in the interests of morals, public order or national security in a democratic society, where the interests of juveniles or the protection of the private life of the parties so require, or to the extent strictly necessary in the opinion of the court in special circumstances where publicity would prejudice the interests of justice”.

Both Article 47(2) of the Charter and Article 6(1) of the ECHR thus require “an independent and impartial tribunal established by law”. Yet, what is the exact relationship between the provisions on “effective remedy” (Article 47(1) of the Charter and Article 13 of the ECHR), and those on “a fair and public hearing by independent and impartial tribunals established by law” (Article 47(2) of the Charter and Article 6(1) of the ECHR)?

A restrictive analysis would regard the two sets of articles as entirely interlinked, in which case redress bodies would always have to be “established by law”.

A second, more flexible and plausible interpretation would maintain that this latter set of requirements constitutes lex specialis in relation to the former; in other words, the “right to effective remedy” (“redress”) is broader than the “right to a fair trial”. This interpretation finds support in the ECHR, which textually separates the two sets of rights and requirements (Articles 13 and 6(1)). It is also confirmed by the official guide to Article 13, which states that “Article 6 § 1 of the Convention is lex specialis in relation to Article 13” (here, at 41), and by the fact that Article 6(1) is limited in scope to civil rights and criminal charges. It therefore would be difficult to merge the obligation of states to put in place an “effective remedy” with the “established by law” requirement, as this latter requirement only concerns the right to a fair trial before a “tribunal” under Article 6(1) – and not the broader right of redress before a “national authority” under Article 13. It seems then that, at least under the ECHR, a redress body need not always be a judicial body nor be “established by law”, provided that it satisfies the substantive requirements of the “right to effective remedy”. As we will see, the standards of the ECHR have always been particularly relevant for the European Data Protection Board (EDPB) in assessing the “essential equivalence” of “redress” mechanisms under Article 45 of the GDPR.

B. Flexibility Introduced by the “Essentially Equivalent” Standard of EU Data Protection Law

A flexible interpretation of the “effective remedy” requirement is also supported by the “essential equivalence” standard of the GDPR for third countries.

In Schrems I, the Court clearly acknowledged that “the means to which [a] third country has recourse, […] for the purpose of ensuring such a level of protection may differ from those employed within the European Union, […] those means must nevertheless prove, in practice, effective in order to ensure protection essentially equivalent to that guaranteed within the European Union” (§74 of the October 6, 2015 judgment, emphasis added).

The CJEU Advocate General emphasised in his 2019 Schrems II Opinion that the “essentially equivalent” standard “does not mean that the level of protection must be ‘identical’ to that required in the Union”. He explained that:

“It also follows from that judgment, in my view, that the law of the third State of destination may reflect its own scale of values according to which the respective weight of the various interests involved may diverge from that attributed to them in the EU legal order. Moreover, the protection of personal data that prevails within the European Union meets a particularly high standard by comparison with the level of protection in force in the rest of the world. The ‘essential equivalence’ test should therefore in my view be applied in such a way as to preserve a certain flexibility in order to take the various legal and cultural traditions into account” (§§ 248-249, emphasis added).

The EDPB previously had endorsed this flexible interpretation of the elements for adequacy. In its 2016 Opinion on Privacy Shield, for instance, the EDPB’s predecessor (WP29) emphasised that:

“the WP29 does not expect the Privacy Shield to be a mere and exhaustive copy of the EU legal framework […]. The Court has underlined that the term ‘adequate level of protection’, although not requiring the third country to ensure a level of protection identical to that guaranteed in the EU legal order, must be understood as requiring the third country in fact to ensure, by reason of its domestic law or its international commitments, a level of protection of fundamental rights and freedoms that is essentially equivalent to that guaranteed within the European Union […]” (p. 3).

It is precisely this flexible approach that allowed EU authorities to set aside the requirement that a redress body should be a “tribunal” – despite clear terms to the contrary in Article 47(1) of the Charter. As the EDPB noted in its Recommendations 02/2020 on the European Essential Guarantees for surveillance measures of November 10, 2020 (§47): “an effective judicial protection against such interferences can be ensured not only by a court, but also by a body which offers guarantees essentially equivalent to those required by Article 47 of the Charter” (emphasis added). The EDPB noted that the CJEU itself “expressly” used the word “body” in §197 of Schrems II. Indeed, in all its extant positions on U.S. redress mechanisms, the EDPB has recognised that the applicable standards equate with those in Article 13 of the ECHR, which “only obliges Members States to ensure that everyone whose rights and freedoms are violated shall have an effective remedy before a national authority, which does not necessarily need to be a judicial authority” (ibid, §46, emphasis added).

Therefore, provided that the U.S. redress mechanism meets the substantive requirements of Article 13 ECHR as cited in Schrems II and the EDPB opinions, a judicial body will not be necessarily required, and an “established by law” standard need not be applied in order to meet the “essential equivalence” test. As the astute European legal observer Chris Docksey concluded:

“This could be an opportunity for the CJEU to give meaning to the difference between essential equivalence and absolute equivalence mentioned above when deciding on the standard of individual redress to be applied in the specific case of international transfers. If the content of the right under Article 47 is ensured, then the form should not be an obstacle” (emphasis added).

C. Interpreting “Law” in a Substantive, Not Formal, Sense

European human rights law seems, in fact, to prioritise substance over form even in situations that go beyond an “essential equivalence” assessment. This can be shown by examining interpretations of the “in accordance with the law” requirement found in the ECHR, the Charter and several fundamental EU data protection sources of law, including the GDPR.

ECHR articles concerning human rights, including Article 8 (right to privacy), stipulate that some restrictions to these rights may be acceptable provided they are “in accordance with the law” and “necessary in a democratic society” in order to protect certain legitimate interests (such as national security, public safety, or the prevention of disorder or crime). Similarly, Article 52 of the Charter requires that: “Any limitation on the exercise of the rights and freedoms recognised by this Charter must be provided for by law (…)”.

Both the Convention and the Charter, however, interpret the term “law” in a flexible way. The ECtHR, for instance, has emphasised on multiple occasions that:

“[A]s regards the words “in accordance with the law” and “prescribed by law” which appear in Articles 8 to 11 of the Convention, the Court observes that it has always understood the term “law” in its “substantive” sense, not its “formal” one; it has included both “written law”, encompassing enactments of lower ranking statutes and regulatory measures (…), and unwritten law” (Sanoma Uitgevers B.V. v. the Netherlands, 2010, § 83, emphasis added; see also Sunday Times (No. 1) v. the United Kingdom, 1979, § 47).

Similarly, in EU data protection law, both the Law Enforcement Data Protection Directive (LED) and the GDPR also understand the term “law” in its substantive sense. According to Recital 33 of the LED, for instance:

“Where this Directive refers to Member State law, a legal basis or a legislative measure, this does not necessarily require a legislative act adopted by a parliament, without prejudice to requirements pursuant to the constitutional order of the Member State concerned (…)” (emphasis added).

Further, Recital 41 of the GDPR provides:

“Where this Regulation refers to a legal basis or a legislative measure, this does not necessarily require a legislative act adopted by a parliament, without prejudice to requirements pursuant to the constitutional order of the Member State concerned. However, such a legal basis or legislative measure should be clear and precise and its application should be foreseeable to persons subject to it, in accordance with the case-law of the [CJEU] and the European Court of Human Rights” (emphasis added).

This flexible interpretation of the term “law” in the data protection context for assessing the incursion of state interests on fundamental rights is formally separate from the requirement in Article 47(2) of the Charter that a tribunal be “previously established by law”. However, this analytic flexibility is consistent with how EU bodies have interpreted the “essentially equivalent” standard, including in the context of the Privacy Shield. It therefore supports the conclusion that a U.S. decision to put in place an independent and effective redress mechanism for surveillance would satisfy the requirements of European law even if it does not involve the adoption of a statute. This conclusion is also supported by the European DPAs’ previous positions concerning the Privacy Shield Ombudsperson.

D. The CJEU and EU DPAs Did Not Object to Non-Statutory Redress

The fact that the Privacy Shield Ombudsperson was not created by statute did not seem to be a primary concern for either the CJEU or the EDPB in assessing whether this mechanism offers “essentially equivalent” protection to European law.

In Schrems II, the Court did not identify as a deficiency that the Ombudsperson mechanism was not created by statute. Rather, the problems detected were that there was “nothing in [the Privacy Shield Decision] to indicate that the dismissal or revocation of the appointment of the Ombudsperson is accompanied by any particular guarantees” and, also, that there was “nothing in that decision to indicate that the ombudsperson has the power to adopt decisions that are binding on those intelligence services (…)” (§§ 195-196). Thus, provided there is a way to fix these deficiencies by non-statutory means, the new redress solution could pass the “essential equivalence” test.

The EDPB also seems to support this argument. In its 2016 Opinion on Privacy Shield, its predecessor, the Article 29 Working Party (WP29), began by stating that:

“in addition to the question whether the Ombudsperson can be considered a ‘tribunal’, the application of Article 47 (2) Charter implies an additional challenge, since it provides that the tribunal has to be ‘established by law’. It is doubtful however whether a Memorandum which sets forth the workings of a new mechanism can be considered ‘law’” (p. 47).

The WP29 therefore seemed to link Articles 47(1) and 47(2). However, it did not appear to consider the legal form by which the Ombudsperson was created as an insuperable obstacle. It stated:

“As a consequence – with the principle of essential equivalency in mind – rather than assessing whether an Ombudsperson can formally be considered a tribunal established by law, the Working Party decided to elaborate further the nuances of the case law as regards the specific requirements necessary to consider ‘legal remedies’ and ‘legal redress’ compliant with the fundamental rights of Articles 7, 8 and 47 Charter and Article 8 (and 13) ECHR” (ibid., emphasis added).

The WP29 then went on to analyse the requirements of European law concerning the “right to effective remedy”, focusing primarily on the case law of the ECtHR, and concluded that the Ombudsperson did not meet these requirements, essentially for the same reasons mentioned by the CJEU in the Schrems II Judgment.

In their subsequent assessments of Privacy Shield, the WP29 and the EDPB arrived at the same conclusion. They did not consider that the means by which the Ombudsperson was created represented an obstacle to passing the “essentially equivalent” test. On the contrary, the EDPB “welcomed the establishment of an Ombudsperson mechanism as a new redress mechanism” (see for instance here, §99) and repeated that “having analysed the jurisprudence of the ECtHR in particular”, it “favored an approach which took into account the powers of the Ombudsperson” (see here, p.19).

Similarly, the European Data Protection Supervisor (EDPS) did not oppose the creation of the Ombudsperson on the grounds that it was done in a non-statutory way. On the contrary, he argued that “in order to improve the redress mechanism proposed in the national security area, the role of the Ombudsperson should also be further developed, so that she is able to act independently not only from the intelligence community but also from any other authority” (here, at 8, emphasis added).

Conclusion

In sum, European law is flexible in interpreting whether the United States must adopt a new statute to meet redress requirements, especially when the question is viewed through the “essential equivalence” prism of data protection. Substance prevails over form. It remains true that a statutory approach would in abstracto be the easiest way for the United States to establish a permanent and independent redress body for effectively reviewing complaints and adopting decisions that bind intelligence services. However, when one takes into consideration the political, practical and constitutional difficulties confronting negotiators, it makes sense to achieve the same results in a different way.

In a second article, to be published shortly, we will detail specific elements of a non-statutory solution and assess whether it would meet the substantive European requirements on redress.

[i] As this report shows, even in Germany, a country particularly sensitive to intelligence law questions, the major Signals Intelligence (SIGINT) reform did not provide any judicial redress options for non-Germans: “There is no legally defined path for foreign individuals, such as journalists abroad, who want to find out if their communications have been collected in SIGINT operations and, if so, to verify whether the collection and processing of their data was lawful. What is more, the legislators opted to explicitly waive notification rights for foreigners regarding the bulk collection of their personal data.” (p. 63)

[ii] The European Court of Human Rights has developed jurisprudence that is more flexible than U.S. standing law in terms of who may bring a suit. Since Klass and Others v. Germany (1978), European human rights law has accepted that an individual may, under certain conditions, claim to be the victim of a violation occasioned by the mere existence of legislation permitting secret measures of surveillance, without having to allege that such measures were in fact applied to him or that he has been subject to a concrete measure of surveillance (the famous theory of the “potential victim” of a human rights violation; see here, paras 34-38 and here, p. 15 for an updated analysis). Notwithstanding this greater flexibility in European law, we reiterate that the limits on U.S. standing are a matter of U.S. constitutional law, which cannot be overruled by a statute enacted by Congress.

Worth Reading: “Understanding EU data protection policy”

European Parliamentary Research Service (EPRS): Policy Briefing

Summary: The datafication of everyday life and data scandals have made the protection of personal information an increasingly important social, legal and political matter for the EU. In recent years, awareness of data rights and expectations for EU action in this area have both grown considerably. The right to privacy and the right to protection of personal data are both enshrined in the Charter of Fundamental Rights of the EU and in the EU Treaties. The entry into force of the Lisbon Treaty in 2009 gave the Charter the same legal value as the Treaties and abolished the pillar structure, providing a stronger basis for a more effective and comprehensive EU data protection regime.

In 2012, the European Commission launched an ambitious reform to modernise the EU data protection framework. In 2016, the co-legislators adopted the EU’s most prominent data protection legislation – the General Data Protection Regulation (GDPR) – and the Law Enforcement Directive. The framework overhaul also included adopting an updated Regulation on Data Protection in the EU institutions and reforming the e-Privacy Directive, which is currently the subject of negotiation between the co-legislators. The European Parliament has played a key role in these reforms, both as co-legislator and author of own-initiative reports and resolutions seeking to guarantee a high level of data protection for EU citizens. The European Court of Justice plays a crucial role in developing the EU data protection framework through case law. In the coming years, challenges in the area of data protection will include balancing compliance and data needs of emerging technologies, equipping data protection authorities with sufficient resources to fulfil their tasks, mitigating compliance burdens for small and medium-sized enterprises, taming digital surveillance and further clarifying requirements of valid consent. (This is an updated edition of a briefing written by Sofija Voronova in May 2020.)

LINK TO THE FULL TEXT

VERFASSUNGSBLOG: A cautious green light for technology-driven mass surveillance

The Advocate General’s Opinion on the PNR Directive

by Christian Thönnes

Yesterday, on 27 January 2022, Advocate General (AG) Pitruzzella published his Opinion (“OP”) in the Court of Justice of the European Union’s (CJEU) preliminary ruling procedure C-817/19. The questions in this case pertain to Directive (EU) 2016/681 of 27 April 2016 on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime (in short: PNR Directive) and its compatibility with EU primary law.

In his Opinion (which, besides the Press Release (“PR”), was only available in French at the time of writing), the AG, while criticizing the PNR Directive’s overbroad data retention period and its lack of clarity and precision in certain points, generally considers the PNR Directive to be “compatible with the fundamental rights to respect for private life and to the protection of personal data” (PR). His arguments are not convincing.

Certainly, much more can and will be written about this case in general and the Opinion in particular. This entry can only shine a light on some of the AG’s major arguments. In so doing, it shall point out why, in my opinion, the CJEU would do well not to follow the AG’s recommendations. Instead, I believe the PNR Directive is incompatible with Articles 7 and 8 of the EU Charter of Fundamental Rights (CFR). Consequently, it ought to be invalidated.

What the AG has to say about the PNR Directive

The PNR Directive obliges EU Member States to require air carriers to transmit a set of data for each passenger to national security authorities, where it is subjected to automated processing against pre-existing databases (Art. 6 § 3 letter a) and “pre-determined criteria” (Art. 6 § 3 letter b), which contain (allegedly) suspicious flight behaviors (such as a mismatch between luggage, length of stay and destination; see the Commission’s Evaluation Report, point 5.1), in order to identify potential perpetrators of serious crimes or acts of terrorism (a more detailed description of the Directive’s workings can be found in paras 9-18 of the AG’s Opinion or here).

The AG points to certain (limited) problems with the Directive’s wording. Firstly, he contends that point 12 of Annex I, enabling “General Remarks” to be included in PNR data sets, fails to “satisfy the conditions of clarity and precision laid down by the Charter” (PR, also para 150 OP). He also considers the Directive’s five-year retention period for PNR data excessive and proposes that this period be limited to cases where “a connection is established, on the basis of objective criteria, between those data and the fight against terrorism or serious crime” (PR, also para 245 OP). In addition, he provides clarifying criteria for the relevancy of databases under Art. 6 § 3 letter a (para 219 OP), regarding the applicability of the GDPR (para 53 OP), as well as for collisions with the Schengen Borders Code (para 283 OP). He also demands that, due to their lack of transparency, (at least some) “machine-learning artificial intelligence systems” (PR) should not be used for pre-determined criteria (para 228 OP).

The most resounding message of his Opinion, however, is that the PNR Directive’s mass retention and processing regime is “relevant, adequate and not excessive in relation to the objectives pursued” (PR) and thus compatible with Articles 7 and 8 CFR. He therefore recommends letting it stand, albeit with some interpretative limitations (para 254 OP).

Incompatibility with Digital Rights Ireland and its successors

The AG’s reasoning in support of the PNR Directive’s proportionality relies on his central finding that “the Court’s case-law on data retention and access in the electronic communications sector is not transposable to the system laid down by the PNR Directive” (PR). He is referring to decisions like Digital Rights Ireland, Tele2 Sverige and Quadrature du Net, in which the CJEU had laid down strict limits on governments’ power to collect and process telecommunications data. Notably, it posited that “the fight against serious crime […] and terrorism […] cannot in itself justify that national legislation providing for the general and indiscriminate retention of all traffic and location data should be considered to be necessary for the purposes of that fight” (Tele2 Sverige, para 103; also Digital Rights Ireland, para 51). Instead, the CJEU required that in order to be considered “limited to what is strictly necessary […] the retention of data must continue nonetheless to meet objective criteria, that establish a connection between the data to be retained and the objective pursued” (Tele2 Sverige, para 110).

Evidently, the PNR Directive would clash with these criteria – were they found to be applicable. The collection and automated processing of PNR data is completely indiscriminate. Given that Member States have universally extended the Directive to intra-EU flights, it affects all European flight passengers, regardless of their personal histories and independently of a potential increased domestic threat situation (this is proposed as a possible criterion in Quadrature du Net, para 168). The use of pre-determined criteria is not, like the comparison against existing databases, aimed at recognizing known suspects, but at conjuring up new suspicions (see EU Commission PNR Directive Proposal, SEC(2011) 132, p. 12). Also, taking a flight is a perfectly ordinary form of human behavior. There is no empirically demonstrated connection to the perpetration of serious crimes or acts of terrorism (in para 203, the AG presupposes such a “lien objectif” (“objective link”) without providing any evidence exceeding anecdotal intuitions about terrorism and human trafficking), and the PNR Directive, given its broad catalogue of targeted crimes, is not limited to dangers caused by air traffic. What behavior will be targeted next? Visiting a museum? Going to a rock concert? Belgium, for example, has already expanded the PNR Directive’s scope to international trains, buses and ferries (Doc. parl., Chambre, 2015-2016, DOC 54-2069/001, p. 7).

Good reasons for applicability

It thus is quite clear: should Digital Rights Ireland and its successors apply, the PNR Directive is in trouble. Now, why wouldn’t their criteria be transposable? The AG’s arguments mainly turn on a perceived difference in sensitivity between PNR data and telecommunications meta-data. The latter, the AG explains, contain intimate information about users’ private lives (paras 195, 196) and are almost uncontrollable in their scope and processing, because everyone uses telecommunication (paras 196, 198). Moreover, because they are used for communication, telecommunications data, unlike PNR data, have an intrinsic connection to fundamental democratic freedoms (para 197). PNR data, on the other hand, he opines, are limited to a delineated life domain and narrower target groups, because fewer people use planes than telecommunication (paras 196, 198).

Under closer examination, this comparison falls apart. Firstly, PNR data contain very sensitive information, too. As the CJEU pointed out in its Opinion 1/15 regarding the once-envisaged EU-Canada PNR Agreement, “taken as a whole, the data may, inter alia, reveal a complete travel itinerary, travel habits, relationships existing between air passengers and the financial situation of air passengers, their dietary habits or state of health” (para 128). Unlike the AG (see para 195 of his Opinion), I can find no remarks in Opinion 1/15 that would relegate PNR data to a diminished place compared to telecommunications data. But secondly, and more importantly, the AG fails to consider other factors weighing on the severity of the PNR Directive’s data processing when compared against the processing under Directive 2006/24/EC and its siblings: the method and breadth of processing, and the locus of storage.

Only a small minority of telecommunication datasets, upon government requests in specific cases (see Articles 4 and 8 of Directive 2006/24/EC), underwent closer scrutiny, while the vast majority remained untouched. Under the PNR Directive, however, all passengers, without exception, are subjected to automated processing. The comparison against pre-determined criteria, as the AG himself points out (para 228 OP), can even be seen as inviting Member States to use self-learning algorithms to establish suspicious movement patterns. Other EU law statutes, like Art. 22 GDPR or Art. 11 of Directive (EU) 2016/680, as well as comparable decisions by national constitutional courts (BVerfG, Beschluss des Ersten Senats vom 10. November 2020 – 1 BvR 3214/15 –, para 109), are inspired by an understanding that such automated processing methods greatly increase the severity of the respective interferences with fundamental rights. Moreover, while telecommunications data were stored on telecommunication service providers’ servers (to whom users had entrusted these data), PNR data are all transferred from air carriers to government entities and then stored there.

Hence, there are good reasons to assume that the data processing at hand causes even more severe interferences with Articles 7 and 8 CFR than Directive 2006/24/EC did. It thus follows that the case law of Digital Rights Ireland should apply a fortiori.

An inaccurate conception of automated algorithmic profiling and base rate fallacy

There are other problems with the AG’s reasoning; completely untangling all of them would exceed this space. Broadly speaking, however, the AG seems to underestimate the intrinsic pitfalls of unleashing predictive self-learning algorithms on data pools like these. The AG claims that the PNR Directive contains sufficient safeguards against false positives and discriminatory results (para 176 OP).

Firstly, it is unclear what these safeguards are supposed to be. The Directive does not enunciate clear standards for human review. Secondly, even if there were more specific safeguards, it is hard to see how they could remedy the Directive’s central inefficiency. That inefficiency does not reside in the text, it resides in the math – and it is called the ‘base rate fallacy’. The Directive forces law enforcement to look for the needle in a haystack. Even if their algorithms were extremely accurate, false positives would most likely exceed true positives. Statistics provided by Member States, showing extremely high false-positive rates, support this observation. The Opinion barely even discusses false positives as a problem (only in an aside in para 226 OP). Also, it is unclear how the antidiscrimination principle of Art. 6 § 4 is supposed to work. While the algorithms in question may be programmed in a way that does not process explicit data points on race, religion, health etc., indirect discrimination is a well-established problem of antidiscrimination law: both humans and algorithms may simply use the next-best proxy trait (see for example Tischbirek, Artificial Intelligence and Discrimination).
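The arithmetic behind the base rate fallacy can be made concrete with a small sketch. The prevalence and accuracy figures below are illustrative assumptions for the sake of the example, not numbers drawn from the Directive or from Member State statistics:

```python
# Illustrative base-rate arithmetic (assumed figures): even a highly accurate
# screening algorithm flags far more innocent passengers than genuine targets
# when the targeted behaviour is rare.

def screening_outcomes(passengers, prevalence, sensitivity, false_positive_rate):
    """Return (true_positives, false_positives, precision) for one screening run."""
    targets = passengers * prevalence
    innocents = passengers - targets
    true_pos = targets * sensitivity              # targets correctly flagged
    false_pos = innocents * false_positive_rate   # innocents wrongly flagged
    precision = true_pos / (true_pos + false_pos)
    return true_pos, false_pos, precision

# Hypothetical figures: 1 in 1,000 passengers is a genuine target, and the
# algorithm is "99% accurate" in both directions.
tp, fp, prec = screening_outcomes(1_000_000, 0.001, 0.99, 0.01)
print(f"true positives:  {tp:,.0f}")   # 990
print(f"false positives: {fp:,.0f}")   # 9,990
print(f"precision:       {prec:.1%}")  # 9.0%
```

Even under these generous assumptions, roughly ten innocent travellers are flagged for every genuine target; no amount of downstream “safeguards” changes that ratio, because it is fixed by the rarity of the targeted behaviour.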

Now, the AG attempts to circumvent these problems by reading the PNR Directive in a way that prohibits the use of self-learning algorithms (para 228 OP). But that interpretation, which is vaguely based on some “système de garanties” (“system of guarantees”, para 228 OP), is both implausible – it lacks textual support, and the pile of PNR data is amassed precisely to create a use case for AI at EU borders – and insufficient to alleviate this surveillance tool’s inherent statistical inefficiency.

This cursory analysis sheds light on some of the AG’s Opinion’s shortcomings. It thus follows that the CJEU should deviate from Pitruzzella’s recommendations. The PNR Directive, due to the severity of its effects and its inherent inefficiency in fulfilling its stated purpose, produces disproportionate interferences with Articles 7 and 8 CFR. It ought to be invalidated.

Between 2017 and 2021, the author worked for the German NGO “Gesellschaft für Freiheitsrechte”, among other things, on a similar case (C-148/20 to C-150/20) directed against the PNR Directive.

Does the EU PNR Directive pave the way to mass surveillance in the EU? (soon to be decided by the CJEU…)

Fundamental Rights European Experts Group

(FREE-Group)

Opinion on the broader and core issues arising in the PNR Case currently before the CJEU (Case C-817/19)

by Douwe Korff (Emeritus Professor of International Law, London Metropolitan University; Associate, Oxford Martin School, University of Oxford)

(LINK TO THE FULL VERSION 148 Pages)

EXECUTIVE SUMMARY

(with a one-page “at a glance” overview of the main findings and conclusions)

Main findings and conclusions at a glance

In my opinion, the appropriate tests to be applied to mass surveillance measures such as those carried out under the PNR Directive (and those carried out under the Data Retention Directive, and still carried out under the national data retention laws of the EU Member States that continue to apply in spite of the CJEU case-law) are:

Have the entities that apply the mass surveillance measure – i.e., in the case of the PNR Directive (and the DRD), the European Commission and the EU Member States – produced reliable, verifiable evidence:

  • that those measures have actually, demonstrably contributed significantly to the stated purpose of the measures, i.e., in relation to the PNR Directive, to the fight against PNR-relevant crimes (and in relation to the DRD, to the fight against “serious crime as defined by national law”); and
  • that those measures have demonstrably not seriously negatively affected the interests and fundamental rights of the persons to whom they were applied?

If the mass surveillance measures do not demonstrably pass both these tests, they are fundamentally incompatible with European human rights and fundamental rights law and the Charter of Fundamental Rights; this means the measures must be justified, by the entities that apply them, on the basis of hard, verifiable, peer-reviewable data.

The conclusion reached by the European Commission and the Dutch Minister of Justice – that overall the PNR Directive, respectively the Dutch PNR law, had been “effective” because the EU Member States said so (Commission) or because PNR data were quite widely used and the competent authorities said so (Dutch Minister) – is fundamentally flawed, given that this conclusion was reached in the absence of any real supporting data. Rather, my analyses show that:

  • Full PNR data are disproportionate to the purpose of basic identity checks;
  • The necessity of the PNR checks against Interpol’s Stolen and Lost Travel Document database is questionable;
  • The matches against unspecified national databases and “repositories” are not based on foreseeable legal rules and are therefore not based on “law”;
  • The necessity and proportionality of matches against various simple, supposedly “suspicious” elements (tickets bought from a “suspicious” travel agent; “suspicious” travel route; etc.) is highly questionable; and
  • The matches against more complex “pre-determined criteria” and profiles are inherently and irredeemably flawed and lead to tens, perhaps hundreds of thousands of innocent travellers wrongly being labelled as persons who “may be” involved in terrorism or serious crime, and are therefore unsuited (D: ungeeignet) to the purpose of fighting terrorism and serious crime.

The hope must be that the Court will stand up for the rights of individuals, enforce the Charter of Fundamental Rights, and declare the PNR Directive (like the Data Retention Directive) to be fundamentally in breach of the Charter.

– o – O – o –

Executive Summary

This document summarises the analyses and findings in the full Opinion on the broader and core issues arising in the PNR Case currently before the CJEU (Case C-817/19), using the same headings and heading numbers. Please see the full opinion for the full analyses and extensive references. A one-page “at a glance” overview of the main findings and conclusions is also provided.

The opinion drew in particular on the following three documents, also mentioned in this Executive Summary:

– o – O – o –

  1. Introduction

In the opinion, after explaining, at 2, the broader context in which personal data are being processed under the PNR Directive, I try to assess whether the processing that the PNR Directive requires or allows is suitable, effective and proportionate to the aims of the directive. In making those assessments, I base myself on the relevant European human rights and data protection standards, summarised at 3.

NB: The opinion focusses on the system as it is designed and intended to operate, and on what it allows (even if not everything that may be allowed is [yet] implemented in all Member States), and less on the somewhat slow implementation of the directive in the Member States and on the technical aspects that the Commission report and the staff working document often focussed on. It notes in particular a number of elements or aspects of the directive and the system it establishes that are problematic, either conceptually or in the way they are supposed to operate or to be evaluated.

2. PNR in context

In the footsteps of the US and UK intelligence services (as revealed by Snowden), the EU Member States’ law enforcement agencies are increasingly using their access to bulk data – bulk e-communications data, financial data, PNR data, etc. – to “mine” the big data sets by means of sophisticated, self-learning algorithms and Artificial Intelligence (AI).

The European Union Agency for Law Enforcement Cooperation, Europol, has become increasingly involved in algorithm/AI-based data analysis (or at least in the research underpinning those technologies), and last year the Commission proposed to significantly further expand this role.

The processing of PNR data under the PNR Directive must be seen in these wider contexts: the clear and strengthening trend towards more “proactive”, “preventive” policing by means of analyses and algorithm/AI-based data mining of (especially) large private-sector data sets and databases; the increasingly central role played by Europol in this (and the proposal to expand that role yet further); the focusing on “persons of interest” against whom there is (as yet) insufficient evidence for action under the criminal law (including, in relation to Europol, persons against whom there is an “Article 36 alert” in its SIS II database); and the still increasing intertwining of law enforcement and national security “intelligence” operations in those regards.

Notably, “Article 36 SIS alerts” have been increasing, and in the Netherlands, in 2020, 82.4% of all PNR “hits” against the Schengen Information System, confirmed by the Dutch Passenger Information Unit established under the PNR Directive, were “hits” against “Article 36 alerts”.

Human rights-, digital rights- and broader civil society NGOs have strongly criticised these developments and warned of the serious negative consequences. Those concerns should be taken seriously, and be properly responded to.

3 Legal standards

General fundamental rights standards stipulate that all interferences with fundamental rights must be based on a “law” that meets the European “quality of law” standards: the law must be public, clear and specific, and foreseeable in its application; the interferences must be limited to what is “necessary” and “proportionate” to serve a “legitimate aim” in a democratic society; the relevant limitations must be set out in the law itself (and not left to the discretion of states or state authorities); and those affected by the interferences must be able to challenge them and have a remedy in a court of law. Generalised, indiscriminate surveillance of whole populations (such as all air passengers flying to or from the EU) violates the EU Charter of Fundamental Rights. A special exception to this prohibition, accepted by the EU Court of Justice in the La Quadrature du Net case, which allows EU Member States to respond to “serious”, “genuine and present or foreseeable” threats to “the essential functions of the State and the fundamental interests of society”, must be strictly limited in time and place: it cannot form the basis for the surveillance of large populations (such as all air passengers) generally, on a continuous, indefinite basis – that would turn the (exceptional) exception into the rule. Yet that is precisely what the PNR Directive provides for.

European data protection law expands on the above general principles in relation to the processing of personal data. The (strict) case-law of the CJEU and the European Court of Human Rights on data protection generally and generalised surveillance in particular are reflected in the European Data Protection Board’s European Essential Guarantees for surveillance (EEGs).

Processing of information on a person suggesting that that person “may be” involved in criminal activities is subject to especially strict tests of legitimacy, necessity and proportionality.

Contrary to assertions by the European Commission and representatives of EU Member States (inter alia, at the hearing in the PNR case in July 2021) that the processing under the PNR Directive has little or no effect on the rights and interests of the data subjects, the processing under the directive must under EU data protection law be classified as posing “high risks” to the fundamental rights and interests of hundreds of millions of airline passengers.

Under the Law Enforcement Directive (as under the GDPR), this means that the processing should be subject to careful evaluation of the risks and the taking of remedial action to prevent, as far as possible, any negative consequences of the processing – such as the creation of “false positives” (cases in which a person is wrongly labelled to be a person who “may be” involved in terrorism or serious crime). It also means that if it is not possible to avoid excessive negative consequences, the processing is “not fit for purpose” and should not be used.

Under the proposed Artificial Intelligence Act that is currently under consideration, similar duties of assessment and remedial action – or abandoning of systems – are to apply to AI-based processes.

4 The PNR Directive

4.1 Introduction

4.2 The system

Under the PNR Directive, special “Passenger Information Units” (PIUs) in each EU Member State match the data contained in so-called passenger name records (PNRs) that airlines flying into or from the EU have to provide to those units against supposedly relevant lists and databases, to both identify already “known” formally wanted persons or already “known” “persons of interest” who “may be” involved in terrorism or other serious crime, and to “identify” (i.e., label) previously “unknown” persons who “may be” involved in such activities by means of “risk analyses” and the identification of “patterns” and “profiles” based on the identified patterns (see below, at 4.7).

The opinion analyses and assesses all major elements of the system in turn.

4.3 The aims of the PNR Directive

In simple terms, the overall aim of the PNR Directive is to facilitate the apprehension of terrorists and individuals who are involved in terrorism or other serious transnational crime, including in particular international drug- and people trafficking.

However, the first aim of the checking of the PNR data by the PIUs is more limited than the aims of the directive overall; this is: to identify persons who require further examination by the competent authorities [see below, at 4.5], and, where relevant, by Europol [see below, at 4.11], in view of the fact [?] that such persons may be involved in a terrorist offence or serious crime. (Article 6(1)(a))

When there is a match of PNR data against various lists, i.e., a “hit” (see below, at 4.9), the PIU passes this “hit” on to certain “competent authorities” (see below, at 4.5) for “further examination”; if the initial “hit” was generated by automated means, this is only done after a manual review by PIU staff. In practice, about 80% of initial “hits” are discarded (see below, at 4.9).
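The hit-and-review flow just described can be sketched in a few lines. This is a simplified illustration only: the data structures, the criteria and the review function are assumptions made for the example, not the Directive’s own terms, though the manual-review step loosely mirrors Art. 6(5)’s requirement that automated matches be individually reviewed by non-automated means:

```python
# Simplified, illustrative sketch of the PIU flow: automated matching against
# watchlists and "pre-determined criteria", then manual review of any automated
# hit before it is passed on for "further examination".

def process_pnr(pnr, watchlists, predetermined_criteria, manual_review):
    """Return the PNR if a reviewed hit should go to the competent authorities."""
    automated_hit = any(pnr in db for db in watchlists) or \
                    any(criterion(pnr) for criterion in predetermined_criteria)
    if not automated_hit:
        return None
    # Automated hits must be individually reviewed by non-automated means.
    return pnr if manual_review(pnr) else None

# Illustrative run: one watchlist, no criteria, and a reviewer that discards
# the hit (as happens to roughly 80% of initial hits in practice).
hit = process_pnr("passenger-42", [{"passenger-42"}], [], lambda p: False)
print(hit)  # None
```

The point of the sketch is structural: everything before the review step is fully automated and applied to every passenger, so the volume of initial hits, and hence the review burden, is determined entirely by the matching rules.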

It is one of the main points of the opinion that the suitability, effectiveness and proportionality of the PNR Directive cannot and should not be assessed by reference to the number of initial “hits” noted by the PIUs, compared to the number of cases passed on for “further examination” to the competent authorities, but rather by reference to more concrete outcomes (as is done in section 5.2).

4.4 The Legal Basis of the PNR Directive

It appears obvious from the Court of Justice opinion on the Draft EU-Canada Agreement that the PNR Directive, like that draft agreement, should have been based on Articles 16 and 87(2)(a) TFEU, and not on Article 82(1) TFEU. It follows that the PNR Directive, too, appears to not have been adopted in accordance with the properly applicable procedure. That could lead to the directive being declared invalid on that ground alone.

4.5 The Competent Authorities

Although most competent authorities (authorities authorised to receive PNR data and the results of processing of PNR data from the PIUs) in the EU Member States are law enforcement agencies, “many Member States [have designated] intelligence services, including military intelligence services, as authorities competent to receive and request PNR data from the Passenger Information Unit”, and “in some Member States the PIUs are actually “embedded in … [the] state security agenc[ies]”.

Given the increasingly close cooperation between law enforcement agencies (and border agencies) and intelligence agencies, in particular in relation to the mining of large data sets and the development of evermore sophisticated AI-based data mining technologies by the agencies working together (and in future especially also with and through Europol), this involvement of the intelligence agencies (and in future, Europol) in PNR data mining must be seen as a matter of major concern.

4.6 The crimes covered (“PNR-relevant offences”)

The PNR Directive stipulates that PNR data and the results of processing of PNR data may only be used for a range of terrorist and other serious offences, as defined in Directive 2017/541 and in an annex to the PNR Directive, respectively (so-called “PNR-relevant offences”).

The processing under the PNR Directive aims to single out quite different categories of data subjects from this large base: on the one hand, it seeks to identify already “known” formally wanted persons (i.e., persons formally designated suspects under criminal [procedure] law, persons formally charged with or indicted for, or indeed already convicted of PNR-relevant offences) and already “known” “persons of interest” (but who are not yet formally wanted) by checking basic identity data in the PNRs against the corresponding data in “wanted” lists (such as “Article 26 alerts” in SIS II); and on the other hand, it seeks to “identify” previously “unknown” persons as possibly being terrorists or serious criminals, or “of interest”, on the basis of vague indications and probability scores. In the latter case, the term “identifying” means no more than labelling a person as a possible suspect or “person of interest” on the basis of a probability.

The opinion argues that any assessment of the suitability, effectiveness and proportionality of the processing must make a fundamental distinction between these different categories of data subjects (as is done in section 5).

4.7 The categories of personal data processed

An annex to the PNR Directive lists the specific categories of data that airlines must send to the database of the PIU of the Member State on the territory of which the flight will land or from the territory of which the flight will depart. This obligation is stipulated with regard to extra-EU flights but can be extended by each Member State to apply also to intra-EU flights – and all Member States but one have done so. The list of PNR data is much longer than the Advance Passenger Information (API) data that airlines must already send to the Member States under the API Directive, and includes information on travel agents used, travel routes, email addresses, payment (card) details, luggage, and fellow travellers. On the other hand, some basic details (such as date of birth) are often not included in the PNRs.

The use of sensitive data

The PNR Directive prohibits the processing of sensitive data, i.e., “data revealing a person’s race or ethnic origin, political opinions, religion or philosophical beliefs, trade union membership, health, sexual life or sexual orientation”. In the event that PNR data revealing such information are received by a PIU, they must be deleted immediately. Moreover, competent authorities may not take “any decision that produces an adverse legal effect on a person or significantly affects a person” on the basis of such data. However, PNR data can be matched against national lists and data “repositories” that may well contain sensitive data. Moreover, as noted at 4.9(f), below, the provisions in the PNR Directive do not really protect against discriminatory outcomes of the profiling that it encourages.

4.8 The different kinds of matches

(a) Matching of basic identity data in PNRs against the identity data of “known” formally wanted persons

PNR data are matched against SIS II alerts on “known” formally wanted persons (including “Article 26 alerts”) and against “relevant” national lists of “known” formally wanted persons.

This is usually done by automated means, followed by a manual review. The Commission reports that approximately 81% of all initial matches are rejected – and not passed on to competent authorities for further examination. Notably:

– the quality of the PNR data as received by the PIUs, including even of the basic identity data, is apparently poor and often “too limited”; this is almost certainly the reason for the vast majority of the 81% rejections;

– most of the long lists of PNR data are not needed for basic identity checks: full names, date of birth, gender and citizenship/nationality should suffice – and a passport or identity card number would make the match more reliable still. All those data are included in the API data, and all are included in optical character recognition format in the machine-readable travel documents (MRTD) that have been in wide use since the 1980s.

In other words, paradoxically, PNR data are both excessive for the purpose of basic identity checks (by containing extensive data that are not needed for such checks), and insufficient (“too limited”), in particular in relation to intra-Schengen flights (by not [always] including the dates of birth of the passengers).

– the lists against which the PNR data are compared, including in particular the SIS alerts and the EAW lists, but also many national lists, relate to many more crimes than those covered by the PNR Directive (“PNR-relevant offences”) – but in several Member States “hits” against non-PNR-relevant suspects (etc.) are still passed on to competent authorities, in clear breach of the purpose-limitation principle underpinning the directive.

In that respect, it should be noted that the Commission staff working document claims that in relation to situations in which the PNR data is “too limited” (typically, by not including date of birth), “[t]he individual manual review provided for in Article 6.5 of the PNR Directive protects individuals against the adverse impact of potential ‘false positives’” – but this is simply untrue: while a confirmed match of identity data relating to a person who is formally wanted for PNR-relevant offences can be regarded as a “positive” result of the identity check, a “hit” relating to a person who is wanted for non-PNR-relevant offences should of course not be regarded as a positive result under the PNR Directive.

(b) Matching of basic identity data in PNRs against the identity data of “known” “persons of interest”

In principle, the matching of basic identity data from PNRs against lists of basic identity data of “persons of interest” listed in the SIS system (and comparable categories in national law enforcement repositories), like the matching of data on formally wanted persons, should be fairly straightforward.

However, the PNRs in this regard suffer from the same two deficiencies as were discussed in relation to matches for formally wanted persons at (a), above: PNR data are both excessive for the purpose of basic identity checks (by containing extensive data that are not needed for such checks), and insufficient (“too limited”), in particular in relation to intra-Schengen flights (by not [always] including the dates of birth of the passengers). The third issue identified in the previous sub-section – that SIS alerts (and similar alerts in national law enforcement repositories) can relate to many more criminal offences than those that are “PNR-relevant” – also applies: many persons labelled “person of interest” will be so labelled in relation to non-PNR-relevant offences.

In my opinion, while a confirmed match of identity data relating to persons who are formally wanted in relation to (formally suspected of, charged with, or convicted of) PNR-relevant offences can be regarded as a “positive” result of an identity check, a “hit” relating to persons who are merely labelled “person of interest” should not be regarded as a positive result under the PNR Directive – certainly not if they are so labelled in relation to non-PNR-relevant offences, but also not if they are not implicated as being in any way culpable of PNR-relevant offences.

In my opinion, even confirmed “hits” confirming the identity of already listed “persons of interest” should not be regarded as “positive” results under the PNR Directive unless they result in those persons subsequently being formally declared to be formal suspects in relation to terrorist or other serious, PNR-relevant criminal offences.

(c) Matching of PNR Data against data on lost/stolen/fake credit cards and lost/stolen/fake identity or travel documents

The staff working document makes clear that PNR data are checked by “a large majority of PIUs” against Interpol’s Stolen and Lost Travel Document database as one “relevant database”. However, this is something of a residual check, because that database is also already made available to airlines through Interpol’s “I-Checkit” facility. Moreover:

Even leaving the issue of purpose-limitation aside, a “hit” against a listed lost/stolen/fake credit card or a lost/stolen/fake identity or travel document should still only be considered a “positive result” in terms of the PNR Directive if it results in a person subsequently being formally declared to be (at least) a formal suspect in relation to terrorist or other serious, PNR-relevant criminal offences.

(d) Matching of PNR data against other, unspecified, supposedly relevant (in particular national) databases

It is far from clear what databases can be – and in practice, in the different Member States, what databases actually are – regarded as “relevant databases” in terms of the PNR Directive: this is left to the Member States. At the July 2021 Court hearing, the representative of the Commission said that the data of Facebook, Amazon and Google could not be regarded as “relevant”, and that law enforcement databases (des bases policières) would be the most obvious “relevant” databases. But the Commission did not exclude matches against other databases with relatively “hard” data, such as databases with financial data (credit card data?) or telecommunications data (location data?).

The vagueness of the phrase “relevant databases” in Article 6(3)(a) and the apparently wide discretion granted to Member States to allow matching against all sorts of unspecified data sets is incompatible with the Charter of Fundamental Rights and the European Convention on Human Rights. It means that the application of the law is not clear or foreseeable to those affected – i.e., the provision is not “law” in the sense of the Charter and the Convention (and EU law generally) – and that the laws can be applied in a disproportionate manner.

In other words, even in relation to the basic checks on the basis of lists of “simple selectors”, the PNR Directive does not ensure that those checks are based on clear, precise, and in their application foreseeable Member State laws, or that those laws are only applied in a proportionate manner. In the terminology of the European Court of Human Rights, the directive does not protect individuals against arbitrary interferences with the rights to privacy and protection of personal data.

(e) Matching of PNR data against lists of “suspicious travel agents”, “suspicious routes”, etc.

The staff working document repeatedly refers to checks of PNR data against “patterns” such as tickets being bought from “suspicious” travel agents; the use of “suspicious” travel routes; passengers carrying “suspicious” amounts of luggage (and the Dutch evaluation report even mentions that a person wearing a suit and hastening through customs [while being black] was regarded by customs authorities as fitting a “suspicious” pattern). No proper prosecuting or judicial authority could declare a traveller to be a formal suspect – let alone charge, prosecute or convict a traveller – on the basis of a match against such simple “suspicious” elements alone. In my opinion:

For the purpose of evaluating the suitability, effectiveness and proportionality of the PNR Directive (and of the practices under the directive), a simple “hit” against these vague and far-from-conclusive factors or “criteria” should not be regarded as a “positive” result. Rather, a “hit” against such vague “criteria” as the purchase of an air ticket from a “suspicious” travel agent, or the use of a “suspicious” route, or the carrying of a “suspicious” amount of luggage – let alone “walking fast in a suit (while being black)” – should again only be considered a “positive result” in terms of the PNR Directive if it results in a person subsequently being formally declared to be (at least) a formal suspect in relation to terrorist or other serious, PNR-relevant criminal offences.

(f) Matching of data in the PNRs against more complex “pre-determined criteria” or profiles

(fa)      Introduction

Under the PNR Directive, PIUs may, in the course of carrying out their assessment of whether passengers “may be involved in a terrorist offence or [other] serious crime”, “process PNR data against pre-determined criteria”. As also noted by the EDPS, it is clear that the PNR data can be matched against “patterns” discerned in previous data and against “profiles” of possible terrorists and serious criminals created on the basis of these patterns, that are more complex than the simple patterns discussed at (e), above. This is also undoubtedly the direction in which searches for terrorists and other serious criminals are moving.

(fb)      The nature of the “pre-determined criteria”/“profiles”

The EU and EU Member State agencies are applying, or are poised to apply, increasingly sophisticated data mining technologies such as are already used by the UK (and US) agencies. This involves self-learning, AI-based algorithms that are constantly dynamically re-generated and refined through loops linking back to earlier analyses. The software creates constantly self-improving and refining profiles against which it matches the massive amounts of data – and in the end, it produces lists of individuals that the algorithm suggests may (possibly or probably) be terrorists, or associates of terrorists or other serious criminals. It is the stated policy of the EU to accelerate the development and deployment of these sophisticated technologies, under the guidance of Europol.

Whatever the current level of use of such sophisticated techniques in law enforcement and national security contexts in the Member States (as discussed at (fd), below), if the PNR Directive is upheld as valid in its current terms, nothing will stand in the way of the ever-greater deployment of these more sophisticated (but flawed) technologies in relation to air passengers. That would also pave the way to yet further use of such (dangerous) data mining and profiling in relation to other large population sets (such as all users of electronic communications, or of bank cards).

(fc)      The creation of the “pre-determined criteria”/“profiles”

Given (a) the increasingly sophisticated surveillance and data analysis/data mining/risk assessment technologies developed by the intelligence services of the EU Member States (often drawing on US and UK experience) and now also by law enforcement agencies, and (b) the clear role assigned to Europol in this respect, it would appear clear that a cadre of data mining specialists is being developed in the EU – and that the PNR data are one of the focus areas for this work. In other words, the “pre-determined criteria” – or AI-based algorithms – that are to be used in the mining of the PNR data are being developed not solely by or within the PIUs, but by this broader cadre, which draws in particular on intelligence experts (some of whom may be embedded in the PIUs). Between them, the PNR databases are (also) a test laboratory for data mining/profiling technologies. And (c) there is nothing in the PNR Directive that stands in the way of using data other than PNR data in the creation of “pre-determined criteria”, or indeed in the way of using profiles developed by other agencies (including intelligence agencies) as “pre-determined criteria” in the PIU analyses.

(fd)      The application of the more complex “pre-determined criteria”/“profiles” in practice

It would appear that to date, few Member States are as yet using data mining in relation to PNR data in as sophisticated a way as described in sub-section (fb), above (or at least acknowledge such uses).

However, in a range of EU Member States algorithm/AI-based profiling is already in use in relation to broader law enforcement (and especially crime prevention). Moreover, the aim of the Commission and the Member States is expressly to significantly expand this use, with the help of Europol and its Travel Intelligence Task Force, and through “training on the development of pre-determined criteria” in “an ongoing EU-funded project, financed under the ISF-Police Union Actions.”

This merely underlines the point I made in the previous sub-sections: that the PNR database is being used as a test laboratory for advanced data mining technologies, and that if the PNR Directive is upheld as valid in its current terms, nothing will stand in the way of the ever-greater deployment of these more sophisticated (but flawed) technologies in relation to air passengers, and others. The fact that sophisticated data mining and profiling is said to not yet be in widespread operational use in most Member States should not be a reason for ignoring this issue – on the contrary: this is the desired destination of the analyses.

(fe)      The limitations of and flaws in the technologies

There are three main problems with algorithmic data mining-based detection of rare phenomena (such as terrorists and serious criminals in a general population):

– The base-rate fallacy and its effect on false positives:

In very simple layperson’s terms, the base-rate fallacy means that if you are looking for very rare instances or phenomena in a very large dataset, you will inevitably obtain a very high percentage of false positives in particular – and this cannot be remedied by adding more or somehow “better” data: by adding hay to a haystack.

As noted above, at 4.7, a very rough guess would be that on average the 1 billion people counted by Eurostat as flying to or from the EU relate to 500 million distinct individuals. In other words, the base rate for PNR data can be reasonably assumed to be in the region of 500 million.

The Commission reports that there are initial “hits” in relation to 0.59% of all PNRs, while 0.11% of all PNRs are passed on as confirmed “hits” to competent authorities for “further examination”. The Commission report and the staff working document appear to imply – and certainly do nothing to refute – that the 0.11% of all confirmed “hits” that are passed on to competent authorities are all “true positives”. However, that glaringly fails to take account of the base rate, and its impact on results.

Even if the PNR checks had a failure rate of just 0.1% (meaning that (1) in relation to persons who are actually terrorists or serious criminals, the PIUs will rightly confirm this as a proper “hit” 99.9% of the time, and fail to do so 0.1% of the time and (2) in relation to persons who are not terrorists, the PIUs will rightly not generate a confirmed “hit” 99.9% of the time, but wrongly register the innocent person as a confirmed “hit” 0.1% of the time) the probability that a person flagged by this system is actually a terrorist would still be closer to 1% than to 99%. In any case, even if the accuracy rate of the PNR checks were to be as high as this assumed 99.9% (which of course is unrealistic), that would still lead to some 500,000 false positives each year.
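The base-rate arithmetic above can be sketched in a few lines. The 500 million base rate and the assumed 0.1% error rate are taken from the text; the number of actual terrorists/serious criminals in the population (5,000) is a purely illustrative assumption, since no real figure is available:

```python
# Illustrative sketch of the base-rate fallacy described above.
# Only the 500 million base and the 0.1% error rate come from the
# opinion; the figure of 5,000 actual targets is an assumption.

base = 500_000_000            # estimated distinct air passengers per year
actual = 5_000                # ASSUMED number of real targets in that base
sensitivity = 0.999           # assumed: 99.9% of real targets are flagged
false_positive_rate = 0.001   # assumed: 0.1% of innocents wrongly flagged

true_pos = actual * sensitivity
false_pos = (base - actual) * false_positive_rate

# Probability that a flagged person is actually a target (Bayes):
posterior = true_pos / (true_pos + false_pos)

print(f"false positives per year: {false_pos:,.0f}")   # ~500,000
print(f"P(actual target | flagged): {posterior:.1%}")  # ~1%, not ~99%
```

Even with an unrealistically generous 99.9% accuracy, the rarity of real targets in the base population means roughly 99 out of every 100 flagged passengers are innocent – which is the point the Commission documentation fails to address.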

Yet the Commission documentation is silent about this.

– Built-in biases:

The Commission staff working document claims that, because the “pre-determined criteria” that are used in algorithmic profiling may not be based on sensitive data, “the assessment cannot be carried out in a discriminatory manner” and that “[t]his limits the risk that discriminatory profiling will be carried out by the authorities.” This is simply wrong.

In simple terms: since “intimate part[s] of [a person’s] private life” can be deduced, or at least inferred, from seemingly innocuous information – such as data included in PNRs (in particular if matched against other data) – those “intimate aspects” are not “fully protected by the processing operations provided for in the PNR Directive”. Indeed, in a way, the claim to the contrary is absurd: the whole point of “risk analysis” based on “pre-determined criteria” is to discover unknown, indeed hidden matters about the individuals who are being profiled: inferring from the data on those people, on the basis of the application of those criteria, that they are persons who “may be” involved in terrorism or other serious crimes surely is a deduction of an “intimate aspect” of those persons (even if it is not specifically or necessarily a sensitive datum in the GDPR sense – although if the inference was that a person “might be” an Islamist terrorist, that would be a [tentatively] sensitive datum in the strict sense). Moreover, even without specifically using or revealing sensitive information, the outcomes of algorithmic analyses and processing, and the application of “abstract”, algorithm/AI-based criteria to “real” people can still lead to discrimination.

The PNR Directive stipulates that the assessments of passengers prior to their scheduled arrival in or departure from a Member State, carried out with the aim of identifying persons who require further examination by the competent authorities, “shall be carried out in a non-discriminatory manner”. However, this falls considerably short of stipulating: (i) that the “pre-determined criteria” (the outputs of the algorithms) must not be biased in some way; and (ii) that measures must be taken to ensure that the outcomes of the assessments are not discriminatory. It is important to address both those issues (as explained in a recent EDRi/TU Delft report).

Given that profile-based matching to detect terrorists and other serious criminals is inherently “high risk” (as noted at 3, above and further discussed at 5, below), it requires an in-depth Data Protection Impact Assessment under EU data protection law, and indeed a broader human rights impact assessment. The need for serious pre-evaluation of algorithms to be used in data mining, and for continuous re-evaluation throughout their use, is also stressed in various paragraphs of the recent Council of Europe recommendation on profiling. The proposed AI Act also requires this.

However, no serious efforts have been made by the European Commission or the EU Member States to fulfil these duties. Neither has ensured that the full, appropriate basic information required for such serious ex ante and ex post evaluations is even sought or recorded.

In sum: the European Commission and the EU Member States have not ensured that in practice the processing of the PNR data, and the linking of those data to other data (databases and lists), does not have discriminatory outcomes. The mere stipulation that outputs of algorithmic/AI-based profiling should not be “solely based on” sensitive aspects of the data subjects (the airline passengers) falls far short of ensuring compliance with the prohibition of discrimination.

– Opacity and unchallengeability of decisions:

In the more developed “artificial intelligence” or “expert” systems, the computers operating the relevant programmes create feedback loops that continuously improve the underlying algorithms – with almost no-one in the end being able to explain the results: the analyses are based on underlying code that cannot be properly understood by many who rely on them, or even expressed in plain language. This makes it extremely difficult to provide for serious accountability in relation to, and redress against, algorithm-based decisions generally. Profiling thus poses a serious threat of a Kafkaesque world in which powerful agencies take decisions that significantly affect individuals, without those decision-makers being able or willing to explain the underlying reasoning for those decisions, and in which those subjects are denied any effective individual or collective remedies.

That is how serious the issue of profiling is: it poses a fundamental threat to the most basic principles of the Rule of Law and the relationship between the powerful and the people in a democratic society. Specifically in relation to PNR:

– PIU staff cannot challenge algorithm-based computer outputs;

– The staff of the competent authorities are also unlikely (or indeed also effectively unable) to challenge the computer output; and

– Supervisory bodies cannot properly assess the systems.

External supervisory bodies such as Member States’ data protection supervisory authorities will generally not be given access to the underlying data, cannot review the algorithms at the design stage or at regular intervals after deployment and in any case do not have the expertise. Internal bodies are unlikely to be critical and may involve the very people who design the system (who write the code that provides the [dynamic] algorithm). The report on the evaluation of the Dutch PNR Law noted that under that law (under which the algorithms/profiles are supposed to be checked by a special commission):

The rules [on the creation of the pre-determined criteria] do not require the weighing [of the elements] or the threshold value [for regarding a “hit” against those criteria to be a valid one] to meet objective scientific standards.

This is quite astonishing: it is an acknowledgement that the algorithm/AI-based profiles are essentially unscientific. In my opinion, this fatally undermines the way the pre-determined criteria are created and “tested” in the Netherlands. Yet at the same time, the Dutch system, with this “special commission”, is probably better than what is in place in most other EU Member States. This surely is a matter that should be taken into account in any assessment of the PNR system EU-wide – including the assessment that is shortly to be made by the Luxembourg Court.

In sum:

– because the “base rate” for the PNR data mining is so high (in the region of 500 million people) and the incidence of terrorists and serious criminals within this population so relatively low, algorithm/AI-based profiling is likely to result in tens of thousands of “false positives”: individual air passengers who are wrongly labelled as persons who “may be” involved in terrorism or other serious crime;

– the provisions in the PNR Directive that stipulate that no sensitive data may be processed, and that individual decisions and matches may not be “solely based on” sensitive aspects of the individuals concerned do not protect those individuals from discriminatory outcomes of the profiling;

– the algorithm/AI-based outcomes of the processing are almost impossible to challenge because those algorithms are constantly dynamically changed (“improved” through self-learning) and therefore in effect impossible to fully comprehend even by those carrying out the analyses/risk assessments; and

– the outputs and outcomes of the algorithm/AI-based profiling, data mining and matching are not subject to proper scientific testing or auditing, and are extremely unlikely to be made subject to such testing and auditing.

4.9 Direct access to PNR data by EU Member States’ intelligence agencies

It appears that at least in the Netherlands, the national intelligence agencies are granted direct access to the bulk PNR database, without having to go through the PIU (or at least without this being properly recorded). If the Dutch authorities were to argue that such direct access to data by the Dutch intelligence agencies is outside EU law, they would be wrong. Specifically, in its LQDN judgment, the CJEU held that the rules on personal data processing operations by entities that are, in that processing, subject to EU data protection law (in that case, providers of electronic communication services, who are subject to the e-Privacy Directive), including processing operations by such entities resulting from obligations imposed on them (under the law) by Member States’ public authorities (in that case, for national security purposes) can be assessed for their compatibility with the relevant EU data protection instrument and the Charter of Fundamental Rights.

In my opinion, if the Dutch intelligence and security agencies do indeed have direct access to the PNR database, without having to go through the Dutch PIU (the Pi-NL), or without that being recorded – as appears to be pretty obviously the case – that is in direct breach of the PNR Directive, of the EU data protection instruments, and of the EU Charter of Fundamental Rights.

Whether the EU data protection instruments and the PNR Directive are similarly circumvented in other EU Member States, I do not know. Let me just recall that in several Member States, the PIU is “embedded in … [the] state security agenc[ies]”. However, the Dutch example shows how dangerous, in a democratic society, the accruing of such bulk databases is.

4.10 Dissemination and subsequent use of the data and purpose-limitation

(a) Spontaneous provision of PNR data and information on (confirmed) “hits”

In principle, subject only to a “relevant and necessary” requirement in relation to transmissions to the other PIUs, confirmed “hits” can be very widely shared across all the EU Member States, both between the PIUs and, via the PIUs, with any “competent authority” in any Member State (including intelligence agencies where those are designated as such: see at 4.5, above).

(aa)     Spontaneous provision of information to domestic competent authorities on the basis of matches against lists and databases (including SIS II)

The Commission staff working report gives no insight into the actual scope of spontaneous dissemination of PNR data or “results of the processing” of PNR data by the PIUs on the basis of (confirmed) “hits” to competent authorities in the PIUs’ own countries.

The report on the evaluation of the Dutch PNR Law suggests that, in that country, spontaneous provisions of PNR data to Dutch authorities “for further examination” are still effectively limited to (confirmed) matches against the SIS II database, and indeed to matches against the alerts listed in Articles 26 and 36 of the Council Decision establishing that database (respectively, alerts for persons wanted for arrest for extradition, and alerts relating to people or vehicles requiring discreet checks). The Dutch SIS II matches amounted to roughly 10 in every 100,000 passengers (2:100,000 “Article 26” matches and 8:100,000 “Article 36” matches).

If the Dutch statistics of 10:100,000 and 82.4% are representative of the overall situation in the EU, this would mean that each year, out of the 500 million passengers on whom PNR data are collected, approximately 50,000 passengers are subjected to “further examination” on the basis of a SIS II match, 40,000 of whom relate to “Article 36 alerts”, i.e., to “persons of interest” who are not (yet) formally wanted in relation to any crime (let alone a PNR-relevant one).
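
The extrapolation above can be reproduced with a short calculation (a sketch using only the figures quoted in the text; the variable names are mine, and the assumption that the Dutch rates hold EU-wide is, as stated, just that):

```python
# Extrapolation of the Dutch SIS II match rates to the EU as a whole,
# using only the figures quoted in the text (an assumption, not official statistics).

annual_passengers = 500_000_000   # passengers on whom PNR data are collected each year
match_rate = 10 / 100_000         # Dutch SIS II matches per passenger
art26_share = 2 / 10              # share of matches that are "Article 26" alerts
art36_share = 8 / 10              # share of matches that are "Article 36" alerts

total_matches = annual_passengers * match_rate
art26_matches = total_matches * art26_share
art36_matches = total_matches * art36_share

print(f"SIS II matches per year (extrapolated): {total_matches:,.0f}")   # 50,000
print(f"  of which Article 36 alerts:           {art36_matches:,.0f}")   # 40,000
print(f"  of which Article 26 alerts:           {art26_matches:,.0f}")   # 10,000
```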

But of course, there are also (confirmed) “hits” on other bases (including on the basis of “pre-determined criteria” and matches resulting from requests for information) – and other countries may also match against more than just Article 26 and Article 36 alerts on SIS II.

(ab)     Spontaneous provision of information to other PIUs on the basis of matches against lists and databases (including SIS II)

It would appear that, until now, in practice, information – including information on matches against SIS II alerts – is only rarely spontaneously shared between PIUs.

However, the clear aim of the Commission is to significantly increase the number of spontaneous transmissions of PNR data and of information on (confirmed) “hits” against SIS II (or against pre-determined criteria: see below) between PIUs, and via PIUs to competent authorities in other EU Member States (again including intelligence agencies in Member States where those are designated as such).

(ac)     Spontaneous provision of information to domestic competent authorities and to other PIUs on the basis of matches against pre-determined criteria

It would appear that matching of PNR data against pre-determined criteria – and consequently also the spontaneous informing of competent authorities of (confirmed) “hits” against such criteria – is still extremely rare in the EU Member States. However, the aim is for the use of such criteria to be greatly expanded.

(ad)     Spontaneous provision of “results of processing” of PNR data other than information on matches against lists or databases (such as SIS II) or pre-determined criteria

The spontaneous sharing of new or improved criteria is more likely to occur within the data mining cadre that is being formed (see above, at 4.9(fc)), rather than done through exchanges between PIUs. But that of course does not mean that it will not occur – on the contrary, the aim is clearly to extend the use of pre-determined criteria, and for the EU Member States to cooperate much more closely in the development and sharing of those criteria, specifically through a much-enhanced role for Europol.

(b) Provision of PNR data and analysis data to competent authorities, other PIUs or Europol on request

(ba)     Provision of information to domestic competent authorities at the request of such authorities

In relation to the provision of information by the PIUs to their domestic competent authorities at the latter’s request, the relevant national rules apply. The Commission staff working document provides no information whatsoever on the extent to which this option is used beyond saying that the numbers are increasing. In the Netherlands, some procedural safeguards are established to seek to ensure that requests are only made in appropriate cases, and in particular only in relation to PNR-relevant offences. Whether other Member States impose procedural safeguards such as prior authorisation of requests from certain senior officials, I do not know. The PNR Directive does not require them (it leaves this to the laws of the Member States) and the Commission staff working report does not mention them.

(bb)     Provision of information to competent authorities of other EU Member States at the request of such authorities

The Commission claims that provision of PNR data at the request of competent authorities of other EU Member States is one part of the PNR system that operates well. However, the Commission staff working report suggests that there are problems, in particular in relation to compliance with the purpose-limitation principle underpinning the PNR Directive: see below, at (d).

Moreover, if the Dutch data are anything to go by, it would appear that the vast majority of requests for PNR data come from the national authorities of the PIU’s own country: in the Netherlands, in 2019-20, there were 3,130 requests from national authorities, against just 375 requests from other PIUs and authorities in other EU Member States. This rather qualifies the Commission claim that “the exchange of data between the Member States based on requests functions in an effective manner” and that “[t]he number of requests has grown consistently”. Both statements could be true, but the actual total numbers of requests from other Member States may still be extremely low (for now), at least in comparison with the number of requests the PIUs receive from their own national authorities.

(bc)     Provision of information to Europol at the latter’s request

The Commission staff working document does not provide any information on the number of requests made by Europol, or on the responses to such requests from the PIUs. The report on the evaluation of the Dutch PNR law notes that within Europol there appear to be no procedural conditions or safeguards relating to the making of requests (such as the safeguard that requests from Dutch authorities must be checked by a Dutch prosecutor (OvJ)).

If the Dutch data are anything to go by, it would appear that there are in fact very few requests for information from Europol: in that country, the PIU only received 32 such requests between June 2019 and the end of 2020, i.e., less than two a month. But if Europol is to be given a much more central role in the processing of PNR data, especially in the matching of those data against more sophisticated pre-determined criteria (with Europol playing the central role in the development of those more sophisticated criteria, as planned), the cooperation between the Member States’ PIUs and Europol, and the sharing of PNR data and data on “hits”, is certain to greatly expand.

(c) Transfer of PNR data to third countries on a case-by-case basis.

The transfer of PNR data by the Member States to countries outside the EU is only allowed on a case-by-case basis and only when necessary for fighting terrorism and serious crime, and PNR data may be shared only with public authorities that are competent for combating PNR-relevant offences. Moreover, the DPO of the relevant PIU must be informed of all such transfers.

However, the Commission reports that four Member States have failed to fully transpose other conditions provided for by the Directive relating to the purposes for which the data can be transferred or the authorities competent to receive it, and two do not require the informing of the DPO.

It is seriously worrying that several Member States do not adhere to the conditions and safeguards relating to transfers of PNR data (and of “the results of processing” of PNR data – which can include the fact that there was a “hit” against lists or criteria) to third countries that may not have adequate data protection rules (or indeed other relevant rule-of-law-compliant rules) in place. Some of the (unnamed) Member States that do not comply with the PNR Directive in this regard are likely to pass on such data in breach of the Directive (in particular, without ensuring that the data are only used in the fight against terrorism and serious crime) to close security and political allies such as the ones that make up the “Five Eyes” intelligence group: the USA, the UK, Australia, Canada and New Zealand.

This concern is especially aggravated in relation to the USA, which the Court of Justice has now held several times to not provide adequate protection to personal data transferred to it from the EU, specifically because of its excessive mass surveillance (and there are similar concerns in relation to the UK, in spite of the Commission having issued an adequacy decision in respect of that country).

Moreover, neither the Commission staff working document nor the Dutch report provides any information on how it is – or indeed can be – guaranteed that data provided in response to a request from a third country are really only used by that third country in relation to PNR-relevant offences, or how this is – or indeed can be – monitored.

For instance, if data are provided to the US Federal Bureau of Investigation (FBI) in relation to an investigation into suspected terrorist activity, those data will also become available to the US National Security Agency (NSA), which may use them in relation to much broader “foreign intelligence purposes”. That issue of course arises in relation to provision of information from any EU Member State to any third country that has excessive surveillance laws.

Furthermore, if I am right to believe that the Dutch intelligence agencies have secret, unrecorded direct access to the PNR database (see above, at 4.10), they may also be sharing data from that database more directly with intelligence partners in other countries, including third countries, bypassing the whole PNR Directive system. Neither the Commission staff working document nor the report on the evaluation of the Dutch PNR law addresses this issue. And that issue, too, may well arise also in relation to other EU Member States.

(d) Subsequent use of the data and purpose-limitation

In principle, any information provided by the PIUs to any other entities, at home or abroad, or to Europol, is to be used by any recipient only for the prevention, detection, investigation and prosecution of terrorist offences and serious crime, more specifically for the prevention, detection, investigation and prosecution of PNR-relevant offences.

But it has become clear that this is far from assured in practice:

– because of the dilemma faced by PIUs in some EU Member States caused by the duty of any agency to pursue any offence that comes to their attention, the PIUs in some Member States pass on information also on (confirmed) “hits” relating to non-PNR-relevant offences (both spontaneously and in response to requests), and those data are then used in relation to the prevention, detection, investigation and prosecution of those non-PNR-relevant offences;

– in the Netherlands (and probably other Member States), once information is provided to a domestic competent authority, those data enter the databases of that authority (e.g., the general police databases) and will be subject to the legal regime that applies to the relevant database – which means that there is no guarantee that their subsequent use is in practice limited to PNR-relevant offences;

– when PNR data are provided by a PIU of one Member State to a PIU of another Member State (or to several or all of the other PIUs), they are provided subject to the purpose-limitation principle of the PNR Directive – but if those data are then provided by the recipient PIU(s) to competent authorities in their own countries, the same problems arise as noted in the previous indents;

– Member States take rather different views of what constitute PNR-relevant offences, and some make “broad and unspecified requests to many (or even all Passenger Information Units)” – suggesting that in this regard, too, the purpose-limitation principle is not always fully adhered to;

– within Europol there appear to be no procedural conditions or safeguards relating to the making of requests for PNR data from PIUs (such as the safeguard that requests from Dutch authorities must be checked by a Dutch prosecutor), and the Commission staff report does not indicate whether all the PIUs check whether Europol requests are strictly limited to PNR-relevant offences (or if they do, how strict and effective those checks are);

– “four Member States have failed to fully transpose … [the] conditions provided for by the Directive relating to the purposes for which [PNR data] can be transferred [to third countries] or [relating to] the authorities competent to receive [such data]”;

– neither the Commission staff working document nor the Dutch report provides any information on how it is – or indeed can be – guaranteed that data provided in response to a request from a third country are really only used by that third country in relation to PNR-relevant offences, or how this is – or indeed can be – monitored;

and

– if I am right to believe that the Dutch intelligence agencies have secret, unrecorded direct access to the PNR database, they may also be sharing data from that database more directly with intelligence partners in other countries, including third countries, bypassing the whole PNR Directive system. Neither the Commission staff working document nor the report on the evaluation of the Dutch PNR law addresses this issue. And that issue, too, may well arise also in relation to other EU Member States.

In sum: There are major deficiencies in the system as concerns compliance, by the EU Member States, by Europol, and by third countries that may receive PNR data on a case-by-case-basis, with the fundamental purpose-limitation principle underpinning the PNR Directive, i.e., with the rule that any PNR data (or data resulting from the processing of PNR data) may only be used – not just by the PIUs, but also by any other entities that may receive those data – for the purposes of the prevention, detection, investigation and prosecution of PNR-relevant offences. In simple terms: in this respect, the PNR system leaks like a sieve.

4.11 The consequences of a “match”

It is quite clear from the available information that confirmed “hits” and the associated PNR data on at the very least tens of thousands and most probably several hundred thousand innocent people are passed on to law enforcement (and in many cases, intelligence agencies) of EU Member States and to Europol – and in some cases to law enforcement and intelligence agencies of third countries – for “further examination”. Many of those data – many of those individuals – will end up in miscellaneous national databases as data on “persons of interest”, and/or in the SIS II database as “Article 36 alerts”. They may even end up in similar databases or lists of third countries.

In terms of European human rights and data protection law, even the supposedly not-very-intrusive measures such as “only” being made the object of “discreet checks” constitute serious interferences with the fundamental rights of the individuals concerned – something that the European Commission and several Member States studiously avoided acknowledging at the Court hearing. More intrusive measures such as being detained and questioned or barred from flying of course constitute even more serious interferences. Both kinds require significant justification in terms of suitability, effectiveness and proportionality – with the onus of proof lying squarely on those who want to impose or justify those interferences, i.e., in casu, the European Commission and the Member States.

Moreover, in practice “watch lists” often become “black lists”. History shows that people – innocent people – will suffer if there are lists of “suspicious”, “perhaps not reliable”, “not one of us” people lying around, and not just in dictatorships.

That is yet another reason why those who argue in favour of such lists – and that includes “Article 36 alerts” and other lists of “persons of interest” “identified” on the basis of flimsy or complex criteria or profiles – bear a heavy onus to prove that those lists are absolutely necessary in a democratic society, and that the strongest possible measures are in place to prevent such further slippery uses of the lists.

5. The suitability, effectiveness and proportionality of the processing

5.1 The lack of data and of proof of effectiveness of the PNR Directive

Neither the European Commission’s review nor the Dutch evaluation has come up with serious, measurable data showing that the PNR Directive and the PNR law are effective in the fight against terrorism or serious crime.

The Dutch researchers at least tried to find hard data, but found that in many crucial respects no records were kept that could provide such data. At most, some suggestions for better recording were made, and some ideas are under consideration, to obtain better data (although the researchers also noted that some law enforcement practitioners thought it would be too much effort).

To date, neither the Commission nor the Member States (including the Netherlands) have seriously tried to design suitable, scientifically valid methods and methodologies of data capture (geeignete Formen der Datenerfassung) in this context. Given that the onus is clearly on them to demonstrate – properly, scientifically demonstrate, in a peer-reviewable manner – that the serious interferences with privacy and data protection they insist on perpetrating are effective, this is a manifest dereliction of duty.

The excuse for not doing this essential work – that it would be too costly or demanding of law enforcement time and staff – is utterly unconvincing, given the many millions of euros that are being devoted to developing the “high risk” intrusive technologies themselves.

5.2 An attempt at an assessment

(a) The appropriate tests to be applied

(aa)     The general tests

In my opinion, the appropriate tests to be applied to mass surveillance measures such as are carried out under the PNR Directive (and were carried out under the Data Retention Directive, and are still carried out under the national data retention laws of the EU Member States that continue to apply in spite of the CJEU case-law) are:

Have the entities that apply the mass surveillance measure – i.e., in the case of the PNR Directive (and the DRD), the European Commission and the EU Member States – produced reliable, verifiable evidence:

(i) that those measures have actually, demonstrably contributed significantly to the stated purpose of the measures, i.e., in relation to the PNR Directive, to the fight against PNR-relevant crimes (and, in relation to the DRD, to the fight against “serious crime as defined by national law”); and

(ii) that those measures have demonstrably not seriously negatively affected the interests and fundamental rights of the persons to whom they were applied?

If the mass surveillance measures do not demonstrably pass both these tests, they are fundamentally incompatible with European human rights and fundamental rights law.

This means the measures must be justified, by the entities that apply them, on the basis of hard, verifiable, peer-reviewable data.

(ab)     When a (confirmed) “hit” can be said to constitute a “positive” result (and when not)

In the context of collecting and assessing data, it is important to clarify when a (confirmed) “hit” can be said to constitute a “positive” result (and when not).

In my opinion, confirmed “hits” confirming the identity of “known” “persons of interest”/subjects of “Article 36 alerts” and the “identification” (labelling) of previously “unknown” persons by the PIUs as “persons who may be involved in terrorism or serious crime” can only be regarded as “positive” results under the PNR Directive if they result in those persons subsequently being formally declared to be formal suspects in relation to terrorist or other serious, PNR-relevant criminal offences.

(b) The failure of the European Commission (and the Dutch government) to meet the appropriate test

The conclusion reached by the European Commission and the Dutch Minister of Justice – that overall the PNR Directive, respectively the Dutch PNR law, had been “effective” because the EU Member States said so (Commission) or because PNR data were quite widely used and the competent authorities said so (Dutch Minister) – is fundamentally flawed, given that it was reached in the absence of any real supporting data.

It is equivalent to a snake oil salesman claiming that the effectiveness of his snake oil is proven by the fact that his franchise holders agree with him that the product is effective, or by the fact that many gullible people bought the stuff.

Or to use the example of Covid vaccines, invoked by the judge-rapporteur: it is equivalent to a claim that a vaccine is effective because interested parties say it is, or because many people had been vaccinated with the vaccine – without any data on how many people were protected from infection or, perhaps worse, how many people suffered serious side-effects.

At the very least, the competent authorities in the EU Member States should have been required to collect, in a systematic and comparable way, reliable information on the outcomes of the passing on of (confirmed) “hits”. Given that they have not done so – and that the Commission and the Member States have not even tried to establish reliable systems for this – there is no insight into how many of the (confirmed) “hits” actually, concretely contributed to the fight against PNR-relevant offences.

(c) An attempt to apply the tests to the different types of matches

However, the following can still usefully be observed as regards the lawfulness, suitability, effectiveness and proportionality of the different kinds of matches:

– Full PNR data are disproportionate to the purpose of basic identity checks;

– The necessity of the PNR checks against Interpol’s Stolen and Lost Travel Document database is questionable;

– The matches against unspecified national databases and “repositories” are not based on foreseeable legal rules and are therefore not based on “law”;

– The necessity and proportionality of matches against various simple, supposedly “suspicious” elements (tickets bought from a “suspicious” travel agent; “suspicious” travel route; etc.) is highly questionable; and

– The matches against more complex “pre-determined criteria” and profiles are inherently and irredeemably flawed and lead to tens and possibly hundreds of thousands of innocent travellers wrongly being labelled as persons who “may be” involved in terrorism or serious crime, and are therefore unsuited (D: ungeeignet) for the purpose of fighting terrorism and serious crime.
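
The mathematically inevitable mislabelling referred to in the last indent follows from the base-rate problem. The following sketch uses illustrative, assumed numbers (a profile that flags 99.9% of true targets with only a 0.1% false-positive rate, and 1 in 10,000 travellers actually “of interest” – none of these figures come from the opinion or the Commission documents); even under such generous assumptions, the flagged population is overwhelmingly innocent:

```python
# Base-rate illustration for profiling of air passengers.
# All rates below are illustrative assumptions, not figures from the opinion.

passengers = 500_000_000          # annual passengers screened (figure used in the text)
base_rate = 1 / 10_000            # assumed: 1 in 10,000 passengers truly "of interest"
sensitivity = 0.999               # assumed: profile flags 99.9% of true targets
false_positive_rate = 0.001       # assumed: profile wrongly flags 0.1% of innocents

targets = passengers * base_rate
innocents = passengers - targets

true_positives = targets * sensitivity
false_positives = innocents * false_positive_rate

# Precision: of all passengers flagged, what share is actually "of interest"?
precision = true_positives / (true_positives + false_positives)

print(f"Innocent passengers wrongly flagged per year: {false_positives:,.0f}")
print(f"Share of flagged passengers truly 'of interest': {precision:.1%}")
```

Even with this hypothetically near-perfect profile, roughly half a million innocent travellers would be flagged each year, and more than nine out of ten flagged passengers would be innocent – which is the point made in the indent above.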

5.3 Overall conclusions

The PNR Directive and the generalised, indiscriminate collection of personal data on an enormous population – all persons flying to or from, and the vast majority of people flying within, the EU – that it facilitates (and intends to facilitate) is part of a wider attempt by the European Union and the EU Member States to create means of mass surveillance that, in my opinion, fly in the face of the case-law of the Court of Justice of the EU.

In trying to justify the directive and the processing of personal data on hundreds of millions of individuals, the vast majority of whom are indisputably entirely innocent, the European Commission and the Member States not only do not produce relevant, measurable and peer-reviewable data, they do not even attempt to provide for the means to obtain such data. Rather, they apply “measures” of effectiveness that are not even deserving of that name: the wide use of the data and the “belief” of those using them that they are useful.

If proper tests are applied (as set out in sub-section 5.2(a), above), the disingenuousness of the “justifications” becomes clear: the claims of effectiveness of the PNR Directive (and the Dutch PNR Law) are based on sand; in fact, as the Dutch researchers rightly noted:

“There are no quantitative data on the way in which [and the extent to which] PNR data have contributed to the prevention, detection, investigation and prosecution of terrorist offences and serious crime.”

The Commission and the Member States also ignore the “high risks” that the tools used to “identify” individuals who “may be” terrorists or serious criminals entail. This applies in particular to the use of algorithm/AI-based data mining and of profiles based on such data mining that they want to massively increase.

If the Court of Justice were to uphold the PNR Directive, it would not only endorse the mass surveillance under the directive as currently practised – it would also give the green light to the massive extension of the application of (so far less used) sophisticated data mining and profiling technologies to the PNR data without regard for their mathematically inevitable serious negative consequences for tens and possibly hundreds of thousands of individuals.

What is more, that would also pave the way to yet further use of such (dangerous) data mining and profiling technologies in relation to other large population sets (such as all users of electronic communications, or of bank cards). Given that the Commission has stubbornly refused to enforce the Digital Rights Ireland judgment against Member States that continue to mandate retention of communications data, and is in fact colluding with those Member States in actually seeking to re-introduce mandatory communications data retention EU wide in the e-Privacy Regulation that is currently in the legislative process, this is a clear and imminent danger.

The hope must be that the Court will stand up for the rights of individuals, enforce the Charter of Fundamental Rights, and declare the PNR Directive (like the Data Retention Directive) to be fundamentally in breach of the Charter.

– o – O – o –

Douwe Korff (Prof.)

Cambridge (UK)

November 2021

1.1 The categories of personal data processed

An annex to the PNR Directive lists the specific categories of data that airlines must send to the database of the PIU of the Member State on the territory of which the flight will land or from the territory of which the flight will depart. This obligation is stipulated with regard to extra-EU flights but can be extended by each Member State to apply also to intra-EU flights – and all but one Member State have done so. The list of PNR data is much longer than the Advance Passenger Information (API) data that airlines must already send to the Member States under the API Directive, and includes information on travel agents used, travel routes, email addresses, payment (card) details, luggage, and fellow travellers. On the other hand, some basic details (such as date of birth) are often not included in the API data.

NB: The opinion focusses on the system as it is designed and intended to operate, and on what it allows (even if not everything that may be allowed is [yet] implemented in all Member States), and less on the somewhat slow implementation of the directive in the Member States and on the technical aspects that the Commission report and the staff working document often focussed on. It notes in particular a number of elements or aspects of the directive and the system it establishes that are problematic, either conceptually or in the way they are supposed to operate or to be evaluated.