Does the EU PNR Directive pave the way to mass surveillance in the EU? (soon to be decided by the CJEU…)

Fundamental Rights European Experts Group

(FREE-Group)

Opinion on the broader and core issues arising in the PNR Case currently before the CJEU (Case C-817/19)

by Douwe Korff (Emeritus Professor of International Law, London Metropolitan University Associate, Oxford Martin School, University of Oxford)

(LINK TO THE FULL VERSION 148 Pages)

EXECUTIVE SUMMARY

(with a one-page “at a glance” overview of the main findings and conclusions)

Main findings and conclusions at a glance

In my opinion, the appropriate tests to be applied to mass surveillance measures such as are carried out under the PNR Directive (and were carried out under the Data Retention Directive, and are still carried out under the national data retention laws of the EU Member States that continue to apply in spite of the CJEU case-law) are:

Have the entities that apply the mass surveillance measure – i.e., in the case of the PNR Directive (and the DRD), the European Commission and the EU Member States – produced reliable, verifiable evidence:

  • that those measures have actually, demonstrably contributed significantly to the stated purpose of the measures, i.e., in relation to the PNR Directive, to the fight against PNR-relevant crimes (and in relation to the DRD, to the fight against “serious crime as defined by national law”); and
  • that those measures have demonstrably not seriously negatively affected the interests and fundamental rights of the persons to whom they were applied?

If the mass surveillance measures do not demonstrably pass both these tests, they are fundamentally incompatible with European human rights and fundamental rights law and the Charter of Fundamental Rights; in other words, the entities that apply such measures must justify them on the basis of hard, verifiable, peer-reviewable data.

The conclusion reached by the European Commission and the Dutch Minister of Justice – that, overall, the PNR Directive (respectively, the Dutch PNR law) had been “effective” because the EU Member States said so (the Commission) or because PNR data were quite widely used and the competent authorities said so (the Dutch Minister) – is fundamentally flawed, given that it was reached in the absence of any real supporting data. Rather, my analyses show that:

  • Full PNR data are disproportionate to the purpose of basic identity checks;
  • The necessity of the PNR checks against Interpol’s Stolen and Lost Travel Document database is questionable;
  • The matches against unspecified national databases and “repositories” are not based on foreseeable legal rules and are therefore not based on “law”;
  • The necessity and proportionality of matches against various simple, supposedly “suspicious” elements (tickets bought from a “suspicious” travel agent; “suspicious” travel route; etc.) is highly questionable; and
  • The matches against more complex “pre-determined criteria” and profiles are inherently and irredeemably flawed; they lead to tens, perhaps hundreds of thousands of innocent travellers being wrongly labelled as persons who “may be” involved in terrorism or serious crime, and are therefore unsuited (D: ungeeignet) to the purpose of fighting terrorism and serious crime.

The hope must be that the Court will stand up for the rights of individuals, enforce the Charter of Fundamental Rights, and declare the PNR Directive (like the Data Retention Directive) to be fundamentally in breach of the Charter.

– o – O – o –

Executive Summary

This document summarises the analyses and findings in the full Opinion on the broader and core issues arising in the PNR Case currently before the CJEU (Case C-817/19), using the same headings and heading numbers. Please see the full opinion for the full analyses and extensive references. A one-page “at a glance” overview of the main findings and conclusions is also provided.

The opinion drew in particular on the following three documents, also referred to throughout this Executive Summary: the European Commission’s review report on the PNR Directive, the accompanying Commission staff working document, and the report on the evaluation of the Dutch PNR law.

– o – O – o –

  1. Introduction

In the opinion, after explaining, at 2, the broader context in which personal data are being processed under the PNR Directive, I try to assess whether the processing that the PNR Directive requires or allows is suitable, effective and proportionate to the aims of the directive. In making those assessments, I base myself on the relevant European human rights and data protection standards, summarised at 3.

NB: The opinion focusses on the system as it is designed and intended to operate, and on what it allows (even if not everything that may be allowed is [yet] implemented in all Member States), and less on the somewhat slow implementation of the directive in the Member States and on the technical aspects that the Commission report and the staff working document often focussed on. It notes in particular a number of elements or aspects of the directive and the system it establishes that are problematic, either conceptually or in the way they are supposed to operate or to be evaluated.

2. PNR in context

In the footsteps of the US and UK intelligence services (as revealed by Snowden), the EU Member States’ law enforcement agencies are increasingly using their access to bulk data – bulk e-communications data, financial data, PNR data, etc. – to “mine” the big data sets by means of sophisticated, self-learning algorithms and Artificial Intelligence (AI).

The European Union Agency for Law Enforcement Cooperation, Europol, has become increasingly involved in algorithm/AI-based data analysis (or at least in the research underpinning those technologies), and last year the Commission proposed to significantly further expand this role.

The processing of PNR data under the PNR Directive must be seen in these wider contexts: the clear and strengthening trend towards more “proactive”, “preventive” policing by means of analyses and algorithm/AI-based data mining of (especially) large private-sector data sets and databases; the increasingly central role played by Europol in this (and the proposal to expand that role yet further); the focusing on “persons of interest” against whom there is (as yet) insufficient evidence for action under the criminal law (including persons against whom there is an “Article 36 alert” in the SIS II database, to which Europol has access); and the still increasing intertwining of law enforcement and national security “intelligence” operations in those regards.

Notably, “Article 36 SIS alerts” have been increasing, and in the Netherlands, in 2020, 82.4% of all PNR “hits” against the Schengen Information System, confirmed by the Dutch Passenger Information Unit established under the PNR Directive, were “hits” against “Article 36 alerts”.

Human rights-, digital rights- and broader civil society NGOs have strongly criticised these developments and warned of the serious negative consequences. Those concerns should be taken seriously, and be properly responded to.

3. Legal standards

General fundamental rights standards stipulate that all interferences with fundamental rights must be based on a “law” that meets the European “quality of law” standards: the law must be public, clear and specific, and foreseeable in its application; the interferences must be limited to what is “necessary” and “proportionate” to serve a “legitimate aim” in a democratic society; the relevant limitations must be set out in the law itself (and not left to the discretion of states or state authorities); and those affected by the interferences must be able to challenge them and have a remedy in a court of law.

Generalised, indiscriminate surveillance of whole populations (such as all air passengers flying to or from the EU) violates the EU Charter of Fundamental Rights. A special exception to this prohibition, accepted by the EU Court of Justice in the La Quadrature du Net case, allows EU Member States to respond to “serious”, “genuine and present or foreseeable” threats to “the essential functions of the State and the fundamental interests of society”, but it must be strictly limited in time and place: it cannot form the basis for continuous, indefinite surveillance of large populations (such as all air passengers) generally; that would turn the (exceptional) exception into the rule. Yet that is precisely what the PNR Directive provides for.

European data protection law expands on the above general principles in relation to the processing of personal data. The (strict) case-law of the CJEU and the European Court of Human Rights on data protection generally, and on generalised surveillance in particular, is reflected in the European Data Protection Board’s European Essential Guarantees for surveillance (EEGs).

Processing of information on a person suggesting that that person “may be” involved in criminal activities is subject to especially strict tests of legitimacy, necessity and proportionality.

Contrary to assertions by the European Commission and representatives of EU Member States (inter alia, at the hearing in the PNR case in July 2021) that the processing under the PNR Directive has little or no effect on the rights and interests of the data subjects, the processing under the directive must under EU data protection law be classified as posing “high risks” to the fundamental rights and interests of hundreds of millions of airline passengers.

Under the Law Enforcement Directive (as under the GDPR), this means that the processing should be subject to careful evaluation of the risks and to remedial action to prevent, as far as possible, any negative consequences of the processing – such as the creation of “false positives” (cases in which a person is wrongly labelled as a person who “may be” involved in terrorism or serious crime). It also means that if it is not possible to avoid excessive negative consequences, the processing is “not fit for purpose” and should not be used.

Under the proposed Artificial Intelligence Act that is currently under consideration, similar duties of assessment and remedial action – or abandoning of systems – are to apply to AI-based processes.

4. The PNR Directive

4.1 Introduction

4.2 The system

Under the PNR Directive, special “Passenger Information Units” (PIUs) in each EU Member State match the data contained in so-called passenger name records (PNRs) – which airlines flying into or from the EU have to provide to those units – against supposedly relevant lists and databases, both to identify already “known” formally wanted persons or already “known” “persons of interest” who “may be” involved in terrorism or other serious crime, and to “identify” (i.e., label) previously “unknown” persons who “may be” involved in such activities by means of “risk analyses” and the identification of “patterns” and “profiles” based on the identified patterns (see below, at 4.8).
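To make the data flow concrete, the two-stage process just described can be sketched schematically as follows. This is a minimal illustration in Python; all names, fields and the toy review rule are assumptions for exposition, not anything specified in the directive or used by actual PIUs.

```python
# Schematic sketch of the two-stage PIU assessment described above; all names,
# fields and the toy review rule are illustrative assumptions, not the
# directive's text or any PIU's actual logic.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class PNR:
    full_name: str
    date_of_birth: str  # often missing or unreliable in real PNRs ("")
    itinerary: str

def automated_match(pnr: PNR,
                    watchlists: Iterable[set[str]],
                    criteria: Callable[[PNR], bool]) -> bool:
    """Stage 1: automated matching against lists/databases and against
    'pre-determined criteria' (possibly opaque, AI-based scoring)."""
    on_a_list = any(pnr.full_name in wl for wl in watchlists)
    return on_a_list or criteria(pnr)

def manual_review(pnr: PNR) -> bool:
    """Stage 2: review by PIU staff; in practice roughly 80% of initial
    'hits' are discarded at this point."""
    return bool(pnr.date_of_birth)  # toy rule: discard hits lacking a DOB

def assess(pnr: PNR, watchlists, criteria) -> bool:
    """True if the 'hit' would be passed on to the 'competent authorities'
    for 'further examination'."""
    return automated_match(pnr, watchlists, criteria) and manual_review(pnr)
```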

The opinion analyses and assesses all major elements of the system in turn.

4.3 The aims of the PNR Directive

In simple terms, the overall aim of the PNR Directive is to facilitate the apprehension of individuals involved in terrorism or other serious transnational crime, including in particular international drug trafficking and people trafficking.

However, the first aim of the checking of the PNR data by the PIUs is more limited than the aims of the directive overall; it is: to identify persons who require further examination by the competent authorities [see below, at 4.5] and, where relevant, by Europol [see below, at 4.10], in view of the fact [?] that such persons may be involved in a terrorist offence or serious crime (Article 6(1)(a)).

When there is a match of PNR data against the various lists, i.e., a “hit” (see below, at 4.8), the PIU passes this “hit” on to certain “competent authorities” (see below, at 4.5) for “further examination”; if the initial “hit” was generated by automated means, this is done only after a manual review by PIU staff. In practice, about 80% of initial “hits” are discarded (see below, at 4.8).

It is one of the main points of the opinion that the suitability, effectiveness and proportionality of the PNR Directive cannot and should not be assessed by reference to the number of initial “hits” noted by the PIUs compared to the number of cases passed on for “further examination” to the competent authorities, but rather by reference to more concrete outcomes (as is done in section 5.2).

4.4 The Legal Basis of the PNR Directive

It appears obvious from the Court of Justice’s opinion on the draft EU-Canada PNR agreement that the PNR Directive, like that draft agreement, should have been based on Articles 16 and 87(2)(a) TFEU, and not on Article 82(1) TFEU. It follows that the PNR Directive, too, appears not to have been adopted in accordance with the properly applicable procedure. That could lead to the directive being declared invalid on that ground alone.

4.5 The Competent Authorities

Although most competent authorities (authorities authorised to receive PNR data and the results of processing of PNR data from the PIUs) in the EU Member States are law enforcement agencies, “many Member States [have designated] intelligence services, including military intelligence services, as authorities competent to receive and request PNR data from the Passenger Information Unit”, and in some Member States the PIUs are actually “embedded in … [the] state security agenc[ies]”.

Given the increasingly close cooperation between law enforcement agencies (and border agencies) and intelligence agencies, in particular in relation to the mining of large data sets and the development of ever more sophisticated AI-based data mining technologies by the agencies working together (and in future especially also with and through Europol), this involvement of the intelligence agencies (and, in future, Europol) in PNR data mining must be seen as a matter of major concern.

4.6 The crimes covered (“PNR-relevant offences”)

The PNR Directive stipulates that PNR data and the results of processing of PNR data may only be used for a range of terrorist and other serious offences, as defined in Directive 2017/541 and in an annex to the PNR Directive, respectively (so-called “PNR-relevant offences”).

The processing under the PNR Directive aims to single out quite different categories of data subjects from this large base: on the one hand, it seeks to identify already “known” formally wanted persons (i.e., persons formally designated suspects under criminal [procedure] law, persons formally charged with or indicted for, or indeed already convicted of, PNR-relevant offences) and already “known” “persons of interest” (who are not yet formally wanted) by checking basic identity data in the PNRs against the corresponding data in “wanted” lists (such as “Article 26 alerts” in SIS II); and on the other hand, it seeks to “identify” previously “unknown” persons as possibly being terrorists or serious criminals, or “of interest”, on the basis of vague indications and probability scores. In the latter case, the term “identifying” means no more than labelling a person as a possible suspect or “person of interest” on the basis of a probability.

The opinion argues that any assessment of the suitability, effectiveness and proportionality of the processing must make a fundamental distinction between these different categories of data subjects (as is done in section 5).

4.7 The categories of personal data processed

An annex to the PNR Directive lists the specific categories of data that airlines must send to the database of the PIU of the Member State on the territory of which the flight will land or from the territory of which the flight will depart. This obligation is stipulated with regard to extra-EU flights but can be extended by each Member State to apply also to intra-EU flights – and all but one Member State have done so. The list of PNR data is much longer than the list of Advance Passenger Information (API) data that airlines must already send to the Member States under the API Directive, and includes information on travel agents used, travel routes, email addresses, payment (card) details, luggage, and fellow travellers. On the other hand, some basic details (such as date of birth) that are in the API data are often not included in the PNRs.

The use of sensitive data

The PNR Directive prohibits the processing of sensitive data, i.e., “data revealing a person’s race or ethnic origin, political opinions, religion or philosophical beliefs, trade union membership, health, sexual life or sexual orientation”. In the event that PNR data revealing such information are received by a PIU, they must be deleted immediately. Moreover, competent authorities may not take “any decision that produces an adverse legal effect on a person or significantly affects a person” on the basis of such data. However, PNR data can be matched against national lists and data “repositories” that may well contain sensitive data. Moreover, as noted at 4.8(f), below, the provisions in the PNR Directive do not really protect against discriminatory outcomes of the profiling that it encourages.

4.8 The different kinds of matches

(a) Matching of basic identity data in PNRs against the identity data of “known” formally wanted persons

PNR data are matched against SIS II alerts on “known” formally wanted persons (including “Article 26 alerts”) and against “relevant” national lists of “known” formally wanted persons.

This is usually done by automated means, followed by a manual review. The Commission reports that approximately 81% of all initial matches are rejected – and not passed on to competent authorities for further examination. Notably:

– the quality of the PNR data as received by the PIUs, including even of the basic identity data, is apparently terrible and often “limited”; this is almost certainly the reason for the vast majority of the 81% rejections;

– most of the long list of PNR data items is not needed for basic identity checks: full names, date of birth, gender and citizenship/nationality should suffice – and a passport or identity card number would make the match more reliable still. All those data are included in the API data, and all are included in optical character recognition format in the machine-readable travel documents (MRTDs) that have been in wide use since the 1980s (a minimal sketch of such a basic identity check follows at the end of this sub-section).

In other words, paradoxically, PNR data are both excessive for the purpose of basic identity checks (by containing extensive data that are not needed for such checks), and insufficient (“too limited”), in particular in relation to intra-Schengen flights (by not [always] including the dates of birth of the passengers).

– the lists against which the PNR data are compared, including in particular the SIS alerts and the EAW lists, but also many national lists, relate to many more crimes than are subject to the PNR Directive (“PNR-relevant offences”) – but in several Member States “hits” against not-PNR-relevant suspects (etc.) are still passed on to competent authorities, in clear breach of the purpose-limitation principle underpinning the directive.

In that respect, it should be noted that the Commission staff working document claims that in relation to situations in which the PNR data is “too limited” (typically, by not including date of birth), “[t]he individual manual review provided for in Article 6.5 of the PNR Directive protects individuals against the adverse impact of potential ‘false positives’” – but this is simply untrue: while a confirmed match of identity data relating to a person who is formally wanted in relation to PNR-relevant offences can be regarded as a “positive” result of the identity check, a “hit” relating to a person who is wanted for not-PNR-relevant offences should of course not be regarded as a positive result under the PNR Directive.
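The point about excessive yet insufficient data can be made concrete with a minimal sketch of a basic identity check. The field names and matching rule below are assumptions for illustration only, not the PIUs’ actual logic; the sketch simply shows that API-type fields suffice, and that a missing date of birth breaks the match.

```python
# Minimal sketch of a basic identity check using only API-type fields;
# field names and the matching rule are illustrative assumptions.
def identity_match(passenger: dict, alert: dict) -> bool:
    """A document number makes the match near-conclusive; otherwise fall
    back on full name plus date of birth."""
    if passenger.get("doc_no") and passenger.get("doc_no") == alert.get("doc_no"):
        return True
    return (passenger.get("full_name", "").casefold()
            == alert.get("full_name", "").casefold()
            and passenger.get("dob") is not None
            and passenger.get("dob") == alert.get("dob"))

# A PNR lacking a date of birth cannot be reliably matched - the likely
# source of most of the ~81% of rejected initial matches:
print(identity_match({"full_name": "A. N. Other", "dob": None},
                     {"full_name": "A. N. Other", "dob": "1980-01-01"}))  # False
```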

(b) Matching of basic identity data in PNRs against the identity data of “known” “persons of interest”

In principle, the matching of basic identity data from PNRs against lists of basic identity data of “persons of interest” listed in the SIS system (and comparable categories in national law enforcement repositories) should, like the matching of data on formally wanted persons, be fairly straightforward.

However, the PNRs in this regard suffer, first of all, from the same two deficiencies as were discussed in relation to matches for formally wanted persons, at (a), above: PNR data are both excessive for the purpose of basic identity checks (by containing extensive data that are not needed for such checks) and insufficient (“too limited”), in particular in relation to intra-Schengen flights (by not [always] including the dates of birth of the passengers). The third issue identified in the previous sub-section – that SIS alerts (and similar alerts in national law enforcement repositories) can relate to many more criminal offences than those that are “PNR-relevant” – also applies: many persons labelled “person of interest” will be so labelled in relation to non-PNR-relevant offences.

In my opinion, while a confirmed match of identity data relating to persons who are formally wanted in relation to (formally suspected of, charged with, or convicted of) PNR-relevant offences can be regarded as a “positive” result of an identity check, a “hit” relating to persons who are merely labelled “person of interest” should not be regarded as a positive result under the PNR Directive – certainly not if they are so labelled in relation to non-PNR-relevant offences, but also not if they are not in any way implicated as culpable of PNR-relevant offences.

In my opinion, even confirmed “hits” confirming the identity of already listed “persons of interest” should not be regarded as “positive” results under the PNR Directive unless they result in those persons subsequently being formally declared to be formal suspects in relation to terrorist or other serious, PNR-relevant criminal offences.

(c) Matching of PNR Data against data on lost/stolen/fake credit cards and lost/stolen/fake identity or travel documents

The staff working document makes clear that PNR data are checked by “a large majority of PIUs” against Interpol’s Stolen and Lost Travel Document database as one “relevant database”. However, this is something of a residual check, because that database is already made available to airlines through Interpol’s “I-Checkit” facility. Moreover:

Even leaving the issue of purpose-limitation aside, a “hit” against a listed lost/stolen/fake credit card or a lost/stolen/fake identity or travel document should still only be considered a “positive result” in terms of the PNR Directive if it results in a person subsequently being formally declared to be (at least) a formal suspect in relation to terrorist or other serious, PNR-relevant criminal offences.

(d) Matching of PNR data against other, unspecified, supposedly relevant (in particular national) databases

It is far from clear what databases can be – and in practice, in the different Member States, what databases actually are – regarded as “relevant databases” in terms of the PNR Directive: this is left to the Member States. At the July 2021 Court hearing, the representative of the Commission said that the data of Facebook, Amazon and Google could not be regarded as “relevant”, and that law enforcement databases (des bases policières) would be the most obvious “relevant” databases. But the Commission did not exclude matches against other databases with relatively “hard” data, such as databases with financial data (credit card data?) or telecommunications data (location data?).

The vagueness of the phrase “relevant databases” in Article 6(3)(a) and the apparently wide discretion granted to Member States to allow matching against all sorts of unspecified data sets is incompatible with the Charter of Fundamental Rights and the European Convention on Human Rights. It means that the application of the law is not clear or foreseeable to those affected – i.e., the provision is not “law” in the sense of the Charter and the Convention (and EU law generally) – and that the laws can be applied in a disproportionate manner.

In other words, even in relation to the basic checks on the basis of lists of “simple selectors”, the PNR Directive does not ensure that those checks are based on clear, precise, and in their application foreseeable Member State laws, or that those laws are only applied in a proportionate manner. In the terminology of the European Court of Human Rights, the directive does not protect individuals against arbitrary interferences with the rights to privacy and protection of personal data.

(e) Matching of PNR data against lists of “suspicious travel agents”, “suspicious routes”, etc.

The staff working document repeatedly refers to checks of PNR data against “patterns” such as tickets being bought from “suspicious” travel agents, the use of “suspicious” travel routes, or passengers carrying “suspicious” amounts of luggage (and the Dutch evaluation report even mentions that a person wearing a suit and hastening through customs [while being black] was regarded by the customs authorities as fitting a “suspicious” pattern). No proper prosecuting or judicial authority could declare a traveller a formal suspect – let alone charge, prosecute or convict a traveller – on the basis of a match against such simple “suspicious” elements alone. In my opinion:

For the purpose of evaluating the suitability, effectiveness and proportionality of the PNR Directive (and of the practices under the directive), a simple “hit” against these vague and far-from-conclusive factors or “criteria” should not be regarded as a “positive” result. Rather, a “hit” against such vague “criteria” as the purchase of an air ticket from a “suspicious” travel agent, the use of a “suspicious” route, or the carrying of a “suspicious” amount of luggage – let alone “walking fast in a suit (while being black)” – should again only be considered a “positive result” in terms of the PNR Directive if it results in a person subsequently being formally declared to be (at least) a formal suspect in relation to terrorist or other serious, PNR-relevant criminal offences.

(f) Matching of data in the PNRs against more complex “pre-determined criteria” or profiles

(fa)      Introduction

Under the PNR Directive, PIUs may, in the course of carrying out their assessment of whether passengers “may be involved in a terrorist offence or [other] serious crime”, “process PNR data against pre-determined criteria”. As also noted by the EDPS, it is clear that the PNR data can be matched against “patterns” discerned in previous data and against “profiles” of possible terrorists and serious criminals created on the basis of these patterns, that are more complex than the simple patterns discussed at (e), above. This is also undoubtedly the direction in which searches for terrorists and other serious criminals are moving.

(fb)      The nature of the “pre-determined criteria”/“profiles”

The EU and EU Member State agencies are increasingly applying, or are poised to apply, increasingly sophisticated data mining technologies such as are already used by the UK (and US) agencies. This involves self-learning, AI-based algorithms that are constantly dynamically re-generated and refined through loops linking back to earlier analyses. The software creates constantly self-improving and refining profiles against which it matches the massive amounts of data – and in the end, it produces lists of individuals that the algorithm suggests may (possibly or probably) be terrorists, or associates of terrorists or other serious criminals. It is the stated policy of the EU to accelerate the development and deployment of these sophisticated technologies, under the guidance of Europol.

Whatever the current level of use of such sophisticated techniques in law enforcement and national security contexts in the Member States (as discussed at (fd), below), if the PNR Directive is upheld as valid in its current terms, nothing will stand in the way of the ever-greater deployment of these more sophisticated (but flawed) technologies in relation to air passengers. That would also pave the way to yet further use of such (dangerous) data mining and profiling in relation to other large population sets (such as all users of electronic communications, or of bank cards).

(fc)      The creation of the “pre-determined criteria”/“profiles”

Given (a) the increasingly sophisticated surveillance and data analysis/data mining/risk assessment technologies developed by the intelligence services of the EU Member States (often drawing on US and UK experience) and now also by law enforcement agencies, and (b) the clear role assigned to Europol in this respect, it would appear clear that a cadre of data mining specialists is being developed in the EU – and that the PNR data are one of the focus areas for this work. In other words, the “pre-determined criteria” – or AI-based algorithms – that are to be used in the mining of the PNR data are being developed not solely by or within the PIUs, but by this broader cadre, which draws in particular on intelligence experts (some of whom may be embedded in the PIUs). Between them, the PNR databases also serve as a test laboratory for data mining/profiling technologies. And (c) there is nothing in the PNR Directive that stands in the way of using data other than PNR data in the creation of “pre-determined criteria”, or indeed in the way of using profiles developed by other agencies (including intelligence agencies) as “pre-determined criteria” in the PIU analyses.

(fd)      The application of the more complex “pre-determined criteria”/“profiles” in practice

It would appear that, to date, few Member States use data mining in relation to PNR data in as sophisticated a way as described at (fb), above (or at least, few acknowledge such use).

However, in a range of EU Member States algorithm/AI-based profiling is already in use in relation to broader law enforcement (and especially crime prevention). Moreover, the aim of the Commission and the Member States is expressly to significantly expand this use, with the help of Europol and its Travel Intelligence Task Force, and through “training on the development of pre-determined criteria” in “an ongoing EU-funded project, financed under the ISF-Police Union Actions.”

This merely underlines the point I made in the previous sub-sections: that the PNR database is being used as a test laboratory for advanced data mining technologies, and that, if the PNR Directive is upheld as valid in its current terms, nothing will stand in the way of the ever-greater deployment of these more sophisticated (but flawed) technologies in relation to air passengers, and others. The fact that sophisticated data mining and profiling is said not yet to be in widespread operational use in most Member States should not be a reason for ignoring this issue – on the contrary: this is the clearly desired destination of these developments.

(fe)      The limitations of and flaws in the technologies

There are three main problems with algorithmic data mining-based detection of rare phenomena (such as terrorists and serious criminals in a general population):

– The base-rate fallacy and its effect on false positives:

In very simple layperson’s terms, the base-rate fallacy means that if you are looking for very rare instances or phenomena in a very large dataset, you will inevitably obtain a very high proportion of false positives in particular – and this cannot be remedied by adding more or somehow “better” data: that merely adds hay to the haystack.

As noted above, at 4.7, a very rough guess would be that, on average, the 1 billion passenger movements counted by Eurostat as flights to or from the EU relate to some 500 million distinct individuals. In other words, the base population for the PNR checks can reasonably be assumed to be in the region of 500 million people.

The Commission reports that there are initial “hits” in relation to 0.59% of all PNRs, while 0.11% of all PNRs are passed on as confirmed “hits” to competent authorities for “further examination”. The Commission report and the staff working document appear to imply – and certainly do nothing to refute – that the 0.11% of all confirmed “hits” that are passed on to competent authorities are all “true positives”. However, that glaringly fails to take account of the base rate, and its impact on results.

Even if the PNR checks had a failure rate of just 0.1% (meaning that (1) in relation to persons who are actually terrorists or serious criminals, the PIUs will rightly confirm this as a proper “hit” 99.9% of the time, and fail to do so 0.1% of the time and (2) in relation to persons who are not terrorists, the PIUs will rightly not generate a confirmed “hit” 99.9% of the time, but wrongly register the innocent person as a confirmed “hit” 0.1% of the time) the probability that a person flagged by this system is actually a terrorist would still be closer to 1% than to 99%. In any case, even if the accuracy rate of the PNR checks were to be as high as this assumed 99.9% (which of course is unrealistic), that would still lead to some 500,000 false positives each year.
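The arithmetic behind these figures can be made explicit. In the sketch below, the 500 million population and the (unrealistically generous) 99.9% accuracy are the figures used above; the number of actual terrorists/serious criminals (5,000) is purely an illustrative assumption, since no real figure is available.

```python
# Worked example of the base-rate effect described above. The population of
# 500 million and the 99.9% accuracy are the figures posited in the text;
# the number of actual targets (5,000) is an illustrative assumption only.
population = 500_000_000
actual_targets = 5_000              # assumed; the true figure is unknown
sensitivity = 0.999                 # actual targets correctly flagged
false_positive_rate = 0.001         # innocent travellers wrongly flagged

true_pos = actual_targets * sensitivity                          # ~4,995
false_pos = (population - actual_targets) * false_positive_rate  # ~500,000
ppv = true_pos / (true_pos + false_pos)  # P(actual target | flagged)

print(f"false positives per year: {false_pos:,.0f}")   # ~499,995
print(f"P(actual target | flagged): {ppv:.1%}")        # ~1.0%, not 99.9%
```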

Yet the Commission documentation is silent about this.

– Built-in biases:

The Commission staff working document claims that, because the “pre-determined criteria” that are used in algorithmic profiling may not be based on sensitive data, “the assessment cannot be carried out in a discriminatory manner” and that “[t]his limits the risk that discriminatory profiling will be carried out by the authorities.” This is simply wrong.

In simple terms: since “intimate part[s] of [a person’s] private life” can be deduced, or at least inferred, from seemingly innocuous information – such as data included in PNRs (in particular if matched against other data) – those “intimate aspects” are not “fully protected by the processing operations provided for in the PNR Directive”. Indeed, in a way, the claim to the contrary is absurd: the whole point of “risk analysis” based on “pre-determined criteria” is to discover unknown, indeed hidden matters about the individuals who are being profiled: inferring from the data on those people, on the basis of the application of those criteria, that they are persons who “may be” involved in terrorism or other serious crimes surely is a deduction of an “intimate aspect” of those persons (even if it is not specifically or necessarily a sensitive datum in the GDPR sense – although if the inference was that a person “might be” an Islamist terrorist, that would be a [tentatively] sensitive datum in the strict sense). Moreover, even without specifically using or revealing sensitive information, the outcomes of algorithmic analyses and processing, and the application of “abstract”, algorithm/AI-based criteria to “real” people can still lead to discrimination.
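A trivial example makes the point. Even though no “religion” field is ever processed, a standard PNR item such as a special meal request can act as a near-perfect proxy for one. The sketch below is a deliberately crude illustration, not an actual PIU rule (the meal codes shown are standard airline special-service-request codes):

```python
# Toy illustration of proxy inference: no 'religion' field is processed,
# yet religion is effectively inferred from an 'innocuous' PNR item.
# The inference rule is a deliberately crude assumption; the meal codes
# (MOML, KSML, HNML) are standard airline special-service-request codes.
MEAL_PROXY = {"MOML": "likely Muslim", "KSML": "likely Jewish",
              "HNML": "likely Hindu"}

def infer_religion(pnr_fields: dict) -> str | None:
    return MEAL_PROXY.get(pnr_fields.get("special_meal_request", ""))

print(infer_religion({"special_meal_request": "MOML"}))  # 'likely Muslim'
```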

The PNR Directive stipulates that the assessment[s] of passengers prior to their scheduled arrival in or departure from the Member State, carried out with the aim of identifying persons who require further examination by the competent authorities, “shall be carried out in a non-discriminatory manner”. However, this falls considerably short of stipulating: (i) that the “pre-determined criteria” (the outputs of the algorithms) must not be biased in some way, and (ii) that measures must be taken to ensure that the outcomes of the assessments are not discriminatory. It is important to address both those issues (as explained in a recent EDRi/TU Delft report).

Given that profile-based matching to detect terrorists and other serious criminals is inherently “high risk” (as noted at 3, above, and further discussed at 5, below), it requires an in-depth Data Protection Impact Assessment under EU data protection law, and indeed a broader human rights impact assessment. The need for serious pre-evaluation of algorithms to be used in data mining, and for continuous re-evaluation throughout their use, is also stressed in various paragraphs of the recent Council of Europe recommendation on profiling. The proposed AI Act also requires this.

However, no serious efforts have been made by the European Commission or the EU Member States to fulfil these duties. Neither has ensured that the full, appropriate basic information required for such serious ex ante and ex post evaluations is even sought or recorded.

In sum: the European Commission and the EU Member States have not ensured that in practice the processing of the PNR data, and the linking of those data to other data (databases and lists), does not have discriminatory outcomes. The mere stipulation that outputs of algorithmic/AI-based profiling should not be “solely based on” sensitive aspects of the data subjects (the airline passengers) falls far short of ensuring compliance with the prohibition of discrimination.

– Opacity and unchallengeability of decisions:

In the more developed “artificial intelligence” or “expert” systems, the computers operating the relevant programmes create feedback loops that continuously improve the underlying algorithms – with almost no one, in the end, being able to explain the results: the analyses are based on underlying code that many of those who rely on it cannot properly understand, and that often cannot even be expressed in plain language. This makes it extremely difficult to provide for serious accountability in relation to, and redress against, algorithm-based decisions generally. Profiling thus poses a serious threat of a Kafkaesque world in which powerful agencies take decisions that significantly affect individuals, without those decision-makers being able or willing to explain the underlying reasoning for those decisions, and in which those subjects are denied any effective individual or collective remedies.

That is how serious the issue of profiling is: it poses a fundamental threat to the most basic principles of the Rule of Law and the relationship between the powerful and the people in a democratic society. Specifically in relation to PNR:

– PIU staff cannot challenge algorithm-based computer outputs;

– The staff of the competent authorities are also unlikely (or indeed also effectively unable) to challenge the computer output; and

– Supervisory bodies cannot properly assess the systems.

External supervisory bodies such as Member States’ data protection supervisory authorities will generally not be given access to the underlying data, cannot review the algorithms at the design stage or at regular intervals after deployment and in any case do not have the expertise. Internal bodies are unlikely to be critical and may involve the very people who design the system (who write the code that provides the [dynamic] algorithm). The report on the evaluation of the Dutch PNR Law noted that under that law (under which the algorithms/profiles are supposed to be checked by a special commission):

The rules [on the creation of the pre-determined criteria] do not require the weighing [of the elements] or the threshold value [for regarding a “hit” against those criteria to be a valid one] to meet objective scientific standards.

This is quite astonishing: it acknowledges that the algorithm/AI-based profiles are essentially unscientific. In my opinion, this fatally undermines the way the pre-determined criteria are created and “tested” in the Netherlands. Yet at the same time, the Dutch system, with this “special commission”, is probably better than what is in place in most other EU Member States. This surely is a matter that should be taken into account in any assessment of the PNR system EU-wide – including the assessment that is shortly to be made by the Luxembourg Court.

In sum:

– because the base population for the PNR data mining is so large (in the region of 500 million people) and the incidence of terrorists and serious criminals within this population so relatively low, algorithm/AI-based profiling is likely to result in tens of thousands of “false positives”: individual air passengers who are wrongly labelled as persons who “may be” involved in terrorism or other serious crime;

– the provisions in the PNR Directive that stipulate that no sensitive data may be processed, and that individual decisions and matches may not be “solely based on” sensitive aspects of the individuals concerned do not protect those individuals from discriminatory outcomes of the profiling;

– the algorithm/AI-based outcomes of the processing are almost impossible to challenge because those algorithms are constantly dynamically changed (“improved” through self-learning) and therefore in effect impossible to fully comprehend even by those carrying out the analyses/risk assessments; and

– the outputs and outcomes of the algorithm/AI-based profiling, data mining and matching are not subject to proper scientific testing or auditing, and are extremely unlikely to be made subject to such testing and auditing.

4.9 Direct access to PNR data by EU Member States’ intelligence agencies

It appears that at least in the Netherlands, the national intelligence agencies are granted direct access to the bulk PNR database, without having to go through the PIU (or at least without this being properly recorded). If the Dutch authorities were to argue that such direct access to data by the Dutch intelligence agencies is outside EU law, they would be wrong. Specifically, in its LQDN judgment, the CJEU held that the rules on personal data processing operations by entities that are, in that processing, subject to EU data protection law (in that case, providers of electronic communication services, who are subject to the e-Privacy Directive), including processing operations by such entities resulting from obligations imposed on them (under the law) by Member States’ public authorities (in that case, for national security purposes) can be assessed for their compatibility with the relevant EU data protection instrument and the Charter of Fundamental Rights.

In my opinion, if the Dutch intelligence and security agencies do indeed have direct access to the PNR database, without having to go through the Dutch PIU (the Pi-NL), or without that being recorded – as appears to be pretty obviously the case – that is in direct breach of the PNR Directive, of the EU data protection instruments, and of the EU Charter of Fundamental Rights.

Whether the EU data protection instruments and the PNR Directive are similarly circumvented in other EU Member States, I do not know. Let me just recall that in several Member States, the PIU is “embedded in … [the] state security agenc[ies]”. However, the Dutch example shows how dangerous, in a democratic society, the accruing of such bulk databases is.

4.10 Dissemination and subsequent use of the data and purpose-limitation

(a) Spontaneous provision of PNR data and information on (confirmed) “hits”

In principle, subject only to a “relevant and necessary” requirement in relation to transmissions to the other PIUs, confirmed “hits” can be very widely shared across all the EU Member States, both between the PIUs and, via the PIUs, with any “competent authority” in any Member State (including intelligence agencies, where those are designated as such: see at 4.5, above).

(aa)     Spontaneous provision of information to domestic competent authorities on the basis of matches against lists and databases (including SIS II)

The Commission staff working report gives no insight into the actual scope of spontaneous dissemination of PNR data or “results of the processing” of PNR data by the PIUs on the basis of (confirmed) “hits” to competent authorities in the PIUs’ own countries.

The report on the evaluation of the Dutch PNR law suggests that, in that country, spontaneous provision of PNR data to Dutch authorities “for further examination” is still effectively limited to (confirmed) matches against the SIS II database, and indeed to matches against the alerts listed in Articles 26 and 36 of the Council Decision establishing that database (respectively, alerts for persons wanted for arrest for extradition, and alerts relating to people or vehicles requiring discreet checks). The Dutch SIS II matches amounted to roughly 10 in every 100,000 passengers (2:100,000 “Article 26” matches and 8:100,000 “Article 36” matches).

If the Dutch statistics of 10:100,000 and 82.4% are representative of the overall situation in the EU, this would mean that each year, out of the 500 million passengers on whom PNR data are collected, approximately 50,000 passengers are subjected to “further examination” on the basis of a SIS II match, some 40,000 of them on the basis of “Article 36 alerts”, i.e., as “persons of interest” who are not (yet) formally wanted in relation to any crime (let alone a PNR-relevant one).
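Spelled out, the extrapolation runs as follows (a simple sketch using only the figures just quoted; the 40,000 figure corresponds to the 8:100,000 “Article 36” match rate):

```python
# The extrapolation spelled out, using only the figures quoted above.
passengers_per_year = 500_000_000   # distinct individuals (see 4.7, above)
sis_match_rate = 10 / 100_000       # confirmed Dutch SIS II matches
article_36_rate = 8 / 100_000       # of which "Article 36" matches

further_examined = passengers_per_year * sis_match_rate   # 50,000
article_36_cases = passengers_per_year * article_36_rate  # 40,000

print(f"{further_examined:,.0f} passengers 'further examined' per year,")
print(f"~{article_36_cases:,.0f} of them on the basis of 'Article 36 alerts'")
```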

But of course, there are also (confirmed) “hits” on other bases (including on the basis of “pre-determined criteria” and matches resulting from requests for information) – and other countries may also match against more than just Article 26 and Article 36 alerts on SIS II.

(ab)     Spontaneous provision of information to other PIUs on the basis of matches against lists and databases (including SIS II)

It would appear that, until now, in practice, information – including information on matches against SIS II alerts – is only rarely spontaneously shared between PIUs.

However, the clear aim of the Commission is to significantly increase the number of spontaneous transmissions of PNR data and of information on (confirmed) “hits” against SIS II (or against pre-determined criteria: see below) between PIUs, and via PIUs to competent authorities in other EU Member States (again including intelligence agencies in Member States where those are designated as such).

(ac)     Spontaneous provision of information to domestic competent authorities and to other PIUs on the basis of matches against pre-determined criteria

It would appear that matching of PNR data against pre-determined criteria – and consequently also the spontaneous informing of competent authorities of (confirmed) “hits” against such criteria – is still extremely rare in the EU Member States. However, the aim is for the use of such criteria to be greatly expanded.

(ad)     Spontaneous provision of “results of processing” of PNR data other than information on matches against lists or databases (such as SIS II) or pre-determined criteria

The spontaneous sharing of new or improved criteria is more likely to occur within the data mining cadre that is being formed (see above, at 4.8(fc)) than through exchanges between PIUs. But that of course does not mean that it will not occur – on the contrary, the aim is clearly to extend the use of pre-determined criteria, and for the EU Member States to cooperate much more closely in the development and sharing of those criteria, specifically through a much-enhanced role for Europol.

(b) Provision of PNR data and analysis data to competent authorities, other PIUs or Europol on request

(ba)     Provision of information to domestic competent authorities at the request of such authorities

In relation to the provision of information by the PIUs to their domestic competent authorities at the latter’s request, the relevant national rules apply. The Commission staff working document provides no information whatsoever on the extent to which this option is used, beyond saying that the numbers are increasing. In the Netherlands, some procedural safeguards are established to seek to ensure that requests are only made in appropriate cases, and in particular only in relation to PNR-relevant offences. Whether other Member States impose procedural safeguards, such as prior authorisation of requests by senior officials, I do not know. The PNR Directive does not require them (it leaves this to the laws of the Member States), and the Commission staff working report does not mention them.

(bb)     Provision of information to competent authorities of other EU Member States at the request of such authorities

The Commission claims that provision of PNR data at the request of competent authorities of other EU Member States is one part of the PNR system that operates well. However, the Commission staff working report suggests that there are problems, in particular in relation to compliance with the purpose-limitation principle underpinning the PNR Directive: see below, at (d).

Moreover, if the Dutch data are anything to go by, it would appear that the vast majority of requests for PNR data come from the national authorities of the PIU’s own country: in the Netherlands, in 2019-20, there were 3,130 requests from national authorities, against just 375 requests from other PIUs and authorities in other EU Member States. This rather qualifies the Commission claim that “the exchange of data between the Member States based on requests functions in an effective manner” and that “[t]he number of requests has grown consistently”. Both statements could be true, but the actual total numbers of requests from other Member States may still be extremely low (for now), at least in comparison with the number of requests the PIUs receive from their own national authorities.

(bc)     Provision of information to Europol at the latter’s request

The Commission staff working document does not provide any information on the number of requests made by Europol, or on the responses to such requests from the PIUs. The report on the evaluation of the Dutch PNR law notes that within Europol there appear to be no procedural conditions or safeguards relating to the making of requests (such as the safeguard that requests from Dutch authorities must be checked by a Dutch prosecutor (OvJ)).

If the Dutch data are anything to go by, it would appear that there are in fact very few requests for information from Europol: in that country, the PIU received only 32 such requests between June 2019 and the end of 2020, i.e., fewer than two a month. But if Europol is to be given a much more central role in the processing of PNR data, especially in the matching of those data against more sophisticated pre-determined criteria (with Europol playing the central role in the development of those more sophisticated criteria, as planned), the cooperation between the Member States’ PIUs and Europol, and the sharing of PNR data and data on “hits”, is certain to expand greatly.

(c) Transfer of PNR data to third countries on a case-by-case basis

The transfer of PNR data by the Member States to countries outside the EU is only allowed on a case-by-case basis and only when necessary for fighting terrorism and serious crime, and PNR data may be shared only with public authorities that are competent for combating PNR-relevant offences. Moreover, the DPO of the relevant PIU must be informed of all such transfers.

However, the Commission reports that four Member States have failed to fully transpose other conditions provided for by the Directive relating to the purposes for which the data can be transferred or the authorities competent to receive it, and two do not require the informing of the DPO.

It is seriously worrying that several Member States do not adhere to the conditions and safeguards relating to transfers of PNR data (and of “the results of processing” of PNR data – which can include the fact that there was a “hit” against lists or criteria) to third countries that may not have adequate data protection rules (or indeed other relevant rule-of-law-compliant rules) in place. Some of the (unnamed) Member States that do not comply with the PNR Directive in this regard are likely to pass on such data, in breach of the Directive (in particular, without ensuring that the data are only used in the fight against terrorism and serious crime), to close security and political allies such as the ones that make up the “Five Eyes” intelligence group: the USA, the UK, Australia, Canada and New Zealand.

This concern is especially aggravated in relation to the USA, which the Court of Justice has now held several times to not provide adequate protection to personal data transferred to it from the EU, specifically because of its excessive mass surveillance (and there are similar concerns in relation to the UK, in spite of the Commission having issued an adequacy decision in respect of that country).

Moreover, neither the Commission staff working document nor the Dutch report provides any information on how it is – or indeed can be – guaranteed that data provided in response to a request from a third country are really only used by that third country in relation to PNR-relevant offences, or how this is – or indeed can be – monitored.

For instance, if data are provided to the US Federal Bureau of Investigation (FBI) in relation to an investigation into suspected terrorist activity, those data will also become available to the US National Security Agency (NSA), which may use them in relation to much broader “foreign intelligence purposes”. That issue of course arises in relation to provision of information from any EU Member State to any third country that has excessive surveillance laws.

Furthermore, if I am right to believe that the Dutch intelligence agencies have secret, unrecorded direct access to the PNR database (see above, at 4.9), they may also be sharing data from that database more directly with intelligence partners in other countries, including third countries, bypassing the whole PNR Directive system. Neither the Commission staff working document nor the report on the evaluation of the Dutch PNR law addresses this issue. And that issue, too, may well arise in relation to other EU Member States.

(d) Subsequent use of the data and purpose-limitation

In principle, any information provided by the PIUs to any other entities, at home or abroad, or to Europol, is to be used by any recipient only for the prevention, detection, investigation and prosecution of terrorist offences and serious crime, more specifically for the prevention, detection, investigation and prosecution of PNR-relevant offences.

But it has become clear that this is far from assured in practice:

– because of the dilemma faced by PIUs in some EU Member States caused by the duty of any agency to pursue any offence that comes to their attention, the PIUs in some Member States pass on information also on (confirmed) “hits” relating to not-PNR-relevant offences (both spontaneously and in response to requests), and those data are then used in relation to the prevention, detection, investigation and prosecution of those not-PNR-relevant offences;

– in the Netherlands (and probably other Member States), once information is provided to a domestic competent authority, those data enter the databases of that authority (e.g., the general police databases) and will be subject to the legal regime that applies to the relevant database – which means that there is no guarantee that their subsequent use is in practice limited to PNR-relevant offences;

– when PNR data are provided by a PIU of one Member State to a PIU of another Member State (or to several or all of the other PIUs), they are provided subject to the purpose-limitation principle of the PNR Directive – but if those data are then provided by the recipient PIU(s) to competent authorities in their own countries, the same problems arise as noted in the previous indents;

– Member States take rather different views of what constitute PNR-relevant offences, and some make “broad and unspecified requests to many (or even all) Passenger Information Units” – suggesting that in this regard, too, the purpose-limitation principle is not always fully adhered to;

– within Europol there appear to be no procedural conditions or safeguards relating to the making of requests for PNR data from PIUs (such as the safeguard that requests from Dutch authorities must be checked by a Dutch prosecutor), and the Commission staff report does not indicate whether the PIUs check that Europol requests are strictly limited to PNR-relevant offences (or, if they do, how strict and effective those checks are);

– “four Member States have failed to fully transpose … [the] conditions provided for by the Directive relating to the purposes for which [PNR data] can be transferred [to third countries] or [relating to] the authorities competent to receive [such data]”;

– neither the Commission staff working document nor the Dutch report provides any information on how it is – or indeed can be – guaranteed that data provided in response to a request from a third country are really only used by that third country in relation to PNR-relevant offences, or how this is – or indeed can be – monitored;

and

– if I am right to believe that the Dutch intelligence agencies have secret, unrecorded direct access to the PNR database, they may also be sharing data from that database more directly with intelligence partners in other countries, including third countries, bypassing the whole PNR Directive system. Neither the Commission staff working document nor the report on the evaluation of the Dutch PNR law addresses this issue. And that issue, too, may well arise also in relation to other EU Member States.

In sum: There are major deficiencies in the system as concerns compliance, by the EU Member States, by Europol, and by third countries that may receive PNR data on a case-by-case basis, with the fundamental purpose-limitation principle underpinning the PNR Directive, i.e., with the rule that any PNR data (or data resulting from the processing of PNR data) may only be used – not just by the PIUs, but also by any other entities that may receive those data – for the purposes of the prevention, detection, investigation and prosecution of PNR-relevant offences. In simple terms: in this respect, the PNR system leaks like a sieve.

4.11 The consequences of a “match”

It is quite clear from the available information that confirmed “hits” and the associated PNR data on, at the very least, tens of thousands and most probably several hundred thousand innocent people are passed on to law enforcement (and in many cases, intelligence agencies) of EU Member States and to Europol – and in some cases to law enforcement and intelligence agencies of third countries – for “further examination”. Many of those data – many of those individuals – will end up in miscellaneous national databases as data on “persons of interest”, and/or in the Schengen Information System (SIS II) as “Article 36 alerts”. They may even end up in similar databases or lists of third countries.

In terms of European human rights and data protection law, even the supposedly not-very-intrusive measures such as “only” being made the object of “discreet checks” constitute serious interferences with the fundamental rights of the individuals concerned – something that the European Commission and several Member States studiously avoided acknowledging at the Court hearing. More intrusive measures such as being detained and questioned, or barred from flying, of course constitute even more serious interferences. Both kinds require significant justification in terms of suitability, effectiveness and proportionality – with the onus of proof lying squarely on those who want to impose or justify those interferences, i.e., in casu, the European Commission and the Member States.

Moreover, in practice “watch lists” often become “black lists”. History shows that people – innocent people – will suffer if there are lists of “suspicious”, “perhaps not reliable”, “not one of us” people lying around, and not just in dictatorships.

That is yet another reason why those who argue in favour of such lists – and that includes “Article 36 alerts” and other lists of “persons of interest” “identified” on the basis of flimsy or complex criteria or profiles – bear a heavy onus to prove that those lists are absolutely necessary in a democratic society, and that the strongest possible measures are in place to prevent such further, slippery-slope uses of the lists.

5. The suitability, effectiveness and proportionality of the processing

5.1 The lack of data and of proof of effectiveness of the PNR Directive

Neither the European Commission’s review nor the Dutch evaluation has come up with serious, measurable data showing that the PNR Directive and the PNR law are effective in the fight against terrorism or serious crime.

The Dutch researchers at least tried to find hard data, but found that in many crucial respects no records were kept that could provide such data. At most, some suggestions for better recording were made, and some ideas are under consideration, to obtain better data (although the researchers also noted that some law enforcement practitioners thought it would be too much effort).

To date, neither the Commission nor the Member States (including the Netherlands) have seriously tried to design suitable, scientifically valid methods and methodologies of data capture (geeignete Formen der Datenerfassung) in this context. Given that the onus is clearly on them to demonstrate – properly, scientifically demonstrate, in a peer-reviewable manner – that the serious interferences with privacy and data protection they insist on perpetrating are effective, this is a manifest dereliction of duty.

The excuse for not doing this essential work – that it would be too costly or demanding of law enforcement time and staff – is utterly unconvincing, given the many millions of euros that are being devoted to developing the “high risk” intrusive technologies themselves.

5.2 An attempt at an assessment

(a) The appropriate tests to be applied

(aa)     The general tests

In my opinion, the appropriate tests to be applied to mass surveillance measures such as are carried out under the PNR Directive (and were carried out under the Data Retention Directive, and are still carried out under the national data retention laws of the EU Member States that continue to apply in spite of the CJEU case-law) are:

Have the entities that apply the mass surveillance measure – i.e., in the case of the PNR Directive (and the DRD), the European Commission and the EU Member States – produced reliable, verifiable evidence:

(i) that those measures have actually, demonstrably contributed significantly to the stated purpose of the measures, i.e., in relation to the PNR Directive, to the fight against PNR-relevant crimes (and in relation to the DRD, to the fight against “serious crime as defined by national law”); and

(ii) that those measures have demonstrably not seriously negatively affected the interests and fundamental rights of the persons to whom they were applied?

If the mass surveillance measures do not demonstrably pass both these tests, they are fundamentally incompatible with European human rights and fundamental rights law.

This means the measures must be justified, by the entities that apply them, on the basis of hard, verifiable, peer-reviewable data.

(ab)     When a (confirmed) “hit” can be said to constitute a “positive” result (and when not)

In the context of collecting and assessing data, it is important to clarify when a (confirmed) “hit” can be said to constitute a “positive” result (and when not).

In my opinion, confirmed “hits” confirming the identity of “known” “persons of interest”/subjects of “Article 36 alerts” and the “identification” (labelling) of previously “unknown” persons by the PIUs as “persons who may be involved in terrorism or serious crime” can only be regarded as “positive” results under the PNR Directive if they result in those persons subsequently being formally declared to be formal suspects in relation to terrorist or other serious, PNR-relevant criminal offences.

(b) The failure of the European Commission (and the Dutch government) to meet the appropriate test

The conclusion reached by the European Commission and Dutch Minister of Justice: that overall, the PNR Directive, respectively the Dutch PNR law, had been “effective” because the EU Member States said so (Commission) or because PNR data were quite widely used and the competent authorities said so (Dutch Minister) is fundamentally flawed, given that this conclusion was reached in the absence of any real supporting data.

It is equivalent to a snake oil salesman claiming that the effectiveness of his snake oil is proven by the fact that his franchise holders agree with him that the product is effective, or by the fact that many gullible people bought the stuff.

Or to use the example of Covid vaccines, invoked by the judge-rapporteur: it is equivalent to a claim that a vaccine is effective because interested parties say it is, or because many people had been vaccinated with the vaccine – without any data on how many people were protected from infection or, perhaps worse, how many people suffered serious side-effects.

At the very least, the competent authorities in the EU Member States should have been required to collect, in a systematic and comparable way, reliable information on the outcomes of the passing on of (confirmed) “hits”. Given that they have not done so – and that the Commission and the Member States have not even tried to establish reliable systems for this – there is no insight into how many of the (confirmed) “hits” actually, concretely contributed to the fight against PNR-relevant offences.
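To make the point concrete: the record-keeping that would allow this test to be applied need not be elaborate. The sketch below is purely illustrative (the record fields, the outcome categories and the helper function are hypothetical, not taken from the Directive or from any Member State system); it merely shows how, if each confirmed “hit” passed on by a PIU were logged together with its eventual outcome, the one figure that matters under the test set out in sub-section 5.2(a) could be computed: the proportion of confirmed “hits” that actually led to a person being made a formal suspect in relation to a PNR-relevant offence.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical outcome categories for a confirmed "hit" passed on by a PIU.
class Outcome(Enum):
    NO_FURTHER_ACTION = "no further action"
    DISCREET_CHECK = "discreet check only"
    FORMAL_SUSPECT_PNR_OFFENCE = "formal suspect, PNR-relevant offence"
    FORMAL_SUSPECT_OTHER_OFFENCE = "formal suspect, non-PNR-relevant offence"

@dataclass
class HitRecord:
    hit_id: str
    match_type: str   # e.g. "SIS II alert" or "pre-determined criteria" (illustrative labels)
    outcome: Outcome

def positive_result_rate(records: list[HitRecord]) -> float:
    """Share of confirmed hits meeting the opinion's test for a 'positive'
    result: the person was subsequently made a formal suspect in relation
    to a PNR-relevant offence."""
    if not records:
        return 0.0
    positives = sum(r.outcome is Outcome.FORMAL_SUSPECT_PNR_OFFENCE for r in records)
    return positives / len(records)

# Illustrative use with made-up records:
records = [
    HitRecord("h1", "SIS II alert", Outcome.DISCREET_CHECK),
    HitRecord("h2", "pre-determined criteria", Outcome.NO_FURTHER_ACTION),
    HitRecord("h3", "SIS II alert", Outcome.FORMAL_SUSPECT_PNR_OFFENCE),
]
print(f"Positive-result rate: {positive_result_rate(records):.0%}")  # prints 33%
```

Nothing so simple exists; that absence, rather than any technical difficulty, is what makes the claims of effectiveness unverifiable.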

(c) An attempt to apply the tests to the different types of matches

As set out at (ab) above, confirmed “hits” confirming the identity of “known” “persons of interest”/subjects of “Article 36 alerts” and the “identification” (labelling) of previously “unknown” persons by the PIUs as “persons who may be involved in terrorism or serious crime” can only be regarded as “positive” results under the PNR Directive if they result in those persons subsequently being formally declared to be formal suspects in relation to terrorist or other serious, PNR-relevant criminal offences.

At the very least, the competent authorities in the EU Member States should have been required to collect, in a systematic and comparable way, reliable information on such outcomes. Given that they have not done so – and that the Commission and the Member States have not even tried to establish reliable systems for this – there is no insight into how many of the (confirmed) “hits” actually, concretely contributed to the fight against PNR-relevant offences.

However, the following can still usefully be observed as regards the lawfulness, suitability, effectiveness and proportionality of the different kinds of matches:

– Full PNR data are disproportionate to the purpose of basic identity checks;

– The necessity of the PNR checks against Interpol’s Stolen and Lost Travel Document database is questionable;

– The matches against unspecified national databases and “repositories” are not based on foreseeable legal rules and are therefore not based on “law”;

– The necessity and proportionality of matches against various simple, supposedly “suspicious” elements (tickets bought from a “suspicious” travel agent; “suspicious” travel route; etc.) is highly questionable; and

– The matches against more complex “pre-determined criteria” and profiles are inherently and irredeemably flawed and lead to tens and possibly hundreds of thousands of innocent travellers wrongly being labelled as persons who “may be” involved in terrorism or serious crime, and are therefore unsuited (D: ungeeignet) to the purpose of fighting terrorism and serious crime (see the illustrative calculation below).
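The mathematical inevitability of this last problem, the base-rate fallacy, can be illustrated with a short, self-contained calculation. All figures in the sketch below are assumptions chosen purely for illustration (the passenger volume, the prevalence of genuine targets and the accuracy rates are not taken from the opinion or from the Commission’s documents):

```python
# A minimal sketch of the base-rate problem with PNR-style profiling.
# All numbers are illustrative assumptions, not figures from the opinion.
travellers = 500_000_000        # assumed passenger movements per year
base_rate = 0.0001              # assumed: 1 in 10,000 travellers is a genuine target
sensitivity = 0.95              # assumed: the profile flags 95% of genuine targets
false_positive_rate = 0.001     # assumed: the profile wrongly flags 0.1% of innocents

true_targets = travellers * base_rate            # 50,000 genuine targets
innocents = travellers - true_targets            # 499,950,000 innocent travellers

true_hits = true_targets * sensitivity           # 47,500 correct flags
false_hits = innocents * false_positive_rate     # 499,950 innocent people flagged

precision = true_hits / (true_hits + false_hits)

print(f"Innocent travellers wrongly flagged per year: {false_hits:,.0f}")
print(f"Share of flagged travellers who are genuine targets: {precision:.1%}")
```

Even under these deliberately generous accuracy assumptions, roughly half a million innocent travellers would be flagged each year, and more than nine out of ten “hits” would concern entirely innocent people; no realistic improvement in the profiles can cure this defect, because the underlying base rate is so low.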

5.3 Overall conclusions

The PNR Directive and the generalised, indiscriminate collection of personal data on an enormous population – all persons flying to or from, and the vast majority of people flying within, the EU – that it facilitates (and intends to facilitate) are part of a wider attempt by the European Union and the EU Member States to create means of mass surveillance that, in my opinion, fly in the face of the case-law of the Court of Justice of the EU.

In trying to justify the directive and the processing of personal data on hundreds of millions of individuals, the vast majority of whom are indisputably entirely innocent, the European Commission and the Member States not only do not produce relevant, measurable and peer-reviewable data, they do not even attempt to provide for the means to obtain such data. Rather, they apply “measures” of effectiveness that are not even deserving of that name: the wide use of the data and the “belief” of those using them that they are useful.

If proper tests are applied (as set out in sub-section 5.2(a), above), the disingenuousness of the “justifications” becomes clear: the claims of effectiveness of the PNR Directive (and the Dutch PNR Law) are based on sand; in fact, as the Dutch researchers rightly noted:

“There are no quantitative data on the way in which [and the extent to which] PNR data have contributed to the prevention, detection, investigation and prosecution of terrorist offences and serious crime.”

The Commission and the Member States also ignore the “high risks” that the tools used to “identify” individuals who “may be” terrorists or serious criminals entail. This applies in particular to the use of algorithm/AI-based data mining, and of profiles based on such data mining, which they want to increase massively.

If the Court of Justice were to uphold the PNR Directive, it would not only endorse the mass surveillance under the directive as currently practised – it would also give the green light to the massive extension of the application of (so far less used) sophisticated data mining and profiling technologies to the PNR data, without regard for their mathematically inevitable serious negative consequences for tens and possibly hundreds of thousands of individuals.

What is more, that would also pave the way to yet further use of such (dangerous) data mining and profiling technologies in relation to other large population sets (such as all users of electronic communications, or of bank cards). Given that the Commission has stubbornly refused to enforce the Digital Rights Ireland judgment against Member States that continue to mandate retention of communications data, and is in fact colluding with those Member States in actually seeking to re-introduce mandatory communications data retention EU-wide in the e-Privacy Regulation that is currently in the legislative process, this is a clear and imminent danger.

The hope must be that the Court will stand up for the rights of individuals, enforce the Charter of Fundamental Rights, and declare the PNR Directive (like the Data Retention Directive) to be fundamentally in breach of the Charter.

– o – O – o –

Douwe Korff (Prof.)

Cambridge (UK)

November 2021

1.1 The categories of personal data processed

An annex to the PNR Directive lists the specific categories of data that airlines must send to the database of the PIU of the Member State on the territory of which the flight will land or from the territory of which the flight will depart. This obligation is stipulated with regard to extra-EU flights but can be extended by each Member State to apply also to intra-EU flights – and all but one Member State have done so. The list of PNR data is much longer than the Advance Passenger Information (API) data that airlines must already send to the Member States under the API Directive, and includes information on travel agents used, travel routes, email addresses, payment (card) details, luggage, and fellow travellers. On the other hand, some basic details (such as date of birth) are often not included in the API data.

NB: The opinion focusses on the system as it is designed and intended to operate, and on what it allows (even if not everything that may be allowed is [yet] implemented in all Member States), and less on the somewhat slow implementation of the directive in the Member States and on the technical aspects that the Commission report and the staff working document often focussed on. It notes in particular a number of elements or aspects of the directive and the system it establishes that are problematic, either conceptually or in the way they are supposed to operate or to be evaluated.

Worth reading: the final report by the EU High Level Expert Group on Information Systems and Interoperability (HLEG)

NB: The full version (PDF) of the Report is accessible HERE

On May 8th the EU High Level Expert Group on Information Systems and Interoperability (HLEG), which was set up in June 2016 following the Commission Communication on “Stronger and Smarter Information Systems for Borders and Security”, published its long-awaited, 56-page Report on Information Systems and Interoperability.

Members of the HLEG were the EU Member States (plus Norway, Switzerland and Liechtenstein) and the EU agencies (the Fundamental Rights Agency, FRONTEX, the European Asylum Support Office, Europol and eu-LISA, the agency for large-scale IT systems), as well as representatives of the Commission, the European Data Protection Supervisor (EDPS) and the Anti-Terrorism Coordinator (a high official of the Council General Secretariat designated by the European Council).

Three statements, respectively of the EU Fundamental Rights Agency, the European Data Protection Supervisor and the EU Counter-Terrorism Coordinator (CTC), are attached. The first two can be considered a sort of partially dissenting opinion, while the CTC statement is quite obviously in full support of the recommendations set out by the report, as it embodies for the first time at EU level the “availability principle” which was already set out in 2004 by the European Council. According to that principle, if a Member State (or the EU) has security-related information which can be useful to another Member State, it has to make it available to the authorities of that other Member State. It looks like a common-sense principle which goes hand in hand with the principle of sincere cooperation between EU Member States and between them and the EU institutions.

The little detail is that when information is collected for security purposes, national and European legislation sets very strict criteria to avoid possible abuses by EU and national law enforcement authorities. This is the core of data protection legislation and of Articles 6, 7 and 8 of the EU Charter of Fundamental Rights, which prevent the EU and its Member States from becoming a sort of Big Brother “surveillance state”. Moreover, at least until now, these principles have guided the post-Lisbon jurisprudence of the European Court of Justice in this domain, and it is quite appalling that no reference is made in this report to the Luxembourg Court rulings dealing notably with “profiling” and “data retention” (“Digital Rights”, “Schrems”, “Tele2-Watson”…).

Needless to say, to implement all the HLEG recommendations several legislative measures will be needed, as well as the definition of a legally founded EU security strategy which should be adopted under the responsibility of the EU co-legislators. Without a strong, legally founded EU security strategy, not only will the European Parliament continue to be out of the game, but the control of the Court of Justice on the necessity and proportionality of the existing and planned EU legislative measures will also be weakened. Overall, this HLEG report is mainly focused on security-related objectives, and the references to fundamental rights and data protection are given more as “excusatio non petita” than as a clearly explained reasoning (see the Fundamental Rights Agency statement). As to the content of the perceived “threats” to be countered with this new approach, it remains to be seen whether some of them (such as the mixing of irregular migration with terrorism) are not imaginary and whether, on the contrary, real ones have not been taken into account.

At least this report is now public. It would be naive to consider it purely “technical”: it is highly political and will justify several EU legislative measures. It would be pointless for the European Parliament to wake up only when the formal legislative proposals are submitted. If it has an alternative vision, it has to show it NOW, and not wait until the Report is, quite likely, “endorsed” by the Council and the European Council.

Emilio De Capitani

TEXT OF THE REPORT (NB: figures have not been imported, sorry.)


Legal Frameworks for Hacking by Law Enforcement: Identification, Evaluation and Comparison of Practices

EXECUTIVE SUMMARY OF A STUDY FOR THE EP LIBE COMMITTEE.

FULL TEXT ACCESSIBLE HERE

by Mirja GUTHEIL, Quentin LIGER, Aurélie HEETMAN, James EAGER, Max CRAWFORD (Optimity Advisors)

Hacking by law enforcement is a relatively new phenomenon within the framework of the longstanding public policy problem of balancing security and privacy. On the one hand, law enforcement agencies assert that the use of hacking techniques brings security, stating that it represents a part of the solution to the law enforcement challenge of encryption and ‘Going Dark’ without systematically weakening encryption through the introduction of ‘backdoors’ or similar techniques. On the other hand, civil society actors argue that hacking is extremely invasive and significantly restricts the fundamental right to privacy. Furthermore, the use of hacking practices pits security against cybersecurity, as the exploitation of cybersecurity vulnerabilities to provide law enforcement with access to certain data can have significant implications for the security of the internet.

Against this backdrop, the present study provides the LIBE Committee with relevant, actionable insight into the legal frameworks and practices for hacking by law enforcement. Firstly, the study examines the international and EU-level debates on the topic of hacking by law enforcement (Chapter 2), before analysing the possible legal bases for EU intervention in the field (Chapter 3). These chapters set the scene for the primary focus of the study: the comparative analysis of legal frameworks and practices for hacking by law enforcement across six selected Member States (France, Germany, Italy, the Netherlands, Poland and the UK), with further illustrative examples from three non-EU countries (Australia, Israel and the US) (Chapter 4). Based on these analyses, the study concludes (Chapter 5) and presents concrete recommendations and policy proposals for EU action in the field (Chapter 6).

The international and EU-level debates on the use of hacking techniques by law enforcement primarily evolve from the law enforcement challenge posed by encryption – i.e. the ‘Going Dark’ issue.

Going Dark is a term used to describe [the] decreasing ability [of law enforcement agencies] to lawfully access and examine evidence at rest on devices and evidence in motion across communications networks.1

According to the International Association of Chiefs of Police (IACP), law enforcement agencies are not able to investigate illegal activity and prosecute criminals without this evidence. Encryption technologies are cited as one of the major barriers to this access. Although recent political statements from several countries (including France, Germany, the UK and the US) seemingly call for ‘backdoors’ to encryption technologies, support for strong encryption at international and EU fora remains strong. As such, law enforcement agencies across the world have started to use hacking techniques to bypass encryption. Although the term ‘hacking’ is not used by law enforcement agencies, these practices essentially mirror the techniques used by hackers (i.e. exploiting any possible vulnerabilities – including technical, system and/or human vulnerabilities – within an information technology (IT) system).

Law enforcement representatives, such as the IACP and Europol, report that access to encrypted and other data through such hacking techniques brings significant investigative benefits. However, it is not the only possible law enforcement solution to the ‘Going Dark’ issue. Outside the scope of this study, the other options include: requiring users to provide their password or decrypt their data; requiring technology vendors and service providers to bypass the security of their own products and services; and the systematic weakening of encryption through the mandated introduction of ‘backdoors’ and/or weakened standards for encryption.

With the benefits of hacking established, a 2016 Joint Statement published by the European Union Agency for Network and Information Security (ENISA) and Europol2 noted that the use of hacking techniques also brings several key risks.

The primary risk relates to the fundamental right to privacy and freedom of expression and information, as enshrined in international, EU and national-level law. Hacking techniques are extremely invasive, particularly when compared with traditionally intrusive investigative tools (e.g. wiretapping, house searches etc.). Through hacking, law enforcement can gain access to all data stored on or in transit from a device; this represents a significant amount of data (e.g. a recent investigation by Dutch law enforcement collected seven terabytes of data, which translates into around 86 million pages of Microsoft Word documents3), as well as extremely sensitive data (e.g. a person’s location and movements, all communications, all stored data etc.). Consequently, the use of hacking techniques will inherently restrict the fundamental right to privacy.

Therefore, current debates at international and EU fora focus on assessing and providing recommendations on the current legal balances and safeguards for the restriction of the right to privacy by hacking techniques. However, these debates have assumed that hacking practices are necessary for law enforcement and simply require governing laws; they have not discussed whether the use of hacking techniques by law enforcement is necessary and proportional. The law enforcement assertions regarding the necessity of these invasive tools have not been challenged.

The second key risk relates to the security of the internet. Law enforcement use of hacking techniques has the potential to significantly weaken the security of the internet by “[increasing] the attack surface for malicious abuse”4. Given that critical infrastructure and defence organisations, as well as law enforcement agencies themselves, use the technologies targeted and potentially weakened by law enforcement hacking, the potential ramifications reach far beyond the intended target.

As such, debates at international and EU fora focus on the appropriate balances between security and privacy, as well as security and cybersecurity. Regarding security v. privacy, the debates to date have assessed and provided recommendations on the legislative safeguards required to ensure that hacking techniques are only permitted in situations where a restriction of the fundamental right to privacy is valid in line with EU legislation (i.e. legal, necessary and proportional). Regarding security v. cybersecurity, the debates have been limited and primarily centre on the use and/or reporting of zero-day vulnerabilities discovered by law enforcement agencies.

Further risks not discussed in the Joint Statement but covered by this study include: the risks to territorial sovereignty – as law enforcement agencies may not know the physical location of the target data; and the risks related to the supply and use of commercially-developed hacking tools by governments with poor consideration for human rights.

Alongside the analysis of international and EU debates, the study presents hypotheses on the legal bases for EU intervention in the field. Although possibilities for EU legal intervention in several areas are discussed, including mutual admissibility of evidence (Art. 82(2) TFEU), common investigative techniques (Art. 87(2)(c) TFEU), operational cooperation (Art. 87(3) TFEU) and data protection (Art. 16 TFEU, Art. 7 & 8 EU Charter), the onus regarding the development of legislation in the field is with the Member States. As such, the management of the risks associated with law enforcement activities is governed at the Member State level.

As suggested by the focus of the international and EU discussions, concrete measures need to be stipulated at national level to manage these risks. This study presents a comparative analysis of the legal frameworks for hacking by law enforcement across six Member States, as well as certain practical aspects of hacking by law enforcement, thereby providing an overview of the primary Member State mechanisms for the management of these risks. Further illustrative examples are provided from research conducted in three non-EU countries.

More specifically, the study examines the legal and practical balances and safeguards implemented at national level to ensure: i) the legality, necessity and proportionality of restrictions to the fundamental right to privacy; and ii) the security of the internet.

Regarding restrictions to the right to privacy, the study first examines the existence of specific legal frameworks for hacking by law enforcement, before exploring the ex-ante and ex-post conditions and mechanisms stipulated to govern restrictions of the right to privacy and ensure they are legal, necessary and proportional.

It is found that hacking practices are seemingly deemed necessary across all Member States examined, as four Member States (France, Germany, Poland and the UK) have adopted specific legislative provisions and the remaining two are in the legislative process. For all Member States except Germany, the adoption of specific legislative provisions occurred in 2016 (France, Poland and the UK) or will occur later (Italy, the Netherlands). This confirms the new nature of these investigative techniques.

Additionally, law enforcement agencies in all Member States examined have used, or still use, hacking techniques in the absence of specific legislative provisions, under so-called ‘grey area’ legal provisions. Given the invasiveness of hacking techniques, these ‘grey area’ provisions are considered insufficient to adequately protect the right to privacy.

Where specific legal provisions have been adopted, all stakeholders agree that a restriction of the right to privacy requires the implementation of certain safeguards. The current or proposed legal frameworks of all six Member States comprise a suite of ex-ante conditions and ex-post mechanisms that aim to ensure the use of hacking techniques is proportionate and necessary. As recommended by various UN bodies, the provisions of primary importance include judicial authorisation of hacking practices, safeguards related to the nature, scope and duration of possible measures (e.g. limitations to crimes of a certain gravity, limits on the duration of the hack, etc.) and independent oversight.

Although many of these types of recommended conditions are common across the Member States examined – as demonstrated in the table below – their implementation parameters differ. For instance, both German and Polish law permit law enforcement hacking without prior judicial authorisation in exigent circumstances, provided judicial authorisation is obtained within a specified timeframe. However, the timeframe differs (three days in Germany compared with five days in Poland). Such differences are significant: the Polish timeframe was criticised by the Council of Europe’s Venice Commission for being too long.5

Furthermore, the Member States examined all accompany these common types of ex-ante and ex-post conditions with different, less common conditions. This is particularly true for ex-post oversight mechanisms. For instance, in Poland, the Minister for internal affairs provides macro-level information to the lower (Sejm) and upper (Senat) chambers of Parliament;6 and in the UK, oversight is provided by the Investigatory Powers Commissioner, who reviews all cases of hacking by law enforcement, and the Investigatory Powers Tribunal, which considers disputes or complaints surrounding law enforcement hacking.7

Key ex-ante considerations

– Judicial authorisation: The legal provisions of all six Member States require ex-ante judicial authorisation for law enforcement hacking; the information to be provided in these requests differs. Select Member States (e.g. Germany, Poland, the UK) also provide for hacking without prior judicial authorisation in exigent circumstances if judicial authorisation is subsequently provided; the timeframes for ex-post authorisation differ.

– Limitation by crime and duration: All six Member States restrict the use of hacking tools based on the gravity of crimes. In some Member States, the legislation presents a specific list of crimes for which hacking is permitted; in others, the limit is set at crimes that carry a maximum custodial sentence of greater than a certain number of years. The lists and the numbers of years required differ by Member State. Many Member States also restrict the duration for which hacking may be used. This restriction ranges from a maximum of one month (France, the Netherlands) to a maximum of six months (UK), although extensions are permitted under the same conditions in all Member States.

Key ex-post considerations

– Notification and effective remedy: Most Member States provide for the notification of targets of hacking practices and for remedy in cases of unlawful hacking.

– Reporting and oversight: Primarily, Member States report at a micro-level, through logging hacking activities and reporting them in case files. However, some Member States (e.g. Germany, Poland and the UK) have macro-level review and oversight mechanisms.

Furthermore, as regards the issue of territoriality (i.e. the difficulty law enforcement agencies face in establishing the location of the data to be collected using hacking techniques), only one Member State, the Netherlands, legally permits the hacking of devices if the location is unknown. If the device turns out to be in another jurisdiction, Dutch law enforcement must apply for Mutual Legal Assistance.

As such, when aggregated, these provisions strongly mirror Article 8 of the European Convention on Human Rights, as well as the UN recommendations and paragraph 95 of the ECtHR judgment in Weber and Saravia v. Germany. However, there are many, and varied, criticisms when the Member State conditions are examined in isolation. Some of the provisions criticised include: the limits based on the gravity of crimes (e.g. the Netherlands, France and Poland); the provisions for notification and effective remedy (e.g. Italy and the Netherlands); the process for screening and deleting non-relevant data (Germany); the definition of devices that can be targeted (e.g. the Netherlands); the duration permitted for hacking (e.g. Poland); and a lack of knowledge amongst the judiciary (e.g. France, Germany, Italy and the Netherlands). With this said, certain elements, taken in isolation, can be called good practices. Such examples are presented below.

Select good practice: Member State legislative frameworks

Germany: Although deemed unconstitutional in a 2016 ruling, the provisions for the screening and deletion of data related to the core area of private life are a positive step. If the provisions were amended, as stipulated in the ruling, to ensure screening by an independent body, they would provide strong protection for the targeted individual’s private data.

Italy: The 2017 draft Italian law includes a range of provisions related to the development and monitoring of the continued use of hacking tools; indeed, one academic stakeholder remarked that the drafting of the law must have been driven by technicians. These provisions bring significant benefits in terms of supervision and oversight of the use of hacking tools. Furthermore, the Italian draft law takes great care to separate the functionalities of the hacking tools, thus protecting against the overuse or abuse of a hacking tool’s extensive capabilities.

Netherlands: The Dutch Computer Crime III Bill stipulates the need to conduct a formal proportionality assessment for each hacking request, with the assistance of a dedicated Central Review Commission (Centrale Toetsings Commissie). Also, the law requires rules to be laid down on the authorisation and expertise of the investigation officers that can perform hacking.

With these findings in mind, the study concludes that the specific national-level legal provisions examined provide for the use of hacking techniques in a wide array of circumstances. The varied combinations of requirements, including those related to the gravity of crimes, the duration and purpose of operations and the oversight, result in a situation where the law does not provide for much stricter conditions than are required for less intrusive investigative activities such as interception.

Based on the study findings, relevant and actionable policy proposals and recommendations have been developed under the two key elements: i) the fundamental right to privacy; and ii) the security of the internet.

Recommendations and policy proposals: Fundamental right to privacy

The study finds that the use of ‘grey area’ legal provisions is not sufficient to protect the fundamental right to privacy. This is primarily because existing legal provisions do not provide for the more invasive nature of hacking techniques and lack the legislative precision and clarity required under the Charter and the ECHR.

Furthermore, many of these provisions have only recently been enacted. As such, there is a need for robust, evidence-based monitoring and evaluation of their practical application. It is therefore recommended that the application of these new legal provisions be evaluated regularly at national level, and that the results of these evaluations be assessed at EU level.

If specific legislative provisions are deemed necessary, the study recommends a range of good-practice ex-ante and ex-post provisions governing the use of hacking practices by law enforcement agencies. These are detailed in Chapter 6.

Policy proposal 1: The European Parliament should pass a resolution calling on Member States to conduct a Privacy Impact Assessment when new laws are proposed to permit and govern the use of hacking techniques by law enforcement agencies. This Privacy Impact Assessment should focus on the necessity and proportionality of the use of hacking tools and should require input from national data protection authorities.

Policy proposal 2: The European Parliament should reaffirm the need for Member States to adopt a clear and precise legal basis if law enforcement agencies are to use hacking techniques.

Policy proposal 3: The European Parliament should commission more research, or encourage the European Commission or other bodies to conduct more research, on the topic. In response to the Snowden revelations, the European Parliament called on the EU Agency for Fundamental Rights (FRA) to thoroughly research fundamental rights protection in the context of surveillance. A similar brief related to the legal frameworks governing the use of hacking techniques by law enforcement across all EU Member States would be an invaluable piece of research.

Policy proposal 4: The European Parliament should encourage Member States to undertake evaluation and monitoring activities on the practical application of the new legislative provisions that permit hacking by law enforcement agencies.

Policy proposal 5: The European Parliament should call on the EU Agency for Fundamental Rights (FRA) to develop a practitioner handbook related to the governing of hacking by law enforcement. This handbook should be intended for lawyers, judges, prosecutors, law enforcement officers and others working with national authorities, as well as non-governmental organisations and other bodies confronted with legal questions in the areas set out by the handbook. These areas should cover the invasive nature of hacking techniques and relevant safeguards as per international and EU law and case law, as well as appropriate mechanisms for supervision and oversight.

Policy proposal 6: The European Parliament should call on EU bodies, such as the FRA, CEPOL and Eurojust, to provide training for national-level members of the judiciary and data protection authorities, in conjunction with the abovementioned handbook, on the technical means for hacking in use across the Member States, their potential for invasiveness, and the principles of necessity and proportionality in relation to these technical means.

Recommendations and policy proposals: Security of the internet

The primary recommendation related to the security of the internet is that the position of the EU against the implementation of ‘backdoors’ and similar techniques, and in support of strong encryption standards, should be reaffirmed, given the prominent role encryption plays in our society and its importance to the EU’s Digital Agenda. To support this position, the EU should ensure continued engagement with global experts in computer science as well as with civil society privacy and digital rights groups.

The actual impacts of hacking by law enforcement on the security of the internet are as yet unknown. More work should be done at the Member State level to assess the potential impacts, so that these data can feed into overarching discussions on the necessity and proportionality of law enforcement hacking. Furthermore, beyond understanding the risks to the security of the internet, more work should be done to educate those involved in the authorisation and use of hacking techniques by law enforcement.

At present, the steps taken to safeguard the security of the internet against the potential risks of hacking are not widespread. As such, the specific legislative provisions governing the use of hacking techniques by law enforcement, if deemed necessary, should safeguard the security of the internet and the security of the device, including by requiring that the vulnerabilities used to gain access to a device be reported to the appropriate technology vendor or service provider, and by ensuring the full removal of the software or hardware from the targeted device.

Policy proposal 7: The European Parliament should pass a resolution calling on Member States to conduct an Impact Assessment to examine the impact of new or existing laws governing the use of hacking techniques by law enforcement on the security of the internet.

Policy proposal 8: The European Parliament, through enhanced cooperation with Europol and the European Union Agency for Network and Information Security (ENISA), should reaffirm its commitment to strong encryption in light of the discussions on the topic of hacking by law enforcement. In addition, the Parliament should reaffirm its opposition to the implementation of ‘backdoors’ and similar techniques in information technology infrastructures or services.

Policy proposal 9: Given the lack of discussion around the handling of zero-day vulnerabilities, the European Parliament should support the efforts made under the cybersecurity contractual Public-Private Partnership (PPP) to develop appropriate responses to handling zero-day vulnerabilities, taking into consideration the risks related to fundamental rights and the security of the internet.

Policy proposal 10: Extending policy proposal 5, above, the proposed FRA handbook should also cover the risks posed to the security of the internet by the use of hacking techniques.

Policy proposal 11: Extending policy proposal 6, training provided to the judiciary by EU bodies such as the FRA, CEPOL and Eurojust should also educate these individuals on the risks posed to the security of the internet by hacking techniques.

Policy proposal 12: Given the lack of discussion around the risks posed to the security of the internet by hacking practices, the European Parliament should encourage debates at the appropriate fora specifically aimed at understanding this risk and the approaches to managing it. Law enforcement representatives should be present in such discussions.

Parliamentary Tracker: the EP incoming resolution on the EU-USA (so-called) “Privacy Shield”…

NOTA BENE: Below is the text that will be submitted to a vote at the next EP plenary. As on previous occasions the text is well drafted and legally precise, and it confirms the high level of competence that the European Parliament (and its LIBE committee) has developed over the last 17 years, from the first inquiry on Echelon (2000) and the Safe Harbour (2000), to the EU-US agreement on PNR (since 2003, a thirteen-year-long saga…) and the SWIFT agreement (2006)…

What is puzzling are the criticisms raised against the so-called “adequacy finding” mechanism, which empowers the European Commission to decide whether a third country protects EU citizens’ personal data “adequately”. The weaknesses of the Commission vis-à-vis our strongest transatlantic ally were already very well known when the parliamentarians recently reformed the European legal framework on data protection in view of the new legal basis foreseen by the Treaties and by Articles 7 and 8 of the EU Charter. However, the EP did not try to strengthen the “adequacy” mechanism by transforming it at least into a “delegated” function (so that it would have been possible for the EP to block something which could have weakened our standards).

Now the US Congress is weakening the (already poor) US data protection rules, and the new US administration will probably go in the same direction. It seems to me too easy to complain now about something that one recently had the chance to fix…

Let us now hope that the Court of Justice, in answering the request for an opinion on the EU-Canada PNR agreement, will give the EU legislator some additional recommendations; but as an EU citizen I would have preferred stronger EU legislation instead of being ruled by European or national judges…

Emilio De Capitani

B8‑0235/2017 European Parliament resolution on the adequacy of the protection afforded by the EU-US Privacy Shield (2016/3018(RSP))

The European Parliament,

–        having regard to the Treaty on European Union (TEU), the Treaty on the Functioning of the European Union (TFEU) and Articles 6, 7, 8, 11, 16, 47 and 52 of the Charter of Fundamental Rights of the European Union,

–        having regard to Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data (Data Protection Directive)[1],

–        having regard to Council Framework Decision 2008/977/JHA of 27 November 2008 on the protection of personal data processed in the framework of police and judicial cooperation in criminal matters[2],

–        having regard to Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)[3], and to Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA[4],

–        having regard to the judgment of the Court of Justice of the European Union of 6 October 2015 in Case C-362/14 Maximillian Schrems v Data Protection Commissioner[5],

–        having regard to the Commission communication to the European Parliament and the Council of 6 November 2015 on the transfer of personal data from the EU to the United States of America under Directive 95/46/EC following the judgment by the Court of Justice in Case C-362/14 (Schrems) (COM(2015)0566),

–        having regard to the Commission communication to the European Parliament and the Council of 10 January 2017 on Exchanging and Protecting Personal Data in a Globalised World (COM(2017)0007),

–        having regard to the judgment of the Court of Justice of the European Union of 21 December 2016 in Cases C-203/15 Tele2 Sverige AB v Post- och telestyrelsen and C-698/15 Secretary of State for the Home Department v Tom Watson and Others[6],

–        having regard to Commission Implementing Decision (EU) 2016/1250 of 12 July 2016 pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequacy of the protection provided by the EU-US Privacy Shield[7],

–        having regard to Opinion 4/2016 of the European Data Protection Supervisor (EDPS) on the EU-US Privacy Shield draft adequacy decision[8],

–        having regard to the Opinion of the Article 29 Data Protection Working Party of 13 April 2016 on the EU-US Privacy Shield draft adequacy decision[9] and its Statement of 26 July 2016[10],

–        having regard to its resolution of 26 May 2016 on transatlantic data flows[11],

–        having regard to Rule 123(2) of its Rules of Procedure,

A. whereas the Court of Justice of the European Union (CJEU) in its judgment of 6 October 2015 in Case C-362/14 Maximillian Schrems v Data Protection Commissioner invalidated the Safe Harbour decision and clarified that an adequate level of protection in a third country must be understood to be ‘essentially equivalent’ to that guaranteed within the European Union by virtue of Directive 95/46/EC read in the light of the Charter of Fundamental Rights of the European Union (hereinafter ‘the EU Charter’), prompting the need to conclude negotiations on a new arrangement so as to ensure legal certainty on how personal data should be transferred from the EU to the US;

B. whereas, when examining the level of protection afforded by a third country, the Commission is obliged to assess the content of the rules applicable in that country deriving from its domestic law or its international commitments, as well as the practice designed to ensure compliance with those rules, since it must, under Article 25(2) of Directive 95/46/EC, take account of all the circumstances surrounding a transfer of personal data to a third country; whereas this assessment must not only refer to legislation and practices relating to the protection of personal data for commercial and private purposes, but must also cover all aspects of the framework applicable to that country or sector, in particular, but not limited to, law enforcement, national security and respect for fundamental rights;

C. whereas transfers of personal data between commercial organisations of the EU and the US are an important element of the transatlantic relationship; whereas these transfers should be carried out in full respect of the right to the protection of personal data and the right to privacy; whereas one of the fundamental objectives of the EU is the protection of fundamental rights, as enshrined in the EU Charter;

D. whereas in its Opinion 4/2016 the EDPS raised several concerns on the draft Privacy Shield; whereas the EDPS welcomed in the same opinion the efforts made by all parties to find a solution for transfers of personal data from the EU to the US for commercial purposes under a system of self-certification;

E. whereas in its Opinion 01/2016 on the EU-US Privacy Shield draft adequacy decision the Article 29 Working Party welcomed the significant improvements brought about by the Privacy Shield compared with the Safe Harbour decision, whilst also raising strong concerns about both the commercial aspects and access by public authorities to data transferred under the Privacy Shield;

F. whereas on 12 July 2016, after further discussions with the US administration, the Commission adopted its Implementing Decision (EU) 2016/1250, declaring the adequate level of protection for personal data transferred from the Union to organisations in the United States under the EU-US Privacy Shield;

G. whereas the EU-US Privacy Shield is accompanied by several letters and unilateral statements from the US administration explaining, inter alia, the data protection principles, the functioning of oversight, enforcement and redress and the protections and safeguards under which security agencies can access and process personal data;

H. whereas in its statement of 26 July 2016, the Article 29 Working Party welcomed the improvements brought by the EU-US Privacy Shield mechanism compared with Safe Harbour and commended the Commission and the US authorities for having taken its concerns into consideration; whereas the Article 29 Working Party nevertheless indicated that a number of its concerns remain, regarding both the commercial aspects and the access by US public authorities to data transferred from the EU, such as the lack of specific rules on automated decisions and of a general right to object, the need for stricter guarantees on the independence and powers of the Ombudsperson mechanism, and the lack of concrete assurances of not conducting mass and indiscriminate collection of personal data (bulk collection);
1. Welcomes the efforts made by both the Commission and the US administration to address the concerns raised by the CJEU, the Member States, the European Parliament, data protection authorities (DPAs) and stakeholders, so as to enable the Commission to adopt the implementing decision declaring the adequacy of the EU-US Privacy Shield;

2. Acknowledges that the EU-US Privacy Shield contains significant improvements regarding the clarity of standards compared with the former EU-US Safe Harbour and that US organisations self-certifying adherence to the EU-US Privacy Shield will have to comply with clearer data protection standards than under Safe Harbour;

3. Takes note that as at 23 March 2017, 1 893 US organisations had joined the EU-US Privacy Shield; regrets that the Privacy Shield is based on voluntary self-certification and therefore applies only to US organisations which have voluntarily signed up to it, which means that many companies are not covered by the scheme;

4. Acknowledges that the EU-US Privacy Shield facilitates data transfers from SMEs and businesses in the Union to the US;

5. Notes that, in line with the ruling of the CJEU in the Schrems case, the powers of the European DPAs remain unaffected by the adequacy decision and they can, therefore, exercise them, including the suspension or the banning of data transfers to an organisation registered with the EU-US Privacy Shield; welcomes in this regard the prominent role given by the Privacy Shield Framework to Member State DPAs to examine and investigate claims related to the protection of the rights to privacy and family life under the EU Charter and to suspend transfers of data, as well as the obligation placed upon the US Department of Commerce to resolve such complaints;

6. Notes with satisfaction that under the Privacy Shield Framework, EU data subjects have several means available to them to pursue legal remedies in the US: first, complaints can be lodged either directly with the company or through the Department of Commerce following a referral by a DPA, or with an independent dispute resolution body; secondly, with regard to interferences with fundamental rights for the purpose of national security, a civil claim can be brought before the US courts and similar complaints can also be addressed by the newly created independent Ombudsperson; and finally, complaints about interferences with fundamental rights for the purposes of law enforcement and the public interest can be dealt with by motions challenging subpoenas; encourages further guidance from the Commission and DPAs to make those legal remedies all the more easily accessible and available;

7. Acknowledges the clear commitment of the US Department of Commerce to closely monitor the compliance of US organisations with the EU-US Privacy Shield Principles and its intention to take enforcement action against entities failing to comply;

8. Reiterates its call on the Commission to seek clarification on the legal status of the ‘written assurances’ provided by the US and to ensure that any commitment or arrangement foreseen under the Privacy Shield is maintained following the taking up of office of a new administration in the United States;

9. Considers that, despite the commitments and assurances made by the US Government by means of the letters attached to the Privacy Shield arrangement, important questions remain as regards certain commercial aspects, national security and law enforcement;

10. Specifically notes the significant difference between the protection provided by Article 7 of Directive 95/46/EC and the ‘notice and choice’ principle of the Privacy Shield arrangement, as well as the considerable differences between Article 6 of Directive 95/46/EC and the ‘data integrity and purpose limitation’ principle of the Privacy Shield arrangement; points out that, instead of the need for a legal basis (such as consent or contract) that applies to all processing operations, the data subject rights under the Privacy Shield Principles only apply to two narrow processing operations (disclosure and change of purpose) and only provide for a right to object (‘opt-out’);

11. Takes the view that these numerous concerns could lead to a fresh challenge to the decision on the adequacy of the protection being brought before the courts in the future; emphasises the harmful consequences as regards both respect for fundamental rights and the necessary legal certainty for stakeholders;

12. Notes, amongst other things, the lack of specific rules on automated decision-making and on a general right to object, and the lack of clear principles on how the Privacy Shield Principles apply to processors (agents);

13. Notes that, while individuals have the possibility to object vis-à-vis the EU controller to any transfer of their personal data to the US, and to the further processing of those data in the US where the Privacy Shield company acts as a processor on behalf of the EU controller, the Privacy Shield lacks specific rules on a general right to object vis-à-vis the US self-certified company;

14. Notes that only a fraction of the US organisations that have joined the Privacy Shield have chosen to use an EU DPA for the dispute resolution mechanism; is concerned that this constitutes a disadvantage for EU citizens when trying to enforce their rights;

15. Notes the lack of explicit principles on how the Privacy Shield Principles apply to processors (agents), while recognising that all principles apply to the processing of personal data by any US self-certified company ‘[u]nless otherwise stated’ and that the transfer for processing purposes always requires a contract with the EU controller which will determine the purposes and means of processing, including whether the processor is authorised to carry out onward transfers (e.g. for sub-processing);

16. Stresses that, as regards national security and surveillance, notwithstanding the clarifications brought by the Office of the Director of National Intelligence (ODNI) in the letters attached to the Privacy Shield framework, ‘bulk surveillance’, despite the different terminology used by the US authorities, remains possible; regrets the lack of a uniform definition of the concept of bulk surveillance and the adoption of the American terminology, and therefore calls for a uniform definition of bulk surveillance linked to the European understanding of the term, where evaluation is not made dependent on selection; stresses that any kind of mass surveillance is in breach of the EU Charter;

17. Recalls that Annex VI (letter from Robert S. Litt, ODNI) clarifies that under Presidential Policy Directive 28 (hereinafter ‘PPD-28’), bulk collection of personal data and communications of non-US persons is still permitted in six cases; points out that such bulk collection only has to be ‘as tailored as feasible’ and ‘reasonable’, which does not meet the stricter criteria of necessity and proportionality as laid down in the EU Charter;

18. Deplores the fact that the EU-US Privacy Shield does not prohibit the collection of bulk data for law enforcement purposes;

19. Stresses that in its judgment of 21 December 2016, the CJEU clarified that the EU Charter ‘must be interpreted as precluding national legislation which, for the purpose of fighting crime, provides for the general and indiscriminate retention of all traffic and location data of all subscribers and registered users relating to all means of electronic communication’; points out that the bulk surveillance in the US therefore does not provide for an essentially equivalent level of protection of personal data and communications;

20. Is alarmed by the recent revelations about surveillance activities conducted by a US electronic communications service provider on all emails reaching its servers, upon request of the National Security Agency (NSA) and the FBI, as late as 2015, i.e. one year after Presidential Policy Directive 28 was adopted and during the negotiation of the EU-US Privacy Shield; insists that the Commission seek full clarification from the US authorities and make the answers provided available to the Council, Parliament and national DPAs; sees this as a reason to strongly doubt the assurances brought by the ODNI; is aware that the EU-US Privacy Shield rests on PPD-28, which was issued by the President and can also be repealed by any future President without Congress’s consent;
  29. Expresses great concerns at the issuance of the ‘Procedures for the Availability or Dissemination of Raw Signals Intelligence Information by the National Security Agency under Section 2.3 of Executive Order 12333’, approved by the Attorney General on 3 January 2017, allowing the NSA to share vast amounts of private data gathered without warrants, court orders or congressional authorisation with 16 other agencies, including the FBI, the Drug Enforcement Agency and the Department of Homeland Security; calls on the Commission to immediately assess the compatibility of these new rules with the commitments made by the US authorities under the Privacy Shield, as well as their impact on the level of personal data protection in the United States;
  30. Deplores the fact that neither the Privacy Shield Principles nor the letters of the US administration providing clarifications and assurances demonstrate the existence of effective judicial redress rights for individuals in the EU whose personal data are transferred to a US organisation under the Privacy Shield Principles and further accessed and processed by US public authorities for law enforcement and public interest purposes, which were emphasised by the CJEU in its judgment of 6 October 2015 as the essence of the fundamental right in Article 47 of the EU Charter;
  31. Recalls its resolution of 26 May 2016 stating that the Ombudsperson mechanism set up by the US Department of State is not sufficiently independent and is not vested with sufficient effective powers to carry out its duties and provide effective redress to EU individuals; notes that according to the representations and assurances provided by the US Government the Office of the Ombudsperson is independent from the US intelligence services, free from any improper influence that could affect its function and moreover works together with other independent oversight bodies with effective powers of supervision over the US Intelligence Community; is generally concerned that an individual affected by a breach of the rules can apply only for information and for the data to be deleted and/or for a stop to further processing, but has no right to compensation;
  32. Regrets that the procedure of adoption of an adequacy decision does not provide for a formal consultation of relevant stakeholders such as companies, and in particular SMEs’ representation organisations;
  33. Regrets that the Commission followed the procedure for adoption of the Commission implementing decision in a practical manner that de facto has not enabled Parliament to exercise its right of scrutiny on the draft implementing act in an effective manner;
  34. Calls on the Commission to take all the necessary measures to ensure that the Privacy Shield will fully comply with Regulation (EU) 2016/679, to be applied as from 25 May 2018, and with the EU Charter;
  35. Calls on the Commission to ensure, in particular, that personal data that has been transferred to the US under the Privacy Shield can only be transferred to another third country if that transfer is compatible with the purpose for which the data was originally collected, and if the same rules of specific and targeted access for law enforcement apply in the third country;
  36. Calls on the Commission to monitor whether personal data which is no longer necessary for the purpose for which it had been originally collected is deleted, including by law enforcement agencies;
  37. Calls on the Commission to closely monitor whether the Privacy Shield allows for the DPAs to fully exercise all their powers, and if not, to identify the provisions that result in a hindrance to the DPAs’ exercise of powers;
  38. Calls on the Commission to conduct, during the first joint annual review, a thorough and in-depth examination of all the shortcomings and weaknesses referred to in this resolution and in its resolution of 26 May 2016 on transatlantic data flows, and those identified by the Article 29 Working Party, the EDPS and the stakeholders, and to demonstrate how they have been addressed so as to ensure compliance with the EU Charter and Union law, and to evaluate meticulously whether the mechanisms and safeguards indicated in the assurances and clarifications by the US administration are effective and feasible;
  39. Calls on the Commission to ensure that when conducting the joint annual review, all the members of the team have full and unrestricted access to all documents and premises necessary for the performance of their tasks, including elements allowing a proper evaluation of the necessity and proportionality of the collection and access to data transferred by public authorities, for either law enforcement or national security purposes;
  40. Stresses that all members of the joint review team must be ensured independence in the performance of their tasks and must be entitled to express their own dissenting opinions in the final report of the joint review, which will be public and annexed to the joint report;
  41. Calls on the Union DPAs to monitor the functioning of the EU-US Privacy Shield and to exercise their powers, including the suspension or definitive ban of personal data transfers to an organisation in the EU-US Privacy Shield if they consider that the fundamental rights to privacy and the protection of personal data of the Union’s data subjects are not ensured;
  42. Stresses that Parliament should have full access to any relevant document related to the joint annual review;
  43. Instructs its President to forward this resolution to the Commission, the Council, the governments and national parliaments of the Member States and the US Government and Congress.

NOTES
[1] OJ L 281, 23.11.1995, p. 31.
[2] OJ L 350, 30.12.2008, p. 60.
[3] OJ L 119, 4.5.2016, p. 1.
[4] OJ L 119, 4.5.2016, p. 89.
[5] ECLI:EU:C:2015:650.
[6] ECLI:EU:C:2016:970.
[7] OJ L 207, 1.8.2016, p. 1.
[8] OJ C 257, 15.7.2016, p. 8.
[9] http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2016/wp238_en.pdf
[10] http://ec.europa.eu/justice/data-protection/article-29/press-material/press-release/art29_press_material/2016/20160726_wp29_wp_statement_eu_us_privacy_shield_en.pdf
[11] Texts adopted, P8_TA(2016)0233.

The Meijers Committee on the inter-parliamentary scrutiny of Europol

ORIGINALLY PUBLISHED ON THE MEIJERS COMMITTEE PAGE HERE

  1. Introduction

Article 88 TFEU provides for a unique form of scrutiny of the functioning of Europol. It lays down that the [regulations on Europol] shall also lay down “the procedures for scrutiny of Europol’s activities by the European Parliament, together with national Parliaments”.

Such a procedure is now laid down in Article 51 of the Europol Regulation (Regulation (EU) 2016/794), which provides for the establishment of a “specialized Joint Parliamentary Scrutiny Group (JPSG)”, which will play the central role in ensuring this scrutiny. The Europol Regulation applies from 1 May 2017.

Article 51 of the Europol Regulation also closely relates to Protocol No 1 to the Lisbon Treaty on the role of national parliaments in the EU. Article 9 of that protocol provides: “The European Parliament and national Parliaments shall together determine the organization and promotion of effective and regular inter-parliamentary cooperation within the Union.”

Article 51(2) not only lays down the basis for the political monitoring of Europol’s activities (the democratic perspective), but also stipulates that, “in fulfilling its mission”, the JPSG should pay attention to the impact of the activities of Europol on the fundamental rights and freedoms of natural persons (the perspective of the rule of law).

The Meijers Committee takes the view that improving the inter-parliamentary scrutiny of Europol, with appropriate involvement of both the national and the European level, will in itself enhance the attention paid by Europol to the perspectives of democracy and the rule of law, and in particular to fundamental rights protection. It will raise Europol’s alertness as concerns these perspectives.

Moreover, the scrutiny mechanism could pay specific attention to fundamental rights protection within Europol. This is particularly important in view of the large amounts of – often sensitive – personal data processed by Europol and exchanged with national police authorities of Member States, and also with authorities of third countries.

The practical implementation of Article 51 is currently being debated, e.g. in the inter-parliamentary committee of the European Parliament and national parliaments. As specified by Article 51(1) of the Europol Regulation, the organization and the rules of procedure of the JPSG remain to be determined.

The Meijers Committee wishes to engage in this debate and makes, in this note, recommendations on the organization and rules of procedure.


TELE2 SVERIGE AB AND WATSON ET AL: CONTINUITY AND… RADICAL CHANGE

ORIGINALLY PUBLISHED ON EUROPEAN LAW BLOG (JANUARY 12, 2017)
By Orla Lynskey

 

Introduction

The CJEU delivered its judgment in Tele2 Sverige AB and Watson on 21 December 2016. The Court had been asked by a Swedish and a British court respectively to consider the scope and effect of its previous judgment in Digital Rights Ireland (discussed here). The judgment reflects continuity insofar as it follows the line of this and earlier judgments taking a strong stance on data protection and privacy. Yet the degree of protection it offers these rights over competing interests, notably security, is radical. In particular, the Court unequivocally states that legislation providing for general and indiscriminate data retention is incompatible with the E-Privacy Directive, as read in light of the relevant EU Charter rights. While the judgment was delivered in the context of the E-Privacy Directive, the Court’s reasoning could equally apply to other EU secondary legislation or programmes interpreted in light of the Charter. This judgment will be a game-changer for state surveillance in Europe: while it offered an early Christmas gift to privacy campaigners, it is likely to receive a very mixed reaction from EU Member States. While national data retention legislation has been annulled across multiple Member States (Bulgaria, Czech Republic, Cyprus, Germany and Romania), this annulment has been based on an assessment of the proportionality of the relevant measures rather than on a finding that blanket retention is per se unlawful. For those familiar with the facts and findings, skip straight to the comment below.

Facts

The preliminary ruling stems from two Article 267 TFEU references regarding the interpretation of the Court’s judgment in Digital Rights Ireland (henceforth DRI). The first, Tele2 Sverige AB, was a Swedish reference resulting from the refusal by Tele2 Sverige (a Swedish electronic communications provider) to continue to retain electronic communications data following the finding in DRI that the Data Retention Directive was invalid. A dispute regarding the interpretation of DRI ensued and the Swedish Justice Minister commissioned a report to assess the compatibility of Swedish law with EU law and the ECHR. This report concluded that DRI could not be interpreted as prohibiting general and indiscriminate data retention as a matter of principle, or as establishing criteria – all of which must be fulfilled – in order for legislation to be deemed proportionate. Rather, it held that it was necessary to conduct an assessment of all the circumstances in order to determine the compatibility of Swedish legislation with EU law. Tele2 Sverige maintained that the report was based on a misinterpretation of DRI. Given these differing perspectives, the referring court asked the Court to give ‘an unequivocal ruling on whether…the general and indiscriminate retention of electronic communications data is per se incompatible with Articles 7 and 8 and 52(1) of the Charter’ [50].

The second preliminary reference (Watson) arose before the Court of Appeal in the context of applications for judicial review of the UK’s Data Retention and Investigatory Powers Act (DRIPA) on the grounds that this Act was incompatible with the EU Charter and the ECHR. It was disputed before the national court whether DRI laid down ‘mandatory requirements of EU law’ that national legislation for communications data retention and access must respect. The domestic referring court suggested that it was appropriate to distinguish between legislation governing retention, and legislation governing access. DRI was confined to an assessment of the former as it assessed the validity of the Data Retention Directive, which excluded provisions relating to data access. The latter, provisions on data access, must be subject to a distinct validity assessment in light of their differing context and objectives, according to the referring court. The Court of Appeal did not however deem the answer to this question obvious, given that six courts in other EU Member States had declared national legislation to be invalid on the basis of DRI. It therefore asked the Court to consider whether, firstly, DRI lays down mandatory requirements of EU law that would apply to the regime governing access to retained data at national level. It also asked whether DRI expands the scope of the Charter rights to data protection and privacy beyond the scope of Article 8 ECHR. The Watson reference was dealt with pursuant to the expedited procedure provided for in Article 105(1) of the Court’s Rules of Procedure and joined to the Tele2 Sverige reference for oral arguments and judgment.

Findings of the Court

The Scope of the E-Privacy Directive

The Court examined, as a preliminary point, whether national legislation on retention of and access to data fell within the scope of the E-Privacy Directive. Article 15(1) of that Directive allows Member States to restrict certain rights the Directive provides for when necessary for purposes such as national security and the prevention, investigation, detection and prosecution of criminal offences. Article 15(1) also allows for the adoption of data retention legislation by Member States. However, Article 1(3) of that Directive states that the Directive will not apply to, amongst others, ‘activities concerning public security, defence, State security (…) and the activities of the State in areas of criminal law’. There is thus an apparent internal inconsistency within the Directive.

To guide its findings, the Court had regard to the general structure of the Directive. While the Court acknowledged that the objectives pursued by Articles 1(3) and 15(1) overlap substantially, it held that Article 15(1) of the Directive would be deprived of any purpose if the legislative measures it permits were excluded from the scope of the Directive on the basis of Article 1(3) [73]. Indeed, it held that Article 15(1) ‘necessarily presupposes’ that the national measures referred to therein fall within the scope of that directive ‘since it expressly authorizes the Member States to adopt them only if the conditions laid down in the directive are met’. [73]. In order to support this finding, the Court suggests that the legislative measures provided for in Article 15(1) apply to providers of electronic communications services [74] and extend to measures requiring data retention [75] and access to retained data by national authorities [76]. It justifies this final claim – that the E-Privacy Directive includes data access legislation – on the (weak) grounds that recital 21 of the directive stipulates that the directive’s aim is to protect confidentiality by preventing unauthorised access to communications, including ‘any data related to such communications’ [77]. The Court emphasises that provisions on data access must fall within the scope of the Directive as data is only retained for the purpose of access to it by competent national authorities and thus national data retention legislation ‘necessarily entails, in principle, the existence of provisions relating to access by the competent national authorities to the data retained’ [79]. The Court also noted that the Directive requires providers to establish internal procedures for responding to requests for access based on the relevant provisions of national law [80].

The compatibility of ‘general and indiscriminate’ data retention with EU law

The Court then moved on to consider the most important substantive point in the judgment: the compatibility of ‘general and indiscriminate’ data retention with the relevant provisions of EU law. It began by recalling that the E-Privacy Directive’s overarching aim is to offer users of electronic communications services protection against the risks to fundamental rights brought about by technological advances [83]. It emphasised, in particular, the general principle of confidentiality of communications in Article 5(1) of the Directive and the related safeguards for traffic data and location data (in Articles 6 and 9 respectively) [85-87]. While the Court acknowledged that Article 15(1) of the Directive allows for exceptions to these principles by restricting their scope, it held that this provision must be interpreted strictly. It clearly stated that Article 15(1) cannot permit the exception to the Directive’s confidentiality obligation to become the rule, as this would render the confidentiality obligation meaningless [89].

The Court also emphasised that, according to its wording, Article 15(1) must be interpreted in light of the general principles of EU law, including the fundamental rights in the EU Charter [91]. The Court noted, with reference to its previous case-law, the importance of the fundamental rights engaged in the current context, namely the right to privacy (Article 7), the right to data protection (Article 8) and the right to freedom of expression (Article 11) ([92]-[93]). The limitations on the exercise of these Charter rights are echoed in the E-Privacy Directive, recital 11 of which states that measures derogating from its principles must be ‘strictly’ proportionate to the intended purpose, while Article 15(1) itself specifies that data retention should be ‘justified’ by reference to one of the objectives stated in Article 15(1) and be for a ‘limited period’ [95]. In considering whether national legislation complies with these requirements of strict necessity, the Court observed that ‘the legislation provides for a general and indiscriminate retention of all traffic and location data of all subscribers and registered users relating to all means of electronic communication’ and that the retention obligation on providers is ‘to retain the data systematically and continuously, with no exceptions’ [97].

Having established the scope of the retention obligation, the Court emphasised the revealing nature of this data and recalled its finding in DRI that the data ‘taken as a whole, is liable to allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained’ [98]. The Court also stated that the data provides the means of profiling the individual concerned and – importantly – that the information is ‘no less sensitive having regard to the right to privacy, than the actual content of the communications’ [99]. The Court held that general and indiscriminate data retention legislation entailed a particularly serious interference with the rights to privacy and data protection and that the user concerned is, as a result, likely to feel that their private lives are the subject of constant surveillance [100]. It could also, according to the Court, affect the use of means of electronic communication and thus the exercise by users of their freedom of expression [101]. The Court therefore held that only the objective of fighting serious crime could justify national data retention legislation [102].

While the Court acknowledged that the effectiveness of the fight against serious crime may depend on modern investigative techniques, it held that this objective cannot in itself justify the finding that general and indiscriminate data retention legislation is necessary for this fight [103]. It noted in particular that such legislation applies to persons for whom ‘there is no evidence capable of suggesting that their conduct might have a link, even an indirect or remote one, with serious criminal offences’ and that no exception is made for those whose communications are subject to professional secrecy [106]. As a result of these failings, the Court held that the national legislation exceeds the limits of what is strictly necessary and cannot be considered justified under Article 15(1), read in light of the Charter [107].

The Court did not, however, go so far as to deem all data retention unlawful. It highlighted that Article 15(1) does not prevent a Member State from introducing legislation that would facilitate targeted retention of traffic and location data for the preventive purpose of fighting serious crime. Such legislation must however be limited to what is strictly necessary in terms of the categories of data retained, the means of communication affected, the persons concerned and the retention period adopted [108]. In particular, such legislation should indicate ‘in what circumstances and under which conditions’ a data retention measure could be adopted as a preventive measure [109]. The Court also emphasised that while the precise contours may vary, data retention should meet objective criteria that establish a connection between the data to be retained and the objective pursued [110]. The national legislation must therefore be evidence-based: this objective evidence should make it possible to ‘identify a public whose data is likely to reveal a link, at least an indirect one, with serious criminal offences’ [111].

Mandatory Requirements of DRI?

Having established the incompatibility of generalised data retention legislation with EU law, the Court then went on to consider whether EU law precludes national data retention and access legislation if that legislation:

  • does not restrict access solely to the objective of fighting serious crime;
  • does not require access to be subject to prior review by a court or an independent body; and
  • does not require that the data be retained within the EU [114].

The Court reiterated an earlier finding that access to retained data must be for one of the exhaustive objectives identified in Article 15(1) of the E-Privacy Directive, and that only the objective of fighting serious crime would justify access to retained data [115]. Such legislation must also set out clear and precise rules indicating when and how competent national authorities should be granted access to such data [117]. The Court also held that national legislation must set out the substantive and procedural conditions governing access based on objective criteria [118-119]. Such access can, ‘as a general rule’, be granted only ‘to the data of individuals suspected of planning, committing or having committed a serious crime or of being implicated in one way or another in such a crime’ [119]. Access to the data of others might exceptionally be granted where, for instance, vital national interests are threatened by terrorist activities, if there is objective evidence of the effective contribution that access to such data could make [119]. As a result, access to retained data should, with the exception of cases of validly established urgency, be subject to a prior review by a court or an independent administrative authority at the request of the competent national authorities [120]. These competent national authorities must also notify the persons affected by the data access, under the applicable national procedures, as soon as such notification no longer jeopardises the investigations. The Court highlighted that such notice is necessary to enable these individuals to exercise their right to a legal remedy pursuant to the Directive and EU data protection law [121].

On the issue of data security, the Court held that Article 15(1) does not allow Member States to derogate from the Directive’s data security provisions, which require providers to take appropriate technical and organisational measures to ensure the effective protection of retained data. The Court held that a particularly high level of data security was appropriate given the quantity and nature of the data retained and the risk of unlawful access to it. It therefore held that the national legislation must provide for the data to be retained within the EU, and for the irreversible destruction of the data at the end of the data retention period [122]. Member States must also ensure that an independent authority reviews compliance with EU law, as such independent control of data protection compliance is an essential element of the right to data protection set out in Article 8(3) of the Charter. The Court emphasised the link between such independent supervision and the availability of a legal remedy for data subjects [123]. The Court therefore concluded that national legislation that did not comply with these conditions would be precluded pursuant to Article 15(1) as read in light of the Charter [125]. However, it was for the relevant national courts to examine whether such conditions were satisfied in the present case [124].

Finally, in relation to the UK Court of Appeal’s query regarding the relationship between the EU Charter rights to data protection and privacy and Article 8 ECHR, the Court held that the answer to this question would not affect the interpretation of the E-Privacy Directive and thus did not matter in these proceedings [131]. It recalled its settled case-law that the preliminary reference procedure serves the purpose of effectively resolving EU law disputes rather than providing advisory opinions or answering hypothetical questions [130]. This did not however prevent it from offering a sneak preview of its thinking on this matter. It emphasised that, as the EU has not acceded to the ECHR, the ECHR does not constitute a formally incorporated element of EU law. It did however note that Article 52(3) of the Charter seeks to ensure consistency between the Charter and the ECHR without adversely affecting the autonomy of EU law. EU law is not therefore precluded from providing more extensive protection than the ECHR. The Court added that Article 8 of the Charter concerns a fundamental right which is distinct from that enshrined in Article 7 and which has no equivalent in the ECHR. Therefore, while the Court did not answer the question of which offered a wider scope of protection, it did confirm the distinctiveness of these two rights.

Comment

The Tele2 judgment represents a rupture with the past in one very significant way: the Court, for the first time, unequivocally states that blanket data retention measures are incompatible with EU law, read in light of the Charter. This radical finding is likely to receive a mixed reaction. For instance, in the UK some will lament that this judgment comes too late to have influenced the passage into law of the UK’s new data retention legislation, the Investigatory Powers Act 2016. This legislation – which allows for bulk interception and hacking, amongst other things – should now be found to be incompatible with EU law, with all of the post-Brexit implications for ‘adequacy’ this may entail (also here). Others, such as the UK’s Independent Reviewer of Terrorism Legislation – David Anderson QC – have expressed regret. Anderson QC suggests that:

‘Precisely because suspects are often not known in advance, data retention which is not universal in its scope is bound to be less effective as a crime reduction measure.  In addition, a person whose data has not been retained cannot be exonerated by use of that data (e.g. by using location data to show that the person was elsewhere).’

The Advocate General (here; and commentary here) had similarly noted that data retention could help competent authorities ‘examine the past’ [AG, 178]. He had refused to declare general retention measures per se unlawful, preferring instead to assess the compatibility of data retention legislation against strict proportionality requirements [AG, 116]. His approach could therefore be said to be more nuanced and systematic than that of the Court. While examining proportionality stricto sensu he concluded that it would be for national courts to weigh the benefit of ‘examining the past’ with the potential it would provide for authorities to abuse this power by using metadata to catalogue entire populations, noting that evidence of abuses had been put before the Court [AG, 259-260]. This evidence before the Court might help to refute the critique that the Court should have focused on the actual harm of communications metadata retention ‘and sought to avoid assertions based on theory or informal predictions of popular feeling’.

Blanket retention was not the only important point on which the Court and the Advocate General diverged. The Advocate General explicitly claimed that DRI set out mandatory requirements [AG, 221] while the Court did not. The Advocate General was also more stringent than the Court in requiring that data be retained in the relevant Member State [AG, 241], while the Court opted for the marginally more realistic requirement that data be retained in the EU. The Advocate General did not, however, consider Article 15(1) a derogation from the E-Privacy Directive (and therefore not a provision requiring strict interpretation); the Court did not engage with his elaborate reasoning on this point [AG, 106-115]. The Court did, however, confirm that competent national authorities must notify persons affected by data access as soon as such notification no longer jeopardises the investigation [121]. This significant procedural right is likely to play an important role in acting as a check on abusive access requests.

Perhaps the only fly in the ointment for the digital rights groups that intervened before the Court is the Court’s seemingly uncritical endorsement of geographic and group profiling. It does this when it emphasises that there should be a relationship between the data retained and the threat, for instance when the data pertains to a ‘geographic area’ [108]. The ethical and social issues such profiling may entail would require further consideration. The Court appears to recognise this by suggesting that such profiling would need to be strictly evidence-based [111]. Should generalised retention measures be replaced by ad hoc location-based retention measures, the legality of the latter would itself be the subject of much controversy.

Threat to Human Rights? The new e-Privacy Regulation and some thoughts on Tele2 and Watson

ORIGINALLY PUBLISHED ON EU LAW ANALYSIS

by Matthew White, PhD candidate, Sheffield Hallam University

Introduction

In a follow-up to last Christmas’s post, on 10 January 2017 the European Commission released the official version of the proposed Regulation on Privacy and Electronic Communications (e-Privacy Regs). Just as that post concerned the particular aspect of data retention, so will this one.

Just as in the former leaked version, the proposal does not include any specific provisions in the field of data retention (para 1.3). That paragraph continues that Member States are free to keep or create national data retention laws, provided that they are ‘targeted’ and that they comply with European Union (EU) law, taking into account the case-law of the Court of Justice of the European Union (CJEU) and its interpretation of the e-Privacy Directive and the Charter of Fundamental Rights (CFR). Regarding the CJEU’s interpretation, the proposal specifically refers to Joined Cases C-293/12 and C-594/12 Digital Rights Ireland and Seitlinger and Others, and Joined Cases C-203/15 and C-698/15 Tele2 Sverige AB and Secretary of State for the Home Department. Aspects of the latter case are the focus of this post; the case itself has been thoroughly discussed by Professor Lorna Woods.

So, when is the essence of the right adversely affected?

Before discussing certain aspects of Tele2 and Watson, it is first important to draw attention to the provision which enables data retention in the new e-Privacy Regs. Article 11 allows the EU or its Member States to restrict the rights contained in Articles 5-8 (confidentiality of communications, permissions on processing, storage and erasure of electronic communications data, and protection of information stored in and related to end-users’ terminal equipment). From Article 11, it is clear that this can include data retention obligations, so long as they respect the essence of the rights concerned and are necessary, appropriate and proportionate. In Tele2 and Watson the CJEU noted that any limitation of rights recognised by the CFR must respect the essence of those rights [94]. The CJEU accepted the Advocate General (AG)’s Opinion that data retention creates an interference as serious as interception, and that the risks associated with access to communications data may be greater than those associated with access to the content of communications [99]. Yet the CJEU were reluctant to hold that data retention (and access to retained data) adversely affects the essence of those rights [101]. This appears to highlight a problem in the CJEU’s reasoning: if the CJEU, like the AG, accept that retention of and access to communications data is at least on a par with access to content, it makes little sense to then be reluctant to hold that data retention adversely affects the essence of those rights. The CJEU does so without making any distinction or giving any reasoning for this differential treatment, which serves to highlight that perhaps the CJEU themselves do not fully respect the essence of those rights in the context of data retention.

The CJEU’s answer seems limited only to catch-all powers

The thrust of the CJEU’s judgment in Tele2 and Watson was that general and indiscriminate data retention obligations are prohibited at an EU level. But as I have highlighted previously, the CJEU’s answer was only in response to a very broad question from Sweden, which asked:

[A] general obligation to retain traffic data covering all persons, all means of electronic communication and all traffic data without any distinctions, limitations or exceptions for the purpose of combating crime…compatible with [EU law]?

Therefore, provided that national laws do not provide for the capturing of all data of all subscribers and users for all services in one fell swoop, they may arguably be compatible with EU law. Both the e-Privacy Regs and the CJEU refer to ‘targeted’ retention [108, 113]. The CJEU gave an example of geographical criteria for retention, in response to which David Anderson Q.C. asks whether the CJEU meant that ‘it could be acceptable to perform “general and indiscriminate retention” of data generated by persons living in a particular town, or housing estate, whereas it would not be acceptable to retain the data of persons living elsewhere’? This is entirely possible given the reference from Sweden and the answer from the CJEU. In essence, the CJEU have permitted discriminatory general and indiscriminate data retention which would in any event respect the essence of those rights.

Data retention is our cake, and only we can eat it

A final point on Tele2 and Watson is that the CJEU held that national laws on data retention are within the scope of EU law [81]. This by itself may not raise any concerns about protecting fundamental rights, but what the CJEU rules later in the judgment may be of concern. The CJEU held that the interpretation of the e-Privacy Directive (and therefore of national Member State data retention laws) “must be undertaken solely in the light of the fundamental rights guaranteed by the Charter” [128]. The CJEU has seemingly given itself exclusive competence to determine how rights are best protected in the field of data retention. It is clear from the subsequent paragraph that the CJEU seeks to protect the autonomy of EU law above anything else, even fundamental rights [129]. This is despite the ECHR forming part of the general principles of EU law and being mentioned in Article 15(1) (which refers to Article 6(3) of the Treaty on European Union (TEU), a provision specifically referring to the ECHR). Article 11 of the e-Privacy Regs refers to restrictions respecting the ‘essence of fundamental rights and freedoms’, and only time will tell whether the CJEU will interpret this as referring only to the CFR. Recital 27 of the e-Privacy Regs, just like Recitals 10 and 30 of the e-Privacy Directive, refers to compliance with the ECHR but, as highlighted previously, Recitals are not legally binding.

Is the CJEU assuming too much?

A further concern is that, had the European Commission added general principles of EU law into Article 11, the CJEU may simply have ignored them, just as it did in Tele2 and Watson. The problem with the CJEU’s approach is that it assumes that this judgment offers adequate protection of human rights in this context. The ECHR has always been the minimum floor, but it appears the CJEU wants the CFR to be the ceiling, whether for national human rights protection or for protection guaranteed by the ECHR. What if that ceiling is lower than the floor? The AG in Tele2 and Watson stressed that the CFR must never be inferior to the ECHR [141]. But as I have argued before, the EU jurisprudence on data retention is just that: it offers inferior protection to the ECHR, and the qualification by the CJEU in Tele2 and Watson does not alter this. This position is strengthened by Judge Pinto De Albuquerque in his concurring opinion in the European Court of Human Rights judgment in Szabo. He believed that:

[M]andatory third-party data retention, whereby Governments require telephone companies and Internet service providers to store metadata about their customers’ communications and location for subsequent law-enforcement and intelligence agency access, appeared neither necessary nor proportionate [6].

Of course, Judge Pinto De Albuquerque could have been referring to the type of third party data retention which requires Internet Service Providers (ISPs) to intercept data from Over The Top (OTT) services, but his description is more in line with data retention of services’ own users and subscribers.

Conclusions

Although the CJEU has prohibited general and indiscriminate data retention, it does not seem to have prevented targeted indiscriminate data retention. If the European Court of Human Rights (ECtHR) were ever to rule on data retention and follow its jurisprudence and the opinion of Judge Pinto De Albuquerque, this may put EU law in violation of the ECHR. This would ultimately put Member States in a damned-if-they-do, damned-if-they-don’t situation: comply with the ECHR and violate the autonomy of EU law, or comply with EU law and violate the ECHR. When the minimum standards of human rights protection in this context are not adhered to because of EU law, the ECHR should prevail. Anything less is a threat to human rights, which means that the CJEU, however well intentioned, can be one too.

New ECJ ruling on data retention: Preservation of civil rights even in difficult times!

Originally published here on 22 December 2016

by Peter Schaar

Translation; for the German version see here.

The European Court of Justice has made a Christmas present to more than 500 million EU citizens. With its new judgment on data retention (C-203/15 of 21 December 2016), the highest court of the European Union stresses the importance of fundamental rights: all Member States are required to respect the rights enshrined in the European Charter of Fundamental Rights in their national legislation. The ECJ has issued an important signal whose significance can hardly be overstated, given the current political discussions on internal and external threats and the strengthening of authoritarian political currents that provide the public with simplistic answers to difficult questions.

The ECJ remains true to itself

The ruling of the European Court of Justice is in line with its judgment of 8 April 2014, by which the Court annulled Directive 2006/24/EC on data retention. The general obligation to retain traffic and location data required by that Directive was not limited to what is strictly necessary and was thus a disproportionate interference with the fundamental rights to respect for private life and the protection of personal data (Articles 7 and 8 of the European Charter of Fundamental Rights).

Despite the annulment of the Data Retention Directive by the ECJ, several Member States have continued or even broadened their practice of data retention. The latter took place in Great Britain, where shortly after the ECJ ruling – in July 2014 – a new legal basis for data retention was passed which even went beyond the abolished EC directive. With the so-called “Investigatory Powers Act”, the British Parliament intends to extend the current obligations of compulsory data storage and the supervisory powers of the security authorities in the short term, to include web services and in particular transactions on social networks. On 29 November 2016, the upper and lower houses agreed on a corresponding legal text, which is to enter into force soon after its formal approval by the Queen. In other Member States, too, there are legal requirements of varying scope which oblige providers of telecommunications and internet services to retain traffic and location data whose conservation is not necessary for the provision or billing of the respective service.

European Charter of Fundamental Rights binding for national legislature

A Swedish and a British court had asked the ECJ to clarify whether the respective national regulations on the retention of data complied with European legal requirements.

In its new ruling the ECJ answered this question by stating that national regulations which provide for a general and indiscriminate storage of data are not in line with EU law. A national regulation providing for the storage of traffic and location data is to be regarded as a serious interference with fundamental rights. Member States must not maintain or re-adopt rules which are based on, or even go beyond, an EU act which has been annulled on grounds of its fundamental illegality.

The provisions of EU law bind the national legislature. EU Directive 2002/58/EC on data protection in electronic communications (the ePrivacy Directive) has to be interpreted in the light of the Charter of Fundamental Rights. Exceptions to the protection of personal data must be limited to what is strictly necessary. This applies not only to the rules on data retention, but also to the access of authorities to the stored data. A national provision providing for general and indiscriminate data retention which does not require any link between the data to be stored and a threat to public security, and which in particular does not limit retention to a particular period, a geographical area and/or a group of persons likely to be involved in a serious criminal act, exceeds the limits of what is strictly necessary and cannot be regarded as justified in a democratic society. Laws of Member States that do not meet these requirements must be abolished or amended accordingly.

With regard to the contested British and Swedish laws, the competent national courts which had referred questions to the ECJ are now required to enforce the ECJ ruling in substance. However, the parliaments and governments of the Member States are also responsible for reviewing and, where appropriate, correcting the relevant provisions of national law.

What happens to German data retention?

The implications of the ECJ ruling for the recently reintroduced German data retention regime must also be urgently examined. The retention obligations of the new German Data Retention Act fall short of the predecessor regulation, which was repealed by the Federal Constitutional Court in 2010. It is nevertheless highly doubtful whether the requirements set by the ECJ are fulfilled by the new act, since it obliges telecommunications providers to store data without any material restriction to a specific area or a particular risk situation.

That the Federal Government or the parliamentary groups backing it will now carry out this examination in an objective manner appears highly unlikely, in light of the additional powers which they have recently decided to hand over to the security authorities. In the end, the Federal Constitutional Court will probably have to provide clarity once again.

Peter Schaar (21 December 2016)

Data retention and national law: the ECJ ruling in Joined Cases C-203/15 and C-698/15 Tele2 and Watson (Grand Chamber)

ORIGINALLY PUBLISHED ON EU LAW ANALYSIS

Lorna Woods, Professor of Internet Law, University of Essex

Introduction

Today’s judgment in these important cases concerns the acceptability, from a human rights perspective, of national data retention legislation maintained even after the striking down of the Data Retention Directive in Digital Rights Ireland (Cases C-293/12 and C-594/12) (“DRI”) as a disproportionate interference with the rights contained in Articles 7 and 8 of the EU Charter of Fundamental Rights (EUCFR). While situated in the context of the Privacy and Electronic Communications Directive (Directive 2002/58), the judgment sets down principles regarding the interpretation of Articles 7 and 8 EUCFR which will be applicable generally within the scope of EU law. It also has possible implications for the UK’s post-Brexit relationship with the EU.

Background and Facts

The Privacy and Electronic Communications Directive requires the confidentiality of communications, including the data about communications, to be ensured through national law. As an exception, it permits Member States, under Article 15, to take measures for certain public interest objectives such as the fight against terrorism and crime, which include requiring public electronic communications service providers to retain data about communications activity. Member States took very different approaches, which led to the enactment of the Data Retention Directive (Directive 2006/24) within the space for Member State action envisaged by Article 15. With that directive struck down, Article 15 remained the governing provision for exceptions to communications confidentiality within the field harmonised by the Privacy and Electronic Communications Directive. This left questions as to what action in respect of requiring the retention of data could be permissible under Article 15, as understood in the light of the EUCFR.

The cases in today’s judgment derive from two separate national regimes. The first, concerning Tele2, arose when – following the DRI judgment – Tele2 proposed to stop retaining the data specified under Swedish implementing legislation in relation to the Data Retention Directive. The second arose from a challenge to the Data Retention and Investigatory Powers Act 2014 (DRIPA) which had been enacted to provide a legal basis in the UK for data retention when the domestic regime implementing the Data Retention Directive fell as a consequence of the invalidity of that directive.  Both sets of questions referred essentially asked about the impact of the DRI reasoning on national regimes, and whether Articles 7 and 8 EUCFR constrained the States’ regimes.

The Advocate General handed down an opinion in July (noted here) in which he opined that while mass retention of data may be possible, it would only be so when adequate safeguards were in place.  In both instances, the conditions – in particular those identified in DRI – were not satisfied.

Judgment

Scope of EU Law

A preliminary question is whether data retention, or the access to such data by police and security authorities, falls within EU law. While the Privacy and Electronic Communications Directive regulates the behaviour of communications providers generally, Article 1(3) of that Directive specifies that matters covered by Titles V and VI of the TEU at that time (e.g. public security, defence, State security) fall outside the scope of the directive, which the Court described as relating to “activities of the State”. Further, Article 15(1) permits the State to take some measures resulting in the infringement of the principle of confidentiality found in Article 5(1) which again “concern activities characteristic of States or State authorities, and are unrelated to fields in which individuals are active” [para 72]. While there seems to be overlap between Article 1(3) and Article 15(1), this does not mean that matters permitted on the basis of Article 15(1) fall outside the scope of the directive, as “otherwise that provision would be deprived of any purpose” [para 73].

In the course of submissions to the Court, a distinction was made between the retention of data (by the communications providers) and access to the data (by police and security services). Accepting this distinction would allow a line to be drawn between the two, with retention, as an activity of the commercial operator, falling within the scope of the Privacy and Electronic Communications Directive, and access, as an activity of the State, lying outside it. The Court rejected this analysis and held that both retention and access lie within the field of the Privacy and Electronic Communications Directive [para 76]. It argued that Article 5(1) guarantees confidentiality of communications against the activities of third parties, whether they be private actors or state authorities. Moreover, the effect of the national legislation is to require the communications providers to give access to the state authorities, which in itself is an act of processing regulated by the Privacy and Electronic Communications Directive [para 78]. The Court also noted that the sole purpose of the retention is to be able to give such access.

Interpretation of Article 15(1)

The Court noted that the aim of the Privacy and Electronic Communications Directive is to ensure a high level of protection for data protection and privacy. Article 5(1) established the principle of confidentiality and that “as a general rule, any person other than the user is prohibited from storing, without the consent of the users concerned, the traffic data”, subject only to technical necessity and the terms of Article 15(1) (citing Promusicae) [para 85]. This requirement of confidentiality is backed up by the obligations in Articles 6 and 9 specifically dealing with restrictions on the use of traffic and location data. Moreover, Recital 30 points to the need for data minimisation in this regard [para 87]. So, while Article 15(1) permits exceptions, they must be interpreted strictly so that the exception does not displace the rule; otherwise the rule would be “rendered largely meaningless” [para 89].

As a result of this general orientation, the Court held that Member States may only adopt measures for the purposes listed in the first sentence of Article 15(1) and those measures must comply with the requirements of the EUCFR.  The Court, citing DRI (at paras 25 and 70), noted that in addition to Articles 7 and 8 EUCFR, Article 11 EUCFR – protecting freedom of expression – was also in issue. The Court noted the need for such measures to be necessary and proportionate and highlighted that Article 15 provided further detail in the context of communications whilst Recital 11 to the Privacy and Electronic Communications Directive requires measures to be “strictly proportionate” [para 95].

The Court then considered these principles in the light of the reference in Tele2 at paras 97 et seq of its judgment. Expressly approving the approach of the Advocate General on this point, it underlined that communications “data, taken as a whole, is liable to allow very precise conclusions to be drawn concerning the private lives of the persons whose data has been retained” and that such data is no less sensitive than content [para 99]. The interference in the view of the Court was serious and far-reaching in relation to Articles 7, 8 and 11. While Article 15 identifies combatting crime as a legitimate objective, the Court – citing DRI – limited this so that only the fight against serious crime could be capable of justifying such intrusion. Even the fight against terrorism “cannot in itself justify that national legislation providing for the general and indiscriminate retention of all traffic and location data should be considered necessary” [para 103]. The Court stressed that the regime provides for “no differentiation, limitation or exception according to objectives pursued” [para 105]. The Court did confirm that some measures would be permissible:

… Article 15(1) of Directive 2002/58, read in the light of Articles 7, 8 and 11 and Article 52(1) of the Charter, does not prevent a Member State from adopting legislation permitting, as a preventive measure, the targeted retention of traffic and location data, for the purpose of fighting serious crime, provided that the retention of data is limited, with respect to the categories of data to be retained, the means of communication affected, the persons concerned and the retention period adopted, to what is strictly necessary. [para 108]

It then set down some relevant conditions:

Clear and precise rules “governing the scope and application of such a data retention measure and imposing minimum safeguards, so that the persons whose data has been retained have sufficient guarantees of the effective protection of their personal data against the risk of misuse” [para 109].

while “conditions may vary according to the nature of the measures taken for the purposes of prevention, investigation, detection and prosecution of serious crime, the retention of data must continue nonetheless to meet objective criteria, that establish a connection between the data to be retained and the objective pursued” [para 110].

The Court then emphasised that there should be objective evidence making it possible to identify a public whose data is likely to reveal a link, even an indirect one, with serious criminal offences, and thereby to contribute in one way or another to fighting serious crime or to preventing a serious risk to public security. The Court accepted that geographical factors could be one such ground, on the basis “that there exists, in one or more geographical areas, a high risk of preparation for or commission of such offences” [para 111].

Conversely,

…Article 15(1) of Directive 2002/58, read in the light of Articles 7, 8 and 11 and Article 52(1) of the Charter, must be interpreted as precluding national legislation which, for the purpose of fighting crime, provides for the general and indiscriminate retention of all traffic and location data of all subscribers and registered users relating to all means of electronic communication [para 112].

Acceptability of legislation where (1) the measure is not limited to serious crime; (2) there is no prior review; and (3) there is no requirement that the data stay in the EU.

This next section deals with the first question referred in the Watson case, as well as the Tele2 reference.

As regards the first point, the answer following the Court’s approach at paragraphs 90 and 102 is clear: only measures justified by reference to serious crime would be justifiable. As regards the second element, the Court noted that it is for national law to lay down the conditions of access so as to ensure that the measure does not exceed what is strictly necessary. Those conditions must be clear and legally binding. The Court reasoned that since general access could not be considered strictly necessary, national legislation must set out, by reference to objective criteria, the circumstances in which access would be permissible. Referring to the European Court of Human Rights (ECtHR) judgment in Zakharov, the Court specified:

access can, as a general rule, be granted, in relation to the objective of fighting crime, only to the data of individuals suspected of planning, committing or having committed a serious crime or of being implicated in one way or another in such a crime [para 119].

It then distinguished the general fight against crime from the fight against terrorism to suggest that in the latter case:

access to the data of other persons might also be granted where there is objective evidence from which it can be deduced that that data might, in a specific case, make an effective contribution to combating such activities [para 119].

The conditions set down must be respected. The Court therefore held that, save in cases of genuine emergency, prior review by an independent body must be carried out on the basis of a reasoned request by the investigating bodies. In making this point, the Court referred to the ECtHR judgment in Szabó and Vissy v. Hungary, as well as its own previous ruling in DRI. Furthermore, once notification no longer endangers the investigation, the individuals affected should be notified, so as to give them the possibility of exercising their right to a remedy as specified in Article 15(2) read with Article 22 of the Data Protection Directive (Directive 95/46).

Article 15(1) permits derogation only in relation to specified provisions of the directive; it does not permit derogation from the security obligations contained in Articles 4(1) and 4(1a). The Court pointed to the quantity of the data as well as its sensitivity in holding that a high level of security measures would be required on the part of the electronic communications providers. Following this, the Court stated:

…, the national legislation must make provision for the data to be retained within the European Union and for the irreversible destruction of the data at the end of the data retention period (see, by analogy, in relation to Directive 2006/24, the Digital Rights judgment, paragraphs 66 to 68) [para 122].

The Court noted, as an obligation separate from the approval of access to data, that Member States should ensure that review of compliance with the required regulatory framework is carried out by an independent body. In the view of the Court, this follows from Article 8(3) EUCFR. It is an essential element of individuals’ ability to make claims in respect of infringements of their data protection rights, as noted previously in DRI and Schrems.

The Court then summarised the outcome of this reasoning, that Article 15 and the EUCFR:

must be interpreted as precluding national legislation governing the protection and security of traffic and location data and, in particular, access of the competent national authorities to the retained data, where the objective pursued by that access, in the context of fighting crime, is not restricted solely to fighting serious crime, where access is not subject to prior review by a court or an independent administrative authority, and where there is no requirement that the data concerned should be retained within the European Union. [para 125]

Relationship between the EUCFR, EU law and the ECHR

The English Court of Appeal had referred a question about the impact of the ECHR on the scope of the EUCFR in the light of Article 52 EUCFR. While the Court declared the question inadmissible, it – like the Advocate General – took the time to point out that the ECHR is not part of EU law, so the key issue is the scope of the EUCFR; in any event, Article 52(3) does not preclude Union law from providing protection that is more extensive than the ECHR. As a further point, the Court added that Article 8 EUCFR, which provides a separate right to data protection, has no exact equivalent in the ECHR, so that there is a difference between the two regimes.

Comment

Given the trend of recent case law, the outcome in this case is not surprising.  There are some points that are worth emphasising.

The first relates to the scope of EU law, which is a threshold barrier to any claim based on the EUCFR. The Advocate General seemed prepared to accept a distinction between the retention of data and access thereto (although conditions relating to the latter could bear on the proportionality of the former). The Court took a different approach and held that access also fell within the scope of the Directive and of EU law, because the national regime imposed an obligation on the communications service provider to provide access to the relevant authorities. Since this was an obligation on the service provider, it fell within the regulatory scheme. This approach avoids the slightly unconvincing reasoning which the Advocate General adopted. It also possibly enlarges the scope of EU law.

In general terms, the Court’s reasoning addresses certain provisions of the Privacy and Electronic Communications Directive. While the reasoning is set in that context, this does not mean that the Court’s interpretation of the requirements deriving from the EUCFR is limited to this set of surveillance measures. Its interpretation of Articles 7 and 8 in particular could apply more generally – perhaps to PNR data (another form of mass surveillance) – and beyond. It is also worth noting that, according to a leaked Commission document, it is proposed to extend the scope of the Privacy and Electronic Communications Directive to other communications service providers not currently regulated by the directive, but who may already be subject to some data retention requirements.

Whilst the Court makes the point that Articles 7 and 8 EUCFR are separate and different, and that data retention also implicates Article 11 EUCFR, in its analysis of the impact of national measures providing for retention it does not deal with Articles 7 and 8 separately (contrast DRI, where limited consideration was given to this). Having flagged Article 11 EUCFR, it takes that analysis no further. This leaves questions as to the scope of the rights, and particularly as to how Article 11 issues play out.

Note that the Court does not state that data retention itself is impermissible; indeed, it specifies circumstances in which data retention would be acceptable. It challenges the compatibility of mass data retention with Articles 7 and 8 EUCFR, however, even in the context of the fight against terrorism. It is arguable that the Court has here taken a tougher stance than its Advocate General on this point of principle, mirroring DRI, where the Court likewise took a different approach from its Advocate General. In that case too, the Advocate General focussed on safeguards and the quality of law, as has the Advocate General here. For the Court here, differentiation – between people and between types of offences and threats – based on objective, evidenced grounds is central to showing that national measures are proportionate and no more than – in the terms of the directive – strictly necessary. This comes close to disagreeing with the Opinion of the Advocate General, who argued that in DRI the Court ‘did not, however, hold that that absence of differentiation meant that such obligations, in themselves, went beyond what was strictly necessary’ (Opinion, para 199). The Advocate General used this point to argue that DRI did not suggest that mass surveillance was per se unlawful (see Opinion, para 205). Certainly, in neither case did the Court expressly hold that mass surveillance was per se unlawful, so the question remains open. What is clear, however, is that the Court supports the retention of data following justified suspicion – even, perhaps, generalised suspicion – rather than using the analysis of retained data to justify suspicion.

In its reasoning, the Court did not – unlike the Advocate General – specifically rule on whether the safeguards set down in DRI, paras 60-68, should be seen as mandatory, in effect creating a six-point checklist. Nonetheless, it repeatedly cited DRI approvingly. Within this framework, it highlighted specific aspects: the need for prior approval; the need for security and control over data; a prohibition on transferring data outside the EU; and the need for data subjects to be able to exercise their right to a remedy. Some of these points will be difficult to reconcile with the current regime in the United Kingdom regarding communications data.

It did not, however, touch on acceptable periods of retention (even though it – like its Advocate General – referred to Zakharov). More generally, the Court’s analysis was – by comparison with that of the Advocate General – less detailed and structured, particularly as regards the meaning of necessity and proportionality. It did not directly address the points the Advocate General made about lawfulness, with specific reference to reliance on codes (an essential feature of the UK arrangements); it did note in passing that the conditions for access to data should be binding within the domestic legal system. Is this implicit agreement with the Advocate General on this point? The Court certainly agreed with him that the seriousness of the interference meant that retention of communications data should be restricted to ‘serious crime’ and not just any crime.

One final issue relates to the judicial relationship between Strasbourg and Luxembourg. Despite emphasising that the ECHR is not part of EU law, the Court relies on two recent judgments of the ECtHR, perhaps seeking to emphasise the consistency between the two courts in this area – or perhaps seeking to put pressure on Strasbourg to hold the line as it faces a number of state surveillance cases on its own docket, many against the UK. The position of Strasbourg is significant for the UK. While many assume that the UK will maintain the GDPR after Brexit in the interests of ensuring equivalence, the EUCFR may no longer be applicable in the UK post-Brexit. For UK citizens, the ECHR would then be the only route to challenge state intrusion into privacy. For those in the EU, data transfers to the UK post-Brexit could be challenged on the basis that UK law does not provide an adequate level of protection by EU standards. Today’s ruling – and the UK’s response to it, if any – could be a significant element in arguing that issue.

‘I Travel, therefore I Am a Suspect’: an overview of the EU PNR Directive

ORIGINALLY PUBLISHED ON EU Immigration and Asylum Law and Policy BLOG

By Niovi Vavoula, Queen Mary University of London

According to the PNR (Passenger Name Record) Directive 2016/681 of 27 April 2016, a wide range of everyday data on all air passengers (third-country nationals but also EU citizens, including those on intra-Schengen flights) will soon be transferred to specialised units and analysed in order to identify persons of interest in relation to terrorist offences and other serious crimes. This new instrument raises, once again, the fundamental rights challenges posed by such schemes, particularly in relation to privacy and citizenship rights. The story of the PNR Directive, as described below, is therefore probably not finished, as these concerns open up the possibility of future involvement of the Court of Justice.

1. The story behind the EU PNR System

In the aftermath of 9/11, and under the direct influence of how the terrorist attacks took place, the US legislature established inextricable links between the movement of passengers, ‘border security’ and the effective fight against international terrorism. Strong emphasis was placed on prevention through pre-screening of passengers, cross-checking against national databases and identification of suspicious behaviours through dubious profiling techniques. At the heart of this pre-emptive logic has been the adoption of legislation obliging airlines flying into the US to provide the domestic authorities with a wide range of everyday data on their passengers. These so-called PNR data constitute records of each passenger’s travel arrangements and contain the information necessary for air carriers to manage flight reservations and check-in systems. Under this umbrella definition, a broad array of data may be included: from information on name, passport, means of payment, travel arrangements and contact details to dietary requirements and requests for special assistance. Amidst concerns regarding the compliance of such mechanisms with EU privacy and data protection standards, this model was internalised at EU level through the conclusion of three PNR Agreements with the US – one in 2004, which was struck down by the CJEU in 2006, and others in 2007 and 2012. In addition, PNR Agreements with Canada (currently the subject of litigation before the CJEU) and Australia have also been adopted.

The idea of developing a similar system to process EU air travel data had been on the agenda for almost a decade, particularly since the EU-US PNR Agreements contain reciprocity clauses referring to the possibility of the EU developing such systems. The first proposal, for a Framework Decision, dates back to 2007. However, no agreement was reached before the entry into force of the Lisbon Treaty. A revised proposal was released in 2011, essentially mimicking the EU-US PNR model, at least as regards the types of data to be processed and the focus on assessing the risks attached to passengers as a means of preventing terrorist attacks or other serious crimes. In comparison with the proposed Framework Decision it constituted an improvement (for instance, it provided for a reduced retention period and prohibited the processing of sensitive data), yet it was met with great scepticism by a number of EU actors, including the European Data Protection Supervisor, the Fundamental Rights Agency and the Article 29 Working Party, which argued that it failed to respect the principles of necessity and proportionality. Eventually, the proposal was rejected by the European Parliament’s LIBE Committee on fundamental rights grounds, but the plenary vote was postponed and the proposal was referred back to the Committee.

The EU PNR project was brought back to life after the Charlie Hebdo events in January 2015. In the extraordinary JHA Council meeting of 20 November, immediately after the Paris terrorist attacks, the Council reiterated ‘the urgency and priority to finalise an ambitious EU PNR before the end of 2015’. Indeed, on 4 December 2015 a compromise text was agreed. A few days later, the Council confirmed the agreement, but the Parliament did not give its blessing until April 2016, presumably in the light of the negotiations on the Data Protection legislative reforms, which were running in parallel. The fact that the legality of the EU-Canada PNR Agreement was disputed did not affect the course of the negotiations.

2. The EU PNR Directive in a nutshell

The EU PNR Directive places a duty on airline carriers operating international flights between the EU and third countries to forward PNR data of all passengers (as set out in Annex I) to the Passenger Information Unit (PIU) established at domestic level for this purpose (Article 4). According to Article 2 of the Directive, Member States are given the discretion to extend the regime set out in the Directive to intra-EU flights, or to a selection of them (for a discussion see Council Documents 8016/11 and 9103/11, partly accessible). Perhaps unsurprisingly, all participating Member States have declared their intention to make use of their discretion.

Once transmitted, the data will be stored and analysed by the PIU. The purpose of this analysis is to ‘identify persons who were previously unsuspected of involvement in terrorism or serious crime’ and who require further examination by the competent authorities in relation to the offences listed in Annex II of the Directive. Contrary to the Commission’s assertions that PNR data will be used in different ways – reactively, pro-actively and in real time – the focus on prevention is central. The analysis entails a risk assessment of all passengers prior to their travel, on the basis of pre-determined criteria to be decided by the respective PIU and possibly involving cross-checking against existing blacklists (Article 6(3)).
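Purely by way of illustration – the Directive prescribes neither a data model nor any matching logic, so every field name, criterion and watchlist entry below is a hypothetical assumption – the kind of rule-based screening against pre-determined criteria described above might look something like this sketch:

```python
# Illustrative sketch only: the PNR Directive prescribes no data model or
# matching logic. All field names, criteria and watchlist entries here are
# hypothetical assumptions made for this example.
from dataclasses import dataclass

@dataclass
class PNRRecord:
    name: str               # passenger name as booked
    route: tuple            # sequence of airport codes
    payment_method: str     # e.g. "cash", "card"
    days_booked_ahead: int  # days between booking and departure

# Each "pre-determined criterion" is modelled as a labelled predicate.
CRITERIA = {
    "cash_late_booking": lambda r: r.payment_method == "cash" and r.days_booked_ahead <= 2,
    "indirect_route": lambda r: len(r.route) > 3,
}

WATCHLIST = {"DOE/JOHN"}  # hypothetical blacklist of name keys

def assess(record: PNRRecord) -> list:
    """Return the labels of every criterion the record matches."""
    hits = [label for label, matches in CRITERIA.items() if matches(record)]
    if record.name in WATCHLIST:
        hits.append("watchlist_match")
    return hits

if __name__ == "__main__":
    passenger = PNRRecord("DOE/JANE", ("ATH", "IST", "FRA", "BRU"), "cash", 1)
    print(assess(passenger))  # ['cash_late_booking', 'indirect_route']
```

Even this toy version makes the concern visible: it is the criteria, not the traveller’s conduct, that determine who is flagged, and nothing in the Directive makes those criteria foreseeable to the persons screened.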

Furthermore, the PIUs will respond to requests by national authorities to access the data on a case-by-case basis and subject to sufficient indication (Article 6(2)(b)). Nevertheless, processing should not take place on the basis of sensitive data revealing race, ethnic origin, religion or belief, political or any other opinion, trade union membership, health or sexual orientation, etc. (Recital 20). According to Article 12, the initial retention period is six months, after which PNR data will be depersonalised, meaning that the PIU is entrusted with the task of masking out the names, address and contact information, payment information, frequent flyer information, general remarks and all API data. This process should not be confused with anonymisation, as the data remain re-identifiable and may still be used for criminal law purposes under ‘very strict and limited conditions’ (Recital 25). Thus, upon expiry of the six-month retention period, disclosure of the full PNR data is permitted if approved by a judicial authority or another national authority competent to review whether the conditions have been met, subject to information and ex post review by the Data Protection Officer of the PIU (Articles 12(3) and 5).
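Again as a hedged illustration only – the Directive lists which data elements must be masked after six months but says nothing about formats or implementation – the depersonalisation step could be sketched as follows; the dictionary layout, the field names and the 183-day approximation of ‘six months’ are all assumptions:

```python
# Minimal sketch of the Article 12 depersonalisation step. The field names,
# dictionary layout and 183-day approximation of "six months" are assumptions
# for illustration; the Directive specifies only which elements to mask.
from datetime import datetime, timedelta

MASKED_FIELDS = {
    "name", "address", "contact_information", "payment_information",
    "frequent_flyer_information", "general_remarks", "api_data",
}

def is_due_for_masking(received: datetime, now: datetime) -> bool:
    """True once the six-month initial retention period has elapsed."""
    return now - received >= timedelta(days=183)

def depersonalise(record: dict) -> dict:
    """Mask the identifying elements; everything else stays in the clear."""
    return {field: ("***MASKED***" if field in MASKED_FIELDS else value)
            for field, value in record.items()}

if __name__ == "__main__":
    pnr = {"name": "DOE/JANE", "route": "ATH-BRU",
           "payment_information": "VISA ****1234", "seat": "14C"}
    if is_due_for_masking(datetime(2016, 5, 1), datetime(2016, 11, 10)):
        pnr = depersonalise(pnr)
    print(pnr)
```

The sketch also illustrates why masking is not anonymisation: the unmasked remainder (route, seat, dates) still describes a specific journey, and the masked elements can be restored under the Article 12(3) procedure.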

3. Privacy and surveillance of movement

The challenges that the development of the EU PNR system poses to the protection of privacy and data protection rights are acute. In essence, as with the PNR Agreements, the Directive allows the systematic, blanket and indiscriminate transfer, storage and further processing of a wide range of personal data of millions of travellers to and from the EU. Drawing on Digital Rights Ireland and the recent Opinion of AG Mengozzi on the EU-Canada PNR Agreement, the interference with the rights to privacy (Article 7 EUCFR and Article 8 ECHR) and data protection (Article 8 EUCFR) is particularly serious. On the basis of the data collected, which include biographic information, credit card details and contact information, law enforcement authorities will be able to compile a rather complete profile of travellers’ private lives.

The involvement of the private sector in the fight against terrorism and serious crime is considerably extended, particularly if one takes into account that the obligations on air carriers extend to non-carrier economic operators (e.g. travel agencies). In addition, the inclusion of intra-EU flights within the scope of the Directive significantly expands the reach of surveillance. Indeed, back in 2011 it was noted that intra-EU flights represent the largest share of EU flights (42%), followed by international flights (36%), with only 22% of flights operating within a single Member State (Council Document 8016/11). In this framework, the movement of the vast majority of travellers, including EU citizens, is placed under constant monitoring, irrespective of the fact that they are a priori innocent and not suspected of any criminal offence. In effect, the operation of the PNR scheme signifies a reversal of the presumption of innocence, whereby everyone is deemed a potential security risk whose examination is necessary in order to confirm or rebut this presumption. Moreover, there is no differentiation between flights transporting persons at risk and other flights.

Furthermore, the risk assessment will take place in a highly obscure manner, particularly since the Directive fails to prescribe comprehensively and in detail how the data will be analysed. The underlying rationale is the profiling of all passengers and the identification of behavioural patterns in a probabilistic logic, but nowhere in the Directive is it indicated that this is indeed the case. This lack of clarity raises concerns, considering that the recently adopted Data Protection Directive includes a definition of profiling (Article 3(4)). Moreover, it is stated that ‘relevant databases’ may be consulted; however, it is not clear which these are. For instance, routine consultation of the databases storing asylum seekers’ fingerprints or visa applicants’ data (Eurodac and VIS respectively) would frustrate their legal frameworks, resulting in a domino effect of multiple function creeps. It may even whet Member States’ appetite for the systematic processing of EU nationals’ personal data in centralised databases in the name of a more ‘efficient’ fight against terrorism.

This ambiguous modus operandi of the PIUs may even call into question the extent to which the interference with privacy is ‘in accordance with the law’ (Article 8(2) ECHR) or, in EU terms, ‘provided for by law’ (Article 52(1) EU Charter). According to the settled case law of the ECtHR, every piece of legislation must meet the requirements of accessibility and foreseeability as to its effects (Rotaru v Romania). The lack of clear rules as to how the processing of data will take place may mean that travellers cannot foresee the full extent of its effects.

Another contested issue is the ambiguity of the definitions of terrorism and serious crime at EU level. The offences falling under the rubric of terrorism are currently being revised, which has led to criticism for lack of clarity, whereas the definition of serious offences (acts punishable by a custodial sentence or detention order for a maximum period of at least three years) constitutes a relatively low threshold, particularly in those Member States where domestic criminal law allows for potentially long custodial sentences for minor crimes. In addition, as regards the conditions of access by national competent authorities, the requirement that the request be based on ‘sufficient indication’ seems to fall short of the criteria established in Digital Rights Ireland. The threshold is particularly low and may lead to generalised consultation by law enforcement authorities, and it is uncertain who will verify that there is indeed sufficient indication. As for the offences covered by the scope of the Directive, although Annex II sets out a list in this regard, PNR data could still be used for other offences, including minor ones, when these are detected in the course of enforcement action further to the initial processing.

Moreover, in relation to the period for which the data will be retained, it appears that the EU institutions have no clear understanding of what constitutes a proportionate retention period. For instance, the 2007 proposal envisaged an extensive retention period of five years, after which the data would be depersonalised and kept for another eight years, whereas the 2011 proposal prescribed a significantly reduced initial retention period of 30 days, after which the data would be anonymised and kept for a further period of five years. In its General Approach (Council Document 14740/15), the Council called for an extension of the initial retention period to two years, followed by another three years of storage for depersonalised data. A more privacy-friendly approach can be found in an Opinion of the Council Legal Service dating from 2011, according to which the data of passengers on flights at risk would initially be retained for 30 days and then held for an overall period of six months (Council Document 8850/11, in German). Some Member States supported a retention period of less than 30 days (Council Document 11392/11). It is welcome that there are two sets of deadlines and, more importantly, that re-personalisation may take place only in limited circumstances. However, there is no indication of why the chosen retention periods are proportionate. Furthermore, an approach differentiating between flights at risk and flights not at risk, with different retention periods, would seem more balanced.

4. Free movement and citizenship concerns

In addition to the privacy challenges highlighted above, another point of concern is whether the processing of PNR data, including on intra-EU flights, could infringe the free movement rights enjoyed by EU citizens. In this respect, the Commission Legal Service found that the EU PNR does not obstruct free movement (see Council Document 8230/11, which is partially available to the public, although the outcome of the opinion is attested in Council Document 8016/11). Nonetheless, the Parliament managed to include a reference, in Article 4, that any assessments on the basis of PNR data shall not jeopardise the right of entry to the territory of the Member States concerned. The extent to which this reference is sufficient is doubtful.

According to Article 21 of the Schengen Borders Code, police controls performed in the territory of a Member State are allowed insofar as they do not have an effect equivalent to border checks. Such an effect is precluded when, inter alia, the checks are carried out as spot-checks. In Melki, the CJEU found that ‘controls on board an international train or on a toll motorway’ limited in their application to the border region ‘might (…) constitute evidence of the existence of such an equivalent effect’ (para 72). By analogy, checks tied to the crossing of an internal border and carried out in the systematic manner set out in the Directive could have an effect equivalentent to a border check. The lack of any differentiation between flights at risk and flights not at risk (a differentiated approach was favoured by the Council Legal Service, Council Document 8850/11), and the fact that Member States are left entirely free to determine the extent to which they monitor flights to and from other Member States, could increase the risk of these checks falling into the category of controls with an effect equivalent to border control.

5. Conclusion

The EU PNR Directive is yet another example of how counter-terrorism rhetoric outweighs serious fundamental rights concerns in the name of ensuring security. The storyline is well known: after a terrorist attack, numerous ideas – whether incorporated in legislative proposals that had stalled, or ultimately too ambitious and controversial to be presented in the first place – feature on the EU agenda. The EU PNR initiative was buried due to privacy concerns and was brought back from the dead when the circumstances matured. Soon national law enforcement authorities will put their hands into the passengers’ data jar and deploy their surveillance techniques on an unprecedented and unpredictable scale.

By internalising US standards, the EU puts the privacy of individuals under threat. The new instrument no longer targets third-country nationals only, but also EU citizens, thus marking the end of an era in which such instruments were used ‘solely’ on foreigners. Undoubtedly, there is strong momentum for justifying mass surveillance practices. While awaiting the ruling on the EU-Canada PNR Agreement, as well as the ruling in Tele2 Sverige (following up on Digital Rights Ireland), one can only hope that the CJEU will uphold its inspiring reasoning and reiterate the important limits placed on the deployment of surveillance practices, giving proper weight to the fundamental right to the protection of personal data.