The surveillance society (4): a further study for the European Parliament

Following the so-called “Snowden revelations” at the end of the last legislature, the European Parliament adopted a wide-ranging resolution addressing the main problems arising from an emerging surveillance society. The resolution adopted, inter alia, “A European Digital Habeas Corpus” intended to protect fundamental rights in a digital age.

Work on this sensitive issue continues in the current legislature, as the European Parliament has a pivotal role to play in establishing the European Digital Agenda, reforming data protection, and approving an “umbrella” agreement with the United States intended also to cover access to personal data for security purposes.

To support this parliamentary strategy several studies have been commissioned, the most recent of them by the EP’s Science and Technology Options Assessment (STOA) panel, presented at the meeting of the responsible parliamentary committee (LIBE) on 23 April 2015.

The aim of the study is to propose measures to reduce the risks identified in the current generation of networks and services and to identify long-term, technology-oriented policy options for a better, more secure and more privacy-friendly internet, whilst at the same time allowing governmental law enforcement and security agencies to perform their duties and obtain quickly and legally all the information needed to fight crime and to protect national security interests.

The first part of the study concludes with a list of security solutions to help citizens protect themselves from illicit mass surveillance activities. In its conclusions it recognises that “Mass surveillance is a reality today and has been applied for years by national intelligence agencies of a number of countries, namely those allied in the Five Eyes coalition, but also including EU members and other countries. The agencies involved in mass surveillance practices justify these methods with the doctrine of pre-emptive prevention of crime and terrorism and adopt the principle of omniscience as its core purpose. This objective of intercepting all communication taking place over Internet or telephone networks is in many cases pursued by applying questionable, if not outright illegal, intrusions in IT and telecommunication systems. This strategy accumulates an amount of information that can only be processed and analysed by systems of artificial intelligence, able to discern patterns which indicate illegal, criminal, or terrorist activities. While warranted and lawful interception of data on targeted suspects is a required and undisputed tool for law enforcement to access evidence, the generalised approach of information gathering through mass surveillance is violating the right to privacy and freedom of speech. The delegation of decisions on suspicious data patterns or behaviour of citizens to intelligent computer systems is furthermore preventing accountability and creating the menace of an Orwellian surveillance society. Many citizens are not aware of the threats they may be subject to when using the Internet or telecommunication devices. As of today, the only way for citizens to counteract surveillance and prevent breach of privacy consists in guaranteeing uncorrupted end-to-end encryption of content and transport channel in all their communications.
Due to the amount, complexity and heterogeneity of tools, this is however a task too complex to achieve for most technically inexperienced users. This situation calls for both awareness creation and the provision of integrated, user-friendly and easy-to-use solutions that guarantee privacy and security of their communications. But policy makers must understand that the problem of mass surveillance cannot be solved on a technical terrain, but needs to be addressed on a political level. An adequate balance between civil liberties and legitimate national security interests has to be found, based on a public discussion that empowers citizens to decide upon their civil rights affected and the societal values at stake”.

The second part of the study concludes with the proposal of several policy options with different levels of public intervention and technological disruption.

A STOA options brief below provides an overview of all the policy options, and two short video clips have been published on YouTube to raise public awareness.

Further information


The surveillance society (3) by David COLE

Originally published on TIME

NSA Ruling Is a Victory for Privacy

By David COLE (*)

Renew the NSA’s authority — but only if it is significantly reined in

In a major victory for privacy and democracy, the U.S. Court of Appeals for the Second Circuit ruled today that the National Security Agency has been illegally collecting information about Americans’ phone calls—all Americans’ phone calls—for at least nine years. In the name of fighting terror, the agency has been collecting records on all of us—who we call, when we call, and how long we talk, although not the contents of the calls—without regard to whether we are connected to terrorism. The court unanimously ruled that the NSA’s massive “phone metadata” program, first revealed by Edward Snowden in June 2013, is not authorized by the statute the NSA has long relied on to conduct the program. Congress is currently considering whether to renew, reform, or let the provision expire. Today’s ruling should inform Congress’s debate, and supports renewing the NSA’s authority only if it is significantly reined in.

The court’s decision turned on the meaning of Section 215 of the USA Patriot Act, passed shortly after 9/11. It authorizes the government to obtain records from businesses if they are “relevant” to an “authorized investigation … of international terrorism.” This language would plainly enable the NSA to obtain the phone calling records, for example, of a suspected terrorist, or of persons closely connected to him. But in a secret interpretation allowed by a secret intelligence court in 2006, the NSA asserted that this provision empowered it to obtain the phone records of every American, regardless of whether they were in any way connected to terrorism. It’s that interpretation that the U.S. Court of Appeals wisely rejected today.

The NSA argued that every American’s records were “relevant” and therefore subject to collection because at some point in the future they might come in handy to a terrorism investigation. But as the court of appeals reasoned, that theory is limitless. It would authorize the NSA to collect all business records about everyone—including financial records, medical records, and email and internet search records—without any showing of an actual tie to terrorism.

The court of appeals is not the first to find the NSA’s interpretation a stretch. When Representative Jim Sensenbrenner, a Wisconsin Republican who drafted the Patriot Act provision in question, learned of the NSA’s interpretation, he said that he never intended it to authorize such “dragnet collection” of information on innocent Americans. The Privacy and Civil Liberties Oversight Board, a government oversight body created by Congress and appointed by the president, concluded in January 2014 that Section 215 did not authorize the NSA’s program.

But the unanimous decision of the federal court of appeals has the force of law. More important, its opinion makes eminent sense, underscoring that when Congress gives the executive authority to obtain information only where it is relevant to a specific investigation, the NSA should not secretly expand that to collect records on us all.

The court’s timely decision comes as Congress is considering what to do about Section 215. A bipartisan group of members, including Senators Pat Leahy and Mike Lee, and Representatives Sensenbrenner and John Conyers, has introduced the USA Freedom Act, which would end the NSA’s bulk collection authority, and allow it to seek phone records only when reasonably connected to specific identifiers or “selectors” tied to terrorism. Senator Mitch McConnell, by contrast, has proposed a bill that would reauthorize Section 215 with no reforms whatsoever.

Congress should be guided by the federal appeals court’s careful reasoning. As the court found, the authority asserted and exercised by the NSA was entirely unprecedented. It goes far beyond any preexisting authority to obtain records in any other investigative context. Digital technology makes this possible; the government can now track us in ways that until very recently were simply impossible. But just because it can do so doesn’t make it right to do so. If we are to preserve our privacy in the digital age, we must confront that reality and insist that the government’s new spying technologies be appropriately constrained.

Congress should pass the USA Freedom Act. But doing so will by no means be sufficient. Snowden revealed a wide range of NSA spy programs that intrude on the privacy rights of innocent Americans and non-Americans alike. The USA Freedom Act deals only with one such program. But the court of appeals, and the USA Freedom Act, point the way forward in a more general way. If we are to rein in the NSA, we must insist first that there be public debate before the government institutes sweeping new surveillance programs, and we must demand, second, that surveillance be targeted at individuals as to whom there is suspicion of wrongdoing, and not applied indiscriminately to us all.

 (*) George J. Mitchell Professor in Law and Public Policy at Georgetown University Law Center.

The Surveillance society (2) by Jens-Henrik JEPPESEN

Controversial French Surveillance Regulation Should Re-Ignite EU Debate on Surveillance Reform

Originally published HERE

by Jens-Henrik JEPPESEN

As has been widely reported in the press, France is moving ahead with new legislation to enable expanded electronic surveillance. As expected, the surveillance bill, the Projet de Loi Relatif au Renseignement, was passed by Members of the French National Assembly by an overwhelming majority on May 5, sparking a fresh round of heated debate.  The legislation will now move to France’s other parliamentary house, the Senate.

The bill is so excessive that we believe it could, and should, lead to a renewed debate on surveillance reform across Europe.

A wide range of French civil society groups, lawyers, and technology industry groups have voiced strong opposition to the bill from its inception. Some have even dubbed the law a French Patriot Act, and the expanded powers found in the legislation would in fact pose a serious threat to human rights in France.  Indeed, the bill is so excessive that we believe it could, and should, lead to a renewed debate on surveillance reform across Europe.   We have long believed that action at the EU level is critical to protecting human rights in the surveillance context, and the French bill shows that this need is more urgent than ever.

According to an analysis by one of the main opponents of the bill, the French digital rights group La Quadrature du Net, the draft bill was introduced by Prime Minister Valls with the ostensible goal of providing a clear legal framework for intelligence gathering that respects fundamental rights. In reality, however, the law expands the scope of permissible electronic surveillance and legalizes a range of highly problematic monitoring techniques that can be extended for potentially indefinite periods and are subject only to relatively weak oversight.  This creates a range of serious civil liberties concerns.

One issue is the widespread use of privacy-invasive surveillance technology. The law would authorize government officials to compel telecommunications service providers to install so-called “black boxes” to monitor the metadata of users’ personal communications for suspicious patterns or behavior, based on automated analysis and algorithms. No judicial review, or judicial warrant, would be required for such surveillance. Additionally, although the data would initially be analyzed on an anonymous basis (and would not include the content of messages), the authorities would have the power to lift this anonymity for at least some individual users if they believe the patterns show a terrorist threat. Some experts have already begun to highlight the risk of false positives, as well as the technical flaws in the idea of “anonymous” data that can be “de-anonymized”. These practices show that the French interior minister’s claim that the bill is “not aimed at installing generalized surveillance” in France is flatly wrong.
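The false-positive risk the experts point to is a base-rate problem: when the behaviour being searched for is extremely rare, even a very accurate automated detector will flag mostly innocent people. A minimal back-of-the-envelope sketch, using purely illustrative numbers that do not come from the bill or any real system:

```python
# Illustrative base-rate calculation: all figures below are hypothetical
# assumptions chosen only to show the shape of the problem.
population = 60_000_000        # roughly the population of France
true_threats = 600             # assumed number of actual threats (1 in 100,000)
sensitivity = 0.99             # detector flags 99% of real threats
false_positive_rate = 0.001    # detector flags 0.1% of innocent people

flagged_threats = true_threats * sensitivity
flagged_innocent = (population - true_threats) * false_positive_rate

# Precision: of all people flagged, what share are real threats?
precision = flagged_threats / (flagged_threats + flagged_innocent)
print(f"people flagged: {flagged_threats + flagged_innocent:,.0f}")
print(f"share of flagged people who are real threats: {precision:.2%}")
```

Under these assumptions the system flags tens of thousands of people, of whom fewer than one in a hundred is an actual threat, which is the kind of outcome critics of algorithmic “black box” monitoring have in mind.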

Another problem is the broad objectives for which the surveillance techniques foreseen in the bill can be used. The bill uses wording such as “essential foreign policy interests,” “international commitments,” “essential economic or scientific interests,” and “collective violence that could cause serious harm to the public peace.” This is in addition to protecting national security and fighting terrorism and organized crime. With such a vaguely defined and broad scope of application, the surveillance measures authorized by the bill could be brought to bear in a very wide set of contexts and cover large sections of society.

Now would be an excellent time to open a proper European debate on what sort of surveillance may be justified, and what proper oversight of surveillance programs looks like…

Furthermore, the bill creates a set of separate rules on “communications sent or received abroad.” LQDN’s analysis shows that interception, collection, retention, and use of such communications by the intelligence services would not be covered by any of the usual privacy protections found in French law. The rules on this data would be set out in a classified decree to be adopted sometime in the future.

Now would be an excellent time to open a proper European debate on what sort of surveillance may be justified, and what proper oversight of surveillance programs looks like. We are conscious of the limits on the authority of the EU institutions in matters of national security. However, the EU Member States have clear and inescapable obligations under EU law as well as the European Convention on Human Rights to conduct their surveillance activities in strict accordance with privacy and other fundamental rights.   Neither France nor any other Member State can ignore those obligations, including by passing laws as excessive as the one the French Parliament is currently considering.  These pressing issues need to be debated, and any country that overreaches must be held to account.

Thus far, the European Member States have been reluctant to engage in such a debate on their own initiative. Therefore, it would be appropriate for both the European Parliament and the European Commission to take the lead in getting that debate going.

The Surveillance Society (1) by Emilio Mordini

Original published HERE

By Emilio MORDINI

Today (May 7) a US federal appeals court ruled that the phone metadata program of the National Security Agency (NSA) is illegal. Metadata are the ancillary details generated by a piece of communication. Telephone metadata include details such as the length of a call, the phone number from which the call was made, the phone number called, the telephone devices used, the location of the call, and so on. Telephone metadata do not include voice recordings or call content. In 2014 the Stanford computer scientist and lawyer Jonathan Mayer demonstrated that very sensitive inferences can be drawn from phone metadata, such as details about an individual’s familial, political, professional, religious, and sexual life. Mayer showed that metadata are highly meaningful even in a small population and over a short time period.
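The inference Mayer describes requires no access to call content. A toy sketch, with entirely invented phone numbers and records, of how a simple aggregation over metadata alone can single out a revealing pattern (here, assume "555-2468" is a hypothetical health clinic hotline):

```python
from collections import Counter

# Hypothetical call-metadata records for one subscriber:
# (caller, callee, start time, duration in seconds) - no content at all.
calls = [
    ("555-0101", "555-2468", "2014-03-02 09:15", 240),   # clinic hotline
    ("555-0101", "555-2468", "2014-03-09 09:10", 300),   # clinic hotline again
    ("555-0101", "555-1357", "2014-03-10 18:40", 1200),  # a law office
    ("555-0101", "555-9999", "2014-03-11 19:05", 60),    # a pizzeria
]

# Count how often each number is contacted; repeated contacts to a
# sensitive number are already an inference about the caller's life.
contacts = Counter(callee for _, callee, _, _ in calls)
repeated = {number: n for number, n in contacts.items() if n > 1}
print(repeated)
```

The weekly calls to the clinic hotline stand out from four records alone, which is why metadata are "highly meaningful even in a small population and over a short time period".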

The NSA’s telephone metadata program, which started seven months before the September 11, 2001 attacks, collected metadata on hundreds of billions of telephone calls made over several years through the largest telephone carriers in the United States. In 2006, the existence of the NSA program was brought to light by USA TODAY. However, it was only on June 5, 2013 that The Guardian published a top-secret document providing conclusive evidence that the NSA collected phone metadata from hundreds of millions of phone subscribers. The document was among the classified NSA files leaked by Edward Snowden.

On June 11, 2013, the American Civil Liberties Union (ACLU) filed a lawsuit against the NSA, challenging the legality and constitutionality of the phone metadata program. On December 16, 2013 the District Court for the Southern District of New York ruled that the phone metadata program was legal and did not violate the Fourth Amendment (on August 29, 2013, the Foreign Intelligence Surveillance Court had already stated that phone metadata “is not protected by the Fourth Amendment, since the content of the calls is not accessed”). The ACLU appealed against this decision. Now the court of appeals has definitively ruled that the phone metadata program is illegal, because it “exceeds the scope of what Congress has authorized and therefore violates § 215” of the Patriot Act. By ruling on the illegality of the program, the court avoided taking a stance on its constitutionality. What is interesting, however, is the court’s main argument, namely that Patriot Act § 215 provides the legal framework for an investigation, but not for a generic threat assessment. Investigation, argues the court, is an activity that entails “both a reason to conduct the inquiry and an articulable connection between the particular inquiry being made and the information being sought. The telephone metadata program, by contrast, seeks to compile data in advance of the need to conduct any inquiry (or even to examine the data), and is based on no evidence of any current connection between the data being sought and any existing inquiry”. Why is this argument intriguing? Because it implies a counter-intuitive explanation of surveillance policies.

Why are so many governments and rulers so passionate about surveillance technologies? Because they want to know everything about us, the standard account goes. No, the court tells us; they spy because they have no inquiry to conduct, no explanation to test, no investigation to carry out. In short, because they do not know, are not able to know, and do not want to know. They do not understand the world and its conflicts, they have no interpretative grids, they cannot figure out the future. They are just “walking shadows, poor players that strut and fret their hour upon the stage”. They spy for the sake of spying, because of their political emptiness, because of their intellectual laziness. Surveillance is for them the obscene surrogate for knowledge. Understanding is precluded by their short-sighted view; modern, sophisticated technologies become a surrogate for intelligence.

Today, privacy advocates are celebrating, yet this ruling also dispels some of their paranoid fantasies. The surveillance society is ruled not by Big Brother, but by an idiot Peeping Tom.

THE CJEU WASHES ITS HANDS OF MEMBER STATES’ FINGERPRINT RETENTION (JOINED CASES C-446/12 – 449/12 WILLEMS)

ORIGINAL PUBLISHED ON EU LAW BLOG


When is the Charter of Fundamental Rights of the EU applicable to a Member State measure? In C-446/12 – 449/12 Willems the CJEU held that a Member State which stores and uses fingerprint data, originally collected in compliance with Regulation No 2252/2004, but which the Member State then uses for purposes other than those stipulated in the Regulation, is not acting within the scope of EU law, and therefore is not bound by the Charter. This case appears to indicate a retreat by the Court from the expansive interpretation of the scope of application of the Charter which it had previously laid down in C-617/10 Fransson.

Facts and judgment

Council Regulation (EC) No 2252/2004 requires Member States to collect and store biometric data, including fingerprints, in the storage medium of passports and other travel documents, and requires that such data be used for verifying the authenticity of the document or the identity of the holder. The Netherlands introduced measures requiring the collection and retention of the fingerprint data for use in connection with travel documents. However, those national measures also provided that such data could be kept in a central register and used for other purposes (such as national security, the prevention of crime and the identification of disaster victims). The applicants made passport applications but refused to provide the fingerprint data. They argued, inter alia, that the storage and further use of those data breached their fundamental rights under Articles 7 and 8 of the Charter of Fundamental Rights of the EU. The national court referred two questions for a preliminary ruling.

The first question concerned the applicability of the Regulation to national identity cards. The Court held that the Regulation did not apply to such cards. The second question is the one I want to focus on: Does Article 4(3) of the Regulation, read together with Articles 6 and 7 of Directive 95/46/EC  and Articles 7 and 8 of the Charter, require Member States to guarantee that the biometric data collected and stored pursuant to that Regulation will not be collected, processed and used for purposes other than the issue of passports or other travel documents?

The ECJ had already held (in C-291/12 Schwarz) that the collection of those data for the purposes stipulated in the regulation (to verify the authenticity of the passport or the identity of the holder) was compatible with the Charter. The question was whether further processing of those data by the Member State would similarly be compatible.

The Court noted that the Regulation did not provide a legal basis for such further processing – if a Member State were to retain those data for other purposes, it would need to do so in the exercise of its own competence (para 47). On the other hand, the Regulation did not prohibit a Member State from using the data for other purposes. From these two observations the Court concluded that the Regulation was not applicable. The Court then cited the famous passage in C-617/10 Fransson where it had held that the applicability of EU law entails the applicability of the Charter. As the Regulation was not applicable, the Charter was not applicable either.

The Court then turned to Directive 95/46/EC  (the Data Protection Directive). It merely observed that the referring court requested the interpretation of the Regulation “and only that Regulation”. As the Regulation was not applicable, there was no need to examine whether the Data Protection Directive may affect the national measures.

Comment

I will focus on the question of the applicability of the Charter (see Steve Peers’ comment on the “appalling” reasoning of the Court in respect of the Data Protection Directive). This judgment appears to signal a retreat by the Court from the expansive understanding of the scope of application which was laid down in Fransson. It is true that in that case the Court had held that when EU law is not applicable, the Charter is not applicable. But when applying that test to the facts, the Court observed that the national (Swedish) measure was connected (in part) to infringements of the VAT Directive, and was therefore designed to implement an obligation imposed on the Member States by EU law “to impose effective penalties for conduct prejudicial to the financial interests of the European Union”. So in Fransson the Court held that national measures which were connected in part to a specific obligation imposed by EU law on the Member State fell within the scope of application of EU law, and therefore of the Charter.

In the present case, the national measures are designed (in part) to implement the obligation imposed on the Member States by the Regulation, to collect and retain fingerprint data. Applying the reasoning in Fransson it would seem to follow that such measures would fall within the scope of EU law – after all, the measures relate to the retention of fingerprints, and the reason the fingerprints need to be retained stems from a specific obligation imposed, by EU law, on Member States: the obligation to collect and store biometric data with a view to issuing passports and travel data, set out in Article 4(3) of the Regulation.

Of course, this case can be distinguished from Fransson. In Fransson the Member State’s measure could be seen as not only stemming from the specific obligation imposed by EU law, but also as furthering the EU purpose of preventing conduct prejudicial to its financial interests. In contrast, in the present case the Member State’s measure is in furtherance of a member state’s purposes, and not an EU purpose.

But such a distinction would seem to entail a very strict approach to what obligations are imposed by EU law, because the obligation which the Regulation imposes is not just to collect and store data, but also (under Article 4(3) of the Regulation) to ensure that the data are used only for the purposes specified in the Regulation. That obligation was subsequently modified by Recital 5 of Regulation 444/2009, which states that Regulation 2252/2004 is “without prejudice to any other use or storage of these data in accordance with national legislation of Member States.” But is such a Recital sufficient to place the measures concerning those data outside the scope of EU law, or does it merely confer a discretion on states to adopt such measures, provided that they are compatible with EU law? Unfortunately, the reasoning in this judgment does not provide much guidance.

Conclusion

The approach of the Court in Fransson did not meet universal approval, and the judgment of the German Federal Constitutional Court in the Counter-Terrorism Database case may be read as a warning shot across the CJEU’s bows to make sure that the Charter is not applied to Member States’ measures in a way that “question[s] the identity of the [national] constitutional order”. And by emphasising the autonomy of EU fundamental rights in its recent Opinion 2/13 on accession to the ECHR, the Court certainly raised the stakes involved in demanding Member State compliance with the Charter. So this case may indicate a desire to ensure that the EU fundamental rights standard is reserved for those Member State measures where it matters most that an EU standard is applied – those matters where the primacy, unity and effectiveness of EU law are at stake.

In effect, this case can be read as tacit acceptance of the Opinion of AG Cruz Villalón in Fransson, in which he proposed that the oversight by the Court of the exercise of public authority by the Member States be limited to those cases where there was “a specific interest of the Union in ensuring that that exercise of public authority accords with the interpretation of the fundamental rights by the Union”. However, that Opinion was a well-reasoned legal argument. This judgment leaves many questions unanswered, and makes it very difficult to predict when a national measure will fall within the scope of EU law.

Furthermore, this approach sits uneasily with the self-understanding of the EU as a Union based on the rule of law, inasmuch as neither the Member States nor the EU institutions can avoid review of the conformity of their acts with fundamental rights (C-402/05 P and C-415/05 P Kadi). Through this Regulation, the EU requires the Member States to collect and store sensitive personal data of all EU citizens who wish to travel; but where the Member States go on to use those data in ways that may breach the fundamental rights of those EU citizens, the Court washes its hands of the matter.


The revision of the EU Anti-Money Laundering legal framework is fast approaching..

By Dalila DELORENZI (Free Group trainee)

1.Foreword

Broadly speaking, money laundering means the conversion of the proceeds of criminal activity into apparently clean funds, usually via the financial system, by disguising the sources of the money, changing its form, or moving the funds to a place where they are less likely to attract attention. Terrorist financing is the provision or collection of funds, by any means, directly or indirectly, with the intention that they should be used, or in the knowledge that they are to be used, to carry out terrorist offences. Since 1991, legislation has been introduced at EU level to limit these activities and to protect the integrity and stability of the financial sector and, more generally, of the Internal Market. The EU rules are to a large extent based on Recommendations adopted by the Financial Action Task Force (FATF), an intergovernmental body with 36 members and the participation of over 180 countries worldwide.

The directive currently in force is the Third Anti-Money Laundering (AML) Directive, which applies to the financial sector (credit institutions, financial institutions) as well as to professionals such as lawyers, notaries, accountants, real estate agents, casinos and company service providers. Its scope also encompasses all providers of goods when payments are made in cash in excess of EUR 15,000. All these addressees are considered “obliged entities”. The Directive requires these obliged entities to identify and verify the identity of customers (so-called customer due diligence, hereinafter ‘CDD’) and beneficial owners, and to monitor the financial transactions of the customers. It also includes obligations to report suspicions of money laundering or terrorist financing to the relevant Financial Intelligence Units (FIUs), as well as other accompanying obligations. The Directive also introduces additional requirements and safeguards (such as the requirement to conduct enhanced customer due diligence) for situations of higher risk.

In force since 2005, the Third Money Laundering Directive required revision against the backdrop of the constantly changing nature of money laundering and terrorist financing threats, facilitated by the constant evolution of technology and of the means at the disposal of criminals. In particular, the recent terrorist attacks in Paris have heightened the need for decisive action against terrorist financing, and further efforts need to be made to adapt the current framework to a different reality. Accordingly, measures have been taken at the international level by the Financial Action Task Force (FATF): a fundamental review of the international standards was undertaken and a new set of Recommendations was adopted in February 2012.

In parallel to the international process, the European Commission, with a view to complying with the international standards, has undertaken its own review of the European Anti-Money Laundering framework. This revision consisted of an external study (the so-called Deloitte study) on the application of the Third AMLD (Directive 2005/60/EC) and of extensive contacts and consultations with private stakeholders and civil society organisations, as well as with representatives of EU Member State regulatory and supervisory authorities and Financial Intelligence Units (FIUs).

The results of the Commission’s review were set out in a Report addressed to the EU Parliament and Council, which analysed how the different elements of the existing framework have been applied and how they may need to be changed, highlighting the need to introduce clarifications or refinements in a number of areas.

More specifically, the main problems in the current EU anti-money laundering/combating terrorist financing legislative framework are: (i) inconsistency with the recently revised international standards; (ii) different interpretation and application of rules across EU Member States; and (iii) inadequacies and loopholes with respect to the new money laundering and terrorist financing risks.

2. The EU Commission’s proposals

State Surveillance: the Venice Commission updates its 2007 Report

By Emilio DE CAPITANI

The Council of Europe’s European Commission for Democracy through Law (Venice Commission), during its 102nd Plenary Session (Venice, 20-21 March 2015), adopted an update of its 2007 Report on the Democratic Oversight of the Security Services, together with a Report on the Democratic Oversight of Signals Intelligence Agencies.
At a time when EU founding States such as France are discussing some very controversial rules on potential mass interception, and when the European Union is increasingly attracted by so-called “intelligence-led policing”, the Venice Commission’s recommendations are particularly timely and worth reading.

Below is the Executive Summary of the updated Venice Commission Report.

1. The scope of the study.
As a result of processes of globalization and of the creation of the internet, internal and external security threats can no longer be easily distinguished. Significant threats may come from non-state actors. Consequently, one of the most important developments in intelligence oversight in recent years has been that signals intelligence (SIGINT) no longer relates exclusively to military and external intelligence, but also falls to some extent within the domain of internal security. Thus, signals intelligence can now involve monitoring “ordinary telecommunications” (that is, “surveillance”) and has a much greater potential to affect individual human rights. Different states organize their signals intelligence function in different ways. The summary which follows discusses issues generally, and should not be seen as asserting that all states follow a particular model of signals intelligence, or regulate it in a particular way.

2. Is there a need for improved democratic control?
Strategic surveillance involves access both to internet and telecommunications content and to metadata (all data not part of the content of the communication). It begins with a task being given to the signals intelligence agency to gather intelligence on a phenomenon or a particular person or group. Very large quantities of content data, and metadata, are then collected in a variety of different ways. The bulk content is subjected to computer analysis with the help of “selectors”. These can relate to persons, language, key words concerning content (e.g. industrial products) and communication paths and other technical data.

3. Unlike “targeted” surveillance (covert collection of conversations by technical means (bugging), covert collection of the content of telecommunications and covert collection of metadata), strategic surveillance does not necessarily start with a suspicion against a particular person or persons. Signals intelligence aims to inform foreign policy generally and/or military/strategic security, not necessarily to investigate internal security threats. It has a proactive element, aiming to find or identify a danger rather than merely investigating a known threat. Herein lies both the value it can have for security operations, and the risks it can pose for individual rights.

4. Agencies engaged in signals intelligence tend to have the bulk of the intelligence budget, and produce most intelligence, but the systems of oversight over them have tended to be weaker. There are a variety of explanations for this.
First, it is argued that access to mere metadata does not seriously affect privacy, and nor does access to content data because this is done by computerized search programmes (“selectors”). However, metadata now can reveal much about private life, and the content selectors can be designed to collect information on specific human beings and groups.
Second, telecommunications used to be mainly by radio, with an ensuing lower level of privacy expectations; however, the vast bulk of telecommunications is now by fiber-optic cable.
Third, strategic surveillance being aimed at external communications, it was argued that it is the privacy of non-citizens or non-residents which is affected; however, leaving aside the issue of whether such a distinction is acceptable under the ECHR, for technical reasons there is an inevitable mixing of internal and external communications, and an ensuing risk of circumvention of the tougher domestic controls and oversight which might exist over “ordinary” surveillance.
Fourth, controls have been weaker on account of the technical complexity and rapid technological growth of the area. It should be borne in mind, however, that if this sector is left unregulated, it will be the intelligence agency itself, rather than the legislature, which carries out the necessary balancing of rights, with the risk of erring on the side of over-collecting intelligence.
Fifth, various factors (too rapid growth in the size of a signals intelligence agency, rapid growth in technology, loss of institutional memory, political pressure to secure quick results) may adversely impact the integrity and professionalism of the staff.
Finally, signals intelligence is an international cooperative network, which creates specific oversight problems.

5. Strategic surveillance is not necessarily “mass” surveillance but can be when bulk data is collected and the thresholds for accessing that data are set low. Signals intelligence agencies tend to possess much more powerful computing facilities and thus have a greater potential to affect privacy and other human rights. They thus need proper regulation in a Rechtsstaat.

6. Jurisdiction.
The collection of signals intelligence may legitimately take place on the territory of another state with its consent, but might still fall under the jurisdiction of the collecting state from the view point of human rights obligations under the ECHR. At any rate, the processing, analysis and communication of this material clearly falls under the jurisdiction of the collecting State and is governed by both national law and the applicable human rights standards. There may be competition or even incompatibility between obligations imposed on telecommunications companies by the collecting state and data protection obligations in the territorial state; minimum international standards on privacy protection appear all the more necessary.

7. Accountability. Organization.
Signals intelligence is expensive and requires sophisticated technical competence. Hence, while all developed states nowadays require a defensive function – cyber security – only some have an offensive signals intelligence capacity, either in the form of a specialist signals intelligence agency or by allocating a signals intelligence task to the external intelligence agency.

8. Form of the mandate.
Most democratic states have placed at least part of the mandate of the signals intelligence function in primary legislation, as required by the ECHR. More detailed norms or guidelines are normally set out in subordinate legislation promulgated either by the executive (and made public) or by the Head of the relevant agency (and kept secret). There may be issues of quality of the law (foreseeability etc) in this respect.

9. Content of the mandate.
The mandate of a signals intelligence agency may be drafted in very broad terms to allow collection of data concerning “relevant” “foreign intelligence” or data of “relevance” to the investigation of terrorism. Such broad mandates increase the risk of over-collection of intelligence. If the supporting documentation is inadequate, oversight becomes very difficult.

10. Collection of intelligence for “the economic well-being of the nation” may result in economic espionage. Strategic surveillance however is useful in at least three areas of business activity: proliferation of weapons of mass destruction (and violation of export control conditions generally), circumvention of UN/EU sanctions and major money laundering. A clear prohibition of economic espionage buttressed by strong oversight and the prohibition for the intelligence agencies to be tasked by the government departments or administrative agencies involved in promoting trade would be useful prevention mechanisms.

11. Bulk transfers of data between states occur frequently.
In order to avoid circumvention of rules on domestic intelligence gathering, it would be useful to provide that the bulk material transferred can only be searched if all the material requirements of a national search are fulfilled, and this is duly authorized in the same way as searches of bulk material obtained through national searches.

12. Government control and tasking.
Taskers depend on the nature of the intelligence sought (diplomatic, economic, military and domestic). Taskers should not be regarded as external controls.

13. Network accountability.
Due to their different geographical locations and to the nature of the internet, states frequently collect data which is of interest to other states, or have access to different parts of the same message. The links between allied states as regards signals intelligence may be very strong. The “third party” or “originator” rule may thus be a serious obstacle to oversight and should not be applied to oversight bodies.

14. Accountability and the case law of the European Court of Human Rights.
The ECHR consists of minimum standards, and it is only a point of departure for European States, which should aim to provide more extensive guarantees. The European Court of Human Rights has not defined national security but has gradually clarified the legitimate scope of this term. In its case-law on secret measures of surveillance, it has developed the following minimum safeguards to be set out in statute law in order to avoid abuses of power: the nature of the offences which may give rise to an interception order; a definition of the categories of people liable to have their telephones tapped; a limit on the duration of telephone tapping; the procedure to be followed for examining, using and storing the data obtained; the precautions to be taken when communicating the data to other parties; and the circumstances in which recordings may or must be erased or the tapes destroyed.

15. The Court’s case law on strategic surveillance is so far very limited, although there is also national case law and oversight bodies’ practice based on the ECHR. Several of the standards related to ordinary surveillance have to be adapted to make them apply to strategic surveillance. The first safeguard (applicable only to states which allow the use of signals intelligence to investigate crimes) is that the offences which may be investigated through signals intelligence should be enumerated, and provision should thus be made for the destruction of data which might incidentally be gathered on other offences. The exception of transferring data to law enforcement should be narrowly defined and subject to oversight.

16. Another safeguard is a definition of the categories of people liable to have their communications intercepted. The power to contact chain (i.e. to identify people in contact with each other) should be framed narrowly: contact chaining of metadata should normally only be possible for people suspected of actual involvement in particularly serious offences, such as terrorism. If the legislature nonetheless considers that a more widely framed contact-chaining power is necessary, then this must be subject to procedural controls and strong oversight.

17. As regards searches of content data, there are particular privacy implications when a decision is being considered to use a selector which is attributable to a natural person (e.g. his or her name, nickname, email address, physical address etc.). Strengthened justification requirements and procedural safeguards should apply, such as the involvement of a privacy advocate. The safeguard is also relevant as regards subsequent decisions to transfer intelligence obtained by strategic surveillance to internal security agencies, to law enforcement or to foreign services.

18. Interception of privileged communications by means of signals intelligence is particularly problematic as is use of signals intelligence against journalists in order to identify their sources. Methods must be devised to provide lawyers and other privileged communicants and journalists with some form of protection, such as requiring a high, or very high, threshold before approving signals intelligence operations against them, combined with procedural safeguards and strong external oversight.

19. The safeguard of setting out time limits is not as meaningful for strategic surveillance as it is for ordinary surveillance. Periods of surveillance tend to be long, and continually renewed. Retention periods also tend to be long: data originally thought to be irrelevant may, as a result of new data, come to be seen as relevant. Provision could be made for a requirement to make periodic internal reviews of the (continued) need to retain the data. To be meaningful, such a duty must be backed up by external oversight.

20. Two very significant stages in the signals intelligence process where safeguards must apply are the authorization and follow-up (oversight) processes. That the latter must be performed by an independent, external body is clear from the ECtHR’s case law. The question which arises here is whether even the authorization process should be independent.

21. Internal and governmental controls as part of overall accountability systems. For a number of reasons, it has been particularly tempting to rely primarily on internal controls in the area of strategic surveillance, but they are insufficient. Generally speaking, external oversight of signals intelligence needs to be strengthened considerably.

22. Parliamentary accountability.
There are a number of reasons why parliamentary supervision of strategic surveillance is problematic. First, the technical sophistication of signals intelligence makes it difficult for parliamentarians to supervise without the aid of technical experts. Second, the general problem of parliamentarians finding sufficient time for oversight alongside all their other duties is particularly acute as regards strategic surveillance, where controlling the dynamic process of refining the selectors (as opposed to post-hoc scrutiny) requires some form of standing body. Third, the high degree of network cooperation between certain signals intelligence agencies breeds an added reluctance to accept parliamentary oversight, which can thus affect not simply one’s own agencies but also those of one’s allies. In some states the doctrine of parliamentary privilege means that parliamentary committees cannot be security-screened, adding to an already-existing fear of leaks. The other, crucial, factor is that strategic surveillance involves an interference with individual rights. Supervision of such measures has traditionally been a matter for the judiciary, and the constitutional principle of separation of powers can make it problematic for a parliamentary body to play such a quasi-judicial role.

23. A decision to use particular selectors resembles, at least in some ways, a decision to authorize targeted surveillance. As such, it can be taken by a judicial body. As the decision involves considerable policy elements, knowledge of intelligence techniques and of foreign policy is also desirable. Finding a group of people who combine all three types of competence is not easy, even for a large state. Thus, it is easier to create a hybrid body of judges and other experts. As regards follow-up (oversight), it is necessary to oversee decisions made by automated systems for deleting irrelevant data, as well as decisions by human analysts to keep the personal information collected, and to transfer it to other domestic and foreign agencies. This type of oversight is of a “data protection” character, most suitably assigned to an independent, expert administrative body. Neither of these types of decision is “political” in nature. What, by contrast, is more “political” is the prior decision that somebody, or something, is of sufficient importance to national security to need intelligence about. This is the type of decision which would benefit from a (closed) discussion in a political body, where different spectrums of opinion are represented. Another type of policy-oriented issue is deciding the general rules regarding with whom, and under what circumstances, signals intelligence can be exchanged with other signals intelligence organisations. A third is making a general evaluation of the overall effectiveness and efficacy of signals intelligence measures. A fourth role for a political body is to engage in a continuous dialogue with whatever expert oversight body is established.

24. Judicial authorization.
A system of authorization needs to be complemented by some form of follow-up control to verify that conditions are being complied with. This is necessary both because the process of refining selectors is dynamic and highly technical, and because judges rarely see the results of signals intelligence operations, as these seldom lead to prosecutions. The safeguards applying to a subsequent criminal trial thus do not become applicable.

25. Accountability to expert bodies.
The boundary line between parliamentary, judicial, and expert bodies is not hard and fast; in some states, oversight bodies are a mixture of the three. Expert bodies have a particular role to play in ensuring that signals intelligence agencies comply with high standards of data protection.

26. Complaints mechanisms.
Under the ECHR, a state must provide an individual with an effective remedy for an alleged violation of his or her rights. Notification that one has been subject to strategic surveillance is not an absolute requirement of Article 8 ECHR. If a state has a general complaints procedure to an independent oversight body, this can compensate for non-notification. There are certain requirements before a remedy can be seen as effective.

27. Concluding remarks.
States should not be content with the minimum standards of the ECHR. Signals intelligence has a very large potential for infringing the right to private life and other human rights. It can be regulated in a lax fashion, meaning that large numbers of people are caught up in a trawl and intelligence on them is retained, or relatively tightly, meaning that the actual infringement of private life and other human rights is kept down. The Swedish and German models have definite advantages over the other models studied from this perspective. In any event it is necessary to regulate the main elements in statute form and to provide for strong mechanisms of oversight. The national legislature must be given a proper opportunity to understand the area and draw the necessary balances.

Do Facebook and the USA violate EU data protection law? The CJEU hearing in Schrems

ORIGINAL PUBLISHED ON EU LAW ANALYSIS
Sunday, 29 March 2015
by Simon McGarr, solicitor at McGarr solicitors (*)

Last week, the CJEU held a hearing in the important case of Schrems v Data Protection Commissioner, which concerns a legal challenge brought by an Austrian law student to the transfers of his personal data to the USA by Facebook, on the grounds that his data would be subject to mass surveillance under US law, as revealed by Edward Snowden. His legal challenge was actually brought against the Irish data protection commissioner, who regulates such transfers pursuant to an agreement between the EU and the US known as the ‘Safe Harbour’ agreement. This agreement takes the form of a Decision of the European Commission made pursuant to the EU’s data protection Directive, which permits personal data to be transferred to the USA under certain conditions. He argued that the data protection authority has the obligation to suspend transfers due to breaches of data protection standards occurring in the USA. (For more detail on the background to the case, see the discussion of the original Irish judgment here).

The following summarises the arguments made at the hearing by the parties, including the intervening NGO Digital Rights Ireland, as well as several Member States, the European Parliament, the Commission and the European Data Protection Supervisor. It then sets out the question-and-answer session between the CJEU judges (and Advocate-General) and the parties. The next step in this important litigation will be the opinion of the Advocate-General, due June 24th.

Please note: these notes are presented for information purposes only. They are not an official record or a verbatim account of the hearing. They are based on rough contemporaneous notes and the arguments made at the hearing are paraphrased or compressed. Nothing here should be relied on for any legal or judicial purpose, and all the following is liable to transcription error.

Schrems v Data Protection Commissioner
Case C-362/14
Judges:
M. V. Skouris (President); M. K. Lenaerts (Vice-President); M. A. Tizzano; Mme R. Silva de Lapuerta; M. T. von Danwitz (Judge Rapporteur); M. S. Rodin; Mme K. Jürimäe; M. A. Rosas; M. E. Juhász; M. A. Borg Barthet; M. J. Malenovský; M. D. Šváby; Mme M. Berger; M. F. Biltgen; M. C. Lycourgos
M. Y. Bot (Advocate General)

Max Schrems

Noel Travers SC, for Mr. Schrems, told the court that personal data in the US is subject to mass and indiscriminate surveillance. The DRI v Ireland case struck down the EU Data Retention Directive, establishing a principle which applies a fortiori to this case. In that case, however, the Court held that data retention did not affect the essence of the right under Article 8, as it concerned only metadata. The surveillance carried out in the US accesses the content of data as well as the metadata, and without judicial oversight. This interference is so serious that it does violate the essence of Article 8 rights, unlike the Data Retention Directive. Mr. Travers argued that the Safe Harbour Decision is contrary to the Data Protection Directive’s own stated purpose, and that it was accordingly invalid.
Answering the Court’s question as to whether the decision precludes an investigation by a Data Protection Authority (DPA) such as the Irish Data Protection Commissioner, he submitted that compliance with fundamental rights must be part of the implementation of any Directive. Accordingly, national authorities, when called upon in a complaint to investigate breaches must have the power to do so.
Article 25.6 of the data protection Directive allows for findings on adequacy regarding a third country “by reason of its domestic law or of the international commitments it has entered into”. The Safe Harbour Principles (SHPs) and FAQs are not a law or an international agreement under the meaning of the Vienna Convention. And the SHPs do not apply to US public bodies. The Safe Harbour Principles are set out in an annex to a Commission Decision, but that annex is subject to US courts for interpretation and for compliance. Where there is a requirement for compliance with law, it is with US law, not EU law.

Irish Data Protection Commissioner

For the Data Protection Commissioner, Mr. Paul Anthony McDermott said that with power must come limitations. All national regulators are firstly bound by domestic law. The Data Protection Commissioner is also bound by the Irish Constitutional division of powers. She cannot strike down laws, Directives or a Decision.
Mr. Schrems wanted to debate Safe Harbour in a general way; it wasn’t alleged at that point that Facebook was in breach of Safe Harbour or that his data was in danger. The Irish High Court had a limited judicial review challenge in front of it. Mr. Schrems didn’t challenge Safe Harbour, or the State, or EU law directly, and the Irish High Court declined the application by Digital Rights Ireland to refer the validity of the Safe Harbour Decision to Luxembourg. Mr. McDermott asked the court to respect the parameters of the case.
Europe has decided to deal with the transfer of data to the US at a European level. The purpose of the Safe Harbour agreement is to reach a negotiated compromise. The words “negotiate”, “adapt” and “review” appear in the Decision. It is clear therefore that a degree of compromise is envisaged. Such matters are not to be dealt with in a court but, as they involve both legal and political issues, by diplomacy and realpolitik.
The Data Protection Commissioner can have regard to the EU Charter of Fundamental Rights when she is balancing matters, but it does not trump everything. It does not allow her to ignore domestic law or European law, Mr. McDermott concluded.

Another episode of the EU PNR saga: remarks of the national data protection authorities

LETTER SENT BY THE PRESIDENT OF THE ART 29 WORKING PARTY (*) TO THE CHAIRMAN OF THE PARLIAMENTARY COMMITTEE IN CHARGE OF THE EU PNR DRAFT DIRECTIVE (emphasis added)

Dear Mr Moraes,
Since the terrorist attacks in Paris and Copenhagen, the discussion on the possible introduction of an EU Passenger Name Records system (hereafter: EU PNR) has moved significantly forward, both in the Council and in the European Parliament. In particular, Mr Kirkhope, rapporteur on this issue, has presented an updated report on the Commission’s 2011 draft directive establishing an EU PNR to your Committee.
As stated early last month, the Article 29 Working Party (hereafter: the WP 29) is not in principle either in favour of or opposed to PNR data collection schemes  (See press release issued by the Article 29 Working Party on EU PNR on 5 February 2015), as long as they are compliant with the fundamental rights to respect for private life and to the protection of personal data.
However, considering the extent and indiscriminate nature of EU PNR data processing for the fight against terrorism and serious crime, the WP 29 believes that it is likely to seriously undermine the rights set out in Articles 7 and 8 of the Charter of Fundamental Rights of the European Union.
In this regard, the Working Party acknowledges that there have been some improvements to the initial draft from a data protection perspective. Still, the Working Party wishes to urgently draw your attention to the following outstanding issues to ensure that the aforementioned fundamental rights are respected.
First, the necessity of an EU PNR scheme still has to be justified; precise argumentation and evidence are still lacking in that respect. Further restrictions should also be made to ensure that the data processing is proportionate to the purpose pursued, in particular considering that the report now includes intra-EU flights in the data processing. It is therefore recommended that the data collection be limited by reference to specific criteria, in order for the scheme to guarantee respect for individuals’ fundamental rights and to take the CJEU data retention judgment into account. Besides this, the scope of the offences concerned should be further reduced and the retention period shortened and clearly justified.
In addition, a major error in the new Articles 10a and 12(1b) stemming from an apparent misunderstanding of the data protection authority’s role must be rectified in order to set the responsibilities of governments and data controllers.
Finally, the WP29 insists on the necessity to present as soon as possible a detailed evaluation of the efficiency of the PNR scheme. A sunset clause should also be inserted into the directive to assist in ensuring periodic review of the necessity of the system.

All these points are developed in the appendix to this letter, together with the concrete modifications and improvements to the text proposed by the Working Party. I would be grateful if you would be so kind as to forward this letter to the members of your committee in order for them to take account of these views before the deadline for further amendments to the proposal. Naturally, the Working Party remains at your disposal for any clarification you would require and further input during the discussion on EU PNR.

Yours sincerely,
On behalf of the Article 29 Working Party,
Isabelle FALQUE-PIERROTIN Chairwoman

Appendix:
Demonstrating the necessity and ensuring the proportionality of the EU PNR scheme


The Proposed Data Protection Regulation: What has the Council agreed so far?

ORIGINAL PUBLISHED ON STATEWATCH 

Analysis (Second version) by Steve Peers, Professor of Law, University of Essex, Twitter: @StevePeers

13 March 2015

Introduction

Back in January 2012, the Commission proposed a new data protection Regulation that would replace the EU’s existing Directive on the subject. It also proposed a new Directive on data protection in the sphere of law enforcement, which would replace the current ‘Framework Decision’ on that subject.

Over three years later, there has been considerable progress on discussing these proposals. The European Parliament (which has joint decision-making power on both proposals) adopted its positions back in the spring of 2014. For its part, the EU Council (which consists of Member States’ justice ministers) has been adopting its position on the proposed Regulation in several pieces. It has not yet adopted even part of its position on the proposed Directive.

For the benefit of those interested in the details of these developments, the following analysis presents a consolidated text of the five pieces of the proposed Regulation which the Council has agreed to date, including the two parts just agreed in March 2015. This also includes the parts of the preamble which have already been agreed. I have left intact the footnotes appearing in the agreed texts, which set out Member States’ comments.

The underline, italics and bold text indicate changes from the Commission proposal. I have added a short summary of the subject-matter of the Chapters and Articles in the main text which have not yet been agreed by the Council.

For detailed analyses of some parts of the texts agreed so far, see the links to the blog posts. The Council might always change its current position at a later point, and of course the final text of the new legislation will also depend on negotiations between the Council and the European Parliament.

SEE THE CONSOLIDATED TEXT (156 PAGES)

Background documents

‘Public sector’ provisions, agreed by Dec. 2014 JHA Council:

Chapter IV, agreed by Oct. 2014 JHA Council:

Rules on territorial scope, agreed by June 2014 JHA Council:

Rules on ‘one-stop-shop’, agreed by March 2015 JHA Council:

Rules on basic principles, agreed by March 2015 JHA Council:

Proposal from Commission:

Position of European Parliament:

Analysis of agreed territorial scope rules:

Analysis of agreed ‘privacy seals’ rules

Analysis of data protection supervision (one-stop-shop) rules:

Analysis of rules on basic principles