Artificial intelligence in the EU: promoting the economy at the expense of the rights of the individual?

by Emilio DE CAPITANI [1]

“The advent of artificial intelligence (‘AI’) systems is a very important step in the evolution of technologies and in the way humans interact with them. AI is a set of key technologies that will profoundly alter our daily lives, be it on a societal or an economic standpoint. In the next few years, decisive decisions are expected for AI as it helps us overcome some of the biggest challenges we face in many areas today, ranging from health to mobility, or from public administration to education. However, these promised advances do not come without risks. Indeed, the risks are very relevant considering that the individual and societal effects of AI systems are, to a large extent, unexperienced…”[2]

Foreword

1. According to the European Commission, the recent proposal for a regulation on Artificial Intelligence is consistent with the EU Charter of Fundamental Rights and with the secondary EU legislation on data protection, consumer protection, non-discrimination and gender equality. Notably, it “complements” the General Data Protection Regulation (Regulation (EU) 2016/679) and the Law Enforcement Directive (Directive (EU) 2016/680) by setting “…harmonised rules applicable to the design, development and use of certain high-risk AI systems and restrictions on certain uses of remote biometric identification systems”.

Is this true, or is the text mainly economically oriented, failing to place the rights of those who will be subject to such AI systems at the heart of its reflection?

2. First of all, it is worth noting that, while some commentators may have considered this new proposal to be the equivalent for AI of the General Data Protection Regulation, its general scheme is much closer to Regulation (EU) 2019/1020 of 20 June 2019 on market surveillance and product compliance, whose objective is to improve the internal market by strengthening the market surveillance of products covered by Union legislation, rather than to protect or promote fundamental rights. The Commission’s proposal is essentially aimed at holding companies producing and marketing AI systems accountable, which is in itself a positive element in the establishment of a European normative framework on artificial intelligence. Under the proposal, AI systems must meet a number of criteria and undergo conformity assessment procedures, which are more or less stringent depending on the risks involved (see Articles 8 to 51 and Annexes IV to VIII) [3].

3. However, it is quite surprising that the proposal focuses only on a “product” (a “software” developed from techniques and approaches listed in an annex) and does not address the general notions of “algorithms” and “big data”, which are the main features of artificial intelligence (AI) applications: such applications need huge amounts of data in order to be trained and, in return, make it possible to process those same data. By not referring directly to the nature of algorithms or to the notion of big data, the Commission avoids placing AI applications within the general framework of fundamental rights and data protection. Needless to say, a “rights-based” approach is the mirror image of a “duty” to protect that right, incumbent on another individual or on the public administration. Take the case of Regulation 2016/679 or of Directive 2016/680, where the “rights of the data subject” are detailed in a specific chapter: there is no similar provision in the AI proposal. Similarly, while the proposal defines AI system providers (“providers”), users (“users”), importers (“importers”) and distributors (“distributors”), it makes no reference at any point to the persons who are subject to such systems. Moreover, nothing is said about the possible avenues of recourse open to individuals challenging the use of an AI system.

By choosing a market-centric approach, the Commission is undermining the aim of placing the individual at the core of EU policies, as declared in the preamble to the EU Charter.

I- Definitions and classifications

4. The proposal is built on a risk-based approach, but the classification of systems as posing an unacceptable, high or low risk is not clear:

– Article 6 on the classification of high-risk systems is a simple description of the systems falling within this category, without any justification of the choices made,

– Article 7, on the “amendments to Annex III” (the annex listing the systems considered to be high-risk), does, however, contain a number of criteria which the Commission will have to take into account in order to add other systems in the future, if necessary. However, the terms chosen lack precision: the systems referred to are those likely to harm health or safety, or to have an adverse effect on fundamental rights (“risk of adverse impact on fundamental rights”). But how is this adverse effect to be understood in this context?

5. The dividing line between the systems to be prohibited and those presenting a high risk is not further explained: why, for example, prohibit real-time remote facial recognition in public places for law enforcement purposes, but authorise, merely classifying them as high-risk, systems that aim to detect the emotional state of a person in the context of criminal prosecution or the management of migration, asylum and border control? Similarly, what about systems that generate or manipulate audio or video content or images which then falsely appear authentic, and which can be used in criminal proceedings without informing the persons concerned (Article 52)?

6. Above all, this approach suggests a “variable geometry” of respect for fundamental rights, even though fundamental rights are not negotiable and must be guaranteed regardless of the level of risk presented by the AI system in question.[4]

II- Articulation with data protection 

7. In this proposal, the Commission’s position on the European data protection framework is characterized by its ambiguity:

– Article 16 TFEU is one of the two legal bases of the proposal, alongside Article 114 TFEU. However, in its explanatory memorandum, the Commission is careful to point out that the Article 16 basis concerns only the provisions relating to restrictions on the use of AI systems for remote biometric identification in publicly accessible places for the purposes of law enforcement (point 2.1; see also recital 2 of the proposal). Yet the protection of individuals with regard to the processing of their personal data cannot be limited to this single hypothesis, given the way AI systems operate: as indicated above, they rely on massive data collections, not all of which are non-personal or anonymised. Moreover, anonymised data may in some cases be re-identified, and a combination of non-personal data may identify individuals. Anonymised data can also be used to build profiles, with a direct impact on the privacy of individuals, and can create discrimination.

– Recital 41 states that the new Regulation should not be understood as constituting a legal basis for the processing of personal data, including special categories of data. Under the same recital, the classification of an AI system as high-risk does not imply that its use is necessarily lawful under other acts of European law, in particular those relating to the protection of personal data or to the use of polygraphs and similar tools or other systems to detect the emotional state of individuals. The recital specifies to that end that such use should continue to occur only in accordance with the applicable requirements resulting from the Charter and from Union law. It therefore seems to follow that certain provisions of this proposal may prove to be incompatible with other provisions of European law: far from “supplementing” the legislative framework on data protection, the future regulation may, on the contrary, open the way to conflicts of laws.

– On the other hand, recital 72 states that this Regulation should provide the legal basis for the use of personal data collected for other purposes with a view to developing certain AI systems in the public interest within AI regulatory “sandboxes”. However, as noted above, the Commission also states in its explanatory memorandum that this proposal is without prejudice to, and complements, the General Data Protection Regulation (Regulation (EU) 2016/679) and the Law Enforcement Directive (Directive (EU) 2016/680) (point 1.2).

8. Furthermore, if certain AI systems permitted by this proposal should not in fact be approved because they would infringe the provisions of the Charter and of European data protection law, this calls into question the relevance of the proposed classification, since it legitimises systems contrary to fundamental rights in general and to data protection in particular. But who will decide, at EU and national level, which rule should prevail between the Data Protection and AI Regulations? The establishment of a new body, the European Artificial Intelligence Board, and the creation of national authorities responsible for ensuring the application of the proposal (Articles 56 to 59) risk creating a structure in conflict with the parallel decentralised structure for data protection, built around the European Data Protection Board and the EDPS [5].

III- Prohibitions and their limits

9. Very symbolically, the proposal opens, after a first title on general provisions, with a title entitled “prohibited artificial intelligence practices” which in reality contains only a single article, while the following title, on high-risk systems, runs to 46 articles.

Four types of systems are considered unacceptable:

–  systems deploying subliminal techniques to distort a person’s behavior in a manner that causes or is likely to cause physical or psychological harm to the person or to another person;

– systems exploiting the vulnerabilities of a specific group of people due to their age, physical or mental disability, to distort the behavior of a person belonging to that group in a manner that causes or is likely to cause physical or psychological harm;

– systems used by public authorities to evaluate or classify the trustworthiness of individuals over a period of time, based on their social behaviour or on known or predicted personal or personality characteristics, with the resulting “social score” leading to one or both of the following: detrimental or unfavourable treatment of persons in social contexts unrelated to the contexts in which the data were originally generated or collected; and/or detrimental or unfavourable treatment of persons that is unjustified or disproportionate to their social behaviour or its gravity;

– “real-time” remote biometric identification systems in publicly accessible spaces used for law enforcement purposes, unless and to the extent that such use is strictly necessary for one of the following purposes: the targeted search for potential victims of crime; the prevention of a specific, substantial and imminent threat to the life or physical safety of persons, or of a terrorist attack; or the detection, localisation, identification or prosecution of a perpetrator of, or suspect in, a criminal offence punishable by a custodial sentence with a maximum duration of at least three years.

10. It follows from this list that the prohibitions mentioned are subject to several limitations and exceptions:

– the first two prohibitions both require at least the possibility of physical or psychological harm; however, with regard to vulnerable persons, demonstrating the existence of such a possibility of harm may prove difficult,

– the prohibition of social scoring applies only to the extent that the score is established by public authorities (and not by private entities) and leads to unfavourable treatment in a context unrelated to the one in which the data were collected, or to treatment that appears disproportionate. Reading these conditions reveals that social scoring is not in fact prohibited as such. This analysis is confirmed by Annex III, which lists several high-risk AI systems, among them systems used to assess the creditworthiness of individuals or establish their credit score in the context of access to and use of essential public and private services,

– finally, remote biometric identification systems are prohibited only where they perform “real-time” identification, in publicly accessible spaces and for law enforcement purposes.

11. These limitations leave the field open to “a posteriori” identification, and to identification by private entities or by public authorities not acting in a law enforcement context. It should also be noted that, despite its regulatory form, the proposal leaves Member States considerable room for manoeuvre to decide whether or not to allow the use of real-time remote biometric identification systems.

IV- Uses in criminal matters

12. In addition to the exceptions to the aforementioned prohibition on real-time remote biometric identification, the proposal allows the use of AI systems in criminal matters in a number of cases [6].

Annex III, which lists the high-risk systems referred to in Article 6(2), provides for the following systems:

– systems intended to be used for individual risk assessments of natural persons, in order to assess the risk of a person offending or reoffending or the risk for potential victims of criminal offences,

– systems intended to be used as polygraphs and similar tools or to detect the emotional state of a natural person,

– systems intended to be used to detect “deep fakes” as referred to in Article 52(3),

– systems intended for use in assessing the reliability of evidence during an investigation or criminal prosecution,

– systems intended to be used to predict the occurrence or repetition of an actual or potential criminal offence, on the basis of the profiling of natural persons referred to in Article 3(4) of Directive (EU) 2016/680 or the assessment of personality traits and characteristics or past criminal behaviour of persons or groups,

– systems intended to be used for the profiling of natural persons referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences,

– AI systems intended to be used for crime analytics regarding natural persons, enabling law enforcement authorities to search complex large data sets, available in different data sources or data formats, in order to identify unknown patterns or discover hidden relationships in the data.

13. Furthermore, a certain number of guarantees are limited or even excluded in the context of the use of AI systems in criminal matters:

– prior authorisation by a judicial or independent administrative authority for the use of real-time remote biometric identification may be postponed in urgent cases,

– Article 52, which seeks to impose an obligation to inform persons exposed to certain systems, whether high-risk or not, excludes this obligation in criminal matters. This applies in particular to emotion recognition and biometric categorisation systems, as well as to systems generating or manipulating audio, video or image content which then falsely appears authentic,

– finally, Article 43, on conformity assessment of systems, provides for an assessment limited to internal control for all systems considered to be high-risk, with the exception of those relating to biometric identification and the categorization of persons.

14. The framework proposed by the Commission paves the way for highly controversial practices, particularly predictive policing. Legal scholarship is deeply divided on the added value of AI systems in assessing the future behaviour of offenders, and highlights the risks of discrimination inherent in the functioning of algorithms [7].

It is worth recalling that this practice has unfortunately already been authorised by the EU in the anti-money laundering legislation [8] and, notably, in the infamous EU Directive on the use of Passenger Name Record data [9]. On the latter, the CJEU has already delivered a very interesting Opinion (1/15) [10] dealing with a draft EU-Canada PNR Agreement, and it is now again seised of the subject through several requests for preliminary rulings challenging the Directive’s compliance with Articles 7 and 8 of the EU Charter as well as with the principles of necessity and proportionality [11].

15. The possible use of lie detectors (“polygraphs”) also generates debate, and there is no consensus on their use in criminal matters. It should also be pointed out that the Commission allows the use of polygraphs in the field of migration, asylum and the management of external borders, thereby consolidating the experiment currently being carried out under the “iBorderCtrl” project.

Similarly, the possible uses of “a posteriori” biometric recognition systems have drawn criticism from legal scholars and civil society. Thus, on 27 May 2021, the NGO Privacy International announced the filing of several complaints in Europe against the American company Clearview AI [12], which specialises in facial recognition and sells the data it collects to law enforcement authorities.

Conclusion

The European Commission may have missed the opportunity here to ensure full respect for European values in the context of the “collective digital transformation dimension of our society”. Beyond the question of whether the AI proposal is fully compatible with European data protection legislation and with the requirements of the EU Charter, it is clear that when decisions are taken on the basis of AI applications, individuals should have the right to specific explanations, and collective rights should also be strengthened, as is already the case in other domains of wide impact (as with the Aarhus framework in environmental legislation).

Negotiations on the European Commission proposal are currently under way within the European Parliament [13] and the Council of the EU [14]. Once their respective positions are established, the interinstitutional dialogue will start. In the meantime, it is worth noting that the EP has already adopted, in October, a non-legislative resolution seeking to curtail the use of AI techniques for activities such as facial surveillance and predictive policing [15].

It remains to be seen whether this “non-legislative” resolution will be mirrored in the coming months in the legislative trilogue between the EP, the Commission and the Council, where pressure from interior ministers in favour of surveillance measures is likely to remain rather strong.

NOTES


[1] I wish to thank Ms Michelle DUBROCARD, of the European Data Protection Supervisor’s Office, for her invaluable contribution and comments during the drafting of this article.

[2] EDPS and EDPB Joint Opinion 5/2021, recalling also that “…in line with the jurisprudence of the Court of Justice of the EU (CJEU), Article 16 TFEU provides an appropriate legal basis in cases where the protection of personal data is one of the essential aims or components of the rules adopted by the EU legislature. The application of Article 16 TFEU also entails the need to ensure independent oversight for compliance with the requirements regarding the processing of personal data, as is also required by Article 8 of the Charter of Fundamental Rights of the EU.”

[3] It is also likely that all these new obligations, which will have to be placed on the shoulders of companies, will not fail to revive the debate on the cumbersome nature of European legislation.

[4] Consistently with this approach the EDPS and the EDPB in their Joint Opinion 5/2021 “…call for a general ban on any use of AI for an automated recognition of human features in publicly accessible spaces – such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals – in any context. A ban is equally recommended on AI systems categorizing individuals from biometrics into clusters according to ethnicity, gender, as well as political or sexual orientation, or other grounds for discrimination under Article 21 of the Charter. Furthermore, the EDPB and the EDPS consider that the use of AI to infer emotions of a natural person is highly undesirable and should be prohibited.”

[5] To avoid these risks, the future AI Regulation should clearly establish the independence of the supervisory authorities in the performance of their supervision and enforcement tasks. According to the EDPB/EDPS Joint Opinion cited above, “…The designation of data protection authorities (DPAs) as the national supervisory authorities would ensure a more harmonized regulatory approach, and contribute to the consistent interpretation of data processing provisions and avoid contradictions in its enforcement among Member States.”

[6] Furthermore, according to the EDPS/EDPB Joint Opinion 5/2021, “…the exclusion of international law enforcement cooperation from the scope of the Proposal raises serious concerns for the EDPB and EDPS, as such exclusion creates a significant risk of circumvention (e.g., third countries or international organisations operating high-risk applications relied on by public authorities in the EU)”.

[7] Literature on the risks of “predictive criminal policy” is growing day by day. As rightly stated by A. Rolland in “Ethics, Artificial Intelligence and Predictive Policing”: “First, the data can be subject to error: law enforcers may incorrectly enter it into the system or overlook it, especially as criminal data is known to be partial and unreliable by nature, distorting the analysis. The data may be incomplete and biased, with certain areas and criminal populations being over-represented. It may also come from periods when the police engaged in discriminatory practices against certain communities, thereby unnecessarily or incorrectly classifying certain areas as ‘high risk’. These implicit biases in historical data sets have enormous consequences for targeted communities today. As a result, the use of AI in predictive policing can exacerbate biased analyses and has been associated with racial profiling”.

[8] The fight against money laundering and terrorist financing (AML/CFT) at EU level is governed by a number of instruments which have to provide for rules affecting both public authorities and the private actors who constitute the obliged entities: supervision, exchange of information and intelligence, investigation and cross-border cooperation on the one side, and obligations such as reporting or customer due diligence on the other. For this reason, the relevant instruments are based on a number of different legal bases, spanning from economic policy and the internal market to police and judicial cooperation. On 20 July 2021, the Commission proposed a legislative package intended to strengthen many of the above rules. The package consists of: 1) a Regulation establishing a new EU AML/CFT Authority; 2) a Regulation on AML/CFT, containing directly applicable rules; 3) a sixth Directive on AML/CFT (“AMLD6”), replacing the existing Directive (EU) 2015/849 (the fourth AML Directive, as amended by the fifth AML Directive); 4) a revision of the 2015 Regulation on Transfers of Funds to trace transfers of crypto-assets (Regulation (EU) 2015/847); and 5) a revision of the Directive on the use of financial information (Directive (EU) 2019/1153), which is not presented as part of the package but is closely related to it.

[9] Directive (EU) 2016/681 of the European Parliament and of the Council of 27 April 2016 on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime.

[10] Opinion 1/15 pursuant to Article 218(11) TFEU — Draft agreement between Canada and the European Union — Transfer of Passenger Name Record data from the European Union to Canada.

[11] The leading case, C-817/19, was raised by the Belgian Constitutional Court; it will give the CJEU the opportunity to decide whether the indiscriminate collection of passenger data and their scoring for security purposes through secret algorithms (as currently done also in some third countries) is compatible with the EU Charter and with the ECHR, and does not amount to a kind of general surveillance incompatible with a democratic society.

[12] In June 2020, the European Data Protection Board expressed its doubts about the existence of a European legal basis for the use of a service such as that proposed by Clearview AI.

[13] See the current state of legislative preparatory works here: https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-regulation-on-artificial-intelligence

[14] See the state of play circulated by the Council Presidency here: https://data.consilium.europa.eu/doc/document/ST-9674-2021-INIT/en/pdf

[15] See the report on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters.

Evaluation of the General Data Protection Regulation

by EIAD – European Academy for Freedom of Information and Data Protection (Europäische Akademie für Informationsfreiheit und Datenschutz / Académie européenne pour la liberté d’information et la protection des données)

Berlin, 27 January 2020

A. General remarks

Article 8 of the EU Charter of Fundamental Rights (EUCFR) guarantees the protection of personal data and requires independent data protection oversight. With the General Data Protection Regulation (GDPR), there has been one EU data protection law directly applicable in all Member States since 25 May 2018. The extent to which the goals of the GDPR have been achieved cannot yet be seriously assessed after only 18 months. According to Art. 97 GDPR, the European Commission is required to continuously review the application and effectiveness of the Regulation, to report on this for the first time by 25 May 2020 and, if necessary, to submit proposals for amending and further developing the Regulation.

There is no denying that the GDPR has advanced the harmonisation of European data protection law and its application compared to the largely fragmented previous legal situation. The regulation has also strengthened the data protection rights of individuals subject to the processing of their data. The GDPR also provided data protection supervisory authorities with effective means of enforcement. However, it has become apparent that there are still shortcomings in the areas described above which need to be remedied.

The GDPR has had a significant impact on the global debate on data protection issues.

Several non-European countries and federal states have now passed laws based on the model of the GDPR. Examples include the California Consumer Privacy Act (CCPA), which came into force on 1 January 2020, and the new Thai Personal Data Protection Act. The US Congress has received several drafts for a federal data protection act and is currently discussing them on a bipartisan basis.

In addition, the data protection agreement concluded between the European Union and Japan in early 2019 has created the world’s largest zone with a uniformly high level of data protection. This has improved the opportunities for the European economy to remain competitive in the face of ongoing digitization.

The present opinion is based on the experience gained so far and is therefore provisional in nature. It is focused on key areas of action in which further development of the legal framework already appears appropriate.

B. Proposals

1.     Harmonisation

The large number of opening and concretisation clauses in the GDPR urgently needs to be reviewed with a view to reducing them. Because the Member States have made use of national options in very different ways, a regulatory patchwork of the most diverse provisions continues to exist in many areas. This severely compromises the goal of harmonising data protection law in the EU as far as possible and the associated free movement of data. Moreover, this fragmentation causes considerable practical and legal problems for legal practitioners.

1.1.     The opening clauses of the GDPR for processing by public authorities allow not only for more precise regulations in the law of the Member States, but also for clarification by Union law.

The legal requirements for such regulations, such as those in Article 6(3) GDPR and Article 9(2) GDPR, should be specified, in view of the particular relevance to fundamental rights of data processing by public authorities, in such a way that the guarantees laid down in the GDPR may only be deviated from in favour of the persons concerned. In addition, insofar as there are Europe-wide references, the EU legislator should make greater use of its power of specification in order to further develop the principles of the GDPR in a harmonised manner for the public sector.

1.2. The diversity of regulations is particularly serious in the research field. The application of the provisions on scientific research has shown the need for more integrated rules on processing for scientific purposes, in particular for European cross-border research. Art. 89 GDPR should be revised accordingly in order to ensure a uniformly high level of data protection throughout the EU.

1.3. A higher degree of harmonisation is also needed for the processing of personal data in the employment context. The requirements for employee data protection of Art. 88 GDPR should be designed as binding guidelines for the processing of employee data and not merely as an option for national legislators. Nevertheless, it should still be possible to specify the requirements in national law and collective agreements.

1.4. In view of the increasing importance of interactive, cross-border media, more binding and concrete criteria are needed for weighing up the relationship between data protection, free speech and freedom of information. Art. 85 GDPR should be further developed accordingly.

1.5. The cooperation of DPAs is crucial for the uniform application of data protection law. The principles set out in Chapter VII (Art. 60-76) GDPR must be made more effective. There is a need for legal remedies where a supervisory authority fails to take a decision pursuant to Art. 58 in cases of cross-border importance, delays it, or intends to refrain from taking a formal measure pursuant to Art. 58(2) GDPR with a view to resolving a dispute with the company amicably. Corresponding changes to Art. 64-66 GDPR must ensure that the provisions on the consistency mechanism also apply to such cases.

2. Profiling / Automated decisions

Greater attention must be paid to automated systems which make or prepare decisions that are important for the individual or for society. Of particular relevance in terms of data protection law are the compilation and evaluation of data for the purpose of assessing individuals (profiling) and the use of algorithmic decision-making systems, for example in connection with the use of “artificial intelligence” (AI).

2.1. Art. 22 GDPR should be adapted to cover all cases where the rights and freedoms of natural persons are significantly affected. Profiling must be regulated as such (and not just the decisions based on it). It should be clarified that the rules for automated decision-making also apply to decisions that are essentially based on algorithmic systems (algorithmic decisions). In this respect, absolute limits must be defined, admissibility requirements must be standardised and the principle of proportionality must be specified. In doing so, the specific requirements for the use of sensitive data and of data relating to children shall be taken into account. The transparency requirements of Art. 12 et seqq. for profiling and automated decisions should be formulated more specifically. Persons affected must always be informed when profiling is carried out and what its consequences are. In the case of algorithmic and algorithm-based decision-making systems, the underlying data and their weighting for the specific case must be disclosed in a comprehensible form.
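By way of illustration only, the following minimal Python sketch shows one possible form such a case-specific disclosure could take for a simple linear scoring system; the feature names, weights and threshold are invented for the example and are not drawn from the GDPR or from any real system.

```python
# Hypothetical linear scoring model, used only to illustrate what disclosing
# "the underlying data and their weighting for the specific case" might look
# like in a comprehensible form. All feature names, weights and the threshold
# are invented.

WEIGHTS = {
    "years_of_employment": 0.4,
    "open_credit_lines": -0.3,
    "late_payments_last_year": -1.2,
}
BIAS = 1.0
THRESHOLD = 0.0  # score >= threshold -> favourable decision

def explain_decision(applicant: dict) -> str:
    """Return a human-readable breakdown of an automated decision."""
    lines, score = [], BIAS
    for feature, weight in WEIGHTS.items():
        value = applicant[feature]
        contribution = weight * value
        score += contribution
        lines.append(f"{feature} = {value} (weight {weight:+.1f}) "
                     f"-> contributes {contribution:+.2f}")
    outcome = "approved" if score >= THRESHOLD else "refused"
    lines.append(f"total score {score:+.2f} vs threshold {THRESHOLD:+.2f} -> {outcome}")
    return "\n".join(lines)

print(explain_decision({
    "years_of_employment": 6,
    "open_credit_lines": 2,
    "late_payments_last_year": 3,
}))
```

For a genuinely opaque model, such a per-feature breakdown would have to be approximated rather than read off directly, which is precisely why binding transparency requirements matter.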

2.2. With regard to the functioning and effects of algorithmic and algorithm-based decision systems, mechanisms of algorithm control should be implemented, in particular to avoid discriminatory effects. The requirements for data protection impact assessments formulated in Art. 35(7) GDPR should be specified accordingly.

3.     Data protection technology

In addition to written law, ensuring effective data protection is largely determined by the design of technical systems. The statement “Code is Law” (Lessig) applies more than ever in view of increasingly powerful IT systems and global processing. It is therefore all the more important to ensure that technical systems are designed in a way compatible with data protection, especially with regard to limiting the scope of personal data processed (data avoidance, data minimisation).

Anonymisation and the use of pseudonyms are effective techniques for limiting risks to the fundamental rights and freedoms of natural persons, without unduly restricting the knowledge that can be gained from the data processing. In view of the high speed of innovation, it is necessary to examine to what extent the legal requirements guarantee adequate protection.
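To make the technique concrete, here is a minimal, purely illustrative Python sketch of one common pseudonymisation approach, keyed hashing (HMAC); the key value and record fields are invented, and a real deployment would additionally need key management, access controls and safeguards against re-identification.

```python
# Illustrative keyed-hash pseudonymisation: a direct identifier is replaced
# by a stable pseudonym. Whoever holds the secret key can re-link records,
# which is why pseudonymised data remain personal data under the GDPR.

import hashlib
import hmac

SECRET_KEY = b"store-me-separately-under-access-control"  # invented example key

def pseudonymise(identifier: str) -> str:
    """Map a direct identifier to a stable pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "diagnosis": "J45"}
pseudonymised_record = {
    "subject_id": pseudonymise(record["name"]),  # pseudonym replaces the name
    "diagnosis": record["diagnosis"],
}
# The same input always yields the same pseudonym, so records about the same
# person can still be linked, e.g. for research purposes, without exposing
# the underlying identity to those who do not hold the key.
print(pseudonymised_record)
```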

3.1. The provisions on technological data protection (Art. 25 GDPR) should take into account the particular risks arising from the use of new technologies and business models (in particular artificial intelligence, data mining, platforms). Corresponding specifications for the design of such systems should be specified by the European Data Protection Board.

3.2. In view of the rapid technological development, the requirements for anonymisation and pseudonymisation in Art. 25 GDPR and for the use of anonymised data should be made more specific. This should be supplemented by prohibitions of de-anonymisation and the unauthorised dissolution of pseudonyms, with the possibility of criminal prosecution.

3.3. The responsibility of the manufacturers of hardware and software should be increased, for example by extending the definition of the responsible person to include the natural or legal person, public authority, agency or other body marketing filing systems or personal data processing services.

At the very least, manufacturers should be included, alongside the controller and processor, as addressees of the rules on data protection by design and by default (Article 25 GDPR) and on security of processing (Article 32 GDPR), with the consequence that providers of personal data processing systems and services are responsible and liable for implementing the requirements at the time of placing on the market. In particular, they should be legally obliged to provide, prior to the conclusion of a contract, all information required for a data protection impact assessment, as well as all information and means necessary for implementing the rights of the persons concerned, irrespective of company and business secrets. This could also make the provisions on certification under Art. 42 GDPR effective. Consideration should also be given to extending the rules on liability and compensation (Art. 82 GDPR) and on sanctions (Art. 83 et seq. GDPR) to manufacturers.

4.     Rights of data subjects / self-determination

Self-determination and the rights of the persons concerned by the processing are at the centre of the fundamental right to data protection and of the fundamental right to informational self-determination established by the German Federal Constitutional Court. Although the GDPR standardises the central means by which individuals can influence the processing of their data and their rights vis-à-vis controllers, the actual influence of data subjects is often very limited. This applies in particular to the practices of various powerful companies offering services in which data subjects are trapped by lock-in effects. The rights of those affected should therefore be further strengthened.

4.1. The rules on consent (Art. 7 GDPR) and the right to object (Art. 21 GDPR) must be supplemented in such a way that the persons concerned can use technical systems to express their data protection preferences when exercising their decision-making powers. Controllers must be obliged to respect these preferences and the decisions based on them.

4.2. In Art. 12 et seq. GDPR it must be ensured that the information provided to the data subject relates to the data processing actually intended. It should also be clarified that the controller must inform the data subject of all known recipients to whom the data subject’s personal data are or have been disclosed. In addition, the controller must be obliged to record the transmission of data and the recipients, so that it cannot evade its obligation to provide information on the grounds of “lack of knowledge”.

4.3. The transparency obligations pursuant to Art. 12 et seq. are to be specified with regard to the use of profiling techniques and algorithmic decision-making procedures (cf. point 1, 2nd indent above).

4.4. The right to restriction of processing (blocking) in Art. 18 GDPR should be extended to those cases in which the necessary deletion is not carried out because the data must be kept only for the purpose of complying with retention periods.

4.5. The right to data portability (Art. 20 GDPR) should be specified in such a way that the data must be made available to the data subject in an interoperable format. It should also be ensured that the right covers all data processed by automated means that the data subject has generated (including metadata) and not only those that he has deliberately entered into a system. Furthermore, companies and platforms with a high market penetration should be obliged to make their offerings interoperable by providing interfaces with open standards.
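As a sketch of what an “interoperable format” could mean in practice, the following Python fragment produces a machine-readable JSON export covering both deliberately provided and observed data (including metadata); the schema URL and all field names are invented for the example and are not taken from any existing standard.

```python
# Hypothetical portability export: structured, machine-readable JSON that a
# receiving service could import. Schema URL and field names are invented.

import json
from datetime import datetime, timezone

export = {
    "schema": "https://example.org/portability/v1",  # placeholder for an open standard
    "exported_at": datetime.now(timezone.utc).isoformat(),
    "subject_id": "user-12345",
    "provided_data": {            # data the person deliberately entered
        "display_name": "jane_doe",
        "email": "jane@example.org",
    },
    "observed_data": {            # generated data, including metadata
        "logins_last_30_days": 14,
        "last_login": "2020-01-20T09:12:00Z",
    },
}

print(json.dumps(export, indent=2))
```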

C. Evaluation of the legal framework

Article 97(1) GDPR provides for an evaluation of the GDPR after 25 May 2020 at four-year intervals. In view of the rapid technical development in the field of data processing, it appears necessary to shorten this evaluation interval to two years. Even if the legal framework is designed to be technologically neutral, it must react to technical developments as quickly as possible; otherwise it will quickly become obsolete.

DATA RETENTION: A LANDMARK COURT OF JUSTICE RULING (1)

SOURCE: EUROPEANLAWBLOG
Written by Orla Lynskey

JOINED CASES C-293/12 AND 594/12 DIGITAL RIGHTS IRELAND AND OTHERS: THE GOOD, THE BAD AND THE UGLY

In its eagerly anticipated judgment in the Digital Rights Ireland case, the European Court of Justice held that the EU legislature had exceeded the limits of the principle of proportionality in relation to certain provisions of the EU Charter (Articles 7, 8 and 52(1)) by adopting the Data Retention Directive. In this regard, the reasoning of the Court resembled that of its Advocate General (the facts of these proceedings and an analysis of the Advocate General’s Opinion have been the subject of a previous blog post). However, unlike the Advocate General, the Court deemed the Directive to be invalid without limiting the temporal effects of its finding. This post will consider the Court’s main findings before commenting on the good, the bad and the ugly in the judgment.

 The Court’s Findings

 In reaching this conclusion, the Court reasoned as follows. It first narrowed the multiple questions referred by the Irish and Austrian courts down to one over-arching issue, whether the Data Retention Directive is valid in light of Articles 7, 8 and 11 of the Charter (setting out the rights to privacy, data protection and freedom of expression respectively). It then conducted its assessment in three parts.

 First, it examined the relevance of these Charter provisions with regard to the validity of the Data Retention Directive. Although the Court recognised the potential impact of data retention on freedom of expression, it chose not to examine the validity of the Directive in light of Article 11 of the Charter. It noted that the Directive must be examined in light of Article 7 as it ‘directly and specifically affects private life’ and in light of Article 8 as it ‘constitutes the processing of personal data within the meaning of that article and, therefore necessarily has to satisfy the data protection requirements arising from that article’[29].

Second, it considered whether there was an interference with the rights laid down in Articles 7 and 8 of the Charter. It noted that the Data Retention Directive derogates from the system of protection set out in the Data Protection Directive and the E-Privacy Directive [32]. It cited Rundfunk as authority for the proposition that an interference with the right to privacy can be established irrespective of whether the information concerned is sensitive or whether the persons concerned have been inconvenienced in any way [33]. The Court therefore held that the obligations imposed by the Directive to retain data constitute an interference with the right to privacy [34], as does the access of competent authorities to those data [35]. The Court also held that the Directive interferes with the right to data protection on the mystifyingly simplistic grounds that ‘it provides for processing of personal data’ [36]. It observed that these interferences were both wide-ranging and particularly serious [37].

 The Court then, thirdly, assessed whether these interferences with the Charter rights to privacy and data protection were justified. According to Article 52(1) of the Charter, in order to be justified limitations on rights must fulfil three conditions: they must be provided for by law, respect the essence of the rights and, subject to the principle of proportionality, limitations must be genuinely necessary to meet objectives of general interest.
The Court held that the essence of the right to privacy was respected as the Directive does not permit the acquisition of content data [39] and the essence of the right to data protection was respected as the Directive requires Member States to ensure that ‘appropriate technical and organisational measures are adopted against accidental or unlawful destruction, accidental loss or alteration of data’ [40].
With regard to whether the interference satisfies an objective of general interest, the Court distinguished between the Directive’s ‘aim’ and ‘material objective’: it noted that the aim of the Directive is to harmonise Member States’ provisions regarding data retention obligations while the ‘material objective’ of the Directive is to contribute to the fight against serious crime [41].
The Court observed that security is a right protected by the EU Charter and an objective promoted by EU jurisprudence [42]. It therefore held that the Data Retention Directive ‘genuinely satisfies an objective of general interest’ [44] and proceeded to analyse the proportionality of the Directive.

 The Court effectively adopted a two-pronged proportionality test, considering whether the measure was appropriate to achieve its objectives and did not go beyond what was necessary to achieve them [46].
Applying the ECtHR’s Marper judgment by analogy, it noted that factors such as the importance of personal data protection for privacy and the extent and seriousness of the interference meant the legislature’s discretion to interfere with fundamental rights was limited [47-48]. It held that the data retained pursuant to the Directive allow national authorities ‘to have additional opportunities to shed light on serious crime’ and are ‘a valuable tool for criminal investigations’ [49]. Therefore, it found that the Directive was suitable to achieve its purpose.

With regard to necessity, it noted that limitations to fundamental rights should only apply in so far as is strictly necessary [52] and that EU law must lay down clear and precise rules governing the scope of limitations and the safeguards for individuals [54]. It held that the Directive did not set out clear and precise rules regarding the extent of the interference [65]. It highlighted several elements of the Directive which fell short in this regard.
By applying to all traffic data of all users of all means of electronic communications, the Directive entailed ‘an interference with the fundamental rights of practically the entire European population’ [56] and did not require any relationship between the data retained and serious crime or public security [58-59].
Moreover, no substantive conditions (such as objective criteria by which the number of persons authorised to access the data could be limited) or procedural conditions (such as review by an administrative authority or a court prior to access) determined the limits on access to and use of the data retained by competent national authorities [60-62]. Nor did the Directive determine the time period for which data are retained on the basis of objective criteria [64-65].

 The Court also held that the Directive did not set out clear safeguards for the protection of the retained data. This finding was supported by the Court’s observation that the rules in the Directive were not tailored to the vast quantity of sensitive data retained and to the risk of unlawful access to these data [66]. Rather, the Directive allowed providers to have regard to economic considerations when determining the technical and organisational means to secure these data [67]. Moreover, the Directive did not specify that the data must be retained within the EU and thus within the control of national Data Protection Authorities [68]. For these reasons, the Directive was declared invalid by the Court [69].

 The Good, the Bad and the Ugly

The Good. The judgment is to be welcomed for its end result – the invalidity of the Directive – as well as for many other reasons. It is a victory for grassroots civil liberties organisations and citizen movements: the preliminary references stemmed from actions taken by Digital Rights Ireland – an NGO – and just under 12,000 Austrian residents. More of these types of initiatives are needed in order to assure effective privacy and data protection. From a more substantive perspective, the judgment also recognises the dangers posed by aggregated meta-data – that it may ‘allow very precise conclusions to be drawn concerning the private lives’ of individuals [27] – and by data retention more generally – that it ‘is likely to generate in the minds of the persons concerned the feeling that their private lives are the subject of constant surveillance’ [37]. It also acknowledges that such data retention may have a chilling effect on individual freedom of expression [28].

The Bad. Nevertheless, some aspects of the judgment are less welcome. Most notably here, the Court glosses over the fact that it assesses the proportionality of the Directive in light of its ‘material objective’ – crime prevention – rather than its stated objective – market harmonisation. This sits uncomfortably with the Court’s finding in Ireland v Council that the Directive was enacted on the correct legal basis because its predominant purpose was to ensure the smooth functioning of the EU internal market. The Court also incorrectly applies Article 8 of the EU Charter. Not only does it consider that there is an interference with this right every time data are processed [36], it also fails to consider how this right can be applied to a piece of legislation which pursues law enforcement objectives. The Data Protection Directive excludes data processing for law enforcement purposes from its scope (Article 3(2)), and the right to data protection should, pursuant to Articles 51(2) and 52(2) of the Charter, be interpreted in light of and reflect the scope of the Directive. This conundrum is conveniently overlooked by the Court.

And the Ugly. However, the most disappointing element of the judgment, like the Opinion of the Advocate General, is that it does not query the appropriateness of data retention as a tool to fight serious crime [49]. Given the prominence of this issue in both the EU and the US in the post-PRISM period, empirical evidence is needed to justify this claim.

– See more at: http://europeanlawblog.eu/?p=2289#more-2289