“Person identification, human rights and ethical principles: Rethinking biometrics in the era of artificial intelligence”

STUDY (*): European Parliamentary Research Service (EPRS), 16/12/2021

ABSTRACT : As the use of biometrics becomes commonplace in the era of artificial intelligence (AI), this study aims to identify the impact on fundamental rights of current and upcoming developments, and to put forward relevant policy options at European Union (EU) level.

Taking as a starting point the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on AI, presented by the European Commission in April 2021, the study reviews key controversies surrounding what the proposal addresses through the notions of ‘remote biometric identification’ (which most notably includes live facial recognition), ‘biometric categorisation’ and so-called ‘emotion recognition’.

Identifying gaps in the proposed approaches to all these issues, the study puts them in the context of broader regulatory discussions. More generally, the study stresses that the scope of the current legal approach to biometric data in EU law, centred on the use of such data for identification purposes, leaves out numerous current and expected developments that are not centred on the identification of individuals, but nevertheless have a serious impact on their fundamental rights and democracy.


This study explores biometrics in the era of artificial intelligence (AI), focusing on the connections between person identification, human rights and ethical principles. As such, it covers a subject of the greatest political and societal prominence. Among the many controversies in this area, certainly one of the most salient is the discussion surrounding facial recognition, and more specifically the potential risks stemming from the use of live facial recognition technology in public spaces. The potentially negative impact of the widespread use of such technology has indeed mobilised a strong response from parts of civil society in Europe and globally.

From a policy and legislative viewpoint, in the European Union (EU) this discussion is currently being framed in terms of regulating possible uses of remote biometric identification. Live facial recognition technology uses facial templates that allow for the unique identification of individuals, and thus constitute – due to such capability for ‘unique identification’ – biometric data for the purposes of applicable EU data protection law.

For many years, the exploration of possible normative frameworks to accompany and duly channel the advent of AI primarily revolved around ethical considerations and principles. In 2020, however, the European Commission started openly and decidedly moving towards the adoption of a new legal framework for AI as the main priority in this regard. For this purpose, the European Commission notably published in April 2021 a proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on AI (COM(2021) 206 final) (hereafter also ‘the proposed AI act’ or ‘the proposed AIA’).

The proposal puts forward rules that apply to a variety of AI systems. Demonstrating the importance of biometric technologies, three types of AI systems subject to specific rules are defined in the very text of the proposal on the basis of their connection with biometric data: these are ‘remote biometric identification systems’, ‘emotion recognition systems’ and ‘biometric categorisation systems’:

  • remote biometric identification systems are defined as AI systems used ‘for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified’;
  • emotion recognition systems are defined as AI systems used ‘for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data’, and
  • biometric categorisation systems are defined as AI systems used ‘for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data’.
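The first of these notions rests on a one-to-many (‘1:N’) comparison: a biometric template extracted from a probe (for example, a face captured by a camera) is scored against every template stored in a reference database, and the best-scoring entry above a threshold is returned as the identification. The following minimal Python sketch illustrates that logic only; the template vectors, person identifiers and threshold are purely illustrative assumptions, not drawn from the proposal or from any real system.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two biometric templates (feature vectors).
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, reference_db, threshold=0.9):
    # One-to-many ('1:N') search: compare the probe template against every
    # entry in the reference database and return the best match above the
    # threshold, or None if no entry is similar enough.
    best_id, best_score = None, threshold
    for person_id, template in reference_db.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical reference database of stored templates (illustrative values).
reference_db = {
    "person_A": [0.1, 0.9, 0.2],
    "person_B": [0.8, 0.1, 0.5],
}

print(identify([0.11, 0.88, 0.21], reference_db))  # → person_A
print(identify([0.5, 0.5, 0.5], reference_db))     # → None (no confident match)
```

The sketch makes visible why the proposal's definition turns on the existence of a reference database and on the absence of prior knowledge about who will appear before the sensor: the system scores everyone it captures against everyone it stores.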

These notions are however not yet fully consolidated at EU level, and thus one of the objectives of the study is to unpack their rationale, scope and possible limitations.

The proposed regulation defines ‘biometric data’ as ‘personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data’ (COM(2021) 206 final 42). This definition of biometric data is exactly the same as the one featured in the main instruments of EU data protection law, where the processing of biometric data for the purpose of uniquely identifying a natural person is regarded as constituting the processing of a special category of data that deserves the most stringent level of protection.

Scope and structure of the study

This study has been prepared on the basis of desk research. The focus of the study is the EU framework, although due consideration has also been given to international developments when relevant. The study first provides an overview of current trends in biometrics and AI, including technological considerations and information about notable uses, as well as specific information in relation to remote biometric identification, emotion recognition and biometric categorisation. Second, it presents the regulatory framework, illustrating that ongoing developments in the area of biometrics and AI do not occur in a legal vacuum, but amid pre-existing legal provisions and overarching EU fundamental rights obligations. Third, it reviews current policy discussions, in particular in the EU and as embodied by the European Commission’s proposal for a regulation on AI, and then puts forward policy options.

Biometrics and AI

Biometric data are increasingly used in a great variety of contexts. At EU level, the processing of biometric data has been actively encouraged and directly supported over the past years in the context of EU-level large-scale information technology (IT) systems in the area of freedom, security and justice (AFSJ). These systems, initially set up by the EU for asylum and migration management but increasingly also serving internal security, almost systematically rely on the massive collection of biometric data.

The review of ongoing technological and societal developments at the crossroads of biometrics and AI shows that, although identification is a crucial notion for biometrics, there are many developments aimed not primarily at identification but at the categorisation of individuals, assigning them to different categories, for instance on the basis of age or gender. It is however not always clear how the processing occurring for the purposes of categorisation is linked to identification, or to what extent such practices can always be separated.

Most notably, it is sometimes unclear, first, whether the data processed for categorisation purposes concern an identified or identifiable person at all, and whether such data should thus be regarded as personal data for the purposes of EU law. Second, it is sometimes unclear whether the data at stake – which often relate to the body – constitute biometric data or not, which requires taking into account whether the data allow for the identification of the individual (even if they are processed for the purpose of categorisation). Complicating the situation further, the categorisation of individuals is sometimes in practice a step taken towards their identification.

Regulatory framework

There is currently no European legislation relating exclusively to biometrics. The most directly relevant specific rules of EU law are to be found in EU data protection law. In addition, the whole existing EU fundamental rights architecture is fully applicable to the use of biometric technologies.

A review of this architecture and of the most relevant rules on biometrics and on automated decision-making in EU data protection law, as well as of the most important case law in this area emanating from the Court of Justice of the EU (CJEU) and the European Court of Human Rights (ECtHR), shows that ongoing technological developments are taking place amid – and possibly also somehow despite – existing rights and principles, which might thus need to be reinforced, clarified, or at least fine-tuned.

Impact on fundamental rights

AI-enabled biometric technologies pose significant risks to numerous fundamental rights, but also to democracy itself. In this sense, for instance, the pervasive tracking of individuals in public spaces constitutes not only a major interference with their rights to respect for private life and to the protection of personal data, but can also impact negatively on their rights to freedom of expression, and to freedom of assembly and association, altering the way in which certain individuals and groups are able to exercise social and political protest. The deployment of facial recognition technologies during peaceful assemblies can discourage individuals from attending them, limiting the potential of participatory democracy. Bias and discrimination are a well-documented issue in this field, and can be the result of a variety of factors.

Different uses of biometric technologies can have different specific types of impact on fundamental rights. The deployment of remote biometric identification in public spaces, in this sense, is particularly problematic as it potentially concerns the processing of individuals’ data – without their cooperation or knowledge – on a massive scale.

Regulatory trends and discussions

There is an ongoing – even if not fully systematic – shift from the discussion of ethical frameworks for AI to the regulation of AI systems by law. It appears nevertheless clear to many actors that an improved framework is needed to guarantee the fairness, transparency and accountability of AI systems, an objective that can be pursued by enhancing representation at various levels of decision-making.

Developments in the United States (US) are numerous and illustrate a variety of approaches, most notably targeting facial recognition. In Europe, the Council of Europe has been particularly active in this area and is currently working on a possible new legal framework at its level for the development, design and application of AI, based on recognised Council of Europe standards in the field of human rights, democracy and the rule of law. In 2021, a European citizens’ initiative named ‘Civil society initiative for a ban on biometric mass surveillance practices’ was registered, calling for strict regulation of the use of biometric technologies in order to avoid undue interference with fundamental rights.

The European Commission published its proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on AI (COM(2021) 206 final) on 21 April 2021. The proposal is based on Articles 16 and 114 of the TFEU, on personal data protection and the internal market, respectively. The proposed AI regulation prohibits the use of some AI systems (listed in the proposed Article 5), and qualifies other AI systems as ‘high-risk’, detailing the rules applicable to such ‘high-risk’ systems.

The area of biometric identification and categorisation of natural persons is in principle ‘high risk’, but under this heading (heading 1) only one concrete group of AI systems is mentioned: ‘AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons’. There is, however, no reference to biometric categorisation being recognised as ‘high risk’. It is nevertheless conceivable that AI systems involving the processing of biometric data could exist in all the other areas listed as ‘high risk’.

The AI regulation proposed by the European Commission foresees, as a general principle, ‘the prohibition of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement’. Nevertheless, such real-time remote biometric identification systems can be used in so far as such use is strictly necessary for certain objectives and under certain conditions.

The proposed AI regulation explicitly excludes from its scope of application AI systems that are components of existing and upcoming EU-wide large-scale IT systems, if the systems were placed on the market or put into service during the first year of application of the regulation, or before that date. This rule would, however, not be applicable if the legal acts establishing such EU-wide large-scale IT systems lead ‘to a significant change in the design or intended purpose of the AI system or AI systems concerned’ (proposed Article 83(1) AIA). Although the proposed regulation would thus not be applicable as such to the systems mentioned, the proposed text notes that the requirements it lays down must ‘be taken into account, where applicable’ in the evaluation of these large-scale IT systems as provided for in those respective acts (idem), but it is unclear what such ‘taking into account’ would imply.

Policy options

In light of the findings of the study, the following policy options are put forward:

Delimit better the regulation of biometrics and biometric data: the proposed AIA reproduces the definition of ‘biometric data’ present in EU data protection law since 2016. The interpretation of the definition is not completely clear, and there are significant uncertainties as to how to apply EU data protection rules to biometric data. The definition, in any case, does not appear to cover all the problematic practices that are often framed in the literature and even by policy-makers as related to biometrics. It is thus important to shed further light on the scope and relevance of the definition, but also to think critically about the impact of making some other notions put forward in the AIA (such as ‘biometric categorisation’ or ‘emotion recognition’) conditional on the processing of biometric data defined in such a way.

Improve the future qualification of new AI systems as high-risk: it is necessary to envisage a faster, clearer and more accessible path to qualifying additional AI systems as high-risk systems in the future. Civil society organisations could be given a role in raising the alarm about major risks, especially insofar as the affected persons would potentially be in vulnerable positions.

Explicitly ban certain uses of live facial recognition: the proposed AI regulation fails to prohibit real-time remote biometric identification in public spaces for law enforcement purposes, despite conceding that it triggers even more risks than ‘high-risk’ AI systems. The regulation should at least formally and effectively ban the persistent tracking of individuals in public spaces by means of remote biometric identification, as it has major consequences for fundamental rights and democracy.

Regulate ‘post’ remote biometric identification in the same manner as ‘real-time’ remote biometric identification: the proposed AI regulation fails to address properly the risks connected with the retroactive identification, using facial recognition, of individuals whose images have been recorded while they were in public spaces. In practice, the risk of persistent tracking and its associated adverse impact on fundamental rights and democracy are, however, at least equivalent to the risk associated with ‘real-time’ remote biometric identification. ‘Post’ remote biometric identification of natural persons recorded while in public spaces should be subject to the same rules as the ‘real-time’ equivalent.

Establish at EU level the necessary safeguards for real-time remote biometric identification: the proposed AI regulation leaves it up to the Member States to define, by law, the exact conditions for the use of the in-principle prohibited but actually permitted real-time remote biometric identification in public spaces for law enforcement purposes. The only detailed condition is the need for prior authorisation granted by a judicial authority or by an independent administrative authority. Substantive safeguards for the prohibited but exceptionally permitted uses of real-time remote biometric identification, if any, must be specified at EU level in the future AIA itself, as opposed to being left to the discretion of the Member States.

Ban AI systems that assign natural persons to categories constituting sensitive data on the basis of biometric data: the proposed AI regulation gives a definition of ‘biometric categorisation system’ that is unclear and conceptually problematic, most notably to the extent that it seems to endorse the idea that it is possible – scientifically, ethically and legally – to use AI systems to assign natural persons a sexual or political orientation. If a reference to the use of such AI systems persists in the draft, it should be phrased clearly as a prohibition.

Clarify the regulation of ‘emotion recognition’: the status of ‘emotion recognition’ in the proposal for a regulation on AI is not entirely clear. The proposed definition of emotion recognition seems to imply that emotions and intentions of individuals can be inferred from biometric data. This would only make sense if biometric data are understood in a broad sense, not limited to data enabling the unique identification of individuals. In addition, the list of high-risk systems in Annex III includes various references to systems used ‘to detect the emotional state of a natural person’, without clarifying whether these correspond to what is defined as ‘emotion recognition’ systems or would potentially be something else.

Increase transparency towards individuals as a necessary means to guarantee rights and remedies: the proposed AI regulation privileges imposing obligations on actors other than the users of AI systems, who are only subject to a limited number of provisions. The use of extremely high-risk systems in particular should be made conditional on additional obligations imposed on users towards individuals, notably in terms of transparency both prior to and during use. Transparency is crucial for the exercise of rights and the effectiveness of remedies. Limitations to transparency should be offset by measures that guarantee accountability for such limitations.

Do not allow for special exemptions to general rules for EU large-scale databases: the use of biometrics and AI in EU large-scale IT systems is massive, raising serious risks for fundamental rights. The fact that the European Commission’s proposal for a regulation on AI deliberately leaves out of its scope of application certain AI systems to be used in the AFSJ is of great concern. It is essential that large-scale IT systems in the AFSJ comply fully with the highest standards of EU law.


(*) This study has been written by Professor Gloria González Fuster and Michalina Nadolna Peeters of the Law, Science, Technology and Society (LSTS) Research Group at Vrije Universiteit Brussel (VUB) at the request of the Panel for the Future of Science and Technology (STOA) and managed by the Scientific Foresight Unit, within the Directorate-General for Parliamentary Research Services (EPRS) of the Secretariat of the European Parliament.
