Automated Facial Recognition: A Case Study in Bias, Transparency and Proportionality

The use of facial recognition technology in public spaces has long been a contentious issue around the world, mainly due to the lack of transparency surrounding the processing of sensitive biometric data involved and the bias that is often embedded in such technology.

In a US Government study, the National Institute of Standards and Technology tested nearly 200 such algorithms from developers such as Microsoft, Tencent, Toshiba and Intel and found that these algorithms “failed on race”.

In August 2020, the Court of Appeal of England & Wales declared that the use of Automated Facial Recognition technology (AFR) by South Wales Police (SW Police) was unlawful, overturning the earlier High Court decision which had found that use to be lawful. This decision is of great significance for the use of AFR technology in public places in England & Wales and lays out guidance for the use of such methods of surveillance – even where the processing is carried out for national security or law enforcement purposes.

Given that here at Gerrish Legal we are assisting more and more clients with the privacy issues raised by this next-generation, innovative (but also intrusive) technology, we thought it would be useful to set out our overview of the issues at stake.

The Case 

In R (Bridges) v CC South Wales, Mr Bridges, acting alongside the activist group Liberty, challenged the use of AFR technology by SW Police. Bridges’ sensitive personal data was processed twice using the technology, and he claimed that this caused him distress and violated his right to privacy under Article 8 of the European Convention on Human Rights, as given effect in UK law by the Human Rights Act 1998.

Importantly, five grounds were raised at the Court of Appeal, of which three were upheld.

Data Protection Impact Assessment 

Firstly, the Court held that the Data Protection Impact Assessment (DPIA) conducted by SW Police was “deficient”.

A DPIA is a document that lays out the type of processing to be conducted, the categories of data involved, the risks to privacy that are posed by this data processing, and the technical and organisational measures that the data controller (and processor(s), if applicable) have put in place to minimise or mitigate the risks posed to data subjects’ privacy. 
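For illustration only – and not as a template drawn from the SW Police DPIA or from the GDPR’s wording – a minimal sketch of the kind of record a DPIA captures might look like the following; the field names and examples are our own hypothetical choices:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Risk:
    description: str   # e.g. "biometric data of passers-by captured without notice"
    severity: str      # e.g. "high", "medium", "low"
    mitigation: str    # technical or organisational measure addressing the risk ("" if none yet)

@dataclass
class DPIA:
    processing_description: str   # nature, scope, context and purpose of the processing
    data_categories: List[str]    # e.g. ["facial images", "biometric templates"]
    lawful_basis: str             # Article 6 basis (and Article 9 condition, where relevant)
    risks: List[Risk] = field(default_factory=list)

    def unmitigated_high_risks(self) -> List[Risk]:
        # Residual high risks may trigger prior consultation with the
        # supervisory authority under Article 36 GDPR.
        return [r for r in self.risks if r.severity == "high" and not r.mitigation]
```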

The DPIA in question was said to be deficient because it was drafted on the basis that the right to privacy under Article 8 was not infringed – which is not the case at all. The right to privacy is in fact interfered with; what matters is showing that the purpose of the processing is proportionate to that interference.

However, stating that rights are indeed infringed can create a catch-22 situation: admitting the risks can put the controller in a precarious position when arguing that those risks have been mitigated. This is why you should have a thorough and well-written DPIA in place before conducting any processing that requires such an evaluation under the GDPR.

Gender or racial bias 

Secondly, the Court held that SW Police did not take reasonable steps to ascertain whether the AFR technology had any gender or racial bias. Bias is a major issue in automated technology and can stem from many factors, such as the data sets used to train the underlying algorithm. If the training data itself contains bias, which is often the case when real-world data sets are used since bias already exists in society, then the algorithm will learn that bias. Likewise, if the training data under-represents certain categories of people, the algorithm will be less accurate at recognising those categories of people and thus more prone to mistakes.
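By way of a purely illustrative sketch – the group labels, threshold and scores below are hypothetical and not drawn from the case or from the NIST study – one basic check a controller can run is to compare false match rates across demographic groups on a labelled evaluation set:

```python
from collections import defaultdict

def false_match_rate_by_group(results, threshold=0.6):
    """results: (group, similarity_score, same_person) tuples from a labelled
    evaluation set; returns the false match rate for each demographic group."""
    impostor_pairs = defaultdict(int)   # pairs of *different* people seen per group
    false_matches = defaultdict(int)    # impostor pairs the system wrongly "matched"
    for group, score, same_person in results:
        if not same_person:
            impostor_pairs[group] += 1
            if score >= threshold:
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_pairs[g] for g in impostor_pairs}

# Hypothetical evaluation data: (demographic group, similarity score, same person?)
results = [
    ("group_a", 0.72, False), ("group_a", 0.41, False), ("group_a", 0.66, False),
    ("group_b", 0.30, False), ("group_b", 0.45, False), ("group_b", 0.20, False),
]
print(false_match_rate_by_group(results))
# A markedly higher false match rate for one group suggests the training data
# may under-represent that group.
```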

The presence of bias in automated technology can amount to a breach of the GDPR and of human rights laws in Europe and the UK, if such technology is used for automated decision-making that creates legal or similarly significant effects for data subjects.

Of course, the use of biased technology by the police is problematic and, as such, this is an area that needs to be addressed as a priority.

In the study conducted by the National Institute of Standards and Technology, the one-to-one matching algorithms tested – that is, algorithms that verify whether a particular photo matches another photo of the same face – were found to have falsely identified African-American and Asian faces “between 10 to 100 times more than Caucasian ones”.

Lack of transparency

Thirdly, the Court held that there was a lack of clear information and guidance on the actual data processing. Information such as when, where and how the AFR technology was and would be used was not available to the public. Whilst we know that the technology would be used to track individuals on a watchlist, it was not made clear who could be placed on a watchlist, nor what criteria had to be met before the AFR technology could be deployed in a particular public location.

This means that: (i) potential data subjects are not identifiable; and (ii) the purposes of the processing are not made clear.

As such, more transparency is required for the processing carried out by SW Police. Otherwise, the discretion left to the police will be too wide for the use of AFR technology to be justified under Article 8(2) of the European Convention on Human Rights, which provides that:

“There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others”. 

Proportionate interference with rights

Fourthly, the Court of Appeal did, however, hold that the interference with data subjects’ rights caused by the use of AFR technology in this case was proportionate. This was because the Court considered that the benefits to the public and to the rule of law outweighed the impact that the processing had on Bridges’ right to privacy. So, whilst the potentially discriminatory element of the processing needs to be assessed by SW Police as the data controller, the processing itself was not an issue in this instance.

This is predominantly due to the substantial value that such processing can add to public safety and law enforcement. Had the processing been carried out for any other purpose, such as commercial gain, it is highly likely that the Court would have decided otherwise. This demonstrates how important the proportionality test is when weighing proposed data processing against its impact on rights.

Documentation requirements under the GDPR

Lastly, Bridges had claimed that SW Police did not have the appropriate documentation in place for the processing, as required by Section 42 of the Data Protection Act 2018 (DPA 2018). This section sets out the safeguards required for sensitive processing – which includes the processing of biometric and genetic data. This ground was rejected by the Court of Appeal, as the two instances of processing relating to Bridges occurred before the DPA 2018 came into force.

However, if you are processing sensitive data as defined under Article 9 of the GDPR, then you will need to make sure that you are relying on a lawful basis under both Article 6 and Article 9, fulfil any applicable safeguards under the DPA 2018 (such as Section 42 for law enforcement processing), and consider the effects of any automated decision-making, if applicable.

What does this mean for automated facial recognition technology going forward?

SW Police have confirmed that they will not appeal the decision and will work on implementing the guidance given by the Court of Appeal. In any case, a more robust DPIA and the removal of bias are paramount for any AFR data processing, especially processing that can create legal effects for data subjects.

This decision means that more transparency will be required in relation to AFR data processing, regardless of whether it is carried out in the public interest. It will also have a significant effect on other police forces and law enforcement bodies in England and Wales, as well as on other public uses of this technology.

In light of the Covid-19 pandemic, trials of and talks about AFR technology in public spaces such as airports and stations are rife, with the aim of creating contact-free zones. Furthermore, the use of thermal imaging technology in offices to monitor the spread of the virus is also on the rise. Such technology will also be subject to the strict requirements of the GDPR – regardless of where the operators are based. As the GDPR applies to anyone processing the personal data of individuals in the European Union, its effect is widespread and almost global.

However, the European Commission is yet to provide a European response on AFR technology and has instead left this to individual Member States to govern.

Its White Paper on Artificial Intelligence, published in February 2020, did however suggest support for a temporary suspension of the use of facial recognition technology until regulations and guidelines could be agreed.

In England & Wales, this could be an opportunity for the Home Office to update its code of practice on the use of facial recognition and surveillance methods, which has been called “woefully out of date” by England & Wales’ Surveillance Camera Commissioner, Tony Porter. The Information Commissioner’s Office (the UK’s data protection authority) has called for a new code of practice altogether, in light of recent investigations of the highly controversial Clearview AI Inc technology.

Likewise, calls from the House of Commons' Science and Technology Committee to suspend the use of AFR technology until up-to-date regulations are put in place could become reality in light of the decision. 

Conclusion 

Automated facial recognition technology will continue to be in the limelight going forward in a contact-free and tech-driven post-pandemic world. However, compliance with privacy laws must be considered when deploying such technology, a point that has been reinforced by the Court of Appeal. 

If you have any questions relating to automated facial recognition technology, automated decision-making or sensitive data processing, then please do not hesitate to contact us.

Article by Komal Shemar @ Gerrish Legal, August 2020 / Cover photo by arvin keynes on Unsplash
