The New Artificial Intelligence Regulation: Europe’s Next Gold Standard Law?

Once a concept restricted to state-of-the-art labs, artificial intelligence (AI) now underpins a large number of functions and services in our day-to-day lives, from banking to recruitment, education and even healthcare.

The increasing need for regulation of AI has long been acknowledged, all over the world. Now, the pioneers of the gold-standard legislation for data protection are breaking new ground with a legal framework for AI. The European Commission officially published its proposal for an EU Regulation on AI on 21 April 2021, a week after a draft was leaked on 14 April.

The proposal, the first of its kind in the world, follows the findings of the European Commission’s White Paper on AI from 2020, which set out policy options on how to achieve the twin objective of promoting the uptake of AI while also addressing the risks associated with the use of such technology.

After extensive consultation with major stakeholders, the Commission has drafted legislation that allows us to see what the future of AI will look like on the European legal landscape.

What is the aim of the proposed AI Regulation?

The European Union (EU), known worldwide as the reference point for data protection laws, is now on a mission “to turn Europe into a global hub for trustworthy AI”.

It is no secret that the General Data Protection Regulation (EU) 2016/679 (the GDPR) and AI systems appear to be somewhat incompatible. Therefore, the aim of the AI Regulation is also to enhance governance and effective enforcement of existing EU laws in relation to fundamental rights (notably, the right to privacy) and safety requirements as applicable to AI systems. 

As opposed to the GDPR, which takes a rights-based approach, the new AI Regulation is underpinned by a risk-based approach.

This position is somewhat controversial, as certain actors have highlighted that there could be disagreements over what is actually considered “high-risk”. For example, whilst some public sector AI, such as police use, is deemed to be high-risk, certain public sector uses have been excluded – such as military use and use by public authorities in third countries and international organisations. However, since the public at large have less of a say over the use of their personal data in these public sector AI systems, this exclusion is seen as hypocritical by some, including European policy analyst Daniel Leufer and the European Centre for Not-for-Profit Law, both of whom also contributed to the AI White Paper last year.

On the other hand, given the nature of AI and machine learning algorithms, applying a rights-based approach seems unfeasible, as has already been seen with the application of the GDPR to AI and other technologies such as facial recognition.

Indeed, the European Commission itself has admitted that the provisions of the GDPR, as they stand, are sometimes incompatible with the operation of AI systems. Further comments on the need for an AI-specific regulation were made by European actors last year during the AI France conference attended by Gerrish Legal (you can read our article on this here).

Who would the AI Regulation apply to?

The AI Regulation proposal mainly aims to regulate high-risk applications of AI. In practice, this means that most uses of AI will not face strict regulation under the new framework, provided they do not meet the threshold to be deemed high-risk.

The actual definition of “artificial intelligence” in the proposal “aims to be as technology neutral and future proof as possible, taking into account the fast technological and market developments related to AI” (see page 12 of the linked document). 

As such, an AI system is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (see page 39). This includes techniques and approaches such as machine learning, logic- and knowledge-based approaches, statistical approaches, Bayesian estimation and search and optimisation methods – casting the net as wide as possible (see Annex I). However, this definition of AI systems could be subject to change during the upcoming rounds of approvals.

The AI Regulation proposal applies to:

  • AI providers placing their systems on the market or putting them into service in the EU, whether or not they are established within the EU; 

  • users of AI systems located within the EU; and 

  • providers and users of AI systems located in a third country where the output produced by the system is used in the EU.

In this way, like the GDPR, the AI Regulation has extra-territorial application, and will apply to those selling and using AI systems even if they are not based within the EU.

TYPES OF AI SYSTEMS TARGETED:

Three categories of AI systems have been expressly targeted in the AI Regulation proposal: 

1.     AI that creates an unacceptable risk – and is expressly prohibited; 

2.     AI that creates a “high-risk”; and

3.     AI that is intended to interact with humans, even if only posing a low or minimal risk. 

Looking further into each of these categories:

1.     PROHIBITED AI: 

The European Commission has included a list of AI systems that are deemed to pose an unacceptable risk to the fundamental rights of EU residents and to EU values – such systems have therefore been expressly prohibited from being used in the EU.

This includes AI systems that: 

i)       use subliminal techniques to manipulate, exploit, or materially distort an individual’s behaviour in a way that causes or could likely cause harm or detriment; 

ii)     exploit any vulnerabilities, such as use of AI with minors or individuals with disabilities, in a way that causes or could likely cause harm or detriment; 

iii)    evaluate or classify the trustworthiness of individuals, i.e. “social scoring”, when used by public authorities or anyone on their behalf, if such social scoring leads to detrimental or unfavourable treatment of individuals in a context that is unrelated to the purpose for which the data was originally collected or if such social scoring leads to detrimental or unfavourable treatment that is unjustified or disproportionate to the facts; and

iv)    use ‘real-time’ remote biometric identification systems in public for law enforcement – i.e. indiscriminate public surveillance through methods such as facial recognition technology. 

AI systems that fall into this latter category, which was previously absent from the leaked draft of the proposal, can nevertheless be used under one of the following exceptions: where such systems are required to search for victims of crime (for example, missing children), to prevent a terrorist attack or an imminent threat to life, or to detect a perpetrator or suspect of a criminal offence, subject to certain requirements.

2.     HIGH-RISK AI: 

AI systems that do not fall into any of the categories specified above must then be assessed to determine whether they meet the threshold to be deemed “high-risk”.

If an AI system is deemed to create a high risk to the fundamental rights of EU residents or to EU values, then that AI system must comply with the mandatory requirements and ex-ante conformity assessment in the AI Regulation in order to be permitted onto the European market.

An AI system is deemed to be high-risk where both:

1.     the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II of the proposal, such as the EU legislation on machinery, toys, pressure equipment or medical devices (Article 6); and

2.     the AI system is required to undergo a third-party conformity assessment with a view to placing it on the market or putting it into service pursuant to the Union harmonisation legislation listed in Annex II of the proposal.

Additionally, AI systems used in the areas set out in Annex III of the proposal are also considered high-risk. This Annex specifically refers to stand-alone AI systems that are considered high-risk due to the threat they pose to fundamental rights. They include AI systems intended to be used for the remote biometric identification of persons in publicly accessible spaces, or for the management and operation of essential public infrastructure networks, education and vocational training, employment, law enforcement, migration, border control or the administration of justice.
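For readers who prefer logic to legalese, this two-route classification test can be sketched in a few lines of code. The Python sketch below is purely illustrative – the function and parameter names are our own hypothetical labels for the conditions in Article 6 and Annexes II and III, not terms defined in the proposal:

```python
# Illustrative sketch only: our own reading of the high-risk test.
# All names below are hypothetical labels, not terms from the Regulation.

def is_high_risk(is_safety_component: bool,
                 requires_third_party_assessment: bool,
                 falls_under_annex_iii: bool) -> bool:
    """Return True if an AI system would be classed as high-risk.

    Route 1: the system is a product, or a safety component of a
    product, covered by the Annex II harmonisation legislation AND is
    required to undergo third-party conformity assessment under that
    legislation (both conditions must hold).

    Route 2: the system is a stand-alone system used in one of the
    sensitive areas listed in Annex III.
    """
    annex_ii_route = is_safety_component and requires_third_party_assessment
    return annex_ii_route or falls_under_annex_iii


# Example: a stand-alone CV-screening tool used in recruitment falls
# under Annex III, so it is high-risk regardless of the Annex II test.
print(is_high_risk(False, False, True))  # True
```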

Operating a high-risk system:

If you are operating or intend to operate a high-risk AI system, you have certain obligations under the AI Regulation proposal, both before and during the placement of such systems on the market. 

Providers need to ensure that their AI systems comply with strict data and data-governance requirements, such as using reliable datasets, as well as creating technical documentation demonstrating conformity and assessing risks and risk-mitigation measures.

Providers will also have to design their high-risk systems to meet certain accuracy, robustness, transparency and cybersecurity standards, to enable their outputs to be interpreted by users, and to ensure that human intervention is possible during use.

Certain AI systems must undergo a conformity assessment before being placed on the market or put into service. An EU declaration of conformity will need to be drawn up and the CE marking of conformity affixed to the system. This is in addition to any third-party ex-ante conformity assessments that certain systems may already be subject to. Providers of other stand-alone high-risk systems are required to conduct internal assessments themselves, except for uses of facial identification, which must be assessed by independent third parties.

After a high-risk AI system is sold or put into use, providers must implement a quality management system. This system should be documented in a systematic and orderly manner in the form of written policies and instructions, as well as through automatically generated logs.

3.     TRANSPARENCY OBLIGATIONS FOR AI SYSTEMS INTERACTING WITH HUMANS:

In addition to the above obligations on prohibited and high-risk AI systems, Article 52 of the AI Regulation proposal also contains transparency obligations for AI systems that interact with humans - even if such AI systems pose a low or minimal risk. 

Providers of AI systems that are designed and developed to interact with people must fulfil certain transparency obligations, including informing people that they are indeed interacting with an AI system – unless this is obvious from the circumstances and the context of use.

This obligation will not apply to AI systems that are authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.

Chatbots and deep fakes generated by an AI system are classic examples of where this transparency obligation is imperative.

Specifically, the AI Regulation proposal requires that deep fakes – images or videos generated or manipulated by an AI system to resemble existing people – be accompanied by a disclosure that the content has been artificially generated or manipulated, which could be an essential way to tackle the rise of the “fake news” phenomenon.

What happens if you do not comply with the Regulation? 

Enforcement is left, to a great extent, to the national competent authorities designated by the Member States. However, the proposal also provides for the creation of a European Artificial Intelligence Board. 

Non-compliance could result in significant penalties. Member States will determine the fines for violations, but under the current proposal the following maximum thresholds need to be taken into account when imposing such penalties (in each case, the higher of the two figures applies – a short worked sketch follows the list):

  • for infringements related to prohibited practices or non-compliance with data requirements, fines can reach up to €30 million or 6% of the total worldwide annual turnover; 

  • for infringements relating to non-compliance with any other requirements of the AI Regulation, fines can reach up to €20 million or 4% of the total worldwide annual turnover; and 

  • for infringements relating to the supply of incorrect, incomplete or misleading information to authorities, fines can reach up to €10 million or 2% of the total worldwide annual turnover.
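To make the arithmetic concrete, here is a minimal Python sketch, assuming the “whichever is higher” rule set out in Article 71 of the proposal. The tier labels and the function name are our own hypothetical shorthand, not terms from the Regulation:

```python
# Illustrative sketch of the maximum-fine arithmetic under the proposal.
# Tier labels are our own shorthand, not terms from the Regulation.

FINE_TIERS = {
    "prohibited_or_data": (30_000_000, 0.06),      # prohibited practices / data requirements
    "other_requirements": (20_000_000, 0.04),      # any other requirement of the Regulation
    "misleading_information": (10_000_000, 0.02),  # incorrect/incomplete/misleading info to authorities
}

def max_fine(tier: str, worldwide_annual_turnover: float) -> float:
    """Return the maximum fine for a tier: the higher of the fixed cap
    and the percentage of total worldwide annual turnover."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover)

# Example: a company with EUR 1bn turnover breaching a prohibited practice
# faces up to max(EUR 30m, 6% of EUR 1bn) = EUR 60m.
print(max_fine("prohibited_or_data", 1_000_000_000))  # 60000000.0
```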

Is the AI Regulation Proposal a welcome change?

There has been a need for a specific regulation or set of rules relating to the use of AI for a long time now, not only in Europe but throughout the world. The fast-paced nature of AI development has meant that existing legislation has not been able to keep up, and in some cases, provisions of certain statutes have become entirely incompatible. However, this should not mean that the key principles underpinning such legislation should be thrown out of the window – a balance between the need to protect fundamental rights and the need to promote innovation is always required. This AI Regulation proposal is therefore a welcome step in the right direction.

However, being the first of its kind also comes with drawbacks, such as having no similar legislation against which to compare it and identify shortcomings. Indeed, despite being innovative and having the merit of putting fundamental rights and the public interest first, the legislation appears to be missing some important elements.

The omission of safeguards against the risk of algorithmic bias:
Whilst frequently referred to in the recitals, the operative provisions of the AI Regulation proposal are not as strong on requirements to ensure algorithmic fairness, such as conducting and publishing impact assessments. Indeed, whilst the legislation makes occasional references to bias monitoring, detection and correction, it never specifically requires impact assessments on protected classes.

This is surprising considering that bias is one of the main concerns with using AI in public places or in instances where legal or similarly significant decisions are made about people. A clear example of this is the use of AI in recruitment.

Loopholes regarding biometric mass surveillance technologies:
Despite lobbying from digital rights activists and even members of the European Parliament, some believe that the AI Regulation is missing important provisions needed to ensure satisfactory protection of fundamental rights when it comes to biometric mass surveillance.

Although the proposal provides for the prohibition of the use of real-time remote biometric identification systems in publicly accessible spaces by law enforcement authorities, some believe that what is lacking is obvious: what about other public authorities or private actors?

There is no doubt as to the danger of such actors using these AI systems, so the absence of any reference to them raises eyebrows. However, one could counter-argue that the GDPR sufficiently protects against such uses in its own right, for example through the need for a lawful basis for processing.

Moreover, the proposal provides for numerous exceptions to this prohibition. These provisions could be subject to extensive interpretation, giving wide powers to authorities and thus opening the door to human rights violations and mass surveillance. For example, the use of AI for automated facial recognition by public authorities has been a contentious topic in the UK and the EU for a while.

Conclusion

This is an exciting proposition from the EU, and it will be interesting (and important if you are the provider or user of AI systems) to keep monitoring the progress of this draft legislation. 

If you have any questions in relation to the provision or use of AI systems, please do not hesitate to contact us.

Article by Komal Shemar and Evane Alexandre @ Gerrish Legal, May 2021
