France’s Data Protection Regulator Proposes AI Regulation Plan

France’s data protection regulator, the Commission Nationale de l’Informatique et des Libertés (CNIL), has published an action plan proposing a framework for regulating artificial intelligence (AI).

The plan proposes to create a new AI department within the CNIL dedicated to understanding, auditing, and regulating the technology, particularly generative AI platforms such as ChatGPT, in France and Europe.

What are the Key Features of the CNIL Plan?

The CNIL plan is built around four key components:

1. Understanding how AI systems work and their impact on users.

2. Encouraging and supervising the development of ethical AI.

3. Supporting innovative AI developers in France and Europe.

4. Protecting the public through auditing and controlling AI systems.

The speed and innovation with which AI systems are being developed mean new issues are constantly being raised, with data protection being a particular concern. As a result, understanding how AI systems work and their impact on people will help CNIL establish how to ensure privacy, fairness, and transparency for users. Other priority areas include the protection of publicly accessible data, information transmitted through AI by users, protection against bias or discrimination, and the unprecedented security issues arising from AI.

One of the issues with AI systems like ChatGPT (which we discuss in another post) is that they can provide biased or false answers to the questions asked of them. ChatGPT works by learning from user behaviour and making links between words and prompts. When you ask ChatGPT a question, it is not like Google, where you can see the source of the information and decide for yourself whether it is reliable. The chatbot simply gives you an answer, which could be inaccurate or even discriminatory. The new CNIL action plan proposes to protect against this type of risk.

Furthermore, CNIL will supervise and encourage the development of ethical AI, with a focus on users’ rights, including how access to data, the right to challenge it, and its rectification are managed. CNIL plans to help businesses that use AI by publishing guides and producing supporting information for organisations. The data regulator is also working to coordinate its efforts with other authorities, such as the EU, to protect the public through controls and audits of AI systems.

The EU has already proposed a new law, the Artificial Intelligence Act, which aims to promote trust and transparency in the technology and give developers and users clear requirements to follow. This is a positive step towards the regulation of AI and will go hand in hand with CNIL’s plans.

How Will the CNIL Plan Protect Individual Privacy Rights?

CNIL’s plan of action is first to understand how AI models work and how they affect people in terms of privacy, copyright, and data retention. For example, generative AI models such as ChatGPT and DALL-E are able to produce text or images from prompts because of the large volume of data they have been fed, or ‘trained’ on.

CNIL hopes that by understanding how AI models work, it can then create controls and safeguards to support privacy-friendly standards. The regulator will carry out audits, investigate complaints, and measure compliance under this new legal framework.

Key players in the AI sector will have to show that they have undertaken data protection impact assessments and implemented adequate measures to ensure users know how their data is being used.

What Will Be the Impact of Regulation on Companies?

The main impact of the new CNIL action plan on companies that work in the field of AI, or that use AI to support their business practices, will be having to adhere to new regulations and ensure compliance with data laws. Most companies in Europe already have to comply with GDPR rules, and AI is simply the newest frontier in the tech sector.

However, as AI systems are far more complicated and the rules are constantly evolving to meet new threats, organisations will have to be nimble in responding to changes. Companies may need to create AI departments or hire technical and legal specialists in the field to ensure they do not fall foul of the rules.

On the other hand, organisations such as CNIL and the EU are aware that we are all learning together about how AI affects our lives and how we can safely protect our personal data. The key is to strike a balance: regulating AI without stifling the growth and innovation it can help us achieve. As such, regulatory bodies are happy to provide companies with the training and guidance they need to support AI regulation, as it is in everyone’s interest to protect privacy rights.

How Can Gerrish Legal Help?

Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property. 

We give companies the support they need to run their businesses successfully and confidently while complying with legal regulations, without the burden of keeping up with ever-changing digital requirements.

We are here to help, so get in contact with us today for more information.
