Legitimate Interest, Anonymity, and AI Models: A Guide for Non-Legal Experts on Development and Deployment
Artificial Intelligence (AI) is transforming industries, from healthcare to finance, and even creative fields like content generation. However, as AI models become more sophisticated, so do the legal and ethical challenges surrounding their development and deployment. One of the most pressing issues is the use of personal data in training these models. What happens when personal data is processed unlawfully during the development of an AI model? How does this affect the model’s subsequent use, and what remedies are available to address these risks?
The recent Opinion 28/2024 from the European Data Protection Board (EDPB) sheds light on the complex interplay between AI and data protection, offering guidance on how businesses can navigate the challenges of developing and deploying AI models. It addresses key issues such as the risks of unlawfully processed data, the anonymity of AI models, and the use of legitimate interest as a legal basis for data processing, and it provides a roadmap for compliance with the GDPR in the rapidly evolving AI landscape.
Whether you’re a tech enthusiast, a business leader, or simply curious about the intersection of AI and data protection, this post will provide valuable insights into navigating this complex landscape.
What are the Risks of Unlawful Data Processing in AI Development?
AI models are often trained on vast amounts of data, including personal data. However, if this data is processed unlawfully—meaning it violates GDPR principles such as lawfulness, fairness, and transparency—significant risks can arise. Here are some real-world consequences for both businesses and individuals:
Data Subject Rights at Risk: When personal data is processed unlawfully, individuals’ rights under the GDPR are compromised. For example, if data is scraped from public websites without consent, individuals may lose control over how their information is used, leading to potential privacy violations.
Reputational and Financial Damage: Companies that use unlawfully processed data in their AI models risk reputational harm and financial penalties. Regulatory authorities, such as data protection supervisory authorities (SAs), have the power to impose fines and corrective measures, which can be substantial.
Legal Uncertainty for Subsequent Use: If an AI model is developed using unlawfully processed data, the legality of its subsequent use—whether by the same company or a third party—can be called into question. This creates significant uncertainty for businesses, especially those that depend on AI models to produce key deliverables that are vital to their daily operations and overall success.
What is Anonymity in AI Models?
The EDPB Opinion explores the principle of anonymity in relation to AI models. If an AI model processes data in a way that ensures true anonymity, it may fall outside the scope of the GDPR, as anonymous data is no longer considered personal data. To determine anonymity, supervisory authorities assess two key factors on a case-by-case basis:
Likelihood of Direct Extraction: Can personal data relating to the individuals whose data was used to train the model be extracted from it, directly or probabilistically?
Unintentional Disclosure: Can personal data be unintentionally obtained from queries made to the AI model?
Businesses will need to evaluate the anonymity of their AI models to determine if they need to comply with the GDPR.
Legitimate Interest in AI Model Processing
The EDPB also discusses how businesses that use AI models to process personal data can rely on the legitimate interest basis under the GDPR. Article 6(1)(f) of the GDPR provides this legal basis, but businesses must conduct a legitimate interest assessment (LIA) to ensure compliance.
To rely on legitimate interest, companies must follow a three-step assessment:
Identify the Legitimate Interest: The interest must be lawful, clearly articulated, and real (not speculative). For example, using AI to detect fraudulent behaviour or improve customer service could qualify as legitimate interests.
Assess Necessity: The processing must be necessary to achieve the legitimate interest. Companies should consider whether less intrusive alternatives are available.
Conduct a Balancing Test: Companies must weigh their interests against the rights and freedoms of data subjects. This includes considering the nature of the data, the context of processing, and the potential impact on individuals.
If data subjects’ rights and freedoms would otherwise outweigh the company’s interests, the company can introduce mitigating measures to tip the balance. These may include anonymisation, pseudonymisation, or offering opt-out options to data subjects.
However, relying on the ‘legitimate interest’ test presents unique challenges to businesses that integrate AI models into their operations:
Demonstrating Balance: Demonstrating that the business’s legitimate interest outweighs the potential risk to individual privacy can be complex for AI models using large datasets.
Explaining AI: AI can be opaque, making it tough to explain data use to individuals.
Data Overload: AI relies on large amounts of data, which sits uneasily with the GDPR’s data minimisation and purpose limitation principles.
Automated Decisions: AI systems making significant decisions about individuals require special safeguards and human intervention.
Reputational Risk: Overreliance on legitimate interest can make businesses appear indifferent to privacy.
Here are some tips on how to navigate these challenges while developing or deploying AI in the course of your business:
Adopt privacy-by-design principles in AI development.
Conduct detailed and well-documented assessments (LIAs).
Use anonymised data to train your AI to reduce compliance risks.
Regularly review and update AI systems to align with evolving regulations and ethical norms.
Be transparent about how you collect and use data, especially when scraping data from public sources. Providing clear information to data subjects and offering opt-out mechanisms can help build trust and reduce legal risks.
Deploying AI responsibly, and complying with the GDPR where it applies, requires careful navigation of legal, ethical, and operational challenges. A proactive approach can help businesses innovate while building trust with users and regulators alike.
How Can Gerrish Legal Help?
Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We provide our expertise in many aspects of digital law such as GDPR, data privacy, technology law, commercial law, and intellectual property.
We give companies the support they need to successfully and confidently run their businesses whilst complying with legal regulations without the burdens of keeping up with ever-changing digital requirements.
We are here to help you. Get in contact with us today for more information.
Article by Abigail Lee and Marina Danielyan, paralegals at Gerrish Legal