Beauty-Tech & Privacy: Covert surveillance of our daily routines?

The world of beauty is at the tipping point of a digital (re)evolution. At the time of writing - and in the midst of the pandemic - beauty specialists invite themselves (virtually) into our homes. Whilst some applications allow us to analyse our workout activities, others enable us to closely monitor our skin health, all with a commercial objective: finding and introducing us to the most suitable beauty products. 

Generation Z’s experience of our digital world is 360°, not least in respect of artificial intelligence (AI), which offers a great number of benefits and innovative solutions but which systematically pushes individual privacy rights to their limits.

Here at Gerrish Legal, we have extensive experience advising companies working with AI and augmented-reality technology, and we wanted to share our insights into reconciling cutting-edge algorithms with the right to privacy - particularly in the field of beauty-tech, where sensitive personal data is so often at stake.

What does at-home beauty-tech promise?

The boom in beauty technologies has occurred as a result of Covid lockdowns in many countries. During these periods, beauty advice moved online, and stores allowed customers to collect the products recommended to them via "click & collect" schemes - that is, if the customer did not simply choose to have them delivered directly to their door.

According to a worldwide consumer survey on returning to high-street shopping, the French are the most confident, with 77% having returned to "non-essential" shops. The British and Americans are warier: 50% and 40% of them respectively still favour e-commerce in a post-lockdown world. In light of this, major beauty brands have seized the opportunity of virtual consulting through devices combining hardware and software, in order to compensate for being unable to meet their customers in person.

Thus, through applications or physical devices, beauty brands offer their customers a more in-depth, personalised consultation and service: beauty-tech allows brands to analyse customers’ beauty routines and adapt them accordingly. Beauty brands are committed to safeguarding the health of our skin and verifying, in real time, the actual effectiveness of the products they suggest. According to Jenni Middleton, WGSN's Beauty Director and trend expert, "This is how you know you are doing things right and it's how you motivate yourself to maintain good habits".

Take, for example, L'Oréal, which recently acquired ModiFace, augmented-reality software that gives users a virtual make-up try-on experience. Being able to try a product out before buying should boost sales while meeting a real consumer need. Similarly, Neutrogena's NAIA Skin360 application, launched before the pandemic, analyses users' skin with a smartphone camera, then offers a personalised routine and monitors the skin's progress. It is being promoted as a solution to the limited access to dermatological advice from experts, who are often overbooked - and even more so in these uncertain times.

How does beauty-tech work?

Sometimes a simple selfie is enough for a customer to get their ideal skincare routine. This is notably the case with brands such as Vichy and Hello Ava, which offer customers a 100% personalised dermatological diagnosis carried out from a single selfie. There is no magic or technical wizardry here: the results are built on the collection of the customer’s sensitive personal data, since the aim is, of course, to ensure that the analysis corresponds as closely as possible to the customer’s profile and needs.

These solutions are in fact the result of several algorithms combined and tuned to a customer’s specific requirements. The process involves identifying the scale of the problem or objective, analysing past cases and studying all of the possible variables related to the problem. A statistical model then predicts results on the basis of “known variables”: once the system has all of the customer’s data, it proposes the most realistic solution. In this way, AI tools can use this data to learn to solve similar problems automatically in the future. To do so, the algorithms must be fed accurate data, such as the customer’s skin type, skin colour or even medical background, in order to understand users and propose long-term solutions.
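By way of illustration, here is a minimal Python sketch of this “known variables” approach: a simple model is trained on past customer profiles and the routines that suited them, and then predicts a routine for a new profile. The feature encoding, labels and data are entirely hypothetical and are not drawn from any brand’s actual system.

```python
# A minimal, illustrative sketch of prediction from "known variables":
# a model trained on past customer profiles predicts a routine for a new one.
# The features, labels and data here are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each row: [skin_type (0=dry, 1=oily, 2=combination), sensitivity (0-2), age]
past_profiles = [
    [0, 2, 25],
    [1, 0, 31],
    [2, 1, 42],
    [0, 1, 55],
]
# The routine that suited each past customer (the training labels).
past_routines = ["hydrating", "mattifying", "balancing", "anti-ageing"]

model = DecisionTreeClassifier().fit(past_profiles, past_routines)

# A new customer's profile, e.g. derived from a selfie analysis.
new_profile = [[1, 1, 28]]
print(model.predict(new_profile))  # e.g. ['mattifying']
```

In a real system the profile would, of course, be far richer - and it is precisely that richness (skin tone, medical background) which raises the privacy issues discussed below.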

This is the marketing objective of Estée Lauder, which has invested in digital technology: its partnership with the creative technology agency Rehab envisions a skincare experience via WhatsApp, where "Liv", an AI-based chatbot, helps users create and stick to a personalised skincare routine.

What are the underlying privacy issues?

Sensitive data

By using beauty-tech, users transmit their personal data to the suppliers of the technology (i.e., the beauty brands) or to their sub-processors and partners. However, this is not just standard contact information (although that forms part of it too, so that the user can receive their personalised plan): the data submitted is sensitive, in that it often relates to a user’s health and ethnic origins. Even if brands justify the collection of such data as necessary to provide a product that is 100% suitable, they will nonetheless hold medical-related data that a user has consented to provide, as well as information about a user’s skin colour or tone - which can reveal a user’s racial or ethnic origins - and which must therefore be treated with extra care. Indeed, under privacy laws such as the GDPR:

Special category or “sensitive” personal data is protected by Article 9 of the GDPR, which prohibits the processing of such data unless a user has given explicit prior consent (amongst other possible grounds).

Attention should also be paid to the fact that some EU Member States may further restrict or limit processing where genetic, biometric or health data is involved.

An obvious example of health data being collected is La Roche-Posay's Effaclar Spotscan application, which analyses acne-prone skin and is complemented by the My Skin Track UV sensor, which measures the skin's exposure to UV, pollution and pollen.

Data retention for machine learning, trend forecasting and product commercialisation

Users of beauty-tech should therefore be aware of the purposes at stake when they give consent to brands and tech companies to use such sensitive data. Brands collecting this data also need to ensure that users are well informed of all potential risks, that the consent obtained when a user submits sensitive data meets the GDPR standard, and that such consent can be withdrawn by the user at any time.
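By way of illustration only, the sketch below shows one way a brand might record explicit, purpose-specific consent so that both the consent and any withdrawal can be evidenced later. The structure and field names are our own assumptions, not a prescribed GDPR schema.

```python
# A minimal sketch of a per-purpose consent record; fields are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str            # e.g. "skin analysis", "trend forecasting"
    special_category: bool  # Article 9 data (health, ethnic origin)?
    given_at: datetime
    withdrawn_at: Optional[datetime] = None

    @property
    def is_active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # Withdrawal should be as easy as giving consent, at any time.
        self.withdrawn_at = datetime.now(timezone.utc)

consent = ConsentRecord("user-42", "skin analysis", special_category=True,
                        given_at=datetime.now(timezone.utc))
consent.withdraw()
print(consent.is_active)  # False: processing under this consent must stop
```

Keeping a timestamped record per purpose, rather than a single blanket opt-in, also makes it far easier to demonstrate compliance with the accountability principle discussed below.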

In particular, beauty brands should advise users that their data could also be used for the benefit of existing trend forecasting tools or used for the development of certain products - even if this occurs on an aggregated or anonymous basis where an individual user cannot be personally identified. The key here is transparency and evidencing compliance with the accountability principle.
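To make the idea of aggregation concrete, here is a minimal sketch, using assumed field names and a toy suppression threshold, of how individual records might be reduced to category counts so that no single user is identifiable in a trend report.

```python
# A minimal sketch of aggregation before trend analysis: individual records
# are reduced to counts per category, dropping direct identifiers.
from collections import Counter

records = [
    {"user_id": "u1", "skin_concern": "acne"},
    {"user_id": "u2", "skin_concern": "dryness"},
    {"user_id": "u3", "skin_concern": "acne"},
    {"user_id": "u4", "skin_concern": "acne"},
]

# Keep only the attribute needed for the trend report.
counts = Counter(r["skin_concern"] for r in records)

# Suppress categories too small to report safely (real-world thresholds
# are typically much higher than this toy value).
MIN_GROUP_SIZE = 2
report = {concern: n for concern, n in counts.items() if n >= MIN_GROUP_SIZE}
print(report)  # {'acne': 3}
```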

Control over data submitted

As far as control over their personal data is concerned, it is important to note that user data may be shared with the brand's various employees, resellers and subcontractors. It is then incumbent on the brand, acting as data controller, to inform users about such activities through its privacy policy, which should be transparent and specifically mention the different ways in which the user’s data will be processed. Most importantly, the beauty brand’s privacy policy should provide a mechanism allowing users to exercise their privacy rights.

Risks of bias

One of the many other risks is data bias: in other words, a distortion of outcomes caused by flawed data or the automated technology processing it. In particular, this can lead to genuine algorithmic discrimination, defined as unfavourable or unequal treatment, in comparison with other persons in equal or similar situations, based on a ground expressly prohibited by law. The risk is heightened when the data involved is sensitive data relating to a user’s race, ethnic origins or underlying health conditions - which is so often the case for beauty-tech applications. The onus is on beauty brands to ensure they use any data collected in an ethical manner, and that they do not carry out profiling or automated decision-making on the basis of sensitive personal characteristics where this would be unlawful.
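One simple way a brand might begin to audit for such bias is a demographic parity check: comparing how often the model produces a given outcome across different groups. The sketch below is a hypothetical illustration with invented groups and data, not a complete fairness audit.

```python
# A minimal demographic parity check: compare recommendation rates per group.
# Groups and outcomes are invented for illustration.
from collections import defaultdict

# (group, model_output) pairs: whether a premium routine was recommended.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, recommended in outcomes:
    totals[group] += 1
    positives[group] += recommended  # True counts as 1

# A large gap between groups is a red flag warranting a closer audit
# of the training data and model.
rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # approx. {'group_a': 0.67, 'group_b': 0.33}
```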

This goes hand in hand with a second facet of algorithmic risk: beauty-tech, if taken too far, might result in inadvertent surveillance of all aspects of a user’s life, from the aggregation of information for investors to a user’s eating habits, hobbies or overall state of health. This unwitting “surveillance” of an individual user is a potential breach of the very essence of the right to privacy.

Can the “beauty-tech” itself be held accountable?

Civil liability requires everyone to be accountable for their actions. At present, however, there does not seem to be any specific regime for AI in the French Civil Code. Yet AI can self-develop and become autonomous, meaning that its actions and output may go far beyond what its creators initially envisaged - so accountability needs to be attributed somewhere, or to someone. Moreover, unlike natural or legal persons, AI has no legal identity, which under current legislation prevents the recovery of compensation for losses caused by AI, which by its nature is devoid of any tangible assets aside from its own creations and the work product of its data mining.

Many experts, and the European Parliament, have considered creating a legal identity specific to robots: the so-called “electronic personality”. The European Parliament raised the idea that the most sophisticated autonomous robots could be given the status of “electronic persons” and would therefore be capable of being held liable for compensating any losses they might cause. This regime could also apply when AI makes autonomous decisions independently or otherwise interacts with third parties in a way that causes loss.

Numerous other solutions have been suggested, such as a no-fault liability regime, or a regime of shared liability between the developer, the user and the manufacturer. For the time being, only the data controller (or processor), and the person responsible for an AI programme or its creation, can be held liable. One thing is therefore inevitable: legislators will have to adapt to the digital era we are living in and create a civil liability regime specific to robots and AI - not least in the field of beauty-tech, where the risk of bias, profiling, privacy breaches, non-compliance with marketing and e-privacy rules, and scope creep is so high. Perhaps this sort of regime could be the incentive required to ensure that innovators create more reliable and controllable AI systems - not just in beauty-tech but across the board?

What is the future for beauty-tech in our digital era?

Regardless of any potential downsides, it is clear that artificial intelligence has become an integral part of our lives - and often in a positive sense. As we have just seen, AI monitors our daily life closely, whether by analysing the kilometres we cover in a day or by analysing our complexion to find the ideal cosmetic - all things that are so important to modern living and that we have all become accustomed to having on demand. This modern demand for personalisation and tailor-made products (including in an online environment) has fuelled the development of these technologies, which sit at the heart of many concerns regarding privacy rights. It should be borne in mind that it is often pure commercial motivation that attempts to justify a potential infringement of individual privacy rights. Ultimately, the challenge here is to reconcile freedom of innovation with fundamental freedoms.

In this respect, the GDPR remains the guarantor of our rights across the EU, in particular by granting users the right to freely give and withdraw consent to data processing, and to exercise their right to be forgotten. There is undoubtedly a high burden on innovators acting as data controllers (or processors), who will have to reconcile individual rights with innovation and commercial objectives. Finally, it should be noted that the fines recently imposed on multinationals by data protection authorities, such as the CNIL in France or the ICO in the UK, should act as a deterrent to irresponsible development of beauty-tech.

If you have any questions about privacy, artificial intelligence or automated technologies, please do not hesitate to contact us! Likewise, let us know what you think about at-home beauty tech!


Article by Manon Coste @ Gerrish Legal, November 2020 / Cover photo by Jamie Street on Unsplash
