Digital Services Act - reworking the status quo

On 15th December 2020, the European Commission unveiled its proposal for a Regulation on a Single Market for Digital Services: the Digital Services Act (hereafter DSA). The long-awaited proposal sets out to clarify and harmonise the transparency and accountability obligations resting on intermediary service providers’ shoulders, using a progressive, tier-based approach.

Essentially, intermediary services will be required to meet different, increasingly demanding requirements depending on their role, size and impact within the internal market. At the same time, the proposal aims to create a safe online browsing environment for EU citizens, with Executive Vice-President Margrethe Vestager – one of the two Commissioners behind the proposal, alongside Commissioner Thierry Breton – drawing a striking analogy between the DSA and the first-ever traffic lights, which brought order to the streets of Cleveland, Ohio, and which were devised in response to the major risks presented by a just-as-major technological breakthrough: the invention of the car.

As per the EU’s co-decision procedure, the proposal is now in the hands of both the European Parliament and the Council of the European Union for review and potential amendments – with the two Commissioners behind the proposal estimating that the whole process could take up to a year and a half.

The subsequent articles in this mini-series will delve deeper into the proposal’s progressive, tier-based methodology and what it could mean for businesses in the field. But before getting into the details of the proposal, this first article takes a short detour through the past and present, in order to get a better grasp of the status quo which the DSA will be working with and ultimately aiming to alter.

This is because the DSA hasn’t appeared out of thin air: it explicitly aims to build on the established foundation of the E-Commerce Directive (2000/31/EC; hereafter ECD), while making the necessary amendments to it. The ECD, adopted in 2000, is widely credited with enabling the birth of the tech services we all benefit from on a daily basis. At the same time, virtually all players and stakeholders in the industry concur that the ECD is now largely outdated and much in need of revamping (although they don’t necessarily agree on how, or how far, the revamping should go).

To this end, we’ve taken a look at some of the answers submitted by industry players (businesses and civil society organisations) in response to the public consultation run by the European Commission between 2nd June 2020 and 8th September 2020. These answers are particularly useful: they reveal the ‘conflicting considerations’ which the Commission has had to weigh and balance while drafting the proposal, and they give us clues as to what has or hasn’t been working within the current regulatory landscape – and, essentially, what needs to change going forward.

A fragmented legal landscape with low interstate trust 

The current legal environment regulating online platforms’ activities is particularly ‘fragmented’ – to borrow a term dear to the European Union.

As stated above, many observers and stakeholders would agree that, in its time, the ECD played a key role in fostering a dynamic digital economy within the EU. But in the twenty or so years since its adoption, a number of novel issues have emerged – most notably the question of how to address harmful, and not just illegal, content circulating on intermediary platforms (see below), but also the systemic risks directly linked to the inherent design of the platforms themselves – and the directive has increasingly appeared insufficient.

As a result, discrete, uncoordinated legal initiatives have sprung up in a number of different Member States, in an attempt to address these novel issues. To name just two: in June 2017, Germany’s Bundestag adopted the Network Enforcement Act, which targets criminally punishable content in particular and subjects social network operators to a number of take-down and reporting obligations.

Less than two years later, in December 2018, the French parliament adopted the Loi contre la Manipulation de l’Information, which targets fake news and imposes a number of obligations on online platforms, especially – though not exclusively – during the three-month period preceding elections.

The result has been two-fold. For industry players – in particular relatively young players that are still scaling and rely on international growth for their survival, a reliance which is very common amongst EU businesses, since most Member States have small domestic markets compared with countries such as the US or China – this has meant higher costs, more red tape and a multitude of applicable legal regimes to comply with.

At the level of the Member States, this has also meant low inter-state trust. The ECD relies on the principle of ‘mutual recognition’ – the EU law principle according to which a good or service that is not regulated at EU level, but that has successfully met one set of requirements in one EU country, shouldn’t have to meet a second set of requirements in another.

The problem is that under the current regime, some countries have taken it upon themselves to regulate intermediary services, while others have imposed few or no requirements at all – meaning that Member States which are more proactive in regulating have been reluctant to trust their neighbours with the task of regulating the intermediary services established on their territory.

Legal uncertainty surrounding the applicable standards for liability

A review of the submissions made in response to the Commission’s public consultation also reveals some level of confusion and uncertainty pertaining to the legal standards currently employed to assess online platforms’ liability under the E-Commerce Directive.

The ‘horizontal liability exemption regime’ – the ECD’s cornerstone, currently found at Article 14 of the directive – is widely lauded, and virtually all of the business-community submissions we consulted demanded that it be maintained as part of any future proposal.

This regime shields hosting services from liability so long as they (a) do not have actual knowledge of illegal activity or information hosted on their services and (b) act expeditiously to remove or disable access to such activity or information once made aware of it.

Despite its many advantages, there seems to have been some confusion and uncertainty surrounding the knowledge standard enshrined in Article 14(a). A number of online service providers appear to have been disincentivised from proactively addressing harmful and illegal content online (by implementing AI tools, for instance), out of fear that doing so may pull them out of the ‘safe harbour’ provided by the ECD.

Matters seem to have been further complicated by the absence of a ‘Good Samaritan’ clause within the ECD, akin to the one found in the US’s 1996 Communications Decency Act (the famed ‘Section 230’ – which happens to be facing scrutiny from the US Congress, and which the Biden-Harris Administration has already announced it will be looking into in the coming months).

A massive blind spot: harmful content

One of the main blind spots of the ECD is harmful content: the directive only addresses illegal content, although, as has become clear to everyone in the last few years, it is not only illegal content that is problematic, but harmful content especially – in particular coordinated disinformation pertaining to electoral processes and health policy, incitement to hatred and violence, and content targeting vulnerable audiences.

As harmful content can’t and shouldn’t be treated in the same way as illegal content, new obligations and requirements for providers to deal with harmful content have been a sticking point of the negotiations – and are likely to remain so during the co-decision procedure.

Whatever the issues and difficulties that may be associated with such obligations, the European Commission seems determined to address harmful content in the new proposal – in light of the acknowledgement that some of this harmful content and activity is at least in part caused by the inherent design of very large online platforms (e.g. loopholes allowing ill-intentioned users to ‘game the algorithm’ and create fake accounts en masse, or recommender systems which bring users into contact with more harmful content than they would encounter of their own volition).

This, then, is the status quo that the DSA will have to contend with, build on and ameliorate – with the stated objective of creating a better digital services space for both end-users and intermediary platforms.

Stay tuned for more articles from this mini-series: we’ll be delving further into the DSA’s tier-based approach to accountability and transparency and exploring what this could mean for you and your business going forward.

In the meantime, if you would like to discuss the impact of the Digital Services Act on your online business, or if you need assistance regarding the compliance of your online marketplace or e-commerce business, do not hesitate to get in touch.

Article by Leila Saidi @ Gerrish Legal, April 2021
