Europe is very close to the finishing line of an extraordinary project: the adoption of the new General Data Protection Regulation (GDPR), a single, comprehensive replacement for the 28 different laws that implement Europe’s existing 1995 Data Protection Directive. More than any other instrument, the original Directive has created a high global standard for personal data protection, and led many other countries to follow Europe’s approach. Over the years, Europe has grown ever more committed to the idea of data protection as a core value. The Union’s Charter of Fundamental Rights, legally binding on all EU states since 2009, lists the “right to the protection of personal data” as a right separate from, and equal to, the right to privacy. The GDPR is intended to update and maintain that high standard of protection, while modernising and streamlining its enforcement.
The battle over the details of the GDPR has so far mostly been a debate between advocates pushing to better defend data protection and companies and other interests that find consumer privacy laws a hindrance to their business models. Most of the compromises between these two groups have already been struck.
But another aspect of the public interest has been lost in that extended negotiation. Because the debate has concentrated on privacy, pro and con, the GDPR as it stands omits sufficient safeguards for another fundamental right: the right to freedom of expression, “to hold opinions and to receive and impart information…regardless of frontiers”.
It seems not to have been a deliberate omission. In their determination to protect the personal information of users online, the drafters of the GDPR introduced provisions that streamline the erasure of such information from online platforms, while neglecting the people who published that information to those platforms, who were exercising their own human right of free expression in doing so, and the audiences who have a right to receive it. Almost all digital rights advocates missed the implications, and corporate lobbyists didn’t much care.
The result is a ticking time-bomb that will be bad for online speech, and bad for the future reputation of the GDPR and data protection in general.
Europe’s data protection principles include a right of erasure, which has traditionally been about the right to delete data that a company holds on you, but has been extended over time to include a right to delete public statements that contain information about individuals that is “inadequate, irrelevant or excessive”. The first widely-noticed sign of how this might pose a problem for free speech online came from the 2014 judgment of the European Court of Justice, Google Spain v. Mario Costeja González—the so-called Right to Be Forgotten case.
EFF expressed concern at the time that this decision created a new and ambiguous responsibility upon search engines to censor the Web, extending even to truthful information that has been lawfully published.
The current draft of the GDPR doubles down on Google Spain, and raises new problems. (The draft currently under negotiation is not publicly available, but July 2015 versions of the provisions that we refer to can be found in this comparative table of proposals and counter-proposals by the European institutions [PDF]. Article numbers referenced here, which will likely change in the final text, are to the proposal from the Council of the EU unless otherwise stated.)
First, it requires an Internet intermediary (which is not limited to a search engine, though the exact scope of the obligation remains vague) to respond to a request by a person for the removal of their personal information by immediately restricting the content, without notice to the user who uploaded that content (Articles 4(3a), 17, 17a, and 19a). Compare this with DMCA takedown notices, which include a notification requirement, or even the current Right to Be Forgotten process, which gives search engines some time to consider the legitimacy of the request. In the new GDPR regime, the default is to block.
Then, after weighing the request against the (also vague) criteria that balance the privacy claim with other legitimate interests and public interest considerations such as freedom of expression (Articles 6.1(f), 17a(3) and 17.3(a)), and possibly consulting with the user who uploaded the content if doubt remains, the intermediary either permanently erases the content (which, for search engines, means removing their link to it) or reinstates it (Articles 17.1 and 17a(3)). If it does erase the information, it is not required to notify the uploading user of having done so, but it is required to notify any downstream publishers or recipients of the same content (Articles 13 and 17.2), and must apparently also disclose any information that it holds about the uploading user to the person who requested the removal (Articles 14a(g) and 15(1)(g)).
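To make that sequence concrete, here is a minimal sketch in Python of the flow those articles describe, as we read them. Every name in it (`Content`, `handle_erasure_request`, and so on) is our own illustrative invention, and the balancing test is reduced to a single boolean; this is a schematic of the procedure, not an implementation of anything in the Regulation itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Content:
    uploader: str
    restricted: bool = False
    erased: bool = False
    downstream: List[str] = field(default_factory=list)  # syndicated recipients

def handle_erasure_request(content: Content, requester: str,
                           erasure_justified: bool) -> List[str]:
    """Walk through the draft flow and return the notices that get sent."""
    notices: List[str] = []

    # Step 1 (Arts. 4(3a), 17, 17a, 19a): restrict immediately, before any
    # substantive review, and with no notice to the uploader.
    content.restricted = True

    # Step 2 (Arts. 6.1(f), 17a(3), 17.3(a)): balance privacy against free
    # expression and the public interest; reduced here to a boolean input.
    if erasure_justified:
        content.erased = True
        # Step 3 (Arts. 13, 17.2): downstream publishers and recipients
        # must be told, though the uploader need not be.
        notices += [f"notify {r}: remove this content" for r in content.downstream]
        # Step 4 (Arts. 14a(g), 15(1)(g)): the uploader's details are
        # disclosed to the person who asked for the removal.
        notices.append(f"disclose uploader '{content.uploader}' to {requester}")
    else:
        content.restricted = False  # reinstate the content

    return notices

# A comment syndicated to two downstream sites: erasure removes it everywhere
# and reveals the commenter's identity to the complainant.
post = Content(uploader="alice", downstream=["mirror.example", "rss-feed"])
for notice in handle_erasure_request(post, "bob", erasure_justified=True):
    print(notice)
```

Note the asymmetry the sketch makes visible: every party except the uploader (the complainant, downstream recipients) is kept informed at some point in the process.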
Think about that for a moment. You post a comment on a website that mentions a few (truthful) facts about another person. Under the GDPR, that person can now demand the instant removal of your comment from the host of the website, while that host determines whether it might be okay to keep publishing it. If the host’s decision goes against you (and you won’t always be notified, so good luck spotting the pre-emptive deletion in time to plead your case to Google or Facebook or your ISP), your comment will be erased. If the comment was syndicated, by RSS or some other mechanism, the host that deleted it is now obliged to tell everyone downstream that they should remove the content too.
Finally, according to the existing language, while the host is dissuaded from telling you about any of this procedure, it is compelled to hand over personal information about you to the original complainant. So this part of the EU’s data protection law would actually release personal information!
What incentive does the intermediary have to stand by the author and keep the material online? If the host fails to remove content that a data protection authority later determines it should have removed, it may become liable to astronomical penalties of up to €100 million or 5% of its global turnover, whichever is higher (European Parliament proposal for Article 79).
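For a sense of scale, here is a one-line sketch of that penalty ceiling; the function name and the example turnover figure are ours, purely for illustration.

```python
# Ceiling under the European Parliament's proposal for Article 79:
# up to EUR 100 million or 5% of global annual turnover, whichever is higher.
def max_penalty_eur(global_turnover_eur: float) -> float:
    return max(100_000_000.0, 0.05 * global_turnover_eur)

# For a large intermediary with EUR 10 billion in annual turnover, the
# exposure is EUR 500 million (the 5% prong), not the flat EUR 100 million.
print(max_penalty_eur(10_000_000_000))  # 500000000.0
```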
That means there is enormous pressure on the intermediary to take information down if there is even a remote possibility that the information has indeed become “irrelevant”, and that countervailing public interest considerations do not apply.
These procedures are deficient in many important respects, a few of which are mentioned here:
- Contrary to principle 2 of the Manila Principles on Intermediary Liability, they impose an obligation on an intermediary to remove content prior to any order by an independent and impartial judicial authority. Indeed, the initial obligation to restrict content arises even before the intermediary itself has had an opportunity to substantively consider the removal request.
- Contrary to principle 3 of the Manila Principles, the GDPR does not set out any detailed minimum requirements for requests for erasure of content, such as the details of the applicant, the exact location of the content, and the presumed legal basis for the request for erasure, which could help the intermediary to quickly identify baseless requests.
- Contrary to principle 5, there is an utter lack of due process for the user who uploaded the content, either at the stage of initial restriction or before final erasure. This makes the regime even more likely to result in mistaken over-blocking than the DMCA, or its European equivalent the E-Commerce Directive, both of which allow for a counter-notice procedure.
- Contrary to principle 6, there is precious little transparency or accountability built into this process. The intermediary is not, generally, allowed to publish a notice identifying the restriction of particular content to the public at large, or even to notify the user who uploaded the content (except in difficult cases).
More details of these problems, and more importantly some possible textual solutions, have been identified in a series of posts by Daphne Keller, Director of Intermediary Liability at the Center for Internet and Society (CIS) at Stanford Law School. However, at this late stage of the negotiations over the GDPR, in the “trialogue” process between the European Union institutions, it will be quite a challenge to effect the necessary changes.
Even so, it is not too late: proposed amendments to the GDPR are still being considered. EFF has written a joint letter with ARTICLE 19 to European policymakers, drawing their attention to the problem and explaining what needs to be done. EFF contends that the problems identified can be overcome by relatively simple amendments to the GDPR, which would help secure European users’ freedom of expression without detracting from the strong protection that the regime affords their personal data.
Without fixing the problem, the current draft risks sullying the entire GDPR project. Just like the DMCA takedown process, these GDPR removals won’t be used only for the limited purpose for which they were intended. Instead, they will be abused to censor authors and invade the privacy of speakers. A GDPR without fixes will damage the reputation of data protection law as effectively as the DMCA permanently tarnished the intent and purpose of copyright law.
Originally written by Aylin Akturk and Jeremy Malcolm for Electronic Frontier Foundation.