By Scott Mason
June 23, 2016
Why Google, Twitter and Facebook may be liable for your online activities, and what that means for innovation and free speech
A few weeks ago the EU published its long-awaited analysis of its online platforms consultation. The consultation, which ran from 24th September to 6th January, gathered opinions on the regulatory environment for online platforms and is part of the EU’s broader Digital Single Market Strategy for Europe, which examines the role of online platforms in the economy and society as a whole. One of the most important issues raised by the consultation was that of “intermediary liability”: the question of whether online platforms such as Facebook and Twitter should be legally responsible for content posted, distributed or linked to via their services.
Whilst intermediaries within the EU are currently exempt from liability under the e-Commerce Directive, with the EU Commission looking to overhaul the regulatory environment for online platforms, many digital rights advocates have voiced concerns that intermediaries could find themselves liable for third-party content, a move which they claim could have catastrophic implications for free speech and innovation online.
But what exactly is “intermediary liability”? I’ll try to answer some of the key questions surrounding this contentious issue: What is it? Who supports it? What are its implications? And what can you do to help prevent it from becoming part of our online life?
What are intermediaries?
There is currently a lot of confusion over what precisely qualifies as an intermediary (a concern raised by a number of respondents to the platforms consultation).
Broadly speaking though, intermediaries can be thought of as services which host, transmit, index or give access to content, products and services originated by third parties on the Internet. This may include social media platforms such as Facebook and Twitter, blogging platforms and message boards, but also potentially search engines such as Google. Today, almost all forms of online activity are in some way mediated by intermediaries, so changes in the way they are regulated could have significant implications for our online freedoms.
What is intermediary liability?
Intermediary liability refers to an approach to Internet regulation in which online intermediaries are held legally responsible for any unlawful or harmful content created by their users. Under current EU exemptions, with the limited exceptions of child abuse imagery and content whose removal has been ordered by a court, online platforms are not legally required to actively moderate their content, nor are they liable for the content posted on their sites by third parties.
Despite this, however, many sites still choose to moderate their content, often using complex algorithms to identify material which infringes copyright or breaches their terms of service. Whilst these approaches to moderation have themselves attracted much criticism, for the role of prejudice and bias in takedown decisions, their lack of transparency and the limitations they place on freedom of expression, they nevertheless remain voluntary and at the discretion of the platforms themselves.
However, moves to enforce a stricter liability regime for online platforms could see more proactive and burdensome moderation become effectively mandatory for intermediaries which host or link to third party content, encumbering online platforms and services with the need to carry out costly moderation and creating a situation in which intermediaries are forced to act as the arbiters of acceptable speech online.
Who supports stricter intermediary liability?
Reasons given in support of stricter liability regimes are varied, but can generally be said to fall into two categories.
First are those who see platforms as having a ‘duty of care’ to users. For these supporters of intermediary liability, online platforms, in their role as mediators and hosts of content, have a responsibility to protect users from offensive or illegal content by removing such material from their services.
Second are those who wish to enforce stricter liability not in order to protect users, but rather to prevent copyright infringement. For many rights holders, intermediaries such as YouTube effectively function as broadcasters of copyrighted material and so should be held accountable for hosting or linking to any material which infringes their rights.
Again, while most intermediaries will voluntarily remove content which violates their terms of service, both those who believe intermediaries have a duty of care and rights holders who wish to protect their copyright have argued that stricter liability would give platforms a stronger incentive to take a more proactive approach.
What are the possible implications of Intermediary Liability?
While the concerns of those who support stricter liability regimes are understandable, there are a number of good reasons why such measures would be a very bad idea, for users as well as intermediaries themselves.
Firstly, by forcing intermediaries to take a more proactive approach in censoring and filtering content, liability regimes risk threatening freedom of speech online. Over recent years, we have repeatedly seen how overzealous moderation and filtering can lead to censorship and the removal of perfectly legal content. The introduction of a stricter liability regime would almost certainly make this situation worse, potentially leading to the removal of increasing amounts of lawful content by platforms and services fearful of their own legal risk.
As such, many have warned that forcing intermediaries to filter their content would inevitably lead to a situation in which platforms become the unofficial arbiters of freedom of expression online, determining the boundaries of acceptable speech and removing any content that they deem to be in any way offensive.
We know that such a situation may be bad for users, but it could also have costly implications for the intermediaries themselves, who would find themselves encumbered with the unenviable task of inspecting and filtering the vast quantities of images, comments and videos hosted on their platforms, while also facing possible litigation should they fail to do so. This would place enormous financial and technical burdens on platforms, a strain felt most keenly, no doubt, by the small or start-up businesses which lack the financial or technical resources to carry out such extensive surveillance programmes.
The stated aim of the Digital Single Market Strategy is to enhance online innovation in Europe, kick-start the digital economy, and give a boost to European-based innovators by providing the tools needed to succeed in a competitive marketplace. It is ironic, then, that as part of that very strategy the Commission is seriously considering measures which could have catastrophic implications for online innovation and which would burden innovators and start-ups with onerous regulation and the risk of litigation.
What can I do to help?
Although the EU’s consultation is now complete, there are still things you can do to help stop the EU enforcing stricter liability on platforms and users.
You can show your opposition to strict liability regimes by endorsing the Manila principles to protect freedom of expression worldwide, the first of which affirms that “intermediaries must never be made strictly liable for hosting unlawful third party content, nor should they ever be required to monitor content proactively as part of an intermediary liability regime.”
It is likely that there will be big developments on this issue in the next few months. The Commission is already discussing voluntary moderation measures, which it could try to introduce without public and democratic consent.
Sign up to support the Save the Link campaign and receive future updates on this issue, so you can take action to prevent online censorship.
Scott is a PhD researcher at Keele University (UK). His research examines the democratic legitimacy of internet governance institutions. He also regularly writes on a number of digital rights issues including privacy, access to knowledge, freedom of speech and censorship.