The Council of Europe declared long ago that human rights are the same offline and online, but now that social networks have proliferated, we have seen up close that the world behaves differently than we would like.
The example fresh in everyone’s mind is the mass shooting at the mosques in Christchurch, which left fifty people dead. The principal suspect, an Australian national, broadcast the massacre live on Facebook. The recording circulated on the social network for several more hours, prompting observers to accuse Facebook of cashing in on the bloodbath. New Zealand authorities asked users and providers not to post the video, threatening violators with imprisonment.
Facebook later announced it had deleted 1.5 million copies of the recording in only twenty-four hours, 1.2 million of them as they were being uploaded. YouTube disabled keyword searches for the terrorist attack in New Zealand.
The incident sparked a discussion online about the way social networks react to such events. Why would they permit a mass murder to be broadcast live? The French Council of the Muslim Faith (CFCM) filed a complaint against Facebook and YouTube for not preventing the live video feed and the video’s subsequent dissemination.
At a November 2018 Academy of European Law (ERA) seminar, “Hate Speech and the Limits to Freedom of Expression in Social Media,” participants discussed legal remedies for combating hatred on the internet and the effectiveness of social networks’ current policies.
What Is Hate Speech?
As enshrined in Article 10 of the European Convention on Human Rights, the right to freedom of expression applies, inter alia, to information and ideas that can offend, shock and disturb states, communities, and individuals. Hate speech is defined in Appendix 1 to Recommendation No. R (97) 20, as adopted by the Council of Europe Committee of Ministers on October 30, 1997.
[T]he term “hate speech” shall be understood as covering all forms of expression [e.g., texts, photographs, audio and video recordings, etc.] which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including intolerance expressed by aggressive nationalism and ethnocentrism, discrimination and hostility against minorities, migrants and people of immigrant origin.
This is the approach taken by the European Court of Human Rights (ECHR) in its rulings on the matter. It has identified all forms of expression that spread, encourage or justify hatred based on intolerance, including religious intolerance, as abuses of the right to the freedom of expression.
The European Commission against Racism and Intolerance (ECRI) is more specific. It has defined hate speech as
the advocacy, promotion or incitement, in any form, of the denigration, hatred or vilification of a person or group of persons, as well as any harassment, insult, negative stereotyping, stigmatization or threat in respect of such a person or group of persons and the justification of all the preceding types of expression, on the ground[s] of “race,” colour, descent, national or ethnic origin, age, disability, language, religion or belief, sex, gender, gender identity, sexual orientation and other personal characteristics or status[.]
The most complicated cases involve criticism of and attacks on religious beliefs. Europe does not have a single standard for evaluating the potential harm of anti-religious statements, and the ECHR has thus given the states under its jurisdiction broad discretion in adjudicating such cases.[3] However, the common position is that the insult must be a statement directed at individuals or groups of people. Criticism of a religious institution per se cannot be construed as an insult.
Recently, the ECHR has dealt with cases involving incitement to hatred on social networks and internet forums, as well as the liability of online media outlets for content posted by their users.
The EU’s Standards: Human Rights or Censorship?
Adopted on June 8, 2000, the European Union’s Directive 2000/31/EC (the “Directive on electronic commerce”) defines the ground rules for online platforms and providers whose servers host user content. In particular, service providers cannot be held liable for illegal content, including extremist content, if they were unaware of it or remove it as soon as they discover it. States can define circumstances under which access to inappropriate published matter is restricted, oblige private companies to report illegal actions on the part of users, and formally request information about users who violate laws. However, the directive prohibits EU countries from imposing a general obligation on the proprietors of online platforms to preemptively monitor content (Article 15). This prohibition does not extend to monitoring obligations in specific cases imposed under the laws of member states.
The European Commission has encouraged social network service providers to more vigorously remove so-called illegal content and prevent it from being repeatedly uploaded. One of the most controversial attempts to regulate social networks at the national level is Germany’s Netzwerkdurchsetzungsgesetz (Network Enforcement Act, or NetzDG), passed by the Bundestag in June 2017. The law stipulates a one-day rule for major social networks (i.e., those with over two million users, including Facebook and Instagram): they are obliged to respond to user complaints and remove “manifestly” illegal content within twenty-four hours. Failure to do so can result in fines of up to fifty million euros. After the law was passed, German Justice Minister Heiko Maas said, “Freedom of speech ends where the criminal law begins.”
NetzDG was supposed to aid the fight against hate speech, disinformation, and fake news on social networks. The law applies to social networks with over two million registered users in Germany. Social networks are defined as “telemedia service providers which, for profit-making purposes, operate internet platforms which are designed to enable users to share any content with other users or to make such content available to the public.” Platforms offering journalistic or editorial content, and platforms limited to specific topics and groups of persons, including professional networks, specialist portals, online games, sales platforms, and dating websites, are presumably not subject to the law.
The “users” (and, thus, the content producers) envisioned by the law are people who have registered on a social network from a German IP address and accepted the social network’s user agreement.
NetzDG differentiates between “manifestly” unlawful and merely unlawful content. “Unlawful” content is content that violates provisions of the German Criminal Code dealing with offences against the democratic constitutional state, public order, personal dignity, and sexual self-determination.
Content is manifestly unlawful if “the illegality can be detected within 24 hours without an in-depth examination and with reasonable efforts, i.e. immediately by trained personnel.” Such content must be deleted within twenty-four hours. In controversial cases, the content is not considered “manifestly” unlawful: it must be evaluated within seven days.
NetzDG also stipulates that the procedure for submitting complaints should be easy to understand and access. The procedure should ensure the social network provider responds immediately to a complaint, checks whether the content in question is unlawful, and writes to the complainant explaining what decision was made and the reasons for the decision.
Deleted content must be stored on servers within the EU for a period of ten weeks. All correspondence and communications regarding the disputed content must be documented. In the interests of transparency, social network providers that receive more than one hundred complaints about unlawful content per calendar year are obliged to produce semi-annual, publicly accessible reports on their handling of user complaints.
Germany’s attempt to regulate social networks has triggered criticism from all stakeholders, including users, providers, and civil rights activists. Some lawyers have argued NetzDG is at odds with the right to freely express one’s opinion, since social networks will almost certainly resort to “algorithmic” censorship to proactively block dubious content. (Algorithms have long been employed in the EU to detect allegedly pirated content.) Using algorithms to evaluate content is risky because they are prone to error.
The one-day rule, which stipulates that manifestly unlawful posts must be deleted within twenty-four hours, has also drawn stinging criticism: to avoid steep fines, social network providers are encouraged to remove disputed content immediately rather than spend time analyzing and assessing it. Some observers have compared the recently adopted Russian laws criminalizing public disrespect of the authorities and so-called fake news with Germany’s innovative approach to policing social networks.
A Human Rights Approach to Social Media
David Kaye, UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, has researched how social media moderate the ways their users disseminate information. He has suggested steps for bringing content policies and social media corporate standards in line with human rights principles. He notes civil society’s concern over the fact that states have delegated the protection of civil rights, including freedom of speech and the right to privacy, to private companies such as Facebook and Twitter. He argues private companies are willing to respect human rights only to the extent national laws oblige them to do so. On the other hand, these companies are usually quite responsive when states ask them to delete posts or supply them with information about users. That is, there is no evidence of effective self-regulation and a voluntary commitment to international human rights standards on the part of social networks.
Major social media providers have now published their content policies and principles of moderation, which are based, at least nominally, on human rights principles. Facebook’s Community Standards, for example, run to several pages and contain a special section on hate speech. The company claims it is guided by concerns for freedom of speech, safety, and integrity when it regulates content, meaning it filters out harmful content and shows flexibility when it evaluates posts on topics of public interest. Facebook deems credible threats and direct attacks on people based on race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability (what it calls “protected characteristics”) to be objectionable content. It claims to be able to distinguish serious statements from humorous ones and to identify actual, specific threats.
In all of these cases, we allow the content but expect people to clearly indicate their intent, which helps us better understand why they shared it. Where the intention is unclear, we may remove the content. […] We allow humor and social commentary related to these topics. In addition, we believe that people are more responsible when they share this kind of commentary using their authentic identity.
Currently, social network providers moderate what their users post by means of automatic filters, notification systems, content labeling, account deactivation, and content deletion. In reality, however, they often neglect to inform users when their posts have been removed, or they inform them without giving them a chance to challenge the decision.
At the November 2018 Academy of European Law (ERA) seminar mentioned above, the regulation of user content by private companies was the most controversial topic. Both speakers and audience members voiced doubts about the integrity of the people who moderate social networks. They spoke of the need for greater transparency and accountability, and for effective procedures for challenging the unfair removal of posted matter. The standards social networks maintain for combating abuse often exist only on paper or are simply ineffective, since actual bullying and aggression on social media show no signs of waning. Meanwhile, social network providers have been putting up a spirited fight against nude bodies in old pictures and artworks, as well as humorous statements and documented reports about conflicts and war crimes.
Social networks should, undoubtedly, take measures to identify extremist matter, but not at the expense of freedom of speech. The test of whether something constitutes hate speech should be balanced and non-discriminatory. According to the speakers at the seminar, social networks would make gains in “private law enforcement” if they collaborated more enthusiastically with NGOs and members of civil society on identifying and analyzing the messages disseminated on their platforms.
References
1. “ECRI General Policy Recommendation No. 15: On Combating Hate Speech (Adopted on 8 December 2015),” p. 3.
2. “Although the Court recognised that freedom of expression is especially important for elected representatives of the people, it reiterated that it was crucial for politicians, when expressing themselves in public, to avoid comments that might foster intolerance.” Dirk Voorhoof, “European Court of Human Rights: Case of Féret v. Belgium,” IRIS 2009-8:2/1.
3. Ad van Loon, “European Court of Human Rights: Seizure of ‘blasphemous’ film does not violate Article 10 ECHR,” IRIS 1995-1:3/1.
4. Dirk Voorhoof, “European Court of Human Rights: Fouad Belkacem v. Belgium,” IRIS 2017-9:1/1; “Smajić v. Bosnia and Herzegovina, February 8, 2018,” Columbia Global Freedom of Expression; Dirk Voorhoof, “European Court of Human Rights: Hans Burkhard Nix v. Germany,” IRIS 2018-6:1/2; “Delfi AS v. Estonia (2015),” Wikipedia; Dirk Voorhoof, “European Court of Human Rights: Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary,” IRIS 2016-3:1/2.
5. Facebook, NetzDG Transparency Report, July 2018.
6. Facebook, “Community Standards: Objectionable Content.”