Earlier this month, France’s Emmanuel Macron and New Zealand’s Jacinda Ardern unveiled a new joint plan to eliminate extremist content online, following an earlier meeting between Macron and Facebook CEO Mark Zuckerberg about tackling hate speech on the social media platform.
The joint nine-point plan has been dubbed the “Christchurch Call”, named for the city in New Zealand where 51 people were killed in two mosques on March 15th. It is a summit aimed at signing countries and companies up to a pledge to clamp down on hateful content. The goal is to continue the initiative at a meeting of G7 leaders later this year, and at the United Nations General Assembly in September.
The gunman live-streamed his attack on Facebook. The video was viewed over 4,000 times before it was removed, and even after its deletion, copies spread across Facebook, YouTube, and Twitter.
Leaders from Britain, Canada, Ireland, Senegal, Indonesia, Jordan, and the EU were all in attendance, as were representatives of Twitter, Microsoft, Google, Facebook, and other internet and social media companies (Zuckerberg was not present, having met with Macron the week prior). In a joint statement, the companies called the Christchurch shootings a “horrifying tragedy”, adding that “it is right that we come together”.
Noticeably absent from the proceedings was the United States.
New Zealand Prime Minister Jacinda Ardern has used the Christchurch killings to rally global support for measures to keep violent and extremist content off the world’s largest social media platforms. Facebook, Twitter, Google, Microsoft, and Amazon have all vowed to step up their monitoring for such violent material and remove it.
In a later interview, Microsoft president Brad Smith said the companies’ plans are part of the tech industry’s broader shift away from self-regulation. “Now you see a clear reaction and, in some cases, rejection of that”, said Smith.
“We all need to act, and that includes social media providers taking more responsibility for the content that is on their platforms”, Ardern told reporters before the May 15th meeting with France. Last month, she had said, “this isn’t about freedom of expression; this is about preventing violent extremism and terrorism online. I don’t think anyone would argue that the terrorist had a right to livestream the murder of 50 people”.
“We have taken steps to act”, Macron added.
Bringing Tech On Board
Both France’s and New Zealand’s leaders were clear that governments alone cannot tackle online hatred: tech companies must also do their part. To that end, the Christchurch Call specifically urges tech giants to scrub their platforms of extremist content, and contains plans to rein in the unfettered sharing of toxic or extremist material, such as hate speech and terrorist propaganda, over the internet. It also calls for the development of tools to block users from downloading violent content, along with increased transparency about how social media platforms find and remove offensive content. Finally, the Christchurch Call demands that tech company algorithms not direct users to violent content, thus curbing its reach.
Directly addressing the tech company representatives present at the meeting, Ardern said, “The internet is made up of vast, complex technological platforms. But they were created by people. They are managed by people. When they harm, they harm people.”
“I know that none of you want your platforms to perpetuate and amplify terrorism and extremist violence. But these platforms have grown at such pace, with such popularity, that we are all now dealing with consequences you may not have imagined, when your company was just a start-up. Your scale and influence brings a burden of responsibility.”
“The Christchurch attack was unprecedented. But our response is equally unprecedented”, added Ardern.
The Christchurch Call “is a global response to a tragedy that occurred on the shores of my country, but was ultimately felt around the world”, commented Ardern, at a joint news conference with Macron. “Fundamentally, it ultimately commits us all to build a more humane internet, which cannot be misused by terrorists for their hateful purposes.”
The Christchurch Call is not binding, nor does it include penalties for platforms or governments that do not comply. However, as governments increasingly consider new laws and regulations for the technology sector, tech companies are under mounting pressure to prove that they can police their own platforms. Facebook, for example, has already agreed to place additional restrictions on the use of its live video service.
For example, people who break Facebook’s “most serious policies” will be immediately banned from using Facebook Live for a set period, such as 30 days. Facebook did not specify all the rules it will use to enforce the new one-strike approach, but pointed to current community standards that prohibit spreading terrorist propaganda on the social network. The policy will expand to other violations in the coming weeks, and the company says it will also stop the same offenders from purchasing ads. Facebook has also removed 1.5 million copies of the Christchurch killer’s live-stream video, along with 3 billion fake profiles and 7 million hate speech posts.
Nevertheless, analysts cautioned that without any punitive measures or consequences for failing to remove extremist content, tech companies are unlikely to alter their behaviour or act in good faith. Robyn Caplan, a researcher at Data & Society, a research institute in New York, and a doctoral candidate at Rutgers University, said that simply asking tech companies to remove violent content has not worked for other countries that have tried that approach. “Each country will have its own definitions of what constitutes hate speech and what constitutes harassment”, she said. Moreover, resources are often lacking: tracking particular extremist groups and related content requires tech companies to hire staff members with specific cultural and linguistic knowledge.
On top of that, many extremists are radicalised outside the largest social media platforms, not on Facebook but in WhatsApp messaging groups or on message boards like 4chan and 8chan.
Moreover, these measures open a wider debate about regulation of the internet overall, and what constitutes free expression online. While companies and governments can quickly agree on removing violent, terrorist, and child-exploitation material, figuring out what “counts” as hate speech versus offensive but tolerable political speech – or even just disinformation – has proven a far harder line to draw.
A Notable Absence
The United States did not sign on to the Christchurch Call, but did release a statement emphasising its belief that tech companies should enforce their own guidelines on terrorism, and that counter-terrorism policies should not stifle freedom of speech or of the press.
“While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the call”, said a White House spokesperson. “We will continue to engage governments, industry, and civil society, to counter terrorist content on the internet.”
“The best tool to defeat terrorist speech is productive speech.”
Former U.S. Secretary of State John Kerry said that while “a higher level of responsibility is demanded from all of the platforms”, it is necessary to find a way to not censor legitimate discussion. “It’s a hard line to draw sometimes”, he concluded.
“That the U.S. is a no-show to such an important meeting, indicates a shocking lack of concern about the tremendous harms perpetuated by the internet, including terrorism and killing”, said Ghosh. “Further, our lack of participation will reinforce the intellectual divide between Americans and the rest of the world.”
Ghosh noted that the Christchurch Call, while not binding, is still symbolically significant, as it serves to put the tech companies on notice. “If companies participate in the accord, they are necessarily representing to consumers that they will live up to its demands, and they will be compelled by governmental agencies to live up to those commitments”, he said.
In contrast, Europe has been taking the global lead in holding large tech companies accountable on a multitude of issues, from data privacy to taxation. The EU has also introduced the General Data Protection Regulation, whose key principle is to give users control over the data that companies collect about them, while requiring tech firms to obtain users’ consent before sharing that data.
EU lawmakers have also backed proposals to levy fines on user-generated-content websites that fail to remove extremist content within one hour. France and Britain have proposed new laws requiring companies to delete toxic content hosted on their platforms. After the Christchurch massacre, Australia passed a law making executives personally liable for violent material spread on their companies’ platforms.
During a later press conference, both Macron and Ardern played down the Trump administration’s position on the matter, noting that American officials did express broad support for the pledge’s goals. Macron also acknowledged the difference in their understanding on free speech, but argued that stricter policies were nonetheless necessary to halt the spread – not just of violent content, but of hate speech and racist material that incites extremist behaviour. However, what constitutes such speech is often difficult to clearly define. “That’s the grey zone”, Macron explained.
Both leaders also praised the tech companies for agreeing to join the initiative, and make changes. “We have an agreement here that involves both tech companies and countries”, said Ardern. “In the past, we have had either one or the other.”