Whistleblower Petition to SEC: Facebook Is Misleading Shareholders About Terror and Hate Content on its Website

Facebook is not an innocent bystander in terror and hate content online

Introduction

In April 2018, Facebook launched a media campaign to promote its successful crackdown on the use of its website by terror groups. Citing the role of advanced artificial intelligence (AI) and a growing team of expert human reviewers, Facebook asserted to the public that it could now block 99% of terrorist content from ISIS, al-Qaeda, and affiliated groups before it was reported by users.

Thanks to a whistleblower working with the National Whistleblower Center, we now know that this alleged crackdown and new era of responsibility were a fiction. In a petition filed in January 2019 and updated in April 2019, the anonymous whistleblower presents an analysis showing that during a five-month period in late 2018, Facebook removed fewer than 30% of the profiles of users who identified themselves as Friends of selected terrorist groups. Of the Friends who displayed symbols of terrorist groups on their profiles, Facebook removed just 38% during the study period.

The petition alleges that Facebook is not only failing to remove terror content but is also actively creating new terror content on its website through its auto-generation feature. The petition presents similar findings about hate content on the website.

Facebook’s misleading statements about its handling of terror and hate content violate its duties to disclose material information and risks to its shareholders. 

Summary of Key Findings

The whistleblower analyzed 3,228 Facebook profiles of individuals expressing affiliation with terror or hate groups. The analysis produced the following key findings:

1. Terror and hate speech and images are proliferating on Facebook. Thousands of Facebook profiles and pages reviewed by the whistleblower contained speech and images expressing support for, and affiliation with, terrorist organizations and hate groups around the world.

2. Contrary to its assurances, Facebook has no meaningful strategy for removing this terror and hate content from its website. As noted above, Facebook has stated that it has put in place a strategy that enables it to block 99% of the activity of selected terrorist groups before it is reported by users. The whistleblower presents extensive research showing that this is untrue, and that in fact, Facebook has not yet put in place controls needed to monitor the extremist content that users are creating. Large amounts of terror and hate content, including content generated by groups allegedly targeted by Facebook, remained on the website five months after initially being found by the whistleblower.

3. Facebook is generating its own terror and hate content, which is being Liked by individuals affiliated with terrorist organizations. The whistleblower demonstrates that Facebook is a terror and hate content creator, producing terror and hate content using its auto-generation feature. One page created by Facebook, for the Syrian Salafist militant group Hay’at Tahrir Al-Sham (“HTS,” also known as al-Qaeda in Syria), received over 4,400 Likes. Several of those Liking this page have profiles using terrorist iconography; one lists his employment as “mujahid.” Several are associated with illicit looting and trafficking of antiquities, an activity that HTS regulates and taxes in the region as a source of terror financing.

4. Facebook is providing a powerful networking and recruitment tool to terrorist and hate groups. Some of the terror- and hate-related pages on Facebook have thousands of Likes, a feature that terrorist and hate groups can use to identify and recruit supporters. The widespread and persistent promotion of violent content and extremist ideology by terror and hate groups shows that they see Facebook as a valuable tool for networking and recruiting new members.

5. Facebook has argued it is not a content provider, failing to disclose that it is generating terror and hate content. In its press releases and other public statements, Facebook has never disclosed that it auto-generates its own terror and hate content and that this content is being Liked by individuals who affiliate themselves with terrorist organizations. To the contrary, its core defense in litigation filed by families of terror victims is that it is a mere platform, not a content provider.


Key Findings

1. Terror and hate speech and images are proliferating on Facebook 

The whistleblower found that 317 of the 3,228 profiles surveyed contained the flag or symbol of a terrorist group in the profile image, cover photo, or featured photos of their publicly accessible profiles. The study also details hundreds of other individuals who had publicly and openly shared images, posts, and propaganda of ISIS, al-Qaeda, the Taliban, and other known terror groups, including media that appeared to be of their own militant activity.

2. Contrary to its assurances, Facebook has no meaningful strategy for removing this terror and hate content from its website

A survey of terror content on Facebook found that despite the company’s public claims, far more extremist content remains on the platform than is blocked.  

Facebook has publicly accepted responsibility for terror and hate content on its website. During an April 2018 appearance before a congressional panel, CEO Mark Zuckerberg stated: “When people ask if we’re a media company what I heard is, ‘Do we have a responsibility for the content that people share on Facebook,’ and I believe the answer to that question is yes.” Facebook has repeatedly stated that it blocks 99% of the activity of targeted terrorist groups such as ISIS and al-Qaeda without the need for user reporting.

The whistleblower began by searching Facebook for the English and Arabic names of several groups that the United States has designated as transnational terrorist organizations, including ISIS and al-Qaeda. The searches turned up hundreds of results for people whose listed jobs, names, or other profile attributes affiliated them with a terror group.

To study the issue in closer detail, the whistleblower selected a dozen profiles of self-identified terrorists who had publicly accessible “Friends lists” and reviewed the profiles of these 3,228 Friends. These Friends of self-declared terrorists spanned the Middle East, Europe, Asia, and Latin America, and many openly identified as terrorists themselves and shared extremist content.

At the end of a five-month period ending in December 2018, the whistleblower found that fewer than 30% of the profiles of these Friends had been removed by Facebook, and just 38% of the Friends who were displaying symbols of terrorist groups had been removed. This directly contradicted Facebook’s assurance, discussed above, that it is removing 99% of such content.

The ease with which the whistleblower found these individuals exposes several major failures in Facebook’s content review process. The company’s AI targets only two of the dozens of designated terrorist organizations: ISIS and al-Qaeda, along with their affiliates. Even then, it fails to catch most permutations of their names.

The whistleblower found similar extremist content from self-identified Nazis and white supremacist groups in the United States that went unchallenged. And while Facebook banned the far-right extremist group “Proud Boys” in October 2018, it has allowed dozens of other Nazi and white supremacist groups to continue to operate openly.  

3. Facebook is generating its own terror and hate content, which is being Liked by individuals affiliated with terrorist organizations  

Facebook’s problem with terror and hate content goes beyond its misleading statements about the removal of content that violates community standards. Facebook has also never addressed the fact that it actively promotes terror and hate content across the website via its auto-generation feature.

In multiple documented cases, Facebook enabled networking and recruiting by repurposing user-generated content and auto-generating pages, logos, promotional videos, and other propaganda. These auto-generated pages also filled in information about terrorist groups drawn from Wikipedia.

For example, Facebook auto-generated Local Business pages for terrorist groups using the job designations that users placed in their profiles. Facebook also auto-filled terror icons, branding, and flags that appear when a user searches for members of that group on the platform.

Facebook’s auto-generation of Pages is not limited to designated terrorist groups. The company has also generated dozens of pages connected to Nazis and other white supremacist groups both inside and outside of the United States. The whistleblower’s research identified at least 31 different pages and locations that were auto-generated by Facebook for such groups.  

The terror and hate content generated by Facebook is Liked by thousands of Facebook users. As explained below, these Likes provide yet another means for individuals affiliated with extremist groups to network and recruit.  

4. Facebook is providing a powerful networking and recruitment tool to terrorist and hate groups 

The whistleblower found that individuals who elect to become Friends of terrorist groups, including ISIS and al-Qaeda, share terror-related content frequently and openly on Facebook. For example, the whistleblower’s searches in Arabic for the names of other terror groups like Boko Haram, Al Shabaab, and Hay’at Tahrir Al-Sham immediately uncovered Facebook pages, jobs, and profiles expressing affiliation with and support for those extremist groups. All of this appears to be part of an ongoing attempt by terrorist groups to network and recruit new members.

Facebook provides an ideal platform for networking and recruiting. On Facebook, people can see when their Friends Like a certain group, and this often induces them to explore the material and can persuade them to Like it as well. Some of the terror-related pages on Facebook have thousands of Likes.  

Facebook also facilitates networking and recruiting through its “Suggested Friends” feature, which puts individuals who profess affiliation with and support of terrorist groups in contact with one another.  

5. Facebook has argued it is not a content provider, failing to disclose that it is generating terror and hate content 

Facebook put its shareholders at risk by failing to fully disclose to them its liability risk. Historically, Facebook has benefited from Section 230 of the Communications Decency Act, which provides immunity from tort liability to certain internet companies that serve as hosts of content produced by others. To the extent that Facebook is auto-generating its own terror content and this content, in turn, facilitates networking and harmful acts by terrorists or white supremacists, Facebook may no longer be immune from tort liability under the CDA. 

As a result of its failure to put in place meaningful controls over the extremist content on its website and its own auto-generation of such content, Facebook may be exposing its shareholders to potentially enormous losses. Facebook’s lack of controls and auto-creation of terror and hate content constitute “material information” affecting stock prices; Facebook was therefore required by securities law to disclose this information to shareholders. Its failure to disclose this information, and its exaggeration of the extent of its controls, creates an obligation on the part of the SEC to step in and take enforcement action. 


FREQUENTLY ASKED QUESTIONS