In April 2018, Facebook launched a media campaign to promote its “successful crackdown” on the use of its website by terror groups. Citing the role of advanced artificial intelligence (AI) and a growing team of expert human reviewers, Facebook asserted to the public that it could now block 99% of terrorist content from ISIS, al-Qaeda, and affiliated groups before it was reported by users.
Thanks to a whistleblower working with the National Whistleblower Center, we now know that this alleged crackdown and new era of responsibility was a fiction. In a petition filed in January 2019 and updated in April 2019, the anonymous whistleblower presents an analysis showing that during a five-month period in late 2018, Facebook removed fewer than 30% of the profiles of users who identified themselves as Friends of selected terrorist groups. Of the profiles of those Friends who displayed symbols of terrorist groups, Facebook removed just 38% during the study period.
The petition alleges that not only is Facebook failing to remove terror content, but it is also actively creating new terror content on the website with its auto-generation feature. The petition delivers similar findings about the hate content on the website.
Facebook’s misleading statements about its handling of terror and hate content violate its duties to disclose material information and risks to its shareholders.
A supplementary petition, filed in mid-September 2019, highlights Facebook’s continued failure to address and remove hate content from its platform.
Congressman Max Rose (D-NY), Chairman of the Subcommittee on Intelligence and Counterterrorism, Committee on Homeland Security, has led the charge on this issue in Congress. In addition to leading several letters to Facebook demanding answers and solutions on the ongoing problem of terror content on Facebook, Rep. Rose delivered a speech on the floor of Congress specifically discussing Facebook’s role in generating this content promoting terrorism.
“In fact, instead of preventing terrorist content from spreading on their platform, as reported by the Associated Press, recently Facebook has been making videos and promoting terrorist content on its own system,” said Rose in a speech on the House floor. “For instance, an al-Qaeda-linked terrorist group has an auto-generated Facebook page that has nearly 4,500 likes. This case was profiled in the AP story and serves as yet another glaring example of Facebook’s inability to police itself. But what is even more striking is that before coming to speak on the House floor today, I checked, and this profile is still up there! This profile that the AP reported to Facebook is still up there.”
The U.S. House Committee on Homeland Security held a hearing on the issue, titled “Examining Social Media Companies’ Efforts to Counter Terror Online,” on June 26, 2019. It was clear during the hearing that members of Congress, including those on the committee of jurisdiction, were displeased with Facebook’s efforts on this issue.
“This is a collective action problem and we are all in this together… There are things happening that are highly preventable… We have every right to believe you aren’t taking this seriously.” - Rep. Max Rose (D-NY), Chairman of the Subcommittee on Intelligence and Counterterrorism
“At the time, I was optimistic with its [Global Internet Forum to Counter Terrorism] intentions and goals… They [social media companies] were unable to comply… We are yet to receive satisfactory efforts.” - Homeland Security Committee Chairman Bennie Thompson (D-MS)
“My constituency and I want strong policies from your companies that will keep us safe.” - Rep. Lauren Underwood (D-IL)
Summary of Key Findings
The whistleblower analyzed 3,228 Facebook profiles of individuals expressing affiliation with terror or hate groups. The analysis produced the following key findings:
1. Terror and hate speech and images are proliferating on Facebook. Thousands of Facebook profiles and pages reviewed by the whistleblower contained speech and images expressing support for, and affiliation with, terrorist organizations and hate groups around the world.
2. Contrary to its assurances, Facebook has no meaningful strategy for removing this terror and hate content from its website. As noted above, Facebook has stated that it has put in place a strategy that enables it to block 99% of the activity of selected terrorist groups before it is reported by users. The whistleblower presents extensive research showing that this is untrue, and that in fact, Facebook has not yet put in place controls needed to monitor the extremist content that users are creating. Large amounts of terror and hate content, including content generated by groups allegedly targeted by Facebook, remained on the website five months after initially being found by the whistleblower.
3. Facebook is generating its own terror and hate content, which is being Liked by individuals affiliated with terrorist organizations. The whistleblower demonstrates that Facebook is a terror and hate content creator, producing terror and hate content using its auto-generation feature. One page created by Facebook, for the Syrian Salafist militant group Hay’at Tahrir Al-Sham (“HTS,” also known as al-Qaeda in Syria), received over 4,400 Likes. Several of those Liking this page have profiles using terrorist iconography; one lists his employment as “mujahid.” Several are associated with illicit looting and trafficking of antiquities, an activity that HTS regulates and taxes in the region as a source of terror financing.
4. Facebook is providing a powerful networking and recruitment tool to terrorist and hate groups. Some of the terror- and hate-related pages on Facebook have thousands of Likes, a feature that terrorist and hate groups can use to identify and recruit supporters. The widespread and persistent promotion of violent content and extremist ideology by terror and hate groups shows that they see Facebook as a valuable tool for networking and recruiting new members.
5. Facebook has argued it is not a content provider, failing to disclose that it is generating terror and hate content. In its press releases and other public statements, Facebook has never disclosed that it auto-generates its own terror and hate content and that this content is being Liked by individuals who affiliate themselves with terrorist organizations. To the contrary, its core defense in litigation filed by families of terror victims is that it is a mere platform, not a content provider.
1. Terror and hate speech and images are proliferating on Facebook
The whistleblower found that 317 profiles out of the 3,228 surveyed contained the flag or symbol of a terrorist group in their profile images, cover photo, or featured photos on their publicly accessible profiles. The study also details hundreds of other individuals who had publicly and openly shared images, posts, and propaganda of ISIS, al-Qaeda, the Taliban and other known terror groups, including media that appeared to be of their own militant activity.
2. Contrary to its assurances, Facebook has no meaningful strategy for removing this terror and hate content from its website
A survey of terror content on Facebook found that despite the company’s public claims, far more extremist content remains on the platform than is blocked.
Facebook has ostensibly accepted responsibility for terror and hate content on its platform. During an April 2018 appearance before a congressional panel, CEO Mark Zuckerberg stated: “When people ask if we’re a media company what I heard is, ‘Do we have a responsibility for the content that people share on Facebook,’ and I believe the answer to that question is yes.” Facebook has repeatedly stated that it blocks 99% of the activity of targeted terrorist groups such as ISIS and al-Qaeda without the need for user reporting.
The whistleblower began by searching Facebook for the English and Arabic name for several groups that the United States has designated as transnational terrorist groups, including ISIS and al-Qaeda. The searches turned up hundreds of results for people who listed jobs, names, or other profile attributes affiliating them with a terror group.
To study the issue in closer detail, the whistleblower selected a dozen profiles of self-identified terrorists who had publicly accessible “Friends” lists and reviewed the profiles of these 3,228 Friends. These Friends of self-declared terrorists spanned the Middle East, Europe, Asia, and Latin America, and many openly identified as terrorists themselves and shared extremist content.
After a five-month period ending in December 2018, the whistleblower found that Facebook had removed fewer than 30% of the profiles of these Friends, and just 38% of the profiles of Friends who displayed symbols of terrorist groups. This directly contradicts Facebook’s assurance, discussed above, that it removes 99% of such content.
The ease with which the whistleblower found these individuals exposes several major failures in Facebook’s content review process. The company’s AI targets only two of the dozens of designated terrorist organizations: ISIS and al-Qaeda, along with their affiliates. Even then, it fails to catch most permutations of their names.
The whistleblower found similar extremist content from self-identified Nazis and white supremacist groups in the United States that went unchallenged. And while Facebook banned the far-right extremist group “Proud Boys” in October 2018, it has allowed dozens of other Nazi and white supremacist groups to continue to operate openly.
3. Facebook is generating its own terror and hate content, which is being Liked by individuals affiliated with terrorist organizations
Facebook’s problem with terror and hate content goes beyond its misleading statements about its removal of content that violates community standards. Facebook has also never addressed the fact that it actively promotes terror and hate content across the website via its auto-generation feature.
In multiple documented cases, Facebook enabled networking and recruiting by repurposing user-generated content and auto-generating pages, logos, promotional videos, and other propaganda. These auto-generated pages also filled in information about terrorist groups from Wikipedia.
For example, Facebook auto-generated “Local Business” pages for terrorist groups using the job designations that users placed in their profiles. Facebook also auto-filled terror icons, branding, and flags that appear when a user searches for members of that group on the platform.
Facebook’s auto-generation of Pages is not limited to designated terrorist groups. The company has also generated dozens of pages connected to Nazis and other white supremacist groups both inside and outside of the United States. The whistleblower’s research identified at least 31 different pages and locations that were auto-generated by Facebook for such groups.
The terror and hate content generated by Facebook is Liked by thousands of Facebook users. As explained below, these Likes provide yet another means for individuals affiliated with extremist groups to network and recruit.
4. Facebook is providing a powerful networking and recruitment tool to terrorist and hate groups
The whistleblower found that individuals who elect to become Friends of terrorist groups, including ISIS and al-Qaeda, share terror-related content frequently and openly on Facebook. For example, the whistleblower’s searches in Arabic for the names of other terror groups like Boko Haram, Al Shabaab, and Hay’at Tahrir Al-Sham immediately uncover Facebook pages, jobs, and profiles expressing affiliation with and support for those extremist groups. All of this appears to be part of an ongoing attempt by terrorist groups to network and recruit new members.
Facebook provides an ideal platform for networking and recruiting. On Facebook, people can see when their Friends Like a certain group, and this often induces them to explore the material and can persuade them to Like it as well. Some of the terror-related pages on Facebook have thousands of Likes.
Facebook also facilitates networking and recruiting through its “Suggested Friends” feature, which puts individuals who profess affiliation with and support of terrorist groups in contact with one another.
5. Facebook has argued it is not a content provider, failing to disclose that it is generating terror and hate content
Facebook put its shareholders at risk by failing to fully disclose to them its liability risk. Historically, Facebook has benefited from Section 230 of the Communications Decency Act, which provides immunity from tort liability to certain internet companies that serve as hosts of content produced by others. To the extent that Facebook is auto-generating its own terror content and this content, in turn, facilitates networking and harmful acts by terrorists or white supremacists, Facebook may no longer be immune from tort liability under the CDA.
As a result of its failure to put in place meaningful controls over the extremist content on its website and its own auto-generation of such content, Facebook may be exposing its shareholders to potentially enormous losses. Facebook’s lack of controls and auto-creation of terror and hate content constitute “material information” affecting stock prices; Facebook was therefore required by securities law to disclose this information to shareholders. Its failure to disclose this information, and its exaggeration of the extent of its controls, creates an obligation on the part of the SEC to step in and take enforcement action.