Stop Terror and Hate Content on Facebook

Facebook is misleading shareholders about terror and hate content on its website

In April 2018, Facebook launched a media campaign to promote what it described as a successful crackdown on the use of its website by terror groups. Citing the role of advanced artificial intelligence (AI) and a growing team of expert human reviewers, Facebook asserted to the public that it could now block 99% of terrorist content from ISIS, al-Qaeda, and affiliated groups before it was reported by users.

Thanks to a whistleblower working with the National Whistleblower Center, we now know that this alleged crackdown and new era of responsibility was a fiction. In a petition to the Securities and Exchange Commission (SEC) filed in January 2019 and updated in April 2019, the anonymous whistleblower delivers an analysis showing that during a five-month period in late 2018, Facebook removed less than 30% of the profiles of users who identified themselves as Friends of selected terrorist groups. Of the profiles of those Friends who displayed symbols of terrorist groups, Facebook removed just 38% during the study period.

The petition alleges that not only is Facebook failing to remove terror content, but it is also actively creating new terror content on the website with its auto-generation feature. The petition delivers similar findings about the hate content on the website.  

The Associated Press reported on this explosive story in May 2019, writing:

The complaint is landing as Facebook tries to stay ahead of a growing array of criticism over its privacy practices and its ability to keep hate speech, live-streamed murders and suicides off its service. In the face of criticism, CEO Mark Zuckerberg has spoken of his pride in the company’s ability to weed out violent posts automatically through artificial intelligence. During an earnings call last month, for instance, he repeated a carefully worded formulation that Facebook has been employing.

“In areas like terrorism, for al-Qaida and ISIS-related content, now 99 percent of the content that we take down in the category our systems flag proactively before anyone sees it,” he said. Then he added: “That’s what really good looks like.”

A supplementary petition, filed in mid-September 2019, highlights Facebook’s continued failure to address and remove hate content from the platform. Read the Associated Press article on the update here.

These misleading statements about the company’s handling of terror and hate content violate its duties to disclose material information and risks to its shareholders. It is time for the SEC to rein in Facebook’s deception of its shareholders and the public on its approach to terrorism and white supremacy. 

Click Here To Hold Facebook Accountable

Congressional Response

Congressman Max Rose (D-NY), Chairman of the Subcommittee on Intelligence and Counterterrorism, Committee on Homeland Security, has led the charge on this issue in Congress. In addition to leading several letters to Facebook demanding answers and solutions on the ongoing problem of terror content on Facebook, Rep. Rose delivered a speech on the floor of Congress specifically discussing Facebook’s role in generating this content promoting terrorism.

“In fact, instead of preventing terrorist content from spreading on their platform, as reported by the Associated Press, recently Facebook has been making videos and promoting terrorist content on its own system,” said Rose in a speech on the House floor. “For instance, an Al Qaeda-linked terrorist group has an auto-generated Facebook page that has nearly 4,500 likes. This case was profiled in the AP story and serves as yet another glaring example of Facebook’s inability to police itself. But what is even more striking, is before coming to speak on the House floor today, I checked and this profile is still up there! This profile that the AP reported to Facebook is still up there.”

Read the full speech and learn more about Chairman Rose’s efforts in Congress on this issue here. The video of his speech can be viewed directly here.

The U.S. House Committee on Homeland Security also held a hearing on the issue, titled “Examining Social Media Companies’ Efforts to Counter Terror Online,” on June 26, 2019. It was clear during the hearing that members of Congress, particularly those on the committee of jurisdiction, were displeased with Facebook’s efforts on this issue.

“This is a collective action problem and we are all in this together… There are things happening that are highly preventable… We have every right to believe you aren’t taking this seriously.” - Rep. Max Rose (D-NY), Chairman of the Subcommittee on Intelligence and Counterterrorism
“At the time, I was optimistic with its [Global Internet Forum to Counter Terrorism] intentions and goals… They [social media companies] were unable to comply. …We are yet to receive satisfactory efforts.” - Homeland Security Committee Chairman Bennie Thompson (D-MS)
“My constituency and I want strong policies from your companies that will keep us safe.” - Rep. Lauren Underwood (D-IL)

A video recording of the full hearing can be accessed here. The Whistleblower Protection Blog reported on the hearing here.

Five Key Findings in the Petition

The whistleblower analyzed 3,228 Facebook profiles of individuals expressing affiliation with terror or hate groups. The analysis produced the following key findings:

1. Terror and hate speech and images are proliferating on Facebook 

The whistleblower found that 317 profiles out of the 3,228 surveyed contained the flag or symbol of a terrorist group in their profile image, cover photo, or featured photos on their publicly accessible profiles. The study also details hundreds of other individuals who had publicly and openly shared images, posts, and propaganda of ISIS, al-Qaeda, the Taliban, and other known terror groups, including media that appeared to be of their own militant activity.

2. Contrary to its assurances, Facebook has no meaningful strategy for removing this terror and hate content from its website

A survey of terror content on Facebook found that despite the company’s public claims, far more extremist content remains on the platform than is blocked.  

Facebook has purportedly accepted responsibility for the terror and hate content on its platform. During an April 2018 appearance before a congressional panel, CEO Mark Zuckerberg stated: “When people ask if we’re a media company what I heard is, ‘Do we have a responsibility for the content that people share on Facebook,’ and I believe the answer to that question is yes.” Facebook has repeatedly stated that it blocks 99% of the activity of targeted terrorist groups such as ISIS and al-Qaeda without the need for user reporting.

The whistleblower began by searching Facebook for the English and Arabic name for several groups that the United States has designated as transnational terrorist groups, including ISIS and al-Qaeda. The searches turned up hundreds of results for people who listed jobs, names, or other profile attributes affiliating them with a terror group.  

To study the issue in closer detail, the whistleblower selected a dozen profiles of self-identified terrorists who had publicly accessible “Friends” lists and reviewed the profiles of these 3,228 Friends. These Friends of self-declared terrorists spanned the Middle East, Europe, Asia, and Latin America, and many openly identified as terrorists themselves and shared extremist content.

After a five-month period ending in December 2018, the whistleblower found that less than 30% of the profiles of these Friends had been removed by Facebook and just 38% of the Friends who were displaying symbols of terrorist groups had been removed. This directly contradicted the assurance by Facebook, discussed above, that it is removing 99% of such content. 

The ease with which the whistleblower found these individuals exposes several major failures in Facebook’s content review process. The company’s AI targets only two of the dozens of designated terrorist organizations: ISIS and al-Qaeda, along with their affiliates. Even then, it fails to catch most permutations of their names.

The whistleblower found similar extremist content from self-identified Nazis and white supremacist groups in the United States that went unchallenged. And while Facebook banned the far-right extremist group “Proud Boys” in October 2018, it has allowed dozens of other Nazi and white supremacist groups to continue to operate openly.  

3. Facebook is generating its own terror and hate content, which is being Liked by individuals affiliated with terrorist organizations  

Facebook’s problem with terror and hate content goes beyond its misleading statements about its removal of content that violates community standards. Facebook has also never addressed the fact that it actively promotes terror and hate content across the website via its auto-generation feature.

In multiple documented cases, Facebook enabled networking and recruiting by repurposing user-generated content and auto-generating pages, logos, promotional videos, and other propaganda. These auto-generated pages also filled in information about terrorist groups from Wikipedia.

For example, Facebook auto-generated Local Business pages for terrorist groups using the job designations that users placed in their profiles. Facebook also auto-filled terror icons, branding, and flags that appear when a user searches for members of that group on the platform.

Facebook’s auto-generation of Pages is not limited to designated terrorist groups. The company has also generated dozens of pages connected to Nazis and other white supremacist groups both inside and outside of the United States. The whistleblower’s research identified at least 31 different pages and locations that were auto-generated by Facebook for such groups.  

The terror and hate content generated by Facebook is Liked by thousands of Facebook users. As explained below, these Likes provide yet another means for individuals affiliated with extremist groups to network and recruit.  

4. Facebook is providing a powerful networking and recruitment tool to terrorist and hate groups 

The whistleblower found that individuals who elect to become Friends of terrorist groups, including ISIS and al-Qaeda, share terror-related content frequently and openly on Facebook. For example, the whistleblower’s searches in Arabic for the names of other terror groups like Boko Haram, Al Shabaab, and Hay’at Tahrir Al-Sham immediately uncovered Facebook pages, jobs, and profiles expressing affiliation with and support for those extremist groups. All of this appears to be part of an ongoing attempt by terrorist groups to network and recruit new members.

Facebook provides an ideal platform for networking and recruiting. On Facebook, people can see when their Friends Like a certain group, and this often induces them to explore the material and can persuade them to Like it as well. Some of the terror-related pages on Facebook have thousands of Likes.  

Facebook also facilitates networking and recruiting through its “Suggested Friends” feature, which puts individuals who profess affiliation with and support of terrorist groups in contact with one another.  

5. Facebook has argued it is not a content provider, failing to disclose that it is generating terror and hate content 

Facebook put its shareholders at risk by failing to fully disclose to them its liability risk. Historically, Facebook has benefited from Section 230 of the Communications Decency Act, which provides immunity from tort liability to certain internet companies that serve as hosts of content produced by others. To the extent that Facebook is auto-generating its own terror content and this content, in turn, facilitates networking and harmful acts by terrorists or white supremacists, Facebook may no longer be immune from tort liability under the CDA. 

As a result of its failure to put in place meaningful controls over the extremist content on its website and its own auto-generation of such content, Facebook may be exposing its shareholders to potentially enormous losses. Facebook’s lack of controls and auto-creation of terror and hate content constitute “material information” affecting stock prices; Facebook was therefore required by securities law to disclose this information to shareholders. Its failure to disclose this information, and its exaggeration of the extent of its controls, creates an obligation on the part of the SEC to step in and take enforcement action. 
