Stop Terror and Hate Content on Facebook

Facebook is misleading shareholders about terror and hate content on its website

Take Action Today: Hold Facebook Accountable

In April 2018, Facebook launched a media campaign to promote its successful crackdown on the use of its website by terror groups. Citing the role of advanced artificial intelligence (AI) and a growing team of expert human reviewers, Facebook asserted to the public that it could now block 99% of terrorist content from ISIS, al-Qaeda, and affiliated groups before it was reported by users.

Thanks to a whistleblower working with the National Whistleblower Center, we now know that this alleged crackdown and new era of responsibility was a fiction. In a petition to the Securities and Exchange Commission (SEC) filed in January 2019 and updated in April 2019, the anonymous whistleblower presents an analysis showing that during a five-month period in late 2018, Facebook removed less than 30% of the profiles of users who identified themselves as Friends of selected terrorist groups. Of the profiles of Friends who displayed symbols of terrorist groups, Facebook removed just 38% during the study period.

The petition alleges that not only is Facebook failing to remove terror content, but it is also actively creating new terror content on the website with its auto-generation feature. The petition reports similar findings about hate content on the website.

The Associated Press reported on this explosive story in May 2019, writing:

The complaint is landing as Facebook tries to stay ahead of a growing array of criticism over its privacy practices and its ability to keep hate speech, live-streamed murders and suicides off its service. In the face of criticism, CEO Mark Zuckerberg has spoken of his pride in the company’s ability to weed out violent posts automatically through artificial intelligence.

A supplementary petition, filed in mid-September 2019, highlights Facebook’s continued failure to address and remove hate content from the platform. Read the Associated Press article on the update here.

Another supplementary petition, filed in May 2020, features new research by the Tech Transparency Project (TTP) and shows that despite Facebook’s assurances since the filing of the 2019 whistleblower petition, and more than two years after Facebook hosted an event page for the “Unite the Right” white supremacist rally in Charlottesville, Virginia, its assistance to white supremacist groups continues unabated. Many of the white supremacist pages identified in the TTP research were created by Facebook itself.

In addition to featuring the TTP research, the whistleblower’s recent supplemental petition highlights a Wall Street Journal investigation of Facebook, which reveals that Facebook learned from its own internal research in 2016 that its algorithms were helping terror and hate groups recruit and expand their ranks. Facebook never disclosed these research findings to its shareholders or the public.

These misleading statements about the company’s handling of terror and hate content violate its duties to disclose material information and risks to its shareholders. This case comes at a time when corporate whistleblowers enjoy broad public support. A Whistleblower News Network poll released in October 2020 shows that the American public considers corporate fraud a national priority and wants to help whistleblowers who expose it. It is time for the SEC to listen to the public and this petition and rein in Facebook’s deception of its shareholders and the public on its approach to terrorism and white supremacy.

Click Here To Hold Facebook Accountable

Facebook’s Legal Responsibility

Facebook’s explosive growth into the world’s largest social networking website has taken place, in significant part, due to its having persuaded regulators that it is merely a platform for content produced by others, not a publisher of its own content. This assertion has been central to its ability not only to avoid regulation but also to maintain immunity from liability in civil litigation.

In lawsuits filed by those who believe Facebook is legally responsible for harm caused by activity on its website, such as families of victims of terror attacks, Facebook has successfully wielded Section 230 of the Communications Decency Act. Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Now that our whistleblower has shown that terror and hate content is being generated by Facebook itself and not by “another information content provider,” Facebook’s ability to claim Section 230 immunity in cases filed by families of terror victims has been placed into question.

Plaintiffs in these cases will still carry the heavy burden of proving that Facebook’s content contributed to the acts of terrorism that harmed their family members. But Facebook may now be facing the disappearance of a legal defense and the possibility of being forced to argue over causation before a jury. This will presumably cause major consternation among Facebook shareholders.

Why the Securities and Exchange Commission?

Facebook is well aware that its website is now being used by terror and hate groups to achieve their destructive goals. Apparently, it has concluded that it is no longer tenable to claim that it bears no responsibility for terror and hate content. Thus, it has recently begun assuring shareholders and the public that it is now hard at work removing this content.

Facebook regularly states that it is blocking 99% of the activity of targeted terrorist groups on its website, without any need for user reports, using a combination of AI and human reviewers. Yet, after researching the Facebook activity of those professing support for terrorist groups from August through December 2018, our whistleblower found that far more extremist content remains on the platform than is blocked.

As a publicly-traded company, Facebook has an obligation to inform its shareholders of these risks and provide a true accounting of its actions to stop the problem. It has not done so.

Shareholders depend on accurate information to guide their decisions on whether to invest in a company. As owners, they are in a unique position to force management to adopt corporate social responsibility practices that are essential to maintaining the company’s social license and long-term profitability.  Thanks to the whistleblower protections in the Dodd-Frank amendments, the Securities and Exchange Commission (SEC) is now fully aware of Facebook’s deceptive practices and is well-positioned to act to protect shareholders and the public from continued deception.

The question now is whether the SEC will take the action necessary to achieve meaningful changes in Facebook’s behavior. The Securities Exchange Act empowers the SEC to pursue civil and criminal penalties when publicly-traded companies make false and misleading statements about matters that materially affect share value. The Dodd-Frank amendments allow the SEC to impose civil penalties through its own administrative proceedings rather than having to go to court, so the SEC has the ability to move quickly.

It is time for the SEC to act. Only through a substantial civil or criminal penalty can the SEC ensure that Facebook’s deceptive practices come to an end and enable shareholders and the public to bring pressure to bear on the company regarding its handling of terror and hate content.

Five Key Findings in the Petition

The whistleblower analyzed 3,228 Facebook profiles of individuals expressing affiliation with terror or hate groups. The analysis produced the following key findings:

1. Terror and hate speech and images are proliferating on Facebook

The whistleblower found that 317 of the 3,228 profiles surveyed contained the flag or symbol of a terrorist group in the profile image, cover photo, or featured photos of their publicly accessible profiles. The study also details hundreds of other individuals who had publicly and openly shared images, posts, and propaganda of ISIS, al-Qaeda, the Taliban, and other known terror groups, including media that appeared to be of their own militant activity.

2. Contrary to its assurances, Facebook has no meaningful strategy for removing this terror and hate content from its website

 A survey of terror content on Facebook found that despite the company’s public claims, far more extremist content remains on the platform than is blocked.

Facebook has purportedly accepted responsibility for terror and hate content on its website. During an April 2018 appearance before a congressional panel, CEO Mark Zuckerberg stated: “When people ask if we’re a media company what I heard is, ‘Do we have a responsibility for the content that people share on Facebook,’ and I believe the answer to that question is yes.” Facebook has repeatedly stated that it blocks 99% of the activity of targeted terrorist groups such as ISIS and al-Qaeda without the need for user reporting.

The whistleblower began by searching Facebook for the English and Arabic names of several groups that the United States has designated as transnational terrorist organizations, including ISIS and al-Qaeda. The searches turned up hundreds of results for people who listed jobs, names, or other profile attributes affiliating them with a terror group.

To study the issue in closer detail, the whistleblower selected a dozen profiles of self-identified terrorists who had publicly accessible “Friends” lists and reviewed the profiles of their combined 3,228 Friends. These Friends of self-declared terrorists spanned the Middle East, Europe, Asia, and Latin America, and many openly identified as terrorists themselves and shared extremist content.

After a five-month period ending in December 2018, the whistleblower found that less than 30% of the profiles of these Friends had been removed by Facebook and just 38% of the Friends who were displaying symbols of terrorist groups had been removed. This directly contradicted the assurance by Facebook, discussed above, that it is removing 99% of such content.
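The removal rates cited above are simple proportions over the surveyed profiles. The sketch below is a hypothetical reconstruction of that arithmetic, not the whistleblower’s actual tooling; the record fields and function names are assumptions, and only the reported figures (under 30% of profiles removed overall, 38% of symbol-displaying profiles removed) come from the petition.

```python
# Hypothetical sketch of tallying the petition's removal rates from a profile survey.
# The data model below is assumed; only the percentages reported in the petition
# (under 30% overall, 38% among symbol-displaying profiles) come from the source.

from dataclasses import dataclass

@dataclass
class SurveyedProfile:
    profile_id: str
    displays_terror_symbol: bool   # flag/symbol in profile image, cover photo, or featured photos
    removed_by_facebook: bool      # profile no longer present at the end of the study period

def removal_rates(profiles: list[SurveyedProfile]) -> tuple[float, float]:
    """Return (overall removal rate, removal rate among symbol-displaying profiles)."""
    removed = sum(p.removed_by_facebook for p in profiles)
    symbol_profiles = [p for p in profiles if p.displays_terror_symbol]
    symbol_removed = sum(p.removed_by_facebook for p in symbol_profiles)
    overall = removed / len(profiles) if profiles else 0.0
    symbol_rate = symbol_removed / len(symbol_profiles) if symbol_profiles else 0.0
    return overall, symbol_rate

# Hypothetical usage over the 3,228 surveyed Friend profiles:
# overall, symbol_rate = removal_rates(surveyed_profiles)
# Per the petition's findings, these came out below 0.30 and at roughly 0.38.
```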

The ease with which the whistleblower found these individuals exposes several major failures in Facebook’s content review process. The company’s AI targets only two of the dozens of designated terrorist organizations: ISIS and al-Qaeda, along with their affiliates. Even then, it fails to catch most permutations of their names.

The whistleblower found similar extremist content from self-identified Nazis and white supremacist groups in the United States that went unchallenged. And while Facebook banned the far-right extremist group “Proud Boys” in October 2018, it has allowed dozens of other Nazi and white supremacist groups to continue to operate openly.

 3. Facebook is generating its own terror and hate content, which is being Liked by individuals affiliated with terrorist organizations

Facebook’s problem with terror and hate content goes beyond its misleading statements about its removal of content that violates community standards. Facebook has also never addressed the fact that it actively promotes terror and hate content across the website via its auto-generation feature.

In multiple documented cases, Facebook enabled networking and recruiting by repurposing user-generated content and auto-generating pages, logos, promotional videos, and other propaganda. These auto-generated pages also filled in information about terrorist groups from Wikipedia.

For example, Facebook auto-generated “Local Business” pages for terrorist groups using the job designations that users placed in their profiles. Facebook also auto-filled terror icons, branding, and flags that appear when a user searches for members of that group on the platform.

The company also created “Memories” and “Celebration” videos by repurposing images celebrating terrorism on users’ pages.

Facebook’s auto-generation of Pages is not limited to designated terrorist groups. The company has also generated dozens of pages connected to Nazis and other white supremacist groups both inside and outside of the U.S. The whistleblower’s research identified at least 31 different pages and locations that were auto-generated by Facebook for such groups.

The terror and hate content generated by Facebook is Liked by thousands of Facebook users. As explained below, these Likes provide yet another means for individuals affiliated with extremist groups to network and recruit. 

4. Facebook is providing a powerful networking and recruitment tool to terrorist and hate groups

The whistleblower found that individuals who elect to become Friends of terrorist groups, including ISIS and al-Qaeda, share terror-related content frequently and openly on Facebook. For example, the whistleblower’s searches in Arabic for the names of other terror groups like Boko Haram, Al Shabaab, and Hay’at Tahrir Al-Sham immediately uncovered Facebook pages, jobs, and profiles expressing affiliation with and support for those extremist groups. All of this appears to be part of an ongoing attempt by terrorist groups to network and recruit new members.

Facebook provides an ideal platform for networking and recruiting. On Facebook, people can see when their Friends Like a certain group, and this often induces them to explore the material and can persuade them to Like it as well. Some of the terror-related pages on Facebook have thousands of Likes.

Facebook also facilitates networking and recruiting through its “Suggested Friends” feature, which puts individuals who profess affiliation with and support of terrorist groups in contact with one another.

5. Facebook has argued it is not a content provider, failing to disclose that it is generating terror and hate content

Facebook put its shareholders at risk by failing to fully disclose to them its liability risk. Historically, Facebook has benefited from Section 230 of the Communications Decency Act (CDA), which provides immunity from tort liability to certain internet companies that serve as hosts of content produced by others. To the extent that Facebook is auto-generating its own terror content and this content, in turn, facilitates networking and harmful acts by terrorists or white supremacists, Facebook may no longer be immune from tort liability under the CDA.

As a result of its failure to put in place meaningful controls over the extremist content on its website, and its own auto-generation of such content, Facebook may be exposing its shareholders to potentially enormous losses. Facebook’s lack of controls and auto-creation of terror and hate content constitute “material information” affecting stock prices; Facebook was therefore required by securities law to disclose this information to shareholders. Its failure to disclose this information, and its exaggeration of the extent of its controls, creates an obligation on the part of the SEC to step in and take enforcement action.

Congressional Response

Congressman Max Rose (D-NY), Chairman of the Subcommittee on Intelligence and Counterterrorism, Committee on Homeland Security, has led the charge on this issue in the House of Representatives. In addition to leading several letters to Facebook demanding answers and solutions on the ongoing problem of terror content on Facebook, Rep. Rose delivered a speech on the floor of Congress specifically discussing Facebook’s role in generating this content promoting terrorism.

“In fact, instead of preventing terrorist content from spreading on their platform, as reported by the Associated Press, recently Facebook has been making videos and promoting terrorist content on its own system,” said Rose in a speech on the House floor. “For instance, an Al-Qaeda-linked terrorist group has an autogenerated Facebook page that has nearly 4,500 likes. This case was profiled in the AP story and serves as yet another glaring example of Facebook’s inability to police itself. But what is even more striking, is before coming to speak on the House floor today, I checked and this profile is still up there! This profile that the AP reported to Facebook is still up there.”

Read the full speech and learn more about Chairman Rose’s efforts in Congress on this issue here. The video of his speech can be viewed directly here.

The U.S. House Committee on Homeland Security also held a hearing on the issue, titled “Examining Social Media Companies’ Efforts to Counter Terror Online,” on June 26, 2019. It was clear during the hearing that members of Congress, including those on the committee of jurisdiction, were displeased with Facebook’s efforts on this issue.

“This is a collective action problem and we are all in this together…There are things happening that are highly preventable… We have every right to believe you aren’t taking this seriously.”- Rep. Max Rose (D-NY), Chairman of Subcommittee on Intelligence and Counterterrorism
"At the time, I was optimistic with its [Global Internet Forum to Counter Terrorism] intentions and goals… They [social media companies] were unable to comply. …We are yet to receive satisfactory efforts."- Homeland Security Committee Chairman Bennie Thompson (D-MS)
"My constituency and I want strong policies from your companies that will keep us safe."- Rep. Lauren Underwood (D-IL)

A video recording of the full hearing can be accessed here.

The Senate’s Committee on Commerce, Science, and Transportation also held a hearing on this issue on September 18th, 2019, titled “Mass Violence, Extremism, and Digital Responsibility.” Representatives from Facebook, Twitter, and Google all testified, along with the Senior Vice President of Programs for the Anti-Defamation League. Senators expressed similar concerns to many of their colleagues in the House.

“No matter how great the benefits to society these platforms provide, it is important to consider how they can be used for evil at home and abroad” – Sen. Roger Wicker (R-MS)
“While the First Amendment to the Constitution protects free speech, speech that incites imminent violence is not protected and Congress should review and strengthen laws that prohibit threats of violence, stalking, and intimidation to make sure we stop the online behavior that does incite violence.” – Sen. Maria Cantwell (D-WA)

A video recording of the full hearing can be accessed here.

Report Fraud Now