Will Facebook finally come clean about its handling of criminal and extremist activity on its website?

by John Kostyack, Executive Director

Published on October 29, 2020


Yesterday, the Senate Commerce Committee held a hearing on whether technology companies should continue to benefit from immunity from liability for content on their social media websites. Section 230 of the Communications Decency Act largely shields companies from such liability on the theory that they are not publishers but merely serving as neutral platforms for content published by others. The CEOs of three of the largest technology companies – Facebook, Twitter and Alphabet (owner of Google) – testified and made lots of polite gestures in answering Senators’ questions. But on perhaps the most important question of the day – whether these companies are finally accepting responsibility for doing all that is necessary to address the growth of criminal and extremist activity on their websites – little progress was made. 

Although the National Whistleblower Center does not have a position on what, if any, modifications to Section 230 are warranted, we believe strongly in the need for transparency and accountability among the big technology companies. This is why we are supporting a series of petitions to the Securities and Exchange Commission (SEC) filed by confidential whistleblowers alleging that Facebook has violated federal securities laws by issuing misleading statements, and omitting material information, about its handling of illegal trafficking in endangered wildlife and antiquities as well as white supremacist and terror content. We wrote to the Senate Commerce Committee’s chairman and ranking member in advance of the hearing, noting Facebook’s history of vague and misleading assurances on these issues and asking them to force Facebook to explain, under oath and with specificity, what it is doing to address these problems. 

I was pleased to see that a number of Senators asked questions about extremist content. Senator Gary Peters (D-MI) asked Facebook’s Mark Zuckerberg about an internal company analysis, recently leaked by a whistleblower to the Wall Street Journal, showing that 64 percent of people who joined an extremist group on Facebook only did so because the company’s algorithm recommended it to them. Zuckerberg professed not to be familiar with the analysis, a claim that effectively shreds what remains of his and his company’s credibility. 

Senator Cory Gardner (R-CO) asked the three CEOs whether they should be held liable for harm caused by content they generate themselves, and all three said yes. Yet, as shown by one of the whistleblowers supported by the National Whistleblower Center, Facebook regularly generates new pages of content using its “auto-generate” feature, and some of them amplify the networking and recruiting efforts of white supremacist and terror groups. Facebook has never acknowledged its content-generation problem, despite extensive reporting by the Associated Press, so its terse answer to Senator Gardner’s question is confusing at best. 

Zuckerberg expressed pride yesterday about the transparency reports that Facebook releases to the public each quarter. But these reports consistently paint a rosy picture, never acknowledging that the company’s algorithms and auto-generation features are assisting white supremacists and other extremist groups with networking and recruiting. Until Facebook comes clean about these critical facts, Congress and the SEC have every reason to look skeptically on its claims of transparency.  
