Americans Expect Social Media Content Moderation

By: Michelle Amazeen

In an age when misinformation spreads at the speed of a click, the announcement by Meta, formerly Facebook, that it will abandon its partnerships with independent fact-checking organizations raises urgent questions. Meta’s decision comes at a critical juncture: the U.S. faces an era in which disinformation campaigns, often amplified by political figures, threaten democratic discourse and public trust. How will this shift affect the quality of content on its platforms? And, as Meta is the largest funder of fact-checkers globally, what does this mean for the future of fact-checking itself?

Meta CEO Mark Zuckerberg justified the decision by claiming that the company’s fact-checking program “too often became a tool to censor.” Yet a recent poll from Boston University’s College of Communication paints a very different picture of public sentiment. A majority (72%) of Americans believe it is acceptable for social media platforms to remove inaccurate information about public health issues. Support spans political divides, with 85% of Democrats, 70% of Independents, and even 61% of Republicans agreeing that such content moderation is acceptable.

Instead of relying on independent fact-checkers, Meta is pivoting to a “community notes” model, in which users write and rate notes that accompany posts containing dubious claims. This mirrors the Community Notes system that Elon Musk has championed on Twitter, now rebranded as X.

But Americans remain skeptical. The same poll reveals that nearly two in three adults (63%) believe independent fact-checking organizations should verify social media content. In contrast, less than half (48%) support the “community notes” model. Although there are some partisan differences—73% of Democrats, 62% of Independents, and 55% of Republicans favor a fact-checking model—the lukewarm reception of community notes crosses party lines.

Is there any evidence that crowdsourcing claim verification works? The academic literature is mixed. In certain contexts, crowdsourcing can rival expert verification. However, other research highlights its inconsistencies. Crowdsourcing is generally effective at assessing the credibility of news sources but struggles to reliably identify disinformation. Partisanship often undermines its efficacy, influencing which claims are selected for verification. Moreover, distinguishing verifiable claims from unverifiable ones is a skill that typically requires training.

In practice, the results are sobering. Despite the presence of the community notes program, X remains a platform rife with misinformation on elections, climate change, and other critical topics. Offloading content moderation responsibilities onto users is yet another example of platforms shirking their duty to ensure the safety of their digital products. By abandoning content moderation, social media platforms risk enabling disinformation from those in power. Accountability measures are essential, especially as a new administration with a history of weaponizing disinformation takes office.

Still, paying independent fact-checkers has its own complications. Under Meta’s program, the platform itself determined which claims were submitted for review. This approach often resulted in fact-checkers debunking viral but non-political content, while more politically charged claims that could influence democratic processes went unaddressed. Additionally, Meta did not disclose what happened to posts flagged as inaccurate, leaving fact-checkers in the dark about the impact of their work.

Thus, the silver lining in Meta’s rejection of fact-checkers may be that the company’s commercial imperatives will no longer shape which claims fact-checkers choose to verify. Freed from Meta’s influence, fact-checkers might return their focus to democratic priorities. However, the financial loss will undoubtedly strain these organizations.

The public could also play a pivotal role in sustaining independent fact-checking. According to the Boston University poll, one-third of U.S. adults would donate $1 to fund these initiatives through crowdfunding campaigns. Such efforts could restore some of the financial resources that fact-checking organizations need to thrive.

The question of who should moderate social media content—and how—is a critical challenge of the digital age. As political leaders test the limits of truth, the integrity of public discourse hangs in the balance. Social media platforms must rise to the occasion, for their role in shaping the national conversation has never been more consequential.

Michelle A. Amazeen is Associate Professor of Mass Communication at Boston University, Associate Dean for Research at the College of Communication, and Director of the Communication Research Center.