Abdiel Franz Bernales

On January 7, 2025, Meta, the parent company of Facebook, announced it would replace its third-party fact-checking program with a user-driven "community notes" system. This new approach allows users to collaboratively add context or corrections to posts on Facebook, Instagram, and Threads.


This marks a significant shift toward decentralized content moderation, emphasizing free expression and crowd-sourced oversight over professional fact-checking, a move aligned with debates on balancing misinformation control and expression during the Trump presidency.

Mark Zuckerberg decided to end Meta's fact-checking program primarily because he felt it had become too politically biased and destroyed more trust than it created, especially in the United States. “After Trump first got elected in 2016, the legacy media wrote non-stop about how misinformation was a threat to democracy,” Zuckerberg said.

He also claimed that the fact-checkers were seen as biased in what they chose to fact-check: their decisions on which topics or statements to examine, he argued, often appeared influenced by personal or political leanings, which undermined their credibility and the trustworthiness of the entire process.

"We tried, in good faith, to address those concerns s without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they've created, especially in the U.S.,” Zuckerberg added.

This led to a loss of trust among users, and the recent elections were seen as a cultural tipping point toward prioritizing free speech over content policing. As Zuckerberg put it in his announcement video, "We're going to get rid of fact-checkers and replace them with community notes similar to X, starting in the U.S."

Zuckerberg believes replacing fact-checkers with a community-driven system similar to Elon Musk's Community Notes on X (formerly Twitter), starting in the U.S., will help restore free expression on Meta's platforms. "I think Elon has played an incredibly important role in moving the debate and getting people refocused on free expression, and that's been really constructive and productive," said Joel Kaplan, Meta's new global policy chief.


Time for an ultimate shift

Meta's independent fact-checking program, launched in 2016, was designed to empower users by providing them with additional context about viral hoaxes and online content. Meta did this by delegating the responsibility to independent fact-checking organizations: selecting reputable partners that adhered to recognized standards, defining clear guidelines for content assessment, ensuring transparency, and leveraging the partners' neutrality, expertise, and global reach to maintain credibility and effectiveness.

By doing so, Meta aimed to avoid becoming the "arbiters of truth" itself. The independent experts included journalists, researchers, and subject-matter specialists trained in assessing the accuracy of information. These organizations typically adhere to standards set by the International Fact-Checking Network (IFCN) and employ rigorous methodologies to evaluate claims, providing users with reliable and contextualized insights to help them make informed decisions about online content.

The program faced challenges, particularly in the US, where, according to Zuckerberg, expert biases influenced decisions on what to fact-check, leading to overreach into legitimate political speech and debate. Last year, Trump threatened that Meta CEO Mark Zuckerberg could "spend the rest of his life in prison" if he attempted to interfere with the 2024 election.

Since Trump’s electoral victory, Zuckerberg has tried to mend the relationship by donating $1 million (through Meta) to Trump's inaugural fund and promoting longtime conservative Joel Kaplan to become Meta’s new global policy chief. 

Zuckerberg appeared to be steeling himself and Meta ahead of Trump's second term. With Kaplan leading global policy, the company seems poised to take a more lax approach to misinformation and content moderation.

This policy change is one of the first significant decisions under Kaplan's leadership. It follows the model of Community Notes championed by Trump ally Elon Musk at X, in which unpaid users, not third-party experts, police content.

Under Meta's outgoing program, content deemed misleading or false was flagged. Flagged content received intrusive warning labels and its distribution was limited: posts could see reduced visibility, and users were notified if they attempted to share, or had already shared, such content.

This approach reduced the visibility of flagged content, limiting its reach on Meta's platforms. Critics argued that, while intended to combat misinformation, it disproportionately targeted conservative or controversial views, fostering perceptions of censorship and sparking debates over its impact on political discourse and free expression.

Meta will launch its Community Notes system in the U.S., phasing out fact-checking controls such as demoting content and intrusive warnings. Instead, flagged posts will carry subtle labels: brief, unobtrusive indicators that help users identify content requiring more context without overwhelming them, with optional access to additional information so users can decide for themselves whether to click for details based on their interest or need.

This moves Meta from authoritative moderation to a user-driven approach, aiming to reduce censorship concerns and encourage voluntary engagement with corrective information. 
In addition to the move to Community Notes, Meta said it's also getting rid of "a number of restrictions" on topics like immigration and gender, and phasing civic content back into Facebook, Instagram, and Threads.
However, reliance on user participation raises doubts about its effectiveness in curbing misinformation, reflecting a broader focus on free expression and transparency over top-down enforcement.

"While user flagging can be a valuable tool, relying solely on it to combat misinformation can be ineffective. It places a significant burden on users to identify and report harmful content, and the sheer volume of information makes it difficult for platforms to effectively process and act on these reports," indicated by Joan Donovan, Research Director at the Shorenstein Center on Media, Politics, and Public Policy at Harvard University.

Collaborative approach to misinformation

Meta's new Community Notes system, inspired by X, enlists users from diverse perspectives to collaboratively add unbiased context to posts.

This community-driven approach addresses misleading content through collective input from users with a balance of perspectives, rather than enforcing judgments through centralized oversight, fostering inclusivity and shared accountability. Early sign-ups for the program are open on Facebook, Instagram, and Threads, offering users the chance to contribute to a more informed and transparent online space.

On the other hand, the system could face several challenges. Sabrangindia, an Indian news and media platform that focuses on human rights, social justice, communal harmony, and progressive issues, called the change a regressive move that undermines the fight against misinformation, arguing that it prioritizes a veneer of free speech over the pressing need for content accuracy and leaves the platform more vulnerable to manipulation, misinformation, and societal harm.

Past issues, such as the Cambridge Analytica scandal, Facebook’s role in the Myanmar Rohingya crisis, anti-vaccine propaganda during COVID-19, and misinformation surrounding India’s 2024 election, highlight the platform’s vulnerability to misuse. 

Introducing Community Notes could risk exploitation by political groups like the BJP IT cell, known for coordinated propaganda campaigns that potentially distort public discourse, spread falsehoods, and undermine democratic processes and platform credibility.

Without professional fact-checkers, moderation shifts to users, reducing Meta’s accountability and potentially eroding trust. Additionally, subtle labels may fail to prevent the spread of misinformation before users engage with added context.

The transition to a Community Notes system could democratize content moderation by fostering collaborative input, but it also risks amplifying misinformation and bias if not carefully managed, potentially undermining trust in Meta's platforms.

For multiple, high-maintenance revisions

The aim is to reverse Meta's mission creep, which had made its rules increasingly restrictive and prone to over-enforcement. This includes removing restrictions on immigration and gender, topics that are often discussed in politics. Joel Kaplan, Meta's global policy chief, announced these changes.

Mark Zuckerberg, CEO of Meta, stated, "It's not right that things can be said on TV or the floor of Congress, but not on our platforms." He also mentioned that the policy changes "may take a few weeks to be fully implemented." 

Meta is also changing how it enforces its policies in order to reduce the mistakes that lead to unnecessary censorship. Zuckerberg made this point during his announcement of Meta's updated content moderation strategy, which aims to strike a balance between minimizing errors and protecting free expression.

In this announcement, Zuckerberg highlighted a shift in automated systems toward focusing primarily on serious violations such as terrorism and scams, while less severe cases would rely on user reports to determine appropriate action.

Additionally, Meta is removing most content demotions and requiring greater confidence in decisions before taking down content. As part of these changes, trust and safety teams will move out of California to Texas and other U.S. states.

People can appeal enforcement decisions, but the process can be slow and doesn't always lead to the right outcome. Meta has added extra staff, introduced multiple reviewers, and tested facial recognition technology, and it uses large AI language models for a second opinion before taking action.

This approach introduces risks like slower appeals, biased decisions, and inconsistent enforcement. Relocating trust and safety teams and using AI aim to improve efficiency, but they come with their own limitations and challenges.

Gateway for fake news

Fact-checking and verification are a critical due-diligence process that responsible journalists and newsrooms go through each time they publish stories to protect the welfare of audiences. It is a methodology that ensures citizens are equipped with the accurate information they need to live their lives better and make informed decisions. Allowing manipulative and harmful content to flourish and gain eyeballs on platforms under the guise of "free speech" is opportunistic and puts people's health, well-being, and safety at risk.

"Journalists have a set of standards and ethics…. What Facebook is going to do is get rid of that and then allow lies, anger, fear and hate to infect every single person on the platform," Nobel Peace Prize laureate and Rappler CEO Maria Ressa said in an interview with French news agency Agence France-Presse.

For instance, disinformation has greatly affected at least two presidential elections in the Philippines: in 2016, which saw the rise of strongman Rodrigo Duterte, and 2022, when lies about the Marcos family and legacy of the late dictator Ferdinand E. Marcos were used to bolster the candidacy of Ferdinand Marcos Jr. Many Filipinos believed enough in the disinformation about the distribution of gold in the event of a Marcos electoral victory that they lined up outside the central bank to claim it.

While Zuckerberg has only announced an end to fact-checking partnerships in the US, fact-checking partners in different parts of the world face uncertainty in their dealings with Meta. 

Leaving the online space a free-for-all for lies and disinformation is dangerous for humanity. Fact-checking initiatives need to be strengthened, not scrapped.

Fact-checkers were never given any ability to take down posts on Meta platforms; that power has always rested with Meta's moderators. Instead, fact-checkers believe in the power of more information and more context, giving people on social media tools to protect themselves against lies.

Without offering any evidence, Zuckerberg has lent his voice to a narrative that vilifies fact-checkers. This is a major setback for fact-checkers around the world and for the communities that support them and rely on them for more accurate information.