A month ago, Meta announced it was ending its third-party fact-checking program and said it would instead rely on users to flag misinformation. Today, the company announced Community Notes, which takes an approach similar to X's and will be implemented across Facebook, Instagram, and Threads.
The requirements for becoming a Community Notes writer are that you must be a US citizen over 18 years of age, have a verified phone number, and have an account older than six months. Not all community notes will be published, though, as publication requires a consensus of sorts among other contributors.
“Enough contributors must agree that a community note is helpful before it can be published on a post,” says the company. Meta will soon begin onboarding eligible users (details of which are not yet available) and will start surfacing these crowd-sourced fact-checking notes in the next few months.
Community Notes can be up to 500 characters, must use language that is unbiased and easy to understand, and must support their claims with a URL linking to a reliable source. Once a note has been rated and deemed helpful, it can be publicly attached to the problematic social media post.
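To make that flow concrete, here is a minimal sketch in Python of the rules described above: writer eligibility, note format constraints, and the consensus requirement for publication. The class names, field names, and the 0.66 threshold are illustrative assumptions, not Meta's actual criteria or code, which the company has not published.

```python
from dataclasses import dataclass

# A minimal, hypothetical sketch of the rules Meta has described.
# Field names and the consensus threshold are illustrative assumptions,
# not Meta's actual implementation.

MAX_NOTE_LENGTH = 500        # notes can be up to 500 characters
MIN_ACCOUNT_AGE_DAYS = 180   # account must be older than six months
HELPFUL_CONSENSUS = 0.66     # assumed threshold for "enough contributors agree"


@dataclass
class Contributor:
    is_us_citizen: bool
    age: int
    phone_verified: bool
    account_age_days: int

    def is_eligible(self) -> bool:
        """Eligibility rules Meta has announced for note writers."""
        return (self.is_us_citizen
                and self.age >= 18          # over 18 years of age
                and self.phone_verified
                and self.account_age_days > MIN_ACCOUNT_AGE_DAYS)


@dataclass
class Note:
    text: str
    source_url: str              # must link to a reliable source
    helpful_ratings: int
    total_ratings: int

    def can_publish(self) -> bool:
        """Format constraints plus the consensus requirement."""
        meets_format = (len(self.text) <= MAX_NOTE_LENGTH
                        and self.source_url.startswith("http"))
        has_consensus = (self.total_ratings > 0
                         and self.helpful_ratings / self.total_ratings
                         >= HELPFUL_CONSENSUS)
        return meets_format and has_consensus
```

Notably, the sketch leaves open the hard part the rest of this article is about: nothing in these rules defines what counts as a “reliable source,” and nothing prevents a coordinated group of eligible contributors from supplying the consensus.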
How far can the damage go?
Heading into 2025, Meta has a looser content moderation policy and has cut ties with independent fact-checking organizations, starting in the US. The company is hoping that notes contributed by users will solve its problems. It's a slippery slope.
A misinformation expert, who works with a leading organization that is part of Meta's fact-checking program, told Digital Trends that community notes are largely ineffective. Speaking on condition of anonymity, they said the system has failed on X in some of its biggest markets, such as India.
A member of the International Fact-Checking Network (IFCN) by Poynter, the expert also rejected Meta's claim that fact-checkers were politically biased. IFCN agencies have pointed out that Meta made them adhere to strict non-partisanship and transparency standards, which also meant forgoing advocacy and affiliations with political parties.
Here is the ironic part of the debate. Professional fact-checkers are the top contributors to Community Notes on X, as per a study that analyzed over 1.2 million notes published in 2024. “Users frequently rely on fact-checking organizations when proposing Community Notes,” says the study.
Ever since Meta announced the end of its fact-checking program, a debate has raged over the intent behind it. In January, Meta also ended its diversity, equity, and inclusion (DEI) efforts for hiring and training employees. Did Meta prioritize user interests, or did it simply make the decision based on the political climate?
We might never get an official answer to that. But the damage such failures can do is far-reaching. It can affect anyone from a minority group, be it religious, political, cultural, or social.
“Recent content policy announcements by Meta pose a grave threat to vulnerable communities globally and drastically increase the risk that the company will yet again contribute to mass violence and gross human rights abuses,” said Amnesty International.
Jamie Krenn, an associate professor at Columbia University and Sarah Lawrence College, tells Digital Trends that Meta's platforms risk becoming a breeding ground for false narratives, leaving users vulnerable to manipulation.
“Expertise in fact-checking plays a crucial role in maintaining this balance, ensuring that free speech is supported by a foundation of truth. Without it, we risk confusing opinion with fact—a dangerous precedent for any society,” says Krenn, who holds a doctorate in cognitive studies.
This is not a fix for Meta’s problem
“The deployment of Community Notes to replace fact checking does little to fill the space but big tech has made the calculus that emphasizing free speech was a better move politically with lawmakers and regulators than other reforms,” Jeff Le, an expert on global affairs and public policy and a former Deputy Cabinet Secretary for the State of California, tells Digital Trends.
Fact-checking is a rigorous process, and without expert analysis, misinformation can spiral into a massive problem. The United Nations (UN) warns that facts can often be misrepresented, especially in times of conflict, affecting peace, security, and humanitarian efforts.
When a social media platform panders to the sensitivities of a majority group, fact-checking falls apart if that same user base engages in problematic online behavior. According to Amnesty's report on the Rohingya genocide in Myanmar, the mass spread of hateful messages “substantially increased the risk of outbreak of mass violence.”
Here is the more concerning part. Meta's content moderation failures aside, even if a post gets a community note attached to it, that post might not be removed or have its visibility limited. It will stay up unless it is reported and Meta decides to take action.
The latter seems unlikely to happen. In January this year, the company also loosened its content moderation efforts and diluted its Hateful Conduct policy. According to the Human Rights Campaign, due to these new policies, “LGBTQ+ people will disproportionately suffer harassment and abuse, without any access to recourse.”
A crowd-sourced approach to fact-checking is bound to fail.
Le, who has successfully campaigned against anti-LGBTI and anti-inclusionary legislation in multiple states, also touched on the recent measures taken by social media companies, such as age restrictions and parental controls, to protect vulnerable groups.
“These are likely not the path to meaningful reform or expanded confidence in the platforms going forward,” he tells Digital Trends. Yaron Litwin, CMO at Canopy, a company that makes parental control software, says these changes can lead to a “deterioration in online discourses.”
He adds that we should be looking for more ways to address the issues on social media, and points out that AI-generated content is posing a new challenge, even for fact-checkers. On a side note, earlier this year, Meta started pushing AI-generated characters, and a new AI-driven video editing app is on the way as well.
A fundamentally problematic solution
The idea of Community Notes on social media is not bad, but it's a monumental challenge for a lot of reasons. It might backfire for the very reason Meta ended its fact-checking program, which was handled by actual experts and certified agencies.
“A program intended to inform too often became a tool to censor,” said Meta when it announced plans to end the initiative. The core reason, as per the company, was bias among the fact-checkers.
“Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how,” Meta reasoned. Here's the part that reasoning misses.
Community Notes writers can be biased, too. Even more so than certified fact-checking agencies, which actually sign a pledge to remain transparent and go through annual checks. We just saw that happen over at X, the platform that inspired Meta in the first place.
“Unfortunately, @CommunityNotes is increasingly being gamed by governments & legacy media. Working to fix this,” X chief Elon Musk shared just a day ago. That goes against the whole ethos of having a pool of people from all backgrounds who add factual information to challenge what is claimed in a problematic social media post.
“Unfortunately, @CommunityNotes is increasingly being gamed by governments & legacy media. Working to fix this … It should be utterly obvious that a Zelensky-controlled poll about his OWN approval is not credible!! If Zelensky was actually loved by the people of Ukraine, he… https://t.co/gy0NjtPwiq”
— Elon Musk (@elonmusk) February 20, 2025
Essentially, a social media platform has decided to arbitrarily and non-transparently “fix” a crowdsourced and supposedly independent system, just because it was flagging content that doesn't align with the likes and dislikes of its owner.
Also, if Musk’s “increasingly being gamed” reasoning is taken seriously only because the notes were targeting a certain viewpoint, then he clearly missed the boat. And given Meta owner Mark Zuckerberg’s recent pivot toward free-speech rhetoric, the system might run into similar problems across Facebook and Instagram.
Review bombing, whether legitimate or malicious, has been a reality of the modern internet for years, and it is not going to be any different with Community Notes. Epic Games had to create a whole new user rating system just to handle review bombing.
If a certain body, person, or event is deemed positive by one group, it may attract the ire of another, armed with facts of its own. That is particularly true when geopolitics and culture enter the picture.
How will Meta decide which links shared in community notes count as reliable sources? Will Meta's community be unbiased and transparent in judging the reliability of news sources? Will audience engagement figures play a role? If so, a biased news channel with an audience of millions could easily overtake a smaller but reliable outlet in another market. That's a fundamental problem.
India is Facebook's biggest market, with over 300 million users, and the same goes for Instagram. If contributors in India cite a local news source (with a much bigger audience than a local outlet in the US) for a piece of information, will Meta give precedence to their community note even if it's biased or incorrect?
That becomes even more complex when media bias itself is apparent across the world. Who will decide the trustworthiness of the “news sources” cited in community notes on Facebook and Instagram remains a mystery. It will be interesting to see how Meta balances it all and avoids another social media-driven ruckus that ensnares a few billion users.