After Mark Zuckerberg’s January turnaround on “free expression,” what’s in store for the billions of users of Facebook and Instagram?

They could encounter at least 277 million more instances of hate speech and other harmful content each year, according to a new estimate from the Center for Countering Digital Hate (CCDH), a nonprofit that frequently battles social media companies over moderation practices.

Last month, Meta announced sweeping changes to the community guidelines that define what sorts of speech it does and doesn’t allow. The company said it would shift how it enforces many of its rules and ratchet back limits on speech targeting women, LGBTQ+ people, immigrants and other marginalized groups.

The main driver of a potential surge in hate speech, CCDH said, is a key change in how Meta says it will identify some harmful content: relying on users to report it rather than on automated systems to find and remove it.

Until last month’s changes, CCDH said, Meta’s proactive enforcement measures accounted for 97 percent of the hate speech, bullying and harassment removed from its social networks. Scaling back those tools will leave Facebook and Instagram wide open to a great deal of harmful content, it said.

“We’re talking about a tidal wave of hate and disinformation that will be flooding back onto their platforms,” said Imran Ahmed, the chief executive of CCDH, which published the estimates Monday as part of a broader critique of Meta’s new policies.

That could matter for users choosing which online spaces feel safe for families, for governments weighing how to regulate social media and for advertisers looking to avoid placement alongside toxic content.

Meta took issue with CCDH’s estimates.

“CCDH’s methodology is flawed and makes significant assumptions that don’t stand up to scrutiny,” Meta spokesman Ryan Daniels said. “While we will still address content that violates our policies, we are focused on reducing mistakes and over-enforcement of our rules.”

Meta didn’t specify what was flawed about CCDH’s methodology and declined to provide its own projections for how much content will be barred under its new policies.

Callum Hood, CCDH’s head of research, said the estimate that Meta will miss about 277 million instances of harmful content is derived from Meta’s own transparency reports. In the most recent one, which runs through the third quarter of 2024, Meta reported proactively taking down 346 million pieces of content in areas such as hate speech, bullying and harassment. CCDH adjusted its estimate to account for Meta’s claim that up to 20 percent of its proactive enforcement decisions could be mistakes.
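The arithmetic behind that figure appears to be simple. A back-of-the-envelope sketch, assuming CCDH simply discounts Meta’s reported takedown total by the 20 percent error rate Meta itself cites (the group’s exact formula isn’t spelled out here), looks like this:

```python
# Rough reconstruction of CCDH's estimate (assumption: the 346 million
# proactive takedowns are discounted by the 20 percent error rate Meta cites).

proactive_takedowns = 346_000_000   # content Meta reported removing proactively
                                    # for hate speech, bullying and harassment
assumed_error_rate = 0.20           # Meta: up to 20% of those decisions may be mistakes

likely_valid_takedowns = proactive_takedowns * (1 - assumed_error_rate)
print(f"{likely_valid_takedowns:,.0f}")  # 276,800,000 -- roughly the 277 million CCDH cites
```

Under that reading, the roughly 276.8 million removals that were probably legitimate are the posts that would now depend on user reports rather than automated detection.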

Meta has said it plans to curtail proactive enforcement because its automated systems made too many errors – an issue that irked both ends of the political spectrum.

“For less severe policy violations, we’re going to rely on someone reporting an issue before we take any action,” the company said in January.

Meta has said it would continue proactive enforcement on “high-severity violations” including “terrorism, child sexual exploitation, drugs, fraud and scams.” But that list does not include the categories of hate speech, bullying or incitement to violence – and Meta declined to answer questions from The Post about how it would enforce its rules in such cases.

The 277 million additional pieces of harmful content in CCDH’s estimate would come on top of the unknown number of posts Meta’s systems previously missed, and the figure doesn’t account for any new activity going forward, CCDH said.

The challenge of measuring Meta’s self-reported enforcement speaks to a broader problem.

“Without any regulatory or economic incentives to provide actual transparency, we have no true idea of how much dangerous or even illegal content is actually enforced, whether by their automated systems or user reports,” said Yael Eisenstat, who was Facebook’s head of election integrity for political ads in 2018 and is now director of policy at Cybersecurity for Democracy.

It also isn’t clear how effectively Meta will respond to the user reports it has now placed at the core of its moderation efforts.

“Our repeated tests of Meta’s enforcement of its policies in response to user reports indicate that the company fails to act on the majority of violative content reported to them,” CCDH’s Hood said. Past tests by the ADL and PEN America have similarly found Meta has done a poor job responding to user reports of antisemitism and abuse.

Meta has said it would move its enforcement staff from California to Texas as part of its new policies, but it has not outlined plans to invest in additional staff that might be required to handle a rise in user reports.

“Putting the responsibility on those who are suffering the abuse to also do the moderation work for this multibillion dollar company is a gross abdication of the company’s own responsibility to keep people safe,” Eisenstat said.

The LGBTQ+ advocacy group GLAAD said Meta’s shift to user reports is a radical departure from industry best practices. “For years, the company has bragged about how it proactively mitigates enormous quantities of violative content, without user reporting. Meta now says it’s shifting to reactive mode,” said Leanna Garfield, GLAAD’s social media safety program manager. “We can expect that more users will increasingly come into closer contact with harmful content.”