Meta is to scrap its longstanding fact-checking programme in favour of a community notes system similar to that on Elon Musk’s social media platform X.

Instead of using news organisations or other third-party groups as it does currently, Meta will rely on users to add notes to posts that might be false or misleading.

The changes will affect the company’s two largest social media platforms, Facebook and Instagram, which have billions of users between them, as well as its newer platform Threads.

“We’re going to get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms,” chief executive Mark Zuckerberg said in a video.

“More specifically, here’s what we’re going to do. First, we’re going to get rid of fact checkers and replace them with community notes similar to X, starting in the US.

“It means that we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”

The policy signals a move towards a more conservative-leaning stance on free speech by Mr Zuckerberg, who met Donald Trump in November after Mr Trump won the US election.

A community notes system is likely to please the president-elect, who has criticised Meta’s fact-checking feature, claiming it penalised conservative voices.

Meta donated 1 million dollars to support Mr Trump’s inauguration in December, and has since appointed several Trump allies to high-ranking positions at the firm.

Nick Clegg, the former UK deputy prime minister, also left the social media giant last week after serving as its president of global affairs.

Mr Clegg has been replaced by Joel Kaplan, a prominent Republican and former senior adviser to George W Bush.

Dana White, the head of the Ultimate Fighting Championship and a close ally of Mr Trump, was also appointed to Meta’s board.

Meta said it plans to roll out the community notes function in the US over the next few months and will “continue to improve it” over the course of the year.

The company will also stop demoting fact-checked posts and make the labels that indicate when something is misleading less “obtrusive”.

In a statement, Mr Kaplan added that Meta’s moderation policies had “gone too far”.

Referring to its incoming system, he said: “We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see.”