Meta’s Changes to Community Standards vs. the Online Safety Act: A Legal Showdown
Meta’s recent revisions to its Hateful Conduct Community Standards, which allow language that many consider offensive or dehumanising, have sparked concern among advocacy groups and legal experts. They come at a critical moment: the UK’s Online Safety Act 2023 places significant duties on online platforms to protect users from harmful content. This article explores the tension between Meta’s free-expression stance and its legal obligations under the Act.
Meta’s Policy Changes: What’s New?
On January 7, 2025, Meta introduced changes to its content moderation policies on Facebook and Instagram. These revisions permit certain types of speech previously classified as hateful, including:
- Referring to LGBTQ+ individuals as "mentally ill" based on their sexual orientation or gender identity.
- Labelling transgender or non-binary individuals as "it".
- Depicting women as property or household objects.
Meta’s justification for these changes centres on accommodating political and religious discourse. The company argues that controversial views should be permitted when they reflect public debate on sensitive topics, provided they do not incite violence or other criminal conduct.
Criticism from Advocacy Groups
Advocacy groups have strongly condemned Meta’s new policies, warning that they could exacerbate online harassment and discrimination against marginalised communities. Organisations like the Molly Rose Foundation have expressed concern that the relaxation of moderation standards could lead to increased self-harm and suicide risks among vulnerable users.
The changes have also prompted fears of normalising harmful stereotypes, with critics pointing to the potential rise of content targeting LGBTQ+ individuals and women. Advocacy groups argue that platforms have a moral and legal responsibility to maintain safe online environments, particularly under the UK’s stringent online safety regulations.
The Online Safety Act 2023: An Overview
The Online Safety Act 2023 is a landmark piece of legislation designed to protect UK internet users from harmful and illegal content. It introduces a "duty of care" for online platforms, requiring them to:
- Identify and mitigate risks: Platforms must assess and address the risks posed by illegal material (e.g., child sexual abuse and terrorism content) and by content that is legal but harmful to children (e.g., cyberbullying and content promoting self-harm).
- Implement safety measures: Platforms are expected to deploy effective safety measures, such as content filtering, user reporting mechanisms, and appeals processes.
- Demonstrate accountability: Platforms must provide transparency reports detailing their content moderation policies, enforcement actions, and user safety measures.
The Act grants enforcement powers to Ofcom, the UK’s communications regulator, which can issue fines of up to £18 million or 10% of a company’s qualifying worldwide revenue, whichever is greater, for non-compliance.
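For a rough sense of how that penalty ceiling scales with company size, the short sketch below simply takes the greater of the two figures; the helper name and the turnover value in the example are purely illustrative.

```python
def max_osa_fine(global_turnover_gbp: float) -> float:
    """Illustrative only: the Online Safety Act caps fines at the greater of
    £18 million or 10% of qualifying worldwide revenue (simplified here as
    global turnover)."""
    FIXED_CAP_GBP = 18_000_000
    return max(FIXED_CAP_GBP, 0.10 * global_turnover_gbp)

# Hypothetical example: a platform with £100 billion in global turnover
print(f"£{max_osa_fine(100e9):,.0f}")  # £10,000,000,000
```

For smaller services the £18 million floor is the binding figure; for the largest platforms the 10% revenue-based cap dominates, which is why the penalty regime is often described as potentially reaching into the billions.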
Legal Implications for Meta
Meta’s changes to its Community Standards may put it on a collision course with its obligations under the Online Safety Act. Critics argue that allowing harmful language, even under the guise of fostering debate, contradicts the platform’s duty of care to users.
Under the Act, platforms must shield users from illegal content and protect children from material that is legal but harmful to them. While Meta’s policies still prohibit direct threats and incitement to violence, the relaxed approach to derogatory language targeting protected groups raises questions about whether the company is fulfilling its legal responsibilities.
Ofcom may investigate whether these policy changes violate the requirement to mitigate risks to users. If found in breach, Meta could face substantial penalties, including fines and potential restrictions on its UK operations.
Free Speech vs. Safety: A Delicate Balance
Meta’s defence is rooted in the principle of free expression. The company contends that political and religious beliefs, however controversial, deserve a platform for public discussion. This aligns with Article 10 of the European Convention on Human Rights (ECHR), which protects freedom of expression while allowing for lawful restrictions to prevent harm.
However, UK law requires free speech to be balanced against the protection of individuals from harm, and the courts have repeatedly stressed the importance of safeguarding vulnerable groups from hate speech and harassment. That balancing exercise is illustrated by Miller v College of Policing, in which the Court of Appeal held that overly broad policies on recording and restricting speech can have a chilling effect on free expression, while confirming that proportionate measures to prevent harm remain lawful.
The key legal question is whether Meta’s policy changes strike an appropriate balance. Critics argue that by prioritising free expression over user safety, Meta risks creating an environment where harmful content flourishes unchecked.
Industry Trends and Broader Implications
Meta’s policy shift reflects a broader trend of tech companies reassessing their role in moderating speech, particularly following the change of administration in the United States. Twitter (now X), under Elon Musk, has embraced a "free speech absolutist" approach, resulting in the reinstatement of controversial accounts and reduced content moderation. Similarly, platforms such as YouTube and TikTok have faced scrutiny over their handling of harmful content.
The debate surrounding content moderation highlights the challenges platforms face in navigating conflicting legal frameworks and societal expectations. While some users demand greater freedom of expression, others call for stricter safeguards to protect marginalised groups from harm.