The Internet Law Centre

Facebook’s updated free speech policy


Meta’s recent decision to relax its content moderation rules on Facebook is stirring up a great deal of debate, and for good reason. It is a significant change for free speech online and has reignited a global conversation about how digital platforms should handle public discussion.

Free speech shouldn’t be limited just because some views make people uncomfortable

Would relaxing social media censorship come at the expense of victims of online harassment?

What is online harm, and how do you judge it?

Is it the job of social media platforms to carry out speech censorship?

Conclusion

Free speech shouldn’t be limited just because some views make people uncomfortable

While some people fear this move will let offensive language and harmful content slip through the cracks, it’s important to take a step back and look at what’s really happening. This isn’t just about what people can and can’t say online—it’s about protecting the core right to express opinions, even when they’re uncomfortable or unpopular. This shift feels like a crucial moment in the ongoing battle for freedom of expression in the digital era, a reminder that open dialogue is often more powerful than censorship. At the heart of Meta’s policy shake-up is a simple but vital idea: free speech shouldn’t be limited just because some views make people uncomfortable.

Democracy thrives on open debate, even when those debates get messy or challenging. Meta seems to recognise that real discussions—especially around politics, society, and religion—aren’t always polite or easy. By allowing space for more controversial opinions, even those that might be offensive, Meta is encouraging an environment where ideas can be debated, challenged, and, ultimately, refined. This move supports the belief that truly free societies should welcome all perspectives and never silence voices simply for straying from the mainstream.

Would relaxing social media censorship come at the expense of victims of online harassment?

Critics of the new Facebook moderation policies say that relaxing these rules could make it easier for harassment to flourish, particularly against marginalised communities. But here is the hard truth: did stricter policies ever genuinely protect those groups? Many who faced severe online abuse, such as targeted harassment, doxxing, or even revenge porn, found little help under the old system. Despite all the promises of tough moderation, in my experience over the past 25 years of working on internet law matters, the platforms often failed those who needed protection the most. Instead, the heavy hand of censorship tended to fall on political opinions and alternative viewpoints rather than on genuine abuse. This selective enforcement left many feeling abandoned by the very platforms that claimed to protect them. When a system claims to prioritise safety but does not follow through, it risks losing the trust of its users entirely.

What is online harm, and how do you judge it?

The real issue isn’t shielding people from discomfort, not least because the term "online harm" can never be defined with an acceptable level of certainty. It’s about making sure online spaces stay open for everyone, even when debates get tough. Free speech should allow people to question dominant narratives, challenge powerful institutions, and explore ideas that might be uncomfortable. Meta’s changes could breathe life into public discourse, encouraging vibrant, inclusive conversations that aren’t held back by arbitrary censorship of content.

When users engage with challenging ideas, they grow more resilient, better prepared to face opposing views head-on—something every healthy democracy needs. After all, a society’s commitment to freedom isn’t measured by how well it shields people from discomfort but by how boldly it defends the right to speak freely, even when that speech is unpopular. Now, let’s talk about the UK’s Online Safety Act 2023. This law aims to protect users from harmful content online, which, on paper, sounds reasonable. But it’s vital to distinguish between actual harm and the discomfort that naturally comes from hearing different opinions. Censorship shouldn’t be a tool to shelter people from ideas they don’t like.

Those who have genuinely faced harassment often found that previous policies failed to protect them effectively. The issue isn’t free speech—it’s the lack of effective enforcement against those who cause real harm. Instead of stifling political speech, we should be targeting the bad actors who use online platforms for abuse. True protection should come from focused action on real threats, not blanket suppression of controversial views.

Is it the job of social media platforms to carry out speech censorship?

Meta’s new approach shifts responsibility back to users rather than corporate moderators with potential political biases. It’s about encouraging people to engage thoughtfully with online content, make their own choices, and decide what’s acceptable for themselves. This move empowers users, fostering a sense of personal responsibility and maturity—qualities that are essential for any functioning democracy.

Meta’s message here is clear: users are capable of distinguishing between harmful content and challenging, thought-provoking opinions. Some people worry that these changes might clash with the UK’s Online Safety Act. Should they? Real freedom of speech must make room for uncomfortable truths and unpopular opinions, even if they challenge societal norms. Protecting the right to speak freely doesn’t mean endorsing harmful behaviour—it means trusting people to think for themselves and respond to challenging views in a responsible way. True democracy thrives on this kind of trust.

Meta’s decision embraces the spirit of open discussion and democratic participation. People who genuinely value freedom of expression should see this not as a call for hate but as a chance for meaningful, unfiltered dialogue. Free speech shouldn’t be feared—it should be celebrated as a tool for learning, growth, and societal progress. A true marketplace of ideas requires room for every voice, even those that are unpopular. Only through open, honest debate can societies address deep divisions and build genuine understanding.

Instead of focusing on limiting speech, the real conversation should be about strengthening protections for those who are truly harmed by online abuse. Previous rules often fell short, creating a false sense of security without delivering real action. Moving forward, we need stronger legal frameworks that hold abusers accountable while ensuring that political and social discourse remains free and open. Let’s focus on empowering legal systems to handle real harm effectively, while leaving room for diverse perspectives and genuine debate.

Conclusion

Meta’s updated policies are a bold statement in support of free speech, challenging the idea that censorship of online content is the best way to create safe online spaces. Real safety comes from strong legal protections against genuine harm, not from silencing voices that challenge the status quo. By promoting open dialogue and defending diverse viewpoints, Meta’s decision signals a new chapter for online conversation—one where freedom, responsibility, and safety can finally coexist in balance. This could set a precedent for other platforms, encouraging a shift towards free expression while ensuring that real threats are swiftly and effectively dealt with by the proper authorities.

 
