Forum with 2.6M posts being deleted due to UK Online Safety Act (hexus.net)
273 points by jonatron 60 days ago | 288 comments



What is the meaning of "illegal content" given in the OSA? What will social media platforms be forced to censor (remove, suppress, ...)? Let's take a look:

Table 1.1: Priority offences by category ( https://www.ofcom.org.uk/siteassets/resources/documents/onli... )

Discussion of offences related to: prostitution, drugs, abuse & insults, suicide, "stirring up of racial/religious hatred", fraud and "foreign interference".

So one imagines a university student discussing, say: earning money as a prostitute. Events/memories related to drug taking. Insulting their coursemates. Ridiculing the iconography of a religion. And, the worst crime of all, "repeating russian propaganda" (eg., the terms of a peace deal) -- which Russians said it, and whether it is true, being -- of course -- questions never asked nor answered.

This free-thinking university student's entire online life seems to have been criminalised in mere discussion by the OSA; there may have been zero actual actions involved (consider, though, that a majority of UK students at most prominent universities have taken class-A drugs).

This seems as draconian, censorious, illiberal, repressive and "moral panic"y as the highs of repressive christian moralism in the mid 20th C.


We grew up with the internet being a fun place where fun things happen and you don't need to take it so seriously. It was the symbol of freedom. Then the internet evolved into a business center, where everything is taken extremely seriously - don't you dare break the etiquette. It's a sad change to witness, but it is what it is.


It was once in a lifetime. Some things are best when not everybody (but especially lawyers) is aware of them.

Maybe the future will be places guarded by real life trust.


I'm no fan of this act but your characterisation is highly misleading.

To pick two examples from the document you linked:

Discussion of being a sex worker would not be covered. The only illegal content relating to sex work would be if you were actively soliciting or pimping. From the document:

* Causing or inciting prostitution for gain offence

* Controlling a prostitute for gain offence

Similarly, discussion of drug use wouldn't be illegal either per se, only using the forum to buy or sell drugs or to actively encourage others to use drugs:

* The unlawful supply, offer to supply, of controlled drugs

* The unlawful supply, or offer to supply, of articles for administering or preparing controlled drugs

* The supply, or offer to supply, of psychoactive substances

* Inciting any offence under the Misuse of Drugs Act 1971

That's very different to criminalising content where you talk about being (or visiting) a prostitute, or mention past or current drug use. Those things would all still be legal content.


Those are indeed against the law. The issue is what these platforms are required to censor on behalf of these other laws.

Recall that we just spent several years in which discussion of major political issues of concern to society was censored across social media platforms. Taking an extremely charitable interpretation of what government demands will be made here isn't merely naïve but empirically false.

And the reason I chose those kinds of illegal activities was to show that these very laws themselves are plausibly oppressive as-is, plausibly lacking in "deep democratic" support (ie., perhaps surviving on very thin majorities) -- and so on.

And yet it is these laws for which mass interactive media will be censored.

This is hardly a list with murder at the top.


> [..] as the highs of repressive christian moralism in the mid 20th C.

What makes you pick the mid-20th century as the high point of repressive christian moralism? That doesn't seem even close to the high point if you look back further in history.


I was specifically thinking of the censorship of mass media which took place in the west from the 20s-90s, which enforced a "family values" kind of christian moralism. Prior to the 20s, mass media wasn't particularly censored (https://en.wikipedia.org/wiki/Pre-Code_Hollywood):

USA : * https://en.wikipedia.org/wiki/Hays_Code * https://en.wikipedia.org/wiki/Federal_Communications_Commiss...

UK : https://en.wikipedia.org/wiki/Lord_Chamberlain

> From 1737 to 1968, the Lord Chamberlain had the power to decide which plays would be granted a licence for performance; this meant that he had the capacity to censor theatre at his pleasure.

UK : https://en.wikipedia.org/wiki/Video_nasty

> To assist local authorities in identifying obscene films, the Director of Public Prosecutions released a list of 72 films the office believed to violate the Obscene Publications Act 1959.


I'm supposing they mean

as the highs of (repressive christian moralism in the mid 20th C.)

and not

as the highs of (repressive christian moralism) in the mid 20th C.


It's a cheapshot at Christianity. That's all it is.


You're right. He should have mentioned the Victorian era (1837–1901) as a clear precedent of repressive moralism. Though there are still echoes of it today, where Christians want to ban any sort of criticism of their morality.


It is odd that stirring up hatred is fine, as long as it does not pertain to religion, race, or sexual orientation.


This is because use of this data could create significant risks to the individual’s fundamental rights and freedoms. For example, the various categories are closely linked with:

- freedom of thought, conscience and religion;

- freedom of expression;

- freedom of assembly and association;

- the right to bodily integrity;

- the right to respect for private and family life; or

- freedom from discrimination.


- freedom of thought

- freedom of expression

- freedom of assembly and association

Then political views should be protected in the same manner?

Further, would this mean that even mentioning the following is forbidden: problems with child abuse in the Catholic church, or problems with LGBT rights in some Islamic groups?

After all, both can be seen as spreading hatred of people based on religion.

What about spreading hatred about fat people? Why is that not included?

This response is only intended to point out problems with such censorship as this bill defines.

Spreading or receiving hateful harassment is wrong regardless of the reason, so it should all be banned. Or else hateful harassment should be protected under freedom of speech.

To make a law that allows hateful harassment in some cases and makes it illegal in others is inherently unsustainable, since it will almost certainly have to keep expanding as other vulnerable groups are identified and thus come to deserve equal protection.


Where does it say discussion of those offences is illegal content? It says "content that amounts to a relevant offence". Frustratingly that is nonsensical: content surely cannot "amount to an offence" in and of itself. Offences have elements, which fall into two categories: actus reus and mens rea. And "content" cannot be either. Perhaps posting some content or possessing some content is the actus reus of an offence but the content itself does not seem to me to sensibly be able to be regarded as "amounting to an offence" any more than a knife "amounts to an offence". A knife might be used in a violent offence or might be possessed as a weapons possession offence but it makes no sense to me to say that the knife "amounts to an offence".

Either way, the point of that document in aggregate seems to be that "illegal content" is content that falls afoul of existing criminal law already: (possession and distribution of) terrorist training material is already illegal and so it is illegal content. But saying that you committed an offence is not, in and of itself, an offence, so saying you took drugs at university doesn't seem to me like it could be illegal content. Encouraging people to do so might be, but it already is.

Maybe I missed the bit where it says discussing things is illegal, so correct me if I am wrong.

Not your lawyer not legal advice etc etc


> This free-thinking university student's entire online life seems to have been criminalised in mere discussion by the OSA

There's nothing illegal about hosting a forum. The problem is that you, as the site operator, are legally required to take down certain kinds of content if and when it appears. Small sites with no money or staff don't have the resources to pay for a full-time moderator. That cost scales with the number of users. And who knows what's in those 2.6M historical posts.

From TFA:

> The act will require a vast amount of work to be done on behalf of the Forums and there is no-one left with the availability to do it

Maybe an LLM can carry some of the load here for free forums like this to keep operating?
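A minimal sketch of what that might look like, purely as an assumption: an automated classifier holds suspect posts for human review rather than publishing or deleting on its own. Here a keyword stub stands in for any real LLM call, and `FLAG_TERMS`, `classify`, and `triage` are all hypothetical names:

```python
# Hypothetical pre-moderation queue for a small forum.
# classify() is a stand-in for a real LLM/moderation-API call;
# here it is a trivial keyword stub so the flow runs offline.

FLAG_TERMS = {"buy drugs", "sell drugs"}  # hypothetical triage list

def classify(post: str) -> bool:
    """Return True if the post should be held for human review."""
    text = post.lower()
    return any(term in text for term in FLAG_TERMS)

def triage(posts):
    """Split posts into (published, held_for_review)."""
    published, held = [], []
    for p in posts:
        # Flagged posts are queued for a human, not auto-deleted.
        (held if classify(p) else published).append(p)
    return published, held

published, held = triage([
    "Great ride on the canal path today",
    "DM me to buy drugs cheap",
])
```

The key design point is that the model only queues posts for a human moderator; given the liability questions, fully automated takedown would be a much harder sell.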


> Maybe an LLM can carry some of the load here for free forums like this to keep operating?

It can't give you any guarantees, and it can't be held liable for those mistakes.


And it misses the point that the law seems to, or could be used to, criminalise the simple discussion of unpleasant (to some) topics.

Without free discourse...well, I think it'd be real bad


> Without free discourse...well, I think it'd be real bad

And yet there are restrictions on speech everywhere in the world.

So not sure we need to clutch the pearls too tightly.


This seems to be what the anti-Section 230 folks are going for. The UK just...went ahead and did it?


All you need to do is have a think about what reasonable steps you can take to protect your users from those risks, and write that down. It's not the end of the world.


1.36 Table 1.2 summarises the safety duties for providers of U2U services in relation to different types of illegal content. The duties are different for priority illegal content and relevant non-priority illegal content. Broadly they include:

a) Duties to take or use proportionate measures relating to the design or operation of the service to prevent individuals from encountering priority illegal content and minimising the length of time that such content is present on the service;

b) Duties to take or use proportionate measures relating to the design or operation of the service to design and operate systems in a way which mitigates and manages the risks identified in the service provider’s risk assessment;

c) A duty to operate the service using proportionate systems and processes designed to swiftly take down (priority or non-priority) illegal content when they become aware of it (the ‘takedown duty’); and

d) A duty to take or use proportionate measures relating to the design and operation of the service to mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence

---

That's a bit more than "have a think"


That is false. The post you replied to virtuously linked directly to the UK government's own overview of this law. Just writing down "reasonable steps" [1] is insufficient - you also have the following duties (quoting from the document):

- Duties to take or use proportionate measures relating to the design or operation of the service to prevent individuals from encountering priority illegal content and minimising the length of time that such content is present on the service;

- Duties to take or use proportionate measures relating to the design or operation of the service to design and operate systems in a way which mitigates and manages the risks identified in the service provider’s risk assessment;

- A duty to operate the service using proportionate systems and processes designed to swiftly take down (priority or non-priority) illegal content when they become aware of it

- A duty to take or use proportionate measures relating to the design and operation of the service to mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence.

- The safety duty also requires providers to include provisions in their terms of service specifying how individuals are to be protected from illegal content, and to apply these provisions consistently.

Even if the language of this law were specific, it requires so many invasive and difficult steps that no hobbyist, or even small company, could reasonably meet them. But it's anything but specific - it's full of vague, subjective language like "reasonable" and "proportionate" that would be ruinous to argue in court for anyone but billion-dollar companies. And even for them, the end result will be that they are forced to accede to whatever demands some government-sanctioned online safety NGO sets, establishing a never-ending treadmill of keeping up with what will become "industry standard" censorship. Because it's either that, or open yourself to huge legal risk: in rejecting "industry standard" and "broadly recognized" censorship guidance to try to uphold some semblance of free discussion, you will be found to have failed to be "reasonable" and "proportionate", to have "disregarded best practices and recognized experts in the field".

But, short of such an obvious breach, the rules regarding what can and can't be said, broadcast, forwarded, analysed are thought to be kept deliberately vague. In this way, everyone is on their toes and the authorities can shut down what they like at any time without having to give a reason. [2]

[1] Good luck arguing over what is "reasonable" in court if the government ever wants to shut you down.

[2] https://www.bbc.com/news/world-asia-china-41523073


> "foreign interference"

That is a very tricky one to manage on an online forum. If an American expresses an opinion about UK policy, that is, in a literal sense, foreign interference. There isn't a technical way to tell propagandists from opinionated people. And the most effective propaganda, by far, is that which uses the truth to make reasonable and persuasive points - if it is possible to make a point that way, then that is how it will be done.

The only way this works is to have a list of banned talking points from a government agency. I'd predict that effective criticism of [insert current government] is discovered to be driven mainly by foreign interference campaigns trying to promote division in the UK.

This runs into the same problem as all disinformation suppression campaigns: governments have no interest in removing the stuff everyone agrees is untrue - what would be the point? The flat-earthers are never going to gain traction, and it doesn't matter if they do. The only topics worth suppressing are things that are plausible and persuasive - the topics most likely to turn out to be true in hindsight.


> The only way this works is to have a list of banned talking points from a government agency.

How so? The "obvious" solution to me, from the perspective of a politician, would be to 1. require online identity verification for signup to any forum hosted in your country, and then 2. use that information to allow only citizens of your country to register.

(You know, like in China.)


That won't stop foreign disinformation. They'll just pay some local to say it.

And China's system doesn't stop disinformation; it promotes disinformation. It is designed to make sure that only China-sponsored disinformation is available. If you want a system for that, it is a solved problem; it just isn't a good idea.


Well, yes, but — again, from the perspective of a politician — if foreign agent provocateurs are forced to rely on locals to spread their messages, then you can just arrest those locals. Unlike the foreigners, the locals are under your jurisdiction. This creates a chilling effect against accepting money from foreigners to repeat those foreigners' messages.

And to be clear, "making sure that only [legislative jurisdiction]-sponsored disinformation is available" is almost always the whole point of laws like this — and what I was assuming the UK was going for here. No state wants to prevent the spread of their own propaganda; they want state propaganda to be the only legal propaganda.

Remember that your phrasing I was responding to here is "the only way this works is[...]". I think what the UK is doing here can work very well indeed to achieve their goals — it's just a question of what those goals are. Which, I think, is where we differ; I may have a far more cynical view of those goals than you.


The British legal system is a common law one like the U.S.'s, I believe, so it would be up to court interpretation.

Foreign interference would probably be interpreted as an organized campaign of interference being launched by a foreign power.

>This runs into the same problem as all disinformation suppression campaigns - governments have no interest in removing the stuff everyone agrees is untrue

At one time everyone agreed anti-vaxx was untrue, and now it's American government policy but still just as untrue.


The legislation follows the general structure of the health and safety act a couple of decades ago. That also caused a big right wing press crisis, and then we all sort of moved on, did a bit more paperwork, and now fewer people die in factory accidents. It's really quite helpful to start practically implementing this stuff rather than philosophising about it.


Yeah it's all a series of no biggies. But one day citizens in your sinking ship of a country will be looking overseas at countries like Afghanistan in longing as they flip ends of the leaderboard with you.


In the linked PDF, where does it say you can't discuss using drugs?


Table 1.1: Priority offences by category


I might have missed something, but I can't see where it says you can't discuss using drugs. The table lists the following:

• The unlawful supply, offer to supply, of controlled drugs

• The unlawful supply, or offer to supply, of articles for administering or preparing controlled drugs

• The supply, or offer to supply, of psychoactive substances

• Inciting any offence under the Misuse of Drugs Act 1971


> This seems as draconian, censorious, illiberal, repressive and "moral panic"y as the highs of repressive christian moralism in the mid 20th C.

Given what has happened to the US as a result of unbridled free broadcast of misinformation and disinformation, we definitely need more "draconian, censorious, illiberal, repressive" rules around the propagation of such media.

Moral panic is EXACTLY what's called for!

You have captains of industry and thought leaders of the governing party throwing fucking nazi salutes, and this is broadcast to the masses! Insanity to defend free speech after the country is circling a drain as a result of said free speech.


[flagged]


The point the person you are responding to is making is that even discussing this is, under his interpretation, "illegal content" - which presumably would make your comment illegal content. I am not sure I agree, but if that is what people are taking away from it, then either this law is very poorly communicated to the public or it is batshit-insane authoritarian nonsense; either way it is a major fuckup.


Well, it's obviously not, otherwise the text of the law would be illegal content too. It's just another bad-faith interpretation of this kind of law that I very often see on American-centred social media.


Related post with a large discussion from someone who said:

"Lfgss shutting down 16th March 2025 (day before Online Safety Act is enforced)

[...] I run just over 300 forums, for a monthly audience of 275k active users. most of this is on Linode instances and Hetzner instances, a couple of the larger fora go via Cloudflare, but the rest just hits the server.

and it's all being shut down [...]"

For the same reasons.

https://news.ycombinator.com/item?id=42433044


LFGSS was a legendary forum, sad to see it go.



awesome!


If it's hosted in the USA what's the problem?


Anyone* would be crazy to run a UK-based or somewhat UK-centric forum today. Whether it be for a hobby, profession, or just social interaction. The government doesn’t perceive these sites as having any value (they don't employ people or generate corporation tax).

[*] Unless you are a multibillion $ company with an army of moderators, compliance people, lawyers.


Well I'm on a forum run by a UK company, hosted in the UK, and we've talked about this, but they're staying online. And, no, they're not a multibillion dollar company.

I don't see our moderators needing to do any more work than they're already doing, and have been doing for years, to be honest.

So we'll see how the dice land.


As long as they don't upset anyone with influence (government, media, etc.), they'll probably be fine. Otherwise, at best they'll be looking at a ruinously expensive legal battle to justify if what they did was "reasonable" or "proportionate" - the vague terms used by the law.

For my friends, everything; for my enemies, the law.


At least they're a UK company, though, so presumably they've got some money to support this. If you're an individual running a hobby forum then you're SOL.


More than just forums - it's basically a failed state now. I knew when I left (I was the last of my school year to do so) that it was going to get bad once Elizabeth died, and that that would be soon, but I never imagined it would get this bad.

The plan for April is to remove the need for police to obtain a warrant to search peoples homes - that bad.

I'd say "there will be blood on the streets", but there already is...

This video pretty much sums up what the UK is now. https://m.youtube.com/watch?v=zzstEpSeuwU


No, the proposal is that there is a power of entry where the police have reasonable grounds to believe stolen property is on the premises and that this is supported by tracking data and that authority to enter is provided and recorded by a police inspector.

This is analogous to s18 PACE post-arrest powers, grafted onto s17 PACE.

The alternative is that we continue to require police to try and get a fast-time warrant while plotted up outside a premises; this is not a quick process, I've done it and it took nearly two hours.

>there will be blood on the streets

Oh, dry up.


The topic here is how they made running public forums a crime.

After making secure communications a crime.

And you think a state like that cares about the formalities? lol..

They're just doing what every other monarchy and dictatorship has done in a desperate bid to hold onto power while the state collapses due to inept leadership.


> The topic here is how they made running public forums a crime.

You brought up warrants, they counterpointed the warrants.

> And you think a state like that cares about the formalities? lol..

The warrant is a formality, isn't it? I'm pretty sure you're arguing against your own point now.


I find it terrifying that you consider this to be legitimate grounds for a search, and a reasonable procedure for obtaining permission to do so. They should get in line and get permission from proper legal authorities, like all other law enforcement.


While I hate how the UK is becoming even more of a police state, that law (or that part of a law) is the least worthy of criticism. It simply codifies one instance of reasonable grounds for a search, so that it does not have to be decided on a case-by-case basis by a judge. I.e. a judge is asked to decide if something is justified grounds for a search - now the law says "this narrow case is justified grounds, you don't have to ask a judge".

Or in other words, the proper legal authorities are parliament and the law itself. Sometimes the law needs interpreting and judgment calls, which is where warrants come in. This law removes the need for interpretation in one narrow and clearly defined case. If judges were already expected to issue warrants on the kind of evidence that this law requires, then codifying that expectation and removing what has become nothing but a bureaucratic delay doesn't reduce liberty.


Are you familiar with the current powers of entry in E&W per s17 & s18 Police and Criminal Evidence Act 1984?


> The plan for April is to remove the need for police to obtain a warrant to search peoples homes - that bad.

This seems to be limited to stolen geo-tagged items: https://www.theguardian.com/uk-news/2025/feb/25/police-new-p...

I would agree that this law is a slippery slope, but at the same time we should not omit important facts.


It's not a slippery slope; it's carte blanche for a police force with a reputation for e.g. beating elderly people to death because they looked at them wrong (most famous being Ian Tomlinson, but it's fairly regular) to not have to hold back simply because they run into a locked door.

And that is before you get into the court system - if you need a quick primer, just look at the treatment of Julian Assange, and that's a "best case" for someone with millions of global supporters.

UK police have targets to hit; they can't hit those targets going after real criminals, so they predominantly target people naive enough to think the police want to help them.

Of course they had to make running public forums a crime.


I'm sorry, are you accusing the UK police of killing more innocent old people than the US police? Because if so that's a funny joke.


I'm here wondering in which part of that statement there's a comparison to the US police.


I'm more depressed that he considers 47 to be old


>I knew when I left (I was the last of my school year to do so) it was going to get bad once Elizabeth died

How small was your school year?! What does Elizabeth (presumably the 2nd) dying have to do with anything?


>What does Elizabeth (presumably the 2nd) dying have to do with anything?

Let's just say her replacement's brother is Andrew, and his best mate was Jimmy Savile. That should tell you all you need to know about her replacement, with less chance of me ending up like David Kelly.

Heads of state do matter, regardless of how much propaganda they push that they only matter in other countries. These laws are not something the labour voters asked for.


I don’t understand this. The monarchy hasn’t had influence over parliament and the government for over 300 years. You may be able to point to attempts at doing so (eg the Black Spider letter), however, parliament is fully sovereign.


The monarchy does have influence [0], but no more than Liz did

[0] https://www.theguardian.com/uk-news/2022/jun/27/queen-secret...


Keir Starmer is (pretty much) to the UK what JD Vance is to the US.

Appointed by the head of state. Meets the head of state regularly to be told what to do (Wednesdays, IIRC).

Think about what it has taken for you to say what you just said despite those facts.

We are also talking about what is still the richest, most powerful family on the planet. You think Elon Musk is rich for owning Tesla and Twitter; these guys still own, for example, England, Wales, Canada and Australia.


To say the British monarchy "owns" Canada and Australia is comical


Comically true.

It's called crown land:

https://en.m.wikipedia.org/wiki/Crown_land


[flagged]


[flagged]


[flagged]


Or, the targets of this law really are:

https://m.youtube.com/watch?v=u2h-_BzLCcc


First of all, the King isn’t Prince Andrew. That guy simply isn’t the head of state and nor will he ever be.

Secondly, the British monarchy have absolutely nothing to do with politics and have remained impartial for nearly 100 years.

The UK government has always been right wing compared to most of the rest of western Europe. It sucks, but it is what it is. But the way you’re talking is as if the UK has suddenly gone to hell when the reality is just that this is just more of the same.

If anything, the biggest footgun the UK has done was leaving the EU, and that was something the dumb British public voted for. We did it to ourselves.


Thanks Elon, I'm sure that civil war you predict will happen any time now.


"it was going to get bad once Elizabeth died"

What do you think she was doing?


I'm sure she had massively strong views on the online safety act and encryption


This comment has got daily mail reader written all over it


The opposite is true. The new law makes it considerably more risky for large companies because the law is specifically designed to hold them to account for conduct on their platforms. The (perceived) risk for small websites is unintended and the requirements are very achievable for small websites. The law is intended for and will be used to eviscerate Facebook etc. for their wrongs. We are far more likely to see Facebook etc. leave the UK market than we are see any small websites suffer.

A small website operator can keep child pornography off their platform with ease. Facebook have a mountain to climb — regardless of their resources.


> A small website operator can keep child pornography off their platform with ease. Facebook have a mountain to climb — regardless of their resources.

Facebook can actually train AI to detect CSAM, and is probably already doing so in cooperation with NCMEC and similar organisations/authorities across the world.

Your average small website? No chance. Obtaining training material actively is seriously illegal everywhere, and keeping material that others upload is just as bad in most jurisdictions.

The big guys get the toys, the small guys have to worry all the goddamn time if some pedos are going to use their forum or whatnot.


No, that is not how it works. Large companies can afford compliance costs. Smaller ones can't.


I believe file uploading services like cloudinary have this capability already. It does have a cost, but it exists.


But you shouldn't need to use file-uploading services! File upload doesn't require additional services; it has been a well-understood part of HTTP for decades. You can do file upload using normal web form submission in your web server/CMS/Rails/Laravel/CGI program without paying a monthly subscription to some service at an exorbitant markup.

Also, those filters are obviously imperfect. Remember the man who got his Google account terminated because he took a photo of his son's rash to send to his doctor? Pedo alert, pedo alert, a child is naked in a photo. My parents must be pedos too, they took a photo of me sitting in the bath when I was a toddler. Call the police.


What are the compliance costs for this law that would apply to a small independent forum?


Have you run a forum, in, say, the last decade? The amount of spam bots constantly posting links to everything from scams to pints to guns is immense - and no, captchas don’t solve it.


You can just read any of the writing by the people operating these fora that are closing.


I have read every post, every article, every piece of guidance. I’m asking for specifics, not hand waving. What are the actual compliance costs?


> I have read every post, every article, every piece of guidance.

Prove it. I’m asking for specifics, not hand waving.


https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...

Last month.

“We’ve heard concerns from some smaller services that the new rules will be too burdensome for them. Some of them believe they don’t have the resources to dedicate to assessing risk on their platforms, and to making sure they have measures in place to help them comply with the rules. As a result, some smaller services feel they might need to shut down completely.

So, we wanted to reassure those smaller services that this is unlikely to be the case.”

“If organisations have carried out a suitable and sufficient risk assessment and determined, with good reason, that the risks they face are low, they will only be expected to have basic but important measures to remove illegal content when they become aware of it. These include:

easy-to-find, understandable terms and conditions; a complaints tool that allows users to report illegal or harmful material when they see it, backed up by a process to deal with those complaints; the ability to review content and take it down quickly if they have reason to believe it is illegal; and a specific individual responsible for compliance, who we can contact if we need to.”

Your turn. Where are these compliance costs?


It's right there in your post.

>they will only be expected to have basic but important measures to remove illegal content when they become aware of it. These include:

>easy-to-find, understandable terms and conditions; a complaints tool that allows users to report illegal or harmful material when they see it, backed up by a process to deal with those complaints; the ability to review content and take it down quickly if they have reason to believe it is illegal; and a specific individual responsible for compliance, who we can contact if we need to.”


All of those things are buttons to click and ship with every piece of forum software from the last decade. No forum can survive without moderation because of spam so these tools and policies will already be in place on every website with user generated content.


Some forums use custom backends, and updating them for an asinine law may not be the maintainer's priority.

Having someone dedicated to contact with this authority is also a burden on hobbyist projects.


"We pinky swear to totes not enforce the law as written [unless and until we decide, with no notice or warning, to do so] up to and including criminal penalties". Not as reassuring as you claim it to be.


Exactly - the liability risk is huge, and relying on them not enforcing the law because they say they are 'unlikely' to on small sites is not a risk any sane person would take.


That's not what they are saying. What they are saying is that the law as written doesn't require the things that many small sites have been saying will be too expensive to comply with. The law as written only requires those things for large sites and sites with elevated risk of certain harms. For most small sites any required changes will just be minor tweaks to things they are already doing.


We don’t need to trust what they say, we just need to engage in a little critical thinking. What’s the benefit for Ofcom in pursuing tiny websites? There’s no political benefit, no financial benefit… the guidance from Ofcom reaffirms the natural conclusion.


There is no political benefit to imposing liability on any online forum operator for content posted by others?

Governments can abuse their power to silence speech they don't like. Governments can use agitators to develop pretext for legal action. Governments can make examples out of small-time defendants to send chilling effects. Governments can have prosecutors who may not be evil, but merely overzealous and harmful.

At the end, it is about a default to freedom of speech and content online (short of objectively illegal content) or a default to self-censorship and closing the gates on open forums.


Sorry, but that's foolish beyond belief. The law allows and probably mandates them to do so. You can pretend that's not what the law says but it clearly does. And it was written with intent and advice, so that's what the writers intended as well.

But if it's so simple, volunteer. Take on the criminal penalties yourself and perform the reviews.


Yeah exactly. And it will end up being a tool used to go against unfavoured groups.

Create a forum for supporters of (unfavourable person)? Sorry, your online complaints process isn't good enough, prison for you.


I'll remind you of two things which a lot of people often forget with hobbies/volunteering, and which may make this argument moot for you: first, just because someone gives time for free doesn't mean that time doesn't cost them, or that it can easily be increased without significantly impacting the giver. Secondly, some parts of a hobby can be work that is required for the fun part of the hobby, and changing the fun:work ratio can kill any motivation for the hobby.

To your point, even in your extract from the link there are compliance costs.

>So, we wanted to reassure those smaller services that this is *unlikely* to be the case

Your source admits there are extra costs that will likely cause some small services to shut down if the costs are too burdensome for them; they are just saying they hope the costs are small enough that it doesn't put most small services in that position.

Even in your quote it explicitly lists extra costs, i.e. the cost of a compliant complaints tool. Obviously the government isn't going to implement it or spend the time moderating reports or abuse of reports. Which means the cost of setting it up and the extra hours moderating falls on the service provider.

"Must have an individual responsible for compliance". So either employ someone to take this risk or take on the risk and responsibility yourself and the associated due diligence costs (lawyers in the UK are only free if you're already losing hours of your life to the court system).

These costs will definitely push some people over the line to not wanting to host such services. Especially when the wording is so wide that you need to moderate out insults in your forum.

Jesus Christ! Your comment would probably be flagged as foreign propaganda to soft-pedal broken UK policies, that is, if the US had such rules. My comment should be flagged because that could be an insulting insinuation, or the expletive at the start of this paragraph could be stirring up religious hatred by being needlessly blasphemous. And a moderator has to read the entire post to get to the non-compliant part.


Many of the provisions of the act apply to all user-to-user services, not just Schedule 1 and Schedule 2 services.

For example, the site must have an "illegal content risk assessment" and a "children’s risk assessment". And the children's risk assessment is a four-dimensional matrix of age groups, types of content, ways of using the service and types of harm. And it's got to be updated before making any "significant" change to any aspect of a service’s design or operation. It also makes it mandatory to have terms of service, and to apply them consistently. The site must have a content reporting procedure, a complaints procedure, and maintain written records.

Now obviously the operator of a bicycling forum might say "eh, let's ignore all that, they probably don't mean us"

But if you read the law and interpret its words literally, a bicycling forum is a user-to-user service, and a public forum is almost certain to be read by children from time to time.


HEXUS stopped publishing in 2021, and the company no longer exists. The forums were kept because they don't take much work to keep online. Now, there's a lot of work to do, like reading hundreds of pages of documents and submitting risk assessments. There's nobody to do that work now, so the idea was it could go into read only mode. The problem with that was, some users may want their data deleted if it becomes read only. Therefore, the only option is to delete it.


Sort of like burning down a library because you can't make it ADA compliant and install a wheelchair ramp.


I feel as though the "sort of" is doing a lot of work there.


Iirc UC Berkeley(?) did exactly that to their YouTube library of recorded lectures due to an accessibility lawsuit.


I still remember some of the Berkeley genetics lectures as some of the best learning materials I could find for my upper-level courses. Later I tried to refer to them and found they were all gone. I wish there was any other option vs. just taking them down.


Current motto of the internet: Grab it while it's hot, enjoy it while it lasts.


I mean, there was the obvious solution of paying the money to have it captioned, which was the original order.

Berkeley instead offered this alternative solution, because they did not want to pay.


The obvious solution was for a school to use their resources not on their students but for the general public?

I guess they spent a tiny bit of money so why not 1000% times more?

I don't mind giving your child some candy on Halloween but I'm not going to pay for braces. Even though he may really need them.


As we all know, there is no middle ground between "paying for video captions" and "financing all money ever for the rest of time", and I think that's unfortunate.


I feel like there is a logical paradox here. You're using a bigger hyperbole to criticize a hyperbole. If this argument is valid then it invalidates itself.


It would be nice if they revisited it in light of AI captioning services.


I think a more accurate comparison would be burning down a library because you can't afford the manpower to check every single book for arbitrarily defined wrongthink.


Except you really don't need to do that. Your risk assessment could explain it's read only, you've done some basic searches, and that's proportionate given the content and the fact you're a small website. Job done.


It's more like shutting down a library because you are unwilling to censor the books


Why don't they just anonymize the users? Discourse does this, and it's apparently GDPR compliant.


gdpr compliance depends a lot on who you ask, and only a court can make the final decision.

Stripping all usernames out of a forum certainly makes it safer, but I don't think anyone can say there still won't be a few pissed off users who wrote things they now regret on there, and can be tracked back to individuals based on context/writing style alone.
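For what it's worth, the kind of anonymization Discourse performs can be sketched as a stable pseudonymization pass over the posts. The field names and function below are illustrative, not Discourse's actual schema, and as noted above this does nothing about identification via context or writing style:

```python
import hashlib

def anonymize_posts(posts, salt):
    """Replace each author with a stable pseudonym like 'user_3f2a1b7c'.

    The same username always maps to the same pseudonym, so threads stay
    readable, but without the salt the mapping can't be reversed by
    rainbow-tabling common usernames.
    """
    out = []
    for post in posts:
        digest = hashlib.sha256((salt + post["author"]).encode()).hexdigest()[:8]
        # copy the post, swapping only the author field
        out.append({**post, "author": f"user_{digest}"})
    return out
```

A real migration would also need to scrub usernames quoted inside post bodies, signatures, and @-mentions, which is where most of the actual work hides.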


The online safety act is different to GDPR


Summary: The UK has some Online Safety Act; any website that lets users interact with other users has to police illegal content on its site and must implement strong age verification checks. The law applies to any site that targets UK citizens or has a substantial number of UK users, where "substantial number" is not defined.

I'm going to guess this forum is UK-based just based on all the blimey's. Also the forum seems to have been locked from new users for some time, so it was already in its sunset era.

The admin could just make it read only except to users who manually reach out somehow to verify their age, but at the same time, what an oppressive law for small UK forums. Maybe that's the point.


IANAL

> any websites that let users interact with other users has to police illegal content on its site and must implement strong age verification checks.

But I believe you only need age verification if pornography is posted. There's also a bunch of caveats about the size of user base - Ofcom have strongly hinted that this is primarily aimed at services with millions of users but haven't (yet) actually clarified whether it applies to / will be policed for, e.g., single-user self-hosted Fediverse instances or small forums.

I don't blame people for not wanting to take the risk. Personally I'm just putting up a page with answers to their self-assessment risk questionnaire for each of my hosted services (I have a surprising number that could technically come under OSA) and hoping that is good enough.


I believe you only need age verification if pornography is posted

But if you let users interact with other users, you're not in control of whether pornographic material is posted, so it's safer to comply beforehand.

I commend you for keeping your site up and hoping for the best. I don't envy your position.


> Ofcom have strongly hinted that this is primarily aimed at services with millions of users but haven't (yet) actually clarified [...]

This has echoes of the Snooper's Charter and Apple's decision to withdraw ADP from all of UK.

It is not enough for regulators to say they don't anticipate enforcing the law against smaller operators. As long as the law is on the books, it can (and will) be applied to a suitable target regardless of their size.

I saw this same bullshit play out in Finland. "No, you are all wrong, we will never apply this to anything outside of this narrow band" -- only to come down with the large hammer less than two years later because the target was politically inconvenient.


I geo-block UK visitors on all of my websites. It's sad but the safest solution.
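A minimal sketch of how such geo-blocking often works, assuming the site sits behind a CDN like Cloudflare that injects the visitor's ISO country code as a `CF-IPCountry` request header (other setups would do a GeoIP database lookup on the client IP instead):

```python
BLOCKED_COUNTRIES = {"GB"}  # ISO 3166-1 alpha-2 code for the United Kingdom

def check_request(headers: dict) -> tuple[int, str]:
    """Return an (HTTP status, body) pair for an incoming request."""
    country = headers.get("CF-IPCountry", "").upper()
    if country in BLOCKED_COUNTRIES:
        # 451 "Unavailable For Legal Reasons" is the semantically apt status
        return 451, "This site is not available in your region."
    return 200, "Welcome!"
```

As the sibling comments note, this only reduces exposure; UK users on VPNs will still get through, and whether blocking even matters legally is a separate question.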


why? if you're located elsewhere you can literally just ignore UK/EU law. they don't have jurisdiction over you; worst-case scenario is probably them ordering ISPs to block your site.


While the actual risk is minimal, countries do have reach beyond their borders.

For example, if you ever leave your home country to visit a third country, that country could arrest you and extradite you to the country that doesn't like you.

Or they could force any financial institution (or even any company) that wants to do business within their territory to stop doing business with you. The EU probably wouldn't do that, because it's difficult and expensive to get the member states agree on sanctions. The US does it regularly. The UK could probably try, but they have less leverage.


What are the chances that someone who runs a tiny, hobby motorcycle forum is going to be extradited from his vacation abroad for breaking a U.K. law? 0.1%? 0.01%? 0.001%? Less? If we only did zero-risk things, nobody would do anything.


That probability rests entirely on the premise that the extradition treaties the UK has signed with other countries would NOT follow UK law, even where the person's only "offence" is political expression. What will likely happen, and what often happens in third-world countries, is that 'politically problematic' people get caught in neighbouring countries and sent back based on friendly geopolitics. There's no law for lawlessness. This is what happens when North Koreans escape and get caught in China or Russia. It's what could happen when you accidentally post something of political consequence on a tiny motorcycle website that isn't usually being watched, but where one angry user could become someone's worst nightmare. Also, gangs of motorcycle enthusiasts haven't been the most inconsequential group of people you could have picked as an example.


As a US person, living in the US, with a US server, I would have absolutely zero reservations about hosting an online forum that may or may not welcome UK users. Just like I would have zero reservations about going online and blaspheming against a religion (illegal in many countries) swearing (illegal in the U.A.E. & probably elsewhere) or insulting a king (illegal in Thailand).


even the UK surely wouldn't risk the horrible PR of extraditing someone from a third nation because a citizen of a completely different country didn't follow their asinine laws. and were the person in question an American citizen, it'd be a massively foolish move for both the UK and whichever nation worked with her.


I like London and want to visit the city again some day.


What if a large number of brits access your websites from a different country? :-/


It's for 7 million active UK users per month. https://www.ofcom.org.uk/siteassets/resources/documents/onli... - definition on page 64.

That's quite sizeable. How many sites can you name have 7 million monthly active UK users? That's over one-in-ten of every man, woman and child in the UK every month using your site.


Yes, the actual draft doesn't really add many requirements to non-"large" services: pretty much having some kind of moderation system, some way of reporting complaints to it, and a named "contact" individual. I note it doesn't require the proactive internal detection of "harmful" content that many people here seem to assume, just acting on what they already have 'reason to believe' is illegal content. Even hash-based CSAM detection/blacklisted URLs isn't required until you're a larger provider or a file-sharing product.

It just seems like an overly formalized way of saying "All forums should have a "report" button that actually goes somewhere", I'd expect that to be already there on pretty much every forum that ever existed. Even 4chan has moderators.
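A bare-bones sketch of that kind of "report button that actually goes somewhere" backend — the names and fields here are illustrative, not from any real forum package:

```python
import datetime

REPORT_QUEUE = []  # in a real forum this would be a database table

def report_post(post_id: int, reporter: str, reason: str) -> dict:
    """Record a user complaint so a moderator can later review and act on it."""
    report = {
        "post_id": post_id,
        "reporter": reporter,
        "reason": reason,
        "received": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "status": "open",  # a moderator later sets "dismissed" or "removed"
    }
    REPORT_QUEUE.append(report)
    return report
```

The hard part isn't this code — every forum engine ships something equivalent — it's the obligation to actually staff the queue and document the process.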


Rather than shut it down, would it be possible to sell the forum to someone in the US for a little bit of money, like $20 or something?

Idea being the US-based owner migrates the DB with posts and user logins to servers hosted on US soil, then if the UK government comes knocking the former owners in the UK can say "Sorry it doesn't belong to us anymore, we sold it, here's the Paypal receipt." (Ideally they'd sell the domain too, but as long as you still have the DB you could always host the forum at a different domain.)

Any forum admins here willing to add another forum to their portfolio?


Or maybe open it up to scraping so someone can archive it--if the content is that useful, surely some hobbyist outside U.K. with a few GB of disk space would be willing to host it.



The US owner would still be obliged to follow the UK rules, apparently. It's unclear how punishment will be enforced exactly.


Just create a block page for UK IPs with VPN ads.


It's awkward.

It's clear this law terribly affects bona fide grassroots online communities. I hope HN doesn't start geoblocking the UK away!

But then online hate and radicalization really is a thing. What do you do about it? Facebook seems overflowing with it, and their moderators can't keep up with the flow, nor can their mental health keep up. So it's real and it's going to surface somewhere.

At some level, I think it's reasonable that online spaces take some responsibility for staying clear of eg hate speech. But I'm not sure how you match that with the fundamental freedom of the Internet.


You don't. "Hate speech" is code for "the government knows better and controls what you say."

Yes, racism exists and people say hateful things.

Hate speech is in the interpretation. The US has it right with the first amendment - you have to be egregiously over the line for speech to be illegal, and in all sorts of cases there are exceptions and it's almost always a case-by-case determination.

Hateful things said by people being hateful is a culture problem, not a government problem. Locking people up because other people are offended by memes or shitposts is draconian, authoritarian, dystopian nonsense and makes a mockery of any claims about democracy or freedom. Europe and the UK seem hell-bent on silencing the people they should be talking with and to. The inevitable eventual blowback will only get worse if stifling, suppressing, and prosecuting is your answer to frustrations and legitimate issues felt deeply but badly articulated.


I see no reason why hate speech should be given the benefit of the doubt. And no, it's not because my government told me so, I have my own opinion, which is that freedom of speech ends where threats of violence appear.

If you don't want it tolerated online, which I don't, you need some kind of legal statement saying so. Like a law that says, you can't do it, and websites can't just shrug their shoulders and say it's not their problem.

I don't like this legislation as it seems to be excessive, but I disagree that the root issue it tries to address is a made-up problem.

EDIT it just struck me that in speech and otherwise, the US has a far higher tolerance for violence - and yes I do mean violence. Free speech is taken much further in the US, almost to the point of inciting violence. Liberal gun laws mean lots of people have them, logically leading to more people being shot. School shootings are so much more common, and it appears there is no widespread consensus to restrict gun ownership as a result.

Maybe that's a core difference. Europeans genuinely value lower violence environments. We believe all reasonable things can be said without it. That doesn't make this legislation good. But at least it makes sense in my head why some people glorify extreme free speech (bit of a tired expression in this age).


> I see no reason why hate speech should be given the benefit of the doubt

Because a lot of speech people don't like gets relabeled as hate speech - which it's not. Or a lot of discussion/debate topics that are sensitive get relabeled as hate.


And that's where the cultural difference lies between the new and the old world...


I agree that threats of violence cross a line, but I think that many countries interpret hate speech to be much broader than this, and there's certainly room for people to disagree, or for one person to say something in a neutral and non-hateful way that another person interprets as a hateful attack.

Some edge cases might include: arguing about interpretations of historical events (eg. Holocaust denial, colonialism, nuclear bombings); arguing about the economic effects of immigration policy; suggesting that one country or another is currently committing genocide; suggesting that one country or another is not currently committing genocide; expressing support for a country or political party that some consider to be committing genocide; arguing that travel restrictions should be imposed on certain countries to contain an epidemic; writing "kill all men" on reddit; publishing a satirical political cartoon depicting the prophet Mohammad; advocating political independence for some geographic region; expressing support for the police in an instance in which they took a state-authorized violent action; expressing support for a vigilante; expressing support for one's country during a violent conflict; expressing sympathy with the opposing side during a conflict; demanding stronger legal penalties for criminals (eg. supporting Singapore's death penalty for drug dealers); publishing a fiction novel in which the villain is a member of a minority group and acts in accordance with a stereotype.

Personally, while I think limits are necessary, the guidelines should be extremely specific and the interpretation extremely narrow to minimize any chilling effect on legitimate expression and discussion. Even where speech can verge into hurtful or offensive territory, I think it's important to allow it in the open, because I think dialogue builds more bridges than it burns. I am concerned that a lot of internet hate-speech legislation goes too far into leaving hatred open to interpretation, which results in conversation spaces being closed down because of the potential liability.


The problem is that policing hate speech creates a police state worse than allowing hate speech to exist. The system you need to create to police the hate speech will result in more violence against people than letting the hate speech exist. To me, your very statement "freedom of speech ends where threats of violence appear" is a form of hate speech. You are hating on my principle of free speech. It actually makes me physically sick to read those words, because I know where they lead.

Generally on the Internet you would make use of existing tools to prevent people from talking to you if you find them hurtful. For example, I could just block you and not deal with you any more. Sometimes people get around those to harass others. That is definitely bad and we already have laws against harassment and ways for law enforcement to find those individuals without creating a full police state on the Internet. Posting your opinion once is not harassment, no matter how much it makes me want to puke. Or as we used to say in a more civilised time, I abhor your speech, but I will fight to the death for your right to speak it.

I don't know where you got your conclusion from - I am European and I don't mind violent speech. In fact I think we generally need a lot more freedom since many countries give their citizens barely more freedom than serfs had. School shootings have been a perennial favourite for your type to parade around so you can rule over a disarmed population, but e.g. Czechia lets you have a gun at home as easily as the USA and they do not have that problem. USA's problem is mostly societal.

Your opinion sounds like it was formed in the ivory tower of university with no connection with reality. Please get more varied life experience and reconsider your position.


> To me, your very statement "freedom of speech ends where threats of violence appear" is a form of hate speech

How on earth did you conclude that? Where is the emotional charge you are implying? What about the other party feelings (of being intimidated)?


> To me, your very statement "freedom of speech ends where threats of violence appear" is a form of hate speech. You are hating on my principle of free speech. It actually makes me physically sick to read those words, because I know where they lead.

Hmm. Well, it's the US that has liberal freedom of speech and freedom of violence. It also has a "free speech absolutist" as a first buddy and that's going great too. To me that is a picture of where this kind of "absolute free speech" leads to, and I'm frankly happy with going in the opposite direction.

> Your opinion sounds like it was formed in the ivory tower of university with no connection with reality. Please get more varied life experience and reconsider your position.

You have literally no idea. I could easily say the same to you - except this is highly impolite. But suit yourself.


The US is full of bans of free speech, from pornography to piracy, from banning books to banning talk about kidnapping Donald Trump.

It's just that Americans think that is the default level of free speech, any extra restrictions are an affront, and any lesser restrictions are irrelevant.


> You are hating on my principle of free speech.

Do you really think you are contributing to the conversation when you say things like this?


that is a genuine contribution to the conversation, it points out the opposing argument doesn't make any sense and that the rules are just applied arbitrarily

instead of applying them arbitrarily (which you won't like when your political party isn't in power), just apply them fairly across the board regardless of who's in charge so we can coexist in a civilized manner without the kind of extreme psychological aggression that takes place in the heart of censorship


> that is a genuine contribution to the conversation, it points out the opposing argument doesn't make any sense and that the rules are just applied arbitrarily

That post only works if you use an absolutely ridiculous definition of hate speech.

It does not show a flaw in the opposing argument. It's not an example of a rule being applied arbitrarily. It's an example of falsely claiming a rule applies when it objectively does not apply. If that's a weakness, it's a weakness in basically every law. It's not a disqualifier.

They could have tried to show an unfairness or a contradiction that actually relates to sloppy definitions of hate speech, but they didn't. They went outside the definition.

> instead of applying them arbitrarily (which you won't like when your political party isn't in power), just apply them fairly across the board regardless of whose in charge

I think you mixed something up here. Applying a law fairly doesn't prevent a different political party from interpreting it differently and ending the fairness. I'm pretty sure your argument is supposed to be that these laws should not exist in the first place, because they're too dangerous in the wrong hands. Not "just apply them fairly".


Nope, it makes sense, and you're the only one here who's lacking an example. Any definition of hate speech is ridiculous because grown adults who try to control the speech of others are all ridiculous.


> Any definition of hate speech is ridiculous because grown adults who try to control the speech of others are all ridiculous.

That's a very different argument from the one I was critiquing.

> you're the only one here whose lacking an example.

It's not about whether I can come up with an example myself, it's that the specific example used by Asooka was a troll argument.


> Liberal gun laws mean lots of people have them, logically leading to more people being shot.

Explain Czechia and Switzerland, then, please.


Switzerland has a strong permit system allowing the government to control who can purchase a firearm. Automatic firearms and concealed carry permits are given sparingly and only for a good reason.

Basically, unlike the U.S., Switzerland doesn't view background checks and permits as a slippery slope to a dictatorship, and implements them effectively.


The US, contrary to your implication, has much stronger controls on automatic firearms than Switzerland does, in the forms of the 1934 National Firearms Act (makes them illegal to manufacture or purchase without a permit and exorbitant tax), the 1968 Gun Control Act (massively regulates gun stores that are allowed to sell), and the 1986 Hughes Amendment to the FOPA (makes new-production machine guns illegal for civilians to buy).

Meanwhile, in Switzerland, I could have a fully automatic SIG 550 in about two weeks with some paperwork. In fact, the harder part is finding a range to shoot the damn thing at!

"Good Reason" carry permits, meanwhile, are looked down on due to their messy history of being Jim Crow laws. Generally, the "good reason" was "being white" and this was used to ensure that the Black community was disarmed when the Klan rolled in.


This is largely incorrect or misleading. The US has had strict laws on automatic weapons since the 1930s. It's almost impossible to own an automatic weapon in the US unless you are very wealthy.

The Swiss... since the 2000s

The US has had strict background checks since the 60s

The Swiss.... 1997? Maybe later, the EU forced them to change their gun laws.

The question stands: more Swiss households have guns than in the US, yet gun violence does not exist.

Why?


The vast majority of firearm deaths in the US come from three sources. First, the largest share (60-80 percent depending on jurisdiction) are suicides. After that, you have young minority men with criminal records killing other young minority men with criminal records, usually involving the drug trade and/or street gangs and using firearms they are largely already banned from possessing. Next after that are homicides which occur as part of domestic or relationship violence.

So for all everyone crows about how likely you are to get shot in America, it's statistically no more likely to happen than the rest of the world unless you're a) suicidally depressed, b) a drug dealer or in a street gang, or c) in a violent relationship.


You don't seem to understand US gun laws . . . especially in the bluest of blue states, which have restrictions which make Switzerland and Czechia look like Somalia in comparison.


> Free speech is taken much further in the US, almost to the point of inciting violence.

Yes, that's where we (here in the U.S.) draw the legal line. But almost inciting violence is not inciting violence. Since the U.S. made free speech the focus of the very first rule in the constitution, an enormous amount of jurisprudence and precedent has emerged around exactly how to make those tricky case by case judgements. Whether one agrees with it or not, it's easily the most evolved, detailed and real-world tested (over many decades) body of free speech law humanity has. Because it's deep, complex and controversial, there's also quite a bit of misunderstanding and misinformation about U.S. free speech law. I see incorrect assertions and assumptions quite often in mainstream media outlets who should know better. Here's a good primer on some of the most common misunderstandings: https://www.theatlantic.com/ideas/archive/2019/08/free-speec...

I've studied and read a lot about free speech and the first amendment as I find it fascinating. It took me quite a while to really understand how and why the U.S. implementation got to where it really is (and not the exaggerations and extrapolations that sometimes get amplified). In terms of free speech current practice and precedent, I now think the U.S. has got it just about right in the tricky balance between ensuring the open exchange of ideas (even unpopular ones) against preventing actually real and serious defamation, libel and incitement. To be sure, the U.S. system is based on the principle that it's not the job of the current government in power to force adults to be nice, reasonable or respectful in either words or tone. Freedom of speech means the freedom to be wrong, stupid, or mean, to be insulting or offensive - even to provoke or inflame should you choose to.

While the government won't send men with guns to force you to shut up, other citizens are also free to exercise their rights to tell you (and everyone else) you're an asshole, that you're wrong and exactly why. They are equally free to be rude, offensive and even hateful against your ideas and you. One of the key ideas behind the U.S. constitution is every fundamental right granted to all citizens comes with matching responsibilities for all citizens. In other words, no right is free - they have actual, personal costs for each citizen. In the case of the first amendment, the responsibilities include tolerating speech that's wrong, boorish, offensive or even hateful. As well as the responsibility to exercise your own good judgement on which speech to ignore, reject and/or counter. The open marketplace of ideas, like all markets, is two-sided.

Another responsibility is accepting the consequences of exercising your free speech unwisely. Your fellow citizens are free to ignore, argue, yell back, openly mock or just laugh at you. Ultimately, the framers of the constitution believed the majority of citizens can figure out for themselves who's an idiot and who's worth listening to. Which ideas are worth considering and which are important to stand against.


What defines hate speech? Who defines hate speech? Does hate speech result from the speech or the actions of those against the speech? Should the speech of protestors have consequences for disturbing the peace? What consequences should the state force onto individuals for speech, or actors affected by speech?

Americans, for lack of a better description, grapple with violence of the state differently than Europeans, but it seems neither is without consequence.


This act itself, I believe, does not reference "hate speech", which, as you seem to point out, is ambiguous; I in turn only use it as shorthand.

For the most part, this act says that content already considered illegal by existing and new laws must be policed by platforms. What is illegal is actually quite well defined, it seems. This article covers it nicely: https://www.theguardian.com/law/article/2024/aug/08/what-is-...

Indeed, the controversy is, it appears, not about what is illegal, but about how the onus of policing this, and other things like the restrictions, is put on platforms. There are no major changes to what content is and isn't illegal! There are some additions, like "revenge porn", which is likewise easy to define and hard to see as a fundamental freedom-of-speech issue.


The practical impact is the self-censorship and suppression of all sorts of speech, because compliance is too onerous and burdensome to maintain. This effectively centralizes control, in as blatant and evil a way as the Great Firewall. Decades-old forums and communities have been destroyed, all for the sake of... what? Things that were already criminal, for which offenders could already be held to account?

Freedom of speech is a binary choice for a society. When you introduce politically motivated discretion and ambiguity, then instead of protecting people, such laws serve only as tools of power and control. With freedom of speech and press, the laws preclude any attempts at control like this. Freedom of expression and press supersede responsibility for the potential of other people doing something bad.

This is why they can't have nice things. It's the equivalent of shutting down businesses because you impose a law that 20 armed guards must attend every building 24x7, just in case some bad guys with guns try to get in.


I think I expressed it clearly. I don't like this legislation. I'm saying that I do understand the underlying tension though, it is real, and hard to legislate, and what can you do. In your first paragraph, you seem to essentially criticize this piece of legislation. I'm not defending it.

I do however reject the notion that it's either absolute freedom of speech or a totalitarian censorship state. Freedom of speech has always had well defined boundaries, well before the Internet - and yes, even in America, just these boundaries are somewhere different to eg. Europe.


But this is a very US-centric view. The rest of the world doesn't tolerate people going around being violent because of the constitution.


How would you feel about receiving daily credible death threats to you and your family? Should that be tolerated too in the name of the first amendment?

Point is, we must draw the line somewhere. It's never "everything goes". Tolerating intolerance always ends up reducing freedom of expression.

Look at the US, the government is doing everything it can to shove trans people back in the closet, their voices are silenced and government websites are rewritten to remove the T in LGBT. By the very same people who abused "the first amendment" to push their hateful rhetoric further and further until it's become basically fine to do nazi salutes on live TV.

"Free speech absolutism" is a mirage, only useful to hateful people who don't even believe in it.


Death threats are not protected by free speech. I know you are trying to make a hyperventilating political point but it’s just not a genuine thing. I am a little surprised at the number of those on HN that are against free speech. I mean, don’t you realize that without it, a government you don’t like could imprison you for “denying basic facts of biology” just as another country does for “denying historical events”? It’s madness.


Yes, death threats are not protected by your free speech, that was my entire point that you completely missed.

Why then disallow them, but allow flurries of racial slurs? Or harassment? Or foreign propaganda? The line is never "anything goes"; we have to draw it somewhere. So why act like anything other than "anything goes" is "literally 1984"?

In Europe, it's "La liberté des uns s'arrête là où commence celle des autres" ("one person's freedom ends where another's begins", Rousseau). Americans should simply stop trying to impose their different conception of freedom, which has just led them into a violent kleptocracy.


No that’s just nonsense. The line is drawn at actual death threats. Anything else does lead to 1984.


People regularly kill themselves over online harassment, isn't that enough? Would acting on that lead to 1984?


[flagged]


I mean, you are also making a hyperventilating point; your sarcasm isn't true either. It may be true that illegal immigrants tend to be from ethnic minorities, but it is fallacious to say that collecting and deporting them, for being in the country illegally, is the same as “rounding up minorities”. That is offensive to the memory of those times, over a hundred years ago, when that actually did happen in this country. I don’t see any political enemies being “rounded up”. The issue at hand is free speech and whether it makes sense to have a literal, without hyperbole, Orwellian approach that so many here seem to relish.


Hate speech is the thing that plays on the radio station that directly causes the mass graves of the Rwandan genocide. The physical call to violence is just the very last step in a long chain of escalating hate speech, but it is no more culpable than the preceding hate speech that created the environment where that physical call to violence is acted on.


During the Rwandan genocide, the radio stations played incitement to violence. While "hate speech" is inclusive of speech that incites violence, the types of hate speech which people have contemporary political disagreements about (including this thread) do not include such incitement.

More importantly, causality doesn't erase culpability. The step that immediately preceded the [Charlie Hebdo shooting](https://en.wikipedia.org/wiki/Charlie_Hebdo_shooting) was publishing a cartoon in a newspaper. Those who create hateful environments may have some culpability, but those that act almost always have greater culpability than those who speak.


The incitement to violence came later, at the climax, after the hatred was distilled via the type of hate speech we are discussing here. This is the etiology of all internal genocides. They all follow the same pattern. Attempting to interdict the genocide just as the calls to violence are happening is too late, because the population is too radicalized by that point.


> But then online hate and radicalization really is a thing.

I'm not trying to be edgy, but genuinely why do you care if someone says or believes something you feel is hateful? Personally I'm not convinced this is even a problem. I'd argue this is something that the government has been radicalising people in the UK to believe is a problem by constantly telling us how bad people hating things is. Hate doesn't cause any real world harm – violence does. And if you're concerned about violence then there's better ways to address that than cracking down on online communities.

In regards to radicalisation, this is a problem imo. I think it's clear there is some link between terrorism and online radicalisation, but again, I'd question how big a problem this is and whether this is even the right way to combat these issues... If you're concerned about things like terrorism or people with sexist views, then presumably you'd be more concerned about the tens of thousands of unvetted people coming into the country from extremist places like Afghanistan every year? It's not like online radicalisation is causing white Brits to commit terror attacks against Brits... This is obviously far more an issue of culture than online radicalisation.

So I guess what I'm asking is what radicalisation are you concerned with exactly and what do you believe the real world consequences of this radicalisation are? Do you believe the best way to stop Islamic terrorism in the UK is to crack down on content on the internet? Do we actually think this will make any difference? I don't really see the logic in it personally even if I do agree that some people do hold strange views these days because of the internet.


Hate and radicalization are products of existential purposelessness. You can’t make them go away by preventing existentially purposeless people from talking to each other.


> You can’t make them go away by preventing existentially purposeless people from talking to each other.

At least you can limit the speed of radicalization. Every village used to have its village loon; he was known, and ignored or ridiculed. But now all the loons talk to each other and constantly reinforce their bullshit, and on top of that they begin to draw in the normies.


No, you can't, but there's also no reason why the law should allow these to stay up. Plenty of people have racist thoughts, and that's not illegal (thoughts in general aren't), but go print a bunch of leaflets inciting racist violence and that is illegal.

I see this as an internet analogy.


https://en.wikipedia.org/wiki/2023_Quran_burnings_in_Sweden

Does burning a religious book "incite violence"? It causes it, for sure. Free expression brings about, in the fanatic, a great desire to oppress the speaker. That's why we have such a freedom in the first place.


It seems, though, that allowing a country which already has problems with “lawful free speech” to tamp down more on free speech would bring issues, no?

Not to mention the oxymoron that “lawful free speech” is.


Yes, incitement is illegal, but you haven't said what kind of speech you actually have in mind. Rather, you've made a tautological assertion that we can't allow incitement because incitement is illegal.


Governmental attempts to reduce "online hate" (however defined, as it is entirely subjective) are just going to make our problems worse.


> online hate and radicalization really is a thing

People have always had opinions. Some people think other people's opinions are poor. Talking online was already covered by the law (eg laws re slander).

Creating the new category of 'hate speech' is more about ensuring legal control of messages on a more open platform (the internet) in a way that wasn't required when newspapers and TV could be managed covertly. It is about ensuring that the existing control structures are able to keep broad control of the messaging.


Is it a thing?

I mean we had the holocaust, Rwandan genocide and the transatlantic slave trade without the internet.

The discovery, by the governing classes, that people are often less-than-moral is just as absurd as it sounds. More malign and insidious is that these governors think it is their job to manage and reform the people -- that people, oppressed in their thinking and association enough -- will be easy to govern.

A riot, from time to time -- a mob -- a bully -- are far less dangerous than a government which thinks it can perfect its people and eliminate these.

It is hard to say that this has ever ended well. It is certainly a very stupid thing in a democracy, when all the people you're censoring will unite, vote you out, and take revenge.


It is a thing for sure. How often it happens, I don't know.

I read a number of stories about school children being cyber-bullied on some kind of semi-closed forum. Some of these ended in suicide. Hell, it used to happen a lot on Facebook in the early days.

I totally understand a desire to make it illegal, past a certain threshold. I can see how you start off legislating with this in mind, then 20 committees later you end up with some kind of death star legislation requiring every online participant to have a public key and court-attested age certificate, renewed annually. Clearly that's nonsense, but I do understand the underlying desire.

Because without it, you have no recourse if you find something like this online. For action to be even available, there has to be a law that says it's illegal.


> Clearly that's nonsense, but I do understand the underlying desire.

I wanna eat hamburgers like Peter Griffin in the stroke episode. But I don't because I'm an adult with logical thinking abilities and I know that there are consequences to my actions even if they are not immediate.

I have less, much less, than zero sympathy for people who advocate for doing things with law and government that the history textbooks are stuffed full of the horrific and nearly inevitable eventual consequences of.

Having benign motives doesn't absolve people for being stupid.

Not that any of this is in disagreement with your points.


Of course hatred, bullying, etc. are real -- what I was referring to is some special amount or abundance of it as caused by free discussion on the internet (rather than, say, revealed by it; or even minimised by it).

We're not running the counter-factual where the internet does not exist, or was censored from the start, and where free expression and discussion has reduced such things.

The Salem witch trials are hardly a rare example of a vicious mob exploiting a moral panic to advance their own material interests -- this is something like the common case. It's hard to imagine running a genocide on social media -- more likely it would be banned as "propaganda" so that a genocide could take place.

We turned against the internet out of disgust at what? Was it the internet itself, or just an unvarnished look at people? And if the latter, are we sure the internet didn't improve most of them, and hasn't prevented more than it's caused?

I see in this moral panic the same old childish desire to see our dark impulses as alien, imposed by a system, and to destroy the system so that we can return to a self-imposed ignorance of what people are really thinking and saying. It's just Victorian moralism and hypocrisy all over again. Polite society is scandalised by The Picture of Dorian Gray, and we'd better throw the author in jail.


I think these views are not necessarily contradictory. You can't wipe out Bad Things by making them illegal online. But I think not proliferating them certainly helps, and for sure I don't see why they should be tolerated online.

IMO there's benefit in making easy Bad Things hard, even if you can't stop them. Like gun ownership in Europe. How you do that while respecting internet freedom - my original question - I don't know. But I disagree with simply stating there is no conflict.


I mean, is it impossible that the commodified web is a sufficient but not necessary condition for atrocities? "But we had the Holocaust without it!" Okay, nobody said the internet was THE cause of ALL atrocities, just that it's actively contributing to today's atrocities. I think your logic is a bit... wrong.


That's not quite my argument. A little more formally:

There's a base rate of human malevolence running in each society. We do not know this base rate, and we can only sample malevolence via mass media (and police reports, etc.). If the mass media (including the internet) were a neutral measurement device, then we could say for sure that what we're seeing is just the background conditions of society leading to, e.g., riots.

Because our measuring device isn't neutral, we have a problem: are the things we see caused by our measuring? Do we cause more malevolence by participating in social media, which also makes us aware of it?

My argument is that we are presently significantly over-estimating the effect of our participation in the internet as a cause. My view is that its effects at reducing bad-stuff are likely more potent than its effects at causing it, and the vast majority of what we see isn't caused by the internet at all.

One argument for this is that baseline malevolence (violence, etc.) was historically very high and seems to be significantly decreasing, and that nothing we see via the internet rises surprisingly above that historical baseline.


Online hate is skyrocketing in large part because billionaires and authoritarian regimes are pumping in millions of dollars to uplift it. Let’s address this issue at its source.


The UK is sensitive about verbal manners; they are 'of utmost importance' (among all the others, of course), to use one of the most popular phrases here. If you suffer some outrageous impact in your life and complain in a bad manner, you may be punished further in some way, socially or even contractually. One example is the T&Cs of NatWest: they will close your account immediately if your conduct is offensive or discriminatory towards the staff. What counts as offensive? That detail is not expanded. It cannot be. It is a bit worrisome even for those paying attention to being nice to others. How to do that exactly? Where is the limit nowadays, or in that situation? People often get offended nowadays, for example by looking at upsetting things, or may feel discriminated against. The bbc.co.uk is overflowing with articles about people who felt very strongly about something unpleasant. Be very careful about your conduct or your bank will kick you out. And we are not even talking about hatefulness or radicalization.


I once saw someone propose a national service where you are required to work in a customer-facing job for a year, and I think about that a lot.


Feels more and more like we're at the end of an era when it comes to the internet.


How so? This is just the UK. While the UK really does want to enforce this globally, they really have no enforcement power against non-UK citizens who do not reside in the UK.

Certainly it's possible (and perhaps likely!) that the EU and US will want to copycat this kind of law, but until that happens, I think your alarm is a bit of an overreaction.


A lot of people who travel internationally occasionally transit through UK jurisdiction, such as a connection at LHR. This potentially places forum operators in personal legal jeopardy. Would the UK authorities really go after some random citizen of another country for this? Probably not, but the risk isn't zero.


Similar laws are being written elsewhere, Section 230 may not last the next few years. It's not just the UK.


Well the attacks on section 230 from the right are about removing censorship not adding censorship so I'm not sure section 230 is a good comparison.


You mean moderation. People are freaking out because they were banned from places for saying awful shit. Removing Section 230 removes the right to moderate, which would kill the internet as we know it. It would become like cable TV.

EFF summary on Section 230: https://www.eff.org/issues/cda230


Moderation wouldn't disappear, it would just become controlled by the user.


Yes, it would. Any site offering moderation capabilities of any kind would be open to legal attack. That is enough to make it infeasible for any smaller site to have moderation, unless they're big tech of course (which is why they're OK with this)


There's no reason moderation couldn't happen on the user's machine.


USA has backdoor laws afaik. Sweden is targeting Signal to force them create a backdoor. And this is only from regular news, I'm not even reading infosec industry updates. All govts are targeting privacy tools and the clock is ticking for them. I'm only hoping that one day these fuckers will be targeted themselves via exploits they have forced on us.


First they came for the British
And I did not speak out
Because I was not British...


By Internet do you mean Western Civilization?


I was gonna say looking at world affairs, it's starting to feel like the end of the Westphalian system.


Any pretense of the Westphalian system in most of Europe ended with the European Union.


Then how come the UK has left the European Union? This is a common misunderstanding, but a misunderstanding nonetheless. Any country in the EU has full sovereignty and chooses to exercise it by combining forces with other countries in order to strengthen their combined influence in the wider world.


>Then how come the UK has left the European Union?

Are you suggesting that if the European Union were post-Westphalian that the UK wouldn't be able to leave? Not sure how that follows. That the EU is a post-Westphalian system is not a new idea.


fair point, I’m just a bit tired of people falsely claiming countries give up sovereignty by joining the EU. Thought you were making a point in that direction, but today we need to stay together and stay strong.


These UK laws might boost Tor usage.. let's hope something good will come from the full censorship political tyranny in Europe.


If enough people switch to Tor, then Tor will get banned. Technical solutions don’t fix bad policies.


If you're in a struggle against a hostile regime, you don't refuse to use the weapons available to you because they're not what will bring you final victory. You use whatever you can.


Don’t refuse of course, but any workaround will be unsustainable, and you will eventually run out of measures unless the issue gets addressed politically.


Tor is pretty hard to block. I think that some sort of mixnet is pretty much the solution to all ISP/government spying and censorship on the web, as mixnets make the law de facto unenforceable.


It's not, really. All a government has to do is block every relay IP published in the Tor consensus, and suddenly only the handful of people who can bootstrap Tor through unlisted bridges remain.
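To make that concrete, here's a minimal sketch (with made-up sample addresses, not the real list) of why publishing relay addresses makes national-level blocking cheap; the real lists are machine-readable, e.g. Tor's bulk exit list:

```python
# Sketch with hypothetical sample data: Tor's relay addresses are public,
# so a censor's blocking logic reduces to a set-membership test.
tor_relays = {"198.51.100.7", "203.0.113.42"}  # stand-ins for the published list

def should_drop(src_ip: str) -> bool:
    # Drop any connection whose endpoint is a listed relay address.
    return src_ip in tor_relays

print(should_drop("198.51.100.7"))  # True: listed relay, connection blocked
print(should_drop("192.0.2.10"))    # False: ordinary host, connection allowed
```

Unlisted bridges exist precisely to escape this kind of list matching, which is why only a minority of users get through once the public relays are blocked.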


I doubt it. I think these laws were made to herd users towards big tech's established platforms that are 'policed' by community guidelines deemed 'appropriate' and where the content is never more than a takedown request away.

Welcome to the new internet.

(and it's funny how everyone's yelling 'fascist' at whatever happens in the US instead)


Two countries can be fascist at the same time.

And it's not like the UK and the US aren't known for exchanging the worst of the worst with each other all the time.


Right, it is called Regulatory Capture, because big actors have the means to comply.


Trust me, while the big social media sites love this, it wasn't their lobbying that made this happen.

The UK government has a long history of meddling in media coverage to achieve certain aims. Up until Covid, legacy media still had control over the narrative and the internet was still considered 'fringe,' so governments could still pull the tried-and-true levers at 1-3 of the big media institutions to shape opinion.

Post-covid, everyone became internet nerds and legacy media in english-speaking countries fully lost control of the narrative.

This regulation is intended to re-centralize online media and bring back those narrative control levers by creating an extremely broad surface area of attack on any individual 'creator' who steps out of line.


Leave this "vile" "unsafe" forum and go talk on ... er ... Twitter.


they were alarmed they lost what used to be tight control of media narratives around e.g. the gaza genocide and are working overtime to concentrate control so it doesn't happen again

let it be known the UK used its carve out territory in Cyprus to process bomb shipments to the IDF in furtherance of a genocide

https://www.aljazeera.com/news/2024/1/15/uk-bases-in-cyprus-...


The UK is not in Europe, which would otherwise impose human rights legal constraints on UK government legislation.


The UK is in Europe; it didn't suddenly break off and float away. It's just not part of the EU; there are a bunch of European countries that aren't in the EU.


The UK is in Europe. What other continent would it be in?

It isn't in the EU, but it is a member of the Council of Europe, which is why it is still a party to the European Convention on Human Rights and the European Court of Human Rights still hears appeals from the UK.

No international agreement can ever or has ever been capable of imposing legal constraints on the British Parliament because it is absolutely sovereign.


The UK is a signatory to the European Convention on Human Rights (hell, it wrote it); despite what Farage and the Mail convinced you of in 2016, this is unrelated to the EU.


You know perfectly well that the UK is in Europe. Not necessary part of the EU, but Europe as a continent, yes.


Y'all know perfectly well this refers to the UK leaving the EU.


I have it on good authority that the majority of Tor nodes are compromised.


I sympathize with the operators of these forums of course -- the UK Online Safety Act is poorly conceived.

HOWEVER.

Deleting their forums? "The act will require a vast amount of work to be done on behalf of the Forums and there is no-one left with the availability to do it." [1]

This is a false dichotomy. Put Cloudflare in front of the site, block UK traffic [2], and you're done. 5 minute job.

[1] https://forums.hexus.net/hexus-news/426608-looks-like-end-he...

[2] https://developers.cloudflare.com/waf/custom-rules/use-cases...
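For what it's worth, the rule in [2] really is tiny. Per Cloudflare's WAF custom rules docs, a country block is a single filter expression ("GB" being the ISO 3166-1 code covering the whole UK) with the rule action set to Block:

```
(ip.src.country eq "GB")
```

Though as others note, geoblocking doesn't obviously help operators who are themselves based in the UK.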


I don't know the detail here, but in many of the discussions I've seen the operators themselves are based in the UK, and that changes the calculus.


Yeah, GP is, to put it charitably, not understanding the situation.

> About Us

> HEXUS.net is the UK’s number one independent technology news and reviews website.


Wow, UK has these crazy laws too? The German hate speech laws made headlines a week or so ago (https://www.cbsnews.com/news/germany-online-hate-speech-pros...). They'll confiscate your electronics if you insult someone and they actively monitor the Internet for prohibited speech.


So sites will geoblock the UK and users will use VPN software. Ugh. More software layers, more waste. Also a problem that is solved by a layer of indirection.


Fear/risk is at work here. Government by clear guidance, not guesswork, is needed. The word "unlikely" is doing too much lifting in the guidance. Ofcom needs to offer hard clarity, with the kind of detail that will satisfy lawyers. The OSA is sound in its aims, a fumbled hot potato in its long, long discussion, a hash of an implementation, and the explication/communication is a dog's dinner. Normally our gov communication is very good. Why can't Ofcom write? I guess we all know any forum with more than a few members likely already has software and some basic policy settings to do this. Unclear guidance is making operators jumpy and afraid.

An opportunity for anyone with a transformer from "UK.GOV Hand-waving" -> forum_settings.json


When lawmaking gets stupid, the stupid turn pro :0


So, what makes the UK Online Safety Act close the forum?


This list of requirements is excessive and nobody wants to read through endless documents and do endless risk assessments. https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...

Children's access assessments - 32 pages

Guidance on highly effective age assurance and other Part 5 duties - 50 pages

Protecting people from illegal harms online - 84 pages

Illegal content Codes of Practice for user-to-user services - 84 pages


What happens with cross nation access? Will international sites start to refuse accounts to brits?


I believe lobste.rs is one site that's going to geoblock the UK as a precautionary measure at least


I thought that was a tech site, are they hosting porn now? I'd have thought they'd already police hate crimes, encouraging suicide, self-harm, and such?? Perhaps they have a special section where they encourage kids to huff glue?


You’re missing the point. The law is so vague and broad that it could be interpreted as covering even far more innocuous content than the few extreme examples you listed here.


The 'if they have nothing to hide' argument? Really?

I look forward to reading your fully compliant risk assessment before interacting with this comment, lest it be judged to contain offensive, inappropriate, or pornographic content.


Because the UK refuses to elaborate on who qualifies under the act, and the only "safe" way to operate a website that might hypothetically be used by someone in the UK is to simply not.

The costs required to operate any website covered by this act (which is effectively all websites) are grossly excessive, and there are either NO exceptions, or the UK has refused to explain who is excepted.


Couldn't they wait for some kind of inquiry from UK Gov and then close the forum reactively if it was an unreasonable financial burden?


> The costs required to operate any website covered by this act (which is effectively all websites) is grossly excessive

That depends what you count as the costs. If you're a small site[0] and go through the risk assessment[1], those are the only costs you have (unless pornography is involved, in which case yes, you'll need the age verification bits.)

[0] ie. you don't have millions of users

[1] Assuming Ofcom aren't being deliberately misleading here.


they don't want to reengineer the forum...


Headline: 2.6M posts

Reality: the forum has negative 358 posts in the last month. The forum has negative ~2k posts over the last 12 months. The forum is so inactive that they’re deleting posts faster than creating them. 8 people have created accounts in the last year.

The forum has been long dead.


Apparently any piece of information older than a year has no value to you?

Thankfully you aren't writing the laws in my country.

Creating a law that makes internet creators want to delete all historical records, for fear of potential prosecution under extremely broad terms, doesn't seem like it's in the interest of the greater good.


The law has absolutely nothing to do with historic content, it has no provisions for or relevance to content published decades ago. Even in the most cautious response to this law, there is no reason to take content offline.


Counterpoint: I read the law and it seems to me that it absolutely does apply to historic content. Ultimately you may be right, but the fact that there isn't a clear answer means nobody without thousands of dollars to throw at lawyers can take that risk.


this is a strange framing because Britain had internet and content laws before these recent changes and people were just fine with the risk that came with running them.

Let's be real, for most dead sites this is an excuse for old admins to close the thing down because they got tired of running it. For sites with fewer than millions of users you basically needed to add a contact form and report button. These places are just deserted, and instead of having a few angry oldheads screaming at you, you can just blame the government


> Britain had internet and content laws before these recent changes and people were just fine with the risk that came with running them

I think it's equally likely that people didn't take them seriously, but each new law has had increasingly dire consequences AND has been increasingly difficult to decipher. So there's that.


I'm not sure why you're comparing total posts to monthly new posts. The tragedy here is that 2.6 million posts, potentially full of great content, is being deleted.

>The forum is so inactive that they’re deleting posts faster than creating them.

They've been in read-only mode, more or less, for a while. Primarily, again, due to the (at the time proposed, now passed) law.

Not to mention, this comment is missing the forest for the trees. This is not the only forum or website to shutter operations in the wake of the UK Online Safety Act.


The forum has had fewer than 100k posts in the last 10 years.

Forums and small websites have been killed off by changing consumer behaviour, the shift to big social media platforms. Using big numbers to suggest that the UK Online Safety Act is responsible for killing off these smaller independent websites is disingenuous.

If you do the same exercise for the other forums, you’ll find they’re all long dead too.


I posted another example in this thread of someone running forums with 275k monthly active users that also decided to shut down. That does not qualify as "long dead".

That's just one other example. I can assure you that it is not just long-dead forums deciding to shut down, despite your preconceived notion.


You’re falling for the big numbers that do not stand up to scrutiny. There’s no such forum shutting down. Are you referring to lfgss? First, it’s not shutting down, second, the user numbers are completely wrong. As is the claim that the platform supports over 300 forums. You’re an order of magnitude off. Go and visit it and look at the activity, it’s clinging to life. 275k active users? Pure fiction.


I'll avoid searching for other examples, as you seem to want to latch onto the example itself rather than the broader message the examples communicate. The fact is that some people are shutting down operations of websites, deleting data, etc. in response to this law.

Just considering that the law is forcing people to think about shutting down operations is a sign that the law is having a chilling effect. Both for existing websites and the potential creation of new ones.

Just because you believe yourself to be the sole arbiter of which websites are valuable and which can be deleted without worry doesn't change the fact that this law is having a negative effect on small websites.

Perhaps with better communication about the law, rather than the hundreds on hundreds of pages of vague guidance, the law could remain as-is and small website operators wouldn't be as concerned. However, that is not the case.


Some people are protesting against this law by threatening to shut down their websites or by deleting content. The founder of lfgss explicitly said they’re against the law on principle.

I think historic content is very valuable which is why I am offended by this absurd response on hacker news where people are conflating the actions of a protest with the consequence of a law.

If someone chooses to protest this law by deleting their website then more power to them but we must be honest about what it is: protest.

People should be considerate about the consequence of the services they release onto the internet. We can debate the specifics of whether certain requirements are reasonable/fair/beneficial but it’s patently absurd to label choices these website owners are making as being caused by this law. The law has zero to do with historic content, there’s not a single risk to anyone who leaves a website online in read only mode as an archive.


> there’s not a single risk to anyone who leaves a website online in read only mode

Your posts make sense if this is true, but I really don't think it's true.

Your argument that they won't be fussed to do anything about archived websites is very much not zero risk.


I've been working with OFCOM on implementing the requirements of this act. They seem reasonable, and what they are looking for is mostly table stakes. That said, I wouldn't want to live in or run a UGC business in the UK right now.


Donate it to the internet archive.


The State of Utopia has published this report on the source of funding of Ofcom, the U.K. statutory regulator responsible for enforcing the Online Safety Act:

https://medium.com/@rviragh/ofcom-and-the-online-safety-act-...

(In short it is funded by the regulated tech companies, which must pay fees to it.)


Could someone please shed any light on why simply geoblocking the UK in its entirety would not be sufficient for an average forum to avoid having to deal with the Act?

A lot of US websites initially geoblocked EU to avoid dealing with GDPR, for example.


In this particular case, the forum is UK-based ("HEXUS is a UK-based technology reporting and reviews website founded by David Ross in 2000")

In other non-UK-based cases, geo-blocking is the answer being used by some people.

Per https://geoblockthe.uk/, they state:

"Luckily OFCOM (the UK Government department responsible for 'enforcement' of these new rules) have confirmed that blocking people in the UK from accessing your website is a perfectly legal and acceptable way to comply with the law.".
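For non-UK operators taking that route, the mechanics are simple: look up the visitor's country before serving content and refuse UK traffic. A minimal sketch (the `country_of` lookup is a stand-in for a real GeoIP database such as MaxMind's, and the function names are illustrative, not any particular framework's API):

```python
# Hypothetical sketch of UK geoblocking at the application layer.
# Real deployments usually do this at the CDN or load balancer instead,
# backed by a proper GeoIP database; the toy table below just keeps the
# example self-contained.

TOY_GEOIP = {
    "81.2.69.160": "GB",  # a UK address from MaxMind's published test data
    "8.8.8.8": "US",
}

def country_of(ip: str) -> str:
    """Stand-in for a real IP-to-country lookup."""
    return TOY_GEOIP.get(ip, "??")

def handle_request(ip: str) -> tuple[int, str]:
    """Return (status, body): HTTP 451 for UK visitors, 200 otherwise."""
    if country_of(ip) == "GB":
        return 451, "Unavailable in the UK due to the Online Safety Act."
    return 200, "Welcome."

status, _ = handle_request("81.2.69.160")
print(status)  # 451
```

Status 451 ("Unavailable For Legal Reasons", RFC 7725) exists for exactly this situation, though a plain 403 works too. Whether UK law treats IP-based blocking as sufficient against VPN users is a separate question, raised elsewhere in this thread.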


Would be great if services like e.g. Wikipedia would do exactly that.

"This website is not available in the UK. Ask your representative about the UK Online Safety Act for more information".


Wikipedia tries to act as a source for information all around the globe, they never block, they only get blocked, blocking the UK would go against their goals


Other comments here have suspected its audience might be primarily UK-based, so geoblocking might not be the best option.

I'm also not familiar with UK law, which may or may not deem that to be a sufficient counter-measure against VPNs. Also, if the forum's operator is based in the UK, this might not be an option.


That doesn’t help a UK-based forum. But otherwise, the law doesn’t limit itself to the UK, so there is concern about what happens if you don’t comply with it and ever intend to visit the UK.


This is a major blow to non-profit communities. It also means that only for-profit operators will find it worthwhile to maintain such platforms, which in itself contradicts the stated purpose of the act.


I would ordinarily be upset about this but in this case it is probably for the best as there is unfortunately a lot of islamophobic content on the site


I wonder if closed forums would also fall under these laws. By closed forum I mean one where you can see posts only after signing in.


Finally the true decentralized internet could start.


What features are lacking from vBulletin that prevent compliance? I suspect some details are missing.


It's not necessarily a technological problem that software has to solve. There's a bunch of processes around reporting and age verification that have to be in place.


Ah, I see. I guess that's where the current direction of the law and I have parted ways. Had they focused on making laws requiring better parental-controls support on all devices, then server operators could add a single RTA header and be done with it. It's not a perfect technical solution, but I believe it places the legal liability on the parents, where it belongs. Parents could then simply consent for their kids to view whatever they want if they feel they are psychologically ready.
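For context, the RTA ("Restricted To Adults") label mentioned above is just a fixed published string that parental-control software can match on, served either as an HTTP header or an HTML meta tag. A sketch (the helper function names are illustrative, not any library's API):

```python
# The published RTA label value, which filtering software matches on.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def rta_headers() -> list[tuple[str, str]]:
    """Response headers an operator would attach to every page."""
    return [("Rating", RTA_LABEL)]

def rta_meta_tag() -> str:
    """Equivalent HTML meta tag for static pages."""
    return f'<meta name="rating" content="{RTA_LABEL}">'

print(rta_meta_tag())
```

That single tag or header is the entire server-side burden under the scheme the commenter describes, versus the risk assessments and age-verification machinery the OSA asks for.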


Moderating under the UK's regressive speech laws might be a problem.


Inability by admins to police all their users the way that law wants them to.


Just host it all elsewhere, I don't understand the problem here. Double freedom rockets to the UK


actual question, why bother? if they are domiciled in the UK, sell it to someone outside it or move the company elsewhere. let the britons kick and scream; the fun thing about the internet is they can't really do anything about it.


The company no longer exists and it doesn't make any money so it isn't worth anything.


someone owns the forum and it's still a big archive of stuff. backlinks pointing there, old info, can run ads so it should cashflow somehow.


Forgive me if I’m being dense…

I just read through the entire HN discussion about lobste.rs and continued down that rabbit hole to other discussions of forum, deletions, and the safety act, etc.

The part I don’t understand is: Why aren’t these operators placing the forum into a corporate or partnership entity, without personal liability, that would be the target of some eventual enforcement?

These very small forums are almost certainly not going to be targeted for enforcement… The issue is simply the risk…

… So why not just incorporate, go on your merry way, and if enforcement goes very differently than we all assume, then you walk away from a corporate entity and continue to vacation in London without fear of arrest.

What am I missing here?


The corporate veil can be pierced. https://en.wikipedia.org/wiki/Piercing_the_corporate_veil. From the Wikipedia page, it sounds like this is most common in the United Kingdom when a corporation "is established to avoid an existing obligation" - which is exactly what you're suggesting. I am not licensed to practice law anywhere, but particularly not in the United Kingdom, where I have never been.

To my understanding working at American non-profits, however, corporations are most helpful as a liability shield when they are clearly distinct entities, with distinct goals, and distinct decision making. In practice, that means having multiple people, writing some sort of charter / statement of purpose, and having quarterly meetings of a board of directors with quorums where notes are written and votes are taken. This can all be a fair bit of work, where before nothing was required.


In addition to the "you're not fooling anyone" bit about piercing the corporate veil that's already well-addressed, you missed provisions of the OSA.

There's a requirement to name a senior manager: https://www.legislation.gov.uk/ukpga/2023/50/section/103

There's personal liability attached to being the named senior manager: https://www.legislation.gov.uk/ukpga/2023/50/section/110

Other nearby sections have additional personal liabilities. Like Sec 109 (5) (a) probably criminalizes your exact suggested response of walking away from Ofcom's inquiries: https://www.legislation.gov.uk/ukpga/2023/50/section/109 It depends on the legal definition of "permits the suppression of... any information required". We'd have to hire a UK lawyer for a confident answer.

All this is a couple of sentences in the law. The law is 250 pages long. Ofcom's guidance was approaching 3,000 pages the last time I counted.

If you want to understand the OSA, I think the most accessible and valid writing available is by Neil Brown: https://onlinesafetyact.co.uk/ There's a lot developing as Ofcom continues to publish new rules and ignore questions, so I suggest reading the 'Replies' tab of his fediverse account.


"There's personal liability attached to being the named senior manager ..."

Thank you - that's a missing piece that helps.

"In addition to the "you're not fooling anyone" bit ..."

I don't suggest a corporate veil as a ruse - it's a tool that has a function and I think this is certainly it.

My sense is that enforcement for small operators is unlikely but the potential liabilities skew the risk dramatically. Pointing the initial enforcement at a corporate entity could change that risk assessment.


For those who do not know, Peter (pushcx) operates Lobste.rs, and even though both he and the website are US-based, it looks like there will be repercussions for them too (basically geoblocking the UK; for reference see https://lobste.rs/s/ukosa1 and https://news.ycombinator.com/item?id=43152178)


Some kinda online safety law is probably needed (download button is just up there if you must :/) but there should be a carve out for small operations. Set a revenue minimum or something.


The Act does in fact scale the obligations according to the size of the community/service.


But size and revenue/budget are different things entirely. Does it scale down to pocket change for large community forums with no commercial backing?


i can feel them coming for porn and as someone who is ugly poor and has no interest in interacting with real humans me so sad


I think you're right. The elites want infinite population growth and we're not doing our part, so they're slowly turning the screws.


Can't they just block the UK?


They are a UK-based forum.


Couldn't it be held in trust in the US or something?

"Just shut it down" is the lazy thing to do. Should take tips from dissidents in other totalitarian shitholes - they just move it abroad to relatively free countries.


Heavily editorialized headline here. Just as accurate: "Forum with 2.6M posts being deleted due to insufficient moderation"


It already has moderators. But they'd need to know the details about the 17 types of illegal harm too. And someone would have to submit a yearly risk assessment and contact information to the regulator. And there's a children's access assessment. And there'd need to be a complaints procedure. Oh and a children's risk assessment. Plus whatever else is contained within the hundreds or thousands of pages of guidance.

