
The actual Ofcom code of practice is here: https://www.ofcom.org.uk/siteassets/resources/documents/onli...

A cycling site with 275k MAU would be in the very lowest category, where compliance means things like 'having a content moderation function to review and assess suspected illegal content'. So, having a report button.



This isn't how laws work. If you give a layperson a large law and tell him that, if he is in violation, he has to pay millions, then it pretty much doesn't matter that there is some way in which, with some effort, he can comply. Most people aren't lawyers, and figuring out how to actually comply with this is incredibly tedious and risky, as they are personally liable for any mistakes they make interpreting those laws.

Companies have legal departments, which exist to figure out answers to questions like that. This is because these questions are extremely tricky and the answers might even change as case law trickles in or rules get revised.

Expecting individuals to interpret complex rulesets under threat of legal liability is a very good way to make sure these people stop what they are doing.


>This isn't how laws work.

The law worked the same way yesterday as it does today. It's not like a website run in Britain operated under some state of anarchy until now and in a few months it suddenly won't. There are already laws a site has to comply with and the risk that someone sues you, but if you were okay with running a site for 20 years, adding a report button isn't drastically going to change the nature of your business.


You don't get it. The law is completely different for people and corporations. A corporation has the resources to figure out how exactly the law applies to them and defend that at trial. An individual does not.

It is plainly insulting to say that "adding a report button" is enough; obviously that is false. And investigating how to comply with this law is time-consuming and comes with immense risk if done improperly. The fact that this law is new means that nobody knows how exactly it has to be interpreted, and that you might very well get it completely wrong. If a website has existed for 20 years with significant traffic it is almost certain that it has complied with the law; what absolutely is not certain is how complying with the law has to be done in the future.

I do not get why you feel the need to defend this. "Just do X" is obviously not how this law is written; it covers a broad range of services in different ways and has different requirements for these categories. You absolutely need legal advice to figure out what to do, especially if it is you who is in trouble if you get it wrong.


> A corporation has the resources to figure out how exactly the law applies to them and defend that at trial.

A very large fraction of corporations are run on minimal margins. Some of them still try to keep up with regulations, and that is often a very large part of their operating costs.


The big tech corporations which are the presumed targets of this will have absolutely zero problems paying their legal teams for the work they have to do to comply.


But in 2025 the law will change. That is the reason the site will shut down the day before the law comes in.


This: OP seems to be throwing the baby out with the bathwater.

I'm surprised they don't already have some form of report/flag button.


I’m not so sure. It’s a layman’s interpretation, but I think any “forum” would be multi-risk.

That means you need to do CSAM scanning if you accept images, CSAM URL scanning if you accept links, and there’s a lot more than that to parse here.


I doubt it. While it's always a bit of a gray area, the example for "medium risk" is a site with 8M monthly users who share images, which doesn't have proactive scanning and has been warned by multiple major organisations that it has been used a few times to share CSAM.

Cases where they assume you should say "medium risk" without evidence of it happening are those where you've got several major risk factors:

> (a) child users; (b) social media services; (c) messaging services; (d) discussion forums and chat rooms; (e) user groups; (f) direct messaging; (g) encrypted messaging.

Also, before someone comes along with a specific subset and says those several things are benign:

> This is intended as an overall guide, but rather than focusing purely on the number of risk factors, you should consider the combined effect of the risk factors to make an overall judgement about the level of risk on your service

And frankly if you have image sharing, groups, direct messaging, encrypted messaging, child users, a decent volume and no automated processes for checking content you probably do have CSAM and grooming on your service or there clearly is a risk of it happening.


The problem is the following: if you don't have basic moderation, your forum will be abused for those various illegal purposes.

Having a modicum of rule enforcement and basic abuse protections on it (let's say: new users can't upload files) goes a long way.
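
To make that concrete, here is a minimal sketch of such a gate in Python; the field names and thresholds are assumptions for illustration, not anything the Act or Ofcom prescribes:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone
    from typing import Optional

    # Assumed thresholds, purely illustrative; tune per community.
    MIN_ACCOUNT_AGE = timedelta(days=14)
    MIN_APPROVED_POSTS = 10

    @dataclass
    class User:
        created_at: datetime       # account registration time (UTC)
        approved_post_count: int   # posts that survived moderation

    def may_upload_files(user: User, now: Optional[datetime] = None) -> bool:
        """Only accounts that are old enough and have a track record of
        approved posts may attach files; everyone else stays text-only."""
        now = now or datetime.now(timezone.utc)
        old_enough = (now - user.created_at) >= MIN_ACCOUNT_AGE
        has_track_record = user.approved_post_count >= MIN_APPROVED_POSTS
        return old_enough and has_track_record

The point is only that a cheap, mechanical rule like this keeps drive-by accounts from uploading files at all.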


That scanning requirement only applies if your site is:

• A "large service" (more than 7 million monthly active UK users) that is at a medium or high risk of image-based CSAM, or

• A service that is at a high risk of image-based CSAM and either has more than 700,000 monthly active UK users or is a file-storage and file-sharing service.


> do CSAM scanning if you accept images, CSAM URL scanning if you accept links

Which really should be happening anyway.

I would strongly prefer that forums I visit not expose me to child pornography.


You cannot get access to the tech without being a certain size; access is restricted to stop people from modifying images until they slip past the filter.


Cloudflare has a free CSAM scanning tool available for everyone:

https://developers.cloudflare.com/cache/reference/csam-scann...


oh great so you centralize even harder and that will fix everything?


So what's your alternative to market forces?

Government regulation - "good" centralisation?


Not the person you are asking but alternatives I can think of are:

- Configure forums using ranks so that new users can post but nobody will see their post until a moderator approves or other members vouch for them (a minimal sketch of the idea appears below). Some forums already have this capability. It's high maintenance though, and shady people will still try to warm up accounts just like they do here at HN.

- Small communities make their sites invite-only and password-protect the web interface. This is also already a thing, but those communities usually stay quite small. Some prefer small communities: quality over quantity, or real friends over the bloated "friends" lists common on big platforms.

- Move to Tor onion sites so that one has more time to respond to a flagged post. Non-Tor sites get abused by people running scripts that upload CSAM, snapshot it (despite being the ones who uploaded it), and automatically submit complaints to registrars and to server and CDN providers so the domains and rented infrastructure get cancelled. This pushes everyone onto big centralized sites, and I would not be surprised if some of those doing it had a vested interest in that outcome.

Not really great options, but they do exist. Some use these options to stay off the radar, being less likely to attract unstable people or lazy agents trying to inflate their numbers. I suppose now we can add to the list government agencies trying to profiteer off this new law. Gamification of the legal system, as if weaponization of it were not bad enough.
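
For the rank/vouching idea in the first bullet, a rough sketch; the rank names and vouch threshold are assumptions, and real forum software would persist this in its database:

    from dataclasses import dataclass, field

    # Assumed policy knobs, purely illustrative.
    VOUCHES_NEEDED = 2
    ESTABLISHED_RANKS = {"member", "senior", "moderator"}

    @dataclass
    class PendingPost:
        author: str
        body: str
        vouches: set = field(default_factory=set)   # usernames of vouchers
        approved: bool = False

    def vouch(post: PendingPost, voucher: str, voucher_rank: str) -> None:
        """Record a vouch from an established member; a moderator vouch,
        or enough member vouches, makes the post publicly visible."""
        if voucher_rank not in ESTABLISHED_RANKS:
            return  # new accounts cannot vouch for each other
        post.vouches.add(voucher)
        if voucher_rank == "moderator" or len(post.vouches) >= VOUCHES_NEEDED:
            post.approved = True

    def is_visible(post: PendingPost, viewer_name: str) -> bool:
        # Unapproved posts are shown only to their author.
        return post.approved or viewer_name == post.author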


> I would strongly prefer that forums I visit not expose me to child pornography.

While almost everybody, including me, shares this preference, maybe it should be something that browsers could do? After all, why put the burden on countless websites if you can implement it in a single piece of software?

This could also make it easier to go after people who are sources of such material, because it wouldn't immediately disappear from the network, often without a trace.


> While almost everybody, including me, shares this preference, maybe it should be something that browsers could do? After all, why put the burden on countless websites if you can implement it in a single piece of software?

If I recall correctly, Apple tried to do that and it (rightly) elicited howls of outrage. What you're asking for is for people's own computers to spy on them on behalf of the authorities. It's like having people install CCTV cameras in their own homes so the police can make sure they're not doing anything illegal. It's literally Big Brother stuff. Maybe it would only be used for sympathetic purposes at first, but once the infrastructure is built, it would be a tempting thing for the authorities to abuse (or just use for goals that are not universally accepted, like banning all pornography).


Apple tried to make that mandatory, taking away control from the user. Which of course is a terrible idea.

I don't want my browser to report me if I encounter illegal materials. At most, I want the browser to anonymously report the website where they are, and even that only if I haven't disabled reporting.

People do install CCTV cameras in their homes, but they are (or at least believe themselves to be) in control of what happens with the footage.


So basically you want your browser to be controlled by the government, and to remove one's ability to use their browser of choice?

All this because a negligible number of web users upload CSAM?


No, I want my browser to be controlled by me. It should just be more capable, so I'm not getting exposed to materials that I don't like getting exposed to, and maybe able to easily report them if I want. Like adblock, but for illegal or undesirable online materials.

> All this because a negligible number of web users upload CSAM?

Still, it's better to fix it in the browser than to keep increasingly policing the entirety of the internet to keep it negligible.


Love to never be able to see photos of my child at the beach because Google Chrome tells me I'm a criminal.


Unless you tell Google Chrome it's ok and you actually want to see photos of naked children in some whitelisted contexts.


OP isn't throwing the baby out with the bathwater, and he explains it very well in his post: the risk of being sued is too great in itself, even if you end up winning the lawsuit.


The general risk of being sued is always there regardless of the various things laws say.

I think there's a pretty decent argument being made here that OP is reading too far into the new rules and letting the worst-case scenario get in the way of something they're passionate about.

I wonder if they consulted with a lawyer before making this decision? That’s what I would be doing.


In this case though, the griefers don't have to file a lawsuit themselves, they just have to post harmful material and file complaints. That is a much lower threshold. It is less effort than the sorts of harassment these people already inflict on moderators, but with potentially much more serious results.


I was thinking the same.

I don’t like this new legislation one bit, but

It's not obvious to me from the post, or from what I know of the legislation, that OP is at meaningfully greater risk of being sued by someone malicious/vindictive or just on a crusade about something than they were prior to the legislation. (Unless, of course, their forums have a consistent problem with significant amounts of harmful content like CSAM, hate speech, etc.)

I am not saying that the risk isn’t there or that this isn’t the prudent course of action, I just don’t feel convinced of it at this point.


Given:

> I do so philanthropically without any profit motive (typically losing money)

the cost (and hassle) of consulting with a lawyer is potentially a lot in relative terms.

That said, I thought that the rule in the UK was generally that the loser pays the winner's costs, so I'd think that would limit the costs of defending truly frivolous suits. The downside risks are possibly still high though.


> That said, I thought that the rule in the UK was generally that the loser pays the winner's costs

That’s generally true… but only happens after those costs have been incurred and probably paid.

There’s no guarantee the party suing will be able to cover their own costs and the defendant’s costs. That leaves OP on the hook for defence costs with the hope that they might get them back after a successful and likely expensive defence.

In that situation, I can understand why OP wouldn’t want to take the risk.


> the loser pays the winner's costs

Winning against the government is difficult - an asymmetric unfair fight. You can't afford to pay the costs to try: financial, risk, opportunity cost, and most importantly YOUR time.


OP having to consult a lawyer IS the problem...


I think from a US perspective being sued is commonplace but in most of the world being sued is very rare.


OP uses they/them pronouns.


From how I understood the post, the forums were never self-sustaining financially and always required a considerable amount of time, so the new legislation was probably just the final straw that broke the camel's back?


Exactly. Adding punitive governance hurdles hinders the small and/or solo.

Those that do this whilst not seeking financial gain are impacted the most.

Regulatory capture. https://en.wikipedia.org/wiki/Regulatory_capture


Yes they do, but you need to do more than that.

They do not have the resources to find out exactly what they need to do so that there is no risk of them being made totally bankrupt.

If that is all - please point to the guidance or law that says just having a report button is sufficient in all cases.


I get the same feeling. The repercussions for bad actors are fines relative to revenue, 10% if I read correctly; given that the OP has stated that they operate at a deficit most of the time, I can't see this being an issue.

Also, if it is well monitored and seems to have a positive community, I don't see a major risk that warrants shutting down. It seems more like shutting down out of frustration with a law that, while silly on its face, doesn't really impact this provider.


>The repercussions for bad actors are fines relative to revenue, 10% if I read correctly; given that the OP has stated that they operate at a deficit most of the time, I can't see this being an issue.

From another commenter:

Platforms failing this duty would be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher.


Part of the issue is that you have to spend time and money to defend an accusation.


I am the OP, and if you read the guidance published yesterday: https://www.ofcom.org.uk/siteassets/resources/documents/onli...

Then you will see that a forum that allows user generated content, and isn't proactively moderated (approval prior to publishing, which would never work for even a small moderately busy forum of 50 people chatting)... will fall under "All Services" and "Multi-Risk Services".

This means I would be required to do all the following:

1. Individual accountable for illegal content safety duties and reporting and complaints duties

2. Written statements of responsibilities

3. Internal monitoring and assurance

4. Tracking evidence of new and increasing illegal harm

5. Code of conduct regarding protection of users from illegal harm

6. Compliance training

7. Having a content moderation function to review and assess suspected illegal content

8. Having a content moderation function that allows for the swift take down of illegal content

9. Setting internal content policies

10. Provision of materials to volunteers

11. (Probably this because of file attachments) Using hash matching to detect and remove CSAM

12. (Probably this, but could implement Google Safe Browsing) Detecting and removing content matching listed CSAM URLs (a minimal sketch of how items 11 and 12 might look follows this list)

...

the list goes on.
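
Purely as an illustration of what items 11 and 12 might look like mechanically, here is a minimal sketch; the hash and URL sets are placeholders for the vetted lists a real deployment would source from an accredited provider, and real matching would use perceptual hashes such as PhotoDNA and a service like Google Safe Browsing, neither of which this sketch actually integrates:

    import hashlib
    import re

    # Placeholders for vetted lists from an accredited provider; a real
    # deployment would not hold raw hash/URL lists like this.
    KNOWN_BAD_HASHES: set = set()
    KNOWN_BAD_URLS: set = set()

    URL_PATTERN = re.compile(r"https?://\S+")

    def file_is_flagged(file_bytes: bytes) -> bool:
        """Exact-match lookup of an upload's hash. Real systems use
        perceptual hashing so trivial re-encodes still match; SHA-256
        here only illustrates the lookup step."""
        return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

    def post_contains_flagged_url(post_text: str) -> bool:
        """Check every URL in a post against the URL list."""
        return any(url.rstrip(".,)") in KNOWN_BAD_URLS
                   for url in URL_PATTERN.findall(post_text))

    def should_quarantine(post_text: str, attachments: list) -> bool:
        """Hold the post for human review instead of publishing it."""
        return post_contains_flagged_url(post_text) or any(
            file_is_flagged(a) for a in attachments)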

It is technical work, extra time, never being fully off-call even when I'm on vacation, the need for extra volunteers, training materials for volunteers, appeals processes for moderation (in addition to the flak one already receives for moderating), somehow removing accounts of proscribed organisations (who has this list, and how would I know if an account is affiliated?), etc, etc.

Bear in mind I am a sole volunteer, and that I have a challenging and very enjoyable day job that is actually my primary focus.

Running the forums is an extra-curricular volunteer thing, it's a thing that I do for the good it does... I don't do it for the "fun" of learning how to become a compliance officer, and to spend my evenings implementing what I know will be technically flawed efforts to scan for CSAM, and then involve time correcting those mistakes.

I really do not think I am throwing the baby out with the bathwater, but I did stay awake last night dwelling on that very question, as the decision wasn't easily taken and I'm not at ease with it, it was a hard choice, but I believe it's the right one for what I can give to it... I've given over 28 years, there's a time to say that it's enough, the chilling effect of this legislation has changed the nature of what I was working on, and I don't accept these new conditions.

The vast majority of the risk can be realised by a single disgruntled user on a VPN from who knows where posting a lot of abuse material when I happen to not be paying attention (travelling for work and focusing on IRL things)... and then the consequences and liability come. This isn't risk I'm in control of or that can be easily mitigated; the effort required is high, and everyone here knows you cannot solve social issues with technical solutions.


Thanks for all your work buro9! I've been an LFGSS user for 15 years. This closure as a result of bureaucratic overreach is a great cultural loss to the world (I'm in Canada). The zany antics and banter of the London biking community provided me, and the contacts I have shared it with, many interesting thoughts, opinions, points of view, and memes, from the unique and authentic London local point of view.

LFGSS is more culturally relevant than the BBC!

Of course governments and regulations will fail to realize what they have till it's gone.

- Pave paradise, put up a parking lot.


Which of these requirements is, in your opinion, unreasonable?


> The vast majority of the risk can be realised by a single disgruntled user on a VPN from who knows where posting a lot of abuse material when I happen to not be paying attention (travelling for work and focusing on IRL things)... and then the consequences and liability come. This isn't risk I'm in control of or that can be easily mitigated; the effort required is high, and everyone here knows you cannot solve social issues with technical solutions.

I bet you weren't the sole moderator of LFGSS. In any web forum I know, there is at least one moderator online every day and many more senior members able to use a report function. I used to be a moderator for a much smaller forum, and we had 4 to 5 moderators at any time, some of whom were online every day or almost every day.

I think a number of features/settings would be interesting for forum software in 2025:

- Deactivation of private messages: people can use instant messaging for that.

- Automatically blur a post when the report button is hit by a member (and by blur I mean replacing the full post server-side with an image, not doing client-side JavaScript); a minimal sketch of this idea follows below.

- Automatically blur posts that have not been seen by a member of the moderation team or by a member of a "senior" membership level within a certain period (6 or 12 hours, for example).

- Disallow new members from reporting and blurring posts; only people who are known good members can.

All this does not remove the bureaucracy of making the assessments/audits of the process mandated by the law, but it should at least make forums moderatable and provide a modicum of security against illegal/CSAM content.
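
A rough sketch of the report-and-blur mechanism described above; the rank names are assumptions, and the "blur" is simply the server substituting a placeholder for the body:

    from dataclasses import dataclass, field

    TRUSTED_RANKS = {"senior", "moderator"}   # assumed rank names

    @dataclass
    class Post:
        body: str
        hidden: bool = False
        reports: set = field(default_factory=set)   # usernames of reporters

    def report_post(post: Post, reporter: str, reporter_rank: str) -> None:
        """A report from a trusted member hides the post server-side right
        away; reports from new members are recorded but change nothing,
        which blunts abuse of the report button itself."""
        post.reports.add(reporter)
        if reporter_rank in TRUSTED_RANKS:
            post.hidden = True

    def render(post: Post) -> str:
        # The server sends a placeholder, not blurred client-side markup.
        return "[post hidden pending moderator review]" if post.hidden else post.body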


OP runs 300 essentially free forums. It's just too much.


That's my feeling too. There will always be people who take laws and legal things overly seriously. For example, WordPress.org added a checkbox to the login to say that pineapple on pizza is delicious, and there are literal posts on Twitter asking "I don't like pineapple on pizza, does this mean I can't contribute?". It doesn't matter if a risk isn't even there; who is going to be able to sue over pineapple on pizza being delicious or not? Yet there will be people who will say "Sorry, I can't log in, I don't like pineapple on pizza".

In this case, it's "I'm shutting down my hobby that I've had for years because I have to add a report button".


> having a content moderation function to review and assess suspected illegal content

That costs money. The average person can't know every law. You have to hire lawyers to adjudicate every report, or otherwise assess whether each reported item is actually illegal. No one is going to do that for free if the penalty for being wrong is being thrown in prison.

A fair system would be to send every report of illegal content to a judge to check if it's illegal or not. If it is, the post is taken down and the prosecution starts.

But that would cost the country an enormous amount of money. So instead the cost is passed to the operators. Which in effect means only the richest or riskiest sites can afford to continue to operate.



[flagged]


What agenda do you think the OP is following, and why do you think they'd do so now after their long (~3 decades!) history of running forums? There have been many other pieces of legislation in that time; why now?

I tried to think of an agenda, but I'm struggling to come up with one. I think OP just doesn't want to be sued over a vague piece of legislation, even if it was a battle they could win (after a long fight). Just like they said right there in the post.

It's kind of rude to imply that this is performative when they gave a pretty reasonable explanation.


> or is pretending to feel for some agenda

Assume good faith.

https://news.ycombinator.com/newsguidelines.html


The actual rules are vulnerable to this attack. https://www.legislation.gov.uk/ukpga/2023/50

If you think the attack won't be attempted, you've never been responsible for an internet forum.


Can you explain what you mean by "this attack"?

They'll submit a complaint to the regulator that you've not done a risk assessment?

I've tried submitting issues to the ICO before but didn't have enough for them to go on and so the other company was never contacted.


This is Ofcom, not the ICO; and the attack is stuff like "flood the site with child abuse material" (not in your risk assessment, why would it be? this is a public forum about cycling and nobody's ever done that before) and then trying to get you prosecuted for not having adequate protections in place.


Their examples of it being right to say you're low risk have a key part about it not having happened before. The medium-risk example had "has had warnings of CSAM being shared before from international organisations and has no way of spotting it happening again".

You don't have to stop everything happening to comply.

So is the scenario you're picturing that someone spams child porn, complains you don't stop it and makes you add a URL filter? Would you do something different if someone was spamming CSAM anyway?


Someone spams CSAM on the site. You report it to CEOP, as every forum mod knows to do (though most have never needed to), and Ofcom let you off with a warning – but you're no longer low-risk, so there's a lot more paperwork.

Now someone copy-pastes the doxx of members of the military from a leaked Pastebin – something you have no practical way of detecting – and it's not your first strike, and there's some public attention and someone decides they need an Example, so now you're getting scary letters about potential criminal charges.

You don't hear anything about those charges, so you assume things are okay. But now someone's claiming to be the parent of one of your users, who hasn't been around for a while. They claim the user was 17, has tragically died, and you don't have a policy about giving parents access to information about this user's activity (but they claim it's a TTRPG forum, which is a children's game, so 35(1)(3)(b) says you should have had a children's access assessment), and they claim they can prove they're the user's parents (they have the password, even!) but haveibeenpwned says the associated email address was in a data breach. Do you provide the information, or not?

Fortunately, you got in contact with the real parents of that child – they know nothing about this website you run, and the person contacting you is someone else. You let them know that photos of their identification documents have been stolen. (You later learn that the user isn't even dead: they tell you about a stalker ex, and you make a note to be extra careful about this user's data.)

One of the domains in your webring has expired, and now redirects to a cryptospam site. That counts as §38 "fraudulent advertising". In response, Ofcom decide (very reasonably) to make webrings illegal.


If only more people actually read the documents in context (same with GDPR); but the tech world has low legal literacy.


Expecting people to read and correctly interpret complex legal documents is absurd. Obviously any lay person is heavily dissuaded by that.

I would never accept personal liability for my interpretation of the GDPR being correct. I would be extremely dumb if I did.




