
Yep super simple. You just have to make individual value judgements every day on thousands of pieces of content for SEVENTEEN highly specific priority areas. Then keep detailed records on each value judgement such that it can hold up to legal scrutiny from an activist court official. Easy peasy.


No, not at all. You need to consider your service's risks against those seventeen categories once, and then review your assessment at least every year.

From the linked document above: "You need to keep a record of each illegal content risk assessment you carry out", "service providers may fully review their risk assessment (for example, as a matter of course every year)"

And links to a guidance document on reviewing the risk assessment[1] which says: "As a minimum, we consider that service providers should undertake a compliance review once a year".

[1] https://www.ofcom.org.uk/siteassets/resources/documents/onli...


> You just have to make individual value judgements every day on thousands of pieces of content

That's simply not true.


If you read that guidance, it wants you to have a moderation policy covering the 17 specific priority areas. You need to be able to demonstrate that you have thought about it: a paper trail that says you have a policy and that it is an actual policy. You _could_ be issued with an "information notice", which you have to comply with. Now, you could get one of those already under RIPA, as a communications provider.

This is similar to running a cricket club or a scout club.

When running a scout association, each lesson could technically require an individual risk assessment for every piece of equipment and every activity. The hall needs to be safe, and you need to be able to prove that it's safe. Add to that GDPR, safeguarding, background checks, and money-laundering rules.

> hold up to legal scrutiny from an activist court official

It's not the USA: activist court officials require a functioning court system. Plus, common law has the concept of reasonableness. A moderated forum will have a much higher standard of moderation than Facebook/Twitter/TikTok.


Any competent forum operator is already doing all of this (and more) just without the government-imposed framework. Would the OP allow CSAM to be posted on their website? No. Would the OP contact the authorities if they caught someone distributing CSAM on their website? Yes. Forum administrators are famous (to the point of being a meme) for their love of rules and policies and procedures.


    You just have to make individual value judgements every day on thousands of pieces of content for SEVENTEEN highly specific priority areas.

     Then keep detailed records on each value judgement such that it can hold up to legal scrutiny from an activist court official.
> Any competent forum operator is already doing all of this

What is your evidence that the record keeping described by the parent is routine among competent forum operators?


The record keeping requirements described by the parent are completely wrong: https://news.ycombinator.com/item?id=42436626


A risk assessment is not the same as a record of each decision.

https://russ.garrett.co.uk/2024/12/17/online-safety-act-guid... has a more comprehensive translation into more normal English.

You will need to assess the risk of people seeing something from one of those categories (for speciality forums, mostly low) and think about algorithms showing it to users (again, for forums that's pretty simple). Then you need a mechanism to allow people to report offending content.

You also need to take proportionate steps to stop people posting that stuff in the first place (pretty much the same as spam controls, and then banning offenders).

The perhaps harder part is allowing people to complain about takedowns, but adding a subforum for that is almost certainly proportionate[1].

[1] untested law, so not a guarantee


1) "record keeping requirements described by the parent are completely wrong:"

2) "Any competent forum operator is already doing all of this [this = record keeping requirements described by the parent]".

These two assertions seem to conflict (unless good forum OPs are doing wrong record keeping). Are you willing to take another stab at it? What does good forum op record keeping look like?


Me saying they don't need to do what pembrook claims, and aimazon saying they already do it, are not conflicting assertions. I didn't assert that competent forum operators are doing everything the new law requires. If you're asking me to "take a stab" at convincing you that forum operators are doing the hyperbolic FUD that pembrook posted, I won't. Taking a stab at convincing you that they are already doing some large subset of what the law actually calls for, okay: I suspect internet forum operators already don't want their forums to become crime cesspits, or be taken over by bots or moderators running amok, and that will cover quite a lot of it.

For comparison, imagine there was a new law against SQL injection. Competent forum operators are already guarding against SQL injection because they don't want to be owned by hackers, but they likely are not writing a document explaining how they guard against it. If they were required to produce a document that states "all SQL data updates are handled by Django's ORM", they might then think "would Ofcom think this was enough? Maybe we should add that we keep Django up to date ... actually we're running an 18-month-old version, let's sign up to Django's release mailing list, decide to stay within 3-6 months of the stable version, and add a git commit hook which greps for imports of SQL libraries so we can check that we don't update data any other way". They are already acting against SQL injection, but this imaginary law requires them to make it a proper formal procedure rather than an ad-hoc thing.
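
As a sketch of what that imaginary commit hook might look like (purely illustrative: the "ORM only" policy and the list of forbidden module names are assumptions, not anything the law asks for):

    #!/usr/bin/env python3
    # pre-commit hook sketch: block commits that import raw SQL libraries,
    # enforcing a documented "all SQL data updates go through Django's ORM" policy.
    import re
    import subprocess
    import sys

    # Module names treated as raw SQL access - an assumed list, adjust to your stack.
    FORBIDDEN = ("psycopg2", "MySQLdb", "sqlite3", "sqlalchemy")
    PATTERN = re.compile(r"^\s*(import|from)\s+(" + "|".join(FORBIDDEN) + r")\b")

    def staged_python_files():
        # Files added/copied/modified in the index, i.e. only what this commit touches.
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [path for path in out.splitlines() if path.endswith(".py")]

    def main():
        offenders = []
        for path in staged_python_files():
            try:
                with open(path, encoding="utf-8") as fh:
                    for lineno, line in enumerate(fh, 1):
                        if PATTERN.match(line):
                            offenders.append(f"{path}:{lineno}: {line.strip()}")
            except OSError:
                continue  # unreadable file, skip
        if offenders:
            print("Commit blocked: raw SQL library imports found (policy: use the ORM).")
            print("\n".join(offenders))
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Drop something like that into .git/hooks/pre-commit and the ad-hoc habit becomes a written-down, checkable procedure, which is roughly the shift being described.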

> "What does good forum op record keeping look like?"

Good forum operators already don't want their forums to become crime cesspits, because that would ruin the experience for the target users and add work and risk for themselves. So they will already have guards against bot signups, guards against free open image hosting, and guards against leaking users' private and personal information. They will have guards against bad behaviour, such as passive moderation where users can flag and report objectionable content, or active moderation where mods read along and intervene. If they want to guard against moderators power tripping, they will have logs of moderation activities such as editing post content and banning accounts.

There will be web server logs and CMS / admin tool logs, which will show signups, views, and edits. They will likely have activity graphs and alerts if something suddenly becomes highly popular or spikes bandwidth use, so they can look at what's going on. If they contact the authorities there may be email or call logs of that contact, and there will be records of mod messages from users, though likely not all in one place.

If a forum is for people dealing with debt and bankruptcy, they might have guards against financial scams targeting users of their service, such as a sticky post warning users or a banned-words list for common scam terms - the second-hand sales site https://www.gumtree.com has a box of 'safety tips' prominently on the right warning about common scams.
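
To make one of those guards concrete, a banned-words check for scam terms can be a handful of lines (a minimal sketch; the term list and the "hold for review" behaviour are invented for illustration, a real forum would maintain its own):

    # Minimal sketch of a banned-words guard for common scam terms.
    import re

    # Illustrative term list only.
    SCAM_TERMS = ["guaranteed returns", "western union", "advance fee", "recovery agent"]
    SCAM_PATTERN = re.compile("|".join(re.escape(t) for t in SCAM_TERMS), re.IGNORECASE)

    def flag_for_review(post_text: str) -> bool:
        """Return True if the post should be held for moderator review."""
        return SCAM_PATTERN.search(post_text) is not None

    if __name__ == "__main__":
        print(flag_for_review("Pay the advance fee for GUARANTEED RETURNS"))  # True
        print(flag_for_review("Has anyone dealt with this debt charity?"))    # False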

Larger competent forums with multiple paid (or volunteer) employees would likely already have some of this formalised and centralised just to make it possible to work with as a team, and for employment purposes (training, firing, guarding against rogue employees, complying with existing privacy and safety regulations).

Yes, I think the new law will require forum operators to do more. I don't think it's unreasonable to require forum operators, once a year, to consider: "is your forum at particular risk of people grooming children, inciting terrorism, scamming users, etc.? If your site is at risk, what are you doing to lower the chance of it happening and increase the chance of it being detected? And can you show Ofcom that you actually are considering these things and putting relevant guards in place?".

(Whether the potential fines and the vagueness/clarity are appropriate is a separate thing).



