>if someone is incapable of making good faith genuine attempts to mitigate against atrocious things happening openly in the property they control, then isn’t this fairly solid evidence they’re just not capable of owning that property?
Even if true, what then? It doesn't follow that said property can be ethically transferred to anyone else; otherwise you've thrown out all semblance of property rights. You've sold off the world to the HOAs, as it were; now anyone who objects to the way you maintain your grounds has a button to push to make sure you are deprived of any grounds you keep. Be they real, or digital.
If I make a platform that shuffles bits around, and a bunch of users start using it for CP and terrorism (let's assume perfect enforcement/investigative capability up until piercing the platform, so probability 1 on the CP/terrorism front), I don't think the choice then is "let's slough this off to someone responsible to admin it/install a tap". The only ethically tenable approach would be "well, no more moving bits around by anyone for anyone else anymore". And at that point we've essentially unmade computing.
No one, and I mean "Not One Single Entity, government or otherwise", can be trusted not to abuse privileged access; or, once put into the position to abuse it, to abstain from doing so. Abuse is probability 1. This is part of why I believe Stallman was right. The concept of the user account has been a disaster for the human species. It is the assignment of unique identifiers, to discern one operation done on someone's behalf from another, that has created a world in which we can even imagine such horrifying concepts as a small group unilaterally managing the entirety of the rest of humanity, for any purpose.
For me it is a sobering thought on the impact of automated business systems. I've practically 180'd on the actual character of my own life's work. It's got me in a spot where I'm strongly considering burning my tools. Extreme? Maybe. Sometimes, though, you have to accept that there are extremely unpleasant consequences out there that cannot be satisfactorily mitigated.
So I have a return question for you. Are you sure that the question you asked is the one you should be asking? Or should you be asking yourself, "how many lives are acceptable casualties in order to continue operating within the bounds of my assumed ethical envelope?" Because there is a counter of people affected; you may not be able to read it or write it, but it's there.