I know there are a lot of groups working on how to prevent AI disruptions to society, or how to mitigate their impact, but are there any groups working on how to adapt society to a full-blown, unchained AI world?
Like, throw out all the safeguards (which seems inevitable): how does society best operate in a world where no media can be trusted as authentic? Or where "authentic" is cut from the same cloth as "fake"? Is anyone working on this?
One thing we should be doing is supporting critical thinking at the high school and university level. Unfortunately, it seems we have been dedicated to the opposite of this for about 50 years, at least in the US.
At the risk of falling for a joke, I'm not sure "critical thinking" means what you think it means. It just means thinking objectively about things before making judgments; it has nothing to do with criticizing people. The things one criticizes are one's own beliefs and the reasons for believing them.
What do I believe? Why do I believe that? Why do I feel that evidence supports that belief, but not this one? For example, I can explain in a fair bit of detail why I believe that the Apollo landing was not faked. I wouldn't normally bother to explain those reasons, but all of them are based on beliefs and evidence that I've read about, and most of those beliefs are subject to reversal should counter-evidence surface.
I think of critical thinking as the art of being critical toward oneself when one is thinking.
In other words, when I read something and hear myself think, "oh yeah, that sounds right", there is another part of my mind that thinks, "maybe not".
Critical thinking is precisely what could have spared us from all of that 'cultural marxism' you mentioned, or at least let us engage with it in a way that is... constructive.
I suspect we'll need to return to the idea of getting our news from trusted sources, rather than being able to rely on videos on social media being trustworthy.
Technically, we could try to build a trusted computing-like system to ensure trust from the sensor all the way to a signed video file output, but keeping that perfectly secure is likely to be virtually impossible except in narrow situations, such as CCTV installations. I believe Apple may be attempting something like this with how Face ID is implemented on iPhone, but I suspect we'll always find ways to trick any such device.
80% of the problem could be solved with a reliable signature scheme that allows some remixing of video content. So if CNN publishes a video, signed with their key so it's verifiably CNN, we need the ability to take a 20-second bit of it and still have a valid signature attached that verifies the source is CNN (without trusting the editor). Then you can share clips, remix them, etc., and have integration in social media that attests to the source.
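A minimal sketch of one way that could work, assuming the publisher's stream has already been split into fixed-length chunks and each chunk hash is signed along with its position; the chunking step, the stream ID, and the record format here are made up for illustration (real provenance efforts like C2PA's content credentials go much further):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign_chunks(private_key, stream_id, chunks):
        # One signature per chunk, binding the chunk hash to its position and
        # stream so chunks can't be silently reordered or spliced into another video.
        records = []
        for i, chunk in enumerate(chunks):
            digest = hashlib.sha256(chunk).digest()
            message = f"{stream_id}:{i}:".encode() + digest
            records.append((i, digest, private_key.sign(message)))
        return records

    def verify_clip(public_key, stream_id, clip_chunks, clip_records):
        # An excerpt verifies if every chunk in it carries a valid record.
        for chunk, (i, digest, sig) in zip(clip_chunks, clip_records):
            if hashlib.sha256(chunk).digest() != digest:
                return False
            try:
                public_key.verify(sig, f"{stream_id}:{i}:".encode() + digest)
            except InvalidSignature:
                return False
        return True

    # The publisher signs once; anyone can later share a subset of the chunks
    # plus their records and a viewer can still check who signed them.
    key = Ed25519PrivateKey.generate()
    chunks = [b"chunk-0", b"chunk-1", b"chunk-2", b"chunk-3"]
    records = sign_chunks(key, "cnn/2024-06-01/evening-broadcast", chunks)
    print(verify_clip(key.public_key(), "cnn/2024-06-01/evening-broadcast",
                      chunks[1:3], records[1:3]))  # True

Note the limit of the approach: it can show every chunk of a clip was published by the key holder, but it can't show the clip wasn't cut misleadingly out of context.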
My plan for solving this "20-second bit of it" problem is to do it at the analog hole. Whatever is painting those pixels, a smart TV for instance, will be coordinating with cloud services to fingerprint at a relatively high temporal resolution - maybe 5 seconds. The video itself is the signature. But we will need either trusted analog-hole vendors or some trusted non-profit organization - or likely both. I think that "viewing" will be delayed by perhaps 30 seconds to allow for that signature analysis. These smart TVs will overlay a scorecard for all displayed content, and owners will be able to set device scorecard thresholds so that low-scoring content gets fuzzed out.
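For what it's worth, a toy sketch of that display-side loop: fingerprint each ~5-second segment, ask a scoring service about it, and fuzz segments below the owner's threshold. The fingerprint here is a crude average-luminance hash and score_lookup is a hypothetical stand-in for the cloud/non-profit service; a real deployment would need robust perceptual fingerprints:

    import numpy as np

    def segment_fingerprint(frames, grid=8):
        # frames: (n_frames, height, width) grayscale array for one ~5 s segment.
        # Average the frames, downsample to grid x grid tiles, threshold at the median.
        mean_frame = frames.mean(axis=0)
        h, w = mean_frame.shape
        tiles = mean_frame[: h - h % grid, : w - w % grid] \
            .reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
        return np.packbits(tiles.flatten() > np.median(tiles)).tobytes()

    def score_lookup(fingerprint):
        # Hypothetical stand-in for the external scoring/ledger service.
        return 1.0

    def render_segment(frames, threshold=0.5):
        score = score_lookup(segment_fingerprint(frames))
        if score < threshold:
            # "Fuzz out" low-scoring content, e.g. by heavy pixelation.
            small = frames[:, ::16, ::16]
            big = np.repeat(np.repeat(small, 16, axis=1), 16, axis=2)
            return big[:, : frames.shape[1], : frames.shape[2]]
        return frames

    segment = np.random.randint(0, 256, size=(150, 480, 640)).astype(np.float32)
    print(render_segment(segment).shape)

The 30-second viewing delay you mention would be the budget for the fingerprint computation plus the round trip to whatever service answers score_lookup.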
If you trust a person (or source) and they have a private key that they can properly secure, they could always sign their material with that key. That would prove that the source provided that material.
A blockchain could be a way to store and publish that signature & hash.
It can't say "this is real", it can only say "that signature belongs to source X".
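A tiny sketch of that point, assuming an Ed25519 key pair; the record format and where it would be published (a ledger, a cert, a DNS record) are placeholders:

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()   # kept secret by the source
    material = b"raw video bytes ..."

    # The record a source might publish alongside the file.
    record = {
        "sha256": hashlib.sha256(material).hexdigest(),
        "signature": signing_key.sign(material).hex(),
    }

    def came_from_key(material, record, public_key):
        # Proves only "the holder of this key signed these exact bytes",
        # not that the content depicts anything real.
        if hashlib.sha256(material).hexdigest() != record["sha256"]:
            return False
        try:
            public_key.verify(bytes.fromhex(record["signature"]), material)
            return True
        except InvalidSignature:
            return False

    print(came_from_key(material, record, signing_key.public_key()))  # True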
I disagree. The key alone is neither sufficient nor secure. We will need crowdsourced validity data as well. We need a zero-trust model - and I too believe that blockchains will play a role.
The TV will have to match every short segment - perhaps 5 seconds of video - against a blockchain that scores the validity of that segment, and of course look back to its original source. Signing the whole video is necessary but not sufficient.
Why would either of those things need a blockchain?
Crowd-sourced validation is already a "network of trust".
An AI-based score would have to be produced by a centralized provider, so a network of trust is the reality for that too.
The only way blockchains would provide benefit is as a distributed discovery mechanism for "reviews" of a video chunk, and an open ecosystem for that (a DHT or trackers) would work better.
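To make the "discovery" point concrete, here's a toy in-memory stand-in for that kind of index - a real one would sit on a DHT like Kademlia or on trackers, and the review records would themselves be signed by whoever publishes them:

    import hashlib
    from collections import defaultdict

    class ReviewIndex:
        # Maps chunk-hash -> list of review records, the way a tracker maps an
        # infohash to peers. In-memory only; the interface is illustrative.
        def __init__(self):
            self._store = defaultdict(list)

        def publish(self, chunk, review):
            self._store[hashlib.sha256(chunk).hexdigest()].append(review)

        def lookup(self, chunk):
            return self._store[hashlib.sha256(chunk).hexdigest()]

    index = ReviewIndex()
    chunk = b"five seconds of encoded video"
    index.publish(chunk, {"reviewer": "some-fact-check-org",
                          "verdict": "matches the original broadcast"})
    print(index.lookup(chunk))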
Blockchains only ever had a reasonable use case under the assumptions of functional capitalism (and we don't have one of those). The reality is that they can't be sustainable without capture, and the market incentives only increase the incentive to capture them.
DHTs and networks of trust only have the value of the service they provide, and while that is less exciting for scamming people, they have survived and been high-functioning for decades.