> Instead I propose this solution; marketplace of ideas now has to incorporate the marketplace for AIs that curate those ideas to be complete.
I don't know about this. Interesting proposal. I don't normally take a cynical view, but humans are emotional creatures, otherwise these dishonest mechanisms of generating engagement wouldn't work. In such a system it is likely that the most emotionally motivating algorithms would win.
Beyond that, why does one company need to make multiple algorithms? Why do we need a law to enforce this? Why don't we just have any company able to make any product they like and let people decide?
We have that now of course. And the problem I laid out with this is plainly obvious on the internet today. People are picking the algorithm that generates in them the most emotional reaction. But I suggest that these algorithms fail in the marketplace by a different mechanism: they inevitably lead to the state we are in now, where people become disillusioned with the control of information flow (and other problems associated with the curation algorithms), and people move to other websites with other algorithms.
In the past couple of years we have seen an explosion of alternatives, some echo chambers, some failed but genuine attempts, some sizeable competitors, some in the pipe. There is a large and fast growing diaspora from the big websites. Essentially what I'm saying is, what you want is happening now without any intervention, and everything is as it should be.
> Beyond that, why does one company need to make multiple algorithms? Why do we need a law to enforce this?
I meant they would be required to open their content for licensing.
The market we have today is for services that are actually a bundle of things: YouTube is both video infrastructure and a recommendation engine. Twitter is infrastructure for textual content, a network of people, and a recommendation engine built on both.
It is hard to say whether they are chosen for their recommendation engines or for their content/network. The latter is very prone to monopolistic dynamics; the former, in my opinion, is a victim of that power-law distribution.
I don't want to be as engaged as possible with Twitter, but Twitter wants me to be. If a new "virtuous recommendation engine" promised it was there not for maximum money but for the maximum wellbeing and sensemaking of its users, I might switch to it, ask my relatives to switch, and make my kids switch.
Well, there are things like Mastodon, for example, which serve the first two purposes (content and networking) without any real tampering with recommendations. I use it, and it is quite popular. UI-wise it is almost identical to Twitter; UX-wise it has some unique behaviors that come with very powerful advantages, IMO. It also doesn't have any design features built with the goal of maximizing engagement.
For the most part, I'd say people have found recommendation engines to be nuisances at best, and tools of control to many. The network effect suggests that the largest draw of these sites is the content. Of course, the recommendation engine has its effect on the perceived content, and its possibly addictive nature affects the available content, so the interaction there is still fuzzy. As another comment here pointed out, people see the algorithm the way a fish sees the water it swims in. They only see content; in their mind the site is the content, and they don't see an engine at all.
My idea of a "maximum wellbeing" recommendation algo is a chronological timeline. It is up to me to decide what I engage with. Maybe that isn't ideal for applications like YouTube. I've seen a pretty simple ranking algorithm used in a project called Lemmy (a FOSS, federated, community-oriented link aggregator) that I think is phenomenal.