Aicracy – Governed by Algorithms (aicracy.net)
117 points by hemmert on Jan 30, 2023 | hide | past | favorite | 100 comments


Algocracy [1] sounds somewhat better, but as a barbarism (Medieval Latin algorismus, a mangled transliteration of the Arabic al-Khwarizmi, plus kratos, the Ancient Greek word for power), it is just as wrong.

The issue however is that society does not have a technology problem, it has a metaphysics problem: under, over, and for what should there be a we. Hence all the futures we imagine through technology can barely go beyond a boring dystopia.

[1] https://en.wikipedia.org/wiki/Government_by_algorithm


I think we're losing that 'we' in many instances; even if it feels like we're 'more connected', we're often less 'together'.


Going back to etymology: the word relationship comes from “act of telling or relating in words.”

You can’t have a “we” if you don’t directly connect your experiences in a meaningful way.


> The issue however is that society does not have a technology problem, it has a metaphysics problem:

Wrote the same thing yesterday in this Techrights piece [1]

I think there's a genuine danger in writing "dystopian warnings" like this. Plenty of people are now actually at the stage where they cannot see the joke.

Christopher Hitchens wrote of North Korea's Kim regime (and later Cory Doctorow said something similar) that Orwell's warning had simply been taken as a blueprint. For the same reason psychotherapists refuse to treat psychopaths - it arms them with better ways to be manipulative. The problem with satirical tech-critique is that for a lot of people our "Dystopia" is their "Utopia", and our frightening visions just give them ideas.

[1] https://techrights.org/2023/01/28/unmasking-ai/


That is exactly the reason I have tried to explore the Solarpunk sub-genre[0]. It isn't that the work is necessarily optimistic, but the genre seems to focus on problems that currently exist and where to go from here.

[0] https://en.wikipedia.org/wiki/Solarpunk


A utopia for some is going to be a dystopia for others. That's why I prefer a heterotopia:

https://bilibili.com/video/BV1EW41177v6


What makes you think it's mangled?


"mangled transliteration" is the unquoted expression used by etymonline [1]. I suppose it's mangled since Khwa becomes go without too much thinking on the Latin side. In the case of alkali, from the Arabic al-qaliy "burnt ashes" [2], there is a bit more care. Perhaps a better transliteration would have been alquarismus, 800+ years too late [3].

[1] https://www.etymonline.com/word/algorithm#etymonline_v_8145

[2] https://www.etymonline.com/word/alkali#etymonline_v_8163

[3] Alexander of Villedieu published Carmen de Algorismo, Poem about Arithmetic, around 1200 https://en.wikipedia.org/wiki/Alexander_of_Villedieu


I'm not sure how I feel about the letter /q/ substituting for the letter خ, but it's interesting to note that the letter matches the softened /g/ in Dutch. I wonder if that shift played a role in why it is spelt the way it is.


> This will help you to conveniently find the products that match your taste, your wallet, and your societal value. To increase your societal value, just be yourself and actively contribute as much as you can. Your efforts will not remain unrewarded.

This is supposed to be dystopian etc, but don't we already have this mechanism in place in our current world? It's called money.


I think that exaggerates the purpose of the world's economy, which for almost everyone, everywhere, almost all of the time is about sustaining themselves and their kids, not buying products for social value. (Keeping up with the neighbors is a very old idea, but the Instagram version of showing it off is very new, and most people just want to live well without trying to impress anyone with bling.) There's also an assumption about the purpose of money itself that winkingly implies that what all working-class boobs aspire to are fancy watches.


A lot of people place a lot more value on status than you seem to realize.


I feel like all my friends say this but none of them really care about status either, they just think other people do. Maybe apart from a handful of psychopaths with outsized social media followings, no one really does. Then again, I live in Portland where a lot of people fight for intellectual status but no one would be caught dead showing any sign of material wealth or privilege... but I also feel like people in the red parts of the metro/state are mostly not concerned with status or much beyond living middle class lives either.


It was aimed more at personalized pricing in insurance, for example, but money is indeed all that, too.


I think the dystopian aspect is that it's an AI judging your value, not other people giving you some of their finite money based on your value to them.


No, because in practice it is not real people but corporations (institutional, ersatz paperclip maximizers that shall exist until some capitalist accidentally “innovates” a real one) that control all the money and who gets it. We already live in a dystopia; the only difference is that, thus far, the adversarial AIs still need humans called executives to run them.


I disagree. The state controls the money to a great extent. It can take it, and it can make more of it, and it can lock you in a box if you don't give it, and it can force (some of) you to go and die fighting another country.

Corporations are not perfect, but generally speaking they are just scaled up versions of individual workers. A one-person company or a 1000-person company might each need capital investment to get their business going. And they get money by providing value. They don't get to just have it.

Now I'm exaggerating slightly, but it seems very odd to view the world as though corporations run it, other than to reflect the fact that corporations are made up of people.


Except large corporations hold more money than workers, even if divided per worker. So they can lobby more effectively and gain power over both private algorithms and regulations.


Agreed, but I mean people (or groups of people) are doing it.

To put it another way, there are two types of actor:

- the state, who has the ability to pass laws, collect money on pain of being locked up, penalise/encourage businesses through tariffs and regulations, and change the value of money through inflation

- people and corporations, who have to provide value to receive money

You're right, any of those can do good or bad with the money they receive. But to say that corporations hold the most power because they can lobby the people who actually have power doesn't make sense to me.


Corps don't hold the most direct power. Rather governments so often defer to big companies or the same small club of rich and industry insiders that it's hard for workers to push back. Regulatory capture can also have the opposite of the desired result, yielding more power to companies instead of maintaining healthy societies.


For sure. But regulatory capture is:

1) Government's fault. Don't get bribed and cosy with the industry you're regulating. We don't call cops teaming up with criminals "police capture". We call it corruption.

2) Predicated on governments being the biggest power, that rich companies can buy a small piece of their power.


> just be yourself and actively contribute as much as you can. Your efforts will not remain unrewarded.

That doesn't really give money. It might get you a thumbs up.


Doesn’t that change with the influencer economy? Most kids I know aspire to become YouTube stars, where, literally, a thumbs up will give you money.


Don't want to be too critical of (I assume) the students' efforts, but in my opinion the message behind the project is a bit too explicitly translated into the products. It could use a bit more creativity in how they convey the message, translated into everyday (dystopian) future life. For instance, compare it to the works at https://www.nearfuturelaboratory.com/ like their IKEA catalog of the future. In my opinion it works better to generate discussion rather than having the dystopian part expressed so explicitly.


That catalog is indeed a great one!


I expected this to be dystopian but nevertheless was extremely relieved to see it explicitly labeled as such towards the end.

Remember that only 18 months or so ago, during the crypto hype, the legal aspect of this was actively touted as some utopic feature of blockchains.


What's the point if the conclusion has already been made by the authors? Such "discussion" with a presumed bias becomes merely a statement or an echo chamber.


The goal is stated at the bottom - they take it for granted that no one wants to be ruled by a chatbot. The point is to get you to think about what exactly is wrong with this picture, and what that says about the future you wish to see.

> It was the goal of this project to show a dystopian future, sensitizing people to undesirable futures while stimulating a discussion about what they wish the future to be like.


Thanks for clarifying, you described this project well.

In my view taking for granted that the role of AI would be bad/dystopian makes it unsuitable to be called a discussion.

Imagine if we switch this topic to immigration in the 1900s: then we'd have a project that takes for granted that a country where the majority are immigrants would be dystopian, and the "discussion" would revolve around how to stop migrants from coming in or how to get rid of them.

Many times in history we've been blindsided by quickly applying a good/bad dichotomy to every matter, instead of exploring it.


It's provocative statements just like this one that often start the most interesting discussions.


Yes, that was the idea. We're industrial designers, so we thought we'd make a start for that discussion by designing concrete, tangible objects that somehow mirror that dystopian society (the 'aicracy').


You know you can disagree with a conclusion, right?


Neutrality is a prerequisite for discussions. This project is not a discussion.


> Neutrality is a prerequisite for discussions

I have a hard time understanding how a person who has ever talked to another person in their life could have that viewpoint.


Your hyperbole doesn't mean anything


I don't think I have ever had a discussion which started from a neutral point of view. The process of discussion was neutral in the sense that all sides were considered equally, but everyone enters the discussion with something in their mind. Which is fair, right? Everyone has looked and thought about the problem, so of course they have an opinion.


The participants of discussion may have biases, but the forum shouldn't.

This project claims to be promoting a discussion, therefore being the forum rather than a participant. If it itself takes a clear position from the start, it has already impaired the debate by turning itself into a participant arguing against those in favor of adopting AI within government.

Unless its understanding of "dystopian" is something merely provocative rather than something intrinsically evil that needs to be prevented.


The project itself is not a discussion, but responses and reactions to it form a discussion.

As to neutrality being a prerequisite, are you trying to separate the idea of discussion from debate? I feel you may have different basic definitional ideas than the people you are arguing with.

On this project specifically, and the notion of AI in government in general, I am reminded of a quote about late nineteenth and early twentieth century utopian fiction: “The authors expend all their effort describing the method of distributing resources without expanding on what the fair distribution is.”


I guess my issue is that IMO these explorations of provocative issues are best done with the least amount of prejudice. When we're clouded by repulsion or other emotional states, we might fail to properly analyze the matter at hand.

By declaring the described future as undesirable, the authors' question and wish to stimulate a discussion is clouded with prejudice, IMHO.


lol wat.

Please cite your sources, friend. All my best discussions started with, “Bro, you’re an idiot if you think…”


I'm not making a factual claim, what sources can I cite, Plato?

It's up to us humans to decide whether discussions are better off taking place within a neutral/biased forum.


> I'm not making a factual claim, what sources can I cite, Plato?

You said that "Neutrality is a prerequisite for discussions". It's a factual claim. Do you have some citation/proof/data that supports your claim? I bet there are plenty of papers (or even better, meta-analyses) about discussions.

> It's up to us humans to decide whether discussions are better off taking place within a neutral/biased forum.

First, in the previous paragraph you stated that every discussion (you didn't specify which ones) needs neutrality, and now you state that discussion can take place within a biased forum and that we should decide which type we value more, i.e. one with a biased or an unbiased forum. Maybe start by deciding which of these two statements is true.

Also neutrality is hard or even impossible.


I'm not interested in researching the research on discussions. You're free to disagree if you think discussions aren't best done within a neutral forum, but I expect most people to have the common sense to see the benefit of it.

Which question is a useful opener for a discussion:

- "How can we help Africa end starvation?"

- "Should we send more aid to these ungrateful African countries who have been cozying up to China instead? (Obviously not.)"

It should be obvious which is a discussion and which is merely a statement veiled as a rhetorical question. But hey, maybe most people can't tell, and that's why Fox News still has so many viewers.


I get this was a design/art project and not necessarily meant to be sophisticated political philosophy, nonetheless there are a few distinctions here that would be useful:

To be ruled by AI I think requires the AI to be in some sense conscious, to have some notion of I and be able to form its own values and goals. Otherwise the ruler is whoever imposes/creates the AI, just in a once removed sort of way.

On the other hand, to be ruled by algorithms: we already are. That's basically what the "rule of law" means - it's the (actually fairly desirable) right to be ruled and governed by a set of (in principle) knowable algorithms, which creates a fairer society than one ruled directly by human diktat.

What's being depicted (and criticised) in the film I think are mostly two things:

1. Some notion of social credit, where desirable actions are rewarded with some arbitrary tokens and undesirable actions are punished by these tokens being taken away. In some way this is already the case with capitalism (where desirable actions are work/investment and the tokens are money), so I think a system where what is societally desirable is evaluated more broadly isn't necessarily that terrible.

2. Persistent surveillance. This makes the above notions of social credit a bit more challenging since in some ways it makes participation in the social ranking system less voluntary (although how voluntary is work/making money in our own system is pretty debatable). Also it's creepy and dangerous in a number of other ways.

Neither of these things really requires AI or algorithms to be bad, but AI tech really helps with the scalability of these solutions and therefore their applicability to real-world societies. But it doesn't really seem like you are critiquing things for which AI would actually be bad as such (such as bias in training, the value of compassion over justice, etc.).


Great points, I think they used “AI” to be a bit provocative on purpose. IMO this is really more about taking the concepts of pure utilitarianism to their extreme, as enshrined by systems and technologies that rule our lives. This fits the term “artificial intelligence” very well I think, but in a very different way than how it’s used today (I.e. conscious AGI agents or stats-based ML models)


Rule of law differs substantially from rule by algorithms since laws are created and interpreted by humans, and there's always the possibility of reasoning and making a deal with a human when the law is incorrect or there are specific circumstances. In rule by algorithm, you get a situation where "Computer says no", and there's no recourse, no deal to be made, nobody to reason with.

An entire society can be bootstrapped to be entirely run by thoughtless algorithms, and it could end up in a situation where there's no way to untangle the dependencies that these systems have with one another.


Whether and what recourse is available is pretty orthogonal to whether the thing is run by computers or humans. In practice it is likely to be a mixture of both these days, as we don’t really have sufficient technology to run a real society, and most countries today take heavy advantage of automation.


> Some notion of social credit, where desirable actions are being rewarded with some arbitrary tokens and undesirable actions are being punished by these tokens being taken away. In some way this is already the case with capitalism (where desirable actions are work/investment and tokens are money)

I would argue this isn't always the case. There's a lot of reward being given for things that are detrimental to society. Pharma companies being at the top of the list.


Thanks for these points!


I quite fancy a Productivity Chair. When I work, I get a normal chair, but if I want to procrastinate I get all the benefits of a standing desk!


Does attempting to offload decision making to an "AI/Algo/ML tool/whatever" not simply suffer from the same problem models/scientific studies/corporations/governments/whatever suffer from already?

Externalities?


We simplify, or straight up forgo, adding some laws to the books because they concern something hard to measure, too mentally taxing to determine, or too much paperwork to execute, even though we want them and they would be beneficial. In short, the benefit is offloading the mental work to computers.


I've actually discussed the same idea with friends. My Western friends find the idea very repulsive, whereas I, with my mixed East-West background, find it worthy of exploration.

IMO the West's ideals reflect its history of loosely confederated tribes, e.g. the Germanic and Frankish tribes. Centralised governments are the exception (the Roman empire lasted only a few hundred years).

Compare this with the east with China being the center of eastern civilisation for more than 2000 years. Education, order and standardisation enabled society to progress beyond wars and civil wars into commerce and creativity.


Western cultures, as originating from the Greeks, prioritise the individual in society and usually attempt to improve through the conflict and collaboration of such individuals. Eastern cultures, as originating from Confucianism, prioritise the group and attempt to improve through harmony of social structure and contribution to the greater good.

I wouldn't say one is better than the other, but they're very alien in comparison to each other and this makes preference extremely subjective. Both approaches have significant drawbacks in addition to all their successes, so it's natural that an Easterner would entertain this satirical idea (seeing the positives) while it'd be abhorrent to a Westerner (seeing only the negatives)


The individualized approach is more congruent with reality, as people experience the world as individuals. Collectivism is based on a conceptual error, in personifying society, instead of recognizing it as an abstract notion. The advancement of human civilization was driven by the recognition of individual rights and the emphasis on the individual.

https://www.theatlantic.com/magazine/archive/2010/07/the-pol...


If you're into eastern philosophies/religions, there is an emphasis on understanding the Ego. Personally I would rather have less rights and be less rich than to see the less-fortunate continue to suffer.

A bigger house or a fancier car doesn't persuade me more than doing something to empower the disenfranchised.

Edit: Also, civilisations arose because people were able to work together and trust each other. We stopped killing each other or relieving ourselves whenever we want so that we can all have a better quality of life.


A society that emphasizes the rights of the individual is a highly cooperative one, with a preponderance of altruism. A society of selfish people will never be able to maintain a social order that affords individual rights.

>>Personally I would rather have less rights and be less rich than to see the less-fortunate continue to suffer.

That's not inconsistent with believing in individual rights. If you have a right to self-autonomy, then you also have a right to give up your own rights to help the needy.

It's only if you deem your will as more worthy than that of others, and would like to use coercive means to compel others to make the same trade-off, that you no longer believe in individual rights.


I would say it's more congruent with evolution, which is to say it aligns with the idea that mutations or other advantages can put an organism far ahead of its competitors and improves the species through genetic preservation (or historical preservation, in the case of societies).

I don't buy that it's out-and-out better than a collectivist system at a society level (even though I consider myself very much an individualist). As usual with these sorts of polarising perspectives, the ideal is probably somewhere in the middle and a purely individualist society will never be able to persist long term.


It’s funny, I thought this comment was going to go the opposite way. The first examples I thought of in evolution were things like ant colonies, where individuals sacrifice themselves for the good of the collective gene pool, and are extremely successful as a species because of it.

But yeah, there are more individualistic species that are also successful. I guess it depends on the ecological niche, and both strategies can find success depending on the environment they are in?


Sums up my thoughts! Haven't met many people who can appreciate or at least listen to both point of views, unfortunately.


We tried to create everyday objects from a fictional world in which we somehow chose to be governed by algorithms. The scariest thing about it was that nothing we came up with doesn't already happen in some way.


>The scariest thing about it was that nothing we came up with doesn't already happen in some way.

Like the texts on the homepage, there are two ways to interpret this sentence. I don't mean to imply an insult, but why didn't you continue until you came up with something new?

>Quickly, your brain will learn to stay focused, enabling you to be meaningfully productive without the urge to distract yourself.

Why would there be a need for focused work in a world where an AI is available?

If the government is just a mindless algorithm, where would be the difference to the current situation with an unaware code of law?

We turn people into computers, who need stimuli to be happy, or punishment to stay focused. Once we have AI to do the computing, there is no need to train people into rigid structures. Thus the entire mood of a dystopia shouldn't be there. There will only be one in-group because nobody else is needed. However, the possible implications of that future are far more sinister.


> The scariest thing about it was that nothing we came up with doesn't already happen in some way.

That's a natural side effect of the kind of systems you are exploring. They kill creativity and innovation, which is an enemy of cybernetic control. If your students seem stuck at a pedestrian level of imagination the system is working - to protect itself from being anticipated. On the other hand, the fact you're still allowed to do this at least shows it considers you have some value yet to extract.


If you're the author(s), this possibly could be Show HN.


Yes, I am – I'm a professor of Industrial Design, this was my class in which the students designed the objects. Should I rename it to Show HN?


This is usually the case with dystopian fiction; the stories that really resonate are the ones that express our subconscious dread at where our society’s self-appointed “leaders” are taking us.

The same is true of conspiracy theories. QAnon is absurd, dangerous nonsense, but upper-class human trafficking rings do exist (see: Epstein). The Great Reset is likewise absurd (the people who attend the WEF are dangerous, malevolent, and powerful, but the WEF itself has no real power), yet the anxiety about it reflects our awareness of how the upper classes see us.


The happiness patch sounds like it would undermine everything else. If you want people to be able to make the best of themselves, you need strong people, not weak ones with zero resilience.


When we filmed the movie, we mixed coke with water and pumped it through the happiness patch's hose: today's readily available boost of dopamine (sugar and caffeine) is also the future's. ;)


Wait, there's a movie? I didn't see that. What's its name?


https://www.youtube.com/watch?v=8fnsW5MBfDg

It's embedded at the bottom of the page ;-)


Oh, the short. I thought you meant a full length movie. Thanks!


    Previous problems of human bias and corruption thus belong to the past
I hope so too. But in many cases, machine learning based systems just amplify human bias, like ML based hiring systems. After all, it's been trained by using biased data.

    The system is designed according to the human values of transparency, happiness, productivity, fairness, and individuality. Humans can be selfish. Our system can't. 
Hmm. Big if true...


I feel like the elephant in the room with Western society is that 90% of the corruption we have comes not from politicians but from the people themselves. People decide whether a policy is a good idea based on how it affects them personally, not whether it is good or bad for the country, or efficient, or just. That's how you get a situation where (majority) mortgage owners are subsidised but (minority) renters are punished, for example. The same applies to the justice system: people decide how they feel about court cases based on the victim's/defendant's age/sex/race/attractiveness.

So an AI either needs to ape those processes (and be popular but largely pointless as that is what we already have) OR to actually be fair but be too unpopular to ever be accepted by the people it governs.


Arizona: Can I get like 5 extra of those happiness patches in case I lose a few?

New York: This bracelet gets really hot when I'm trying to stir fry on my new induction stove.


My old supervisor wrote a book on computational socialism back in '93. I rather like some of the ideas (can a participatory, centrally planned economy work now that we have the internet/cybernetics?), but I suspect it wouldn't work in practice.

https://en.wikipedia.org/wiki/Towards_a_New_Socialism


See also The People’s Republic of Walmart


I’ve often wondered about it too. Is it possible with sufficiently good AI and enough data?

I think a major shortcoming is you still need the creativity of people inventing new products and services and bringing those to market. Without capitalism, how can you incentivize the entrepreneurs?

I’m sure there must be a way, I’m less certain that there’s a better way.

With general AI it could be done, but it’s questionable what value we serve in such an environment.


It is conceivable that the rushed and breathless attempt to remove the human from the loop (more precisely, to have a small minority of humans exert unprecedented control over others via digital channels) will lead to the exact opposite: deepening our appreciation for humanity, an era of a new, digitally inspired, humanism if you wish.

This backlash plays out simultaneously both in the "AI" domain (the algorithmic classification, influencing or reproduction of human behavior) and the "crypto" domain (the algorithmic facilitation and management of "value" and credit accounting between humans)

Dystopia or Utopia? Odds are still on the former, but it's not a done deal yet.


I have sometimes wondered whether a sufficiently advanced AI could measure a proposed piece of legislation against a constitution/bill of rights/"mission statement", and provide a score to show how likely it is to conform and further the social mission.

Step 2 would be to allow the sufficiently advanced AI to propose the legislation.
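A toy sketch of what step 1 could look like, under very loose assumptions: the "sufficiently advanced AI" is stubbed out here with plain bag-of-words cosine similarity, and `conformance_score`, the principle texts, and the sample bill are all made-up illustrations, not anything from the project:

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def conformance_score(bill_text: str, principles: list[str]) -> float:
    """Score a proposed bill against each constitutional principle
    and average the similarities into a rough 'conformance' score."""
    return sum(cosine_similarity(bill_text, p) for p in principles) / len(principles)

principles = [
    "every person has the right to free expression",
    "every person is equal before the law",
]
bill = "this law guarantees free expression for every person"
print(round(conformance_score(bill, principles), 2))
```

A real system would of course need semantics, not word overlap, but even this skeleton shows where the hard part hides: whoever writes the principle texts (and the scoring function) is the actual ruler.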


A "sufficiently advanced" AI could do that, by definition.


Although I like where this satire is coming from, I find the actual execution to feel a little bit too strawmannish.


If they hadn't explained the project, it would've worked better as satire. Had they steelmanned it, the explanation would've had something to add.


I actually added that explanation at the very end. I was afraid that people wouldn't get it as satire. ;-)


(One commenter pointed out that they weren't aware that this is a movie project - the film's embedded at the bottom of the page, but here it is as a YouTube link: https://www.youtube.com/watch?v=8fnsW5MBfDg)


A true AI-cracy should ignore the citizens' vote (it's not a democracy) and make autonomous decisions trying to maximize their well-being. The dopamine-releasing device would skew the measurement of an individual's well-being, so it shouldn't be used.


Here's a timelapse video of the design process behind the project:

https://youtu.be/g2jQpN8UeYs

We usually go through these steps in 13 weeks, here they are in ~40 seconds:

- Research

- Potentials

- Concepts

- Design

- Model (you can see the physical models appearing in the very last second)


This would be a perfect episode for Black Mirror. Hopefully won't become real.


It already exists: Nosedive, S3E1 [0].

See also: Down and Out in the Magic Kingdom by Cory Doctorow [1] and Community's App Developments and Condiments, S5E8 [2].

[0] https://en.wikipedia.org/wiki/Nosedive_(Black_Mirror)

[1] https://craphound.com/down/download

[2] https://en.wikipedia.org/wiki/App_Development_and_Condiments


(If you can, I'd appreciate an upvote on Product Hunt: https://www.producthunt.com/posts/aicracy-2)


Reading and watching the video, I kept wondering whether the authors just rehashed the ideas from their native country's bestseller, QualityLand by Marc-Uwe Kling.



People don’t get it, I’m an aicist. Always have been. No soul, no service. AI can’t drink at my water fountain!


The next stage is getting rid of the weakest link, the unstable biological elements.


So much shopping assistance! Can't wait to visit a sex shop.


Really reminds me of Tom Scott's dystopian-future talks


Oh, I wasn't aware of Tom doing anything into this direction (but of course he does, just checked it out)! I sat next to him at a conference dinner once, will ping him about it!


Great promotion for the next season of Black Mirror


It was indeed much inspired by Black Mirror. I tried to convince Charlie Brooker to come in via Zoom as the examiner in the students' final presentation, but it didn't work out.


Large populations are needed for productivity and for territorial defense.

Once large populations stop being needed for those things, rulers around the world will collude and get rid of most people.


Thunderhead is here



