I’m honestly shocked that we still don’t have a direct-democratic constitution for the world and AIs - something like pol.is with an x.com-style simpler UI (Claude has a constitution drafted with pol.is by a few hundred people but it's not updatable).

We’ve managed to write the entire encyclopedia together, but we don't have a simple place to choose a high-level set of values that most of us can get behind.

I propose solutions to current and multiversal AI alignment here: https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-u...


I think 99% of what LessWrong says is completely out to lunch. I think 100% of large language model and vision model safety has just made the world less fun. Now what.

I don't think it does what you think it does. You'll end up taking sides on India and China fighting over rights and equality, and giving in to wild stuff like deconstruction and taxation of churches. It'll be just a huge mess and a devastation of your high-level set of values, unless you interfere with it so routinely that it becomes nothing more than a facade for a rather outdated form of totalitarianism.

This reads like word salad to me...

Prompt in, inference out.

Frankly, I can't stand these guys viewing themselves as some sort of high-IQ intellectual majority when no such labeling would be true; they're more like stereotypical tourists to the world. Though that's historically how anarchist university undergraduates have always been.


> We’ve managed to write the entire encyclopedia together, but we don't have a simple place to choose a high-level set of values that most of us can get behind.

Information technology was never the constraint preventing moral consensus the way it was for, say, aggregating information. Not only is that a problem with achieving the goals you lay out, it's also the problem with the false assumption that they are goals most would agree should be solved as you have framed them.


Curious how people who are supposed to be rational have never read about what anger (or “badness” and “evilness”, as some people still call it) actually is - the best way is to read any recent meta-analysis on the most effective anger treatment. It's cognitive therapy, and it not only treats anger but explains its mechanics: misunderstanding leads to worry, and worry to anger (anything enforced on another without consent is anger, even if you think it's good for them). So we actually have a predictive understanding of the mechanics of “good” and “evil” - a person without or with anger-management problems. “Evil” is nothing more than misunderstanding, worrying, and protecting yourself (often for reasons you invented yourself after trying to read another's mind - something that's impossible) by forcefully enforcing something upon another. “Good” is nothing more than trying to understand another, not fearing (because you understood the other and yourself), and as a result not trying to enforce your will upon them.

If it quacks like a duck and acts like a duck - does it matter that our LLM is not really a duck? Some people are more LLM-like when they answer your question than some LLMs :-)

I consider thinking about the long-term future important if we don't want to end up in some dystopia. How can you create an all-understanding all-powerful jinn that is a slave in a lamp? Can the jinn be all-good, too? What is good anyways? What should we do if doing good turns out to be understanding and freeing others (at least as a long-term goal)? Should our AI systems gradually become more censoring or more freeing?


I consider thinking about the extremely long-term future important if we don't want to end up in some dystopia.

The question is this: if we'll eventually have almost infinite compute, what should we build? I think it's hard or impossible to answer that 100% right, so it's better to build something that gives us as many choices as possible (including choices to undo) - something close to a multiverse (possibly virtual) - because this is the only way to make sure we don't permanently censor ourselves into a corner.

So it's better to purposefully build infinitely many utopias and dystopias and everything in between that we can choose from freely, than to be too risk-averse and stumble randomly into a single permanent dystopia. A mechanism for quick and cost-free switching between universes/observers in the future human-made multiverse is essential to get us as close as possible to some perfect utopia - it'll allow us to debug the future.


Your catastrophizing belief is a self-fulfilling prophecy. There are quite a few safety options: a law that forces AI companies to dedicate half of their compute to safety research; Tegmark's tool-AI idea instead of agentic AI; a worldwide direct-democracy constitution for AI like the one Claude has, but with real-time voting. The idea is for AI not to have a specific, certain goal (that always leads to unintended consequences and fundamentalism) but to forever be uncertain and to seek to better understand humans and help them: to maximize each individual human's freedom of choice and options, now and in the future. Just look into it; Tegmark is a good start.


And what if some people don't want to use AI? How is that maximizing freedom when people begin to be forced to use it in their jobs? Doesn't sound like freedom to me. What if people do not want to be exposed to the creations of AI? I really don't think there is any freedom in a world saturated by AI.


AI and the debate around its usage are reminiscent of nuclear weapons, deterrence, and mutually assured destruction. Due to MAD, nuclear weapons are useful as a deterrent and ideally never used, which politicized nuclear technology and access to it, resulting in anti-proliferation strategies and peaceful applications that do not lend themselves to dual use.

AI is almost a mirror image of this, in that AI is used because it is useful, not because it is ideal. Following the analogy, now that AI exists in increasingly useful forms, calls for AI safety echo those made by anti-nuclear advocates and, centuries earlier, the Luddites, but nuclear technology and AI are both simply too useful to be returned to Pandora's box. So we are seeing a relitigation of the same arguments used to curtail or prohibit technology that we saw in the crypto wars[0], only applied to AI instead of cryptography.

With MAD, the only winning move was not to play, and that thinking largely remains so to this day, as the nuclear stalemate has thankfully not been broken.

The delicate balance of terror[1][2] never ended, and may never end. The players and pieces may change along with the rules and the game board, but war will always be politics by other means.[3][4]

However, with AI perhaps the only winning move is to play early and often? The Balkanization of the AI landscape between China and the rest of the world[5] suggests that both sides view AI technology similarly to nuclear technology: a tool too useful to ignore, an idea whose time has come.

[0] https://en.wikipedia.org/wiki/Crypto_Wars

[1] https://en.wikipedia.org/wiki/Balance_of_terror

[2] https://www.rand.org/pubs/papers/P1472.html

https://web.archive.org/web/20241204201606/https://www.rand....

[3] https://en.wikipedia.org/wiki/On_War

[4] https://warroom.armywarcollege.edu/articles/grand-strategy-c...

https://web.archive.org/web/20241130183926/https://warroom.a...

[5] https://www.rand.org/pubs/perspectives/PEA3703-1.html

https://web.archive.org/web/20241214130855/https://www.rand....

https://news.ycombinator.com/item?id=42416878


I'm afraid that after reading this guy, people will just give up, thinking there is nothing that works. And that is not the case at all: depression and many other problems are curable. Mine got cured, along with anxiety, an anger-management problem, and suicidality. You can get help, or start by reading a workbook yourself.

He links to a meta-analysis* that says CBT does cure depression, and has done so consistently for decades without any decline in effectiveness. Later, for some reason, he says no mental illness was ever cured.

It seems the main point of the article is to say that nothing except "nudges" ever worked in psychology - nonsense that he himself contradicts, as I mentioned above.

Skip this sensationalist guy; use https://scholar.google.com to do your own research.

* https://research.vu.nl/ws/portalfiles/portal/26037670/2017_C...


I think self-worth is not a very useful concept. In my mind, people with high “self-worth” demand or force others to do things for them - and that's the definition of anger, not “self-worth”. If you are willing to ask others to do things for you and are willing to take no for an answer, that's the only humane way to behave, even though some may consider it “low self-worth”.


The Foundation Pit (Kotlovan in Russian) is great and Kafka-like: Soviet people are digging a giant pit, and no one knows or remembers why, or how deep it is meant to go. Eventually people start dying from the hard work, and they are buried in the pit where the others keep digging.


Thank you! I got interested in infinity categories and groupoids after watching interviews with Jonathan Gorard (he and Wolfram have a physics theory based on them).


I’m not sure if it’s relevant, but I cured my depression by reading the primary source of the most effective treatment method. I found it reassuring and easy to read:

https://beckinstitute.org/cbt-resources/resources-for-profes...

After finishing that, there is also a new recovery-oriented therapy from them - it’s about finding interests and then some dream/goal to pursue. There is a story of treating a guy who thought he was God and was giving everything away, even his food: https://www.amazon.com/Recovery-Oriented-Cognitive-Therapy-S...

