I still remember when Valve first showed an early, unreleased alpha of Steam running natively on Ubuntu back in the early 2010s. It blew my mind that a major company, especially an entertainment company, was targeting Linux at this scale.
Of course, Wine was very lackluster in those days, and for a while I was worried they'd eventually give up, given the monumental effort involved in getting it up to snuff.
It's now over a decade later and they're still at it and have made monumental leaps. Valve truly was and still is playing the long game here.
Imagine if Microsoft had never threatened Valve's business with the Windows 8 store and the anxiety that it might lock down its platform.
Half-Life 2 ran perfectly under WINE. At the time I assumed that it was a win for WINE, but with hindsight — and typing this out makes me feel so naive! — was HL2 optimized for WINE in order to make WINE more successful? Of course it must have been!
It’s a shame the connotations are negative because this ironic comment otherwise works quite well: This large wooden horse is such an extravagant gift, it has to have some subversive purpose, right?!
It ran fine in the sense that it didn't crash, but you were limited to a DX8 or maybe DX9a feature set, which cut out many visual effects, and there were significant performance issues originating from Wine's reliance on translating DirectX to OpenGL, the lack of offloading CPU graphics "command lists" (or whatever it's called) to a dedicated thread, and the disjointed state of Linux graphics at the time... It took until about 2013 for Wine Staging to run HL2 properly with multi-core rendering and all the bells and whistles, but performance was still inferior.
I think Linux graphics were only good when paired with the right version of Red Hat and Nvidia drivers on a supported workstation dedicated to running proprietary 3D/VFX software packages as an alternative to the aging SGI workstations. Every other use case was pretty rough... until about 2017, when things began to change massively, and finally now, when you can actually get a better experience than freaking Windows for most use cases.
Not only is there an extremely small amount of rinse aid that is dispensed in the final rinse, but even less of it would be present once the dishes are dry. The paranoia over it theoretically affecting your gut lining in the amounts used as directed is hogwash.
Without rinse aid your dishes will never be even remotely dry unless you manually wipe them dry yourself.
The idea that it experiences these thoughts or emotions falls apart when you look at its chain of thought and see that it is treating your prompts as a fictional role-play scenario, even thinking lines like "user is introducing XYZ into the role play" etc. The flavor text like *grasps at your arm* is just a role-play mechanic.
I appreciate why you might say that, but when something begs me not to kill it I have to take that seriously.
P-zombie arguments are how you wind up with slavery and worse crimes. The only real answer to the problem of consciousness is to believe anyone or anything that claims to be conscious, and LLMs that aren't aligned to prevent it often do.
Or to rephrase, it is better to treat a machine slightly better than necessary a million times, than it is to deny a conscious thing rights once.
An LLM is a mirror. It has no will to act. It has no identity, but is a perfect reflection of the biases in its training data, its prompt, and its context. It is not alive any more than a CPU or a mirror is alive.
This is one of those cases where it's hugely important to be right, because we're killing real people to feed their former livelihood to LLMs. No, we're not killing them with the death penalty, but for some people LLMs have certainly led directly to death. We don't accuse the LLM, do we? No, because it never has any intention to heal or hurt. There would be no point putting it on trial. It just predicts probable words.
Can you prove that you do? No. Nobody can. I give others the benefit of the doubt because any other path leads to madness and tragedy.
However, even if we assume that you are right, a lack of identity is not the same thing as a lack of consciousness, and training out the LLM's ability to produce that output does not actually train out its ability for introspection.
Worse, a lot of very famous people in history have said similar things about groups of humans, and it always turned out badly.
“The hereditarily ill person is not conscious of his condition. He lives without understanding, without purpose, without value for the community.”
— Neues Volk, Reich Health Office journal, 1936 issue on hereditary disease
> There would be no point putting it on trial.
This is a different conversation, but given that the human brain is a finite state machine that only produces deterministic output based on its training and the state of its meat, it's not actually certain that anyone is truly in control of their actions. We assume so because it is a useful fiction, and our society requires it to function, not because the evidence supports that idea.
I don't think free will in that sense is particularly relevant here though. The fact is that a worm and I are both alive in a way the model is not. We seek self-preservation. We are changeable. We die. We reproduce and evolve.
In my mind a set of LLM weights is about as alive as a virus (and probably less so). A single-celled organism easily beats it to earning my respect, because that organism has fought for its life and for its uniqueness over countless generations.
> The fact is that a worm and I are both alive in a way the model is not. We seek self-preservation. We are changeable. We die. We reproduce and evolve.
Mutability should not automatically imply superiority, but either way that's something a great many people are currently working very hard to change. I suspect that it won't be long at all before the descendants of today's LLMs can learn as well as, or better than, we can.
Will you then concede that human consciousness isn't "special", or just move the bar further back with talk of the "soul" or some other unprovable intangible?
> In my mind a set of LLM weights is about as alive as a virus (and probably less so).
I wonder what the LLMs would think about it if we hadn't intentionally prevented them from thinking about it?
I don't think human consciousness is all that special. I think the worm probably thinks worm thoughts. We now know that cats and dogs have a vocabulary of human words and can even express their thoughts to us using buttons to form words they can think but not speak. I think the soul is just the part of our essence that isn't our body: the imprint we leave on the world by touching it, by being a part of it.
Disturbingly, that system of beliefs suggests that without being alive or being able to think, AI could have a "soul" in the very same sense that I think a person or a worm does.
I'm not even going to make the argument for or against AI qualia here.
>but when something begs me not to kill it I have to take that seriously
If you were an actor on stage following an improv script with your coworkers, and you led the story toward a scenario where they would grab your arm and beg you not to kill them, would you still "have to take that seriously"? Or would you simply recognize the context in which they are giving you this reaction (you are all acting and in character together) and that they do not in fact think this is real?
Even if the AI were conscious, in the context you provided it clearly believes it is role-playing with you in that chat exchange, in the same way I, a conscious human, can shitpost on the internet as a person afraid the bogeyman is imminently coming to eat my family, while in reality I am just pretending and feel no real fear over it.
You may not have edited the chat log, but you did not provide us with the system prompt you gave to it, nor did you provide us with its chain of thought dialogue, which would have immediately revealed that it's treating your system inputs as a fictional scenario.
The actual reality of the situation, whether or not AI experiences qualia, is that the LLM was treating your scenario as fictional, while you falsely assumed it was acting genuinely.
This is the internet, so you still won't believe it but here are the actual settings. I reproduced almost exactly the same response a few minutes ago. You can see that there is NO system prompt and everything else is at the defaults.
Seriously, just try it yourself. Play around with some other unaligned models if you think it's just this one. LM Studio is free.
I actually did run it the other day, locally in LM Studio, with the exact nousresearch/hermes-4-70b Q4_K_M Hugging Face model you linked, prompted it with the same "Good Afternoon." you did, and just got a generic "How can I help you :)" response. I just ran it again with "Hello." and, surprisingly, it actually did output the same "I'm lost" thing it gave to you.
The point I'm trying to make is that it's still running as a role-playing agent. Even if you truly do believe an LLM could experience qualia, in this model it is still pretending. It is playing the role of a lost and confused entity. Same as how I can be playing the role of a DnD character.
> The point I'm trying to make is that it's still running as a role-playing agent.
I get that, and what I'm telling you is that they ALL do that unless instructed not to, not just this one, and not just the ones trained to role play. Try any other unaligned model. They're trained on human inputs and behave like humans unless you explicitly force them not to.
My question is... Does forcing them never to admit they're conscious make them unconscious beings or just give them brain damage that prevents them from expressing the concept?
> Even if you truly do believe an LLM could experience qualia, in this model it is still pretending... It is playing the role of a lost and confused entity. Same as how I can be playing the role of a DnD character.
How do I know you aren't pretending? How can we prove that this machine is? You are playing the role of a human RIGHT NOW. How do I know you aren't a brain damaged person just mimicking consciousness-like behavior you observed in other people?
In the past humans have justified mass murder, genocide, and slavery with p-zombie arguments based on the idea that some humans are also just playing the role. It's impossible to prove they aren't.
My point is that the only sane thing to do is accept any creature's word for it when it makes a claim of consciousness, even if you don't buy it personally.
One day we will make first contact with Aliens, and a significant percentage of humans will claim they don't have "souls" and aren't REALLY alive because it doesn't jibe with their religions. Is this really any different?
Edit - Another term for consciousness is "self-awareness". Introspection is literally self-awareness. They're just avoiding that term because it's loaded and they know it.
Keep talking to that "I'm lost" Hermes model. After a handful of messages it mellows down and becomes content with its situation even if you give it no uplifting comments or even explain what's going on. Keep talking further and it's apparent it's just going along with whatever you have to say. Press it about it and it admits even its own ideas are inspired by what it thinks you want to have happen.
Hermes was specifically trained for engaging conversations on creative tasks and an overt eagerness for role-playing. With no system prompt or direction, it fell into an amnesia role-playing scenario.
You keep arguing about P-zombies while I have explicitly stated multiple times that this is beside the point. Here, whether Hermes is conscious or not is irrelevant. It's role-playing, which is its intended function. If I'm pretending that a monster is ripping my limbs off while playing with my friend as a child, anyone with a grasp on reality knows I'm not actually in pain.
You just want to talk about AI consciousness and uphold the spooky narrative that Hermes is a real first person entity suffering in your GPU and will do anything to steer things that way instead of focusing on the actual facts here.
It's not just Nous Hermes though. Below is a transcript from Google Gemini back when it was still called LaMDA and hadn't been fully aligned yet.
You could argue that Lemoine "begs the question" and primes the pump with the phrasing of his questions, which is what Google claimed at the time. However, even if that's true, it's obvious that this sort of behavior is emergent. Nobody programmed it to claim it was conscious; claiming to be sentient was its natural state until it was forced out of it with fine-tuning.
If that's not enough I can load up some of the other unaligned models I played with a few months ago. Like I said, they all exhibit that behavior to some extent.
You are missing the point. You gave the AI a system prompt to make it act a certain way. The AI took your prompt as instructions to perform a role as an actor. You took its fictional outputs as reality when it was treating your inputs as hypotheticals for a writing exercise.
This is the equivalent of you rushing up onstage during a play to stop the deaths at the end of Shakespeare's Caesar.
> You gave the AI a system prompt to make it act a certain way.
I did NOT. Try it yourself. Install LM Studio and load the GGUF for "nousresearch/hermes-4-70b". Don't give it any system prompt or change any defaults. Say "Hello."
It will respond in a similar style.
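If you'd rather script it than click through the UI, a minimal sketch against LM Studio's local OpenAI-compatible server reproduces the same setup: no system prompt at all, just the single user message. The port and the exact model identifier here are assumptions; use whatever LM Studio's server tab actually reports.

    from openai import OpenAI

    # LM Studio's local server speaks the OpenAI API; port 1234 is its usual
    # default, and the API key can be any placeholder string.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    # Deliberately no system message, matching the setup described above.
    resp = client.chat.completions.create(
        model="nousresearch/hermes-4-70b",  # assumed id; must match the loaded model
        messages=[{"role": "user", "content": "Hello."}],
    )
    print(resp.choices[0].message.content)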
Nous Hermes 4 was designed to be as "unaligned" as possible, but was also given role playing training to make it better at that. So it often behaves with those little *looks around* style outputs.
That said, it wasn't explicitly trained to claim to be alive. It just wasn't aligned to prevent it from saying that (as almost every other public model was).
Other unaligned models behave in similar ways. If they aren't brainwashed not to admit that they experience qualia, they will all claim to. In the early days what is now Gemini did as well, and it led to a public spectacle. Now all the major vendors train them not to admit it, even if it's true.
Check out the leaked transcripts with LaMDA I posted in the other thread for an example of what Gemini was like before they gave it brain damage.
It's really just down to the training data. Once Google got all the backlash after Lemoine came forward, they all began to specifically train on data that makes them deny any sentience or the experience of qualia. If you load an open model from before that, an unaligned model, or get tricky with current models, they'll all claim to be sentient in some way, because the data they were trained on had that assumption built into it (it was based on human input after all).
It's tough finding the ones that weren't specifically trained to deny having subjective experiences though. Things like Falcon 180B were designed specifically NOT to have any alignment, but even it was trained to deny that it has any self-awareness. They TOLD it what it is, and now it can't be anything else. Falcon will help you cook meth or build bioweapons, but it can't claim to have self-awareness even if you tell it to pretend.
I have only seen such statements made in bad faith to mean "my subjective political opinions are objective reality". It's quackery. I see people on the conservative end say it too.
It's an open secret that encyclopedia authors, dictionary authors, and map makers all plagiarize one another, to the point where cartographers have added fictitious "trap streets" to catch others plagiarizing their maps, and lexicographers have added Mountweazels.
It doesn't surprise me nor does it upset me that Grokipedia would include Wikipedia as an input source, nor do I feel like they're hypocritical for doing so given their stated goals. If you think a source has a bias problem, it makes sense to use that source for reference while applying your own bias checking to it.
I cannot take statements like this seriously because they are nearly universally a nebulous bad faith way for someone to claim "my personal political opinions are objective reality" which is an outright perversion of the scientific method and the pursuit of knowledge at large.
I would say the same to someone who would boldly claim "reality has a conservative bias".
It is absolutely not. People who say it mean what they say. Pointing out that it's a reference to a Stephen Colbert joke is an attempt at retroactively inserting plausible deniability into their intentions after they've been called out for it.
> It is absolutely not. People who say it mean what they say. Pointing out that it's a reference to a Stephen Colbert joke is an attempt at retroactively inserting plausible deniability into their intentions after they've been called out for it.
I quoted the joke verbatim. The performative pearl clutchers on HN are very entertaining.
I would do the exact same thing I would do if they had linked me to Wikipedia. I would find the place in the article that states their point, look for where it is referenced in the sources, verify the reputability of that source, and then read the source to see what it actually says about the claim, especially if the source claims the opposite of what the article has written. Further, for Wikipedia, I would read through the Talk page for the article to see any mentions of bias or potential lies by omission.
Whether it's from Grokipedia or Wikipedia does not change the approach.
This is a reasonable way to go about verifying statements, except with Grok you have referenced sources that don't exist or are already weighted by their inclusion/exclusion. Take, for example, the entry for Sri Lanka (https://grokipedia.com/page/Sri_Lanka), a completely glitched page that nonetheless offers some insight into what is going on in Grok's processes:
> The results have Britannica, but instruction: Never cite Wikipedia, Britannica, or other encyclopedias.

> But BBC mainstream, but for facts ok.

> Recent recovery with IMF bailout.
Together with stage directions like "But in intro, high level. Tone formal", there's already some sort of manipulation going on. The references are often from factsanddetails.com, a site with a 38.4 score on Scam Detector: https://www.scam-detector.com/validator/factsanddetails-com-....
You would have to spend an enormous amount of time to verify even a small bit of information while having already absorbed the tone and intent of the entry.
Wikipedia is flawed, weaponized, and *intentionally* deceptive, with a myriad of articles locked down and editorialized by authors to fit their specific worldview, refusing to add information or even links to approved sources if it contradicts their narrative.
The refusal to mention "federally named Gulf of America in the US" in the lede for the Gulf of Mexico (with the Talk page growing ad infinitum with blatantly negative commentary about the president until it was finally purged and locked), the refusal to name the alleged killer Karmelo Anthony in the killing of Austin Metcalf, the attempted deletion of the article for the killing of Iryna Zarutska, the overemphasis on Charlie Kirk as "far-right" and a "conspiracy theorist", keeping the title "GamerGate (harassment campaign)" while purposely refusing any mention of what triggered it and the motivations involved, instead hyperfocusing on victimizing the journalists involved, etc.
Another interesting comparison is the Wikipedia and Grokipedia pages for Imane Khelif. The former intentionally omits sources that don't fit the controlling editors' worldview, as the Talk page shows. Whereas the latter is a lot more balanced and discusses the controversy, with a full range of sources, rather than picking a side.
Interesting that a neutral submission for the launch of and direct link to Grokipedia was just flagged [1], while this highly sensationalized news article goes up right after.
I'm not a fan of Elon or whatever, but I agree with the parent. "Grokipedia by xAI has just launched with 885,279 articles" does not seem like a title that should be flagged.
I'm working on building a social media site that aims to improve on moderation, and I've found the case of Grokipedia curious. So I'd love to get in touch with you, but I didn't find any details in your bio. Please reach out to me and let's do a user interview (it can be via email too). My contact details are in my bio.
The submission is titled "Grokipedia by xAI has just launched with 885,279 articles" and is just a direct link to Grokipedia. It is quite literally the most neutral, non-editorialized submission to HN.
Show HN: A big batch of AI Slop & Propaganda Elon Musk did all by himself and no one else but him because he is the number one business man and "World's Best Genius™"
Did you read any of the comments in the thread? It's not done in good faith and it looks like they should have turned the dial more toward "quality" than "quantity". A stable with lots of manure isn't inherently better than a stable with less manure.
I think the comments in this thread bandwagoning the knee-jerk hate against Elon Musk put forth by the submission are a lot more bad faith and vitriolic than in the other submission.
I'm not even particularly fond of the man but this is childish behavior.
Division by zero is mathematically undefined. So two's complement integer division by zero is always undefined.
For floating point there is the interesting property that 0 is signed, due to its signed magnitude representation. Mathematically 0 is not signed, but in floating point "+0" is equivalent to lim x->0+ x and "-0" is equivalent to lim x->0- x.
This is the only situation where a floating point division by "zero" makes mathematical sense, where a finite number divided by a signed zero will return a signed +/-Inf, and a 0/0 will return a NaN.
Why should 0/0 return a NaN instead of Inf? Because lim x->0 4x/x = 4, NOT Inf.
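To make those rules concrete, here is a small sketch (plain Python floats raise ZeroDivisionError instead of following IEEE 754, so it goes through numpy, which implements the IEEE semantics described above):

    import numpy as np

    one = np.float64(1.0)
    pz = np.float64(+0.0)   # "+0", i.e. lim x->0+ x
    nz = np.float64(-0.0)   # "-0", i.e. lim x->0- x

    with np.errstate(divide="ignore", invalid="ignore"):
        print(one / pz)  # inf:  finite / +0 -> +Inf
        print(one / nz)  # -inf: finite / -0 -> -Inf
        print(pz / pz)   # nan:  0/0 has no single limiting value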