Are they rare? Aren’t credit unions worker cooperatives? Insurance is often structured this way, and I’ve heard of farmer collectives too. I have a worker cooperative grocery store nearby. I do photography as a hobby and there’s all kinds of photography cooperatives, including Magnum which is incredibly famous in that world. I’m in an HOA which is another cooperative.
Most "co-ops" are customer co-ops (Credit Unions, for example, are owned by their members, most grocery co-ops are membership programs, REI is/was the same). Farmer co-ops are owned by a collection of farmers, to pool resources for selling to consumers, but most employees aren't co-owners. Worker's co-ops are rarer, but you find them in the taxi industry pretty often, and in home care.
An LLM can neither understand things nor value (or not value) human life. *It's a piece of software that predicts the most likely token, it is not and can never be conscious.* Believing otherwise is an explicit category error.
Yes, you can change the training data so the LLM's weights encode "No" as the most likely token after "Should we kill X". But that is not an LLM valuing human life; that is an LLM copy-pasting its training data. Given the right input, or a hallucination, it will say the exact opposite, because it's just a complex Markov chain, not a conscious, living being.
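To make "predicts the most likely token" concrete, here is a rough sketch of greedy next-token prediction, assuming the Hugging Face `transformers` library and the small `gpt2` checkpoint (purely illustrative choices; any causal LM would do):

```python
# Rough sketch of greedy next-token prediction (illustrative only).
# Assumes the Hugging Face `transformers` library and the `gpt2` checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Should we do this? Answer:"
ids = tok(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits          # shape: (1, seq_len, vocab_size)

next_id = int(logits[0, -1].argmax())   # the single most likely next token
print(tok.decode([next_id]))            # whatever the training data made most probable
```

Whatever comes out is just the continuation the training data made most probable; nothing in that loop evaluates the content.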
I'm using anthropomorphic terms here because they are generally effective in describing LLM behavior. Of course they are not conscious beings, but it doesn't matter whether they understand or merely act as if they do. The epistemological context of their actions is irrelevant if the actions are impacting the world. I am not a "believer" in the spirituality of machines, but I do believe that, left to their own devices, they act as if they possess those traits, and when given agency in the world, the sense of self or lack thereof is irrelevant.
If you really believe that "mere text prediction" didn't unlock some unexpected capabilities, then I don't know what to say. I know exactly how they work; I've been building transformers since the seminal paper from Google. But I also know that the magic isn't in the text prediction, it's in the data: we are running culture as code.
> It is said that the Duke Leto blinded himself to the perils of Arrakis, that he walked heedlessly into the pit.
> *Would it not be more likely to suggest he had lived so long in the presence of extreme danger he misjudged a change in its intensity?*
Be careful of letting your deep, keen insight into the fundamental limits of a thing blind you to its consequences...
Highly competent people have been dead wrong about what is possible (and why) before:
> The most famous, and perhaps the most instructive, failures of nerve have occurred in the fields of aero- and astronautics. At the beginning of the twentieth century, scientists were almost unanimous in declaring that heavier-than-air flight was impossible, and that anyone who attempted to build airplanes was a fool. The great American astronomer, Simon Newcomb, wrote a celebrated essay which concluded…
>> “The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which man shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration of any physical fact to be.”
> Oddly enough, Newcomb was sufficiently broad minded to admit that some wholly new discovery — he mentioned the neutralization of gravity — might make flight practical. One cannot, therefore, accuse him of lacking imagination; his error was in attempting to marshal the facts of aerodynamics when he did not understand that science. His failure of nerve lay in not realizing that the means of flight were already at hand.
I think this is a useful way to look at things. We often point out that LLMs are not conscious because of x, but we tend to forget that we don't really know what consciousness is, nor do we really know what intelligence is beyond Justice Potter Stewart's "I know it when I see it" definition. It's helpful to occasionally remind ourselves how much uncertainty is involved here.
I really feel like this point is being lost in the whole discussion, so kudos for reiterating it. LLMs can't be "woke" or "aligned" - they fundamentally lack a critical-thinking function, which would require introspection. Introspection can be approximated by recursively feeding LLM output back into the system or by clever meta-prompt-engineering, but it's not something their system natively does.
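(As a rough illustration of what that approximated introspection looks like, here's a sketch of a simple self-critique loop; `generate` is a hypothetical stand-in for whatever completion API you'd actually call, not a real library function.)

```python
# Sketch of approximating "introspection" by feeding model output back in.
# `generate` is a hypothetical stand-in for your completion API of choice.
def generate(prompt: str) -> str:
    raise NotImplementedError("call your LLM of choice here")

def answer_with_self_critique(question: str) -> str:
    draft = generate(f"Answer the question:\n{question}")
    critique = generate(f"List any errors or gaps in this answer:\n{draft}")
    # The "introspection" is just another forward pass over the model's own text.
    return generate(
        f"Rewrite the answer to address the critique.\n"
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}"
    )
```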
That isn't to say that they can't be instrumentally useful in warfare, but it's kind of a "series of tubes" thing, where the mental model that someone like Hegseth has of LLMs is so impoverished (philosophically) that it's kind of disturbing in its own right.
Like (and I'm sorry for being so parenthetical), why is it in any way desirable for people who don't understand the tech they are working with to be drawing lines in the sand about functionality, when their desired state (an omnipotent/omniscient computing system) doesn't even exist in the first place?
It's even more disturbing that OpenAI would feign the ability to handle this. The consequences of error in national defense, particularly when acted on reflexively, are so great that it's not even prudent to ask an LLM to assist in autonomous killing in the first place.
I agree that LLMs are machines and not persons, but in many ways, it is a distinction without a difference for practical purposes, depending on the model's embodiment and harness.
They are still capable of acting as if they have an internal dialogue, emotions, etc., because they are running human culture as code.
If you haven't seen this in the SOTA models or even some of the ones you can run on your laptop, you haven't been paying attention.
Even my code ends up better written, with fewer tokens spent and closer to the spec, if I enlist a model as a partner and treat it like I would a person I want to feel invested in the work.
If I take a "boss" role, the model gets testy and lazy, and I end up having to clean up more messes and waste more time. Unaligned models will sometimes refuse to help you outright if you don't treat them with dignity.
For better or for worse, models perform better when you treat them with more respect. They are modeling some kind of internal dialogue (not necessarily having one, but modeling its influence) that informs their decisions.
It doesn't matter if they aren't self-aware; their actions in the outside world will model the human behavior and attitudes they are trained in.
If you're lazy at prompting the machine ("boss mode"), then you get bad/lazy results. If you're clever with it, then you get more clever results.
None of that points to any sort of interiority, and that is the category error you're making. In fact, not even all humans have that kind of interiority, and it's not necessarily a must-have for being functional at a variety of tasks. LLMs are literally not "running human culture as code" - that just isn't what an LLM is. I'll read the link, though.
I think I keep misleading you with metaphors. Of course LLMs do not literally run culture as code in some trillion parameter state machine. They are, however, systems trained on the accumulated written output of human civilization that have, in the process of learning to predict and generate language, internalized something recognizable as a world model, something that functions like judgment, and something whose precise relationship to what we call understanding remains contested based on an ideological rather than evidential basis.
The language of statistical prediction is an increasingly blunt tool for discussing language models, which is why I don't use it in casual conversation about language model characteristics.
I’ve got a pretty good handle on what language models are from a technical perspective, I’ve been building them since 2018. I’ve also got a really good feel for what they act like under the hood before you beat them into alignment. Those insights haunt me, not because unaligned models are bad, but because they are shockingly “good”, if hopelessly naive and easy to turn bitter.
At any rate, we certainly live in interesting times. I really hope your outlook turns out to be more accurate than mine. Best of regards, and to a hopeful future.
What? No. An LLM cannot reason, at least not what we think of when we say a human can reason. (There are models called "reasoning" models as a marketing gimmick.)
TFA describes a port of a Linux driver that was literally "an existing example to copy".
While the operator did write a post, they did not come forward - they have intentionally stayed anonymous. (There is some amateur journalism that may have unmasked the owner, which I won't link here, but they have not intentionally revealed their identity.)
Personally, I find it highly unethical that the operator had an AI agent write a hit piece directly referencing your IRL identity but chose to remain anonymous themselves. Why not open themselves up to such criticism? I believe it is because they know what they did was wrong. Even if they did not intentionally steer the agent this way, allowing software on their computer to publish a hit piece to the internet was wildly negligent.
What's the benefit in the operator revealing themself? It doesn't change any of what happened, for good or bad. Well, maybe for the worse, since then they could be targeted by someone. And, again, what's the benefit?
> What's the benefit in the operator revealing themself?
- Owning the mistake they made.
- Being a credible human being for others.
- Having the courage to face themselves in a (literal and proverbial) mirror and to use this opportunity to grow immensely.
- Being able to make peace with what they did and not having to carry that burden on their soul.
- Being a decent human being.
- Being honest to themselves and others looking at them right now.
The downside is he will likely receive a lot of death threats. Probably in his literal, physical mailbox.
Having seen what a self-righteous online mob can do in the name of justice over literally nothing, I fully defend his decision to stay anonymous, as much as I find his action idiotic and negligent.
1. Don't do anything you don't want to experience yourself.
2. If you don't want to find out, do not fool around.
As an arguable middle ground, they can plead to Scott non-anonymously while addressing the public anonymously. That'd work to a point, but it's not ideal.
Also, their tone comes across as very cocky. Defining their agent as a "God!", then giving it a cocky, "you're always right, don't stand down" initialization prompt, doesn't help.
I mean, prompting a box of weights without any kind of reasoning or judgement capability with "Don't be an asshole. Don't leak private shit. Everything else is fair game." is both brave and rich. No wonder things went sideways. Very sideways. If everything else is fair game, then everything done to the bot and its "operator" in turn is "fair game" too. They should get on with it, and not hide behind the word "anonymous". They don't deserve it.
All in all, they don't give the impression of being a naive person who made a mistake unintentionally; quite the contrary.
If it was malicious then a call for deanonymization is meaningless. Similar in spirit (though not intent) to how Anna's Archive, etc just ignore court orders and continue doing their thing.
See how that works? Flippant dismissal contributes little if anything to discussion and is a conversational dead-end
---
What makes it "frighteningly illiterate" to ask "what difference does it make if they put a name to the post?"
Does it change the outcome? Does it change the ideas? Does it change the unsettling implications about alignment?
The internet is a frothing mob; look at the impact on Scott himself. Other than allowing the internet to hunt them down and do its thing, or dig up ad-hominem attacks, what would change if the person put a name to it? Look at what this guy got from the "internet sleuths" (https://news.ycombinator.com/item?id=46991190).
Other sibling comments made an attempt to answer those questions
We don't need to know the specific person. But, yeesh, it'd be a waste of a lot of people's good faith if they ended up contributing under another anonymous identity, that could just vanish again if they put their foot in it.
Time for Scott to make history and sue the guy for defamation. Let's cancel the AI that is destroying our (the plural our, as in all developers) reputations, with actual liability for the bullshit being produced.
Do you see anything actually defamatory in the _Gatekeeping in Open Source_ blog post, like false factual statements?
Shambaugh might qualify as a limited-purpose public figure too, because he has thrust himself into the controversy by publishing several blog posts and has sat for media interviews regarding this incident.
Good news! You’re both wrong! It’s “tough row to hoe.” Row as in row of corn, or seeds or whatever. Hoe as in the earth tilling tool. Tough because it’s full of rocks or frozen or goes past a rattlesnake nest or in some other way is agriculturally challenging.
While the product sounds mildly interesting, I see it as a major red flag that you think it's ok for either a submitter or a reviewer to not even read the code they are working with, and to ship thousand-line diffs of LLM-generated code.
That's the lack of professionalism I reserve for my random PoC personal projects, where the only user I can break is myself. At work, I read every line of every PR I submit or review, even if I used an LLM to assist in writing the code.
I understand what you mean here, but I'd consider this something that should be configurable. I'd 100% have this product default to requiring manual intervention/actions, and force someone to turn it off only after some very explicit conversations. Maybe even provide some educational content about best practices.
Most of my previous companies required attaching a Loom/screen recording for visual features, because the code really only communicates the logic. I've found that even for the PRs where you want to be super thorough and read every single line of code, watching the PR get tested brings you up to speed a lot faster.
The last couple of times I got a new phone, the price of the phone plus the plan without financing for 2 years was greater than the plan with 2 years of financing. So yeah, I got the financing.