Language is the network of abstractions that exists between humans. A model is a tool for predicting abstract or unobservable features in the world. So an LLM is a tool that explores the network of abstractions built into our languages.
Because the network of abstractions that is a human awareness (the ol' meat suit pilot model) is unique to each of us, we cannot directly share components of our internal networks. Thus, we all interact through language, and we all use language differently. While it's true that compute is fundamentally the same for all of us (we have to convert complex human abstractions into computable forms, and computers don't vary that much), programming languages provide general mappings from diverse human abstractions back to basic compute features.
And so, just like with coding, the most natural path for interacting with an LLM is also unique to each of us. Your assumptions, your prior knowledge, and your world perspective all shape how you interact with the model. Remember, you're not just getting code back though... LLMs represent a more comprehensive world of ideas.
So approach the process of learning about large language models the same way you approach learning a new language in general: pick a hello world project (something that's hello world for you) and walk through it with the model, paying attention to what works and what doesn't. You'd do something similar if you were handed a team of devs that you didn't know.
For general use, I start by having the model generate a req document that 1) I vet thoroughly. Then I have the model make TODO lists at all levels of abstraction (think procedural decomposition for the whole project) down to my code, which 2) I vet thoroughly. Then I require the model to complete the TODO tasks. There are always hiccups, same as when working with people. I know the places where I can count on solid, boilerplate results and require fewer details in the TODOs. I do not release changes to the TODO files without 3) review. It's not fire-and-forget, but the process is modular and understandable, and 4) errors arising from system design are mine to identify and address in the req and TODOs.
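To make that concrete, here's a rough sketch of the req-to-TODO step, assuming the OpenAI Python client. The file names, model choice, and prompt wording are my own illustration, not a fixed recipe:

```python
# Rough sketch of the req -> TODO decomposition step.
# File names, model, and prompt wording are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

req = Path("REQUIREMENTS.md").read_text()  # the already-vetted req document

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are planning a software project."},
        {"role": "user",
         "content": "Decompose these requirements into nested TODO lists, "
                    "from whole-project milestones down to individual "
                    "functions:\n\n" + req},
    ],
)

# The draft goes to a file for human review -- gate 2). Nothing gets
# executed until the TODOs are vetted.
Path("TODO-draft.md").write_text(resp.choices[0].message.content)
```

The point isn't the specific API; it's that every model output lands in a file that a human reviews before the next step runs.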
The same can be said about hucksters of all stripes, yes.
But maybe not contrarians/non-contrarians? They are just the agree/disagree commentators. And much of the most valuable commentary is nuanced, with support both for and against the commenter's own position. But generally for.
A simulated environment suggests the possibility of alignment during training, but real-time, real-world data streams are better.
But the larger point stands: you don't need an environment to explore the abstraction landscape prescribed by systems thinking. You only need the environment at the human interface.
I agree with your basic argument: intelligence is ill-defined and human/LLM intelligence being indistinguishable IS the basis for the power of these models.
But the point of the article is a distinct claim: personifying a model, expecting human or even human-like responses, is a bad idea. These models can't be held independently responsible for their answers because they are tools. They should be used as tools until they are powerful enough to be legally responsible for their actions and interactions.
But we're not there. These are tools. With tool limitations.
While the consumer will soon be irrelevant, I agree with the basic premise: neutered AI isn't helping.
At the same time, overrepresentation of evil concepts like 'Nazis are good!' or 'Slavery is the cheapest, most morally responsible use for stupid people' could lead to clear biases (à la Grok 4) that result in alignment issues.
The larger issue is that money is fundamentally a record of human effort (unless we're talking corporate value and then it's something a bit more).
With the automation of labor and cognitive effort, MONEY won't matter. The owners of the automation don't need customers. They only need the automation required to produce. And that automation will be broadly and cheaply available, all the way to the end, because people will be competing for disappearing jobs.
There is no precedent for this kind of change; think the Internet, computers, and the assembly line all packed into a 5-year window, globally. And consider that there's no apparent end to the level of development and impact. Using historical metrics (like customer base or resource availability) is not going to help us understand what's coming.
The real value in vibe coding does not come to developers who are already out at the bleeding edge of technology. Vibe coding's true value is for people who know very little about programming, who know just enough to be able to debug a type issue, or who have the time to read and research the issues outside of the general structure provided by LLMs. I've never created an Android app before. But I can do that in 24 hours now.
These tools are two years old, and they're already vastly superior to their first versions. As people continue to use them and provide feedback, these tools will keep improving, becoming better and better at giving customers (non-programmers) access to features, tools, and technologies that they would otherwise have to rely on a team of developers for.
Personally, I cannot afford the thousands of dollars per hour required to retain a team of top-shelf developers for some crazy harebrained Bluetooth automation for my house lighting scheme. I can, however, spend a weekend playing around with Claude (and ChatGPT and...). And I can get close enough. I don't need a production tool. I just need the software to do the little thing, the two seconds of work, that I don't want to do every single day.
Who's created a RAG pipeline? Not me! But I can walk through the BS necessary to get Postgres, FastAPI, and Llama 3 set up so that I can start automating email management.
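For a sense of scale, here's roughly what the retrieval half of that pipeline looks like once the model has walked you through setup. A minimal sketch assuming a local Postgres with the pgvector extension, an emails table with an embedding column, and an Ollama server hosting Llama 3; all the names here come from my own tinkering and are illustrative:

```python
# Minimal retrieval sketch: embed a query with a local Llama 3 (via Ollama)
# and pull the nearest emails out of Postgres/pgvector. Table and column
# names are assumptions from my own setup, not a standard.
import requests
import psycopg2

def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint; the model name is whatever you pulled.
    r = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "llama3", "prompt": text},
    )
    r.raise_for_status()
    return r.json()["embedding"]

def similar_emails(conn, query: str, k: int = 5):
    vec = embed(query)
    # pgvector expects its text format, e.g. "[0.1,0.2,...]"; the <->
    # operator orders rows by nearest-neighbor distance.
    literal = "[" + ",".join(str(x) for x in vec) + "]"
    with conn.cursor() as cur:
        cur.execute(
            "SELECT subject, body FROM emails "
            "ORDER BY embedding <-> %s::vector LIMIT %s",
            (literal, k),
        )
        return cur.fetchall()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=mail")  # your connection string here
    for subject, body in similar_emails(conn, "unpaid invoices"):
        print(subject)
```

Wrapping that in a FastAPI endpoint is the kind of boilerplate the model will happily generate once you can describe what you want.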
And that's the beauty: I don't have to know everything anymore! Nor spend months trying to parse all the specialized language surrounding the tools I'll need to implement. I just need to ask the questions I don't have answers for, making sure that I ask enough that the answers tie back into what I do know.
Good luck and have fun!