Sorry for this, Simon. But just know that this non-newly-created hacker news account does not think you are a “vapid propagandist” and appreciates your content.
Hey Tom! Earnest question here - I am seeing on the order of one AI post a day on HN, sometimes more than that. It's good to know we can email in about these things, but I think most users don't understand that - certainly I didn't for the last few months as this has been going on. It would be nice if there was an affordance on the site to flag these, similar to the existing flag function.
This is just AI slop. The best tell is how much AI loves tables. Look at "The Hidden Costs Add Up", where it literally just repeats "1" in the second column and "7" in the third column. No human would ever write a table like that.
It can code in an autocomplete sense. In the serious sense, if we don’t distinguish between code and thought, it can’t.
Observe that modern coding agents rely heavily on heuristics. An LLM excels at its training datasets, at analyzing existing knowledge, but it can't generate new knowledge on the same scale; its thinking (a process of identification and integration) is severely limited on the conscious level (the context window), where being rational is most valuable.
Because it doesn’t have volition, it cannot choose to be logical and not irrational, it cannot commit to attaining the full non-contradictory awareness of reality. That’s why I said “never.”
I just (right before hopping on HN) finished up a session where an agent rewrote 3000 lines of custom tests. If you know of any "autocomplete" that can do something similar, let me know. Otherwise, I think saying LLMs are "autocomplete" doesn't make a lot of sense.
That’s neat, but it’s important to note that agentic systems aren’t composed of just the LLM. You have to take into account all the various tools the system has access to, as well as the agentic harness used to keep the LLM from going off the rails. And even with all this extra architecture, which AI firms have spent billions to perfect, the system is still just… fine. Not even as good as a junior SWE.
That’s impressive. I don’t object to the fact that they make humans phenomenally productive. But “they code and think” makes me cringe. Maybe I’m mistaking lexicon differences for philosophic battles.
Yes, I think it is probably a question of semantics. I imagine you don't really take issue with the "they code" part, so it's the "they think" thing that bothers you? But what would you call it if not "thinking"? "Reasoning"? Maybe there is no verb for it?
Some of that is true, sure, but nobody who claims LLMs can code and reason about problems is claiming that they operate like humans. Can you give concrete examples of actual specific coding tasks that LLMs can’t do and never will be able to do as a consequence of all that?
I think it can solve about any leetcode problem. I don’t think it can build an enterprise-grade system. It can be trained on an existing one, but these systems are not closed and no past knowledge seems to predict the future.
That’s not very specific but I don’t have another answer.
Unrelated, but it seems his previous company, 1x, was initially named Halodi and was located in Norway. And eventually, it was moved, with all employees, to Silicon Valley. How the hell does that work? That sounds like a logistical nightmare. Do you upend all those people's lives? Do you fire those who refuse? How many Norwegians even want to go to the US? Sounds crazy to me.
Did they actually move or is it just a "remote-first" company now?
(Or even just registered in SV but still physically in Norway?)
Edit: Seems like a mix of all of it:
> I joined Halodi Robotics in 2022 (prior name of the company) as the only California-based employee. At the time, we were about 40 based out of Norway and 2 in Texas.
And I am pretty sure every single one of those "billions of people" has had the experience of returning from the grocery store, only to realize they were actually out of eggs.
When I read the article, I feel the same emotions that I feel if someone were to tell me "I keep trying to ride a bike but I keep falling off". My experience with LLMs is that the "lack of thinking" is mostly a quick trough you fall into before you come out the other side understanding how to deal with LLMs better. And yes, there's nothing wrong with relating to someone's experience, but mostly I just want to tell that guy, just keep trying, it'll get better, and you'll be back to thinking hard if you keep at it.
But then OP says stuff like:
> I am not sure if there will ever be a time again when both needs can be met at once.
In my head that translates to "I don't think there will ever be a time again when I can actually ride my bike for more than 100 feet." At which point you probably start getting responses more like "I don't get it" because there's only so much empathy you can give someone before you start getting a little frustrated and being like "cmon it's not THAT bad, just keep trying, we've all been there".
> I keep trying to ride a bike but I keep falling off
I do not think this analogy is apt.
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do.
The article is lamenting the disappearance of something meaningful to the OP. One can feel sad for this alone. It is not an equation to balance: X is gone but Y is now available. The lament stands alone. As the OP indicates with his 'pragmatism', we now collectively have little choice about the use of AI. The flood waters do not ask; they take everyone in their path.
I think the disagreement is over what exactly will be taken away. Certainly, like any technology that came before, AI will automate something. A programmer who finds joy in the raw act of coding (thinking of how to solve a problem and crafting the resulting logic line by line) will indeed have something taken away by AI.
But there is a spectrum here. AI is a cruder, less fine-grained method of producing output. But it is a very powerful tool. Instead of "chiseling" the code line by line, it transforms relatively short prompts along with "context" into an imperfect, but much larger/fully formed product. The more you ask it to do in one go, usually the more imperfect it is. But the more precise your prompts, and the "better" your context, the more you can ask it to do while still hanging on to its "form" (always battling against the entropy of AI slop).
Incidentally, those "prompts" are the thinking. The point is to operate at the edge of LLM/machine competence. And as the LLMs become more capable, your vision can grow bigger.
I think if OP had said "I miss getting paid for (a particular type of) thinking hard" I would find it to be a lot more agreeable. But he's just saying he misses it in general. I think that's what I (and, from OP's summary, many other people) find confusing. Can't you still do it? AI is not physically preventing you from thinking hard.
It's certainly a different style of thinking hard. I used to really stress myself over coding - i.e. I would get frustrated that solving an issue would cause me to introduce some sort of hack or otherwise snowball into a huge refactor. Now I spend most of my time thinking about what cool new features I am going to build and not really stressing myself out too much.
Yeah, I feel like this is really the smoking gun. Because it's not actually deeper? An LLM running untrusted code is not some additional level of security violation above a plugin running untrusted code. I feel like the most annoying part of "It's not X, it's Y" is that agents often say "It's not X, it's (slightly rephrased X)", lol, but it takes like 30 seconds to work that out.
Actions? I generally judge people by what they do, not what they say - though of course I have to admit that saying things does fall under "doing something", if it's impactful.