Code is a liability, not an asset. It is a necessary evil to create functional software.

Senior devs know this, and factor code down to the minimum necessary.

Junior devs and LLMs think that writing code is the point and will generate lots of it without worrying about things like leverage, levels of abstraction, future extensibility, etc.



LLMs can be prompted to write code that considers these things.

You can write good code or bad code with LLMs.


The code itself, whether good or bad, is a liability. Just like a car is a liability: in a perfect world you'd teleport yourself to your destination, but instead you have to drive. And because of that, roads and gas stations have to be built, you have to take care of the car, and so on. It's all a huge pain. The code you write, you will have to document, maintain, extend, refactor, relearn, and a bunch of other activities. So you do your best to have only the bare minimum to take care of. Anything else is just future trouble.


Sure, I don’t dispute any of that. But it's not a given that using LLMs means you’re going to have unnecessary code. They can even help to reduce the amount of code. You just have to be detailed in your prompting about what you do and don’t want, and work through multiple iterations until the result is good.

Of course, if you try to one-shot something complex with a single-line prompt, the result will be bad. This is why humans are still needed, and will be for a long time imo.


I'm not sure that's true. An LLM can code because it is trained on existing code.

Empirically, LLMs work best at coding when doing completely "routine" tasks: CRUD apps, React components, etc., because there are lots of examples of those online.
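
By "routine" I mean something like this minimal sketch (Express and the endpoint are purely illustrative, not from any real project); the training data is full of near-identical code:

    import express from "express";
    import { randomUUID } from "node:crypto";

    const app = express();
    app.use(express.json());

    // In-memory store, just to keep the sketch self-contained.
    const users = new Map<string, { id: string; name: string }>();

    // create
    app.post("/users", (req, res) => {
      const user = { id: randomUUID(), name: req.body.name };
      users.set(user.id, user);
      res.status(201).json(user);
    });

    // read
    app.get("/users/:id", (req, res) => {
      const user = users.get(req.params.id);
      if (!user) {
        res.status(404).end();
        return;
      }
      res.json(user);
    });

    app.listen(3000);

An LLM will produce this sort of thing nearly verbatim on the first try.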

I'm writing a data-driven query compiler and LLM code assistance fails hard, in both blatant and subtle ways. There just isn't enough training data.
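
Purely as a hypothetical illustration (this is not my actual code, and all the names are invented), the subtle failures show up in invariant-heavy passes like this one, where correctness hinges on a rule with few public examples:

    // An expression tree for a toy query language.
    type Expr =
      | { kind: "col"; name: string }
      | { kind: "lit"; value: unknown }
      | { kind: "and"; left: Expr; right: Expr }
      | { kind: "eq"; left: Expr; right: Expr };

    // A filter may be pushed below a projection only if every column it
    // references survives the projection. In my experience, LLM suggestions
    // tend to drop or invert checks like this, because the pattern is rare
    // in public code.
    function canPushDown(filter: Expr, projected: Set<string>): boolean {
      switch (filter.kind) {
        case "col":
          return projected.has(filter.name);
        case "lit":
          return true;
        case "and":
        case "eq":
          return canPushDown(filter.left, projected) &&
                 canPushDown(filter.right, projected);
      }
    }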

Another argument: if an LLM could function like a senior dev, it could learn to program in a new programming language given the language's syntax, docs, and API. In practice they cannot. It doesn't matter what you put into the context; LLMs just seem incapable of writing in niche languages.

Which to me says that, at least for now, their capabilities are based more on pattern identification and repetition than they are on reasoning.


Have you tried new or niche languages with Claude 3.5 Sonnet? I think if you give it docs with enough examples, it might do OK. Examples are crucial. I’ve seen it do well with CLI flags and arguments when given docs, which is a somewhat similar challenge.

That said, you’re right of course that it will do better when there’s more training data.



