Then it's a real bad case of using the LLM hammer and thinking everything is a nail. If you're truly using transformer inference to auto-fill variables when your LSP could do it with orders of magnitude less power and a 100% success rate (it has parsed the source tree and knows exactly which variables exist, etc.), I'd argue the LSP is the better tool.
Of course LLMs can do a lot more than variable autocomplete. But all of the examples given remove cognitive overhead that probably won't exist after a little practice doing the task yourself.
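To make the point concrete, here's a rough sketch (names made up, in Python) of what deterministic, parse-based completion looks like. This is roughly what an LSP does internally: walk the syntax tree, collect the names that actually exist, match a prefix. No inference, no guessing.

```python
# Hypothetical sketch of parse-based completion: every candidate comes from
# the actual syntax tree, so the result set is exact by construction.
import ast

def complete(source: str, prefix: str) -> list[str]:
    """Return all variable names assigned in `source` that start with `prefix`."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        # ast.Store context means the name is being assigned, i.e. it exists.
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            names.add(node.id)
    return sorted(n for n in names if n.startswith(prefix))

src = "user_name = 'alice'\nuser_id = 42\ntotal = user_id + 1\n"
print(complete(src, "user_"))  # ['user_id', 'user_name']
```

A real LSP adds scoping, imports, and type information on top, but the principle is the same: the answer is derived from the parse, not predicted.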
This. Set up your dev env and pay attention to details and get it right. Introducing probabilistic codegen before doing that is asking for trouble before you even really get started accruing tech debt.
You say "probabilistic" as if it's some kind of gotcha. The binary rigidity is merely an illusion that computers put up. At every layer, there are probabilistic events going on:
- Your hot-path functions get optimized probabilistically.
- Your requests to a webserver are probabilistic, and most systems have retries built in.
- Heck, 1s and 0s operate in a range, with error bars built in. It isn't really 5V = 1 and 0V = 0.
Just because YOU don't deal with probabilistic events while programming in Rust or Python doesn't mean it is inherently bad. Embrace it.
We're comparing this to an LSP or IntelliSense-type system; how exactly are those probabilistic? Maybe they crash or leak memory every once in a while, but that's true of any software, including an inference engine. I'm much more worried about the fact that if I type half of a variable name, I can't guarantee it'll know exactly what I'm trying to type. It would be like preparing to delete a line in vim and having it predict you want to delete the next three. Even if it's right 90% of the time, you have to verify its output. It's nothing like a compiler, spurious network errors, etc. (which still exist even with another layer of LLM on top).
> Just because YOU dont deal with probabilistic events while programming in ...
Runtime events such as the ones you enumerate are unrelated to the "probabilistic codegen" the GP referenced: "codegen" is short for "code generation", and in this context it names an implementation-time activity.
The scheduler that puts your program on a CPU works probabilistically. There are no rigid guarantees of workloads in Linux; those only exist in real-time operating systems.
> The scheduler that puts your program on a CPU works probabilistically. There are no rigid guarantees of workloads in Linux; those only exist in real-time operating systems.
Again, the post to which you originally replied was about code generation when authoring solution source code.
This has nothing to do with Linux, Linux process scheduling, RTOS[0], or any other runtime concern, be it operating system or otherwise.
Apples and oranges.
It's frankly nonsense to tell me to "embrace it" as a strawman rebuttal to a broader concept I never said was inherently bad, or even avoidable. I was talking specifically about non-deterministic code generation during the implementation/authoring phase.
> This. Set up your dev env and pay attention to details and get it right. Introducing function declarations before knowing what assembly instructions you need to generate is asking for trouble before you even really get started accruing tech debt.
Old heads cling to their tools and yell at kids walking on lawns, completely unaware that the world already changed right under their noses.
We know the "world has changed": that's why we're yelling. The Luddites yelled when factories started churning out cheap fabric that'd barely last 10 years, turning what was once a purchase into a subscription. The villagers of Capel Celyn yelled when their homes were flooded to provide a reservoir for the Liverpool Corporation – a reservoir used for drinking water, in which human corpses lie.
This change is good for some people, but it isn't good for us – and I suspect the problems we're raising the alarm about also affect you.
Honestly, I've used a fully set-up Neovim for the past few years, and I recently tried Zed and its "edit prediction," which predicts what you're going to modify next. I was surprised by how nice that felt: instead of remembering the correct keys to surround a word or line with quotes, I could just type either quotation mark, and the edit prediction would instantly suggest pressing Tab to jump to the location for the other quote and add it. And it wasn't only surrounding quotes; it worked for everything similar, with the same keys and workflow.
Still prefer my neovim, but it really made me realize how much cognitive load all the keyboard shortcuts and other features add, even if they feel like muscle memory at this point.
> Then it's a real bad case of using the LLM hammer thinking everything is a nail. If you're truly using transformer inference to auto fill variables when your LSP could do that with orders of magnitude less power usage, 100% success rate (given it's parsed the source tree and knows exactly what variables exist, etc), I'd argue that that tool is better.
I think you're clinging to low-level thinking, whereas today you have tools at your disposal that let you focus on higher-level details while eliminating repetitive work, say, the shotgun surgery of adding individual log statements to a chain of function calls.
> Of course LLMs can do a lot more than variable autocomplete.
Yes, they can.
Managing log calls is just one of them. LLMs are a tool you can use in many, many applications. And they're faster and more efficient than LSPs at accomplishing higher-level tasks such as "add logs to this method / to the methods in this class/module". Why would anyone avoid using something that is just there?
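For what it's worth, the specific "add logs to every function in a chain" chore also has a deterministic shape; the names below (`traced`, `fetch`, `parse`) are made up for illustration, not from this thread:

```python
# Hypothetical sketch: one decorator adds entry/exit logging to any function,
# instead of hand-editing (or LLM-editing) every call site individually.
import functools
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.debug("enter %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        result = fn(*args, **kwargs)
        log.debug("exit %s -> %r", fn.__name__, result)
        return result
    return wrapper

@traced
def fetch(url):
    return {"url": url}

@traced
def parse(payload):
    return payload["url"].upper()

parse(fetch("https://example.com"))
```

Whether you have an LLM write the decorator or type it yourself, the logging itself stays mechanical and repeatable, which is part of what the other side of this argument is after.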
I have seen people suggesting that it's OK that our codebase doesn't support deterministically auto-adding the import statement of a newly-referenced class "because AI can predict it".
I mean, sure, yes, it can. But drastically less efficiently, and with the possibility of errors. Where the problem is easily soluble, why not pick the solution that's just...right?
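The deterministic version of auto-import is not exotic; here's a toy sketch (module and class names invented for the example) of the symbol index most IDEs build:

```python
# Hypothetical sketch: index where each class is defined, then emit the exact
# import statement. No prediction involved; the answer comes from the parse.
import ast

def index_classes(modules: dict[str, str]) -> dict[str, str]:
    """Map class name -> defining module, from parsed source."""
    where = {}
    for module, source in modules.items():
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.ClassDef):
                where[node.name] = module
    return where

def import_for(name: str, index: dict[str, str]) -> str:
    return f"from {index[name]} import {name}"

project = {
    "app.models": "class User:\n    pass\n",
    "app.billing": "class Invoice:\n    pass\n",
}
index = index_classes(project)
print(import_for("Invoice", index))  # from app.billing import Invoice
```

Ambiguity (two modules defining the same name) is the only case that needs a human or a heuristic; everything else is a lookup.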
Some tools are better than others at specific things. AI as commonly understood today is better at fuzzy problems than many other tools. In the case of programming, and being able to tab-complete your way through symbols, you'll benefit greatly from tools that can precisely parse the AST and understand schemas. There is no guesswork when you can be exact. Using AI assistants for simple tab completion only opens the door to a class of mistakes we've been able to avoid for years.
I don't want to wade into the debate here, but by "their tools" GP probably meant their existing tools (i.e. before adding a new tool), and by "a fuzzy problem solver" was referring to an "AI model".
The user, in fact, has set up a tool for the task: an "AI model". Unless you're saying one tool is better than others.