Hacker News | daliusd's comments

In some cases, that’s true, but sometimes you need to update cutting rules because of law changes, or because you saw a different way of cutting, for example. There are cases where this is not a one-time investment. What I agree with is that cutting it yourself became significantly cheaper.

I wanted to try this locally as well so I have asked AI to write CLI for me: https://github.com/daliusd/qtts

There are some samples. If you have a GPU you might want to fork and improve this; otherwise it is slow, but usable on CPU as well.


How much is “much more”? It is still a number.

Claude is great, but you still need a developer to reap the benefits. And it is not magic: I see a productivity increase of about 30% in my team.

Looks like default OpenCode / Claude Code behavior with Claude models. Why the extra prompt?

Good question!

1. The post was written before this was common :)

2. If using Cursor (as I usually am), this isn't what it always does by default, though you can invoke something like it using "plan" mode. Its default is to keep todo items in a nice little todo list, but that isn't the same thing as a spec.

3. I've found that Claude Code doesn't always do this, for reasons unknown to me.

4. The prompt is completely fungible! It's really just an example of the idea.


I am working on invoicing web app https://www.haiku.lt . Currently focusing on marketing and EU e-invoicing part.

It will be shoved into your life anyway. Whether you like it or not, the only safe choice is to learn and understand it, IMHO.

About usage: it looks like web development gets the benefits here, but other areas are somehow not as successful. Meanwhile I use it successfully for Neovim Lua plugin development, CLI apps (in JS), and shell development (WezTerm Lua + fish shell). So I don't know if:

a) it simply has clicked for me and it will click for everyone who invests into it;

b) it is not for everybody because of tech;

c) it is not for everybody because of mindset.


There is Browser MCP that works reasonably well: https://browsermcp.io/

What's the difference?


So does autocomplete. Why not treat LLM as next autocomplete iteration?


LLMs are generative and do not have a fixed output in the way past autocompletes have. I know when I accept "intellisense" or whatever editor tools are provided to me, it's using a known-set of completions that are valid. LLMs often hallucinate and you have to double-check everything they output.


I don't know what autocomplete you're using, but mine often suggests outright invalid words given the context. I work around this by simply not accepting them.


The high failure rate of LLM-based autocompletes has made me avoid those kinds of features altogether, as they waste my time and break my focus by making me double-check someone else's work. I was efficient before they were forced into every facet of our lives three years ago, and I'll be just as efficient now.


Personally, I configure autocomplete so that LSP completions rank higher than LLM completions. I like it because it starts with known/accurate completions and then gracefully degrades to hallucinations.
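For instance, in Neovim that ranking can be expressed through nvim-cmp source priorities (a minimal sketch, assuming the copilot-cmp plugin provides the LLM source; plugin names here are illustrative):

```lua
-- Rank LSP completions above LLM completions in nvim-cmp.
local cmp = require("cmp")

cmp.setup({
  sources = cmp.config.sources({
    -- Known-valid completions from the language server come first.
    { name = "nvim_lsp", priority = 100 },
    -- LLM suggestions (via copilot-cmp) rank lower, as a fallback.
    { name = "copilot", priority = 50 },
  }),
})
```

With this setup the menu leads with LSP entries and only falls back to generated suggestions, matching the "accurate first, hallucinations last" ordering described above.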


Because they are not. Autocomplete completes only the thing you already thought: you solve the problem, the machine types. Mechanical.

LLMs define paths, propose ideas, choose routes, analyze, and so on. They don't just autocomplete. They create the entire poem.


Sometimes. Usually the LLM does exactly what I ask it. It's not like there are a million ways; usually there are 4-10.


Who'd want an autocomplete that randomly invents words and spellings while presenting them as real? It's annoying enough when autocomplete screws up every other ducking message I send by choosing actual words inappropriately. I don't need one that produces convincing looking word salad by shoving in lies too.


I wonder why people have such completely different experiences with LLMs.


You could build one like that, but most implementations I've seen cross the line for me.

Hard to define but feels similar to the "I know it when I see it" or "if it walks like a duck and quacks like a duck" definitions.


Autocomplete annoys me, derails my train of thought, and slows me down. I'm happy that nobody forces me to use it. Likewise, I would greatly resent being forced to use LLMs.


Completely different context though - you have to feed through your own data for autocomplete and even then it’s based on your own voice as a writer. When you no longer have to write - nor think about those things you’re writing - then your voice and millions of others will be drowned out by LLM trash.


What’s your take on the fact that everyone around gets this boost? I feel the same boost, but in our company we held a small competition using LLMs: the team I was leading won, but the victory was not decisive, quite minimal.

