
> Shall we therefore conclude that syntax highlighting is not useful, that developers who use syntax highlighting are just part of the IDE hype train, and that anecdotal reports of syntax highlighting being helpful are counterbalanced by anecdotal reports of $IDE having incorrect syntax highlighting on $Esoteric_file_format?

Yes. We should conclude that syntax highlighting is not useful in languages that the syntax highlighter does not support. I think basically everyone would agree with this statement.

Similarly, an LLM that worked 100% of the time and could solve any problem would be pretty useful. (Or at least one that worked correctly as often as syntax highlighting does in the situations where it's actually used.)

However, that's not the world we live in. It's a reasonable question whether LLMs are yet good enough that the productivity gained outweighs the productivity lost.



Your stance feels somewhat contradictory. A syntax highlighter is not useful in languages it does not support, therefore an LLM must be able to solve any problem to be useful?

The point I was trying to make is that an LLM is as reliably useful as syntax highlighting for the tasks coding LLMs are good at today. That set of tasks is not large, but it's enough to speed up junior devs. The problems come when people assume LLMs can solve any problem and use them on tasks they're not suited to. Much like applying syntax highlighting to an unsupported language, that doesn't work.

Like any tool, there's a learning curve. Once someone learns what does and does not work, it's generally a strict productivity boost.


The problem is that there are no tasks that LLMs are reliably good at. I believe that's what OP is getting at.

I fixed a production issue earlier this year that turned out to be a naive infinite loop: the code was trying to load all data from a paginated API endpoint, but there was no logic to advance the page number being fetched.

There was a test for it. Alas, the test didn't actually cover this scenario.
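
For illustration, here's a hypothetical Python sketch of the shape of that bug; the function, endpoint, and field names are invented, not the actual code:

    import requests

    # Hypothetical reconstruction of the broken pattern, not the real code.
    def fetch_all_items(base_url):
        """Load every record from a paginated endpoint."""
        items = []
        page = 1
        while True:
            resp = requests.get(base_url, params={"page": page})
            batch = resp.json().get("items", [])
            if not batch:
                break
            items.extend(batch)
            # Bug: `page` is never incremented, so every iteration
            # re-fetches page 1 and `batch` is never empty.
            # Fix: `page += 1` here.
        return items

A test stub that returns a batch on the first call and an empty list on the next lets this loop terminate and the test pass, even though against a real API the page parameter never advances, which is presumably how the accompanying test missed the scenario.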

I mention this because it was committed by a co-worker whose work is historically excellent, but who started using Copilot / ChatGPT. I'm pretty sure it was an LLM-generated function and test, and they were deeply broken.

Mostly they've been working great for this co-worker.

But not reliably.


I understand that; the point I'm making is that reliability is not a requirement for utility. One does not need to be reliable to be reliably useful :)

A very similar example is StackOverflow. If you copy/paste answers verbatim from SO, you will have problems: some top answers are deeply broken or have obvious bugs, and frequently an SO answer is merely related to your question without actually answering it.

SO is useful to the industry in the same way LLMs are.


Sure, there is a range. If it works 100% of the time, it's clearly useful. If it works 0% of the time, it clearly isn't.

LLMs are somewhere in the middle, and it's unclear which side of the line they're on. Some anecdotes say one thing, some say another. That's why studies would be great. It's also why syntax highlighting is a bad comparison: it isn't in the grey zone.



