
> Use the tools properly

> (from upthread) I was being sold a "self driving car" equivalent where you didn't even need a steering wheel for this thing, but I've slowly learned that I need to treat it like automatic cruise control with a little bit of lane switching.

This is, I think, the core of a lot of people's frustrations with the narrative around AI tooling. It gets hyped up as this magnificent wondrous miraculous _intelligence_ that works right-out-of-the-box; then when people use it and (correctly!) identify that that's not the case, they get told that it's their own fault for holding it wrong. So which is it - a miracle that "just works", or a tool that people need to learn to use correctly? You (impersonal "you", here, not you-`vidarh`) don't get to claim the former and then retreat to the latter. If this was just presented as a good useful tool to have in your toolbelt, without all the hype and marketing, I think a lot of folks (who've already been jaded by the scamminess of Web3 and NFTs and Crypto in recent memory) would be a lot less hostile.





How about:

1) Unbounded claims of miraculous intelligence don't come from people actually using it.

2) The LLMs really are a "miraculous intelligence that works right out-of-the-box" for simple cases of a very large class of problems that previously was not trivial (or possible) to solve with computers.

3) Once you move past simple cases, they require an increasing amount of expertise and hand-holding to get good results. Most of the "holding it wrong" responses happen around the limits of what current LLMs can reliably do.

4) But still, that they can do any of that at all is not far from a miraculous wonder in itself - and they keep getting better.


With the exception of 1) being "No True Scotsman"-ish, this is all very fair - and if the technology was presented with this kind of grounded and realistic evaluation, there'd be a lot less hostility (IMO)!

The problem with this argument is that it is usually not the same people making the different arguments.

That would be true if I were making a criticism of a specific person or class of people, but I'm not. I'm describing my observations about the frustration that AI-skeptics feel when they are bombarded with contradictory messages from (what they perceive as) "the pro-AI crowd". The fact that there are internal divisions within that group (between those making absurd claims, and those pointing out how correct tool use is important) does mean that the tool-advisers are being consistent and non-hypocritical, but _it doesn't lessen the frustration of the people hearing it_.

That is - I'm not saying that "tool advisers" are behaving badly, I'm observing why their (good!) advice is met with frustration (due to circumstances outside their control).

EDIT: OK, on rereading my previous comment, it does assume that the comments are being made by the same people (or group of people) - so your response makes sense as a defence against that. I think the observations about the sources of frustration are still accurate, but I do agree that "tool advisers" shouldn't be accused of inconsistency or hypocrisy when they're not the ones making outlandish claims.



