I don't see how it's unfair at all. This is novel technology with rough edges. Sharing novel techniques for working around those rough edges is normal. Eventually we can expect them to be built into the tools themselves (see MCP for how you might inject this kind of information automatically). A lot of development is iterative, and LLMs are no exception.
Sure, it's normal, but the reply was also a normal one because, as you said, it's a product with rough edges being advertised as if it were fully functional. And the comment I replied to, which criticized that reply, wasn't fair, because it was just a normal reply, imo.
I don't believe it is a normal reply on HN, or at least it shouldn't be.
We all know that this technology isn't magic. In tech spaces there are more people telling you it isn't magic than people telling you it is. The reminder does nothing. The contextual advice on how to tackle those issues does. Why even bother with that conversation? You can just take the advice or ignore it until the technology improves, since you've already made up your mind about the limit you or others should be willing to go to.
If it doesn't meet the standard of what you believe is advertised, then say that. Don't say "workarounds" are problematic because they obfuscate how someone should feel about how the product is advertised. Maybe you're an advertising purist and it bothers you, but why invalidate the person providing context on how to better use these tools in their current state?
I didn't say it's magic. I said what it is advertised as.
> The reminder does nothing. The contextual advice on how to tackle those issues does.
No, the contextual advice doesn't help, because it doesn't tackle the actual issue, which is "it doesn't work as advertised." We are in a thread on an article whose main thesis is "We're still far away from AI writing code autonomously for non-trivial tasks." Giving advice that doesn't get you to autonomously written code for non-trivial tasks doesn't help achieve that goal.
And if you want to talk about replies that do nothing: calling the guy a Luddite for saying the tip doesn't help him use the agent as an autonomous coder is a huge nothing.
> since you've already made up your mind about the limit you or others should be willing to go to.
Please read the article and understand what the conversation is about. We are talking about the limits that the article outlined, and the poster is saying how he also hit those limits.
> If it doesn't meet the standard of what you believe is advertised, then say that.
The article already says this. The commenter presumably assumed people here had read it.
> why invalidate the person providing context on how to better use these tools in their current state?
Because that context is a deflection from the main point of the comment and the conversation. It's like a thread of mechanics discussing how an automatic tire balancer doesn't work well, and someone chimes in with "Well, you could balance the tires manually!" How helpful is that?