Guys, it's not meant to be the keystone of your production servers. I get your points about infra and LLMs, but this is not the place for them. Surely there are more deserving targets for your 'anti-AI acrimony'?? Hahah! :)
More pointedly, however, the comments here decrying AI tooling as suspicious, hallucination-riddled output suggest that this ire is really about fear: fear that AI, LLMs, and ChatGPT are obviating the need for people with their particular expertise, and a desperate wish to present otherwise. And so they criticize it profusely, even irrationally. A truly future-proof take would be to embrace the trend and see how it enhances, rather than erodes, their prospects.
But back to the point at hand -- it says very clearly at the top of the comment that it's a first draft. In fact, I spent a little time honing it with a prompt. It says it's untested and suggests ways to improve it. Hahaha! :)
A good sysadmin would focus on suggesting ways it could be improved, and recognize it for its convenience -- a goal they share. They'd likely see clearly that it isn't claiming to be a substitute either for using the tools another way or for learning what they are, but rather that it can very much be an aid in learning and using them.
To instead let personal, and perhaps mistaken, biases against LLMs cloud your productivity or your use of such things seems unwise. There's nothing wrong with having your personal opinions, but failing to see the other ways something could be useful beyond them is, I imagine, a mistake.
Bigger picture, now: what do you imagine the purpose of this script was?
You can always look for ineptitude and error. To some extent that attitude might even underlie an admirable caution, and could be indicative of expertise -- even if clumsily expressed. But the same attitude could also underpin a not-so-admirable assumption of stupidity on the part of others, blindness to approaches outside one's own experience, or a failure to communicate respectfully. It would be hard to argue those are wise traits.
I get your disdain for what you see as the flood of low-quality LLM output putting everything at risk, but valid as that opinion may be, this particular thread is not the best target for it. You can try to make the thread about that, but why? Then you're just misusing someone else's words as a vehicle for venting your own gripes, right? If you want to vent, write a "Tell HN:" or a blog post, not a reply to someone else's comment that is completely unrelated to your angst.
In other words, it's possible to express that opinion without taking aim at something it doesn't apply to. There's no need to misuse someone else's comment as a soapbox for your own gripes.
I get that it may have seemed wise, but it wasn't.
There's also the question of balance. The criticisms of the script comment erode their own credibility by failing to note anything good in what they're replying to, instead waxing verbose about why they are "right" and "correct" -- suggesting that being right is the primary aim of such comments.
Yet there are no absolutes here, unless you mistakenly make your definition so narrow that it becomes meaningless. If you're being honest, there are a multitude of ways to do things "right", and a multitude of approaches to, uses for, and improvements of the script I proposed in my comment, and of approaches like it.
A wise interpretation would see clearly that such a script aims to be a collection of useful tools, in the same way that many Unix tools are collections of useful, related functions.
If you'd like to use my comment as a generalized jumping-off point for your own gripes about LLMs, or for a need to criticize, it would be better to find a more appropriate target, so as not to come across as an abusive, overly critical bully -- which I'm sure you're actually not.
Your comment comes across as if you misread mine as suggesting you provision your entire infrastructure hinging on the correctness of an HN comment, and then uses that unhinged assumption as the basis for criticizing it as something it never intended, nor claimed, to be. Hahaha! :)
Again, I think the semi-hysterical hyperbole of the responses speaks to the 'fear of replacement' that must be gripping their ranks. There must be a perception that employers believe it, and these people fear it. That sucks, but it's better to respond more rationally, not less. So your stated skepticism and criticisms would come across as more warranted if they were more precise and balanced.
Better yet, as to commenting... rather than imposing your view that this is "harmful", find ways that it's not, or ways to make it better. Or just, you know, appreciate that there are multiple ways to the same goal, that everyone can get there differently, and that this doesn't make you "right" and them "wrong".
Failing to see how my comment and script could be good or useful, and instead projecting one's own insecurities or generalized sentiments about current ChatGPT onto them, could also be considered boring...
So... I suspect the pearl-clutching is unwarranted, and it's a false equivalence to equate the use of ChatGPT with your presumption of technical ineptitude. True technical ineptitude could just as well include refusing to embrace new technologies, or taking an overly limited view of what people say. Or even an overly narrow view of your own prospects for the future, given the introduction of these disruptive technologies! Hahaha! :)
We simply shared what we thought of your comment, and I personally tried to do it in the most polite way possible. Of course, we are essentially telling you that we find your comment useless and why we think so, and I can understand that you don't enjoy that. But we are not imposing anything -- what makes you think that?
Sure, you warned that this is a draft that needs work -- we noticed -- but why share it? Do you have ideas about directions it could be taken in? You ask what we thought your script was useful for, but that is indeed the question. As is, your comment feels low-effort. I don't want to make you justify to us why you think your comment was useful -- people are free to comment on HN without justification -- but that's clearly what we're missing. You're writing a lot of words focused on us detractors as people, but what about the actual content and arguments?
> So your stated skepticism and criticisms would arrive more warranted if they were more precise and balanced
Sorry, but I aim to be precise and deep in my thinking; I'm not aiming to be balanced. I sometimes have opinions that are clear and strong, and I'm happy to change my mind given good arguments, but I don't seek balance. I don't know why I should. I seek documented and educated, not watered down.
Now, about using AI myself: I don't particularly feel the need, but in any case, I'll consider using LLMs more seriously when they are open source and careful about how they source their data -- the quality of the input, and whether people agree to have their work used as training data. I also have issues with the amount of energy they require to run. ChatGPT is too ethically wrong, from my point of view, for me to consider using it. But that's beside the point, and my opinion on this played no role in my comments.
And I don't feel insecure. I'm all right really.
You're blaming us, but your comment was flagged to death. We're not the ones who were flagged (and I didn't flag you, to be clear). We're also your (only) clues as to why this happened. I would suggest some humility. Really, take the hint.
And to be clear, I don't have disdain for you, and I don't assume stupidity. That's not how I work; I would look down on myself if I did. I'm sorry if I made you think this, but let me assure you it is not the case.
This last comment of mine is harsh, but you need to take into account that I just read yours, which isn't really nice to us. Maybe let's tone it down a bit now.
Sorry for the belated reply; I didn't read your comment until just five minutes ago. I avoided it, knowing it would be toxic, and I had more important things to do. But now I have some free time, so let's deal with you, sir.
"We"? You only speak for you, right? You cannot assume consensus in unknown random internet others, or else you also must presume consensus with my ideas, too?
The idea of "useless" is of course an imposition. And abusive. I clearly find it useful, so to claim useless is to devalue my perspective. Do you not see that? Or you think it justified? Neither is acceptable if you aim, as you say, for 'politeness'. Nor even for good sense.
So I think you don't actually aim to be polite, but merely pretend to. Hahaha! :)
What about the content and arguments? There's nothing of that from you, because you don't acknowledge other perspectives. So it all comes down, necessarily, to you as people.
But you can't be deep without being balanced; otherwise you can only be narrow-minded. Which you are succeeding at, though you think that's a victory. It isn't: balance is required for real depth, because only in appreciating the breadth can your depth resonate, by linking with what else is real. Otherwise it is, necessarily, unhinged. As yours seems to be, sorry to say! Hahahaha :)
Your pretense of ethics around the use of AI tools is belied by your "low ethics" attitude toward commentary. How are we to find that convincing, if you are not a moral actor in the first place?
Flagging only requires a few people. If you require the consolation of a chorus of voices to lift your own, I understand. But that undercuts your message of depth, does it not, sir? :)
> I'm sorry if I made you think this, but let me assure you this is not the case.
You know you can only be sorry for your own choices and actions, right? Not for whatever you assume someone else feels, yes? You cannot "make" me feel a certain way. My feelings are my responsibility, not yours. So a better way, one that respects the boundaries of individuals (I understand if you have trouble with that, but take heed, and learn!), is to say "I'm sorry for <insert your action>" if you do feel you have something to be sorry for.
Overall, your comment comes across just about exactly as I thought it would, given your previous ones. As for humility, well, perhaps you have a thing or two to learn, indeed. But even that may be too much to ask of you. I suggest instead that you first take a course in empathy, and then one in self-awareness. Then perhaps you'll be equipped to appreciate humility.
Good luck, sir. And have a pleasant week! Hahaha! :)
Your comment brought me the entertainment I needed at this minute. I am grateful. So here's my gift to you, youngin: I think you're just playing at this role of provocateur -- you can do much better -- but you haven't figured that out yet (and you know it), and that's your weakness.
So work out what you really want to do, and then talk to others about 'standards'. Hahahahahahaha! :)