
AI is useless. Anyone claiming otherwise is dishonest


I use GenAI for text translation, text-to-speech, and speech-to-text; there it is extremely useful. For coding I often have the feeling it is useless, but sometimes it is useful, like most tools...


Exactly, it’s really weird to see all these people claiming these wonderful things about LLMs. Maybe it’s really just different levels of amazement, but I understand how LLMs work, and I actually use ChatGPT quite a bit for certain things (searching, asking about stuff I know it can find online, discussing ideas or questions I have, etc.).

But all the times I’ve tried using LLMs to help me code, it performs best when I give it a sample of code (more or less isolated) and ask it for a specific modification that I want.

More often than not, it makes seemingly random mistakes and I have to look at the details to catch anything I missed, so the smaller the scope, the better.

If I ask for something more complex or broader, it will almost certainly get many things completely wrong.

At some point, it’s so much work to detail exactly what you want, with all the context, that it’s better to just do it yourself, because you’re writing a wall of text for a one-time thing.

But anyway, I guess I’ll keep waiting. Waiting until FreeBSD catches up with Linux, because it should be easy, right? The code is right there in the Linux kernel; just tell an agent to port it to FreeBSD.

I’m waiting for the explosion of open source software that isn’t bloated and that runs optimized, because I guess agents should be able to optimize code? I’m waiting for my operating system to get better over time instead of worse.

Instead, I noticed that WhatsApp’s latest move was to kill the desktop app in favor of a single web wrapper. I guess maintaining different codebases didn’t get cheaper with the rise of LLMs? Who knows. Now Windows releases updates that break localhost. Since the rise of LLMs I haven’t seen software ship features any faster, nor any Cambrian explosion of open source software copying old commercial leaders.


What are you doing at your job that AI can't help with at all, for you to consider it completely useless?


Useless for what?


That could even be argued (with an honest interlocutor, which you clearly are not).

The usefulness of your comment, on the other hand, is beyond any discussion.

"Anyone who disagrees with me is dishonest" is some kindergarten level logic.


no


[Deleted as Hacker News is not for discussion of divergent opinions]


> It's not useless but it's not good for humanity as a whole.

Ridiculous statement. Is Google also not good for humanity as a whole? Is the Internet not good for humanity as a whole? Wikipedia?


I think it is an interesting thought experiment to try to visualize 2025 without the internet ever existing because we take it completely for granted that the internet has made life better.

It seems pretty clear to me that culture, politics and relationships are all objectively worse.

Even with remote work, I am not completely sure I am happier than when I used to go to the office. I know I am certainly not walking as much as I did when I would go to the office.

Amazon is vastly more efficient than any kind of shopping in the pre-internet days but I can remember shopping being far more fun. Going to a store and finding an item I didn't know I wanted because I didn't know it existed. That experience doesn't exist for me any longer.

Information retrieval has been made vastly more efficient, so instead of spending huge amounts of time at the library, I get all that time back as free time. But what I would have spent that free time doing before the internet has largely disappeared.

I think we want to take the internet for granted because the idea that the internet is a giant, long-term mistake is unthinkable to the point of almost having a blasphemous quality.

Childhood? Wealth inequality?

It is hard to see how AI as an extension of the internet makes any of this better.


Chlorofluorocarbons, microplastics, UX dark patterns, mass surveillance, planned obsolescence, fossil fuels, TikTok, ultra-processed food, antibiotic overuse in livestock, nuclear weapons.

It's a defensible claim, I think. Things that people want are not always good for humanity as a whole; therefore things can be useful and also not good for humanity as a whole.


You’re delusional if you think LLMs/AI are in the ballpark of these. I’ve listed things in my comment for a reason.


You're the one who made the comparison...?


There was some confusion. I originally read Wiseowise's comment as a failure to think of anything that could be "useful but bad for humanity". But given the follow-up response above, I assume they're actually saying that LLMs are similar to tools like the Internet or Wikipedia and therefore simply shouldn't be in the bad-for-humanity category.

Whether that's true or not, it is a different claim which doesn't fit the way I responded. It does fit the way Libidinalecon responded.



