Ask HN: Am I the only one here who can't stand HN's AI obsession?
140 points by ciconia 7 months ago | 118 comments
I can't really explain why, but I find the recent AI developments, articles and news stories totally boring and lame. I can understand why people get excited with generative AI that can transform a text into an image etc, but otherwise the benefits of so called AI are completely lost on me, and all those AI articles on HN are just noise to me.

I don't need you to convince me I'm wrong. I just want to know if there are other people here that feel the same way. Thank you for commenting thoughtfully!



There are plenty of us. Before AI, it was blockchain and web3. Before that, it was virtual reality. Before that, it was 3D printing. Before that... you get the idea. HN has always reflected the latest hype train. It's always been that way.


The way people talked about Blockchain/Web3 in particular was very similar to how a lot of people talk about AI now. It's going to take over the world any day now, we just need to fix [fundamental issue]. Get on board right now or you'll be left behind, especially if you're in a creative field. The business model is definitely-maybe-kinda legal but we just need to bend all governments to our will and then it'll be fine. It may use as much energy as a small country but trust us it's totally worth it.


The tech itself works, unlike blockchain, but the narratives are way overblown. I know a few small companies that have legitimately let people go without even having a proper replacement.

The leadership then hyped up Devin or whatever, only to find the whole workplace in chaos after a few months. All this was done to attract VC money, and now they're hiring for the same position again.


What about blockchain tech doesn't work? It's a far tamer technical problem than anything in GenAI.


Usability. The use case of GenAI is pretty clear at this point. With blockchain, the big question was, "Okay, we have this cool tech, but what do we do with it now?" If you have to think about what problem a tool solves, it's not a good tool at all.


To some extent, usability.

But since the target audience was always technical (although gullible) people, the incentive to make it useful beyond coin purchases never manifested.

But more so: scalability.

The blockchains that are worth anything don’t scale to the level of total worldwide transactions per second.

I’ve heard “But Lightning!” — sure, so before I can send any money, I have to find a fellow to relay my transactions against.


I mean, Our Lord and Saviour The Almighty Blockchain (and its offshoot technologies, like DAOs) worked, for some value of worked; it just wasn't particularly useful. I'd put LLMs largely in the same bucket for now.

> I know a few small companies that have legitimately let people go without even having a proper replacement.

In terms of collateral damage, blockchain nonsense is probably still winning, at least for now, though its victims were often at least somewhat complicit in their own destruction.


I wouldn't be as harsh on LLMs as I was on blockchain. Blockchain didn't solve a single problem I had or understood. In contrast, LLMs already solve a few boring and annoying problems for me, which I find quite valuable.


No, people superficially said the same things. Except if you know anything at all about the underlying tech, or about business, you immediately see the difference. This is why so many people disliked and totally ignored the blockchain hype (like me), but are very bought into AI (also like me).

One glaring example of the difference - you're making an analogy by saying that people were saying "get on board web3 or you'll be left behind". I'm not sure I heard people say it quite this way, but even if they did - what was the causal mechanism they claimed would "leave you behind" exactly? Why would the average user care? Not at all clear.

Whereas, correct or not, there's a very obvious reason to think adopting AI is critical - in SE, for example, it promises drastic changes to how people work and increases in productivity - it's pretty clear why not adopting it is a bad idea.


Lol, in the Web3 case, "take over the world" was only a metaphor. This is a big difference for AI.


Eh, I think that's just the state of contemporary "influencer"-heavy discourse. The similarity in how people talk about the two fields has more to do with the people doing the talking (they're the same people) than it does with similarities between the fields themselves.

While LLMs are still new, ML itself is by no means a new or unproven technology. Recommendation engines built on ML are, for better or worse, the backbone of the average person's online experience. Fraud and spam detection, built on ML, are baked into most of our digital utilities (email, payments, etc.) How many apps do you use that involve ETA predictions? Because those are all built on ML. Image recognition, speech-to-text, even the camera on most smartphones--it all relies on ML.

Even looking purely at LLMs, they've reached a point of adoption that blockchain never did. ChatGPT alone reports over 200 million users per week. Google search, possibly the most ubiquitous product in the history of the internet, uses LLMs as a standard part of search.

You may have criticisms of the technology, you may think it is overhyped or that companies are making a mistake by leaning into it, but it's difficult to rationally dismiss it as a purely speculative hype bubble when its usage is so widespread.


I love how Trump put Crypto and AI in the same department under a VC vulture Czar. At least he understood that they are meant to be in the same department.


At least with VR and 3D printing there was a sense of wonder and whimsy about it: that we were living in magical times.

There always has been and always will be a "Vulture Culture" on HN that rubs my FSF sensibilities somewhat the wrong way from time to time, but with this latest AI and crypto bubble it really feels like that shared sense of wonderment is being greatly usurped by the vulture crowd to a toxifying degree.

They don't want to grow money in 10 years. They want to make money now. It really hasn't always been this bad.


I agree, but I find that useful. There's a lot of info, developments, and opinions that I can go through to inform and update my view on the topic. It was on HN that I was able to very quickly realize web3 was a scam.

It also taught me that if enough people believe cryptocurrency is a way to invest money, it becomes as legitimate a way to invest money as any other. It also informed me of the nuances: how naive ideology, even if misplaced and bad at shaping the future, and money from illegal activities can become a legitimate investment option and enter the mainstream, where most of its past can be ignored by current cryptocurrency owners. Anyway, right or wrong, that is my current view on it, and it was informed by what I chose to read about it on HN.

Even the current flood of AI content I find useful: it helps me gradually understand how (and whether) to use it, brings attention to ethical dilemmas, and clarifies what it's useful for and what it's not.

And since I can choose what I read very easily on HN, the hype-driven subjects of HN submissions don't bother me at all.


Oh, I like LLMs and use them all the time. But that doesn't change the fact that CEOs are lying to get more VC money. No one is replacing 30% of their workforce with AI. This whole narrative is spun to attract fad investments. I don’t like this part.

But I guess it’s just like programming languages: either people rant about them or no one uses them.


Thanks for that! That gives me more hope that this hype train is going to die down like the others did.


The way I see it is that HN usually follows the VC money, and since there is a *lot* of money being invested in anything with the tag "AI" nowadays, there is a lot of interest in the topic.


This is the actual answer. The staged-up "we replaced 30% of our employees with AI" narrative is concocted for two reasons:

- To garner VC money

- To drive down wages and make employees more fungible

It doesn’t matter if it’s true or not. As long as enough people believe it, that’ll become the new baseline.


Also the "programming language of the week".

"<a well established C/C++ program, existing for many years> written in ruby" (or whatever the language was that week, so replace that with go, rust, etc.)


Take me back to the ancient past where we discussed the 1000th postmortem of python 2->3 and the truly bleeding edge users argued about whether this interesting new language should be referred to as ‘go’ or ‘golang’. Oh and the occasional Flutter or Elixir development update. And the weirdo technical shamans debating functional programming patterns. When was the last time someone recommended “Learn You…”?

Besides the exaggeration of the first sentence, I’m actually deadly serious about the rest.


When was the last time we all had an honest to goodness editor war? It doesn’t even have to be text editors, let’s rhetorically fistfight about IntelliJ versus Visual Studio. Other nostalgic prompts:

- For competent power users, tiling window managers are better than desktop metaphor WMs

- Is Java finally dead?

- Is my manager a bitchass for not promoting me to the senior role (SF/GOOG/M/24)

- If you don’t use the trackpoint on your Thinkpad your mom’s a hoe

Good morning to all.


"Weirdo technical shamans"—lmao, I’m borrowing this. The whole FP community is like that. It’s all fun and games until you ask, "But what did you build with it?"


Don't forget another stuck-in-the-past, minimal, semantic-CSS-based web framework that's almost unusable on anything more complex than its own demo. Complaints about React too, because it's new (it's 13 years old).


I feel it's gotten worse over time: with crypto came a new sort of grifter who pushed their grift far more aggressively, with nearly religious overtones. Some of those people seem to have abandoned ship and moved to AI instead, largely taking the playbook with them.

With short-term, flash-in-the-pan hype cycles like that superconductor one, prediction-market astroturfing also seems to be involved. Basically, the more hype you drum up, the more people you can lure into betting on room-temperature superconductors, and the more money you stand to win when it turns out to be nothing of the sort.


I think the less wrong way of thinking about it is that new things do emerge, and HN is fundamentally about thinking about what those might be in the realm of technology.

Just like yc, most things will not work out. Some will. You can read about all of them here. The noise around the thing is one of many signals.


It would be interesting to see when these trends started and how long they lasted, to see if the current trend is any different.


'AI' is kind of the perennial tech fad. The first 'AI winter' (period of diminished interest in 'AI') is usually reckoned to have started in _1966_. It is, in some sense, the _default_ state of the tech industry to be overly excited about AI; the periods when it is _not_ are the exceptions. The current one is unusually extreme, mind you.


In fairness, if you imagine the current tech landscape with AI removed it isn’t exactly a wellspring of optimism or good ideas.


it seems to mostly be heated debates on whether servers can handle rendering entire webpages, or whether that's ridiculous and we need to use react.


That or yet another k8s homelab experiment to deploy something rewritten in rust.

We seem to be stuck in a tarpit of reinventing tactics, but there hasn’t been a major strategic shift for over a decade.

I am part of the problem, posting links to the obsessive recreation (not by me) of old arcade machines.


Oh, I’m glad this Rust fad has died down. Rust is great, but it’s not replacing C or C++. The most annoying part of this fanaticism, though, was that people were legitimately proposing Rust as an alternative to Go or Python. Those two solve completely different sets of problems.

Also, there are startups that failed because the founders decided to ditch Go, Python, or JavaScript for Rust. Now, they can’t hire people, and no one wants to deal with the mess.


Arguably that's part of the problem; the industry almost _requires_ there to be a new thing at all times, to justify lavish forward valuations. In the absence of a concrete New Thing, _something_ is still required, and LLMs, for now, fit the bill.


but not paying attention to the ramifications is troublesome. for example, web3 was exclusively a scam, but it shifted power and economies because it was entrenched in high places (mostly oligarchs evading sanctions and with that, politicians).

now you may have ignored it and not lost money like 90% who joined the hype... but now you have a president bound by support of that very money and they already made promises to clear the way even further for those people.

likewise with AI, even if you ignore it, there are proxy wars happening with little more reason than to showcase the viability of autonomous robots in trench warfare for the next round of arms sales. https://www.youtube.com/watch?v=YrrXNZyoc8k


Look, I won't try to convince you that you're wrong. But if you come to a website with thousands of active users and ask: "Do you agree with me? Don't reply if not."

What do you expect? Might as well ask an AI to generate that text, same level of information you'll be getting.


btw I'm not trying to defend LLMs here, I'd make the same comment if you flipped your question.


Yeah, the "Do you agree with me, don't reply if not" framing seems a bit unsporting on a discussion forum! I mean, it's kind of supposed to be about discussing things.

Taking the other side, on one hand I can see "AI that can transform a text into an image etc, but otherwise the benefits of so called AI are completely lost on me" but on the other hand AI overtaking biology is kind of an interesting thing.

And you can of course skip the articles.


I mean we are all susceptible to confirmation bias, OP is just being explicit about it ;)


Why are tech people obsessed with the most cutting edge tech we have.. yeah I wonder why.

If you'd asked experts 6 months before ChatGPT when we'd have the current capabilities, they would've said we were at least 10 years away.


Just to get the order of history right here: this prediction disparity is false. GPT-3 came out in 2020, and a decent slew of researchers and startups had access in 2021. Though unreliable, slow and costly, davinci could mimic chats back then (it was one of the main featured samples), but OpenAI banned chat applications until it released ChatGPT in Nov 2022.

For a whole year prior to ChatGPT, "davinci-instruct-002" was at the level of ChatGPT for most purposes, but applications using OpenAI's models had to be approved, and chat apps were disallowed (as were any completions > 300 tokens).

The competitors of that time (GPT-J, GPT-20b-neo) and early BigScience stuff were behind OpenAI, especially on instruction training. So we didn't see applications until OpenAI changed its application approval process in November 2022. A good reference for where things were is the "Machine Learning Street Talk" 4-hour 2020 GPT-3 video; although it expressed doubt about GPT-3, you can see through many of the examples that the tech was close to ChatGPT relative to the previous iteration (GPT-2, T5, BERT, etc.).

However, for experts, it was obvious from about Spring 2021 when GPT-3 started taking off in the research zeitgeist with actual research usage, with loads of tweets and recognition of what it meant for the field (both in terms of impact to grant applications, ongoing projects and the future of language models being decoder only for the reasonable short term).

The real gap in prediction disparity was that most experts, possibly due to a bias towards their funding areas researching BERT etc., totally ignored GPT-2 and assumed encoder architectures or other fields (RNNs etc.) were still better paths. Established Researchers* would have predicted GPT-3 was 10 years away in 2018 or 2019, which is funny in hindsight. However, even in 2013 (when I was not a researcher but a student), people in computer vision felt that arbitrary tasks/arbitrary recognition was nigh impossible unless using coding schemes etc. (which were limited to certain image types anyway).

* And I say this because I was researching in 2019, but I was more optimistic after seeing T5 and GPT-2. The field did not ignore CLIP, but there wasn't much obvious research to be done with CLIP initially. Computer vision was all about semantic segmentation and recognition of disease etc. in the 2010s.


Not to mention that HN is the most hostile place for AI that I frequent.


In my view, it’s self-hostile, not AI-hostile. Other sites that I have visited may discuss how AI works or how to tinker with the internals of open models, but no place other than HN is themed “Prophecies of AI” as much. We are just babbling here all day compared to places where people actually work on/with AI.

Even your sibling comment (that is an example of it) reflects almost properly in the end. There’s just nothing to actually do here wrt AI.


Seriously, we're seeing tech with so many jaw-dropping capabilities and so much potential that it's scary. I'm surprised that even today most HNers and redditors keep seeing it as a grift and compare it to web3/blockchain with a straight face.

I knew this site was notoriously cynical, but the dismissal of such monumental tech advancements is making me reconsider the time I spend here.


It's not just HN. It's the whole industry.

I'm a freelancer, and all my clients talk about is AI. I think it's cool tech, but also quite overhyped.

But I get it, AI has become the magic box that the masses of "idea guys" can use to realize "the next big thing". No more meddling with devs or designers.

Whelp, guess we have to wait for the valley of disillusionment.


> AI has become the magic box that the masses of "idea guys" can use to realize "the next big thing"

It’s the new “I have an idea for an app, can you build it for me for free?” that clueless friends of friends bust out when they find out you work in “tech”


> It’s the new “I have an idea for an app, can you build it for me for free?” that clueless friends of friends bust out when they find out you work in “tech”

Well, it's the antidote: "If it's so simple and valuable, build it yourself; ChatGPT et al. make it easier, and since it's so simple, you get to keep all the rewards for yourself :)"


> It's not just HN. It's the whole industry.

I've got a zealot in my company too. The worst thing is that our customers (healthcare providers) really want to hear this, so it actually helps in marketing, which in turn keeps the hype going.

I will embrace the moment we step into that valley of disillusionment, although I fear the slop will remain.


Although I've worked in tech for a long time, I finally went to a conference a year or two ago, expecting maybe to have some fun conversations. Kind of went on a whim to see who I'd meet, even paid for the VIP.

Instead, it was 99% filled with everyone in my region shilling their ChatGPT wrapper like it was going to change EVERYTHING. There were nearly zero interesting tech or business conversations.

Edit: to more directly address your comment, it's interesting to see which businesses and sectors want to ram through AI with no clear reason and which have no interest in it.


One mitigating control for Hype Of The Year is to use uBlock and add custom rules in Addons -> uBlock Preferences -> My Rules.

Examples (change as desired):

    # Filter some topics
    # top (title / url)
    news.ycombinator.com##tr.athing span.titleline > a:has-text(/(lockchain|coin|202[3-9]$)/):upward(tr)
    # bottom (stats / comments)
    news.ycombinator.com##tr.athing span.titleline > a:has-text(/(lockchain|coin|202[3-9]$)/):upward(tr) + *
    #
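
For example, here's an untested sketch of a similar pair of rules that would also hide AI-heavy titles; the /\b(AI|LLM|GPT)\b/ pattern is only an illustrative guess, so adjust it to taste:

    # Hide submissions whose titles mention AI/LLM/GPT (title row, then the stats/comments row below it)
    news.ycombinator.com##tr.athing span.titleline > a:has-text(/\b(AI|LLM|GPT)\b/):upward(tr)
    news.ycombinator.com##tr.athing span.titleline > a:has-text(/\b(AI|LLM|GPT)\b/):upward(tr) + *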


Why 202[3-9] though?


There are articles that end with the year that are often hype related. It's just an example of things people have chosen to filter out in the past. Each person can edit the regex as desired.

e.g., "The Best Sausage ASMR of 2024"


Not GP, but there's a trend of auto-generated articles being spammed like "the best _X_ of [current year]" and they are rarely of decent quality (the current jetpens one is an exception).

This rule would avoid these articles, and GP probably sees the few false positives as an acceptable tradeoff.


Still seems like a bad fit for a HN filtering rule.

The spammy articles won't make the HN front page (at least not with the original title), but there will be a lot of false positives from the HN style of tagging older articles with the publication year.


2023-2029 was a really rough couple of years for LinuxBender and they don't want to remember any news stories from then.

Jokes aside, my best guess is that parent is trying to hide prediction posts like "What will cryptocurrency look like in 2026?" or something.


For me the bad year was 1975. My nose is still a little messed up from that. Thankfully the 1975 hype-train has faded.


Same. I really don't care about any of this stuff.

I am not a luddite by any means, I constantly keep trying out LLMs in order to see if I am missing anything. I am trying to get some utility from them, but I just can't.

"But you can use them to generate code", no, not really. First of all, why would I want to generate code? Code is a liability, I want less code, not more. Also, code is very expressive, I can say exactly what I want in code much more effectively than I can try to explain in English to an LLM. The LLM always misunderstands and generates garbage, garbage that takes more time for me to read, understand and fix, compared to simply writing it in the first place.

"Ah, but you can generate the boring stuff, boilerplate, stuff like that". I don't write any boilerplate, any repetitive things I automate by pricipled things, like more abstract code or by using my very effective text editor skills. Trivial code is easy to get right, by definition, why would I risk getting it wrong by using an LLM?

I do get some utility out of LLMs by asking questions about stuff I don't know about. The LLM's answers are almost always wrong, but they can push me in the right direction by informing me of things I am not aware of. This is not really a feature of LLMs, it's just that Google has become garbage at searching. So yes, LLMs are useful, but only by accident.

Generative "art"... miss me with all that.


I couldn't care less for some of the "generative AI" articles on here, but I do like some papers or research that is posted here which I find interesting, even though I don't always fully understand them.

For example, here are a couple from my HN favorites list, and I wouldn't mind seeing more of these articles:

* An Intuitive Explanation of Sparse Autoencoders for LLM Interpretability (https://news.ycombinator.com/item?id=42268461)

* Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet (https://news.ycombinator.com/item?id=40429540)

* Refusal in LLMs is mediated by a single direction (https://news.ycombinator.com/item?id=40242939)


Software engineers ignoring AI is like retail ignoring Amazon and the internet.

80% of shopping is done in stores. Offline retail is still quite successful despite Amazon.

But they were successful because they adapted. Most stores that completely ignored the Internet failed. The successful ones adapted.

OTOH, the ones that tried competing directly against Amazon failed at a much higher rate.

That suggests the middle road: neither ignoring AI nor embracing it completely. Instead, concentrate on the many strengths we have as humans.


Alternatively, "software engineers ignoring AI is like television producers ignoring 3D TV" or "like games companies ignoring the 'metaverse'", or "the food industry ignoring the juicero" or "the medical testing industry ignoring Theranos" or...

For every Big New Thing that actually becomes a Big New Thing, there are many which become, at best, a historical curiosity.


I’m using AI every day and I don’t see why I would ever stop. It’s not doing my job for me but it’s a much better way for me to learn how to do my job. There is undeniable value in it. I don’t think scale is going to bring us to AGI but what we have now is a big deal already.


> I’m using AI every day and I don’t see why I would ever stop

If you have to pay for its full actual real costs, you might decide it's not worth it.

(All the AI startups are losing money even on their paid tiers. Google and Apple and Meta can eat the costs for some time, but at some point they'll have to try to recoup their investments.)


Maybe, but that doesn't have anything to do with its real-life value and utility today, which is what the comment I replied to was disputing.


As a software engineer, I find AI to be boring. Sure, the technology itself is nothing short of miraculous, but I ultimately program because it’s fun to solve puzzles, manipulate symbols, and architect solutions in service of a project. Swapping out the labor for a black box just sucks the life out of the whole thing, since getting to the end and popping out a product was never the point.

I come to HN to revel in the joys of programming and hackery, not to min-max my career. And if I could cull AI-related content from the front page, I would definitely do so as well.


Sounds like good advice.

Unfortunately, my feelings tell me otherwise. What is it that we humans are better at? Is it the chore of managing people, sitting in meetings, and aligning stakeholders' interests? That feels more like a politician's job than an engineer's.


Humans are the ones with the money. They exchange it for goods and services.

So if you want money, you need to provide a good or service that other humans want. Humans have an inside advantage providing goods and services to humans.


I find AI to be mostly overrated but helpful in certain scenarios. It has helped me during my research and provides a different way to look up information compared to traditional search engines.

But I don't find its generative capabilities to be all that impressive. It is generic and uninspired, at least in text generation. I feel the same way about most new things that have emerged from the tech scene since blockchain. I don't even bother with tech news nowadays.

AI's adoption has been quite fast, though. Unlike blockchain, which never went mainstream, AI has captured the casual user segment. It's being used everywhere, from recruiting to emails, in offices and in schools. It's hyped all right, but real people are using it.


I’m AI-obsessed and have been since forever (mid-1990s). I’m longing for AGI, so I tend to consume every piece of content that looks like an advance or is related to AI business.

AI-related content is seasonal. This new LLM trend (which is now shifting towards “agentic AI”) will eventually fade and be replaced by another shiny new term. Basically, there’s a lot of noise, but every now and then, some gems pop up. Finding those articles is the fun part and the reason why I visit HN every day.


Some of these AI articles discuss ways to make progress towards the most important technology humans have created in terms of productivity gains. I love these. Others are cheap pitches of a minor hack from somebody trying to make a buck; they are predictable, thus boring, but hard to avoid, just like noise in a city. In my opinion the quality of HN comments and votes has been losing steam over the last few years especially since a large chunk of reddit migrated here a couple years ago; the commentary used to be more insightful than the articles themselves on many topics, now the commentary is also noisy and often boring (thankfully, it still contains many hidden gems).


Look, I'm in my late thirties and only now got the whole magnets, coils and electronics fun. Others did that shit when they were toddlers.

Aside from the fact that a ton of people here make money with AI or are or will be heavily invested in AI, there's just the curiosity factor, that I had and then lost to the reasons you described and many more.

My brain clicked in the past three days, and it's annoying that people didn't explain it properly but waited until I reasoned myself into it. It makes me angry and I'm gonna build a tiny potato cannon this summer and hunt them down :P (I'm not, but I've never built one, so ...)

Humans take a lot of wrong turns in their lives and it's important not to take that the wrong way. Envy is brutal and makes people do buttloads of dumb stuff, and among those things are all those AI enhancers and data krakens built on top of GPT, and of course all those people who create content and make those this-is-not-marketing videos ... but they all serve some customers, and of course, some Ponzis.

Things are rarely what they seem in this digital world, just as it has always been in any consumer market. Honesty? Every single person and supplier counts.

Is it all pathetic? Is pathetic bad from all POVs?

A lot of people do it to pussy-grab money out of gullible people and if you really want to to get the benefits of AI, use it to hack these people and take em out of their fraudulent businesses so they have a reason to get and build better or pivot the fuck away.


A friend of mine once told me, don't get upset about other people's passion. Just bet your own money instead. If you don't like the AI obsession, short the AI stocks, or invest in something else.


That might be poor investment advice.


I guess not. I just like the philosophy: don't argue but act on the differences of understanding.


The comments before mine certainly point out that many dislike the hype topics.

I think these hype topics are inherent to HN though, given that the org is fundamentally a venture capital financial institution.

By nature, VC is always riding a wave of hype. Investors put money into a startup in hopes that it generates returns like the next singer on stage in her underwear.

The industry could support smaller returns from thousands of smaller startups, but the big payoff comes with investment in the next big superstar.

These are the same financial forces that guide massive industry consolidation.

Who actually likes Google today? Investors...


Not the only one, but to be honest, I also find most SaaS-related content boring. Technically interesting content is as rare as it ever was; there is definitely some though, which is why I am here.


The same thing happens over and over again: because of the noise and interruption, we escape from FB, X, IG, which we were once passionate about, from blackhat, from everywhere. But in fact, we're just jumping from a filled cesspit into a niche shithole that hasn't been filled (yet), and the cycle begins over and over again. Relatively speaking, I think HN is not a cesspit, thanks to its retro interface and unfriendly operation. That is great!!


AI is a neutral primitive; a new type of compute. It will be applied in both positive & negative ways. Man I’m still deciding if the internet, as a whole, has been a net benefit to society & culture. Now we’ll need to figure that out for AI, too.

Genetics research, especially for more rare conditions with less traditional research funding, is one application where I can’t help but feel excitement.

AI image & song generation in the style of a popular artist is one application where I only feel sadness.


It's a next token predictor capable of some impressive stuff, but there's no intelligence behind it. You will never find a novel idea from an LLM.

Like so many successful applications of computers, it's a new way of taking a monotonous task and grinding through it quickly. In this way I think it's different from some previous fads (e.g. blockchain), and there is real utility.

I agree though that it's exhausting to read people anthropomorphize and hype it up.


I find AI to be fascinating because of humankind's old dream to create intelligent life in a machine. Just watch the countless old and new movies and animes or read the novels².

With big advances, the current wave of AI is finally promising to turn this dream into reality.

Besides, it's already quite useful even with its current shortcomings.

--

² here's a short very incomplete list:

The Matrix, 2001 A Space Odyssey, Blade Runner, The Terminator, Robocop, Her, Ex Machina, Ghost in the Shell (1995, 2004, 2017), Data in Star Trek, Star Wars, ...


I agree. I'm not that interested in AI. Mostly I use it to make boring or complex SQL queries and regexes, which are wrong 50% of the time.

I see people here running those LLMs on their own machines, etc.; I have no idea what they're doing with them or how any of that works.

Or writing prompts all day doesn't seem fun.

The paradox is that I work at an AI startup with a real product, with real clients ;)

So I don't think it's a fad, and as a developer you have to stay alert to the effects it has on your profession.

AI is not useless NFTs.


I do understand you there, and think AI is a bit overblown. A lot of use cases for it seem to be more about using AI for AI's sake, not because it really revolutionises anything.

But at the same time, I don't think Hacker News is that heavily flooded with AI-related posts. Of the 30 posts I see on the home page, 3 are AI-related, with one being this post complaining about it. The next page of posts has even fewer, with only 1 or so AI-related posts.


It depends. Sometimes the first 7 results are AI, and it really puts me off the site.


7 out of 30: that's a very low threshold for accepting that people can have different interests.


I'm not sure if you've read many comments on here, but there seem to be more people railing against generative AI than there are those sharing useful and interesting stories or critiques. It's one of the reasons I've been spending less time here. I used to be able to rely on HN for interesting perspectives on both cutting-edge and historical tech-related topics, often seeing insights from people sharing practical knowledge that was hard to find elsewhere. For the past year or two it's taken a hard turn towards a very spiteful and shallow gripe fest that feels like the same thing that happened to Reddit years ago when it took off in popularity. It doesn't make a lot of sense to me why people are coming here to complain about technology, and it just adds a bunch of noise that you need to sift through if you are interested in the tech being discussed. Feels more like people fishing for upvotes by sharing their "hot take", which is inevitably a cookie-cutter opinion that you see all over social media without any original thought behind it.


Started with yes and no, but thinking more about it, basically yes.

I’m interested in AI tech (not philosophy or prophecies). But HN isn’t that great of a source for tech details. Reddit and local boards helped much more.

At the same time, I have to ignore lots of shallow startups and React form developers' 10x productivity reports, because that sparks no interest.


Currently there are ~2/30 AI posts on the front page.

AI hype is everywhere. On HN there is genuinely much higher quality conversation about AI than many other places.

So, no, the question in the title doesn't resonate with me. I don't think there is an obsession, and for the AI discussion that does happen, I'm happy with its quality.


HN, or at least part of the user base, is always getting over-excited about some fad or other. As others have mentioned, there was also blockchain (a few iterations of this one), metaverses, more AI (there was a computer vision bubble a while back), thorium... There'll be another along sooner or later.


If it were just transforming text into an image, I agree it would be overblown.

But if you follow the thread to the end, this is the beginning of something that will change everything, probably more than the Industrial Revolution even.


most people I know who used to browse hn have left because of this (I am not exaggerating), and I barely ever look at it either anymore, so asking here is probably gonna give you a biased view.


Well, yes, this is what makes HN and sites like it work for the target audience: they both passively - by endlessly discussing a group of topics which is of interest to a relatively narrow group of people - as well as actively - by vociferous use of the down-vote button on those who venture too far outside of the desired discourse - shun those who do not fit in. If you do fit in you'll see this as a positive, if the topics discussed here are of no interest to you you won't care at all since you prefer to be somewhere else. It is only for those who straddle the edges of the evolved site profile who will think negatively of this since they want the site to accommodate them as well. Some (myself included) are fine with the subject matter discussed here since they can simply ignore those things which they're not interested in but would like the down-vote to disagree rule to disappear so that legitimate comments which don't fit the desired narrative do not get greyed out into oblivion. Others (like you?) would prefer a less hype-oriented subject matter discussion.


I find AGI fascinating. Generating bad airbrush art and crappy prose to be summarized badly by another vastly inefficient machine wasting energy on a dying planet so more douchebags desperate to have their very own Smaug hoard of gold have another bandwagon to leap on? Not so much.

The only thing I even remotely use AI for is Codeium in VS Code and that's almost entirely as autocomplete - every time I've tried to use it to generate anything more than a simple function I end up spending as much time going through it and rewriting it to suit my needs as I would have just writing it myself in the first place.

I like writing code, the same way I like writing prose and music and drawing. I will never want or need AI to do these things for me and, if I'm being honest, I'm always gonna be a snob to people who do.

If you don't care about the act of creation and the process of learning how to do it, what's the point? To "generate content"? To make money? Go be a prostitute then, if money is all you care about.

Maybe a life focused on efficiency of generating content works for some people but, fuck me, I'd rather gouge my own eyes out with a spoon.

So yeah, totally with you.


Yes, I feel the same way. The lack of critical thinking about AI is almost shocking… and then I remember how messed up humans are and it doesn’t seem shocking, just depressing.


I use these tools with the same caution with which I use Wikipedia.

They’re all great to get you started on a (re)search path provided you take care to validate the information so easily acquired.


At least behind all hype generative AI has some real beneficial use cases, unlike blockchain which is mainly used for drug trade, tax evasion and speculative investment.


I'm just sick of the hype.

It's a fascinating and useful tool, but it's no silver bullet. Calm down and carry on, people; less froth (I'm looking at YOU, Microsoft!).



My wish is that those of us sick of the hype would start calling it ChadGPT, to give it the respect that it deserves!


Yes, people who say they use AI every day are embarrassing to me. I've never been able to replicate its usefulness, and I have actively tried. I can only imagine at this point that they are either delusional, lying, or their job is actually that easy.


Don't worry, you are in good company among the many who are also sure of what they think they see.


It’s a combinator. Most people’s passive income here depends on AI stocks.


I should post my take on this substituting Rust for AI ;)


> Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it. [...] If you flag, please don't also comment that you did.

I flagged this submission for this reason.


Frankly, HN is moderate in the amount of AI spam compared to other technical sources I happen to browse through. And the comments are often refreshing in keeping the hype mood somewhat down.


Lots of people got into IT-related stuff vaguely because they liked the idea of AI. For a very long time it was only a dream, now it's becoming reality, and I'm pretty sure it's a big part of the excitement.

Of course there is a lot of lame content and grifting around this, but I'm pretty sure this is not another Web3.0 nft crypto hype wave, this is here to stay and expand.


they will change their mind when they are all unemployed.


There are three things you can expect from HN:

* People who like AI or China or Trump or Apple.

* People who hate AI or China or Trump or Apple.

You see AI slammed as much as it is glorified here. And AI is the topic of the industry today; it would be like shoving your head in the ground if you ignored it.


Why is this flagged, again? (If not to prove the very point it aims to discuss.)


While I'd also prefer for this thread to stay, I have to admit that OP saying

> I don't need you to convince me I'm wrong.

doesn't sound like they wanted to discuss anything, rather to encourage confirmation bias


Agree, but the discussion it triggered around the initial post was of interest nonetheless.

Plus, I've seen several posts flagged in the past few days that would have been worthy of discussion; although on the fringe of social/political topics, they are related to the tech sector and YC, which may or may not explain why they were flagged.


This website is about money, dude


Yes, it should not be crazy to see that a website started by people like Paul Graham and Sam Altman would naturally be full of hyped up VC nonsense. Their entire business model is to hype up these ideas and the startups working on them and then cash out!


Exactly. Also, this _world_ is about money, dude.


For some people perhaps - it doesn't have to be for everyone. And there's plenty of really interesting content on HN that is about science and discovery rather than commercial uses.

To op - yes, feel the same, but what can you do...


It may seem that way, but the reality is that rabid capitalism has not reached every corner.


It is mostly a marketing gimmick. Tech companies want people to believe that their models can do "anything", even become conscious. Anyone who understands the technology can see that this is nonsense. But they'll continue to repeat it while the money is coming in, so unsuspecting investors will pour more and more money into AI companies.


I don't mind the AI stuff but find the censoring of conservative voices extremely disappointing.


"Am I the only one here who can't stand HN's AI obsession?"

Nope.

Someone took an HN poll last year and a majority agreed "AI" is overhyped.


Honestly, your post is worthless. You are asking why a tech-focused forum has posts about a specific tech field that has seen advancements in recent times. You don't even elaborate beyond saying they are boring and lame.

I see your opinion a lot here on HN, and I wonder if there is a segment of the population in this field who choose to stick their heads in the sand. No doubt there is a lot of hype that won't materialize, but unlike some of the other hype cycles, people are starting to see value from this current cycle already. Certainly at the bleeding edge the cost may exceed the value, but it's only a matter of time before those costs get eaten away.

So no, I don’t understand your feelings, I am excited for the future and if something gets posted on HN that I don’t like, I don’t read or upvote it.


Me personally? I'm more tired of the constant whining and moaning about how horrible AI is


Haha, you know what would prevent that? That's right, having fewer submissions about it. :P



