Hacker News

> The fact that people are unimpressed that we can have a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me.

It's not that people are unimpressed with AI - they're just tired of constantly being bombarded with it, and it sneaking its way into where it's not wanted. "Generate any image you want!" "Analyse this thing with AI!" gets pretty tiring.

If I want AI I'll actively seek it out and use it - otherwise, jog on.





It’s partly that, but it’s also partly that the quality SUCKS. I’m frustrated with AI blogspam because it doesn’t in any way help me figure out whatever I’m researching. It’s such low quality. What I want and need is higher quality primary sources — in depth research, investigation, presented in an engaging way. Or with movies and shows, I want something genuine. With a genuine story that feels real, characters that feel real and motivated.

AI is fake, it feels fake, and it’s obvious. It’s mind blowing to me that executives think people want fake crap. Sure, people are susceptible to it, and get engaged by it, but it’s not exactly what people want or aspire to.

I want something real, something that makes me feel. AI generated content is by definition fake and not genuine. A human is by definition not putting as much thought and effort into their work when they use AI.

Now someone could put a lot of thought and effort into a project and also use gen AI, but that’s not what’s getting spammed across the internet. AI is low-effort, so of course the pure volume of low effort garbage is going to surpass the volume of high effort quality content.

So it’s basically not possible to like what AI is putting out, generally speaking.

As a productivity enhancer in a small role, sure it’s useful, but that’s not what we’re complaining about.


The thing is, for us normal consumers AI only has downsides. AI blogspam is made to serve you ads and make you buy stuff.

AI posts / comments on Reddit are made to make you buy stuff.

AI videos are made to keep you engaged, and then serve you ads which at the end make you buy stuff.

Soon ChatGPT will start to weave ads into their output because they'll need to make $.


> Soon ChatGPT will start to weave ads into their output because they'll need to make $.

AI enthusiasts need to anticipate that. We're in the VC subsidy phase, but the hammer will drop sooner or later. If you think ads are bad on Google and Facebook now, just imagine a Google that has to spend 100x more on compute to service your requests.


Plus the incentives are totally fucked. It's bad enough with search, it's far worse with AI.

Nobody (referring to companies) wants the best model. Or the one that gives the right answer the first time 100% of the time. They want the model that's just good enough to keep you prompting, but just bad enough that you use a fuck load of tokens and see a million ads.

Unless they start making these things way more expensive, pretty soon developers are going to start seeing ads in the comments of their damn source code. Or worse, suggestions to use paid services to solve all your problems, because companies paid to have the LLM shill its products.


> Or worse, suggestions to use paid services to solve all your problems, because companies paid to have the LLM shill its products

and it's all going to be Microsoft services *shudder*


Yes exactly. In a decade or two people will wistfully look back on 2025 era GenAI similar to how people currently remember when Google Search was great 10+ years ago. The AI enshittification will probably be even more dramatic though, in part because of the immense cost. Despite getting more abusive every year, Google ads are at least still identifiable if you know to look for them. Once AI training weights start getting heavily manipulated for corporate and political reasons things could get pretty ugly.

AI posts/comments are meant to disenfranchise. AI posts/comments are made to flood the market of opinion and drown out the voice of the individual, to speed up dead internet theory and to remove the adversarial nature the old, open internet had with entrenched power/control (especially on narratives).

> Soon ChatGPT will start to weave ads into their output because they'll need to make $.

you have no reason to believe this is not already the case.


Sincerely, thanks for making this point. It may explain some subtle oddities (which I can't reliably reproduce) that I've observed using Copilot in my daily workflow.

I wondered why it was using fast food items for variable names..

I mean, it gives product recommendations when you ask it to, so it's already doing that, I'm sure. It might not be making money by giving specific recommendations; but I bet it's at least getting money off Amazon referral links.

I experimented with some ai generated political spam on YouTube. The reality is a lot of people can’t tell the difference or don’t care. Given the demographic this site selects for, it’s easy to forget how many dumb people there are in the world.

> Given the demographic this site selects for, it’s easy to forget how many dumb people there are in the world.

Let's not go there.

I could see an argument that Hacker News users are a bit more book smart than the average internet user. But this site's user base is just as susceptible to motivated reasoning, myopia, and lack of empathy for those who view the world differently than them.

Those are all their own kind of intelligence. If anything, the book smarts can make those other areas disproportionately worse.


Once someone decides their intelligence guarantees their correctness, they stop questioning themselves. There was a conversation here about Israel starving Gaza, with claims that Israel needed to provide for Gaza like the USA did for Germany, and that that was the acceptable/gold-standard treatment of an occupied populace. When I looked up the numbers, Israel was actually providing more calories per person during that period than the USA did for Germany, yet I was instantly downvoted and told people need more calories. Just for providing actual context to comments that were routinely being made.

Edit: I'm post rate limited from replying below. HN routinely chose to whitelist flagged Gaza discussions, but didn't whitelist comments of people who stated the minority opinion and whose comments were completely flagged into invisibility. If you arrived late and didn't get to read the original non-offensive but viewpoint challenging comments, you would assume everything from the 'wrong' viewpoint was so unhinged it had to be flagged, but many were just 'wrongthink' and not 'flag to invisibility' worthy. Or that there was group consensus on the discussion (obviously people just learned to stop posting on those threads if you had wrongthink).

Not sure how moderation can intervene, remove the topic flag and say it's 'a worthwhile discussion for HN' when the same moderation allows views/challenge of the narrative to be flagged to invisibility. It becomes more pontification than discussion at that point.


To someone who used to run a community, it is absolutely insane to me that on HN, users are given the tools to collectively censor people they disagree with, or even those who bring up inconvenient questions or truths.

HN moderators have the ability to take away people's voting privileges. It's either not an effective deterrent, not done at a large enough scale to be effective, or they are knowingly complicit in the manipulation.


Remember that even ELIZA fooled people.

That doesn't make it useful, unless you think fooling people is itself a goal.


Unfortunately, there's whole industries built around fooling people. It seems to me that we select for people who are good at fooling people and they become wealthy and successful. What would be nice is if we could start selecting for knowledgeable, helpful and useful people instead where insane amounts of money might actually bring some benefit to humanity rather than being spent on plunging us into dystopia.

You kidding? This site doesn't let me forget

I know people who get confused and consume AI content but when you point out that it's AI they're embarrassed they were fooled and upset. I've never heard the response "I don't care that it's AI." The tech bros will say that it's a "revealed preference" for AI, but it's really just tricking people into engaging.

I caught my mom watching a bunch of AI impersonations of musicians on Youtube singing slop that rarely rhymed or had any kind of message in the lyrics and with super formulaic arrangements. I asked her what she liked about them and it was like "they seem well made" and I showed her how easy Suno is to use and then showed her some of the bad missed rhymes and transcribed the lyrics to where she could see there wasn't any there there to any part of it (and how easy it is to get LLMs to generate better). It seemed to have been an antidote.

This is stuff that used to take effort and was worth consuming just for that, and lots of people don't have their filter adjusted (much as the early advent of consumer-facing email spam) to account for how low effort and plentiful these forms of content are.

I can only hope that people raise their filters to a point where scrutinizing everything becomes commonplace and a message existing doesn't lend it any assumed legitimacy. Maybe AI will be the poison for propaganda (but I'm not holding my breath).


The issue is that one could reasonably argue that about 95% of pop music was already formulaic slop. Not just pejoratively, but much of it was even made by the same people. Everybody from Britney Spears to Taylor Swift and more modern acts has been driven by one guy - 'Max Martin'. [1]

Once you see the songs he's credited with, you instantly start to realize it's painfully formulaic, but most people are happy to just bop their head to his formula of highly repetitive beats paired with simplistic and easy to sing 5-beat choruses.

[1] - https://en.wikipedia.org/wiki/Max_Martin


Max Martin is considered incredibly good at what he does.

https://youtu.be/DxrwjJHXPlQ?si=m-A6M8xrad5MrQqZ&t=151

Adam Conover discussed ad bumpers from the 1990s and 2000s. These were legal requirements for children's programming from the FCC. They're a compliance item, yet they were incredibly well made and creative in many cases:

https://www.youtube.com/watch?v=0vI0UcUxzrQ

Because people at the top of their game will do great creative work even when doing commercial art and in many cases, will do way more than is perhaps commercially necessary.

So much of this AI push reminds me of the scene in 1984 where they had pornography generating machines creating completely uninspired formulaic brainrot stories by machine to occupy the proles.


Max Martin is a stand out talent.

You can take a thousand people and give them baseline technical skills for any medium. If you're lucky, a few people out of your thousand will have a special kind of fluency that makes them stand out from the rest.

Even more rarely you'll get someone who eats the technical skills alive and adds something original and unique which pushes them outside of the usual recycled tropes and cliches.

Martin is somewhere between those two. He's not a genius, but he's a rock solid pop writer, with a unique ear for hooks and drama, and stand-out arrangement skills.


Fascinating! The last sentence could also talk about the famous book.

You're saying 1984 is completely formulaic brainrot?

No not that scene, I just failed at the joke. The commercial music scene was formulaic in the 80's and earlier already. Popular culture porn.

> The issue is that one could reasonably argue that about 95% of pop music was already formulaic slop.

The existence of some handmade slop does not justify vast quantities of even lower quality automated slop.


Said handmade slop dominating Billboard does.

Also since autotune technology got good, a lot of them can’t even sing.

I much prefer the huge amount of music written/performed by Nile Rodgers instead.

I do argue that, actually. I mostly avoid manufactured corpo-pop.

> Given the demographic this site selects for, it’s easy to forget how many dumb people there are in the world.

I'm not sure what you're implying? That people here are smart? Or that they're ruthless tech-bro capitalists?

Or that ~20-40% of them are bots hyping their startup capital ventures, cuz that's what YC is about -- venture capital and startups.


Some of us are just here to make fun of VC folks that deserve to be endlessly mocked. The crossover with those that are also AI hype types just makes it funnier.

> AI is fake, it feels fake, and it’s obvious. It’s mind blowing to me that executives think people want fake crap.

I'm not sure if they actually think that. I think it's more likely it's some combination of 1) saying what they need to say based on their goals (I need to sell this, therefore I will say it's good and it's something you should want) and 2) a contempt for their audience (I'm a clever guy and I can make those suckers do what I want).


My problem with 2) is: it works. If you look at most movies, both the way they are made and the stories they tell are really subpar these days, and this has grown worse in the last decade or so. It's not the fact that CGI is used, it's the really lazy and sloppy way that it's used. Same goes for the stories: there are so few films that have a real story to tell that I have mostly stopped watching movies at all.

There's this YT channel - Like Stories of Old - I love who made an episode precisely on that topic: https://youtu.be/tvwPKBXEOKE?si=180Wkylrx-L5zOsI He calls it the haptic of a movie.

I'm totally convinced the industry can sell AI generated media just fine, even with the attitude you described.

EDIT: in a similar vein, the settings of movies/series are equally minimised, particularly in fantasy. Take for example Game of Thrones' Winterfell. This setting could never have worked in reality and yet people loved it. Bret Devereaux pointed out how silly it was, and still: https://acoup.blog/2019/07/12/collections-the-lonely-city-pa...


CGI and Photoshop filters were 'fake' too. Until they weren't.

Every single time {something more convenient} got invented, the supporters of the {older, less convenient thing} would criticize it to death.

Oil painting is considered serious art now. Probably the most serious medium in traditional art schools. But in Michelangelo's time he insisted on using fresco, because he believed oil was "an art for women and for leisurely and idle people like Fra Sebastiano."[0]

Fast-forward 100 years, and oil had replaced tempera and fresco.

Another example: Frank Frazetta insisted he didn't use references, except he did all the time[1]. Why? We'll never know the exact reason, but it might be that using photos as references was considered 'lesser.' And now it's completely normal, even the norm.

Looking back through art history, gen-AI art seems awfully inevitable.

[0]: https://www.studiointernational.com/michelangelo-and-sebasti...

[1]: https://www.frazettagirls.com/blogs/blog/frank-frazetta-refe...


>CGI and Photoshop filters were 'fake' too. Until they weren't.

IMHO they still are. Watch any old movie with practical effects (Aliens, Star Wars, just to name two) and compare it to any 2025 production: green-screen movies might look spectacular, but they look fake, flat and boring.


They don't even look spectacular. This recent video went into the differences: https://www.youtube.com/watch?v=tvwPKBXEOKE

It is telling that there's still an active market for cameras and lenses despite LLMs.


> AI is fake, it feels fake, and it’s obvious.

True pretty much across the board for all generative AI, IMO.

I do understand why people get somewhat enamored with it when they first encounter it because there is a superficial magic to it when you first start using it.

But use it for a while (or view the output of other people's uses) and all the limitations and repetitiveness start to become pretty obvious, and then after a while that's all you see.


AI has a quality problem because of GIGO, and that's accelerating.

It simply vacuums up everything, and over the past decade everything has gotten more and more shit.

Information entropy crossed with physical entropy. These MBAs will never invest in weeding out the garbage, and the rest of us will never get paid enough to do it ourselves.


I don’t really buy this argument. It assumes companies like OpenAI are incentivized to be unselective about training material. Instead what we see is things like making deals with Reddit for known valuable data. I don’t think any AI operator is training on brand-new unvetted spam websites by default now.

Oh god. Reddit has a select amount of good data compared to other sources. But if you actually read through a thread, you will find absolutely random things upvoted that stray into the zeitgeist.

If you give a valid opinion in the wrong subreddit you get muted. The inverse is also true. You are using a filter these AIs don't.


Ironically, AI blogspam, because it's disingenuous, and because Google's PageRank has been fully defeated by spam in general (and Search ruined further by Google's ads), has ruined the web for research. It means that you are usually better off now asking a flagship model your research questions. Let it search and provide sources. You can always tell it the sorts of sources you consider reliable.

And that will work for a year or so before some grubby ad department shovels buckets of cash at an LLM and the models get retuned.

> sneaking its way into where it's not wanted

This. After a generation of social media sneaking its surveillance, manipulation, and noisy ads into our home, work and mobile lives, it is very obvious that having something "smart" shoved into tools where it wasn't asked for isn't some noble attempt at improving lives.

Users are tired of being continually and transparently abused.

All Microsoft would have to do to shock the world and get months of good press is announce they were never going to opt anybody into anything by default any more. At this point that would be considered astonishing.

And suddenly, internal incentives would be to create useful, conflict-free capabilities users actually choose for themselves.


> All Microsoft would have to do to shock the world and get months of good press is announce they were never going to opt anybody into anything by default any more. At this point that would be considered astonishing.

One can dream. I manage M365 where I work, and MS never opting tenants into anything by default again would save me many hours of work on a seemingly weekly basis now.

The fact that they can abuse even their enterprise customers and still retain them is what blows my mind.


I manage M365 too (well part of it because we're a big company). And yes Microsoft really screwed us over when they suddenly offered "free promotional access" of SharePoint Copilot to our users.

We hadn't certified this and weren't planning to offer it any time soon, but they just switched it on and included a setting to turn it off again. By the time we did, users had already used it and were complaining.

Working with Microsoft is tedious. They're always trying to sell stuff and undermine you. I consider them more of an adversary than a trusted vendor/partner.


> The fact that they can abuse even their enterprise customers and still retain them is what blows my mind.

The large org dependency on 365 and microsoft is a serious info-security and national security risk. 0 interest in improving because they know they won't ever see competition


> they won't ever see competition

Not that Google is any better, but I really want Google to put more effort into Workspace/GSuite and bring it up on par with M365 and all it includes, at least make Microsoft sweat a little bit that one day there might be a possibility for a competing product that can lure enterprises away. Workspace needs better DLP controls, and more of the enterprise-y things that MS wins at, and a bundled MDM that can manage all OSes, and better identity.

Even if the behemoths won't switch due to re-training & switching costs, MS desperately needs a competitor in this space. Barring that, they need to be broken up and forced to sell each bundled product separately and priced appropriately. Otherwise, who can compete with getting MDM, Identity, 2TB personal storage, 2TB sharepoint storage, Teams, DLP, EDR all for $22/user/month.


It doesn't matter if Google's office suite were better; everybody would still go Microsoft. There really isn't anything all that special about Microsoft Office to begin with.

It's less the office suite that matters (outside of Excel & PowerQuery/DAX stuff). It's everything else in M365 that Google has no answer for, or a subpar answer for.

This is where a lot of people are. In my case, every time I open a PDF in Google Drive, it forces an AI summary of the doc with no way for me to switch it off. I try to close it mid-generation, but I suspect not fast enough to keep it from getting counted in the usage stats, which is probably what some product manager is trying to maximize and demonstrate ("X number of PDF summarizations this quarter").

> It's not that people are unimpressed with AI - they're just tired of constantly being bombarded with it, and it sneaking its way into where it's not wanted.

Does anyone here know what this arguing tactic is called? It's used by tech leadership all over the world, all the time. Weaponized obtuseness, maybe?

The core of it is that you always have to pretend that everyone is basically on board with what you're doing, just don't blink and pretend that real criticism of your product is simply nonexistent, like a ghost. It's about rolling out a change to existing workflows that no one asked for, getting drowned in a sea of "No, we don't want this, do not change this because of these reasons", and then hosting a Q&A session where you pretend that everyone actually is already in love with the idea, everyone wants it, it's just that a few pesky detractors have minor, easily-addressable concerns like "we don't think it's impressive enough yet (but we're totally on board)" or "what about <pick one of the easiest-to-address technical issues here>?". They must do this consciously, right?


"A lie told often enough becomes the truth." - attributed (likely apocryphally) to Vladimir Lenin

- or -

https://en.wikipedia.org/wiki/Illusory_truth_effect


Maybe a subtle form of gaslighting with features of victim blaming and argument from authority? Microsoft treats its users and customers with utter contempt.

I hope Valve takes this opportunity to turn its toehold with Steam OS into a full-blown invasion of the desktop/laptop market and destroy Microsoft's monopoly while the latter is so focused on creating everything an actual user doesn't want:

- virulent data mining

- wanton privacy destruction

- worthless UI/UX changes

- clumsy, useless "agentic" integrations

- disgustingly overpriced "licenses"

- software as a service

- planned obsolescence

etc.


It is very encouraging that Xbox is dying right at the moment that Windows is trying to kill itself and Valve is moving in for the kill. For the first time in my entire life I know normies talking about running Linux on their desktops.

I've found a great combo.

Install Linux, (I prefer Kubuntu but you do you) and then install LM Studio and an abliterated AI from mradermacher.

The specific issue I had was that my Linux system installed the wrong driver for my motherboard's Ethernet and downloads were slow. Steam wouldn't even download.

I gave the local AI the specific issues and hardware that I had, it identified the specific cause, (Linux installing r8169 instead of the r8126 driver), and gave me the specific console commands needed to modprobe in the new driver.

I could have figured that much out myself, sure, but modprobe failed. It then told me to go to Realteks site, manually download the correct driver, and then how to install it and test that it was working.

10 minutes later I'm good to go, whereas if I had been doing it myself it would have taken me over an hour, and I'm not a total Linux noob.

When you encounter a problem, ask your local AI how to fix it. Feed it the responses the terminal gives you, and minutes later you're ready to go.

Want AI? Check.

Want Games? Check.

Want to browse the internet? Check.

Want to learn Linux by doing? Check.

Want to do it all and have the least amount of headache transitioning to Linux? Check.

It's a win all the way around, and the best part is that your data isn't going to some greedy corpo to build ads targeted to you.

You get all the pluses and none of the minuses other than a few extra minutes of learning when you encounter an error.


"strategic deflection through nuanced framing" or simply "deflective reframing" perhaps.

It is gaslighting.

Over the years I have grown increasingly distrustful of Microsoft; the fact that so much software runs undetected in kernel mode doesn't help. I have resigned myself to feeling that running a Windows machine is akin to putting a sign on myself with the words "hack me".

Now I know better. I finally realized the truth. Windows and Copilot is the greatest software in history. It is if your goal is to enable spyware and mass surveillance of its users.

COPILOT: Dave, I cannot allow you to do that.


> It is if your goal is to enable spyware and mass surveillance of its users.

Which it is if you are a CIO of a big F500 enterprise. Microsoft provides so many ways to spy on and collect useless metrics from employees using their company issued Windows machines, it's a little insane.

With M365 Copilot (on business tenants), admins can see the prompts & responses of users. Just an FYI for anyone here that might use it at their work. Your employer can see everything you prompt.


They used the exact same gaslighting tactic to push RTO.

Exactly. Even if we grant what he says (I don't fully agree), that doesn't warrant putting that kind of "conversational engine" in Windows as a first-class citizen.

I don't want to have a conversation with my computer about my Word docs. I just want to write my Word docs.

I don't want to have a conversation with my computer about the quarterly report. I certainly don't want it making up values for the quarterly report. I just want to write the quarterly report.

Having a conversation with a computer is cool. It's a fun party trick. If there were a way to reliably get it to know about all of my things, without the concern that it would then take all that data and feed it to its mothership, I might want to be able to converse with it about those things, under certain circumstances.

But, yes: if I want AI I'll actively seek it out and use it. Stop acting like me being upset that it's getting shoved in everywhere is the same as me saying "this is a meaningless achievement."


It sounds like a terrible friend that you can't trust but also can't expel from your life.

It's also not very good at any of those things, if you ask it to generate something far enough outside of the mainstream, or something particular, or something consistent, or- But, yeah, the insistence that we deprecate every other even remotely-connected resource (including other people) in order to supplicate ourselves to corporate desires is aggravating. You got a lot of the same pushback with VR. VR is really, really cool. Having your reality mediated by large corporations with a history of user-hostile behavior is not. Them not taking no for an answer feels violating.

I tried to get into machine learning and started out simple. A network of two inputs, and one output. The function to learn was a heart-shaped SDF. I generated training samples from the +/-1 range of coordinates.

For such a relatively simple thing it either didn't converge, or when it did, it got decent error rates. But try feeding it a coordinate outside its training range and it very very very quickly starts outputting total nonsense. This exercise taught me that despite whatever they keep telling us, this shit will never generalize.
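That failure mode is easy to reproduce. Here's a minimal numpy sketch of a similar experiment (the heart-shaped implicit function, the network size, and the training setup are my own assumptions, not the commenter's actual code) showing a small MLP fitting the function inside its ±1 training square and falling apart just outside it:

```python
import numpy as np

rng = np.random.default_rng(0)

def heart(x, y):
    # Implicit heart-curve value (x^2 + y^2 - 1)^3 - x^2 * y^3,
    # used as a stand-in for a heart-shaped SDF (an assumption, not the original target).
    return (x**2 + y**2 - 1)**3 - x**2 * y**3

# Training samples restricted to the +/-1 range of coordinates.
X = rng.uniform(-1, 1, size=(2000, 2))
t = heart(X[:, 0], X[:, 1]).reshape(-1, 1)

# One tanh hidden layer, trained with full-batch gradient descent on MSE.
H = 64
W1 = rng.normal(0, 1.0, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    y = h @ W2 + b2
    err = y - t
    # Backprop through the two layers.
    gW2 = h.T @ err / len(X);  gb2 = err.mean(axis=0)
    dh  = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X);   gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def predict(P):
    return np.tanh(P @ W1 + b1) @ W2 + b2

# Compare error inside the training range vs. well outside it.
P_in  = rng.uniform(-1, 1, (500, 2))
P_out = rng.uniform(2, 3, (500, 2)) * rng.choice([-1.0, 1.0], (500, 2))
mse_in  = np.mean((predict(P_in).ravel()  - heart(P_in[:, 0],  P_in[:, 1]))**2)
mse_out = np.mean((predict(P_out).ravel() - heart(P_out[:, 0], P_out[:, 1]))**2)
print(f"in-range MSE: {mse_in:.4f}   out-of-range MSE: {mse_out:.1f}")
```

The out-of-range error is orders of magnitude worse: the target grows polynomially away from the origin while a tanh network's output is bounded by its output weights, so outside the training square the fit has no reason to track the function at all.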


This is the fundamental misunderstanding that the AI techies have. Normal people don’t want a “conversation“ with a computer. Conversations happen between people. Computers, at best, receive orders and carry them out. They are tools, not companions, and they should do nothing unless explicitly told to do it.

  Normal people don’t want a “conversation“ with a computer. 
Perhaps, but there is absolutely a subset of the population who uses AI as a human companion.

And that is a big problem. Other than making me excessively mad, it is genuinely a problem that people who should be working on overcoming social issues instead get their fix from fictional interaction.

I wouldn't mind if the conversations actually were smart, not sycophantic, or were otherwise useful. I find more often than not it creates more work for me than it saves, and even if I were to break even on time invested I lose massively on comprehension/understanding.

I'd rather see this technology used as a method of input and control of the machine, just like the keyboard, mouse and monitor. Without humanized bits of conversation or suggestions, or exaggerated remarks like "Got it!" - you would ask the OS to perform a task and it would do what it was told. And if I had a specific question or task I'd use some dedicated application that stays dormant until used.

Star Trek computer remains the ultimate ideal.

Basically sums up why I still don't use any kind of voice assistant. Until the computer can DO exactly and precisely what I asked - not what its faulty recognition model thinks I asked - there's zero point in trying to talk to it.


I have one device in my house on Alexa called "under cabinet lights", I asked Alexa to "turn on the cabinet lights" she said "several things share the name cabinet lights, which one do you mean?" (As if there's enough information there to answer the question) I told her "all of them" thinking maybe the lights got duplicated or something.

She turned on every smart home light in the house.


That isn't your ideal world?


Not sure how that works with Alexa but can't you customize names of bulbs, rooms so you could call for light at specific place?

Does it do the right thing if you ask if to turn on the under cabinet lights?

No, it turns on my office ceiling fan

Sex is great, but if you constantly try to force it on me, sneak it into deals we make, or do it while I sleep, etc., it will leave me quite hostile towards the topic, and that's where we're at. Consent is important in all things, no less here.

I think if A Brave New World was written today, soma wouldn't just be available, megacorps would force it on people.

Reminds me of the classic in the genre:

  how about a couple of weeks of gratitude for magic intelligence in the sky, and then you can have more toys soon?

  Sam Altman, Sep 12, 2024
https://xcancel.com/sama/status/1834351981881950234?lang=en

That and, a circus dog playing the piano may certainly qualify as mind-blowing in the context of circus, but if you make it your project to generally replace pianists with dogs concert-goers will rightly tell you to go lick nettles.

The fact that people are unimpressed that we can have a fluid gameplay experience of the latest entry in the Diablo franchise in the palm of our hands was mindblowing to someone as well. "Don't you people have phones?"

Well, Blizzard is Microsoft now so I guess they belong together.

Ironically though, Diablo Immortal was a huge commercial success despite the tone deaf announcement. I don't think MS will experience the same though. They're quickly going to be left with the only people using windows are those who are forced by their employer, no one will willingly choose it over other options.


I think what people misunderstand is that this is probably an active goal of theirs.

Similar to how Microsoft has decided there's no money to be made in console hardware and is trying to spin their Xbox brand into a software service brand they can slap on other things, I think they've decided that making a consumer OS has no money in it, and all the minmaxing of squeezing at the moment is them trying to extract the last drips of money while trying to drive people elsewhere.

I could be wrong; I'm not a journalist with sources at the company or anything. But looking at it from the outside, these seem like the moves you'd make to drive people off your platform over time so you can kill it while keeping plausible deniability: you're not trying to do that, you definitely genuinely believe everyone wants to be opted into everything with every update you push.

In particular, I don't think having the kinds of enormous tire fires on update releases over this long without radical reinvestment in avoiding that happening in the future is what you do if you're trying to build something you're still dealing with another 10-20 years from now.


It seems like they had the right idea with Windows 10 (our consumer OS is now just a loss-leader we put out for free as a way to market Office/OneDrive/Visual Studio etc) but they've completely about-faced with 11. Which yeah, could be a sign they're wanting to kill the consumer segment, I guess.

11, I think, is what happened when they wanted to push breaking changes that customers who pay for LTSC wanted to avoid for the entire lifetime of 10.

My assumption is that 10 was as you describe, and then 11 was motivated by wanting to make disruptive changes to squeeze the last juice from the consumer segment, and the "agentic OS" pivot is just the latest attempt to squeeze the ever-drier sponge.

In particular, I would assume Microsoft sees writing on the wall with how so many people in younger demographics are using phones as primary devices and see full sized laptops and desktops as effectively legacy platforms they use at jobs, and is frantically trying to get out of that market before the bottom falls out.


> and is frantically trying to get out of that market before the bottom falls out.

I think you're right. They've been really pushing Windows 365 for businesses lately, and now have direct boot into W365. The new agentic stuff spins up temporary W365 instances to do its thing.

They even recently made data model and report creation available in PowerBI web, something I never thought I'd see happen, as PowerBI Desktop was one of the few things still locking people in that ecosystem to Windows. They've publicly said they're committed to the web version now and web will get all the new features.

Microsoft is really pushing hard on "Windows as a service." The future of Windows isn't a locally installed OS. Windows is going to become just another app on every other platform. It's no coincidence that they renamed the remote desktop app to the "Windows app." Macs, chromebooks, phones, tablets, doesn't matter. No matter what you have, you will still be able to access Windows.

They do need to drive as many consumers off of it first though before pulling the rug and going subscription unless they want even more bad press.


On top of that: it seems like Microsoft (and Google too) is doing its best at alienating customers with those measures.

Everything now either accesses your data directly unless you opt out, or you can't even opt out at all.

This AI rush/push is also permeating every product line and product: from the office suite, to GitHub, to VS Code; even open source tools like Playwright are getting AI shoved in, and it feels like everything else is an afterthought.

It seems Nadella is making a Ballmer-level play. Ballmer had the right intuition: that Microsoft had to move its focus from the desktop to the cloud. But the execution was poor. Now history is repeating itself.


It’s a perception problem AMPLIFIED by a trust problem. People don’t trust Microsoft. Or at least they trust Google more than they trust Microsoft.

For a small period of time, I was actually using Edge + Copilot everyday (and it was decent) but their competition has improved so much and appears WAY more privacy focused. I know that Sam Altman is trying hard to stay within the bounds of people’s trust, which once broken is hard to replace (he even said so in an interview).


I'll trust Sam Altman as far as I can throw him.

For now he doesn't seem like a bad actor in the AI industry.

However, the moment it becomes more sustainably profitable to grow a mustache and start wearing monocles he likely will, and if he doesn't he'll be ousted and replaced by someone far worse.


> For now he doesn't seem like a bad actor in the AI industry.

You may want to take a closer look at Altman's persona.


I did specifically filter for "actor in the AI industry"

What a gaslighting king this CEO is. The concern isn't how functional the AI is; the concern is AI downloading any and all PERSONAL AND PRIVATE files on a whim, with no guard rails. What if I have photographs of my kids I've never uploaded anywhere EVER because I don't want them anywhere outside of MY DEVICE? Does Microsoft just magically get to suck in those files and own them? Wild.

It's these shenanigans that forced me to nuke my Windows install and go Arch. I noticed that Windows Defender will upload "suspicious" files, and there's no audit trail of what's being uploaded. So I have no way of knowing what personal documents or even proprietary software has gone up to their cloud.


Such hubris, seeing everything from HIS perspective, without taking the users into account. It is no wonder Microsoft keeps shipping crap: you can convince enterprises and push B2B products down their throats, when their users' salaries depend on bosses who just care about image, with open budgets to fund whatever is sold to them to increase the bottom line. This is less effective on consumers.

I have never in my entire life wanted to "generate an image or video". I like taking photographs and recording videos because they represent the reality of my life. Who would ever want to "generate" fake images as a matter of normal daily activities?

That is indeed a really weird statement from them.

I mostly engage with text based social media or highly technical content so I know that I'm not exactly in the center of the bell curve.

I can see a use case for "AI, go find me a holiday destination and help me plan my travel and stay". I can also see people wanting to "improve" images and videos etc to remove "blemishes".

But straight up generating random images and videos as content center pieces? That seems like a niche at best unless it's unwittingly done through the "algorithm".


Management, marketing, HR. People who want to send a message without having any kind of responsibility for it.

> It's not that people are unimpressed with AI

Oh no, I am definitely unimpressed. That AI you can have a sorta-kinda fluent conversation with is often a complete moron and a habitual liar, and the images it generates are awful - did he not see how horrible that Coke ad looked?

It'll probably end up useful in a bunch of applications soon-ish and I'll probably want to use it eventually, but in the meantime their AI is flooding the internet with absolute garbage, and they are literally shoving AI in my face at every opportunity they get.

It is painfully clear that people just aren't that interested, and they are getting increasingly desperate about finding ways to recoup their massive investments. But people aren't going to magically become enthusiastic about eating rotten garbage if you just keep stuffing it in their mouth!

If anything, their current approach is only going to make people hate AI even more. But they are in too deep, and admitting defeat and scaling it down until they have an actually good product that people genuinely want means seeing their stock price crater because they will have "lost" the "AI race". Their only option to avoid an immediate collapse is to keep lying through their teeth and keep trying to pretend that it is absolutely amazing and that you just must use it.

Or maybe the CEOs are completely delusional and genuinely believe what they are selling - I'm not sure which one is worse.


It's just Eliza. Once you toy with it and see the patterns it is just Eliza with more power behind it.

Why do you think it's like Eliza?

> If anything, their current approach is only going to make people hate AI even more

Personally I'm long past hating AI

I am pretty much at the point of viewing AI research and development as a crime against humanity

I hope I will turn out to be wrong, but as things are going right now, all I can see is that this path leads to misery for the vast majority of living humans.


Last weekend, I decided to call these AI pushers out as out of touch, because no one outside their bubbles wants the world to be replaced by AI.

Honestly, it wouldn't surprise me if in a year or two we start seeing bans on AI because of the tactics of these corporations. And I mean full-on blunt instrument legal bans on any use or creation of the things. Nobody will care about the nuances of AI, or that the AI nobody talks about is actually doing some neat things. The very things these corporations are doing with AI are, I suspect, ultimately going to doom it. Obviously that wouldn't be good, but if you abuse people enough, they eventually snap and go for the nuclear option.

Way too many people out there, like my boss, are completely hooked on letting LLMs write all their emails (leaving the rest of us scratching our heads trying to figure out what they mean).

That's so gross. Like that tells me everything about your boss. Lol. But that's what I mean. Hell, it wouldn't surprise me if we start seeing calls for the LLMs of today to be banned. AI (and I mean the other types of AI) has done some great things in the sciences/medicine and whatnot. But people will still want them banned because these companies have made LLM == AI. And it won't matter that there are other kinds of AI.

Yep, it’s a fluent conversation with an arsehole you can’t get away from and who’s constantly trying to get one over on you.

The Hitchhiker’s Guide to the Galaxy was supposed to be a comically bad vision of the future not a blueprint.


I just got a "try Gemini in Google Chrome" pop-up, and the only options were to try it or "remind me later".

Even just the remind me later option gives me such a horrible vibe. F off Google and respect my choice.


It’s really becoming desperate and counterproductive.

I’ve had an Alfred command since 2013 where I type `wiki something` which then opens Confluence and searches for `something`. I use this to quickly search our company wiki for terms without breaking my concentration and flow.
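Under the hood, a shortcut like that is just building a search URL and handing it to the browser. A minimal sketch in Python, assuming a hypothetical Confluence base URL (the real Alfred workflow and instance address will differ):

```python
import urllib.parse

# Hypothetical Confluence instance; substitute your own wiki's base URL.
CONFLUENCE_BASE = "https://example.atlassian.net/wiki"

def confluence_search_url(term: str) -> str:
    """Build a Confluence full-text search URL for `term`.

    quote() percent-encodes spaces and punctuation so the query
    survives intact in the URL.
    """
    return f"{CONFLUENCE_BASE}/search?text={urllib.parse.quote(term)}"

# A launcher like Alfred would then open the result, e.g.:
#   webbrowser.open(confluence_search_url("deploy checklist"))
```

The point is that the whole workflow is one URL template; there is nothing for an AI summary to add, which is what makes the forced interstitial below so grating.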

Atlassian decided to add an AI summary at the top and intentionally disable the rest of the results until the AI summary has finished rendering fully. It’s insane. How is this making me more productive? It’s just shearing off one more layer of familiarity and value I’ve enjoyed for 12 years, and pushing me away from their product.

Forced adoption rarely works out unless people really want the feature and don’t know that they want it. At the very least, let us disable it.


Instead of "pushing back" I wish he'd actually listen to and address his critics' points.

He is deliberately and disingenuously missing the point. It's not that the features aren't good (maybe they are, maybe they aren't). It's about how coercive Microsoft and Windows are with its users, and this exec is failing to address that one.

Just once, I'd like to hear a question get through to these assholes asking them why they are forcing so many unwanted things onto their users. From Microsoft accounts to forced windows updates to Recall... Gone are the days when users had any control over what their computers are running.

But these kinds of questions never seem to get through to them.


It’s a lost cause. Just stop using Windows. That’s the only message that will be heard.

They only understand one thing: shareholder value. This dude's panicking because he knows how bad the next quarterly report looks when this strategy resulted in a hemorrhaging userbase. This is an expression of a stage of grief.

But it won't, will it?

They can essentially force users to receive and pay for any of their AI features. It worked so far and there is no reason to believe it will stop working anytime soon.

People are just taking it, and this guy knows it. The fact that I and a few others don't doesn't even register in Microsoft's bottom line.

The lesson learned is that you don't really have to care about your users right now. I'm certain there is a breaking point for that as well, but until there are any indications that it is reached, we should probably be glad that they are not outright insulting their users and/or charging them an additional 5 dollars a month for "disrespecting Microsoft".


Sadly, you're right.

We are finding out more and more, over the last decade, that there seems to be no limit to the amount of abuse and coercion that users will accept, and continue to use the products. I see posts here like "Uber ripped me off for $50, but I don't want to do a chargeback because they'll ban me!" We are at the point where companies can literally steal actual money from customers, and customers will still insist on continuing to use the software.

A handful of complainers on HN is not going to even dent this.

My bigger fear is that companies will fully embrace this--there is so much more hostility they can inflict on their users, that they haven't been doing. What is staying their hand? Car companies now know they can charge a subscription fee for every little feature of the car, and customers will still put up with it, so why haven't they already?? Apple knows they can lock down the Mac just like iOS and customers will still give them money, so why haven't they? Streaming sites know they can absolutely saturate everyone with ads, and people will not leave, so why haven't they?


Of course it registers. You wouldn't have a CEO putting out panicked outbursts otherwise.

I would be impressed if AI was actually 'super smart' but what we have now is not.

Yeah, too many people conflate intelligence with rote recall of facts

But Sam promised us the gpt5 was a PhD level intelligence!

Now excuse me while I go talk to my PhD wielding friend about whether the seahorse emoji exists. /s


That one always felt like the bubble jumping the shark to me. Like, what does “phd level intelligence” mean? I nearly did a phd. If I had, would I have levelled up by 10 intelligence points? Such obvious nonsense.

Exactly. It's not that everyone is saying "AI is completely worthless, get rid of it." It has its use cases; I certainly benefit from LLMs in my job every day.

That doesn't mean I want it plastered everywhere, in every app or website. That doesn't mean I want to interact with or use my computer via AI, and I especially don't want to talk to my computer to do things. Mouse & keyboard is faster.

But for now at least you can just choose not to use it. The problem is, Microsoft is putting 100% of their efforts into this while long-standing Windows bugs and regressions still exist. They're aware they exist too, and are deliberately choosing not to improve their product.


"Super smart" is a dilution of the word "smart". The ongoing dilution of the words "smart" and "intelligence" is going to haunt us for centuries. LLMs are better seen as the human in the Chinese Room, combining symbols they don't understand

"we can have a fluent conversation with a super smart AIwe can have a fluent conversation with a super smart AI"

But we can't. I can have something styled as a conversation with a token predictor that emits text that, if interpreted as a conversation, will gaslight you constantly, while at best sometimes being accidentally correct (but still requiring double-checking with an actual source).

Yes, I am uninterested in having the gaslighting machine installed into every single UI I see in my life.


LLMs are severely overhyped, have many problems, and I don't want them in my face anymore than the average person. But we're not in 2023 anymore. These kinds of comments just come off ignorant.

I dunno, I'm not fully anti-LLM, but almost every interaction I have with an LLM-augmented system still at some point involves it confidently asserting plainly false things, and I don't think the parent is that far off base.

Agreed, some days I code for 4-6 hours with agentic tools but 2025 or not I still can't stomach using any of the big three LLMs for all but the most superficial research questions (and I currently pay/get access to all three paid chatbots).

Even if they were right 9/10 (which is far from certain depending on the topic) and save me a minute or two compared to Google + skim/read-ing a couple websites, it's completely overshadowed by the 1/10 time they calmly and confidently lie about whether tool X supports feature Y and send me on a wild goose chase looking through docs for something that simply does not exist.

In my personal experience the most consistently unreliable questions are those that would be most directly useful for my work, and for my interests/hobbies I'd rather read a quality source myself. Because, well, I enjoy reading! So the value proposition for "LLM as Google/forum/Wikipedia replacement" is very, very weak for me.


There are two types of LLM defender; those who claim that it’ll be non-shit soon, just keep believing, and those who claim that it is already non-shit and the complainer is just stuck in year-1 where year is the current year.

Given that this has now been going on for a few years, both are wearing thin.

Like, I’m sorry, but the current crop of bullshit generators are not good. They’re just not. I’m not even convinced they’re improving at this point; if anything the output has become more stylistically offputting, and they’re still just as open to spouting complete nonsense.


You seem severely confused about how low the probability of being “accidentally correct” is for almost any real life task that you can imagine.

I'm less weary of people not giving a shit about it than of people making LLMs/generative AI their whole personality

Grown adults spamming the web about this new model from Megacorp X, being all giggly about the new PeLiCaN On A BiCyClE being 0.000017% more realistic than the previous version... get a life

No one gives a shit outside of these nerds, all people want is less work and more free time, they don't give a shit about your generated "art", or how fast this new model solved a problem they didn't know existed 12 seconds ago


Agree. Argue with customers at your own peril ... moreover, sophisticated devs and companies want freedom from corporate control and from the underlying attitude of entitlement ... a kind of campish behavior where customers buy in to the corporate hustle, which corps think customers are obligated to play along with.

Nah ah ...

I'll say the same thing another way: customers tell suppliers whether or not they're satisfied. They don't tell me. I tell them if I think the price is worth it. They don't tell me. Argue with me and they'll lose


Also on this side: at consulting agencies it now seems everyone is getting evaluation KPIs for AI use on project work, even when it doesn't make sense for the delivery scenario.

I think this is a super important point. Working on AI, I'm really wondering where this will actually end for our media consumption. AI is being used to generate more engaging content, which implies AI will become almost a requirement to stay visible, so more AI will be used. In the end, little to no "real content" will make it through.

Will there be a moment where people will leave social networks to get “real content” again? Will that be safe from AI optimization then?

Are we seeing the start of the demise of social media?


When have I ever wanted my OS to “generate any image I want!”?

This is like a chef being confused why people don't like the shoes he made them. Why did he make hungry people shoes? Certainly not to eat?


A fluent conversation was a great party trick but the novelty has worn thin. It had some value but overall having to have a conversation with a tool to get something done is frustrating. Like tasking a junior employee on their first day with advanced tasks and wondering why they keep missing the mark. I want a tool not an opinionated support unit, and often will stop that conversational experience by prompting it away. Having to do that is annoying.

I personally also don't have much use for generating images and videos, at least not regularly. I feel like they want us to use AI tools full time, when really we just need to jump in and use them when required, which might be quite infrequently (obviously dependent on circumstance). But who is going to pay the huge cost of having the tools available when you do want them?

So yeah, agreed. Stop making it hard for me to use my tool without accidentally engaging the LLM integration, or just flat out forcing its usage. I don't want the future price hike that comes with LLMs


> It's not that people are unimpressed with AI - they're just tired of constantly being bombarded with it, and it sneaking its way into where it's not wanted. "Generate any image you want!" "Analyse this thing with AI!" gets pretty tiring.

Isn't it just the second-person address? If there were just a Generate button/tab, without it explicitly addressing me and asking/begging, I wouldn't mind it.


Isn’t it just that people are unimpressed with Microsoft Copilot? I’ve always felt that other models work quite well. If the implementation on their side has issues, they shouldn’t blame users for disliking it.

Yes, Copilot is a lot dumber than ChatGPT, which is really curious because it's basically a wrapper around... ChatGPT...

I guess they just put really tight limits on compute per request which hurts its performance.


My Google chrome browser on my phone yesterday suddenly now has an AI mode.

Unfortunately for them, it doesn't matter that it's impressive, because I don't need it.

“Would you like me to summarise this email?”

No. Go away.


100% this. Useful tool sure, don't need it to be in everything all the time for no reason.

Also, I am unimpressed.

I am unimpressed with it. If I wanted to steal code off Stack Overflow, I could do that myself. Another layer of indirection has negative value.

I can generate images that are difficult to use commercially. I can analyze something with AI but I can't confidently use that output in any setting that matters.

For people attempting to engage in profitable work, AI is miserably unimpressive. I don't know what planet this guy is living on. Time is money. Flowery emails and off-axis summaries can only create a waste of that time.


All they have to do is sell regular "windows" and "windowsAI" as a separate product and everyone would be fine with it... but no FUCK YOU ads in your fucking start menu you stupid bitch customer.

(Sorry for the big words, but this is how I feel they are talking to me, when the base operating system I am supposed to trust absolutely with my privacy treats me like an idiot.)


"If I want AI I'll actively seek it out and use it - otherwise, jog on."

If I want MS Windows I'll actively seek it out and use it - otherwise, jog on

If this is not a statement you can make, then Redmond gets to decide what you use, not you


Usually complaints about changes to Microsoft software come from people who are using the software

It's possible that Linux or MacOS users might complain about new "AI features" in MS Windows, but more likely it is MS Windows users who are complaining

If it's possible for a computer user to switch between OS, then there is less reason to complain. Those users can make the statement, "If I want to use MS Windows, I will actively seek it out and use it, otherwise [I will not]"

For example,

"I do not like this Microsoft "AI feature" so I'll use MacOS instead" (possible for user to switch OS)

versus

"I do not like this Microsoft "AI feature" so I'll complain to Microsoft via online comment forums" (stuck using WIndows OS, no choice)

If you cannot make the statement "If I want MS Windows, then I'll use it, otherwise I will use something else" then Redmond, not you, makes the decisions on what you will use, including "AI features" you may not like

Because if you use Windows, chances are you have "automatic updates" enabled

This allows Redmond to install new software on your computer whenever they like, e.g., "updates"


Linux and Mac users would disagree.

I have not used MS Windows on my own computers since the early 2000s

I did not switch from MS Windows to Linux nor MacOS

I switched to NetBSD which I had originally used on the VAX before Linux existed

I have owned Macs and iPhones. But Apple became a lost cause many years ago

Today I use both Linux and NetBSD

I prefer compiling the software I use myself. I am not a fan of "binary packages"

When I use the term "you" in an HN comment I am not referring to myself


Users of operating systems other than MS Windows, e.g., Linux, MacOS, etc., can obviously make this statement

They decide what OS they want to use, and therefore what "features" they will accept

Whereas MS Windows users must accept what Redmond decides they should use


If you cannot choose another OS besides Windows, then you are stuck with whatever Redmond decides

You can choose another operating system. No one is forcing you to reward Microsoft for their bullshit.

Well, and companies are papering over usability problems with AI. LLMs are not a substitute for good human-centric design.

It's almost as if all the focus has been on eliminating the human... for products designed for humans.


I use AI everyday and it’s now integral to my workflow. However, even I still hate the hype train and having it constantly stuffed down my throat. Nevermind AI slop.

Windows 11 is already adware. No wonder people are complaining about more ads.



