
a very human reaction ;)

Great article!

I still remember when people cherished the arrival of IE5.5 and, later, IE6. They were once the best browsers.


It reminds me of what I said to somebody recently:

All my friends and family are using the free version of ChatGPT or something similar. They will never pay (although they have enough money to do so).

Even in my own narrow, subjective circles, it does not add up.

Who pays for AI and how? And when in the future?


People are always so fidgety about this stuff, for super understandable reasons, to be clear. People not much smarter than anyone else try to reason about numbers that are hard to reason about.

But unless you have the actual numbers, I always find it a bit strange to assume that all people involved, who deal with large amounts of money all the time, lost all ability to reason about this thing. Because right now that would mean at minimum: All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.

Of course, there is a lot of uncertainty, which, again, is nothing new for these people. It's just a weird thing to assume.


Pets.com, Enron, Lehman Bros, WeWork, Theranos, too many to mention.

Investors aren't always right. The FOMO in that industry is like no other.


The point is not whether they are right, but how low the bar is for what constitutes a palatable opinion from bystanders on a topic that other people have devoted a lot of thought and money to.

I just don't think "I don't know anyone who pays for it" or "You know, companies have also failed before" bring enough to the table to be interesting talking points.


I think it's a bit fallacious to imply that the only way we could be in an AI investment bubble is if people are reasoning incorrectly about the thing. Or at least, it's a bit reductive. There are risks associated with AI investment. The important people at FAANG/AI companies are the ones who stand to gain from investments in AI. Therefore it is their job to downplay and minimize the apparent risks in order to maximize potential investment.

Of course at a basic level, if AI is indeed a "bubble", then the investors did not reason correctly. But this situation is more like poker than chess, and you cannot expect that decisions that appear rational are in fact completely accurate.


Fallacy: Appeal to authority.


> All the important people at FAANG, all the people at OpenAI/Anthropic, all the investors.

It's like asking big pharma if medicine should be less regulated: "all the experts agree", well yeah, their paycheck depends on it. Same reason no one at Meta tells Zuck that his metaverse is dogshit and no one wants it; they still spent billions on it.

You can't assume everyone is that dumb, but you certainly can assume that the yes men won't say anything other than "yes".


Again, this is not an argument. I am asking: Why do we assume that we know better and people with far more knowledge and insight would all be wrong?

This is not a rhetorical question; I am not looking for a rhetorical answer. What is every important decision maker at all these companies missing?

The point is not that they could not all be wrong; they absolutely could. The point is: make a good argument. Being a general doomsayer when things get very risky might absolutely make you right, but it's not an interesting argument, or any argument at all.


I think you have a point and I'm not sure I entirely disagree with you, so take this as lighthearted banter, but:

Coming from the opposite angle, what makes you think these folks have a habit of being right?

VCs notoriously make lots of parallel bets, hoping one pays off.

Companies fail all the time, either completely (eg Yahoo! getting bought for peanuts down from their peak valuation), or at initiatives small and large (Google+, arguably Meta and the metaverse). Industry trends sometimes flop in the short term (3D TVs or just about all crypto).

C-levels, boards, and VCs being wrong is hardly unusual.

I'd say failure is more the norm than success, so what should convince us it's different this time with the AI frenzy? That they wouldn't be investing this much if they were wrong?


The universe is not configured in such a way that trillion dollar companies come into existence without a lot of things going well over long periods of time, so if we accept money as the standard for being right, they are necessarily right, a lot.

Everything ends and companies are no exception. But thinking about the biggest threats is what people in managerial positions in companies do all day, every day. Let's also give some credit to meritocracy and assume that they got into those positions because they are not super bad at their jobs, on average.

So unless you are very specific about the shape of the threat and provide ideas and numbers beyond what is obvious (because those will have been considered), I think it's unlikely, and therefore unreasonable, to assume that a bystander's evaluation of the situation trumps the judgement of the people making these decisions for a living, with all their additional resources and information, at any given point.

Here's another way to look at this: imagine a curious bystander were to judge decisions that you make at your job, while having only partial access to the information you use to do that job, which you have done every day for years. Will this person be right at some point, if we repeat this process often enough? Absolutely. But is it likely in any single instance? I think not.


[flagged]


They were responding to me and I have no issue with their answer (although I don't particularly agree with it).

Take your attitude somewhere else. It sucks.


> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?

Because of historical precedent. Bitcoin was the future until it wasn't. NFTs and blockchain were the future until they weren't. The Metaverse was the future until it wasn't. Theranos was the future until it wasn't. I don't think LLMs are quite on the same level as those scams, but they smell pretty similar: they're being pushed primarily by sales- and con-men eager to get in on the scam before it collapses. The amount being spent on LLMs right now is way out of line with the usefulness we are getting out of them. Once the bubble pops and the tools have a profitability requirement introduced, I think they'll just be quietly integrated into a few places that make sense and otherwise abandoned. This isn't the world-changing tech it's being made out to be.


You don't have an argument either btw, we're just discussing our points of view.

> Why do we assume that we know better and people with far more knowledge and insight would all be wrong?

Because money and power corrupt the mind, coupled with obvious conflicts of interest. Remember the hype around AR and VR in the mid-2010s? Nobody gives a shit about it anymore. They wrote articles like "Augmented And Virtual Reality To Hit $150 Billion, Disrupting Mobile By 2020" [0]; well, if you look at the numbers today, you'll see it's closer to $15B than $150B. Sometimes I feel like I live in a parallel universe... these people have been lying and overpromising for 10, 15, or 20+ years, and people still swallow it because it sounds cool and futuristic.

[0] https://techcrunch.com/2015/04/06/augmented-and-virtual-real...

I'm not saying I know better. I'm just saying you won't find a single independent researcher who will tell you there is a path from LLMs to AGI, and certainly not any independent researcher who will tell you the current numbers a) make sense, b) are sustainable.


Someone is paying. OpenAI revenue was $4.3 billion in the first half of this year.


You forgot this part:

> The artificial intelligence firm reported a net loss of US$13.5 billion during the same period

If you sell gold at $10 a gram, you'll also make billions in revenue.


That loss includes the costs to train the future models.

Like Dario/Anthropic said, every model is highly profitable on its own, but the company keeps losing money because they always train the next model (which will be highly profitable on its own).
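
To make that dynamic concrete, here is a rough sketch with made-up round numbers (purely illustrative; these are not OpenAI's or Anthropic's actual figures):

    Model N:   $1B to train, then ~$3B inference revenue against ~$1B
               serving cost over its lifetime  ->  +$1B on its own.
    Model N+1: training starts immediately and costs ~$3B, because
               training budgets grow roughly 3x per generation.

So in the period where model N earns its $3B, the books also show ~$3B of training spend for model N+1. Every individual model can be profitable while the company posts a loss every year, for as long as training budgets keep growing.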


But even if you remove R&D costs, they’re still billions of dollars short of profitability. That’s not a small hurdle to overcome. And OpenAI has to continue to develop new models to remain relevant.


Reminds me of the Icelandic investment banks during the height of the financial bubble. They basically did this.


OpenAI "spent" more on sales/marketing and equity compensation than that:

"Other significant costs included $2 billion spent on sales and marketing, nearly doubling what OpenAI spent on sales and marketing in all of 2024. Though not a cash expense, OpenAI also spent nearly $2.5 billion on stock-based equity compensation in the first six months of 2025"

("spent" because the equity is not cash-based)

From https://archive.is/vIrUZ


How the fuck does anyone spend 2 billion dollars on sales and marketing? I've seen the odd ad for OpenAI, but that number seems completely bananas.


Astroturfing on social media, most likely. The AI hype almost certainly isn’t entirely organic.


All that free use by millions of users is sales and marketing.


> Who pays for AI and how?

The same way the rest of webshit is paid for: ads. And ads embedded in LLM output will be impervious to ad blockers.


> They will never pay

Of course they will, once they start falling behind by not having access to it.

People said the same things about computers (they are just for nerds, I have no use for spreadsheets) and smartphones (I don't need apps/big screen, I just want to make/receive calls).


I use it professionally, and I rotate 5 free accounts across all platforms. Money doesn't have any value anymore; people will spend $100 a month on LLMs and another $100 on streaming services. That's like half of my household's monthly food budget.


I'm sure providers will find ways of incorporating the fees into e.g. ISP or mobile network fees so that users end up paying in a less obvious, less direct way.


The cost of serving an "average" user would only fall over time.

Most users rarely make the kind of query that would benefit a lot from the capabilities of GPT-6.1e Pro Thinking With Advanced Reasoning, Extended Context And Black Magic Cross Context Adaptive Learning Voodoo That We Didn't Want To Release To Public Yet But If We Didn't Then Anthropic Would Surely Do It First.

And the users that have this kind of demanding workload? They'd be much more willing to pay up for the bleeding-edge performance.


The bet is that people will pay for services which are, under the hood, being done by AI.


I pay. If they're just using it to chat, then they won't pay.

But I use it for work.


> Who pays for AI [...]?

Venture capital funding adding AI features to fart apps.


People said the same thing about Facebook. The answer: advertisers.


They will eventually get ads mixed into the responses.


AI companies don't have a plausible path to profitability, because they are trying to create a market while their model is not scalable, unlike other services that have done this in the past (DoorDash, Uber, Netflix, etc.).


Does somebody know where to get a higher-resolution version of that map?


They could have been the JetBrains (before JetBrains existed), and even bigger than JetBrains.


I remember thinking back then that Jetbrains had their pricing right.

Not free, but low enough that individual developers and companies wouldn't think twice about buying a license.

Borland/Inprise/Codegear/Embarcadero just priced themselves out of the market.


They should just offload themselves to JetBrains and have Delphi open-sourced.


Open-sourcing Delphi, along with commercial licensing to continue development, is on my wish list if I ever become a billionaire.


If somebody wants to help XSLT 3.0...

This open source Rust implementation needs more help integrating the full XSLT 3.0 standard into its engine:

https://github.com/Paligo/xee


So does every other open source XSLT implementation, along with XPath/XQuery 3.x. Most of them at best support 2.x or 1.x with some 2.x-compatible extensions, but I'm not aware of any that fully support the 3.x variants, besides Xidel (XPath/XQuery 3.0). The proprietary Saxonica is the go-to at the moment.


What is the max input and output resolution of images?

This is why I'm sticking mostly to Adobe Photoshop's AI editing because there are no restrictions in that regard.


In my testing it has been stuck at 1024x1024. Have to upscale with something...


Around 1 megapixel, AFAICT.
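
If you need to live within that limit, pre-scaling on the client is straightforward. Here's a minimal sketch in TypeScript, assuming the roughly 1 MP input cap observed above (an observation from this thread, not a documented figure):

    // Downscale an image to at most `maxPixels` total pixels, preserving
    // aspect ratio. The 1 MP default reflects the cap observed above,
    // not a documented limit.
    async function downscaleToCap(file: File, maxPixels = 1_000_000): Promise<Blob> {
      const bitmap = await createImageBitmap(file);
      const scale = Math.min(1, Math.sqrt(maxPixels / (bitmap.width * bitmap.height)));
      const canvas = document.createElement("canvas");
      canvas.width = Math.floor(bitmap.width * scale);
      canvas.height = Math.floor(bitmap.height * scale);
      canvas.getContext("2d")!.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
      return new Promise((resolve, reject) =>
        canvas.toBlob((b) => (b ? resolve(b) : reject(new Error("encoding failed"))), "image/png")
      );
    }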


Best comment from another related thread (not from me):

So the libxml/libxslt unpaid volunteer maintainer wants to stop doing a 'disclosure embargo' for reported security issues: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913 Shortly after that, Google Chrome wants to remove XSLT support.

Coincidence?

Source (yawaramin): https://news.ycombinator.com/item?id=44925104

PS: It seems libxslt, which is used by Blink, has an (unpaid) maintainer, but nothing is really going on there; it seems pretty unmaintained: https://gitlab.gnome.org/GNOME/libxslt/-/commits/master?ref_...

PS2: All of this reminds me of https://xkcd.com/2347/. A shame that libxml and libxslt could not get more support while being used everywhere. Thanks to the unpaid volunteers for all the hard work!


This seems totally fine, though? The XSLT 1.0 maintainer says supporting it is costing him heavily; then Chrome says removing support is fine, which seems to suit both of them.

It'd be much better if Google supported the maintainer, but given the apparent lack of use of XSLT 1.0 and the maintainer already having burned out, dropping XSLT support seems like the best available outcome:

> "I just stepped down as libxslt maintainer and it's unlikely that this project will ever be maintained again"


Mozilla doesn't use libxslt


I also wondered about that. They probably don't want to do that because they'd then be on the hook for maintaining it, fixing bugs, and allocating resources to it.

A browser extension on the user side could probably do the same job if a page relying on XSLT cannot be updated.


This seems like the kind of thing that won't require any resources to maintain, other than possible bugfixes (which 3rd parties can provide). It only requires parsing and DOM manipulation, so it doesn't really require any features of JS or WASM that would be deprecated in the future, and the XSLT standard that is supported by browsers is frozen - they won't ever have to dedicate resources to adding any additional features.
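
For what it's worth, the shim surface is small. A minimal sketch in TypeScript: the feature detection and DOM plumbing use real browser APIs, while the WASM-backed `wasmTransform` function is hypothetical and stands in for whatever engine a polyfill would ship:

    // Hypothetical WASM-compiled XSLT engine: takes stylesheet and input
    // XML as strings, returns the serialized transform result (assumed API).
    declare function wasmTransform(xslt: string, xml: string): Promise<string>;

    async function applyXslt(xsltUrl: string, xmlDoc: Document): Promise<DocumentFragment> {
      const xsltText = await (await fetch(xsltUrl)).text();
      if (typeof XSLTProcessor !== "undefined") {
        // Native path: the browser still ships XSLT support.
        const xsltDoc = new DOMParser().parseFromString(xsltText, "application/xml");
        const proc = new XSLTProcessor();
        proc.importStylesheet(xsltDoc);
        return proc.transformToFragment(xmlDoc, document);
      }
      // Polyfill path: run the transform in WASM, then parse the output.
      const xml = new XMLSerializer().serializeToString(xmlDoc);
      const tpl = document.createElement("template");
      tpl.innerHTML = await wasmTransform(xsltText, xml);
      return tpl.content;
    }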


That is an interesting approach; you could suggest it. In general, using JS to implement web APIs is very difficult, but using WASM might work, especially for the way XSLTProcessor works today.


It will. It will make old, non-updated pages break, with the same fate as old outdated pages which used MathML in the past and were not updated with polyfills.


FYI, MathML is currently shipping (again, after all these years) in Chrome, Firefox, and Safari[1].

[1] https://mathml.igalia.com/

