“You told me these were the best engineers in the world!!”
“I said they were the best engineers in Canada”
(Great quote from the BlackBerry movie).
Rings true here. You can’t fight market forces. To push out the US tech you need to build something that’s better than the US tech. Anything else is just wishful thinking.
> To push out the US tech you need to build something that’s better than the US tech. Anything else is just wishful thinking.
Not true at all; here's a perfect example from the ride-sharing world. Lyft and Uber left Austin a decade ago over a city ordinance requiring background checks, so a couple of local tech folks pitched in a very small amount of money, relatively speaking, and built a non-profit version of Uber. Everyone loved it: drivers got paid more, it was cheaper overall because it was a non-profit, the app worked just fine, etc. The app buildout was somewhere in the seven-figure range.
All was good until Lyft and Uber came back, artificially undercut the non-profit app until it died, and then drove prices back up.
And that was ten years ago. Today, a rockstar infra expert and product engineer could easily stand up a scalable ride-share clone. And if people are mad enough (and it sure seems like people are getting mad at the US), then the energy is there for users to make a change.
A ride sharing app is ridiculously easy to create.
Most of the work is in network effects: building a large pool of drivers willing to work below minimum wage and a large pool of riders interested in paying you a lot more than that.
Doesn't your story make the point that the nonprofit app only worked under new government regulations and could not survive in the free market?
I do think more infrastructure should be non-profit, but if someone makes a for-profit version that beats you, there's not really much to do other than hope the government has your back.
The nonprofit app worked because the existing players didn't want to do required background checks on drivers and exited the market to make the local government look bad. When that tactic failed, they came back and used some of their VC billions to recapture the market by artificially lowering the price of their services. That's not at all a "free market"; that's buying your way to a monopoly (or, more technically, an oligopoly in this case).
Is US tech even good anymore? Don't we all encounter the massive amount of shit from companies like Google, MSFT, Apple, Amazon, etc. as users? Truly terrible bugs and user flows from engineers who clearly don't care, while everyone is just collecting their own share of blood.
I can't think of a single thing that big tech has done to improve my life, or society for that matter, over the last 10 years.
All US tech has is the backing of the US government, and that is likely to change in the coming decade. Without the pressure of the US government, would these companies be as competitive? We see what happens when others try to regulate them, rightfully I might add: they throw extreme hissy fits and pressure the US government to force those countries to back off (by threat of sanctions or military action).
I sell and work as a consultant for M365 and Azure, and the services are definitely getting worse. AI-translated garbage docs in which "plane" is rendered as "Flugzeug" (an aircraft), Exchange as "Umtausch" (the literal act of exchanging something), and so on. Obviously those are the ones I can remember because they were funny. There are also other errors that are not as obvious.
And don't get me started on slopilot being everywhere.
I see US (software) tech going the way of Boeing and Intel in the next decade. I'm not sure what their long-term goals are, or if they even have any beyond chasing large, quick short-term profits, but you can only enshittify your product and abuse your customers for so long before they start abandoning you.
No, US tech is driven by investors willing to risk allocating a ton of capital towards companies and products that have a good chance of succeeding.
Europe has been struggling and behind on tech and investment since way before Trump. It's policy and over-regulation that prevent Europe from making any inroads.
You absolutely can fight market forces. China did it for decades with their car industry. Chinese people were financially forced to buy inferior Chinese cars to support a domestic industry until it learned to compete in the global market. Very difficult to do this in a democracy, though.
In a way, yes, but there are some differences. The US market was never as heavily restricted as the Chinese market, with foreign competitors allowed to open up factories in the US to avoid tariffs. You can do that now in China, but until pretty recently you had to split ownership with a Chinese company to enter that market. Also US car brands have always had a significant export market (vs China only in the last few years), so our tariffs have always been more about jobs than industry development (though that makes no difference to the economic effect of the tariffs on consumers). Which is why foreign competitors were always free to avoid them so long as they employed Americans at the factory.
US tech power is a bit like US political soft power: it's there because it's huge and has momentum, but it won't necessarily be here forever, especially given the current trajectory.
Have you ever heard someone open Word or any other Microsoft product and say "wow, this is such a good piece of software, I'm so happy my corporation forces me to use it and I would pay to get more of that shit in my life"? lol
Your "better" assumes that availability is not a problem.
The risk we need to mitigate is that some right-wing doofus in the US gets triggered by a Twitter reply and decides to block our use of all US software and services.
In that case, having LibreOffice installed locally does not seem so bad.
You talk about "market forces" and you don't seem to understand them at all.
"Confidentiality", "Integrity", and "Availability" are a foundational concept of security (the CIA triad).
For non-US citizens "Integrity" and "Confidentiality" have been compromised for a long time, but these things have no day-to-day impact. They are only relevant as kompromat material once you become powerful and they want you to act in US interests.
What's new are serious, escalating threats and actions against "Availability". This is the most important pillar of security, and a whole different beast. Microsoft blocked email accounts at the International Criminal Court due to political pressure. Buffoons in US tech leadership, such as Cloudflare's CEO, feel so emboldened that they openly threaten to cut off Italy. After the TV performances by Musk, Thiel, Tim Apple, Zucky, and Bezos in favor of Trump, there is no doubt they would cut off another country as a form of pressure, even if only for a week.
For that week, our markets would be offline and nonfunctional. The market has a very high incentive to untangle itself from this mess of shitty bootlickers and impulsive convicted criminals.
It will take some time, but the market forces are clearly following the new incentives.
What surprises me here on HN is that people who are seemingly US tech workers are quite ignorant of how it feels to be on the receiving end of this totally reckless, unprompted, and idiotic behavior.
Your argument based on false-equivalence bias might work in a megachurch, but not here.
Amazon dropping Parler, a shitty US-based right-wing social network nobody outside the US ever heard of, is totally on the same level as the US waging economic warfare against Europe and laying claim to sovereign countries like Canada and Greenland. /s
You're projecting a lot. Are you by chance attending the same megachurch? Or is it called a gigachurch now? Do they also offer a drive-in to grab a coffee and soda before service?
There's a docuseries about the US called "The Righteous Gemstones" - I can highly recommend it.
Your HN submission was flagged for being anti-Ukraine propaganda.
I fully understand why you whine about Parler, but you are not a credible actor by any means. There is no reason to take any of your bad-faith arguments seriously.
No, you have to build something that works reasonably well, stop being a fucking dependency slave in strategic areas, and then try to catch up. Of course, that doesn't work for small countries.
Only if you think that government's only purpose is to look pretty. Economies are planned. You can either plan them as governments, or let your oligarchs and foreign oligarchs plan them together ("market forces.") These only look the same when you allow oligarchs to determine your governments.
At the very least, you want domestic oligarchs determining your governments. Their power is based in your country, and they might have a bit of sentimentality on top of that. Leaving it to "market forces" is just watching, not participating.
If some guy in Canada builds something better than current US tech, he's going to sell it to a US oligarch and probably move there, too.
edit: "Our ambition cannot stop there though. In far too many cases, our governments, universities, schools, and other public institutions—not to mention private businesses—are run on Microsoft or Google services. Now is the perfect time to get governments off Microsoft 365 and schools off Google Classroom by properly resourcing a new public agency or Crown corporation dedicated to building technology in the public interest."
This has always been the only answer, but it requires a relatively clean government. The government has to maintain ownership of these things, and cannot subcontract out the work.
You’re getting downvoted because you touched on a sensitive spot with some folks, but you’re right.
If other countries want to stop their reliance on US tech then they need to build better tech. Your BlackBerry quote shows that playing out in reverse. A non-US company dominated the market, a US company built something better (the iPhone) and the non-US company imploded.
This is such an American take, diametrically opposed to reality. You literally could not be more wrong. The correlation between "effort to fight market forces" (i.e. protectionism) and "independence from US tech" is 1:1. It's China, then Korea, then the rest of the world, which is all 100% dependent on US tech. China is independent entirely thanks to protectionism and banning US tech right from the start; Korea is in between thanks to the exact same.
The only thing that works is throwing up huge barriers against dumping. This is the norm for physical goods. US big tech, and really Silicon Valley, is based on dumping: burning VC cash to become a monopoly. This is not a hair better for a domestic industry than being flooded by physical goods that are cheap thanks to burning through (let's say Chinese) government cash. In the latter case we love to call this "artificially cheap", though for some reason I've never heard that adjective used for US tech monopolizing by burning VC cash.
Makes sense given the search alliance already in place.
Amazon/AWS was trying hard to push its partnership with Apple once that was revealed, including vague references to doing AI things, but AWS is just way too far behind at this point, so it looks like they lost out here to Google/GCP.
All that power has to come from somewhere. The idea that all this AI is powered by “green” energy and unicorn farts is just a bunch of PR puffery from tech companies trying to divert attention from the environmental damage they’re causing.
The uncomfortable truth is that AI is the biggest setback on our path to energy sustainability we’ve seen in a generation.
We can power it all and then some with renewable and nuclear energy, but we elected a regime openly hostile to that and openly pro-fossil-fuel. They literally ran on burning more coal, so it shouldn't be a surprise that we are burning more coal.
AI doesn't matter; if it's not AI, it'll be EVs. Or if you're pro-immigration (as I am), what do you think letting more people into the country does for power demand? It's something like 5 kW per head, averaged out over 24/7, and that's probably conservative when you do a full accounting of all demand per head. Every new immigrant is probably equivalent to a rack of GPUs.
Degrowth is political fantasy. It will establish a populist backlash every time. Or are you going to line up to be the first to become poorer?
I look at that stuff as a very privileged fantasy. Only the rich can romanticize poverty. The people who fantasize about green back-to-the-land scenarios are usually wealthy middle- or upper-class people in developed nations who have zero first-hand experience of what that actually means outside the Avatar films.
It is, but degrowth is an election-losing proposition. Any talk like this needs to be transparently non-hostile to demand, for political purposes. The solution should be something like requiring them to build nuclear or renewable energy, or taxing them and putting the money into a subsidy fund for clean energy.
By the time a nuclear plant comes online, renewables will have incrementally added 400 gigawatts. Granted, nukes generate 4 to 8 times more energy per gigawatt of capacity, but solar can significantly improve crop yields and soil health, and solar farms make it easy to raise sheep and cattle. It's a good thing I like lamb (yum).
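To make that "4 to 8 times" figure concrete, here's a rough capacity-factor sketch; the factors below are ballpark assumptions, not measurements, and real values vary a lot by site:

```python
# Capacity factor: the fraction of nameplate power actually delivered
# over a year. These are ballpark assumptions, not measurements.
NUCLEAR_CF = 0.90              # modern plants routinely run near this
SOLAR_CF_RANGE = (0.11, 0.22)  # poor to excellent solar sites

HOURS_PER_YEAR = 8766  # 365.25 days

def annual_gwh_per_gw(cf: float) -> float:
    """GWh generated per year by 1 GW of nameplate capacity."""
    return cf * HOURS_PER_YEAR

low, high = SOLAR_CF_RANGE
print(f"nuclear: {annual_gwh_per_gw(NUCLEAR_CF):,.0f} GWh/yr per GW")
print(f"solar:   {annual_gwh_per_gw(low):,.0f} to {annual_gwh_per_gw(high):,.0f} GWh/yr per GW")
# 0.90/0.22 ~= 4x and 0.90/0.11 ~= 8x: the "4 to 8 times" range above
print(f"ratio:   {NUCLEAR_CF / high:.1f}x to {NUCLEAR_CF / low:.1f}x")
```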
I favor public education, but let's not kid ourselves, there is not a polity on earth where degrowth would get more than 20% support. It's a weird social media echo chamber artefact that will exclusively sabotage efforts to decarbonize.
In a sane election system, 20% gives a party a significant position in the government that influences the coalition and drives some of the future decisions. Just not in the two-party circus.
> In a sane election system, 20% gives a party a significant position in the government that influences the coalition and drives some of the future decisions. Just not in the two-party circus.
There is no consensus among political scientists that either a two-party system or a multi-party/coalition system is inherently “better.” Each design produces different trade-offs in representation, stability, accountability, and policy outcomes.
...or everyone else decides to marginalize that 20% party and allies with the far right instead (I don't want to defend the US system, but proportional representation is not a panacea either).
> there is not a polity on earth where degrowth would get more than 20% support
Eh, I'm not so sure about that. Sustainability politics is mainstream in Europe in a way it isn't in the US. Aside from ethical concerns, a lot of people over here see climate change as a very real economic threat (likely to cause them material economic harm within their lifetimes).
You're probably right that a general degrowth strategy wouldn't ever be popular, but I bet a policy that, say, restricted AI and cryptocurrency with the aim of reducing electricity prices would be.
> You're probably right that a general degrowth strategy wouldn't ever be popular, but I bet a policy that, say, restricted AI and cryptocurrency with the aim of reducing electricity prices would be.
That's arbitrary. If you went back in time before AI and crypto, which industries would you pick to constrain growth or development of?
Is it whatever the latest industry is that is driving incremental emissions? If so, I don't know that it is a compelling mental model, because that is a degrowth mindset.
Intentionally reducing quality of life in the short term will never win elections, no matter how educated a populace is. The best strategy for reducing consumption that seems to be working is allowing below-replacement total fertility rates.
>> But then you get an aging population and all the problems that that brings with it.
> Only for a generation (mostly \s but entirely true).
That's not accurate. The problems of population aging are not confined to a single generation. They are structural and persistent, unless the underlying institutions adapt.
Aging is a continuing demographic process, not a single event. Once a society enters sustained low fertility and longer life expectancy, each cohort is smaller than the one before it. Each cohort also lives longer. That means that today's workers support more retirees. Tomorrow's workers will support even more, unless something changes.
It can feel like (but isn't) a single-generation problem if major structural changes happen, like raising the retirement age in line with life expectancy, shifting pensions to funded schemes, large-scale immigration, major productivity gains from technology, or cultural shifts back to high fertility.
> Once a society enters sustained low fertility and longer life expectancy, each cohort is smaller than the one before it.
I mean, unless fertility completely collapses (to less than 0.5 or so), it'll mostly be a single-generation problem. Regardless of any future changes, the current generation (my kids etc.) will be supporting a much larger older cohort, with problems arising from that. I am one of 4 siblings and have two kids; as long as both of them have two kids, no more problems arise (obviously extrapolating to the population).
There's some amount of irreducible demand for kids so I'd be surprised to see TFR continue to decline on a generational basis. Mind you, I could be wrong (or alternatively, we could see a massive increase in TFR like we did post WW2).
Yep, someone was bragging recently that they used 13B tokens last year. At 8 mg CO2/token, that's ~100 t of CO2. The consumption of 5 households (or 200 NYC-London flights), just for vibe coding!
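A quick back-of-envelope check of that arithmetic (the 8 mg/token figure is the parent's assumption, and per-token emissions estimates vary wildly; the flight figure is likewise a rough per-passenger estimate):

```python
# Back-of-envelope check of the claim above. Both input figures are
# assumptions from the comment, not measured values.
tokens = 13e9            # 13B tokens
co2_per_token_g = 0.008  # 8 mg, in grams

total_tonnes = tokens * co2_per_token_g / 1e6  # grams -> tonnes
print(f"{total_tonnes:,.0f} t CO2")            # ~104 t, i.e. "~100t"

flight_tonnes = 0.5  # rough per-passenger NYC-London one-way estimate
print(f"~{total_tonnes / flight_tonnes:,.0f} flights")  # ~200
```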
Thank you for acknowledging the elephant in the room. I've literally seen people on HN argue that AI's increased power demand isn't bad for climate goals, because the money will encourage renewables.
It's astounding how people don't see it, even when it's the invisible hand of the market that's choking them to death.
It's that the majority of AI deployments are happening in a country which has had very poor renewable adoption and is now actively sabotaging renewable projects, with active opposition to climate goals, because a particular group wants to protect its existing revenue.
Renewables are cheap and highly profitable, and money talks - even in the US, as can be seen in Texas. But it's hard to fight against your government when they want to force you to buy their rich friends' fossil fuels instead...
This is a pretty gross mischaracterization of what's happening. There's been a lot written about the fluff that is a lot of these AI company "purchases" of "green" energy. In practice there's no way to get that power from (insert middle-of-nowhere location with a green energy plant) to (insert location of AI datacenter), so to actually power the data center the utility is forced to fire up some clunky old coal plant to keep the chips powered.
The AI company is issuing press releases saying how they bought all this clean power but in practice they just forced some old clunky power plants back online to meet their demand.
What you are describing is purchasing certificates from renewable energy vendors, which, while technically a small investment (more money to the renewable energy vendor → renewable business growth → more renewable energy projects), has very little to do with renewable energy projects like those I was talking about.
It is technically possible for the AI companies to decide to become self-sufficient or enter into the energy production market if things tilt far enough in favor of that, but it is somewhat unlikely and unexpected.
Big renewable projects are run by electricity producers, not consumers, and they are the ones being actively sabotaged in all sorts of ways.
"At BigGridCo we're proud to switch AI to 100% renewable power. On paper we just send all the dirty power to (scoffs) pesky houses and industry, leaving the clean power for AI."
Nuclear power works too, it’s clean and low carbon impact.
Can Microsoft and Google not afford to build a battery factory or nuclear power plant? Are they broke or something?
Why is the solution to scarcity of supply to bend over backwards and roll back regulations? The scarcity of supply itself should be a hint to society to stop supporting unfettered growth. Or maybe these mega-corporations need to get over it and pay fair market value for the projects they want to build.
Why do we have to breathe coal power emissions so that we can have one more ChatGPT wrapper nobody asked for?
> Nuclear power works too, it’s clean and low carbon impact.
You want an AI company to invest in a project that takes decades to complete? What are the chances they're around when it completes, and what powers their datacenters while that takes place?
Just to be pedantic: The median construction time is 7 years. With very slow planning, it is a decade, not decades. It can be done faster though.
Our power consumption won't be going down, and it generally wouldn't be the AI company itself running the project but the electricity companies that earn money supplying power and see dollar signs in all that extra electricity consumption.
Even if the AI companies all die, our global electricity consumption will keep going up, and margins will be better than those of the retired plants, so it's a good investment regardless.
I think you should look up actual construction times on reactors in developed countries. Be VERY happy if you can do it in less than 15 years.
> Even if the AI companies all die, our global electricity consumption will keep going up, and margins will be better than those of the retired plants, so it's a good investment regardless
If the company putting up the money goes bankrupt, what happens to the project? Maybe it's picked up by someone else?
I think AI companies should try to make it to 2030; my guess is at least a few of them won't. Don't commit to projects that won't even complete in the 2030s.
I think you should look that up. I was even being conservative: Korea and China seem to manage consistently around the 6-year time scale, while Japan has done it in less than four years from construction start to operation.
Granted, the US would have to import professionals to do it at that speed, and politicians will of course try to hinder the process with endless bureaucracy as their sponsors would rather sell fossil fuels...
> If the company putting up the money goes bankrupt, what happens to the project?
If people didn't start such medium-length projects out of fear of hypothetical future bankruptcy, there would never have been any infrastructure projects. Investors do not worry about them going bankrupt; they worry about losing momentum, and would generally rather light money on fire than stagnate. We live in a time where business people start space programs out of bloody boredom.
However, what happens in these cases is just that other investors flock to the carcass and take over for cheap, allowing them to reap the benefits without having footed the whole bill themselves. Bankruptcy is not closure for a company, but a restructuring, often under new ownership.
The only realistic scenario where such a project would be dropped is if the world situation changed enough that it would no longer be considered profitable to complete: other technology massively leapfrogging it, to the point where investing in that from scratch beats continuing, or demand disappearing entirely so that the finished plant would be unproductive. Otherwise the project would at most change hands until it was operational.
(Particular AI companies making it to 2030 is not really that important when it is electricity producers making these investments and running these projects to earn money from AI companies, EV charging, heatpumps, etc.)
Finland, Olkiluoto 3: license application 2000, construction started 2005, planned operation 2010, actual operation 2023.
France, Flamanville 3: construction started 2007, planned operation 2012, actual operation 2024, so 17 years.
UK, Hinkley Point C: construction began 2017, projected commissioning 2029/2030.
USA, Vogtle: permits 2006, construction started 2013, operation 2023/2024.
South Korea, Shin Kori 3 and 4: took 7 and 10 years. And those aren't new designs.
Japan: the newest commissioned reactor is from 1997? Sure, France built really fast in the '80s... Different requirements/rules/public opinion.
And this is all from the start of construction. The beginning of the project is actually waaaay before that.
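For reference, a quick tally of the construction-start-to-operation spans listed above (a rough sketch; Hinkley Point C's date is a projection, and the Shin Kori figures are used as stated):

```python
import statistics

# Construction-start -> operation spans, using the dates listed above.
# Hinkley Point C is still under construction, so its span is a projection.
spans_years = {
    "Olkiluoto 3 (FI)":     2023 - 2005,  # 18
    "Flamanville 3 (FR)":   2024 - 2007,  # 17
    "Hinkley Point C (UK)": 2030 - 2017,  # 13, projected
    "Vogtle (US)":          2024 - 2013,  # 11
    "Shin Kori 3 (KR)":     7,            # as stated above
    "Shin Kori 4 (KR)":     10,
}

for name, years in spans_years.items():
    print(f"{name}: {years} years")

# The median lands around 12 years, well above the 7-year figure upthread.
print(f"median: {statistics.median(spans_years.values()):.0f} years")
```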
Please send me some links when you've done your research to prove me wrong. And yes, I did leave out China because I don't see the US building a Chinese-designed reactor... And even if that were possible, it wouldn't meet US standards, so you'd effectively start over.
> If people didn't start such medium-length projects out of fear of hypothetical future bankruptcy, there would never have been any infrastructure projects.
And who finances that? Not banks by themselves; governments always have to give out some loan guarantees or favorable treatment. No private investor can deal with that amount of risk. So without the bureaucracy that you speak of, no nuclear plant would exist.
So why don't you point me to a commercial nuclear power plant that was privately funded, without loan guarantees from a government and all of that.
> We live in a time where business people start space programs out of bloody boredom.
So if you're referring to SpaceX: no, Musk started that to make life multiplanetary. And he understood that no one would finance that, so the company needed to make money first to fund the Mars shot.
Bezos I'm less familiar with, but I know he has a collection of space artifacts, so I think it's an interest of his, and he probably wants to show he can do what Musk can.
Google/MS/Meta will be around, probably. The other AI companies? Certainly not all of them.
I wouldn't rule out the current expenditure on AI being a risk to the big players either. They're putting so much money into this. And with all the off-balance-sheet tricks happening now, it'll be hard to know the real exposure.
Again, it's a supply chain problem. Regardless of how much cash you have, you can't just order a new battery factory or nuclear power plant and have it up and producing in a couple of years. We have eviscerated our supply chains for those things, and no matter how much money we throw at the problem now, it's going to take decades to reindustrialize. Rome wasn't built in a day.
If the concern is over externalities such as CO2 emissions and other types of pollution then sure, let's tax those directly. That will help accelerate solutions through free market mechanisms.
It's the same song as with crypto, and just as silly now as then: of course many people will burn whatever is the cheapest fuel right now, even if they maybe invest in something else in the future. But the total goes up anyway.
>The idea that all this AI is powered by “green” energy and unicorn farts is just a bunch of PR puffery from tech companies trying to divert attention from the environmental damage they’re causing.
Do we have a solid breakdown, à la Our World in Data, of the energy mix used to power AI datacenters?
The only thing I have seen is that the facility Musk acquired in Memphis for Grok is illegally emitting more pollutants than allowed because of Musk's insane drive for speed, and it is causing health problems in the surrounding poor community.
It's the reason I will never use Grok, but I've been curious about where ChatGPT, Claude, and Gemini are hosted. Google has a history of efficient data centers and they are running custom silicon, so I'd assume they are the best here?
I don't like the social harms related to AI, but I think the energy angle is a silly emphasis. No one has ever thought twice about any heavy industry, absurdist garbage for consumers, home heating, etc.
If we were on track for everything else, a serious uptake of AI might have put us barely off track. But this is like blaming the wafer-thin mint for the fat guy exploding.
I think it's still worthwhile, though. AI, given its current trajectory, will be able to help immensely with science and engineering challenges. Degrowth isn't a recipe for sustainable reduction of CO2 emissions.
The big engineering challenges right now are electrifying everything (which means convincing people that it's the right thing to do and that gas-powered vehicles belong in the trash bin of history, amongst others) and banning production of "virgin" plastic items, especially single-use items (which also requires a whole lot of convincing).
Most of that convincing is being done in the exact opposite direction with... you guessed it... AI.
Pumping even more CO2 into the air hoping the magic box spits out a solution to remove the CO2 from the air doesn't seem like a sustainable recipe either.
This is broadly more PR puffery. We don’t need some magic AI model to tell us how to cut emissions. We just need to execute things we already know work.
Anyone serious about tech should have a homelab. It's a small capital investment that lasts for years, and with Proxmox or similar, having your own personal "private cloud" on demand is simple.
tl;dr: blowing up boats in the Caribbean and other aggressive actions, while controversial, have probably done more to address the drug pandemic than the other things tried.
Such a “poison” could indeed be very powerful. While the models are good at incorporating information, they’re consistently terrible at knowing they’re wrong. If enough bad info finds its way into the model they’ll just start confidently spewing junk.
The "anti-AI hype" phrase oversimplifies what's playing out at the moment. On the tech side, while things are still a bit rough around the edges, the tech is very useful and isn't going away. I honestly don't see much disagreement there.
The concern mostly comes from the business side: for all the usefulness of the tech, there is no clearly viable path that financially supports everything that's going on. It's a nice set of useful features, but without products generating sufficient revenue to pay for it all.
That paints a picture of the tech sticking around but a general implosion of the startups and business models betting on making all this work.
The latter isn't really "anti-AI hype" but more folks just calling out the reality that there's not a lot of evidence and data to support the amount of money invested and committed. And if you've been around the tech and business scene a while, you've seen that movie before and know what comes next.
In 5 years time I expect to be using AI more than I do now. I also expect most of the AI companies and startups won’t exist anymore.
In the late 2000s I remember that "nobody is willing to pay for things on the Internet" was a common trope.
I think it'll culturally take a while before businesses and people understand what they are willing to pay for. For example, if you are a large business and you pay xxxxx-xxxxxx per year per developer but are only willing to pay xxx per year in AI tooling, something's out of proportion.
> For example if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.
One is the time of a human (irreplaceable) and the other is a tool for some human to use, seems proportional to me.
> if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.
That is way off base. Even if you replace multiple workers with one worker using a better tool, businesses still won't want to pay the "multiple worker salary" to the single worker just because they use a more effective tool.
It would seem to me that tokens are only going to get more efficient and cheaper from here.
Demand is going to rise further as AI keeps improving.
Some argue there is a bubble, but with demand from the public for private use, business, education, military, cybersecurity, and intelligence, it just seems like there will be no lack of investment.
People said the exact same thing about (numbers from memory, might be off):
- when Google paid $1 bil for YouTube
- when Facebook paid $1 bil for Instagram
- when Facebook paid $1 bil for WhatsApp
The same thing: that these 3 companies made no money and had no path to making money, and that the prices paid were crazy and decoupled from any economics.
Yet now, in hindsight, they look like brilliant business decisions.
I am not even clear how WhatsApp "paid off" for Facebook in any sense other than letting them nip a potential competitor in the bud. I use WhatsApp but do not see a single advert there, nor do I pay a single penny for it, and I suspect my situation is pretty typical. Presumably some people see ads or pay for some services, but I've not, and I don't imagine there's that much money to be made in being the #1 platform for sharing "Good Morning" GIFs.
While many people thought Facebook/Google paid too much for these companies, you're making an apples-to-oranges comparison. The part about there being "no path to making money" is wrong: online advertising was a huge industry and only getting stronger, and while YT/Insta/WhatsApp may have struggled as standalone companies, it was clear they'd unlock an enormous amount of value as part of a bigger company that already had a strong foothold in online advertising.
It is not clear who, other than maybe someone like Microsoft, could actually acquire companies like OpenAI or Anthropic. They are orders of magnitude larger than the companies you mentioned in terms of what they are "worth" (haha) and even how much money they need just to keep the lights on, let alone turn any kind of profit.
Not to mention the logical fallacy at the core of your point: people said "the exact same[sic] thing" about YouTube, Instagram, and WhatsApp... therefore, what, it necessarily means these companies are the same? You realise that many of us talked like this about "the blockchain", and "the Metaverse", and about those stupid ape JPEGs, and we were absolutely correct to do so.
> Not to mention the logical fallacy at the core of your point
Yes, it's a logical fallacy. Another one is saying "I don't see any viable business model, therefore there is no viable business model".
Blast from the past:
> YouTube is a content paradise though. There's tons of value there and you can sell ads against it or even charge for premium services.
> Where's the money in Instagram? The content is practically worthless and their only real value is in their userbase. Even though I use the Instagram client, most of the time I see photos, they come through Twitter. So that also reinforces for me that any value is in the users and not the actual content, which is mostly crap.
> I'm more convinced that we're in a 2nd bubble now more than ever.
> Does anyone else think this valuation is insane? It's like $300/registered user. The company doesn't have a business model. No way the handful of employees are worth $1B. My mind is blown.
It sounds like you're really into this, and I hope for all of our sakes that you are correct to be all hyped up about AI. Because if you're not, and this is a horrific bubble that is going to burst, then we're all in big trouble.
Yeah, and Zuckerberg said that everyone on planet Earth would buy his VR helmet, and renamed his whole company after a stupid game which I don't think even exists anymore. Being a contrarian doesn't mean you are right, and sometimes seemingly stupid money-losing things turn out... stupid.
Uber were doing something entirely different, though: they took a market which was proven to exist, created a product which worked, then spent a decade being horribly unprofitable until they were the dominant player in that market. And even at their very worst they weren't losing as much money as OpenAI is. There's far too much hand-waving and dismissive "ah, it'll be OK because Uber exists" going on among those who have bought into the AI hype cycle.
We don't really know how much money Google sunk into YouTube before it became (presumably) profitable. It might have actually not been strongly coupled to economics.
I was not clear enough: I wanted to write a PRO-AI blog post. The people against AI always say negative things, with the central argument that "AI is hyped and overhyped". So I, for fun, consider the anti-AI movement a form of hype itself. It's a joke, but one that still means what it says.
However, as you point out, anti-AI people are pushing back against hype, not indulging in hype themselves - not least as nobody is trying to sell 'not-AI'.
I for one look forward to the next AI winter, which I hope will be long, deep, and savage.
And I'm sure if you go back to the release of 3.5, you'll see the exact same comments.
And when 5 comes out, I'm sure I'll see you commenting "OK I agree 6 months ago but now with Claude 5 Opus it's great".
It's really the weirdest type of goalpost moving.
I have used Opus 4.5 a lot lately and it's garbage, absolutely useless for anything beyond generating trivial shit for which I'd anyway use a library or have it already integrated in the framework I use.
I think the real reason your opinion has changed in 6 months is that your skills have atrophied.
It's all as bad as 6 months ago, and even as bad as 2 years ago; you've just become worse.
> Not from people whose opinions on that I respect.
Then you shouldn't respect Antirez's opinion, because he wrote articles saying just that 2 years ago.
> If you think LLMs today are "as bad as 2 years ago" then I don't respect your opinion. That's not a credible thing to say.
You are getting fooled by longer context windows and better tooling around the LLMs. The models themselves have definitely not gotten better. In fact it's easy to test: just give the exact same prompt to 3.5 and 4.5, and receive the exact same answer.
The only difference is that where you used to copy-paste answers from the ChatGPT UI, you now have it integrated in your IDE (with the added bonus of it being able to empty your wallet much quicker). It's a faster process, not a better one. I'd even argue it's worse, since you spend less time reviewing the LLM's answer in this situation.
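If you want to run that comparison yourself, here's a minimal sketch using the OpenAI Python client; the model IDs are placeholders for an older and a newer generation, and temperature is pinned to 0 to reduce sampling noise:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Write a Python function that merges two sorted lists."

# Placeholder IDs: substitute whichever older/newer pair you have access to.
MODELS = ["gpt-3.5-turbo", "gpt-4o"]

for model in MODELS:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # minimize sampling noise between runs
    )
    print(f"=== {model} ===")
    print(resp.choices[0].message.content)
```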
How do you explain that it's so easy to tell (in a bad way) when a PR is AI-generated if it's not necessary to code by hand anymore?
> Despite the large interest in agents that can code alone, right now you can maximize your impact as a software developer by using LLMs in an explicit way, staying in the loop.
There are too many people who see the absurd AI hype (especially absurd in terms of investment) and construct from it the counter-argument that AI is useless, overblown, and just generally not good. And that's a fallacy. Two things can be true at the same time: coding agents are a step change and immensely useful, and the valuations and breathless AGI evangelizing are a smoke screen and pure hype.
Don't let the hype deter you from getting your own hands dirty and trying shit.
> On the tech side, while things are still a bit rough around the edges, the tech is very useful and isn't going away. I honestly don't see much disagreement there.
What? HN is absolutely packed with people complaining that LLMs are nothing more than net-useless creators of slop.
Granted, fewer than six months ago, which should tell people something...
A lot of software is like this. You can build a bare bones but functional version for 1x investment or something that addresses every bell and whistle (often with market research saying it’s really needed) for 1000x. The 1000x version is better, but not remotely 1000x better.
A lot of SaaS has turned into this too. Take a bloated monstrosity like Salesforce: I bet 95% of customers would be very happy with a "bare bones" version that costs a tenth of the price.
Their moat in the consumer world is the branding and the fact that OpenAI has 'memory', which you can't migrate to another provider.
That means responses can be far more tailored: it knows what your job is, knows where you go with friends, knows that when you ask about 'dates' you mean romantic relationships (and which ones are going well or badly), not the fruit, etc.
Eventually, when they make it work better, OpenAI can be your friend and confidant, and you wouldn't dump your friend of many years to make a new friend without good reason.
I really think this memory thing is overstated on Hacker News. It is not something that is hard to move at all; it's not a moat. I don't think most users even know memory exists outside of a single conversation.
Every single one of my non-techie friends who uses ChatGPT relies heavily on memory. Whenever they try something different, they get very annoyed that it just doesn't "get them" or "know them".
Perhaps it'll be easy to migrate memories indeed (I mean there are already plugins that sort of claim to do it, and it doesn't seem very hard), but it certainly is a very differentiating feature at the moment.
I also use ChatGPT as my daily "chat LLM" because of memory, and, especially, because of the voice chat, which I still feel is miles better than any competition. People say Gemini voice chat is great, but I find it terrible. Maybe I'm on the wrong side of an A/B test.
This feels like an area where Google would have an advantage, though. Look at all the data about you that Google has and could mine across Wallet, Maps, Photos, Calendar, Gmail, and more. Google knows my name, address, driver's license, passport, where I work, when I'm home, what I'm doing tomorrow, when I'm going on vacation and where I'm going, and a whole litany of other information.
The real challenge for Google is going to be using that information in a privacy-conscious way. If this was 2006 and Google was still a darling child that could do no wrong, they'd have already integrated all of that information and tried to sell it as a "magical experience". Now all it'll take is one public slip-up and the media will pounce. I bet this is why they haven't done that integration yet.
I used to think that, too, but I don't think it's the case.
Many people slowly open up to an LLM as if they were meeting someone. Sure, they might open up faster or share some morally questionable things earlier on, but there are some things that they hide even from the LLM (like one hides thoughts from oneself, only to then open up to a friend). To know that an LLM knows everything about you will certainly alienate many people, especially because who I am today is very different from who I was five years ago, or two weeks ago when I was mad and acted irrationally.
Google has loads of information, but it knows very little of how I actually think. Of what I feel. Of the memories I cherish. It may know what I should buy, or my interests in general. It may know where I live, my age, my friends, the kind of writing I had ten years ago and have now, and many many other things which are definitely interesting and useful, but don't really amount to knowing me. When people around me say "ChatGPT knows them", this is not what they are talking about at all. (And, in part, it's also because they are making some of it up, sure)
We know a lot about famous people, historical figures. We know their biographies, their struggles, their life story. But they would surely not get the feeling that we "know them" or that we "get them", because that's something they would have to forge together with us, by priming us the right way, or by providing us with their raw, unfiltered thoughts in a dialogue. To truly know someone is to forge a bond with them — to me, no one is known alone, we are all known to each other. I don't think google (or apple, or whomever) can do that without it being born out of a two-way street (user and LLM)[1]. Especially if we then take into account the aforementioned issue that we evolve, our beliefs change, how we feel about the past changes, and others.
[1] But — and I guess sort of contradicting myself — Google could certainly try to grab all my data and forge that conversation and connection. Prompt me with questions about things, and so on. Like a therapist who has suddenly come into possession of all our diaries and whom we slowly, but surely, open up to. Google could definitely intelligently go from the information to the feeling of connection.
Maybe. I haven't really heard many of the people in my circles describing an experience like that ("opening up" to an LLM). I can't imagine *anyone* telling a general-purpose LLM about memories they cherish.
Do people want an LLM to "know them"? I literally shuddered at the thought. That sounds like a dystopian hell to me.
But I think Google has, or can infer, a lot more of that data than people realize. If you're on Android you're probably opted into Google Photos, and they can mine a ton of context about you out of there. Certainly infer information about who is important to you, even if you don't realize it yourself. And let's face it, people aren't that unique. It doesn't take much pattern matching to come up with text that looks insightful and deep, but is actually superficial. Look at cold-reading psychics for examples of how trivial it is.
Another data point: my generally tech-savvy teenage daughter (17) says that her friends are only aware of AI having been available for the last year (3 years, actually), and basically only use it via Snapchat's "My AI" (which is powered by OpenAI) as a homework helper.
I get the impression that most non-techies have either never tried "AI", or regard it as Google (search) on steroids for answering questions.
Maybe it's more related to his (sad but true) senility than a lack of interest, but I was a bit shocked to see the physicist Roger Penrose interviewed recently by Curt Jaimungal: when asked if he had tried LLMs/ChatGPT, he assumed the conversation was about the "stupid lady" (his words) ELIZA (the fake chatbot from the '60s), evidently never having even heard of LLMs!
My mom does. She's almost 60. She asks for recipes and facts, asks about random illnesses, asks it why she's feeling sad, asks it how to talk to her friend with terminal cancer.
I didn't tell her to download the app, nor is she a tech-y person; she just did it on her own.
Exactly. I went through a phase of playing around with ESP32s and now it tries to steer every prompt about anything technology or electronics related back to how it can be used in conjunction with a microcontroller, regardless of how little sense it makes.
I agree. For me it's annoying because everything it generates is too tailored to the first stuff I started chatting with it about. I have multiple responsibilities and I haven't been able to get it to compartmentalize. When I'm wearing my "radiology research" support hat, it assumes I'm also wearing my "MRI physics" hat and weaves everything toward MRI. It's really annoying.
It doesn't even change the responses a lot. I used ChatGPT for a year for a lot of personal stuff, then tried a new account with basic prompts, and it was pretty much the same. Lots of glazing.
What kind of a moat is that? I think it only works in abusive relationships, not consumer economies. Is OpenAI's model being an abusive, money-grubbing partner? I suppose it could be!
If you have all your “stuff” saved on ChatGPT, you’re naturally more likely to stay there, everything else being more or less equal: Your applications, translations, market research . . .
I think this is one of the reasons I prefer Claude Code and Codex. All the files are on my disks, and if Claude or Codex were to disappear, nothing would be lost.
> Their moat in the consumer world is the branding and the fact that OpenAI has 'memory', which you can't migrate to another provider.
Their 'memory' is mostly unhelpful and gets in the way. At best it saves you from prompting some context, but more often than not it adds so much irrelevant context that it overfits responses so hard it makes them completely useless, especially in exploratory sessions.
It's certainly valuable, but you can ask Digg and MySpace how secure being the first mover is. I can already hear my dad telling me he is using Google's ChatGPT...
I think an OpenAI paper showed 25% of GPT usage is "seeking information". In that case Google also has an advantage from being the default search provider on iOS and Android. I do find myself using the address bar in a browser like a chat box.
The memory is definitely sort of a moat. As an example, I'm working on a relatively niche problem in computer vision (small, low-resolution images), and ChatGPT now "knows" this and tailors its responses accordingly. With other chatbots I need to provide this context every time, or else I get suggestions oriented towards the most common scenarios in the literature, which don't work at all for my use case.
That may seem minor, but it compounds over time, and it's surprising how much ChatGPT knows about me now. I asked ChatGPT to roast me again at the end of last year, and I was a bit taken aback that it had even figured out the broader problem I'm working on and the high-level approach I'm taking, something I had never explicitly mentioned. In fact, it even nailed some aspects of my personality that were not at all obvious from the chats.
I'm not saying it's a deep moat, especially for the less frequent users, but it's there.
> may seem minor, but it compounds over time and it's surprising how much ChatGPT knows about me now
I’m not saying it’s minor. And one could argue first-mover advantages are a form of moat.
But the advantage is limited to those who have used ChatGPT. For anyone else, it doesn’t apply. That’s different from a moat, which tends to be more fundamental.
Ah, I guess I've been interpreting "moat" narrowly, such as, keeping your competitors from muscling in on your existing business, e.g. siphoning away your existing users. Makes sense that it applies in the broader sense as well, such as say, protecting the future growth of your business.
Sounds similar to how psychics work. Observing obvious facts and pattern matching, except in this case you made the job super easy for the psychic because you gave it a _ton_ of information, instead of a psychic having to infer from the clothes you wear, your haircut, hygiene, demeanor, facial expression etc.
Yeah, it somewhat is! It also made some mistakes analogous to what psychics would based on the limited sample of exposure it had to me.
For instance, I've been struggling with a specific problem for a very long time, using ChatGPT heavily for exploration. In the roast, it chided me for being eternally in search of elegant, perfect solutions instead of shipping something that works at all. But that's because it only sees the targeted chats I've had with it, not the brute-force methods and hacks I've been piling on elsewhere to make progress!
I'd bet with better context it would have been more accurate. But the surprising thing is that what it got right was also not very obvious from the chats. Also, for something that has only an intermittent existence when prompted, it displayed some sense of time passing. I wonder if it noticed the timestamps on our chats?
Notably, that roast evolved into an ad-hoc therapy session and eventually into a technical debugging and product roadmap discussion.
A programmer, researcher, computer vision expert, product manager, therapist, accountability partner, and more, all in a package that I'd pay a lot of money for if it weren't available for free. If anything, I think the AI revolution is rather underplayed.
I just learned Gemini has "memory" because it mixed its response to a new query with a completely unrelated query I had made beforehand, despite my making separate chats for them. It responded as if they were the same chat. Garbage.
I recently discovered that if a sentence starts with "remember", Gemini writes the rest of it down as a standing instruction. Maybe go look in there and see if there is something surprising.
It's a recent addition. You can view them in a settings menu. Gemini also has scheduled triggers, like "Give me a recap of the daily news every day at 9am based on my interests", and it will start a new chat with you every day at 9am with that content.
I think Gemini 3.0 the model is smarter than Opus 4.5, but Claude Code still gives better results in practice than Gemini CLI. I assume this is because the model is only half the battle, and the rest is how good your harness and integration tooling are. But that also doesn't seem like a very deep moat, or something Google can't catch up on with focused attention, and I suspect by this time next year, or maybe even six months from now, they'll be about the same.
> But that also doesn't seem like a very deep moat, or something Google can't catch up on with focused attention, and I suspect by this time next year, or maybe even six months from now, they'll be about the same.
The harnessing in Google's agentic IDE (Antigravity) is pretty great; the output quality is indistinguishable between Opus 4.5 and Gemini 3 for my use cases[1].
[1] I tend to give detailed requirements for small-to-medium-sized tasks (T-shirt sizing). YMMV on larger, less detailed tasks.
Claude is cranked to the max for coding, specifically agentic coding, and even more specifically agentic coding using Claude Code. It's like the MacBook of coding LLMs.
That hasn’t been my experience. I agree Opus has the edge but it’s not by that much and I still sometimes get better results from Gemini, especially when debugging issues.
Claude Code is much better than Gemini CLI though.
Most investors are dumb as rocks or, at least, don't know shit about what they're investing in. I mean, I don't know squat about chemical manufacturing, but I have some investment in that.
It's not about who's the best; it's about where the market is. Dogpiling on growing companies is a proven way to make a lot of money, so people do it, and it's accelerated by index funds. The REAL support for Google and Nvidia isn't Wall Street; it's your 401(k).
It’s all broadly backed by a bunch of IOUs issued by those with no clear path to come up with the cash to pay the bill when it comes due. Hence the rising prices of credit default swaps.
$658 billion in operating income for just six companies, growing at 10%+ per year. Maybe $8.5-9 trillion in operating income over the next decade. They have nothing else to spend it on other than overpriced share buybacks or dividends; most are under realistic antitrust restrictions and can't freely buy major competitors. AI is an open field, and they have the capital to burn.
Without all that financial firepower in the background driving everything none of it happens.