
looking at ESO's homepage, their main pitch is around NERIS compliance (https://www.usfa.fema.gov/nfirs/neris/), so it looks like this is a case of regulatory capture forcing fire departments to buy new software. An obvious solution here is that any government-mandated information system must come with an open source implementation that guarantees compliance.


despicable attitude completely detached from reality. volunteer firefighting services provide critical public services across broad, sparsely populated areas of the United States, and those volunteer services benefit everyone by preventing wildfires that would otherwise spread far beyond the communities they serve. this kind of PE activity is parasitic and directly threatens lives and property by diminishing emergency response capacity. bad business to be in. it will get shut down very quickly.


My attitude is “despicable” because I said “take my federal and state tax dollars and help communities so they can pay for firefighters and equipment”?


they aren't "your" federal and state tax dollars. if you want to privately provide welfare for PE out of your own pocket, go ahead, but leave the public's tax dollars out of it.


You did see the parts of the article where the same fire departments aren’t able to pay for equipment like fire trucks or to train firefighters?

Would you be okay if the software companies were charging the same amount and not funded by PE? What if they were funded by YC?


amazing work!


JS/TS has a fundamental advantage: there is more open source JS/TS than code in any other language, so LLMs trained on JS/TS have more to work with. Combine that with the largest developer community, which means more people using LLMs to write JS/TS than any other language, and with people using it more because it works better, and the advantage compounds as you retrain on usage data.


> Vibe-coded projects get bought by vibe-coded companies

this is so far from the truth. Bun, Zig, and uWebsockets are passion projects run by individuals with deep systems programming expertise. furthest thing from vibe coding imaginable.

> a decade of performance competition in the JS VM space

this was a rising tide that lifted all boats, including Node, but Node is built with much more of the system implemented in JS, so it is architecturally incapable of the kind of performance Bun/uWebsockets achieves.


> Bun, Zig, and uWebsockets are passion projects run by individuals with deep systems programming expertise. furthest thing from vibe coding imaginable.

Sure, I definitely will not throw projects like Zig into that bucket, and I don't actually think Bun is vibe-coded. At least that _used_ to be true, we'll see I guess...

Don't read a snarky comment so literally ;)


From the article: "Over the last several months, the GitHub username with the most merged PRs in Bun's repo is now a Claude Code bot."


> Node is built with much more of the system implemented in JS, so it is architecturally incapable of the kind of performance Bun/uWebsockets achieves

That sounds like an implementation difference, not an architectural difference. If they wanted to, what would prevent Node or a third party from implementing parts of the stdlib in a faster language?


uWebsockets, which as I understand it is the foundation of the network and HTTP server stack in Bun, is also available as a compatible 3rd-party extension to Node.js that gives Node similar performance on the HTTP layer.

The key architectural difference is that Node.js implements the HTTP stack and other low-level libraries in JavaScript, which gives it the memory safety guarantees of the V8 runtime, while Bun/uWebsockets implement them in Zig/C++. For Node.js, which is focused on enterprise adoption, the lower-performance JS approach aligns better with the security profile that audience expects.
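To make the swap concrete, here is roughly what serving HTTP through uWebSockets.js looks like next to node:http. This is a from-memory sketch based on the uWebSockets.js README, so treat the exact callback names as approximate:

    // plain Node: request parsing and routing run through the JS http stack
    import { createServer } from 'node:http';
    createServer((req, res) => res.end('hello from JS land')).listen(3000);

    // uWebSockets.js: same shape of program, but parsing/routing happen in native C++
    import uWS from 'uWebSockets.js';
    uWS.App()
      .get('/*', (res, req) => {
        res.end('hello from native land');
      })
      .listen(3000, (listenSocket) => {
        if (listenSocket) console.log('listening on 3000');
      });

The JS-facing API is almost the same either way; the difference being pointed at is which side of the native/JS boundary the hot path lives on.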


Claude Code running on Bun is an obvious justification, but Bun's features (high performance runtime, fast starts, native TS) are also important for training and inference. For instance, in inference you can develop a logical model in code that maps to a reasoning sequence, execute the code to validate and refine the model, and then use the result to inform further reasoning. Bun, which is highly integrated and highly focused on performance, is an ideal fit for this. Having Bun in house also means the feedback from all of that automation-driven execution of Bun can drive improvements to its core.
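As a purely illustrative sketch of what I mean by "execute the code to validate and refine" (generateCandidate is a hypothetical stand-in for the model call, not any real pipeline):

    // hypothetical "write code, run it, feed errors back" loop
    import { writeFileSync } from 'node:fs';

    async function solveWithExecution(
      problem: string,
      generateCandidate: (problem: string, feedback?: string) => Promise<string>,
      maxTries = 3,
    ): Promise<string> {
      let feedback: string | undefined;
      for (let i = 0; i < maxTries; i++) {
        const code = await generateCandidate(problem, feedback); // model writes a program
        writeFileSync('/tmp/candidate.ts', code);
        // fast cold starts and native TS make this inner loop cheap to run at scale
        const run = Bun.spawnSync(['bun', 'run', '/tmp/candidate.ts']);
        if (run.exitCode === 0) return run.stdout.toString();    // execution validated the reasoning
        feedback = run.stderr.toString();                        // errors inform the next attempt
      }
      throw new Error('no candidate survived execution');
    }

runtime startup and execution overhead sit directly inside that loop, which is exactly where Bun's performance focus pays off.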


There are two layers here: 1) low-level LLM architecture, and 2) applying that architecture in novel ways. It is true that there are maybe a couple hundred people who can make significant advances on layer 1, but layer 2 constantly drives progress at whatever level of capability layer 1 provides. Layer 2 depends mostly on broad and diverse subject matter expertise; it doesn't require any low-level ability to implement or improve LLM architectures, only an understanding of how to apply them more effectively in new fields. The real key is finding ways to create automated validation systems, similar to what is possible for coding, that can be used to create synthetic datasets for reinforcement learning. Layer 2 capabilities feed back into improved core models, even with the same core architecture, because you are generating more and better data for retraining.
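A rough sketch of that last point, with proposeSolution and the task checkers as hypothetical stand-ins; the interesting part is that an automated validator, not a human, decides what goes into the dataset:

    // hypothetical harness: automated validation filters model outputs into RL training data
    type Task = { prompt: string; check: (answer: string) => boolean };

    async function collectSyntheticData(
      tasks: Task[],
      proposeSolution: (prompt: string) => Promise<string>,
    ) {
      const dataset: { prompt: string; answer: string; reward: number }[] = [];
      for (const task of tasks) {
        const answer = await proposeSolution(task.prompt); // layer 2: applying the model to a domain
        const reward = task.check(answer) ? 1 : 0;         // automated validator, no human labeling
        if (reward === 1) dataset.push({ prompt: task.prompt, answer, reward });
      }
      return dataset; // feeds back into retraining the layer 1 model
    }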


In my view LLMs are simply a different method of communication. Instead of relying on "your voice" to engage the reader and persuade them of your point of view, writing with LLMs for analysis and exploration is about creating an idea space that a reader can interact with, explore from their own perspective, and develop their own understanding of, which is much more powerful.


> trying to make themselves too big to fail

this is super overblown. what their executive said was that eventually the scale of compute required is so large that it requires investing not only in new DCs but in new fabs, power plants, etc., which can only happen with implicit government support to guarantee the 10+ year investment horizons required for capital investment at those lower levels of the stack. that is not controversial at all and has nothing to do with OpenAI specifically being too big to fail.


I think you are right that the entire analysis is flawed. The Amazon and Microsoft "rental" deals have inflated price tags because of the circular financial arrangements between those companies and OpenAI, and because those future revenue streams can be used notionally to finance CapEx. All of the Stargate DC build is being done through for-profit SPVs, so the financials are murky, but building the infra gives them collateral for debt, and they are going to lease the compute to the highest bidder. So there is a whole scheme for getting out of the non-profit box: a self-perpetuating loop of borrowing to build, using what they build as collateral for more borrowing, raising additional revenue and hedging by leasing compute to 3rd parties, and then using the for-profit SPVs to cross-subsidize OpenAI proper. That plan has enormous risks of its own (can the leadership team of OpenAI effectively build a competitor in the hyperscale compute space?), but whatever happens, it won't just be straight-line scaling of their current deals with existing hyperscalers.

