DrStartup's comments

The good ole VC OS Rug Pull. Classic.

It’d be nice if Mozilla (or a similar foundation) could create a baseline OS platform for a business communications suite.


If Mozilla did that, we'd have monthly news stories about them adding ads into the client, removing features people depend on, cramming in AI where it doesn't belong, abruptly making all sorts of controversial ToS changes, going back on old promises, and all kinds of other things we know and love Mozilla for. All before they'd get bored and discontinue the product after a couple of years.

Or maybe they'd just buy some existing closed source Slack competitor, promise to open source it, and then just never get around to it. You know, like how they bought Pocket in 2017, promised to make it open source, but somehow never got around to it before discontinuing it in 2025.


Feels like people would then just have one more thing to complain about the Mozilla Foundation over.


It'd be nice for anyone but Mozilla to do it. They can barely keep FF competitive.


FF is plenty competitive on the technical and feature front. Its market share is not a reflection of technical merit.

What's more, next to Linux itself it is maybe the only case I can see where a major piece of user-facing software is kept competitive with the Apple/Google/MS tools.

LibreOffice or Nextcloud are technically far further behind Office and Google's online offerings.

Which raises the question: who else is in a position to do this?

At first glance, Moz with Firefox + a suite of self-hosted team and productivity stuff that works well in Firefox would make a ton of sense...


It isn't competitive. They are paid by Google.

Worse, it's riddled with spyware, and is merely a honeypot for security-aware people who are not sufficiently paranoid to check any of the claims. It's the same kind of scam as those VPNs from YT ads that use your IP to give AI companies residential proxies.

Spin up Wireshark and take a look at activity of Firefox. Try to shut the browser up. It won't work.

Even if they weren't a Google proxy company, they would lose to standards committees infested by Google, and would have to play the "best of luck catching up" game by constantly supporting new versions of JS, APIs and CSS features that nobody needs (except Google's YouTube, which will use them to stop you from using an adblocker).

FF is governed by ex-Oracle managers at the moment, singing Google's song. Don't anthropomorphize your lawnmower.


I'm neither, and I have two. 24/7 async inference against GitHub issues. Free (once you buy the Macs, that is).


I'm not sure who 'home users' are, but I doubt they're buying two $9,499 computers.


Peanuts for people who make their living with computers.


So, not a home user then. If you make your living with computers in that manner you are by definition a professional, and just happen to have your work hardware at home.


In the US, yes.


I wonder what the actual lifetime amortized cost will be.


Every time I'm tempted to get one of these beefy mac studios, I just calculate how much inference I can buy for that amount and it's never a good deal.
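
For what it's worth, the back-of-envelope math is simple enough to script; every number below is a placeholder you'd swap for your own prices and usage, not a real quote:

    # Rough break-even estimate: buying hardware vs. paying per token.
    # All figures are made-up placeholders for illustration.
    HARDWARE_COST_USD = 9_499        # hypothetical Mac Studio price
    API_PRICE_PER_MTOK_USD = 3.00    # hypothetical blended $ per 1M tokens
    TOKENS_PER_DAY = 5_000_000       # hypothetical daily token usage

    api_cost_per_day = TOKENS_PER_DAY / 1_000_000 * API_PRICE_PER_MTOK_USD
    break_even_days = HARDWARE_COST_USD / api_cost_per_day

    print(f"API spend per day: ${api_cost_per_day:.2f}")
    print(f"Days until the hardware pays for itself: {break_even_days:.0f}")

With those placeholder numbers it takes well over a year to break even, before electricity and before accounting for the gap between local models and the frontier ones.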


Every time someone brings that up, it brings back memories of frantically trying to finish stuff as quickly as possible while either my quota slowly goes down with each API request or the pay-as-you-go bill creeps up another 0.1% with each request.

Nowadays I fire off async jobs that involve thousands of requests and billions of tokens, yet it costs basically the same as if I didn't.

Maybe it takes a different type of person than I am, but all these "pay-as-you-go"/tokens/credits platforms make me nervous to use; I end up either not using them or spending time trying to "optimize". Investing in hardware and infrastructure I can run at home, on the other hand, is something my head has no problem just rolling with.


But the downside is that you are stuck with inferior LLMs. None of the best models have open weights: Gemini 3.5, Claude Sonnet/Opus 4.5, ChatGPT 5.2. The best model with open weights performs an order of magnitude worse than those.


The best weights are the weights you can train yourself for specific use cases. As long as you have the data and the infrastructure to train/fine-tune your own small models, you'll get drastically better results.

And just because you're mostly using local models doesn't mean you can't use API hosted models in specific contexts. Of course, then the same dread sets in, but if you can do 90% of the tokens with local models and 10% with pay-per-usage API hosted models, you get the best of both worlds.


Anyone buying these is usually more concerned with just being able to run stuff on their own terms without handing their data off. Otherwise it's probably always cheaper to rent compute for intense stuff like this.


For now, while everything you can rent is sold at a loss.


Are the inference providers profitable yet? Might be nice to be ready for the day when we see the real price of their services.


Isn't it then even better to enjoy cheap inference thanks to techbro philanthropy while it lasts? You can always buy the hardware once the free money runs out.


Probably depends on what you are interested in. IMO, setting up local programs is more fun anyway. Plus, any project I’d do with LLMs would just be for fun and learning at this point, so I figure it is better to learn skills that will be useful in the long run.


Never mind the fact that there are a lot of high-quality (the highest-quality?) models that are not released as open source.


Heh. I'm jealous. I'm still running a first-gen Mac Studio (M1 Max, 64 gigs of RAM). It seemed like a beast only 3 years ago.


Interesting. Answering them? Solving them? Looking for ones to solve?


absolutely f ads


None of it will matter soon. Anything you want to see or watch will be dynamically generated just for you. Ender's Game is here.


why would I want that?


What if I want to rewatch it, offline?


they need domestic chip capabilities


lower is better

https://github.com/frarees/typometer

Typometer

Typometer is a tool to measure and analyze the visual latency of text editors.

Editor latency is the delay between an input event and a corresponding screen update — in particular, the delay between keystroke and character appearance. While there are many kinds of delays (caret movement, line editing, etc.), typing latency is a major predictor of editor usability.

Check the article Typing with Pleasure to learn more about editor latency and its effects on typing performance.
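
Not a substitute for what Typometer actually measures (it uses screen capture to catch true visual latency), but as a toy illustration of the keystroke-to-redraw idea, here is a crude in-process approximation; tkinter is used only to keep the sketch self-contained:

    import time
    import tkinter as tk

    # Crude in-process approximation of keystroke-to-redraw latency.
    # It misses everything Typometer captures via screen recording
    # (compositor, display pipeline), so these numbers are lower bounds.
    root = tk.Tk()
    text = tk.Text(root, font=("Courier", 14))
    text.pack()

    def on_key(event):
        start = time.perf_counter()
        def measure():
            text.update_idletasks()  # force the pending redraw to finish
            print(f"{(time.perf_counter() - start) * 1000:.2f} ms")
        root.after_idle(measure)

    text.bind("<KeyPress>", on_key)
    root.mainloop()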


Hook up Zen MCP to OpenRouter and use Cerebras inference with Kimi K2 and Qwen3 480B at 2k tok/sec.


Is the author secretly the CEO of htmx?


Entity resolution is the killer feature. Context engineering is the problem with this benchmark attempt. The agent plan seemed to be one-shot, and the fact that the LLMs could write their own tools without validation or specific multi-shot examples is worrisome. To me, way too much is left to the whims of the LLMs, without proper context.


Yes, none of the top LLMs can do entity resolution well yet. I constantly see them conflate entities with similar names - they'll confidently cite 3 sources about what appears to be one company, but the sources are actually about 3 different businesses with similar names.

The fundamental issue is that LLMs don't have a concept of canonical entity identity. They pattern match on text similarity rather than understanding that "Apple Inc" and "Apple Records" are completely different entities. It gets even worse when you realize companies can legally have identical names in the same country - text matching becomes completely unreliable.

Without proper entity grounding, any business logic built on top becomes unreliable.
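
The usual fix is to resolve mentions to canonical IDs before any text matching happens, and to surface ambiguity instead of merging it away. A minimal sketch of the idea (the registry, IDs, and similarity threshold are all made up for illustration):

    from difflib import SequenceMatcher

    # Toy canonical registry; a real system would back this with a proper
    # entity database and a much better matcher than difflib.
    CANONICAL = {
        "ent_001": {"name": "Apple Inc", "jurisdiction": "US", "industry": "electronics"},
        "ent_002": {"name": "Apple Records", "jurisdiction": "UK", "industry": "music"},
    }

    def resolve(mention: str, jurisdiction: str | None = None) -> list[str]:
        """Return candidate canonical IDs rather than trusting a bare text match."""
        candidates = []
        for cid, rec in CANONICAL.items():
            score = SequenceMatcher(None, mention.lower(), rec["name"].lower()).ratio()
            if score > 0.5 and (jurisdiction is None or rec["jurisdiction"] == jurisdiction):
                candidates.append(cid)
        return candidates  # len > 1 means ambiguous: ask for more context, don't merge

    print(resolve("Apple"))        # ['ent_001', 'ent_002'] -> ambiguous
    print(resolve("Apple", "UK"))  # ['ent_002']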


XUL! Why not just use htmx and the platform?


Because using htmx is asking to get defaced via XSS, or worse. Security is an afterthought for the project, which is evident from the placement of the related documentation.


Security rules for htmx are no different from those for any other hypermedia approach: you need to escape all user content.

https://htmx.org/essays/web-security-basics-with-htmx/
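
To make that concrete, here is a minimal sketch of the server side; Flask is just an arbitrary example backend (htmx only ever sees the returned HTML), not something the essay prescribes:

    from flask import Flask, request
    from markupsafe import escape

    app = Flask(__name__)

    @app.post("/comments")
    def add_comment():
        body = request.form.get("body", "")
        # Escape user content before it goes into the fragment; this is the
        # rule that matters regardless of whether htmx or a full page
        # reload puts it into the DOM.
        return f"<li>{escape(body)}</li>"

An hx-post form pointing at /comments with hx-swap="beforeend" would then append the escaped fragment, so a <script> pasted into the comment box just shows up as text.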


How is this better?


It probably isn’t.

