Hacker Newsnew | past | comments | ask | show | jobs | submit | MrAlex94's commentslogin

Am I being too cynical, or does anyone else envision a future where you ask Chrome to buy you something, anything, online and instead of it actually buying you the “best” item, you end up with items it “prefers” where Google make money from suggestions and/or completion of sale?

I know it calls out that there’ll need to be user confirmation before the final purchase, but if you’re already not expending the effort to find the product or service yourself, are you really going to sit and research what it’s given you? If you are, then what’s the point of using the agent?

Just seems like the next evolution in Google’s ad revenue generation.


> lags behind upstream Firefox in terms of security fixes

I’m not sure why this has become a thing - usually I either release Waterfox the week before ESR releases (the week the code freeze happens and new version gets tagged) or, if I’m actively working on features and they need to coincide with the next update I push, I will release on the same Tuesday the ESR releases.

You can check the GitHub tag history for Waterfox to see it’s been that way for a good while :)


Looking back with fresh eyes, I definitely think I could’ve presented what I’m trying to say better.

On a purely technical play, you’re right that I’m drawing a distinction that may not hold up purely on technical grounds. Maybe the better framing is: I trust constrained, single purpose models with somewhat verifiable outputs (seeing text go in, translated text go out, compare its consistency) more than I trust general purpose models with broad access to my browsing context, regardless of whether they’re both neural networks under the hood.

WRT to the “scope”, maybe I have picked up the wrong end of the stick with what Mozilla are planning to do - but they’ve already picked all the low hanging fruit with AI integration with the features you’ve mentioned and the fact they seem to want to dig their heels in further, at least to me, signals that they want deeper integration? Although who knows, the post from the new CEO may also be a litmus test to see what the response to that post elicits, and then go from there.


I still don’t understand what you mean by “what they do with your data” - because it sounds like exfiltration fear mongering, whereas LLMs are a static series of weights. If you don’t explicitly call your “send_data_to_bad_actor” function with the user’s I/O, nothing can happen.


I disagree that it’s fear mongering. Have we not had numerous articles on HN about data exfiltration in recent memory? Why would an LLM that is in the drivers seat of a browser (not talking about current feature status in Firefox wrt to sanitised data being interacted with) not have the same pitfalls?

Seems as if we’d be 3 for 3 in the “agents rule of 2” in the context of the web and a browser?

> [A] An agent can process untrustworthy inputs

> [B] An agent can have access to sensitive systems or private data

> [C] An agent can change state or communicate externally

https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa...

Even if we weren’t talking about such malicious hypotheticals, hallucinations are a common occurrence as are CLI agents doing things it thinks best, sometimes to the detriment of the data it interacts with. I personally wouldn’t want my history being modified or deleted, same goes with passwords and the like.

It is a bit doomerist, I doubt it’ll have such broad permissions but it just doesn’t sit well which I suppose is the spirit of the article and the stance Waterfox takes.


> Have we not had numerous articles on HN about data exfiltration in recent memory?

there’s also an article on the front page of HN right now claiming LLMs are black boxes and we don’t know how they work, which is plainly false. this point is hardly evidence of anything and equivalent to “people are saying”


This is true though. While we know what they do on a mechanistic level, we cannot reliably analyze why the model outputs any particular answer in functional terms without a heroic effort at the "arxiv paper" level.


that’s true of analyzing individual atoms in a combustion engine — yet I doubt you’d claim we don’t know how they work

also this went from “we can’t analyze” to “we can’t analyze reliably [without a lot of effort]” quite quickly


In the digital world, we should be able to go back from output to input unless the intention of the function is to "not do that". Like hashing.

Llms not being able to go from output back to input deterministically and for us to understand why is very important, most of our issues with llms stem from this issue. Its why mechanistic interpretabilty research is so hot right now.

The car analogy is not good because models are digital components and a car is a real world thing. They are not comparable.


ah I forgot digital components are not real world things


I mean, fluid dynamics is an unsolved issue. But even so we know *considerably* less about how LLMs work in functional terms than about how combustion engines work.


I outright disagree; we know how LLMs work


We know how neural nets work. We don't know how a specific combination of weights in the net is capable of coherently asking questions asked in a natural language, though. If we did, we could replicate what it does without training it.


> We know how neural nets work. We don't know how a specific combination of weights in the net is capable of coherently asking questions asked in a natural language, though.

these are the same thing. the neural network is trained to predict the most likely next word (rather token, etc.) — that’s how it works. that’s it. you train a neural network on data, it learns the function you trained it to, it “acts” like the data. have you actually studied neural networks? do you know how they work? I’m confused why you and so many others are seemingly so confused by this. what fundamentally are you asking for to meet the criteria of knowing how LLMs work? some algorithm that can look at weights and predict if the net will output “coherent” text?

> If we did, we could replicate what it does without training it.

not sure what this is supposed to mean


It's like you're describing a compression program as "it takes a big file and returns a smaller file by exploiting regularities in the data." Like, you have accurately described what it does, but you have in no way answered the question of how it does that.

If you then explain the function of a CPU and how ELF binaries work (which is the equivalent of trying to answer the question by explaining how neural networks work), you then have still not answered the actually important question! Which is "what are the algorithms that LLMs have learnt that allow them to (apparently) converse and somewhat reason like humans?"


…except we know what every neuron in a neural network is doing. I ask again, what criteria do we need to meet for you to claim we know how LLMs work?

we know the equations, we know the numbers going through a network, we know the universal approximation theorem —- what’re you looking for exactly?

I’ve answered the “what have they learnt” bit; a function that predicts the next token based on data. what more do you need?


Yes, in the analogy it's equivalent to saying you know "what" every instruction in the compression program is doing. push decrements rsp, xor rax, rax zeroes out the register. You know every step. But you don't know the algorithm that those instructions are implementing, and that's the same situation we're in with LLMs. We can describe their actions numerically, but we cannot describe them behaviorally, and they're doing things that we don't know how to otherwise do with numerical methods. They've clearly learnt algorithms but we cannot yet formalize what they are. The universal approximation theorem actually works against your argument here, because it's too powerful- they could be implementing anything.

edit: We know the data that their function outputs, it's a "blurry jpeg of the internet" because that's what they're trained on. But we do not know what the function is, and being able to blurrily compress the internet into a tb or whatever is utterly beyond any other compression algorithm known to man.


I believe you are conflating multiple concepts to prove a flaky point.

Again, unless your agent has access to a function that exfiltrates data, it is impossible for it to do so. Literally!

You do not need to provide any tools to an LLM that summarizes or translates websites, manages your open tabs, etc. This can be done fully locally in a sandbox.

Linking to simonw does not make your argument valid. He makes some great points, but he does not assert what you are claiming at any point.

Please stop with this unnecessary fear mongering and make a better argument.


Thinking aloud, but couldn't someone create a website with some malicious text that, when quoted in a prompt, convinces the LLM to expose certain private data to the web page, and couldn't the webpage send that data to a third party, without the need for the LLM to do so?

This is probably possible to mitigate, but I fear what people more creative, motivated and technically adept could come up with.


At least with finetuning, yes: https://arxiv.org/abs/2512.09742

It's unclear if this technique could also work with in-prompt data.


Why does the LLM get to send data to the website?? That’s my whole point, if you don’t expose a way for it to send data anywhere, it can’t.


I am more of a sceptic of AI in the context of a browser, than its general use. I think LLMs have great utility and have really helped push things along - but it’s not as if they’re completely risk free.


Waterfox might be what you’re after?

- Supports JXL out of the box (including support for alpha transparency and animations)

- Vertical tabs with optional tree tabs (hired the original tree style tab developer to implement the feature)

- For profit, but I don’t want your data, collect it or use it to earn a living (telemetry/analytics/experiments disabled at build time and alongside a fair few patches on top to make sure external connections are limited to what’s necessary)

Sidebar, I’m the developer of Waterfox


Firefox (with minor changes + addons) is what I use today, works well for what I care about. Thanks for the recommendation though!

While you're here, last time I came across your website (and it seems like it looks the same currently), I noticed that your browser comparison is not including Firefox, which is what you've forked from (as far as I can tell at least, it isn't made clear by the landing page actually, but the UI and name makes it obvious), which feels like it's a bit misleading almost intentionally.


Not intended to be misleading in a way, but it is on purpose as Mozilla don’t like it when there’s mention of Firefox on the website so I make any references sparingly.


Huh, interesting. Is it that you're avoiding Mozilla from some sort of retribution, preventing you from effectively working on Waterfox in case you anger them? I'm not sure it should matter too much what Mozilla thinks about other browsers comparing themselves to Firefox, it's definitely fair usage as long as you don't try to trick people into believing Mozilla is also building Waterfox / Waterfox is somehow exactly the same as Firefox.

Just adding Firefox in your comparison table really should be fine, and kind of makes me want to ask someone at Mozilla why others would be afraid of doing so.


Waterfox, I’ve spent a good amount of time on scouring through the code looking at what to remove and the next release I’ve found some last remaining remnants to disable


I hope I don't come across too harsh in my criticism here, but this is in my wheelhouse and I like to keep tabs on the privacy browser market in comparison to Waterfox.

> A bold technical choice: WebKit, not another Chromium clone

I don't find this a bold technical choice at all for a macOS only browser? I think this would be more impressive if it was Windows as well, as back (maybe ~5 or so years ago) when I was investigating WebKit on Windows, builds were not on an equal playing field[1]. So the engineering to get that up and running would be impressive.

> Speed by nature

Unfortunately, as of 16:40 UTC, I am unable to run the browser (installer?) to benchmark it due to "An error occurred while parsing the update feed.", but I recall 2 years ago when I tested Orion it was the slowest of all the browsers on macOS and Safari had quite a lead. I'd also be curious, being based on WebKit, how much faster it will actually be on macOS vs Safari?

I dropped speed as a focus point on Waterfox after compilation flags started making less of a difference compared to the actual architectural changes Mozilla were making for Firefox.

> Privacy etc

I think comparing to other major browsers such as Chrome the points are valid, but against Safari I'm not convinced it holds up as much. I know there is some telemetry related to Safari, but privacy is a big selling point for Safari as well and I'd be curious to see actual comparisons to that?

Safari includes iCloud Privacy Relay (MPR based on MASQUE[2]) and Oblivious DNS[3] - arguably two very valuable features that a company at a scale like Apple can subsidise.

The entire AI section also feels like trying to have it both ways as well. They criticise other browsers for rushing AI features, position themselves as the "secure" alternative, then immediately say they'll integrate AI "as it matures." This reads more like "we're behind on AI features" than a principled stance. If security is the concern, explain your threat model and what specific architectural decisions you're making differently? Currently Firefox has kept AI out of the "browser core" as it's been put, and I don't see them ever changing that.

Kudos that they have >2000 people paying for the browser directly, but I will say it doesn't excite me to see another closed source browser entering the market (I don't see any mention here of open-source apart from mention of WebKit being open source).

I do realise this is more a marketing post than an actual technical deep dive, but so much is just a rehash of every feature almost every modern web browser has?

I'll keep updating this comment as and when I can explore the browser itself a bit more.

[1] https://fujii.github.io/2019/07/05/webkit-on-windows/

[2] https://datatracker.ietf.org/wg/masque/about/

[3] https://dl.acm.org/doi/10.1145/3340301.3341128


WebKit on Windows has progressed since ~5 years ago. The gap between the Windows port and the Linux WPE/GTK ports is shrinking over time.

Every JIT tier has been enabled for JSC on Windows[1], and libpas (the custom memory allocator) has been enabled.

The Windows port has moved from Cairo to Skia, though it's currently using the CPU renderer AFAIK. There's some work to enable the COORDINATED_GRAPHICS flag which would enable Windows to benefit from Igalia's ongoing work on improving the render pipeline for the Linux ports. I go into more detail on my latest update [2], though the intended audience is really other WebKit contributors.

Webkit's CI (EWS) is running the layout tests on Windows, and running more tests on Windows is mostly a matter of test pruning, bug fixes and funding additional hardware.

There's a few things still disabled on the Windows port, some rough edges, and not a lot of production use (Bun and Playwright are the main users I'm aware of). The Windows port really needs more people (and companies) pushing it forward. Hopefully Kagi will be contributing improvements to the Windows port upstream as they work on Orion for Windows.

[1] https://iangrunert.com/2024/10/07/every-jit-tier-enabled-jsc... [2] https://iangrunert.com/2025/11/06/webkit-windows-port-update...


I tried Orion before and It was good to me. The closed-source part It is also something I don't like. I wish they do the browser open-source. I want to see if they use rust underneath :p.

Although, let's be honest few people look at the entire codebase. However i believe It would be beneficial to make It open-source for them so they could have contributors. Also new features would be easier to add. For example, i know some protocols like Multicast QUIC which was almost impossible to be added in Safari and Chrome.


By the way, I am trying right now Orion on MacOS and it goes quite fast as now. Actually, it is faster than Firefox at least when trying against speed.cloudflare.com under a VPN. Nonetheless the speed difference is not significant.

Also, there are two features which I would like to know/see in Orion:

- I use quite a lot the Containers and Group tabs in Firefox. The containers allow me to have different active accounts in the same browser. I use it a lot when managing AWS accounts.

- Change the behaviour of Cmd+Shift+F to be the same as Firefox, doing the full screen instead of the hide the tabs.


Orion Tech Lead here: Thank you for your patience — the issue has now been fixed. Please try updating now :)


Hi! Congratulations on the launch. Is your intention to ship using WebKit on Window and Linux too?


I can attest to that! While I never reached the upper echelons of virality that other browsers in recent memory have, I had much the same experience with Waterfox. Right place, right time gave it its initial user base: posting about a 64-bit build of Firefox on the Overclock.net forums. It hit the right time as people were starting to switch to 64-bit Windows and upgrading their hardware to match

I think that first week the project got 50k downloads! I was of course only 16 so had no concept of what to do but just keep getting new builds out - after all I made the builds because I couldn’t find anyone who would consistently and regularly do so.


Not quite independent as it’s a meta-search, but I developed a subscription based one at search.waterfox.net. Pays for the infrastructure costs and remains ad/tracking free.


Nice! I couldn't see the list of search engines that are included in your meta-search, the FAQ currently seems to imply that it only serves Google results?

If you give users the option to include / not include certain search engines in their results, so their money never goes to those particular engine companies, that could be of interest to some Kagi refugees.

I ended up vibe coding my own meta-search engine (augmented with a local SQlite database of hand-picked sites) so that I could escape Kagi, but I'm excited if Waterfox Search is an alternative I can recommend to others!


Currently only Google, but Brave and Mojeek are going to be made available as well very soon


Waterfox (which I maintain), Tor Browser, Mull (based on Tor) and I assume LibreWolf as well?


Love waterfox! Thank you for your awesome work.

I am aware of several forks myself, it was just odd they listed none lol


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: