It seems weird to run a closed-source browser on an open-source operating system when so many open alternatives exist—I certainly wouldn’t do it, and I’m a Kagi customer.
Being closed-source isn't just an ideological issue; it brings about a lot of practical issues. E.g.: distributions aren't going to package it, so users need to download the tarball and install it manually. They'll also need to manually update it (unless they're including some dedicated service?).
Then, integration with the OS will be weird. If you're distributing binaries, you can't dynamically link system dependencies (they are either bundled or statically linked). Any distribution-specific patches and fixes will be missing. AFAIK the default path for the CA bundle varies per distribution; I'm not even sure how you'd handle that kind of thing. I'm sure there's hundreds of subtle little details like that one.
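On the CA bundle point: the workaround I'd expect a vendor-shipped binary to use is probing a short list of well-known locations at startup. A minimal sketch in C, where the candidate paths are the common per-distro defaults as I remember them (an assumption, not an exhaustive list):

    #include <stddef.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Common CA bundle locations (assumed defaults: Debian/Ubuntu,
       Fedora/RHEL, openSUSE, and a generic OpenSSL-style fallback). */
    static const char *ca_candidates[] = {
        "/etc/ssl/certs/ca-certificates.crt",
        "/etc/pki/tls/certs/ca-bundle.crt",
        "/etc/ssl/ca-bundle.pem",
        "/etc/ssl/cert.pem",
    };

    static const char *find_ca_bundle(void) {
        for (size_t i = 0; i < sizeof(ca_candidates) / sizeof(ca_candidates[0]); i++) {
            if (access(ca_candidates[i], R_OK) == 0)
                return ca_candidates[i];
        }
        return NULL; /* nothing found: fall back to a bundled (and possibly stale) copy */
    }

    int main(void) {
        const char *path = find_ca_bundle();
        printf("CA bundle: %s\n", path ? path : "(none found, using bundled copy)");
        return 0;
    }

Workable, but it's exactly the kind of per-distro guesswork that a distribution package would never need.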
The audience ends up being Linux users, who are fine with proprietary software, have time and patience for manually configuring and maintaining a browser installation, and are also fine with an absence of proper OS integration.
I think Steam is the only popular piece of proprietary software on Linux, and it basically ships an entire userspace runtime and almost doesn't integrate with the OS at all.
I hope freediver will shed some light on the open source plans, because that's a deal breaker for me too. I'm a long time paying customer and huge proponent (even evangelist) of Kagi, but a closed source browser is just too many steps backwards for me no matter who makes it.
I get (though wouldn't necessarily agree with) keeping it closed while it's still in the works, but would like to know if the plan is to open source in the future or not.
Thank you! I'm obviously just one person, but I deeply appreciate your willingness to engage on HN, and your transparency and honesty about things (not just today, but also in the past). Makes me feel even better about being a paid Kagi subscriber.
Kagi founder here. Orion isn't open source yet primarily because we're a 5-person team that spent 6+ years building this and created significant IP doing so, and we're not in a position to defend our work against a well-funded company using it as a base (we care very much about the business model of the browser surviving). Restrictive licenses help in theory but enforcing them against a company with a larger legal budget doesn't.
We also see limited upside from community contributions - the number of people who can meaningfully work on a WebKit browser is small (from our experience hiring), and most of them already work at Apple or Kagi. Meanwhile, managing an open source codebase of this size would add real strain to our small team.
The plan is however to open source when Orion is self-sufficient (business model of Orion is you are the customer and can pay for it - like we used to pay for browsers 20 years ago before advertisers started paying for our browsing), meaning it can sustain its own development independent of Kagi Search. I want to take the opportunity to thank all people who supported the Orion browser vision [1]. We're not there yet but recent 1.0 launch and expanding to Linux are steps in that direction. And on Jan 1st this year we began development of Orion for Windows (HN exclusive yay!).
I understand this is unsatisfying to people who want source access now. It's a tradeoff we've made deliberately, not something we're hiding behind.
> The plan is however to open source when Orion is self-sufficient (business model of Orion is you are the customer and can pay for it - like we used to pay for browsers 20 years ago before advertisers started paying for our browsing), meaning it can sustain its own development independent of Kagi Search.
Orion will never reach "self-sufficiency" as long as you don't actually charge for Orion. Orion is completely free to use. I can donate to Orion+, but Orion+ offers no paid features; it's basically a Patreon. https://help.kagi.com/orion/orion-plus/orion-plus.html
(No major browser has ever sustained its own development independent of a search engine's funding, not even Netscape, which charged $40/seat in the 1990s, with a free "shareware" tier so generous that hardly anyone paid. Netscape was funded by advertising, especially from Yahoo search. Funding browser development entirely on donations to a commercial business would be completely unprecedented.)
What if, instead, you made Orion "source available" to paying customers, but not open source? You could merge PRs only from users who sign a CLA. (Users would file PRs out of charity, for the same reason they sign up for Orion+ today.)
> managing an open source codebase of this size would add real strain to our small team
Can you please elaborate on what you mean when you say this? This is something I do not understand. How do licensing terms affect your codebase management beyond setting things up so the code is available to users?
Publishing something under a FLOSS license doesn’t mean anything except that you grant end-users certain rights (the four essential freedoms). The rest (like accepting patches or supporting external developers) is customary but by no means obligatory. You don’t have a capacity for it - don’t do it, easy. There are thousands of developers who do that - they just dump whatever they have under a nice license and that’s it.
Unless you’re saying your legal department doesn’t have capacity to handle licensing concerns, especially if you’re using or potentially using non-FLOSS third party components. That I can totally understand, it could be pretty gnarly.
Please don’t be mistaken: Free Software is a purely legal matter of what you allow users to do with your work - not some operating principles or way of organizing processes.
Note: All this said, I can understand that you may not want to grant some freedoms to the end users, particularly the freedom to redistribute copies, because this could affect your plans of selling the licenses. But that’d be a whole different story than codebase management concerns.
As I wrote, if the concern is that they cannot figure out a way to distribute it as paid software because others may redistribute it for free, that'd be a valid point of concern (and there are plenty of options). But that's not what they're saying.
Someone steals their work. Violates the license. To defend their rights Kagi has to sue or lay down. Not giving away the keys to the kingdom until that ability to defend is established just underlines that they're doing valuable work.
Pardon my skepticism but I don’t believe that’s a realistic threat model. Yea, purely hypothetically that could happen. But realistically, why would someone do that - what’s the point? Especially so it’s severe enough to warrant a serious legal battle that takes more than a few sternly worded DMCA-like emails to hosting providers?
Mind you, if we’re talking about hypotheticals, someone can ship a differently branded or malware-ridden (or idk what else, my imagination runs dry pretty fast here) version of their binary distribution without any source code access just fine, violating licensing all the same. Patching unprotected binaries is pretty easy, frequently much less demanding than building from source. And with all due respect to the good work they’re doing, I highly doubt Orion team needs to buy a Denuvo license, haha.
(And, as I said, it’s not even remotely what they wrote.)
If it is open source, it will end up in LLMs and will be used in other browser variants (bigger and smaller). Any USP of the code itself will be gone.
If LLMs could hoover up the removal of auto-shipped telemetry (currently the main selling point), then I'd say that'd be a reason to publish and submit this to every indexer imaginable ASAP ;-) Shame it's essentially an absence of code, so it's not really possible to submit it anywhere.
And other features are worthy because they’re implemented ideas, not because of their actual implementations. Like programmable buttons or overflow menus - I’m pretty sure there’s no secret sauce there, and it’s extremely unlikely one can just grab some parts of that and move it to a different product - adapting the code from Orion’s codebase would likely take more effort than just implementing the feature anew.
Most code is just some complicated plumbing, not some valuable algorithmic novelty. And this plumbing is all about context it lives in.
The value is usually not in the code, but in the product itself. Some exceptions apply, of course.
> What’s this “it” are you talking about, exactly?
Orion's code.
LLMs facilitate the attribution-free pillaging of open-source code. This creates a prisoner's dilemma for anyone in a competitive context. Anything you build will be used by others at your cost. This was technically true in the past. But humans tried to honor open-source licenses, and open-source projects maintained license credibility by occasionally suing to enforce their terms. LLMs make no such attempt. And the AI companies have not been given an incentive to prevent vibe coders from violating licenses.
It's a dilemma I'm glad Kagi is taking seriously, and one the open-source community needs to start litigating around before it gets fully normalised. (It may already be too late. I could see this Congress legislating in favour of the AI companies over open source organisations.)
> Most code is just some complicated plumbing, not some valuable algorithmic novelty. And this plumbing is all about context it lives in
Sure. In this case, it's a WebKit browser running on Linux. Kagi is eating the cost to build that. It makes no sense for them to do that if, as soon as they have a stable build, (a) some rando uses Claude to copy their code and sell it as a competitor or (b) Perplexity straight up steals it and repackages it as their own.
You don’t need a LLM to just copy their code as a whole thing. Copying and rebranding (plus some vendor adaptations) is a valid concern that I have already agreed about, but for the third time: it’s not what they wrote. Has nothing to do with codebase management.
And taking some individual pieces may sound problematic as an abstract concern, but have you ever tried to adapt code from one FLOSS codebase into a different one? Especially UI code, if it's not some isolated, purportedly reusable component? Maybe Orion developers are wizards who wrote exceptionally portable and reusable code, but usually in my experience it's a very painful process, where you're constantly fighting all the conceptual mismatches from different design choices. And I've yet to see an LLM that can do architectural refactoring without messing things up badly. So I'm generally skeptical of such statements. And that's why I'm suggesting we pick a concrete example we can try to analyze, for doing this at the highly abstract "the whole code" level is not going to succeed.
I've found a few that work but many can be buggy or non-functional, just depends on the extension. The only one I use currently is called "Control Panel for Twitter", which seems to work pretty well.
I would ignore the haters, keeping Orion proprietary makes the most sense for being able to successfully charge for it as a commercial product. You can't sell an OSS product, only supporting services, as many many startups have realized and been forced to relicense to much anger within their respective communities.
And when the market is going to be primarily technical people I don't think you can trust them/us with source-available either as hackers with a strong aversion to paying for software thinking themselves clever will make and distribute bootleg builds with the license checks removed. Then you'll have to spend your time finding and DMCAing them which will only make people mad. Best to avoid it entirely.
I appreciate you/Kagi actually thinking about building a sustainable business in contrast to companies that open source their core competency and then fail to make money later.
Earlier in the thread I read the plan was to release the source "when it has merit". But that instantly left me with the feeling that the authors of the browser and I have very different opinions on what the word merit means, such that they would be incompatible, and I'd never want to use it. This is a decision that has lowered my opinion about exactly how much I can trust Kagi.
> Kagi founder here. Orion isn't open source yet primarily because we're a 5-person team that spent 6+ years building this and created significant IP doing so,
But it's possible I haven't considered some detail where I might agree it's reasonable. Can you describe or offer any insight into the "significant IP" that you need to protect and defend? What threats from a larger company are you primarily concerned about?
Having access to the source is just one part of open source.
The state of webkitgtk is a bit rough, as I’m sure you and your engineers have noticed. The other part of what open source means to people is that you contribute back to the open source code you used to build your business, lifting all boats in the process.
What people certainly do not want to see is Kagi pull an Apple: utilize FOSS to the extent it helps you but return nothing but “thanks everyone but we got ours”.
Are you looking for people who worked on WebKit in the past?
I really hope you refactored WebKit's Bridge, because it allowed a lot of exploits in the past, and was neglected upstream by Apple.
When I started my RetroKit fork I was aiming to reduce that attack surface while offering farbled apis based on other browser behaviors and their profiles. [1]
My fork has been neglected a bit due to lack of time, as I'm currently still busy with other APT related things before I can get back to it.
Would love to chat whether your plan is to open source your WebKit fork, maybe there's some overlap and we can work together on it?
(I currently hope that ladybird will be getting into a more forkable and modular state, because servo passed by that goal a long time ago).
The GPL has pretty good legal precedent, and so does the MPL in the browser space (though, Firefox has mozilla behind it so it gets the enforcement benefit). If the SFC wins its vizio case, would you look into freeing orion?
uBO is not technically working on Orion for iOS. We do not have permissions to run certain web extension APIs on iOS needed for uBO feature set. The adblocking you witness is thanks to built in native adblocker in Orion.
We support Kagi across products. We believe alternate browser engines keep the web standard. We give more weight to that than to whether a particular browser's value add (on top of a double digit* but non-hegemonic engine) is open.
We believe software and hardware creators have a right to choose their business model and let that model compete, as Kagi's is competing right here in this thread.
* Having worked at mega banks etc., they do look at these numbers to decide whether to invest in standards support or slap on a "Requires IE" button.
I am generally ok with things being proprietary if they want, and I'm mostly ok with Orion being proprietary, but I do understand peoples' issues here.
For a lot of people (even relatively geeky people), their computers end up being "an interface to use a browser". People use their browser to file their taxes, to write their documents, to manage their websites, to create websites, to look at porn, to pirate movies, to chat with their friends, to send/receive money to their bank, and a whole bunch of other things.
It would be hard to imagine a piece of software that is capable of knowing me more intimately than my primary web browser, and as Google has proven, this intimate knowledge is valuable. Companies pay boatloads of money for large quantities of personal information to target ads (and probably a bunch of other more disturbing things).
I genuinely don't think freediver is lying; I believe him when he says there's no telemetry data being sent and that it's not tracking me, but there's the sticking word: "believe". I have to trust him, which wouldn't necessarily be the case if it were FOSS.
Now, granted, I could always run Wireshark or something to ensure that there's no telemetry data being sent regularly, but that only protects you so much; for all I know, they could be taking steps to actively make it look like they're not sending data, or they could be batching up N days of data and sending it in batches so it is not as obvious that telemetry is sent.
Again, I genuinely don't think they're doing that, I believe them, but I do see peoples' points.
> I genuinely don't think freediver is lying; I believe him when he says there's no telemetry data being sent and that it's not tracking me, but there's the sticking word: "believe". I have to trust him, which wouldn't necessarily be the case if it were FOSS.
Proving this is actually the easy part - all you have to do is install a network proxy and monitor connections. It is something literally anyone can do which is why the zero telemetry statement carries a lot of weight.
> For a lot of people (even relatively geeky people), their computers end up being "an interface to use a browser". People use their browser to file their taxes, to write their documents, to manage their websites, to create websites, to look at porn, to pirate movies, to chat with their friends, to send/receive money to their bank, and a whole bunch of other things.
I agree! Which is why it is so terrifying for me that Orion is the only browser on the market you can pay for. For the most intimate piece of software we have on our computers, you would expect that more people would want a clean transaction and a "being the customer" relationship. Yet for the vast majority of users, their browsing has been paid for by advertisers and third parties (true for 100% of the most popular browsers out there).
The paid browser market essentially collapsed after Microsoft bundled IE with Windows for free. For example Netscape was $49. Microsoft famously attacked this with "Why waste $50 for Netscape?! IE is free!"
This doesn't make browsers today really 'free' (same as how search engines aren't really 'free'). Browsers are incredibly complex to make and maintain. And the customers paying all these costs are the advertisers/third parties, not the users using them (the entire reason for Kagi's existence is to create an option where the user is also the customer).
Being able to pay for the most intimate piece of software you have on your computer makes a lot of sense.
Come home? It never left it. Konqueror, the software where it all started, still is a core KDE app. WebKitGTK, arguably the most portable WebKit distribution and what Orion itself uses, has always been Linux-first.
I can’t speak to whatever konqueror uses these days but webkitgtk is notoriously behind and difficult to work with. You can read posts from the Tauri devs questioning their entire approach on Linux due to it.
I really hope Kagi contributes back upstream to improve the situation, it’s needed.
Edit: looks like konqueror uses qt web engine which is chromium. The irony of the KDE browser abandoning WebKit while the GNOME browser still tries to use it is too much.
Just curious, but is this really a big deal? As a customer, you already trust Kagi enough to feed them your entire search history, so I guess you don't think they're bad actors. So why do you find the (momentary?) "un-openness" of the browser problematic? I'd gladly try it (I'm on Arch), even just out of curiosity (unlikely to make it my main, though).
Jeez, downvoted for asking about context? People, calm down.
Requiring it to be open source is not just about trusting the publisher. There are a bunch of other possible reasons, including wanting to support open source as a counterbalance to proprietary software.
For me, it's a big deal (although not a dealbreaker) for that reason. If I have the option of two different pieces of software, one being open source and the other proprietary, I'll choose the open source one every time unless there's something really exceptional about the proprietary one. But that's very rare.
I was just trying to think of any proprietary software I use outside of work (where I don't have a choice) or games. There must be at least one, but I can't think of what it is.
Because Kagi Search is a service I subscribe to. A browser is a program I install. That difference means everything.
But since I have your attention, I just want to add that I'm a huge fan of Kagi Search and it's well worth the money I spend for it. I love the work you guys are doing, and that love is the reason why I'm even thinking about using Orion. But they are two entirely different use cases.
I am pretty sure the expectation would be different if Kagi search could be self hosted. Linux people have come to expect open source for code they run on their own machines. Historically closed source Linux software has run into a lot of problems with dependency version mismatches as libraries get updated through the distributions package manager.
I'm not the one running Kagi on my computer, and the expectations for software run over a network are and should be different from software I run on my computer.
About half a year ago, I ran into an instance of a user who requested more openness[0] regarding the sources Kagi used - initially there was a list that was available, and then it was removed. I know it's not exactly the same, and it's been a long time since that request was made, but if you happen to read this, I second their request.
Personally, I think it would be incredible if you open sourced your search engine. But like someone else said more eloquently, software runs on our computers. And to me, open-source software is table stakes when there are viable alternatives.
There aren’t great open-source search engines, so I’m moving from one proprietary option to the next. But there are great, open-source browsers already, and I refuse to go backwards.
If a good, open-source search engine were available, I would leave Kagi for it.
I can build it myself and skip that step. Or, if the build process is reproducible, you can make trust less of an issue by having a small handful of independent people run their own builds and post their signatures. That way you need those people to all collude with Kagi to forge a bad build. This is how e.g. bitcoind binaries are handled.
They give you a key, and only if you have a higher-tier account. The act of doing that requires a step in the process where they know you're requesting a key and who you are. They could bind them in the backend if they wanted, before giving them to you.
You’re still trusting them. Not to mention they could round them all up by IP or browser fingerprinting.
There is still some level of trust.
I happen to trust them enough for that; but it is still trust.
I am not an expert in the underlying cryptography, but the claim is indeed that the cryptographic approach makes it impossible for them to link the key to the queries in the backend.
Sure! But there is a stage where they generate those keys for you and give them to you. You need to be logged in to get that page. That is trust there.
No, issuer-client unlinkability is a feature of the design. The token is finalized by the client using private inputs so Kagi never actually sees the redeemable token (until it's redeemed).
Using the example doc you're citing from kagi.com (though not the RFC; I don't have the time to dive into that one at the moment), I see that a session token plus some other stuff is passed in and a token comes out.
Where does it show that on the Kagi backend they couldn’t, theoretically, save the session key before performing the token response?
Sure, they probably do. Doesn't matter because neither the session key nor the token response can be linked to the tokens.
If you're not going to make an effort to understand how it works, don't make assertions about how it works. Ask your favorite LLM about the RFC if you have any further questions.
Google started as a company that seemed worthy of trust. The founders had ideals and followed them. Look what happened. Companies can turn evil surprisingly quickly. I'm also a Kagi customer, but I wouldn’t use a closed-source browser either.
Because free (as in the FSF definition) software should be a human right. We deserve to know how our tools work and be able to improve them and use them as we please. Free (as in freedom) software doesn't need to be monetarily free either. Make it so the purchase of Orion comes with the binaries and a copy of the source code, or provide it on request. This has proved to be sustainable before; arguably the de facto standard for pixel art is (or was, before a license change made it so you can't redistribute the source code) free software, despite costing money.
Why does it seem weird? I run a lot of proprietary software on linux. Actually made a career of it. I also run a lot of open source whenever I can, but I'm pragmatic about the whole affair. I think most users are like that.
Yeah. The only WebKit browser on Linux aside from Orion is GNOME's browser (which frankly kinda sucks, which is why I want Orion open-sourced, so that GNOME can take its work on web extensions when they become supported).
Your body produces cholesterol naturally, without any meat or dairy. In my case it actually produces way more than I need, even on a vegan diet, because of genetic factors. People should test their LDL and evaluate whether eating cholesterol is healthy _for themselves_ as it’s different for everyone.
That's why you'd want to be able to replace the mainboard, screen, keyboard, speakers, trackpad, etc., and not just the RAM. Like https://shop.mntre.com/products/mnt-reform, but presumably easier for non-technical people to use.
Don't get me wrong, I like the approach, but if you want a laptop, it's the wrong tradeoff and not one that 90% of users will take. Sure, _some_ will choose this, and small companies can do well to sustain themselves, but it probably won't make a dent in e-waste numbers and you won't see iPhone-like adoption numbers, relegating it to just a niche product.
I built my last company on OpenBSD. It was easy to understand the entire system, and secure-by-default (everything disabled) is the right posture for servers. Pledge and unveil worked brilliantly to restrict our Go processes to specific syscall sets and files. The firewall on OpenBSD is miles better to configure than iptables. I never had challenges upgrading them--they just kept working for years.
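For anyone who hasn't seen them, a minimal C sketch of the pledge/unveil pattern on OpenBSD (the path and promise strings here are made-up examples, not anything from an actual setup):

    #include <stdio.h>
    #include <unistd.h>
    #include <err.h>

    int main(void) {
        /* Only this directory stays visible, read-only; the rest of the
           filesystem effectively disappears for this process. */
        if (unveil("/var/www/data", "r") == -1)
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)          /* lock: no further unveil() calls */
            err(1, "unveil lock");

        /* From here on: stdio plus read-only file access. No sockets, no
           exec, no writes; a violating syscall kills the process. */
        if (pledge("stdio rpath", NULL) == -1)
            err(1, "pledge");

        FILE *f = fopen("/var/www/data/config", "r");   /* allowed */
        if (f)
            fclose(f);
        /* fopen("/etc/passwd", "r") would now fail with ENOENT, and
           socket(2) would abort the process. */
        return 0;
    }

The appeal is that the promises are named after intent ("stdio", "rpath") rather than individual syscall numbers.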
It's barely usable by itself, but I don't think it's an inherent problem of seccomp-bpf, rather the lack of libc support. Surely the task of "determine which syscalls are used for feature X" belongs in the software that decides which syscalls to use for feature X.
The "what does the equivalent of pledge(stdio) actually mean?" doesn't have to actually be on the kernel side. But it's complicated by the fact that on Linux, syscalls can be made from anywhere. On OpenBSD syscalls are now only allowed from libc code.
So even if one uses Cosmopolitan libc, if you link to some other library, that library may also do direct syscalls. And which syscalls it does, and under which circumstances, is generally not part of the ABI promise. So this can still break between semver patch version upgrades.
Like if a library used to just not write debug logs by default, but then changed so that they are written (but to /dev/null), then there's no way to inform the application code using that library, much less update it.
If you ONLY link to libc, then what you said will work. But if you link to anything else (including using LD_PRELOAD), then all bets are off. And at the very least you'll also be linking to libseccomp. :-)
If libc were the only library in existence, then I'd agree with you 100%.
> So even if one uses Cosmopolitan libc, if you link to some other library, that library may also do direct syscalls. And which syscalls it does, and under which circumstances, is generally not part of the ABI promise. So this can still break between semver patch version upgrades.
Well, but isn't that a more general problem with pledge? I can link to libfoo, drop rpath privileges, and it'll work fine until libfoo starts lazily loading /etc/fooconf (etc.)
A nice thing about pledge is that it's modularized well enough that such problems don't occur very often, but I'd argue it's no less common an issue than "libfoo started doing raw syscalls." The solution is also the same: a) ask libfoo not to do it, or b) isolate libfoo in an auxiliary process, or c) switch to libbar.
> And at the very least you'll also be linking to libseccomp. :-)
libseccomp proponents won't tell you this, but you can in fact use seccomp without libseccomp, as does Cosmopolitan libc. All libseccomp does is abstract away CPU architecture differences, which a libc already has to do by itself anyway.
No, for two reasons: 1) pledge() lets you give high level "I just want to do I/O on what I already have", and it doesn't matter if new syscalls "openat2" (should be blocked) or "getrandom" (should be allowed) are created. (see the `newfstatat` example on printf). And 2) OpenBSD limits syscalls to be done from libc, and libc & kernel are released together. Other libs need to go through libc.
Yes, if libfoo starts doing actual behavioral changes like suddenly opening files, then that's inherently indistinguishable from a compromised process. But I don't think that we need to throw out the baby with that bathwater.
And it's not just about libfoo doing raw syscalls. `unveil()` allows blocking off the filesystem. And it'll apply to open, creat, openat, openat2, unlink, io_uring versions of the relevant calls (if OpenBSD had it), etc…
But yes, if libc could ship its best-effort pledge()/unveil(), that also blocks any further syscalls (in case the kernel is newer), that'd be great. But this needs to be part of (g)libc.
Though another problem is that it doesn't help child processes with a statically compiled newer libc, that quite reasonably wants to use the newer syscalls that the kernel has. OpenBSD decided to simply not support statically linked libc, but musl (and Cosmopolitan libc?) have that as an explicit goal.
So yeah, because they mandate syscalls from libc, ironically OpenBSD should have been able to make pledge/unveil a libc feature using a seccomp-like API, or hell, implemented entirely in user space. But Linux, which has that API, kinda can't.
(ok, so I don't know how strictly OpenBSD mandates the exact system libc, so maybe what I just said would open a vulnerability)
> 1) pledge() lets you give high level "I just want to do I/O on what I already have", and it doesn't matter if new syscalls "openat2" (should be blocked) or "getrandom" (should be allowed) are created. (see the `newfstatat` example on printf).
You can do this with seccomp if you're libc. A new syscall is of no consequence for the seccomp filter unless libc starts using it, in which case libc can just add it to the filter. (Of course the filter has to be an allow-list; a sketch follows at the end of this comment.)
> And 2) OpenBSD limits syscalls to be done from libc, and libc & kernel are released together. Other libs need to go through libc.
That avoids one failure mode, but I think you assign too much importance to it. If your dependency uses a raw syscall (and let's be honest, this isn't that common), you'll see your program SIGSYS and add it manually.
If you have so many constantly changing dependencies that you can't tell/test which ones use raw syscalls and when, you have no hope of successfully using pledge either.
> But I don't think that we need to throw out the baby with that bathwater.
We agree here, just not on which baby :)
> And it's not just about libfoo doing raw syscalls. `unveil()` allows blocking off the filesystem.
You're right, seccomp is unsuitable for implementing unveil because it can't inspect the contents of pointers. I believe Cosmopolitan uses Landlock for it.
> Though another problem is that it doesn't help child processes with a statically compiled newer libc
If you're trying to pledge a program written by somebody else, expect problems on OpenBSD too, because pledge was not designed for that. (It can work in many cases, but that's kind of incidental.)
If it's your own program, fine, but that means you're compiling your binaries with different libcs and then wat.
> So yeah, because they mandate syscalls from libc, ironically OpenBSD should have been able to make pledge/unveil a libc feature using a seccomp-like API, or hell, implemented entirely in user space. But Linux, which has that API, kinda can't.
My take is "it can, with caveats that don't matter in 99% of the cases pledge is useful in." (Entirely in user space no, with seccomp yes.)
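To make the "seccomp if you're libc" point concrete, here's roughly what a tiny allow-list filter looks like with raw seccomp-bpf and no libseccomp. This is a sketch under simplifying assumptions: a real filter would also check seccomp_data->arch and cover the full set of syscalls the libc actually uses.

    #include <stddef.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    int main(void) {
        struct sock_filter filter[] = {
            /* load the syscall number */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
            /* allow a small "stdio-like" set; everything else traps (SIGSYS) */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_read,       4, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write,      3, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 2, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit,       1, 0),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRAP),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len = sizeof(filter) / sizeof(filter[0]),
            .filter = filter,
        };

        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1)
            return 1;
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) == -1)
            return 1;

        write(1, "still alive\n", 12);   /* read/write/exit still work... */
        /* ...but e.g. openat(2) from here on would deliver SIGSYS. */
        return 0;
    }

If libc grows a new syscall for feature X, it adds one BPF_JUMP line; nothing on the kernel side has to know what "stdio" means.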
But only in very small sandboxes, right? Yes, seccomp could potentially be used for your JIT/interpreter sandbox. And because it inherently executes untrusted input, that's definitely the most important place.
But compare how many applications execute untrusted remote programs to how many programs that have had security vulnerabilities. Or indeed, how much code.
What percentage of code runs in chrome/firefox's sandbox? 0.0001%?
Have you tried to create a seccomp ruleset for a real program? I have. There are too many variations between machines and code paths, so you'll necessarily need to leave wide-open doors through your policy. Sure, the more you disable, the more "luck" you manufacture in case of a bug, preventing exploitation. But no, it's not fit for purpose outside these extremely niche use cases.
Linux is far too bloated to be run as a secure system, and the attack surface of any Linux distro, due to the number of kernel modules loaded by default, is very big.
> I built my last company on OpenBSD. It was easy to understand the entire system, and secure-by-default (everything disabled) is the right posture for servers.
That really depends. You could argue a router is a server. OpenWRT has the default of WiFi off for security, which means that if the config is somehow hosed and you have to hard reset the router, you now have an inaccessible brick unless you happen to have a USB-Ethernet adapter on you.
Sensible defaults are much, much better than the absolutist approach of "disable everything".
Edit: it's so funny to know that all the people slamming the downvote have never hit the brick wall of a dumb default. I hope you stay blessed like that!
> Edit: it's so funny to know that all the people slamming the downvote have never hit the brick wall of a dumb default.
I'll bite. OpenBSD and OpenWRT are different things, and I'm honestly surprised to hear that tech matters enough to you to set up OpenWRT but not enough to own a desktop (or a laptop that doesn't skimp on ports).
They are, but Linux vs. BSD doesn't matter all that much when the question is the meta one of how to decide defaults.
Funnily enough I feel a BSD is much more suited to modems / routers, if it weren't for HW WiFi support. Yes, I know you can separate your routing and your access point onto different devices.
At any rate I'm just pointing out that absolutism is rarely the right answer. It's also pretty telling that people actually went through my comment history to downvote a few unrelated recent comments. People get angry when they have to adjust their assumptions.
As far as computing devices go, I prefer not lugging around a plastic brick. And one is bound to either lose or forget a dongle, in which case you get boned by OpenWRT's dumb default.
The reason for that default is that if they set up an open OpenWRT WiFi (or default passworded, think "OpenWRT2025"), in that split 5 minute window before you change it, some wardriver might login and mess with your network.
Obviously the chances of that are rather insignificant. And they could generate a default password based on the hardware. For the real security nuts they could tell them to build an image without default-on WiFi (currently they do the inverse).
I'm not comparing those, I'm comparing an absolutist vs. a flexible attitude.
People are downvoting because I'm making them realize they have to rethink their assumptions, and it is less painful to attack the messenger than to actually do so. People these days are generally bad at not tying their identity to things and not taking it personally.
Vision Pro is a perfect example of a greed-driven failure. Apple pissed off both devs and megacorps by keeping the ecosystem closed, fighting tooth and nail in courts such that every app needed to pay them 30% and couldn't be installed without their blessing, and unsurprisingly very few massive companies (or hackers) wanted to support Apple's fledgling closed garden. Without software, it's just a gadget.
Tesla announced they are adding it this week. Ford’s CEO expressed glee at GM removing it. There isn’t a CarPlay App Store nor downloads to get 30% from (or if there were, they’d appreciably be enabled by Apple’s platform as we aren’t in the habit of subscribing to or buying apps for our car today), and while we don’t know the licensing terms from the GM removal it sounded like privacy violations and extra subscription revenue are their motivations for dropping CarPlay. That doesn’t sound consumer friendly on the carmakers part at all. I think this field doesn’t line up with the overall thesis, squint as we might.
>Tesla Inc. is developing support for Apple Inc.’s CarPlay system in its vehicles, according to people with knowledge of the matter, working to add one of the most highly requested features by customers.
>The carmaker has started testing the capability internally, according to the people, who asked not to be identified because the effort is still private.
Tesla's news is interesting. A good question to ask here is who's in control in the Tesla x CarPlay relationship. The answer is obviously the former (Apple can't dictate anything and Tesla gets to boss them around).
That's very different from a Toyota x Apple partnership.
So no, those are two different scenarios. The era of Apple controlling the platform is gone. (Except for legacy ones)
People buy Tesla for Tesla and not because CarPlay. But CarPlay is a purchasing decision factor for other brands, which means a power imbalance exists.
So this is a classic game theory situation. You want all participants (Toyota, Honda, Ford) to cooperate (not have CarPlay) and not defect. So participants watch each other's moves.
If they stick together, all of them stand to win.
If one defects, in the short term they might win, but in the long term Apple will seek to commoditize the car maker.
> People buy Tesla for Tesla and not because CarPlay.
They increasingly just don't buy Tesla. Strong growth in that segment lately.
I recall though, back in 2021 we rented one as a test drive situation. The UX was so horrific I did an immediate 180 on that idea. Hard pass. Carplay might've saved that sale, their stock infotainment is trash.
I wouldn't be surprised if they go all in on CarPlay Ultra in the end.
Oh, I'm aware. I have no love for Tesla. I was making an observation of what I see around me (plenty of new Teslas on the road even after Elon's shenanigans).
Huh? Apple does not charge for CarPlay. Some automakers are trying to give them the boot, but that has nothing to do with Apple's greed and everything to do with the automakers' greed. They want their own ecosystem of apps.
I'll let you in on a secret. Ask yourself what the business case of CarPlay is. "Why" should Apple do CarPlay. Put yourself in the shoes of a VP at Apple pitching CarPlay. Are they saying "let's invest millions of dollars in inventing the UI for cars and give it away for free, for .. goodwill?"
Nope, the slide deck would say "Cars are the next computing platform. That's where most people spend time. So imagine if we (Apple) were meaningfully present there ... and that's why we need to invest in it."
So, yes, CarPlay is a move to control another computing form factor. One they do not manufacture (like the TV and Apple TV) ... and unfortunately for them, car makers are wiser this time around.
A simpler explanation is that all of these little conveniences add up to keeping customers firmly embedded in the ecosystem, repeatedly buying new iPhones. And sure, if we can offer another environment where an App Store purchase can be used, great.
> unfortunately for them, car makers are wiser this time around
Maybe. Ditching CarPlay does not currently seem like the wise decision, given how many of us have decided that omitting it is a deal killer. I love my Lightning, but I do not for one nanosecond trust that Ford would keep the app ecosystem on my truck running as long as Apple will keep iOS working on iPhones.
I'd argue a missing social safety net combined with grossly inadequate public education, no job opportunities, unaffordable healthcare and housing, and a prison system designed to punish all drive people to take drugs. Drug addiction is just the symptom. Let's focus on giving people real hope and value and meaning in their lives, from birth to death, instead of killing people, without trial, a world away.
The key value of Pebble to me was its incredible C SDK that made it super easy to write custom apps for it. I remember way back I got full turn-by-turn navigation working on it.
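From memory, a rough sketch of how little a minimal watchapp needed with that SDK (function names are as I recall the old Pebble C API; treat the details as approximate):

    #include <pebble.h>

    static Window *s_window;
    static TextLayer *s_text_layer;

    static void prv_window_load(Window *window) {
      Layer *root = window_get_root_layer(window);
      GRect bounds = layer_get_bounds(root);
      /* A centered line of text: the "hello world" of watchapps. */
      s_text_layer = text_layer_create(GRect(0, 72, bounds.size.w, 20));
      text_layer_set_text(s_text_layer, "Hello, Pebble!");
      text_layer_set_text_alignment(s_text_layer, GTextAlignmentCenter);
      layer_add_child(root, text_layer_get_layer(s_text_layer));
    }

    static void prv_window_unload(Window *window) {
      text_layer_destroy(s_text_layer);
    }

    int main(void) {
      s_window = window_create();
      window_set_window_handlers(s_window, (WindowHandlers) {
        .load = prv_window_load,
        .unload = prv_window_unload,
      });
      window_stack_push(s_window, true /* animated */);
      app_event_loop();                 /* blocks until the app exits */
      window_destroy(s_window);
    }

Compared to most embedded SDKs of that era, that really was the whole ceremony.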
“Normal exposure” is doing some heavy lifting in that sentence. Presumably having all your daily texts arrive on such paper wouldn’t be “normal exposure,” which if I recall correctly is handling a receipt for a few seconds a day with only your fingertips.
Does Kagi plan to open-source Orion on Linux?