I don't think app packaging could be said to be the Achilles heel -- how much work it involves depends a lot on the app (some things are very easy, others can be painful), but it is certainly not the case that Sandstorm didn't take off because packaging was too hard.
“Was”, what’s left of sandstorm anymore besides a couple weekender coders whose $dayjob is at cloudflare these days? Doubt CF has anything like 10%/20% time like GOOG.
This codebase is basically abandonware at this point and can't be trusted for anything serious these days. If CF brought this on as an official product (and renamed it), it would go a long way toward building back trust.
FWIW, the most-active Sandstorm developers over the past couple of years, such as zenhack (to whom you replied) and ocdtrekkie, are not Cloudflare employees, and never were. (They were never Sandstorm employees either.)
For my part, I am indeed mostly focused on Cloudflare Workers today and don't have much leftover energy to spend on Sandstorm. This has nothing to do with Cloudflare telling me how to spend my time. I make my own choices. I still love what we were building with Sandstorm, but what can I say, Workers has been a lot more successful and so I am drawn to focus my energy there.
Note that Cloudflare has no affiliation with Sandstorm, other than employing some of the former team members. Cloudflare did not acquire Sandstorm itself.
One problem with Sandstorm is that it didn't have something like miniflare. (Cloudflare didn't either until recently, and that's fixed now, but the value prop was high enough otherwise that this wasn't fatal.)
Another is that it lacked (and still does, from what I can see) a killer app. It had the perfect conditions last year with the Twitter acquisition, but to my knowledge didn't seize upon it. (Likewise, Cloudflare had a similar opportunity, but the Wildebeest project comes off as a celebration of Cloudflare's infrastructure for its own sake, and/or aimed at those who love devops stack complexity -- a fairly tone-deaf response to what would drive someone to want to run their own fediverse node that's not backed by Mastodon.)
The weird URLs -- or rather, the constraints that led to them, and the downstream consequences for UX -- didn't help.
This comment confuses me: Sandstorm itself is entirely local, and the vagrant-spk dev tool assembles local Sandstorm dev instances in virtual environments with three commands. I'm not sure what "thing like miniflare" we are missing!
As far as the killer app, Sandstorm has some really awesome Sandstorm-only apps, but probably not enough yet for our "exclusives" to be a draw to the platform on their own. Sandstorm currently has some limitations that make it difficult to use in federated environments, and we are working on it, but yeah, it meant we didn't have a fantastic story for social during all this Elon stuff.
> This comment confuses me: Sandstorm itself is entirely local
The hosted version wasn't. Having to prop up your own Sandstorm instance pretty much compromises the project goal -- ease of app installability means little when you're still responsible for running the Sandstorm infrastructure it relies upon.
Your original complaint was the lack of a "miniflare", i.e. a simulator for local development purposes. But you can run Sandstorm itself locally and there are in fact tools to streamline the process of doing app development using a local Sandstorm server.
Now you seem to be arguing something about how running Sandstorm locally is too much work for end users, who would prefer to use the hosted version? But I thought we were talking about app developers. End users don't need a "miniflare".
If what you're looking for here is someone to confirm that your crummy false equivalence is crummy and false, then that's doable. (Say the word if so.)
Sandstorm implements a capability-based security model, where not only does each app run in a strong sandbox, but a new instance of the app is created for each document (or whatever the app's logical unit of data may be). Sandstorm itself enforces that each document can only be accessed by the people with whom it has been shared, regardless of any bugs that might exist in the app itself. All communications between the user's browser and the app go through a proxy implemented by Sandstorm which applies this authorization regime.
Apps cannot even talk to each other or the internet without specifically requesting access, granted by the user. The UX model for these requests is designed to flow naturally for the user, by deriving the user's intent to permit access from the action they took that caused the access to be needed. For example, say the user wants to embed a chart into a document, where the chart editor and document editor are separate apps. The user clicks some sort of "embed" button in the document editor. Now they are presented with a chooser where they can pick which thing they want to embed. If they make a choice, there is no need to separately ask the user if they want the document to have access to the chart -- of course they do. Sandstorm works by having the system implement the "picker" UI directly, so that Sandstorm knows the user made this choice, and can automatically provide the implied authorization.
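The embed flow described above can be sketched in miniature. This is a toy illustration of the pattern only, not Sandstorm's actual API -- every class and method name here is made up:

```python
# Toy sketch of the "picker grants the capability" pattern described
# above. Not Sandstorm's real API: Chart, Powerbox, and DocumentEditor
# are invented names, just showing the shape of the idea.

class Chart:
    """A resource the user might choose to embed."""
    def __init__(self, title):
        self.title = title

    def render(self):
        return f"<chart: {self.title}>"

class Powerbox:
    """Platform-side picker. Apps never see the full list of
    resources; they only receive the one the user picked."""
    def __init__(self, resources):
        self._resources = resources

    def pick(self, user_choice):
        # The platform draws its own chooser UI, so the act of
        # choosing doubles as the authorization grant -- no separate
        # "allow access?" dialog is needed.
        return self._resources[user_choice]

class DocumentEditor:
    """App-side code: it holds only capabilities it was handed."""
    def __init__(self):
        self.embedded = []

    def on_embed_clicked(self, powerbox, user_choice):
        chart = powerbox.pick(user_choice)   # the grant happens here
        self.embedded.append(chart)
        return chart.render()

box = Powerbox({"q3": Chart("Q3 revenue")})
doc = DocumentEditor()
print(doc.on_embed_clicked(box, "q3"))  # -> <chart: Q3 revenue>
```

The key point is that the document editor never enumerates or names charts itself; it can only use the one object the platform handed it.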
All this actually makes apps easier to write since they don't have to deal with authorization and user management themselves, and as a result there are a lot of neat unique apps for Sandstorm written by various people in a short amount of time. However, the down side is that existing off-the-shelf apps that do already feature their own user management and authorization are somewhat laborious to port to Sandstorm.
Yunohost takes a more traditional model of just running each app and letting it figure out its own authorization.
Cloudron is pretty cool, and Docker containers definitely help, but there's a huge security difference between what Docker provides and what Sandstorm does. One of the biggest differences is that most plain Docker-based self-hosting platforms isolate at the app level. Sandstorm isolates individual documents, which often means Sandstorm apps are protected from their own internal flaws as well.
Also, Cloudron is not open source and you generally need a subscription to use it. This has made it a more successful business venture, of course, but it's a big downside for software freedom.
What would an application do that is more secure than namespaces+cgroups? Containers absolutely are a security tool (especially if you configure your capabilities and don't run your process as PID 1), one with security tradeoffs compared to real virtualization, sure, but you're not going to do that on a VPS.
Docker (or Podman) is just a tool that sets up some Linux primitives for you. One with a track record and volume of users that'd make me trust it over things that specifically advertise themselves as "security sandboxes" (e.g. firejail, which had some extremely funny basic exploits to get root from its SUID'd binary). You can even pair containers with user namespaces now and get the fakeroot equivalent of containers.
> What would an application do that is more secure than namespaces+cgroups?
Object capabilities, which are what Sandstorm and Cap'n Proto are based on, provide much more security than Mandatory Access Control systems while also providing a much simpler path to getting there.
Sadly the literature on OCAP is fairly poor, often being either too low level or too abstract.
The tl;dr is that OCAP systems work by assuming an application has no authority whatsoever; it must then be passed capabilities (not sandboxed, but passed), either at start time or when the application requests them.
The easiest way to understand it is imagine if instead of being able to open a file on the filesystem by path, you had to specifically be passed the file descriptor by the OS, possibly before runtime.
Another way to think about it is OAuth2: you authorize an application to have certain capabilities, and the client is handed back a set of API tokens or addresses. Those tokens are the only way it can exercise those capabilities.
It's not being sandboxed, it simply doesn't have any additional way to get access.
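The file-descriptor analogy above can be made concrete with a small sketch (illustrative only; a Python stand-in, not the API of any real OCAP system):

```python
import io

# Ambient-authority style: the function can open *any* path it likes,
# because the ability to name paths is itself the authority.
def word_count_ambient(path):
    with open(path) as f:   # nothing stops it opening /etc/passwd too
        return len(f.read().split())

# Capability style: the caller passes an already-open file object.
# The function can read that one stream and nothing else -- it was
# never given the authority to turn names into resources at all.
def word_count_cap(readable):
    return len(readable.read().split())

doc = io.StringIO("hello object capability world")
print(word_count_cap(doc))  # -> 4
```

Note there's no sandbox wrapped around `word_count_cap`; it simply has no way to reach anything it wasn't handed.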
That seems interesting and like a technically correct way to go, but also I can see nobody adopting it on a general purpose operating system and thus rendering it useless. Getting developers to use portals already is hard enough.
That's pretty much my job. Can you be more concrete about what you're talking about, instead of this passive-aggressive "you don't know what you're talking about" tone?
Flatpak and Snap are pretty much using the same primitives as containers, they just don't like using the word because of existing connotations. Mandatory access controls are neat, but nobody actually uses them unless the default doesn't disturb them or they have a compliance requirement.
> A pretty poor track record.
Modern containerd, after having been split up from Docker/Moby, has a better track record than most other sandboxing tools. I mentioned a laughably bad one that the Arch wiki and many HN users still seem to endorse.
"Security" really ought not to be much of a selling point, since no one is particularly good at it these days; I'd be shocked if this thing could actually point to meaningful, as opposed to theoretical, security advantages.
Sandstorm is really good at security. I would absolutely encourage you to download a copy and try to identify an exploitable security vulnerability. As a Sandstorm user, I'd really like to know about it!
I suppose I'm talking about marginal differences. As in, I'd be surprised if Sandstorm can strongly outdo, e.g., my own Docker + Nginx proxy manager + SSL setup?
You will be surprised, then. Sandstorm uses capability-based security to grant access to applications: each app is only ever running when an authorized user launches it, and each document within a given app is isolated in its own only-run-on-demand container.
The core difference here is that, for all intents and purposes, app vulnerabilities are almost wholly mitigated. The only significant type of app vulnerability Sandstorm cannot prevent is privilege escalation within a single document: you've shared limited access to a document with a user, say read-only, and due to an app vulnerability they've figured out how to edit it.
Since all documents start out private to the user who created them, and the process isn't even running until that user tries to open it, there's effectively no attack surface for Sandstorm apps most of the time. When they are spawned, they're spun up at randomly generated ephemeral subdomains and authorized solely for access by the web browser that launched them.
I understand where you're coming from, and I would encourage you to get familiar with object capabilities, as it is one of those big pieces of hitherto-ignored low-hanging fruit.
Packaging existing apps tends to be extra work, since you have to retrofit things (like user accounts). But creating brand new apps can actually be less work, because it has things built in (like user accounts).
But yeah, Sandstorm has been in a state of "not dead but not moving fast" basically since the company went under; things picked up a bit in 2020, and I got oriented-enough on the codebase during that time to keep it floating along, but, per the post, it's never been easy going.
Anyway, I wish I were answering this question in 6 months' time, when I'll be able to show off a variation of Tempest that is relatively usable and can do a few tricks that Sandstorm can't.
Just spent ten minutes delightedly reading your updates on this. It looks fantastic! I'm really excited to watch what happens next, and wish I had time to contribute.
Are there primary areas in which contributions would be most valuable, especially from those without Golang experience/skills?
At some point the project would greatly benefit from some attention from a UI/UX specialist -- I'm doing my best, but it's not what I'm an expert at. Though right now I'm mostly focused on getting enough stuff to work for it to be of interest, and someone fussing with the UI might just be distracting.
Doing app packaging for Sandstorm might be the most accessible way to help for someone who's not a Go dev -- Tempest will of course benefit from more apps when it's ready to run them, and it'd be a high impact thing for the community right now.
Currently Tempest can run a lot of Sandstorm apps. However, a lot of functionality one would expect or need is not implemented yet. It will be a bit before it can replace a Sandstorm server. But it's architecturally different in some fundamental ways which will make implementing new features that Sandstorm never had much easier.
Always interested in self-hosted solutions; I've not tried Sandstorm and am up for giving it a go. However, if I was starting a small, fairly simple personal project with no need to scale, can I get wheels on it with Tempest, or should I consider building on Sandstorm and then porting?
All of the dev tools use Sandstorm, and Tempest is not, at this point, "complete". But if you were to build something on Sandstorm today, you can be reasonably confident it will work with Tempest.
The networked actor-model bit & CapTP go back to E originally[1]. The other contemporary real-world protocol based on this design is capnproto rpc[2], which has implementations in several languages, including both Haskell (of which I am the author) and Python.
The ergonomics of the Haskell implementation could use some work IMO, and I've got some ongoing refactoring work on a branch. But it does work, and folks are using it.
> Of course, there's a very conscious tradeoff being made here. In rust, "cargo build" allows arbitrary execution of code for any dependency (trivially via build.rs), while in go, "go build" is meant to be a safe operation with minimal extensibility, side effects, or slowdowns.
I've been working off and on on a language that tries to get the best of both worlds to some extent. The whole language is built around making sandboxing code natural and composable. Like Rust, it has a macro system, so lots of compile-time logic is possible without adding complexity to the build system, but macros don't have access to anything but the AST you give them, so they are safe to execute. There's a built-in embed keyword that works like Rust's include_bytes, which runs before macro expansion, and which you can use to feed the contents of external files to macros for processing. At some point I'll probably add a variant that lets you pass whole directory trees.
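The "macros only see the AST you give them" constraint can be mimicked with Python's ast module as a stand-in (the language above isn't shown here, so this is only an analogy, and the transform is made up):

```python
import ast

def upper_strings_macro(tree):
    """A 'macro' in the spirit described above: a pure AST -> AST
    transform. It receives only the tree -- no filesystem handles,
    no network, nothing ambient to reach out and touch."""
    for node in ast.walk(tree):
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            node.value = node.value.upper()
    return tree

# Expand the "macro" over some source, then compile and run the result.
src = "greeting = 'hello'"
tree = upper_strings_macro(ast.parse(src))
code = compile(tree, "<macro>", "exec")
ns = {}
exec(code, ns)
print(ns["greeting"])  # -> HELLO
```

Because the macro's only input is the tree, running untrusted macros at build time is no scarier than running any other pure function.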
The trouble is that this assumes pre-determined quantity of work. The reality I've seen most in places is that there's no end of stuff to do. The work is never "done."
What there is instead is an expectation of how much work you're supposed to get done per unit time (albeit calendar time in shops that have things more together). But this is in turn informed by how much time you are expected to devote to work vs other parts of your life.
It's always amazing to me how much performance work the basic GNU tools have seen in general. Grep makes some sense, but even yes(1) is fairly carefully tuned; in some cases it actually strikes me as kind of excessive, and not clearly worth the readability drop.
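For the curious, the kind of tuning yes(1) has received is roughly this: instead of issuing one write() per line, GNU yes fills a large buffer with many copies of "y\n" and writes that, amortizing syscall overhead. A rough sketch of the idea in Python (GNU yes itself is C, and the buffer size here is arbitrary):

```python
def naive_yes(out, lines):
    """One tiny write per line -- syscall-bound in the C equivalent."""
    for _ in range(lines):
        out.write("y\n")

def buffered_yes(out, lines, bufsize=8192):
    """Build a big chunk of repeated 'y\\n' once, then emit it in a
    few large writes plus a remainder. Same output, far fewer calls."""
    per_chunk = bufsize // 2          # each "y\n" is 2 bytes
    chunk = "y\n" * per_chunk
    full, rest = divmod(lines, per_chunk)
    for _ in range(full):
        out.write(chunk)
    out.write("y\n" * rest)
```

In C the same trick is what takes yes from tens of MB/s to saturating memory bandwidth; whether that was worth the readability cost is exactly the question being debated here.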
The GNU people are absolutely obsessed with correctness and performance. It's kind of annoying actually. I assume it has something to do with RMS' ties to MIT and Lisp machines (an example of staggering complexity), see Richard Gabriel's "Worse is Better" essay.
I tend to fall in the NJ school myself, though I do think Emacs is something special. And though it's derided as slow, Emacs has a ton of super crazy optimizations, especially around rendering.
Pretty much every GNU project written in C is totally unreadable and hopelessly baroque: gcc, glibc, coreutils, etc. For fun, compare some OpenBSD Unix utility implementations with their GNU counterparts. The OpenBSD tools' source code is a joy to read and represents the pinnacle of elegant C. The GNU versions are ugly as sin, but functionally superior in pretty much every way. I don't really agree with how they write code, but you've got to respect the almost OCD attention to detail: no edge case goes unhandled.
Worlds apart from the terse and cavalier style of traditional Unix hackers.
> The GNU people are absolutely obsessed with correctness and performance. It's kind of annoying actually
I can't accept this as a criticism without some elaboration.
> The GNU versions are ugly as sin, but functionally superior in pretty much every way. I don't really agree with how they write code, but you've got to respect the almost OCD attention to detail: no edge case goes unhandled.
it sounds like you want the gnu people to stop making software that is 'functionally superior in pretty much every way'.
I think the cost of the complexity is too high. I much prefer simple code that provides 90% functionality to 10x more complicated code that provides 98%.
Personally I think there's a line to be drawn somewhere - I can certainly appreciate focusing on readability, but having certain tools run super fast can be very advantageous on a system level.
Generally I think OS-bundled tools that may be used for stream processing of large amounts of data (like grep) should skew further towards performance, because there are a non-trivial number of people passing gigabytes of data through grep every day and a 10-20% performance improvement there is a significant boon to global productivity.
Tools that solve a very specific problem and do it with a strong expectation of correctness and performance are what makes *nix environments so powerful - not only can you throw together a nifty bash script that duplicates the functionality of a massive cluster operation, the script may actually outperform the cluster because of how performance-focused those tools are!
I see this issue as being a bit similar to high- vs. low-level programming languages. Could we write absolutely everything in JavaScript and probably have a lot of the code be more readable than it'd be in C or Rust? Absolutely. But that doesn't mean it's not worth using C in some cases to make the program perform well on hardware that doesn't have enough firepower to just ignore the overhead, and it doesn't mean it's not worth using Rust in cases where stability is absolutely required.
I think grep is a compelling example of when it makes sense to do the extra work here. But I'm more dubious of the idea that carefully optimising yes was a good use of engineering time.
There's an opportunity cost. There has to have been a better use of that person's time than taking a tool designed to say yes to repetitive "are you sure?" prompts and getting it to keep up with memory throughput. The throughput of a tool like that does not matter.
As a writer, of course you do, you only have to do 10% of the work.
As a user of the software, which would you rather have, 90% or 98% functionality? Also, users outnumber coders by orders of magnitude.
I for one am well chuffed that the emacs team are so hardcore about bugs.
Oh, man, I did some firmware writing to an SD card this year. I can't tell you just how important that 4096-byte boundary is for SD card performance. Tuning the buffer size to the right multiple of 4096 was important too (especially with only 48k to work with).
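As a sketch of the arithmetic involved: a hypothetical helper (not from the firmware in question) that rounds a buffer budget down to a multiple of the 4096-byte boundary, using the 48k budget mentioned above:

```python
PAGE = 4096  # SD card write-alignment boundary discussed above

def aligned_buf_size(budget_bytes):
    """Largest buffer size that is a multiple of PAGE and still
    fits the given RAM budget. Hypothetical helper for illustration."""
    size = (budget_bytes // PAGE) * PAGE   # round down to a page multiple
    if size == 0:
        raise ValueError("budget smaller than one page")
    return size

print(aligned_buf_size(48 * 1024))  # -> 49152 (12 pages; already aligned)
print(aligned_buf_size(50000))      # -> 49152 (rounded down from 50000)
```

The point is that a buffer of, say, 50000 bytes straddles page boundaries on every write, while 49152 keeps each write aligned.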
$75M is rounding error compared to anything being discussed in the vicinity of the pandemic; I wouldn't really expect it to enter into the conversation.