That's just marketing bullshit. Unless the API is magic (and I don't mean advanced technology "magic" but Harry Potter "magic") it has no way of knowing what the application is allowed to send or not and therefore cannot filter. It's like saying it cannot leak data because it has to use HTTP.
> Unless the API is magic (and I don't mean advanced technology "magic" but Harry Potter "magic") it has no way of knowing what the application is allowed to send or not and therefore cannot filter.
You're assuming that Sandstorm apps have arbitrary IP network access. They do not.
Sandstorm is based on capability-based security. Any outgoing request has to be addressed to a capability representing some specific permission that the user has granted to the app. A capability might point to another app, or it might point to a specific external host that the user has designated.
More specifically, a Sandstorm app's only connection to the outside world is through Cap'n Proto RPC, which is an object-capability protocol, meaning that an app can only send requests to objects to which it has explicitly received a reference.
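The object-capability idea can be sketched in a few lines of plain JavaScript (every name here is invented for illustration; this is not Sandstorm's actual API, just the concept: you can only call what you've been handed a reference to):

```javascript
// Hypothetical sketch of object-capability discipline. A capability closes
// over one specific permission; the holder cannot reach anything else.
function makeHttpCap(allowedUrl) {
  return {
    get() {
      return `GET ${allowedUrl}`; // stand-in for performing a real request
    },
  };
}

// The platform side decides which capability the app receives...
const feedCap = makeHttpCap("https://example.com/feed.xml");

// ...and the app can only use what it was given. There is no ambient
// "connect to any host" API in scope here at all.
function app(cap) {
  return cap.get();
}
```

The key property is that the app's reachable world is exactly the set of references it holds, nothing more.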
Of course, for backwards-compatibility, we have translation layers so that apps written to use regular old HTTP need not be entirely rewritten. You just have to tweak them to make the correct permissions request first, which has proven not very hard in practice.
Note that Sandstorm is still in development and for the moment we've created a hack to allow ttrss to make arbitrary HTTP requests in order to update feeds.
However, in a few more months this won't be necessary. Instead, when you click "subscribe to feed", the app will call a method on the Sandstorm API saying "Prompt the user for a URL and then give me permission to access it". So, you'll get a dialog box to enter the URL rendered by Sandstorm itself. If you enter a URL, it's plainly obvious that you want the app to have permission to fetch it, so Sandstorm grants said permission. We call this UI the "powerbox".
Notice how the UX here is equivalent to what we have today, where the app renders its own prompt. This technique of inferring security decisions from actions the user was doing anyway is the core of how we plan to implement tight security without inconveniencing the user.
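In code, the "subscribe to feed" flow described above might look roughly like this (hypothetical sketch; the real Sandstorm API differs and every name here is invented):

```javascript
// Ask Sandstorm to prompt the user for a URL, then fetch through the
// resulting capability. The dialog is rendered by Sandstorm itself, so
// entering a URL *is* the permission grant.
function subscribeToFeed(sandstorm) {
  const urlCap = sandstorm.powerboxRequest({ type: "http-url" });
  // The app never gets a raw socket; it can only fetch through the
  // capability the user just granted.
  return urlCap.fetch();
}

// A toy stand-in for the platform side, just to show the shape:
const fakeSandstorm = {
  powerboxRequest: (query) => ({
    fetch: () => `fetched via granted ${query.type}`,
  }),
};
```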
I've been using TinyTinyRSS on Sandstorm for a while. It even has a mobile app that works with Sandstorm's API. (Though it's a fork, not the official Play Store version.)
Sandboxed applications literally cannot send any data by default. They can't open a connection to <whatever server>, no matter what protocol.
The goal, once they've built their Powerbox, is to then implement a set of protocol drivers which the application can use. So it still can't connect to arbitrary servers, but it can ask the user for permission to, say, connect via SMTP to <wherever>, and the user has control over that.
Yes, they could leak anything that you put in them if you allow them to connect to someone you don't trust. However, even if you do so once, most applications will be per-document - you have an instance of your document editor for each document, and they don't know anything about any other documents you have.
In short: applications can only leak what you give them, and only to people you say to give them to. They can't call back to home base without your permission or the permission of someone you've given the app permission to contact. So for all reasonable definitions of "cannot leak data", applications cannot leak data without your permission.
It's worth keeping covert and side channels in mind, though: e.g. an instance can leak bits by timing variations. Capability security is a big big deal, a qualitative change in the game, but I think this comment is over-promising things.
Yes, covert side channels should always be assumed to be possible.
However, there are two reasons I think you don't need to worry about them too much:
1) They'll typically be fairly expensive and low-bandwidth.
2) They're unambiguously malicious. This is not a technical barrier to using them, but it's a huge political barrier. Today, major developers will happily stick covert statistics gathering into their code, and then when called out on it, will make some contrived argument about how it benefits users (if that's true, why don't you ask them first?) and how it's mentioned in the privacy policy so therefore it's legit. OTOH, you can't exploit a covert channel in Sandstorm and then plausibly claim you haven't done anything wrong.
Some hardcore security nerds will of course scoff at this argument, and to them I can only say: "OK, yes, there are possibly covert channels, sorry. Please don't put sensitive data into an app you don't trust."
A theoretical long-term solution is deterministic computing, but that probably requires apps to be written in a different language or be run in a heavy-handed VM. Not practical at the moment.
It's also worth noting that Sandstorm is designed to make it impossible for an app to leak capabilities via covert channels. They can only leak bits, and a capability is not just bits.
Yep, good points; I just think the GP was too absolute. It's good to hear Sandstorm is built on object capabilities instead of password capabilities; since I wasn't sure, I didn't get into that, or into deafening (using determinism to eliminate side channels into a process; I gather that outward channels are much harder to control).
* Backend: Due to Linux network namespaces, the app can't communicate with the network (except over "sandstorm-http-bridge" which allows it to respond to inbound HTTP requests).
* Frontend: Due to Content-Security-Policy, the client part of the app can't communicate with any hostname other than the one the app runs on. The CSP header is set by Sandstorm, not the app.
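For illustration, a policy along these lines would have that effect (this is not necessarily the exact header Sandstorm sends, just the general shape):

```
Content-Security-Policy: default-src 'self'; script-src 'self'; connect-src 'self'
```

With `connect-src 'self'`, scripts served from the app's hostname can only make requests back to that same hostname.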
So then it has no network access, and therefore even if it is compromised, can't leak anything.
This does hinge on the app's dynamic code only being run for logged-in users. For many apps -- imagine a Google Docs spreadsheet only accessible to people within your domain -- this is a pretty straightforwardly reasonable model. Sandstorm handles authentication for apps, so it can enforce this even if the app is 0wned.
An app does not have the ability to edit who has permissions to itself. In order to add yourself as an admin of some app, you'd have to compromise Sandstorm, not the app.
I'm not sure how that is supposed to work. You would have to rewrite every webapp so that its data can be protected by Sandstorm, which seems hugely impractical. And as long as the webapp has access to the data, compromising the app will compromise the data.
Not "rewrite". You do have to tweak apps to be Sandstorm-appropriate, but it's usually somewhere between five minutes and a couple days of work. Namely:
* Delete the login system and use Sandstorm's. If you build on Meteor, for instance, this is a simple matter of swapping dependencies.
* Delete your sharing system. If the app hosts multiple things that can be independently shared, change it to host only one such thing; the user can create multiple instances of the app and use Sandstorm's sharing. This is probably the hardest part, but we've done it for several apps now without too much trouble. Since it's largely deleting code, it's not very difficult.
* Find the places where your app connects to the outside world and insert a bit of code to make a Sandstorm powerbox request to get permission first, then address the requests to that permission.
None of this involves "rewriting". We have 20+ apps on the Sandstorm app list, most of which were ported by two people who certainly didn't have time to rewrite each one.
I've ported apps to Sandstorm with literally no prior experience with the languages those apps were written in. Porting to Sandstorm involves more deleting stuff you don't need than actually writing code yourself. :D
The only "everything" you should be able to get, if the security is correct, is for the app you compromised, not the other ones running on Sandstorm. No, it does not magically secure applications put behind it (though IIRC it does put a couple of useful tweaks in place, but that's all it can do), but it can prevent "I compromised your WordPress and stole your entire machine's contents."
It's because of Sandstorm's security as a platform. Apps cannot see each other's files on disk, because each one runs in a container with only their own subdirectory mapped in.
It doesn't, but your SSH server wouldn't be able to produce a valid token for Paypal.com.
Btw, I'm not liking this SSH solution either; I was just pointing out that it's still better than SQRL, which is awful in that it has exactly one advantage (it protects users against password reuse) and many nuanced flaws.
Host authentication is a job for SSL, though. So you should simply not log in on a website if it cannot be verified that it is actually Paypal. This holds for both SQRL and passwords and seems to be an independent issue.
It also applies to this case because the ssh server is supplied by the website.
It's extra stuff that should be in the standard library but isn't.
These days, a lot of it is actually in the standard library (for example, Array.prototype.map), and invoking Lodash just calls the ES5/ES6 built-in, with a slightly uglier syntax.
But if you want backward compatibility, you still probably want to use something like Lodash. I'd also argue that Lodash's/Underscore's interface is far better thought out than the standard library's. The standard library has so many absurd gotchas, like `["2", "2", "2", "2"].map(parseInt)`. The verbosity of JavaScript's lambdas also makes composition of simple parts more arcane-looking than necessary, and having a whole bunch of composable/chainable utility functions does a lot to help people write self-documenting code.
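For anyone who hasn't hit that particular gotcha: `map` passes `(value, index, array)` to its callback, and `parseInt` takes an optional radix as its second argument, so the index silently becomes the radix:

```javascript
// map calls parseInt("2", 0), parseInt("2", 1), parseInt("2", 2), parseInt("2", 3)
const result = ["2", "2", "2", "2"].map(parseInt);
// parseInt("2", 0) -> 2    (radix 0 falls back to 10)
// parseInt("2", 1) -> NaN  (radix 1 is invalid)
// parseInt("2", 2) -> NaN  ("2" is not a binary digit)
// parseInt("2", 3) -> 2
// result: [2, NaN, NaN, 2]

// The fix: pass the radix explicitly and ignore the index.
const fixed = ["2", "2", "2", "2"].map((s) => parseInt(s, 10)); // [2, 2, 2, 2]
```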
> But if you want backward compatibility, you still probably want to use something like Lodash.
A few years ago, sure, but these days I'd use es6-shim. The code will be shorter, have more documentation around the internet, and when old browsers die you won't have to change anything to be on standard JS.
Interestingly, in practice lo-dash actually doesn't proxy through to the native implementation for things like map and forEach. Just using loops ends up being more performant because of some edge cases in the ES5 spec for those methods that lo-dash and underscore don't follow.
The most important feature is chaining and lazy evaluation.
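The laziness part can be sketched without Lodash using plain generators (this is just the concept, not Lodash's actual implementation): intermediate results are produced on demand, so short-circuiting operations like `take` avoid doing work that's never needed.

```javascript
// Lazy map: elements are transformed only as they are pulled.
function* lazyMap(iter, fn) {
  for (const x of iter) yield fn(x);
}

// Lazy take: stops pulling from the source after n elements.
function* take(iter, n) {
  if (n <= 0) return;
  for (const x of iter) {
    yield x;
    if (--n <= 0) return;
  }
}

let calls = 0;
const doubled = lazyMap([1, 2, 3, 4, 5], (x) => {
  calls++;
  return x * 2;
});
const firstTwo = [...take(doubled, 2)]; // [2, 4]
// Only two elements were ever mapped; calls === 2, and no intermediate
// five-element array was built.
```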
I think the most useful functions are the typechecking utilities (`typeof` in JavaScript is one of the most useless keywords ever engineered into a language).
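A few of the `typeof` gotchas that make those utilities attractive:

```javascript
typeof null; // "object"  (a long-standing spec quirk)
typeof [];   // "object"  (arrays are not distinguished from plain objects)
typeof NaN;  // "number"

// Saner checks, along the lines of what Lodash provides (plain-JS sketch):
const isNull = (v) => v === null;
const isArray = Array.isArray;
```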
In the end, it's a very nice library to work with when it doesn't get in the way (of course, if you're using `_.each([], fn)`, you should think again and use `Array#forEach` or a plain `for` loop).
It should start being called "the jQuery effect" since it's one of those libraries that comes to be considered by the community as a standard way to patch the environment. Not only that, but some people begin to develop very strong, quirky aesthetic preferences for seeing it in their code.
I also had no idea what this was until I got to the bottom of the page. Looks like a "better underscore.js". They should just say something to that effect upfront.
They used to say that when the project was new. Perhaps now they feel the project is established enough that it can stand on its own without trying to position it as a drop-in replacement for Underscore.
How do you raise awareness and educate people by selling them conceptually broken keys? They will get used to "someone else is creating the key for me", as well as to "GPG fingerprints should look like funny words, not 17AB42DE!".
For testing and explaining GPG to people, nobody needs vanity keys. You don't raise awareness for GPG by having "DEADBEEF" on your business card.
They (BitQuest) advertise the "Stores" which apparently have "some incredible weapons and armor". And they are explicitly denominating in BTC and hooked up to a system for accepting BTC payments.
Both of which seem to fall afoul of the Mojang post.
We are not accepting payments or selling the in-game currency, so this is not a pay-to-win game or anything outside Mojang's own recommendations for monetizing servers :D
I didn't realize you could buy in-game currency with money. I thought it was a one-way process (mining in game -> bits) used to create a real world value for items in game.
In short: The protocol is not even remotely thought through. It is incomplete and cannot work the way it is described.
I wonder if it is theoretically impossible to create a protocol that punishes cheating when the cheating occurs outside the system (when the physical goods change hands).
And even if it isn't you would also somehow have to consider the fact that things get lost or damaged in the mail and assign blame to one party on the protocol level.
What are you referring to? Looks like you have several three-in-a-rows, and some other squares are highlighted because they will be invalid once you fix the triples. (Could be wrong!)