Mylar protects data confidentiality even when an attacker gets full access to servers. Mylar stores only encrypted data on the server, and decrypts data only in users' browsers. Simply encrypting each user's data with a user key does not suffice, and Mylar addresses three challenges in making this approach work.
First, Mylar allows the server to perform keyword search over encrypted documents, even if the documents are encrypted with different keys. Second, Mylar allows users to share keys and data securely in the presence of an active adversary. Finally, Mylar ensures that client-side application code is authentic, even if the server is malicious.
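To make the basic pattern concrete, here is a minimal sketch (TypeScript using Node's crypto module, not Mylar's code; the key handling, names, and wire format are illustrative assumptions) of the idea the abstract describes: the browser encrypts under a key the server never sees, so a compromised server only ever stores and serves ciphertext.

```ts
// Minimal sketch of client-side encryption, NOT Mylar's implementation.
// roomKey, function names, and the blob layout are illustrative only.
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// A per-principal (e.g. per chat room) symmetric key, generated and shared
// among authorized users' browsers -- never uploaded to the server.
const roomKey = randomBytes(32); // AES-256 key

function encryptForServer(plaintext: string, key: Buffer): string {
  const iv = randomBytes(12);                      // unique nonce per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();                 // integrity tag
  // The server only ever sees this opaque blob.
  return Buffer.concat([iv, tag, ct]).toString("base64");
}

function decryptInBrowser(blob: string, key: Buffer): string {
  const raw = Buffer.from(blob, "base64");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const ct = raw.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}

// Usage: only ciphertext crosses the wire.
const stored = encryptForServer("meet at noon", roomKey);
const readBack = decryptInBrowser(stored, roomKey);
```

Mylar's actual contributions sit on top of this simple picture: letting the server search the ciphertext without decrypting it, distributing keys like roomKey between users without trusting the server, and verifying the client-side code itself.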
The encrypted multi-key keyword search is very interesting, but using a certificate to vouch for the code is troubling... Also, there is almost no information on how the scheme relies on the browser's XSS protections; it's only briefly mentioned.
It seems absolutely ridiculous that large piles of data are being stolen by small teams of hackers. If solutions like Mylar prevent the stolen data from being read, that sounds promising. Sure, it's just another hurdle in an attack, but it's something.
I wonder if we're only a few years away from technologies and practices like this becoming standard?
For 90% of web use cases, encryption like this is overkill. Serving a blog or a static page does not require this level of security, and most e-commerce sites hand off payments to a handful of payment providers.
So someone builds a nice, secure, and useful framework/platform, which is great. But why not put some effort into providing more details and documentation about how to use it: samples, tutorials, etc.?
The first paper linked from the webpage describes how to use the framework.
For a specific example, see Section 8 of the paper ("Building a Mylar Application"), which starts with "To demonstrate how a developer can build a Mylar application, we show the changes that we made to the kChat application to encrypt messages."
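To give a flavor of what Section 8 describes without reproducing the paper's code, here is a purely illustrative TypeScript sketch; the type names, function, and policy shape below are mine, not Mylar's API. The idea is declarative: the developer marks which fields are sensitive, and the client library encrypts those fields before a document ever leaves the browser.

```ts
// Purely illustrative, NOT the Mylar API. The shape of the change the paper
// describes is declarative: mark which fields are sensitive and under which
// principal's key they belong; the client encrypts them before insert.

type FieldPolicy = { encrypted: boolean; principal?: string };
type Doc = Record<string, string>;

function encryptMarkedFields(
  doc: Doc,
  policy: Record<string, FieldPolicy>,
  encrypt: (plaintext: string, principal?: string) => string
): Doc {
  const out: Doc = {};
  for (const [field, value] of Object.entries(doc)) {
    const p = policy[field];
    out[field] = p?.encrypted ? encrypt(value, p.principal) : value;
  }
  return out;
}

// In kChat terms: the message body is encrypted under the room's key, while
// routing metadata such as the room name stays in plaintext.
const kChatPolicy: Record<string, FieldPolicy> = {
  message: { encrypted: true, principal: "room" },
  room: { encrypted: false },
};

// Stand-in cipher for the example only; a real client would use authenticated
// encryption with the principal's key, as sketched earlier in the thread.
const demoEncrypt = (pt: string) => Buffer.from(pt, "utf8").toString("base64");

const wireDoc = encryptMarkedFields(
  { message: "meet at noon", room: "general" },
  kChatPolicy,
  demoEncrypt
);
```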
If academics in general used Github much more, they'd do much better work. Especially when there are significant amounts of code and data (e.g. for statistical analyses or simulations), keeping it all hidden pretty much guarantees there are lots of errors.
The academic world in general tends to focus on collaboration with other academics and people within the same research group, which results in people using their own internal git hosting infrastructure rather than external services like Github, where anyone who wants to commit would have to create and maintain a separate Github account.
Other than having a GUI and a nice web interface, what are the advantages of Github over the git repo Mylar is using?
I don't know anything about Mylar or the people who created it; I was speaking in more general terms. As an example of what I'm talking about, see Reinhart and Rogoff. Those two rascals would probably beg off of Github because Excel, but that's actually more proof (as if more were needed after their shenanigans) that economics papers shouldn't be written using Excel.
I don't expect other people in the research group to catch all of those types of errors, so a more openly collaborative environment would be an improvement. (I doubt R&R would set up an account for the UMass grad student who caught their error. The attempted replication that took him a month would have been much simpler if he could have git-cloned.) It doesn't have to be Github: Bitbucket would be fine. Even an academics-only or field-focused equivalent would get most of the way there, although it would exclude most interaction with the interested amateur public. Especially for those whose field is computer science, "maintaining" a Github account doesn't seem onerous.
My point is that meteor.js is attracting a lot of attention, as are security and encryption, and github.com is a natural place to show the work and start a dialog. Also remember the notable https://xkcd.com/664/.
* https://crypton.io/, a zero-knowledge web framework from SpiderOak
* http://hails.scs.stanford.edu/, a secure web platform framework for untrusted third-party plugins