> I'm not really sure why this isn't already in place.
Everything we need is already in place, except for a tweak in the browsers' caching strategy [1]. With Subresource Integrity [2], you provide a cryptographic hash for the file you include, e.g.:
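(Illustrative only; the URL and hash below are placeholders:)

    <script src="https://cdn.example/lib.js"
            integrity="sha384-<base64-encoded hash of lib.js>"
            crossorigin="anonymous"></script>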
As it is, browsers first download the file and then verify it. But you could also switch this around and build a content-addressable cache in the browser: it retrieves files by their hash and only issues a network request as a fallback, should the file not already be in the cache. Combine this with a CDN that also serves its files via https://domain.com/$hash.js [3] and you have everything you need for a pretty nice browserify alternative, without any new web standardization necessary.
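A rough sketch of that cache-first lookup, expressed as a service worker (the hash-addressed CDN hostname is hypothetical):

    // sw.js: serve hash-addressed files from a local content-addressable
    // cache; fall back to the network only on a miss.
    self.addEventListener('fetch', (event) => {
      const url = new URL(event.request.url);
      // Assumption: cdn.example serves immutable files at /<hash>.js
      if (url.hostname !== 'cdn.example') return;
      event.respondWith((async () => {
        const cache = await caches.open('content-addressed');
        const hit = await cache.match(event.request);
        if (hit) return hit; // found by hash, no network round-trip
        const response = await fetch(event.request);
        await cache.put(event.request, response.clone());
        return response;
      })());
    });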
[1] And lots of optimization to minimize cache eviction and handle privacy concerns, but those are different questions.
[2] https://developer.mozilla.org/en-US/docs/Web/Security/Subres...
[3] Imagine if some CDN worked together with NPM, so that every package in NPM was already present in the CDN.
Folks in W3C webappsec are interested, but the cross-origin security problems are hard. We'd love feedback from developers as to what is still useful without breaking the web. Read this doc and reach out! https://hillbrad.github.io/sri-addressable-caching/sri-addre...
I think using the integrity attribute is great, because if it happens it will have to work through a lot of the tricky implementation details (e.g. origin laundering) of moving to an internet of content-by-hash rather than content-by-location.
However, beyond just adding an integrity attribute to HTML, I am interested in how we encode an immutable URL, the content hash of what it points to, and any additionally required attributes into a `canonical hash-url` that is backward compatible with all current browsers and devices, and that browsers can use in the future to locate an item by hash and/or by location.
The driving reason for this encoding is to make sharing links to resources more resilient and backwards compatible. Eventually browsers could parse apart the `canonical hash-url`s and serve the data from their own stores, but not until the issues listed in the SRI addressable caching document you linked (and likely others not yet thought of) are worked through.
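To make that concrete, here is one possible encoding (my own assumption, not something from the linked document): put the integrity metadata in the URL fragment, which servers and today's browsers ignore, so the link keeps resolving everywhere while hash-aware clients can parse it back out:

    // Hypothetical `canonical hash-url` encoding: location + hash in one URL.
    function makeHashUrl(location, sha256) {
      // Fragments never reach the server, so old clients are unaffected.
      return `${location}#integrity=sha256-${sha256}`;
    }
    function parseHashUrl(hashUrl) {
      const url = new URL(hashUrl);
      const match = url.hash.match(/integrity=(sha\d+-[A-Za-z0-9+\/=]+)/);
      return {
        location: url.origin + url.pathname + url.search,
        integrity: match ? match[1] : null, // null: plain location-only URL
      };
    }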
These problems are really hairy. Thankfully, the privacy issues are all only one-bit leakages (and there are TONS of one-bit leakages in web browsers), but the CSP-bypass-via-SRI attack is really cool.
One thing that I've found incredibly disappointing about SRI is that it requires CORS. There's some more information here: https://github.com/w3c/webappsec/issues/418 but it essentially means that you can't SRI-pin content on a sketchy/untrustworthy CDN without them putting in work to enable CORS (which, if they're sketchy and untrustworthy, they probably won't do).
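Concretely (hypothetical CDN, placeholder hash): once you add integrity to a cross-origin script you must also add crossorigin, and if the CDN doesn't answer with Access-Control-Allow-Origin, the script simply fails to load:

    <!-- Fails unless cdn.example sends Access-Control-Allow-Origin -->
    <script src="https://cdn.example/lib.js"
            integrity="sha384-<base64-encoded hash of lib.js>"
            crossorigin="anonymous"></script>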
The attack that the authors lay out to justify SRI requiring CORS is legitimate, but incredibly silly: a site could use SRI as an oracle to check the hash of cross-origin content. You could theoretically use this to brute-force secrets on pages, but this is kind of silly because SRI only works with CSS and JavaScript anyway.
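For concreteness, here's a sketch of that oracle (the target URL and secret format are made up). This probe is exactly what the CORS requirement blocks: without Access-Control-Allow-Origin on the response, the browser refuses the integrity-checked load outright instead of answering yes/no:

    // Use SRI as a one-bit oracle: hash a guessed response body and let
    // the browser confirm or deny that the real resource matches it.
    async function probe(url, candidateBody) {
      const digest = await crypto.subtle.digest(
        'SHA-256', new TextEncoder().encode(candidateBody));
      const b64 = btoa(String.fromCharCode(...new Uint8Array(digest)));
      return new Promise((resolve) => {
        const s = document.createElement('script');
        s.src = url;
        s.integrity = 'sha256-' + b64;
        s.crossOrigin = 'anonymous';
        s.onload = () => resolve(true);   // hash matched: guess was right
        s.onerror = () => resolve(false); // mismatch (or load was blocked)
        document.head.appendChild(s);
      });
    }

    // e.g. probe('https://router.example/config.js', 'var password = "1234";')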
I, as someone who worked on the SRI spec, find this incredibly disappointing as well. We've tried to reduce this to "must be publicly cacheable", but attacks have proven us wrong.
And unfortunately, there are too many hosts that make the attack you mention credible:
It is not uncommon for the JavaScript served by home routers to contain dynamically inserted credentials. And the JSON response from your API is valid JavaScript.
To be completely honest: only reach out if you have solutions for any of the problems, or can reduce what you want down to something that is solvable with these problems in mind.
If your solution does not live on the web, you'll have a hard time finding allies in the standards bodies that work on the web :)