You know, it's weird that you mention this. I had this happen to me on Facebook after I initiated media. It took what appeared to be a screen capture of my browser. I'm like, WTF, how did Facebook take a screenshot of the chrome around the inner frame of the page?
I always have this fear in the back of my mind that this sort of website is recording the unflattering video feed of me peering over my triple chins at the laptop that rests on my belly in bed, waiting to potentially blackmail me with it, post it to my Facebook feed, or send it to any woman who ever had an interest in me.
Yes, I would be able to see that in the network traffic, but not if they have a 0-day for my OS.
I know of someone who built a fake "chatroulette" (before Chatroulette was a thing) as an internal April Fools' prank at their company.
It recorded the user to the server; the idea was to make a compilation of people getting pranked. It displayed a prerecorded video of a woman that then faded into a bunch of text reading "April Fool!! We got you!!"
...
Well, things turned sour when people started opening the website with their pants down, ready to show the goods to the ladies. Fast forward a few days, when the developer opened the folder with the videos... to find a bunch of naked guys.
If you don't fancy giving webcam permissions to a random link on the internet, you'll get a blank page. If you want to read the code you're about to run, the originating repo is here: https://github.com/jeeliz/jeelizFaceFilter (this demo is one of its examples).
There should be a way to "sandbox" a webpage, i.e. completely disconnect it from the outside world. I would be much more comfortable with this kind of demo that needs webcam, audio, or file-upload access.
So:
- block all incoming and outgoing requests from that page
- discard all data persisted in cookies/localStorage
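Browsers already ship part of this: a sandboxed iframe without `allow-same-origin` runs in an opaque origin, so the second bullet (cookies/localStorage being discarded) roughly falls out for free. Blocking all network traffic is the missing piece; a restrictive Content-Security-Policy on the framed document can approximate it, but there is no single "fully offline" switch today. A minimal sketch of the embedding side (the URL and function name are illustrative, not from any real API):

```javascript
// Sketch: embed the untrusted demo in a sandboxed iframe.
// sandbox="allow-scripts" WITHOUT allow-same-origin gives the frame an
// opaque origin, so anything it writes to cookies/localStorage is
// discarded along with the frame. Outgoing requests would still need to
// be blocked separately (e.g. via CSP on the framed document).
function makeSandboxedFrame(url) {
  const frame = document.createElement('iframe');
  frame.setAttribute('sandbox', 'allow-scripts'); // opaque origin, no storage
  frame.src = url;
  return frame;
}

if (typeof document !== 'undefined') {
  document.body.appendChild(makeSandboxedFrame('https://example.com/demo'));
} else {
  console.log('no DOM available; run this in a browser');
}
```

Note that this is only an approximation of the proposal above: scripts inside the frame still run and can still make outgoing requests unless the framed document's own CSP (or an extension) forbids them.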
I agree. This sort of thing would also be useful when performing client-side decryption of encrypted content.
If it's not possible for the page to perform any lasting global state modifications, you wouldn't have to trust that the server is not shipping malicious code to exfiltrate the key or plaintext. It wouldn't be able to do so.
I wouldn't trust such a setup for cryptography. The page could either use a bad RNG, or generate your private key by encrypting it with a public key for which they control the private one, so that when you release your public key they are able to recover your private one.
Wouldn't there need to be some sort of way to verify the client hasn't been modified in order to trust that? That's not really possible, because you'd be asking something to verify itself.
If you configure your browser not to allow the client to leak anything, it doesn't even matter if the client has been modified because it can't leak anything anyway.
EDIT: I'm speaking from the position of an end user. I'm downloading ciphertext plus a decryption tool from an untrusted server and executing the decryption tool locally, with a key that I enter locally. I want to make sure that the plaintext and key do not get leaked back to the server.
Situations where this arises include encrypted pastebins, bitaddress.org, Ian Coleman's BIP39 tool, anything else that generates or handles bitcoin keys, etc.
That was my first thought too, but I realized it would give an attacker a window to exploit vulnerabilities during the loading process and send private information back to the server.
The wget step ensures that the user knows and controls everything that gets sent upstream.
Is there a browser that can require approval of each HTTP request?
Yes, this is a demo, so the fallback use cases are not implemented (if the user does not have a webcam, if they do not agree to share it, if they have no WebGL), but they can be implemented for more packaged apps, and the error messages are detailed in the FaceFilter documentation.
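Those fallback branches can be sketched with plain feature detection before handing the stream to the library. The `DOMException` names below are standard getUserMedia behavior; the function name and messages are my own, not part of FaceFilter:

```javascript
// Minimal feature detection + fallback handling for getUserMedia.
// NotAllowedError / NotFoundError are the standard DOMException names;
// everything else here is illustrative.
async function openWebcam() {
  if (typeof navigator === 'undefined' || !navigator.mediaDevices) {
    return { ok: false, reason: 'getUserMedia not supported' };
  }
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    return { ok: true, stream };
  } catch (err) {
    if (err.name === 'NotAllowedError') {
      return { ok: false, reason: 'user denied webcam access' };
    }
    if (err.name === 'NotFoundError') {
      return { ok: false, reason: 'no webcam found' };
    }
    return { ok: false, reason: 'webcam error: ' + err.name };
  }
}

openWebcam().then(result => {
  if (!result.ok) console.log(result.reason);
  // otherwise: pass result.stream to a <video> element or to the library
});
```

A packaged app would branch on `reason` to show its own UI instead of a blank page.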
I wrote a big getUserMedia wrapper; the standard is very badly implemented. There were problems with Firefox and Android that had to be solved too. That's why we included the video capture in the library (but it is still possible to provide a custom <video> element).
Did you try with Safari? And is it up to date? We tried with Safari on an iPad Pro and an iPhone X and it worked. It only works with Safari 11.2 or later, and Chrome for iOS does not implement the MediaStream API yet (so no webcam) because it is based on a Safari WebView.
If you have another app that uses the camera open at the same time, it may not work either.
Works fine in Chrome/Brave on Android. The blinking rectangles are actually the face detection algorithm looking for faces. If it finds one, a blue rectangle remains around the face.
Yes, it is a demo app, so it is not very well packaged. The code drawing the rectangles (in /helpers/FaceCut) can be changed to display something more user-friendly :)
For video streams you need WebRTC / MediaDevices, which are not supported in Apple's iOS rendering library. So, no chance of getting it to work in a third-party browser on iOS.