"self-XSS" (the thing this malfeature is purportedly protecting people from) is a made-up concept. It's basically "don't run your own scripts to interfere with our site, and we'll use scary-sounding security words in an attempt to discourage you from doing it." I don't believe for a second this is about helping the user - more likely is that FB and Netflix want to prevent users running scripts that add features or do functions they find inconvenient, like exporting your address book or movie rating info.
I get to run code just as much as you do - it's MY computer, MY browser, and MY bandwidth. Making up a scare word (that just means "users running code I don't like") in an attempt to legitimize disabling access to development and exploration tools is beyond the pale. There is absolutely no reason to permit this kind of behavior, and I'm frankly a little appalled that a community of startup founders and hackers would ever defend it, as some of the comments here have done. If you want to protect users from themselves and limit and restrict what they can do, write a mobile app. Don't put your shit on the web if you want it to be a walled garden.
I don't see exactly how it is a concern for Netflix, but sadly "self-XSS" is real on Facebook. Not among us tech-savvy people, obviously - consider how much you're looking at this from inside a bubble.
If people read about a "h4x0r trick to read their bf/gf's private messages", they will execute it. And hey, "it has this l33t keyboard shortcut that makes a strange window pop up, it must be what the hackers in the movies use!!". And then "Oh well, thanks to this friend of mine for sharing this cool trick that gives me the stuff to paste there, I would not have known how to use it!". And finally "Booooo, Facebook sucks, my account got hacked".
I remember the internet making fun of a girl who believed she had been recruited into some secret police force because she popped open the dev console. Well, that's just normal people - it's not uncommon.
I trust that actual developers can find their way around blocks and warnings, which nevertheless raise the bar for social engineering.
If Facebook made use of smart OCAP (object-capability) practices, none of that would be possible. Using object identity as a "security key" would prevent code that doesn't have access to the "key" object from being successful.
Not necessarily, because you can limit how certain values can be accessed through scoping. In JavaScript, if a key is saved in a variable inside an anonymous function, it's inaccessible to somebody whose console sits at the root of the document.
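For illustration, a minimal sketch of the scoping trick being described (all names here are hypothetical):

    var api = (function () {
      var key = "s3cret"; // no reference path to this from the console
      return {
        sign: function (payload) {
          // uses key internally without ever exposing it
          return payload + ":" + btoa(key + payload);
        }
      };
    })();
    // From the console you can call api.sign("x"), but you can't read
    // key itself - short of pausing inside sign() with a breakpoint.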
But it requires that the value be hardcoded inside the function. If it was given to an unreachable scope by some async action (like an AJAX request), this trick wouldn't work.
One could possibly also wrap the function in a native .bind call to change the output of toString() to [native code].
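Something like this, if I understand the suggestion (behavior as in V8; the function here is just a stand-in):

    function handler() {
      var key = "s3cret"; // would show up in handler.toString()
      return key;
    }
    var masked = handler.bind(null);
    masked.toString();  // "function () { [native code] }"
    handler.toString(); // still reveals the source, of course

Of course this only helps if the unmasked reference is itself kept out of reach.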
I wonder if even that's feasibly secure, though, when you have stuff like http://esprima.org that lets you fully parse the entirety of the JS on the page.
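For example, a quick sketch using esprima's tokenizer (the source string here is hypothetical): any secret that ships hardcoded in the page's JS can be pulled straight back out of the raw source text, no matter how it's wrapped at runtime.

    var esprima = require("esprima"); // or the browser build
    var pageSource = '(function () { var key = "s3cret"; })();';
    esprima.tokenize(pageSource)
      .filter(function (tok) { return tok.type === "String"; })
      .forEach(function (tok) { console.log(tok.value); }); // '"s3cret"'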
It's better to assume the console has 100% root (client-side) privileges.
But the enemy here isn't necessarily the console, it's the social attack against the console. Making it harder for the user to screw himself over is a worthwhile endeavor, and not merely "security by obscurity".
Right, because patching one of 1000 is the right solution? Please give back your engineer card.
- Just download this file to see your gf's private messages. OK, let's remove downloads from the browser (actually, iOS did this).
- Just run this long string in the address bar to see whatever. OK, let's prevent the javascript: scheme in the URL bar (actually, the stock Android browser did this).
Any way you go at it, it's ineffective. The only solution is to educate. Trying to prevent idiots from harming themselves will just lead to annoyance for the non-idiots and more sophisticated attacks, until you can't prevent them.
People who implement those dumb things disgust me. Your comment disgusted me.
Exactly. The more things you do to restrict users so "it's safer for them", the more reckless and stupid they'll become - "because $security_feature will protect me" - and they won't ever learn anything. On the other hand, if we give them the freedom to make mistakes, those that do will learn from them. Instead of trying to block "self-XSS" or telling everyone "don't listen to anyone telling you to paste something into the URL bar", we should be encouraging them with "if you don't know what it does or don't trust the one who told you to do that, then don't do it - or find out what it really does." That last part is particularly important, since it encourages curiosity and that motivates learning.
I understand that many people would just want to use something and not want to learn all that much, but I feel we should also not be encouraging this "lack of thought" mentality either.
Sorry if I offended too much. I intended to offend just a little, because that behavior is awful and must be treated as such.
Also, if you want to learn how to do it right, look at Mozilla. They mostly do the right thing - e.g. telling the user, and in extreme cases making them wait a very short time before accepting something that may be dangerous.
I'll give back my engineer card (if I find it) if you swear not to apply for a security one :)
There's no way to make a system really secure, and as you point out there is no way to "prevent idiots from harming themselves". What you do is stop common/easy vectors and raise the effort bar / reduce conversions for the attacker.
If we require that someone learn how their computer's threat model works just to use it, we have failed as an industry.
And to address your points: users know that downloaded files are evil, and Chrome warns about that. The javascript: scheme is mitigated by Chrome - if you copy-paste, the initial "javascript:" is cut out. I'd love to know the other 998 to open discussions about them.
Note about the use of the word idiots: I mean people who are not tech-savvy; you might be an "idiot" in this sense to electricians, mechanics, doctors...
It seems that users were being duped into running malicious scripts that gave attackers control of their accounts. Sure, Facebook could be evil and not offer the option to re-enable the console - and I'm sure other sites will do exactly that until browser makers prevent it - but at this time, Facebook is not being evil. I'm not sure about Netflix.
If people are being successfully duped into running malicious scripts this way, perhaps browser developers should put a first-run warning on the dev tools saying that running code there supplied by a third party is dangerous.
> I don't believe for a second this is about helping the user - more likely is that FB and Netflix want to prevent users running scripts that add features
How does that even make sense given that Facebook clearly gives the opt-out link which is easy to use, and works? I don't believe for a second that you aren't being completely cynical.
If it was actually about helping the user, there wouldn't be an opt-out link, because the first step in the new "self-XSS" instructions is going to be "go to this page and opt out - yeah, it says a bunch of scary bullshit, but that's just because they don't want you to know about $feature!"
The opt-out is there to boil the frog until they can remove it in the guise of "security". I also agree with bsamuels that it provides a convenient CFAA lever to hit in the event you do run a script they don't like. ("They had to deliberately bypass our 'protection' to paste the javascript into the console!")
Asking targets to first go to the account->security settings to disable the option named "Allow my account to be hijacked if I paste malicious JavaScript", and then to paste this JavaScript, seems to me to be quite clearly less effective than the simpler "paste this JavaScript".
Furthermore I strongly suspect that compromised accounts cause more harm to Facebook's bottom line than users who are exporting their address books. Millions lost every quarter due to fraud vs... what, exactly?
I don't think they added the blocking script to actually keep people out.
Rather, it makes more sense for them to add it so that if someone does debug their site, it gives Netflix a legal precedent to press charges against them for hacking their site by bypassing a security system.
By using the developer console you're accessing your own computer, and as long as you aren't circumventing DRM, or committing some other criminal act, it's not illegal - at least not by my reading of the situation, but IANAL... What is the legal precedent to which you're referring?
These companies lose more money to fraud in a quarter than they have ever lost due to people exporting their movie ratings, or whatever other non-criminal acts people have been committing at the console. I don't understand why we need to posit a 2nd (more sinister) motivation.
It's like those tacky trailers at the start of a movie: "pirating movies is illegal". First question the judge asks: what steps did you take to prevent piracy, and when did you notify your customers?
He may have been referring to the trailers included in DVDs and such. But here in France I think we also have piracy warnings in theatres now. As usual, the customer is the victim.
It seems to me it's just laziness on the part of the programmers working on the front end that leads to them needing this. The code should be designed with the idea that the end user will run arbitrary JS of all kinds. If it can't handle that without negative side effects, it has huge problems. I hope Chrome fixes this bug once and for all and takes steps to prevent window.console from being overwritten by page code.
You can defeat this without any extensions. Since this only applies to Chrome, so do the instructions:
1) Open netflix.com
2) Open developer tools.
3) Go to Sources tab.
4) Click on the tiny icon for "Show Navigator" on the left.
5) Find the JavaScript file that has:
(function(){try{var $_console$$=console;Object.defineProperty(window,"console",{get:function(){if($_console$$._commandLineAPI)throw"Sorry, for security reasons, the script console is deactivated on netflix.com";return $_console$$},set:function($val$$){$_console$$=$val$$}})}catch($ignore$$){}})();
For me this is cdn1.nflxext.com/FilePackageGetter/sharedSystem/pkg-nflxsrc-*
6) For me the offending line is line 3. Click on the line number; this will set a breakpoint.
7) Reload the page; the script will now pause before running line 3, and you can neutralize it from the (still working) console before resuming.
You don't need the breakpoint. Just run Object.defineProperty(window, "console", {configurable: false}); before loading netflix and you are good to go ;)
That doesn't work for me. When the page loads, isn't a new set of client-side browser objects created? Do you have any more detailed information?
Yes - or, if this becomes common, one could easily write a little Chrome plugin that stashes a reference to the console, checks to see if it has been disabled (on load and on an interval), and simply puts it back.
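A rough sketch of one variant (assuming a content script declared with "run_at": "document_start" in the manifest, so it runs before any page script; it injects the pinning code into the page's own context, reusing the defineProperty idea from the sibling comment):

    // content.js - runs before the page's scripts get a chance to.
    var pin = document.createElement("script");
    pin.textContent = "(" + function () {
      var realConsole = window.console;
      Object.defineProperty(window, "console", {
        get: function () { return realConsole; },
        set: function () { /* swallow attempts to replace it */ },
        configurable: false
      });
    } + ")();";
    document.documentElement.appendChild(pin);
    pin.remove();

Netflix's own defineProperty call then throws against the now non-configurable property and lands in its try/catch, leaving the real console untouched.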
The only thing Netflix's 'security' measure does is make me respect the company a bit less. Who, precisely, is this going to hurt? A serious "attacker" is going to be stopped for about 15 seconds. It might stymie some kid trying to learn about front-end web dev. Good job, Netflix!
The most benign explanation I can think of is to prevent an attacker from using social engineering where they give the victim a command to run in the JavaScript console.
Sites with a large number of inexperienced users (including children) have to think about these things.
The other day at work, my co-worker noticed that FB was hijacking the console object. When you tried to invoke console.log, it would output, in big red letters, a warning that it was dangerous to paste JS into webpages, and it offered a URL to a page where you could turn off the warning.
I just tried it now on my FB account and it looks like they didn't touch my console. My guess is that, since I've been developing apps on FB since 2007, maybe I already disabled that via something long ago.
It was something they only ever rolled out experimentally to a few users, as far as I know. Having worked on a relatively popular social network before, the reason they did it made perfect sense to me. A lot of people are really willing to buy into the idea that they'll get some special treatment if they say a magic incantation into a thing they don't understand.
Just look at those "Forward this email or Bill Gates will kill MSN/sell your children/make the moon landing fake!" things that used to go around. People fell for them.
I think the most compelling example of this is something called "Freemen on the Land", and related belief systems. Instead of people's ignorance of technology, it exploits their ignorance of the basics of law. Like, did you know the government holds hundreds of millions of dollars in your name in a secret account? If you just say the right magical incantation in court, you can use it to pay off your parking tickets. Many people have believed this and tried it.
If you haven't heard of it before, prepare for a journey through Wikipedia as fascinating as it is pitiful.
According to the stackoverflow link, it appears to be a soak test on certain accounts. If you aren't affected (or if you have developer tools on) it will likely not do this.
There are legitimate security reasons why various major sites want to do this, and the changes do appear to be in response to actual, self-XSS attacks that have been seen. While I am no fan of the NSA, I don't see how this has anything to do with them. I also think this is very distinct from the right-click-disabling that used to be so popular: that was not in response to actual attacks, and also, to my knowledge, never happened on reputable sites. Additionally, I don't recall it being justified as being for "security" reasons: websites were usually rather honest about having it to prevent saving or copying and pasting.
This is, in my view, a poor solution to the problem, but as a temporary measure, it makes some sense. A change to Chrome to make a warning message appear the first time the developer console is opened, or javascript is used in the location bar, could be a good idea. And, as the pastebin notes, there are likely better, if more complex, technical solutions from the website side. All of these, however, will take considerably more time and effort, and the attacks are already happening.
It doesn't have anything to do with the NSA. He was just saying that "for security reasons" is a stupid excuse that, he says, seems to be frequently used to excuse any nefarious behaviour.
Chrome can easily fix this, but it wouldn't actually be a bad idea for them to show a message to the user warning them of social-engineering based self-XSS attacks when devtools are first brought up.
Either that or "hide" the developer tools a bit like they do in modern Android so that it is really obvious to the user if they are directed to mess with things that they shouldn't be messing with without understanding them.
"so that it is really obvious to the user if they are directed to mess with things that they shouldn't be messing with"
I think that as soon as an attacker tempts the user with "follow these steps to access American/UK-only films (substitute a locale whose content your account shouldn't have access to) that Netflix doesn't want you to know about!", the apparent gain will lead them to ignore any warnings. In fact, a warning might actually encourage these kinds of attacks, since the user could think "that's just Netflix trying to hide something, I'm gonna follow [the attacker's] guide".
I agree such a warning wouldn't be 100% effective, but I'd much rather have an attempted warning on Google's part, along with a fix that doesn't allow dev tools to be broken by sites, than a situation in which more and more websites break dev tools with this workaround.
The warning won't be foolproof, because the world is constantly evolving greater fools, but if well implemented it'll stop at least some of the fools without really impacting legitimate uses of the dev tools.
The issue is that, for many laypeople, there is currently no clear line between which parts of the computer are normal to access and which are not. At the moment, telling someone to go into the console is no more suspicious than telling them to go to the Control Panel.
What the proposed dialog would do is inform users that the console is someplace that they probably do not want to be.
You can't protect users this way: the attacker will create a custom Chromium build and lure the user into downloading and executing it. At that point the user will be pwned either way.
The argument would be that a custom build of Chrome (or other malicious software) is harder to create - but not so much harder as to not be worth doing for the attacker.
Is the console itself mostly written in JS? If that's the case, why didn't anyone think, "this thing is accessible from JS - JS which could come from a (possibly untrusted) external site!"? Were they planning on this "feature" being useful somehow? It reminds me of debuggers that can be crashed by what they're debugging, VMs that escape into their hosts, and (as mentioned) websites that try to disable right-click or otherwise interfere with the browser, which should ultimately be the one in control (on the user's behalf)...
I really like the comparison with right click deactivation.
Anyway, people who use social engineering will still win; you can put JS code in the browser bar with the good old javascript: "protocol" if you want somebody to execute something.
I dunno, I kinda like this. Of course it's not blocking serious hackers, that's not the point. I'm guessing this feature is disabled for the same reason Facebook is disabling it - to prevent self XSS (people copy/pasting scripts that'll "give them free/more/better X")
Intuitively, I would expect that to FUBAR the console completely, but it's actually deleting the overridden version, which restores the real console's properties (which thankfully cannot be deleted). Cute way of fixing it.
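Presumably the snippet being discussed was something like this (my reconstruction, relying on the behavior the parent describes in Chrome):

    // The site shadowed window.console with its own property;
    // deleting the shadow makes lookups fall back to the real console.
    delete window.console;
    console.log("back in business");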
Things like this always annoy me. It's a marginal annoyance to people who know what they're doing. Those less knowledgeable, who were using something they learnt by rote to improve their experience, will either google and find another way to do it (thanks to a blog post or YouTube video by someone with a bit of nous), download some virus, or wait until they see the person who originally showed them what to do.
It just seems like a bit of grief for a temporary gain.
First Netflix attempts to subvert web standards with WebDRM. Now they attempt to lock down regular HTML as well, by disabling legitimate inspection tools. I can't wait to see what's up next!
It should be clear by now that if you care about the open web, Netflix is not a company you can trust, much less fund with your money.
Interesting that this happens soon after Google restricted extensions to developer mode, isn't it? Back then I said the "security reason" was definitely BS, because they had already taken a "strong enough" measure by only allowing extensions to be installed via drag and drop onto the Extensions page.
Plus, when the company that lives by data won't show you the data that made it make this move, you know something is up. I asked then - and I actually asked when they moved to drag and drop, too - show me the data that proves this is so necessary!
Even before any of this, Chrome was far better than IE and even Firefox at staving off bad extensions. So to me both of those moves seemed unnecessary, and most likely had another "agenda" behind them. Now we begin to see what that agenda could be.
I've also connected stuff like this with the MPAA taking board membership at the W3C. Expect stuff that's much worse than this: an MPAA-influenced W3C keeping features the MPAA freaks out about away from browsers, while Google increasingly bans various extensions from the store for various "ToS reasons".
And people still think the W3C's DRM extension won't be used to close down the Internet? It took Netflix weeks to take advantage of Google's recent move. Watch what happens when DRM can be enabled in the browser by anyone, just as easily. Then we'll see if the "convenience" of not playing Netflix through a plugin was worth it.
There is a way to get Silverlight as a plugin in your standard Linux browsers. It's called Pipelight[0]. Yes, it uses Wine, but not for the rendering, so it's not a horrible experience.
> API requests can be made inaccessible from XSS (and that includes self-XSS) by means of a CSRF token that is properly secured
How can self-XSS be prevented with a CSRF token? Can't the script included via self-XSS get the token out of the page and use it to make requests that appear as if they originate from the app itself? Can't a script injected through self-XSS do absolutely anything the page can do in the first place?
No - but preventing it isn't easy (in fact it's quite hard to do well), because of how insecure the browser is.
You can take advantage of the fact that you can store private information in closures. To prevent malicious code from overwriting a native function to which you pass sensitive information (like the CSRF token in this case), you need to Object.freeze the prototype of things like XMLHttpRequest, or take your own references to the native functions.
Naturally all of this assumes the user doesn't do something like set a breakpoint and then inject a script with access to scope variables. But if social engineering gets you that far, you could probably just have the user run any arbitrary code on their machine.
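Roughly, the "take your own references" approach looks like this (a minimal sketch; the names and token value are placeholders):

    (function () {
      // Capture trusted natives before any injected code can swap them.
      var NativeXHR = window.XMLHttpRequest;
      var open = NativeXHR.prototype.open;
      var send = NativeXHR.prototype.send;
      var setHeader = NativeXHR.prototype.setRequestHeader;
      // Make it harder to replace the prototype methods later.
      Object.freeze(NativeXHR.prototype);
      var csrfToken = "placeholder-token"; // lives only in this closure
      window.apiRequest = function (method, url, body) {
        var xhr = new NativeXHR();
        open.call(xhr, method, url);
        setHeader.call(xhr, "X-CSRF-Token", csrfToken);
        send.call(xhr, body);
        return xhr;
      };
    })();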
“And interestingly, Chrome (even Canary) still allows the user to run javascript from the omnibar.”
Worth noting that they remove the "javascript:" part when you paste from the clipboard.
My guess is that it protects against people telling others to "copy this in the address bar to steal your friends' Facebook accounts", much like why Facebook disabled the console previously.
I've noticed a rather disturbing trend of thought in technology that's been showing up more and more recently with things like this: "Make it harder for users to know how things really work. Make it harder for users to explore, make mistakes, and learn. Make it harder for users to become developers. The less the users know, the easier it'll seem to them, and the easier it'll be for us to stay in control. Keep them ignorant and consuming. Lock them in a walled garden and tell them it's all 'for your security/safety'. Because knowledge is power, and we don't want that in the hands of the users."
Netflix doing this is one of the more obvious manifestations, but they are not alone - many other companies and even open-source, free-software projects are taking this approach, Google included.
Whenever Congress wants political cover to pass a law, they focus on how it affects kids ("think of the children!") - child pornography is often a convenient excuse, as it was during SOPA/PIPA.
In technology, the excuse is usually either "security" or "better user experience".
Like preventing child pornography, these are both worthwhile goals, but they are often used to cover up goals far less noble, as you note.
Yes, the bottom line is that Netflix is sending you code that executes on your own machine. You, as the user, should have full access to inspect that code. Obfuscated or minified code should only be used as a bandwidth-saving device, not to hide the true functionality from the user. If the browser can see it, why can't the user?
Disabling a feature of the user's browser is absolutely absurd. If you want to hide what's going on from the user, then that code needs to run on the server.
Can you imagine opening an image in Photoshop and, because of some flag, having half the tools disappear? Yeah, me neither - and images don't even run code.
HOLY SHIT. I never knew about this. Makes me wonder, though, whether someone decided to prevent the scanning of every single currency out there, and the developers had to code a shitload of recognition patterns into the scanner's firmware.
Of course, that isn't a defense for disabling the developer console (which should be considered a critical security issue): all of those were as stupid and immoral and indefensible as this.
I don't use Chrome (I don't use Google products) and I wasn't defending it. It's a critical bug in the browser, but that doesn't mean it's okay for Netflix and Facebook to exploit it.
It's a great side effect of the web that a good chunk of an app's source code is open to be viewed and learned from. It's fun to explore another site's code and discover how they pulled off their tricks.
Netflix may claim they're doing it for "security reasons" but I could see other companies hopping on the bandwagon "to protect their intellectual property" and that'd be a sad day for the internet. Hopefully this doesn't turn into the crazy right-click-blocking craze as another poster mentioned.
In some cases, it's a speed bump that just ensures that you can do a search and read simple instructions. For example, if you want to turn on Android developers tools, you can find out by searching on "turn on Android developer tools".
Basic literacy and ability to search the web is not that high a bar. There are lots of other gotchas you'll have to overcome to be an Android developer; it's not really a nice programming environment for beginners.
This is part of a larger trend where programming environments are getting increasingly creaky and complicated. Java started out simple, but it's not anymore. The web started out simple, but it's not anymore. Every so often, development environments need a reboot and I think we're long overdue.
There's also the question of whether all this increasing complexity is actually needed - not everyone needs enterprise solution platforms or scalable web application architecture frameworks. Some people just want to write a few webpages to share information with others, and browsers made it easy to get started. Just because we have some really idiotic users and the page source might contain "exploitable" things like cookie values and session IDs doesn't mean we should remove "View Source" and make it a developer-only option. (If the trend continues, I can actually see this happening in the not-so-distant future...)
Similar situation in the '90s: "don't right-click and view my HTML." They maybe had a point about copyright, but the passive-aggressive notes, comments, and general behavior haven't changed. This also exists a great deal on online forums, where admins believe that their users understand their concept of ownership and control.
It's a very old impulse and not really limited to corporate group-think. "I'm gonna obfuscate my code and encrypt my data so nobody can STEAL my work!!!" said the programmer of previous decades, oftentimes experiencing a fit of egotistical paranoia over a relatively small personal project.
The same kind of mindset that thinks that the world would operate better if everyone used Bitcoin and trusted no-one.
Often it seems to simply be complete delusion about the value of what they've done. I once interviewed a guy who brought in a disk with a horribly crappy little mail client for Windows he'd written that was totally uninteresting to us.
I took one glance at the code, and it confirmed what we'd learned during the interview (he was turned down).
Yet he was terribly concerned with getting the disk back, to ensure we wouldn't steal his mail client code. The interview was at Yahoo - it's not as if we didn't run a vastly more complicated and advanced e-mail system. And it's not as if there weren't dozens of open-source mail clients far superior to what he brought in, if for some bizarre reason we had wanted to "steal" code for a desktop mail client.
I think you totally missed the point. We don't want Netflix to go away. (Actually, Netflix is one of the most important technology companies in existence at the moment). We want better behavior.
Because before Netflix, legally streaming copyrighted movies to your computer was a pipe dream, and now we have several successful providers, including Amazon, Vudu, CrunchyRoll, YouTube Live, Hulu, and, as part of Hulu, various television networks. The Super Bowl was on the internet this year. That's kinda big.
It's partly responsible for bringing options to people with regard to what they see.
It's providing a service that will hopefully lead to a more open future. Instead of being forced into buying bundled cable packages, maybe one day we'll be able to have more specific choices. I'd say services like Netflix, Hulu, etc. are all things that push toward that - at least I hope so.
I still don't really understand the importance of this. Also rather than being open, isn't Netflix still locked down with DRM? I seem to recall there is a Netflix DRM plugin built into ChromeOS.
It's not. I don't watch TV. It's not relevant in my life.
I watched a little bit of the Olympics (my wife made me) and while the ladies figure skating was ok, the rest, and especially the commercials, made me feel dumb.
I had to go read a good book (Storm of Steel by Ernst Junger) to boost my mental processes back to normal.
> Google should really patch this. The command line API should be privileged so that third parties can't modify how the browser behaves without explicit authorization (i.e. an extension)
Absolutely agree. Why don't they do that, or at least make it an option?
I don't think the goal of this is to hide legitimate uses of the dev tools. As many have mentioned, it's really easy to circumvent. It's to shut down an attack vector.
Never trust the client, no matter what kind of hacks you come up with to protect yourself. You have to assume the client side is always compromised; always protect yourself in the parts you have full control over.
May I suggest you read the whole thing next time? You can certainly disagree with the author's assertion that companies and governments abuse "for your security" to do awful things (or that it's what happened here), but it's far from ridiculous. I think you just misunderstood the point about the NSA--not reading will do that to you--and it makes you look silly.
I did understand the point he was trying to make, but to me that's a poor analogy. This self-XSS prevention is a temporary solution. Facebook probably decided they'd had enough reports of dev-console self-XSS, so they took the initiative.
Netflix is not abusing "for your security" to do awful things. How? I just don't see it. I see that as an accusation; putting Netflix's and Facebook's temporary solution in the same category as the NSA's excuse is bad. I might be being unfair to the author for not reading the entire post (well, technically I read most of it, except the Crockford section and afterward, which I only glanced at) - I'll admit that's my failure - but that argument doesn't appeal to me at all.
The poster did not "connect this to the NSA". She/he argues that blindly acquiescing to removal of rights in the name of security is a bad idea, using the NSA phone tapping deal as a point of comparison.
Calling it a "right" instead of a permission doesn't change much. "Digital Rights Management" doesn't manage rights under your definition either (unless pausing or fast-forwarding is a right).
The NSA ranting around here is out of control. Not only did the poster somehow tie this bit of javascript to the NSA, but he claimed that the NSA records our phone conversations too. There is no evidence that it does unless you're a head of state or somebody the FBI has a warrant to tap.
The poster did not "tie this bit of javascript to the NSA". She/he argues that blindly acquiescing to removal of rights in the name of security is a bad idea, using the NSA phone tapping deal as a point of comparison.
Removal of rights? Saying that either this JavaScript or the NSA's actions amount to a removal of rights is a huge stretch. You might as well compare the sealed battery on the iPhone to the NSA's data gathering.
I half-disagree with you: the NSA's actions, IMHO, certainly do amount to a violation of the right to privacy. However, calling the ability to run the javascript development console on a page a 'right' is a stretch, I agree.
Please feel free to read the actual article and quote it and find fault with its conclusions. As it is now you're seizing on individual words, not ideas. If this were a Turing test you wouldn't be doing so well.
Reread what you just posted. You just said that the poster's comparison to the NSA is invalid, which is exactly my point. Why does everything around here have to be tied to the NSA (see Gruber's ridiculous conspiracy theory about the Apple bug for a recent example), and why does half of the stuff posted about the NSA have to be hysterical nonsense like that they're recording all our phone calls? It removes legitimacy from the rest of us who are complaining about things the NSA actually does.