Even enthusiasts mix up the anonymity offered by Tor with security.
I can almost guarantee that this will do waaaay more harm than good. People will enable this and think they are safe while they are suddenly routing all their cleartext through an untrusted third party (that is, very often, malicious).
At the very least, a lot of passwords will be gathered (alongside emails in a lot of cases) on insecure and unencrypted forums etc. And since most people use the same password for unencrypted forums as for their email, Facebook and Twitter...
For this to work it will have to be an option buried deep and before enabling it you'd have to have a huge nag box (the size of a blue screen) that clearly shows the dangers of this. And although many have tried I haven't seen any implementation of such a nag box that actually works (forces the user to think and not just press "OK"/"YES"). And even if it did work people won't understand it, at most they will understand that "okay, this is risky" but they have no way of evaluating that risk since they have no idea what they really are enabling.
That said, Tor built into Firefox would be awesome. I just can't imagine it doing more good than harm.
So the way to fix this is to get involved and write some software instead of commenting in a forum as if software is a static problem. Why not open a bug report? Maybe we can prevent password reuse by the browser taking a hash of the password being entered in a form and comparing it against other password hashes locally?
These issues are trivial compared to the bigger issue at play: the right to read and to write without exception.
That is what is at stake here.
Exits sniffing passwords for plain-text forums is a minor concern compared to the bigger issues at play. I'd like to point out that it's trivial to test for this too...(create some plaintext accounts on forums, post there, then run a honeypot server with your own email address and see if you get a login, do this automatically for each exit) However the time commitment isn't so trivial: If you wrote such a program it would be appreciated, please yell loudly about it on the tor-talk list if you do! Tor works by and large by people following some basic principles, like "if you think it should be done, do it".
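In case anyone wants to pick this up, here's a rough sketch of what such an exit-probing tool might look like in Python, assuming a local Tor client with its control port open on 9051 and SOCKS on 9050; the honeypot URL and helper names are made up:

```python
# Sketch of an exit-node probe: log in to your own honeypot through each
# Tor exit with unique bait credentials, then watch the honeypot's logs
# for anyone replaying them. HONEYPOT_URL is hypothetical; run your own
# server there and grep its access log for the bait usernames.
import hashlib

HONEYPOT_URL = "http://honeypot.example.com/login"  # your own server

def bait_credentials(exit_fingerprint):
    """Derive a unique, reproducible username/password per exit, so that
    a later login attempt at the honeypot identifies the sniffing exit."""
    digest = hashlib.sha256(exit_fingerprint.encode()).hexdigest()
    return "user_" + digest[:12], "pw_" + digest[12:28]

def probe_all_exits():
    # stem is the Tor controller library; requests needs the
    # requests[socks] extra for the socks5h:// scheme.
    from stem import Signal
    from stem.control import Controller
    import requests

    with Controller.from_port(port=9051) as ctl:
        ctl.authenticate()
        exits = [d.fingerprint for d in ctl.get_network_statuses()
                 if 'Exit' in d.flags]
        for fp in exits:
            ctl.set_conf('ExitNodes', fp)   # pin the circuit's exit
            ctl.signal(Signal.NEWNYM)       # request a fresh circuit
            user, pw = bait_credentials(fp)
            requests.post(HONEYPOT_URL,
                          data={'user': user, 'pass': pw},
                          proxies={'http': 'socks5h://127.0.0.1:9050'},
                          timeout=60)
```

The per-exit credentials are derived from the exit's fingerprint, so the tool needs no database: any login that later shows up at the honeypot points straight back at the exit that sniffed it.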
You could also see the issues with plaintext passwords on HTTP sites and password reuse as a problem of user education. If you're not up for writing software, why not open a bug and suggest some wireframed ideas for how a UI could look to prevent password reuse? There are a whole bunch of ways to contribute.
There are literally hundreds of problems like this Mozilla or Tor could get help with... get involved instead of saying ahead of time "I can almost guarantee that this will do waaaay more harm than good." I hope to see your responsible posting of these concerns to a bug tracker. A lot of what gets done in FOSS projects like Tor really is just a bunch of people who've decided "hey, if I don't do this, who will?".
As for doing more harm than good: I can't possibly see how this does more harm than NSA domestic surveillance.
> So the way to fix this is to get involved and write some software instead of commenting in a forum as if software is a static problem. Why not open a bug report? Maybe we can prevent password reuse by the browser taking a hash of the password being entered in a form and comparing it against other password hashes locally?
The lack of a proposed solution does not make the identification of a problem worthless. Writing code is as much part of the process of solving problems with software as identifying those problems is. When a problem is largely a social one, rather than a technical one, identifying the problem and making people aware of its existence is often the bigger part of the process.
Tor guarantees anonymity. It succeeds at that. There is no technical problem. There is no code to write. This could lead you to think there's a problem with scope, that maybe Tor should guarantee security too. Unfortunately, the design of Tor is not compatible with that goal: if your plain text data goes through an exit node, then by definition, that exit node gets to see what the data is. A bug report titled "Tor does not provide secure communication" would get closed faster than you can say "won't fix". And rightfully so.
There is, however, a social problem, as stated by the OP: many people actually expect security. This is a problem that can only be solved by educating as many people as possible that this is the wrong expectation to have, which is what the OP is contributing to by making this post. This is very much in the spirit of "if you think it should be done, do it".
I fail to see how HTTPS Everywhere plus a nag whenever a form is detected on an HTTP page doesn't make this a complete non-issue. OP is not giving Mozilla any credit.
If OP had ever used Tor, he would know that the performance hit is so huge that no one in their right mind is going to leave it on by default, if that's even an option. In OP's myopic view, he's missing the fact that the added benefit of built-in anonymity for all FF users far outweighs this hypothetical security risk.
> Maybe we can prevent password reuse by the browser taking a hash of the password being entered in a form and comparing it against other password hashes locally?
Won't work. People have too many accounts to be able to remember unique passwords. A large percentage of mundane users won't understand these things and will just be nagged all the time that they cannot log into their accounts with this new Firething.
You could do that. As you type into any <input type="password"> the browser would buffer the text and not send it through to the DOM immediately. When you hit enter, leave the field, submit the form, etc then it'd get sent through to the DOM where it'd be available to (potentially malicious) javascript. While the password is in the buffer the browser could check for "non-throwaway password + http" and block it if need be.
This does require the browser to know which passwords are which, though, at which point it might as well be entering them for you (and only being willing to do that over https).
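A minimal sketch of that local-hash idea (the names and storage layout are made up; a real implementation would live inside the browser's password manager):

```python
# Minimal sketch of client-side password-reuse detection: the browser
# keeps one keyed hash per site and, before submitting a form, checks
# whether the typed password already matches an entry saved for a
# different site. Keying the hash with a per-profile secret keeps the
# store useless to anyone who copies it off disk.
import hashlib, hmac, secrets

PROFILE_SECRET = secrets.token_bytes(32)  # generated once per profile

def pw_hash(password):
    return hmac.new(PROFILE_SECRET, password.encode(), hashlib.sha256).hexdigest()

class ReuseChecker:
    def __init__(self):
        self.store = {}  # site -> keyed hash of the saved password

    def remember(self, site, password):
        self.store[site] = pw_hash(password)

    def is_reused(self, site, password):
        h = pw_hash(password)
        return any(s != site and stored == h
                   for s, stored in self.store.items())
```

The reuse check only fires when the same password matches a hash saved for a *different* site, so re-entering your own password on its own site never nags.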
Ideally yes, but almost no sites send passwords as they are typed. The only likely situation where that would be a problem is if the site itself is compromised, in which case nothing can save you.
Not just a site compromise; a path compromise (malicious exit node, shady ISP, or fake WiFi access point) can inject Javascript in an unencrypted page. CA-signed SSL with cert pinning prevents MITM injection.
The thing is that I doubt anyone is actually running a malicious server that would log in to your honeypot as soon as it gets your credentials. At least, I would let the username/password list collect for a couple of months before checking it, and even then I wouldn't blindly hit everything.
But the thing is that those Tor exit nodes (just like proxies) can inject all kinds of JS into the sites they serve, and while some people run NoScript or similar browser extensions, most do not.
And giving out your username/password combinations to script kiddies and criminals, and allowing them to run any JS on sites you visit, does a hell of a lot more short-term damage than any government surveillance; losing all of your possessions in an identity theft or just getting phony bills is way worse for your average consumer than having their Facebook chat logged.
Tor is a good tool when used properly, but slapping it on to everything isn't what the tool was made for.
> Maybe we can prevent password reuse by the browser taking a hash of the password being entered in a form and comparing it against other password hashes locally?
I always thought that one possible way around this would be for websites to generate the passwords for their users, essentially forcing them to use some sort of password storage (either in the browser, or LastPass, or whatever-you-want). So one would register with just an email address/username and then be shown a generated password à la O5O1zn8H3zEGNjf1Ly8v to store in the browser’s password manager.
Of course, this only works if people only ever use the service from devices they own and I have absolutely no idea how common internet café/public library users are nowadays.
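A sketch of what that generation step might look like server-side, using Python's secrets module (the length and alphabet here are arbitrary choices, not from the original suggestion):

```python
# Sketch of server-side password generation: instead of letting the user
# pick a password at signup, the site issues a random one, intended to
# land in the browser's password manager or a tool like LastPass.
import secrets, string

ALPHABET = string.ascii_letters + string.digits

def generate_password(length=20):
    # secrets.choice draws from the OS CSPRNG, unlike random.choice
    return ''.join(secrets.choice(ALPHABET) for _ in range(length))
```

A 20-character alphanumeric password carries roughly 119 bits of entropy, comfortably beyond anything a user would choose themselves.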
Is that similar to using something like OpenID, where you sign into the authentication service, then one by one authorize the identity to log you into services?
It seems like a more modular approach than using Facebook or Google to sign into third party sites, but similarly secure in terms of exposing passwords.
I don’t think it has much to do with OpenID – my suggestion still keeps the classical username/password tuple as credentials, but simply doesn’t let the user choose the password freely.
One extension, of course, would be for websites to somehow signal to browser ‘Hey, please store this password/username for this website’, but I’m not sure whether that's necessary.
>So the way to fix this is to get involved and write some software instead of commenting in a forum as if software is a static problem. Why not open a bug report?
It cuts both ways. I don't understand why you meta-comment, instead of going and fixing those things yourself.
It's a little bit different on that side. Telling others to go fix things, if you succeed, is better than (just) doing it yourself. I can donate money to charity, but let's say my budget (and in other scenarios, equate budget to time, or skill, or whatever) is £100/month; if I donate that and tell others to donate too, then maybe I can have a £1000/month impact.
Whereas if I'm just telling people "this is no good, it needs fixing" then not only am I not helping, I'm not persuading other people to help either.
The main premise is wrong too: "if I'm just telling people 'this is no good, it needs fixing' then not only am I not helping, I'm not persuading other people to help either."
Lots of things that are no good NEED to be pointed out. Project members don't magically see all of them.
Either because they don't have that particular use case themselves, or because they haven't thought of it that way, or they think it's not important while the users think otherwise, or they have invested too much in some design to admit it's flawed, etc.
Pointing things out helps make those flaws visible, helps start a discussion, makes the opinion of potential or actual users known, etc. etc.
Telling people to go and help is very secondary to that. I don't think many people ever got motivated because of some second-hand meta-comment from someone not even involved in the project.
You're absolutely right - when I commented I was thinking purely in terms of "you shouldn't tell others to get involved, you should be getting involved yourself", I didn't stop to think about the pros or cons of the original comment.
> As for doing more harm than good: I can't possibly see how this does more harm than NSA domestic surveillance.
How about a false sense of security? Zero improvement in security plus baseless confidence that you are now secure sounds like a step backwards to me! It's already suspected that the NSA has been working around Tor for years now. Wasn't there an article recently stating that the NSA (or was it the CIA...?) controls 2/3 of Tor exit nodes?
LastPass does something like that - it warns about identical passwords. Of course it actually has the plaintext, but a hash would work fine as well.
It's not more dangerous than connecting through a regular proxy, or through a Wifi network at some coffee shop or hotel. And how trustworthy is your ISP exactly?
For example, while I was visiting the US, the networks at two motels I stayed at inserted ads into the web pages I was visiting. So they were not only snooping on my traffic but actively altering it. Also, go to any conference or gathering of people, create an open Wifi hotspot (like, with your phone) and then enjoy all the cookies and passwords flowing through from all the clueless people who connect to it.
Truth of the matter is, websites should be secure by default. Those that aren't represent a huge security risk. Fortunately many of the big ones are (e.g. GMail, Twitter, Facebook, etc...)
I interpreted tjoff's argument to be that the average Tor exit node is more malicious than any of the other average ways for proxies to be dangerous. In other words, yes coffee shops could be dangerous, but they are most often safe, whereas if you're getting routed through a bunch of exit nodes over a period of time (is that even how it works?) you stand a much greater chance of running into malice.
Would love to hear someone address that claim as I understood it.
If you care about your security (or as a service provider, the security of your users) and if you can't establish a chain of trust, then you can't call a solution secure.
If the connection is encrypted, then you have to trust the certificate authority, the browser and the service provider. If the connection is unencrypted, then in addition to the above, you have to trust that nobody is listening on that connection and that's a really tough pill to swallow, even if nothing bad happened to you in the past (just like with car accidents, it needs to happen only once to mess up your life).
There is a different _probability_ of encountering a malicious network sniffer with different service providers. I would bet my left testicle that the probability with Tor is higher than with the average ISP. If you want to speak about security _theory_, then please say so. Usually in these kinds of discussions people speak about practical security, which means trusting different people different amounts, and trading theoretical security for trust.
"It's not more dangerous than connecting through a regular proxy, or through a Wifi network at some coffee shop or hotel. And how trustworthy is your ISP exactly?"
Totally false claim. I would bet my ass that there is a higher probability of having your traffic sniffed by Tor exit nodes than by regular ISPs. At least for short-term malicious purposes, like email snooping and credit cards.
(For more long-term malicious purposes, NSA etc., you can't really be sure.)
"I would bet my ass that there is a higher probability of having your traffic sniffed by Tor exit nodes than by regular ISPs"
Totally false claim. The biggest nodes on the Tor network are run by established Tor enthusiasts, and the largest portion of traffic is routed through them. Social pressure makes those people unlikely to put the node in jeopardy by sniffing exit traffic.
However, there is not much to deter a coffee shop employee from sniffing the wi-fi or, for that matter, to make them care about securing the router against would-be attackers.
As such, I would bet that there is a higher probability of having your traffic sniffed when visiting the local coffee shop than by a Tor exit node.
You can actually put up the "this is dangerous" sign for pretty much any invention out there. I understand that you are not advocating "this should not be done" but like any other feature one needs to be vigilant with Tor.
Are you saying that tor exit nodes are generally malicious? I mean, I understand the precautions involved in routing the data like that, and that there's the possibility, but is it really that rampant?
Tor is a very attractive target for such behaviour. It takes very little effort to set up a Tor exit node and to start passively sniffing traffic. It would surprise me more to find out that this behaviour wasn't rampant.
Also, try and use your online banking through Tor. You might be using HTTPS, but watch as your account is frozen when they detect you accessing it from multiple different countries in a short period of time.
Tor is unsafe to use unless you understand how it works and use it selectively and carefully. Selectively and carefully is not how I would describe the average person's browsing technique.
One possible way to implement ‘buried deep down’ would be with a command-line option (--enable-tor) that requires input on the console to continue launching the browser. Something like the following:
$ iceweasel --enable-tor
The option --enable-tor will route _all_ traffic generated by Iceweasel through
possibly-malicious TOR exit nodes. Please read http://example.com/TOR for more
information on how TOR works and then type "Enable TOR" to continue launching
Iceweasel with TOR enabled.
Awaiting Input: <user types Enable TOR + Enter>
<Iceweasel starts>
Sure, people could build scripts with expect or so, but I would assume this to deter most unknowing users from enabling TOR accidentally (idea stolen from apt-get’s ‘Yes, do as I say!’).
The way to fix this is to throw up a giant warning (à la untrusted SSL cert) when using the Tor mode and accessing a site that does not support HTTPS.
> People will enable this and think they are safe while they are suddenly routing all their cleartext through an untrusted third party (that is, very often, malicious).
So wouldn't the fix for this be to have the browser warn/refuse to send cleartext over Tor?
Probably, if FF implements a 'native Tor button.' The other possibility would be, to only support hidden services natively for .onion URLs. ( And use standard TCP/IP for everything else.)
It would push for more secure practices and would complement Mozilla's current efforts with Persona. Sites that do not use HTTPS, Persona, OpenID, or google/facebook/microsoft/whatever-authentication-system are a problem, but one better served by some form of detection mechanism. Incognito mode could simply try to detect the common exceptional cases and then provide a user warning. The user would then report the warning to the forum/wp admin, and the web would improve as a result.
Second, Tor exit nodes are not more risky than using a local ISP, a coffee shop, or a business/school network. Who is more likely to secure the network against malicious attacks: an underpaid janitor, restaurant worker, or teacher, or a Tor enthusiast established over several years? Most Tor traffic flows through the biggest and most established nodes, and those have a lot to lose from sniffing the network. Can the same be said about the person behind the counter at the local coffee shop?
Probably the best way would be to only route HTTPS traffic (with certificate pinning & checking) through tor. Plain HTTP traffic would go through your regular ISP.
That avoids the "password collection"/MITM attack?
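The decision logic for that proposal is tiny; a sketch in Python (the function and constant names are illustrative, not any real Firefox API):

```python
# Sketch of the proposed split routing: HTTPS traffic goes through the
# local Tor SOCKS proxy, plain HTTP goes direct over the regular ISP.
# 127.0.0.1:9050 is the default Tor SocksPort.
TOR_SOCKS = "socks5h://127.0.0.1:9050"

def route_for(url):
    """Return the proxy to use for a URL under the 'Tor for HTTPS only'
    proposal, or None for a direct connection."""
    if url.startswith("https://"):
        return TOR_SOCKS
    return None  # direct connection, no anonymity
```

As the replies note, the hard part isn't the routing rule but its consequences: sending plain HTTP direct leaks your real IP for exactly the sites where you had no transport security to begin with.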
It would be better to have it not route at all if unencrypted and the Tor feature is turned on. The idea is anonymity. Having it suddenly use your real-world IP because it sees you aren't using TLS breaks this.
Would it be possible to make it so it just won't connect to a website unless it's encrypted or a .onion domain? You could get past it by going through said warnings and nagging, so users who know what they are doing wouldn't be troubled.
I think that a good, incremental first step would be adding Tor into Firefox with support for the .onion domain. This offsets the problem that potentially dangerous exits could be sniffing your traffic. Additionally, this makes Tor space much more accessible. The (slight) problem is that it'll probably put significantly higher load on the Tor relays, and considering that they're already over capacity, it could hurt Tor badly.
Maybe the relay code could be bundled into Firefox, and there could be a toggle for "make me into a relay".
According to some providers, relays are doing just fine (40 Mbps on a 100 Mbps line). There are significantly more relays than exit nodes because law enforcement can't really see relays and harass you.
This was patched if you were using the latest Tor Browser Bundle at the time this exploit was attempted [1]. Vulnerabilities will always be disclosed, developers will always have to patch them. Automatic updating and deterministic builds [2] are the way to go.
Disabling javascript would probably add about the same amount of fingerprintability. Panopticlick says that 1 in 24000 browsers have the same fingerprint as TorBrowser.
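For scale, a fingerprint shared by 1 in 24000 browsers corresponds to roughly 14.6 bits of identifying information:

```python
# A fingerprint shared by 1 in N browsers carries about log2(N) bits of
# identifying information; Panopticlick reports N = 24000 for TorBrowser.
import math

def fingerprint_bits(one_in_n):
    return math.log2(one_in_n)

print(round(fingerprint_bits(24000), 1))  # prints 14.6
```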
There was a recent attack against TorBrowser that utilized a vulnerability in older versions of the Firefox Javascript engine. These sorts of vulnerabilities are not uncommon, so turning off JS significantly decreases the attack surface.
Firefox 23 doesn't ship with JavaScript "always on".
It's on by default, as it always has been. And the option to turn it off was removed from the main options dialog. This is for sensible reasons -- many users turn it off either accidentally, or without understanding what it means ("this must be that insecure Java thing I keep hearing about") which breaks many websites.
But the 0.01% of people who like to browse without JS enabled can still do so via about:config -- look for "javascript.enabled". Or they can use an add-on. Or they can wait a version or two (I'm not sure the exact timeline) whereupon you'll be able to disable it via the developer tools UI.
My last experience with Tor was somewhat painful due to the sluggish nature of it. If Firefox added Tor as a feature, making Tor "mainstream", could the existing Tor network handle the extra load?
My understanding (I'm no expert on this) is that the more users, the more efficient routing is within the Tor network, in principle making .onion space less sluggish, but if you have a big increase in the number of users with no corresponding increase in provision of exit nodes, then the existing bottleneck for accessing sites outside Tor will get worse.
Are you aware that users running the TOR Browser Bundle will normally be a node (i.e., they will provide proxying services to other users/nodes), just as with BitTorrent clients?
There is no technical reason why it couldn't be added. However, there are political and ethical reasons not to add that functionality. The legality of running a Tor exit node is debatable. You don't want "Use Firefox, get a visit from the police" to be a headline people can use as a reason not to use FF.
Well, Hacker News, for one, seems to auto-hellban posts (or new user registrations?) from Tor. I'm not sure if this is specifically targeting Tor, or if it's some automated system reacting to active abuse from Tor exit node IP addresses.
I imagine it can be a tough call for web site operators who want to support anonymity, but also don't want to deal with the bad apples who will use it to create grief.
I really doubt it would become a one-click privacy measure: if you want to participate in an onion-routed, privacy-enhancing anonymity network, why put it in a browser?
A browser accesses that network; it is the weakest and least secure point in that setup.
There are so many ways to track an individual independent of the network (JavaScript, extensions, addons, plugins, client-side caching) that even if Tor becomes a feature in Firefox, the slightest unmitigated problem, or even your own behavior, may compromise your privacy.
Who is going to provide the ridiculously large number of exit nodes required if this hits production? Don't exit nodes cost money, in the sense that there is huge risk involved in running one?
It would be fine to just have the feature so .onion sites are accessible. To get more than that requires education, much like an aircraft pilot: one small mistake and you may literally get yourself and others killed. Even Tor Browser can't be trusted, due to bugs. A bare beginning of a solution is a firewall to block non-Tor connections.
Without that, you're just the Titanic happily floating across the ocean without making sure you've got enough lifeboats if something goes wrong. Should failure always mean death? Is it too much to ask to insist on a firewall safety net to block non-Tor connections when the next bug is found in the Tor Browser Bundle (or whatever)?
I'm all for making .onion sites reachable, as long as that's the only thing this new feature promises to do. It would make *.onion sites mainstream, which is good for everybody. Strength in numbers, herd immunity, getting lost in the crowd - that's precisely what Tor relies on to achieve its most basic goals. Taking Tor mainstream with support in Firefox would mean there would be more Tor users for the seriously privacy-paranoid to hide behind.
If it only works with secure connections, and there's a general improvement in bandwidth and it prompts some big non-profits-with-profits like Mozilla to actually run exit nodes in countries that don't cooperate with other exit-node countries, then ... yes, it could be a little bit better.
The idea here is to bring Tor to a wider audience - not that it would be the first time Tor can be used through Firefox. There may also be other technical advantages or disadvantages, such as the fact that main Firefox will get updates quicker than TorBrowser.
OMFG how many times will this fucking argument be brought up? It is open source. Who cares? The Internet was a government sponsored project at one point too.
I hope this doesn't happen. I would uninstall Firefox, and not even test development for our application in Firefox. Maybe I could test some other gecko-based browser, like SeaMonkey, but I don't want anything tor related on my computer.
I don't have a need for tor. I am not a political activist, I'm not a journalist.
I'm not convinced that supporting tor is also not supporting criminal activity, such as child pornography and money laundering.
I don't want software on my computer that I don't need and I don't like. If Firefox wants to dive deep into controversial and politically complicated topics, that's their right, but I don't have to have them on my computer or support that.
Tor has no specific features that make it useful for child pornography or money laundering, so it sounds like you just object to anonymity for TCP clients in general.
You're certainly free to continue telling every server your client machine's location and your ISP about every server to which you connect. But that's not going to help children or fight any crime.
To me, possibly the best thing we can do to protect our children today is to enable them to grow up with a shred of privacy, without the baggage of an unknowable amount of juvenile internet browsing history following them into adulthood to be sold and traded by evil men who would be their masters. That's the industrial scale child exploitation, that is.
I do not object to anonymity for TCP clients in general. I've set up squid proxies on AWS when I've wanted anonymity, like in open wifi areas. People can set up VPNs or sign up for those services if they want to protect their network activity, which should be sufficient for most use cases, including avoiding your ISP monitoring your activity. Tor, on the other hand, is used to avoid law enforcement and government surveillance. I'm not a journalist, I'm not politically active, and I'm not engaged in criminal activity, and I'm not going to support something that can help people avoid justice for their misdeeds. I think journalists should come up with their own solution that can't be taken advantage of by bad people who do bad things.
Circa 1995: "I think researchers and computer scientists should come up with their own solution to display text and images that can't be exploited by bad people. I will never support the World Wide Web. Gopher all the way, baby!"
A bit tongue in cheek, but let's be serious - there is no technology in existence that can't be exploited by bad people. Tor is a tool; it can be used for good or for naughty or for boring. Like most every other online tool or protocol out there.
How would you feel if this anonymity were built into the networking protocols from the beginning?
When the non-anonymous web first became popular and was used for child pornography and money laundering (as it still is) - did you object to using it? USPS or FedEx can also be used to (trivially) ship contraband material, even anonymously.
I'm far more concerned about companies like Facebook tracking and recording my every activity than I am the NSA or law enforcement - I'm boring to the NSA, but my product research and online purchasing behaviors are pure gold to the corporate world.
So, let's dispense with the bill of rights, and let innocent people falsely accused of crimes come up with their own solution, 'cause we don't want to protect criminals' rights?
Do you really think that you have no need for Tor? The journalists and political activists rely on Tor and they do a crucial job for all of us, including you. And Tor gets better for them if more people use it, more traffic means it gets harder for adversaries to filter out a specific connection.
Sure, Tor can be used for nefarious purposes, child pornography and money laundering - but so can the regular mail, the telephone, the internet, public roads or in general pretty much every piece of technology. Is that a reason for wholesale surveillance and abolishment of privacy? I don't think so.
It's a utility I don't have a need for. Sensitive information I don't want to disclose is hidden with SSL encryption. Maybe you have need for tor, I don't.
This seems like a nice idea, but the developers of Firefox have made it more than known that they would prefer to argue about the inclusion and removal of version numbers as opposed to actual issues. Look how long it took them to fix the memory leaks that had plagued the browser since version 2, and that were only fixed recently.
It's easy enough to set Tor up with Firefox yourself. Perhaps all that is needed is an easy to understand and access Tor guide. Perhaps the first page you see upon loading Firefox after installing or updating is a, "We recommend you use Tor for a safer browsing experience" and then give some scenarios where Tor should and shouldn't be used.
Whenever I see this kind of poorly-informed bashing of the Mozilla team (always mentioning "the memory leaks") I can't help but wonder how the author would do building and maintaining a piece of software as big and complex as a modern Web browser.
The thing to note is that these issues were fixed and Firefox is still a competitive, solid browser.
It's well-intentioned bashing, if you want to slap a label on it. I am one of the few who still use Firefox; I've used it since the beginning, more or less, and the memory-leak issue you attempt to downplay was a very real issue that plagued the browser for so long that when Chrome came along, sandboxed everything in separate processes, and was super fast, people jumped ship faster than the Titanic.
The argument of turning someone's opinion around and asking how they would fix and maintain the problem is an exhausted counterargument that has no validity. It wasn't that the Mozilla team didn't have the smarts to fix the memory leaks; it's the fact that they denied there was even a problem for so long. It's all about prioritising what you work on. So if I had any involvement in Mozilla and Firefox's development, I would be prioritising what's important and what isn't.
This place has really changed. You can't give your opinion on something without being down-voted into oblivion, even if your opinion is well-intentioned and constructive. My comment wasn't negative, it was my opinion. I didn't bash anyone; I'm sick of this place misconstruing other people's comments (a frequent occurrence from what I've noticed).
Setting up tor as in using the Tor proxy is easy enough.
Setting it up so it's actually not leaking information that would identify you is hard-to-impossible. See the things the Tor Browser Bundle ends up doing (including changes to C++ Gecko code!).