> The things that makes TOR useful for people avoiding prosecution also makes it useful for people involved in malicious and criminal activities ... Everything from spam and network attacks to trafficking people and contraband.
> Although the Tor Project could promote options to restrict these malicious actions, they choose to do nothing. Seriously: if a TOR hidden service offers hard-core drugs or human trafficking or fake IDs, then they should be shut down.
I hope I can convince some folks here to be skeptical of this kind of thinking.
Think about it: If the Tor project came up with some clever way to take down bad hidden services, then what would be stopping a government from forcing them to take down legitimate hidden services too? And meanwhile, the criminals would move on to some other Tor-like service, and keep doing their thing.
In short, you're not going to stop criminals, who will always find a way to keep doing what they're doing. You will, on the other hand, impede the free speech of honest users.
The Tor project should focus on making it impossible for hidden services to be taken down--even by the Tor project itself! And they do :)
The same is true of freedom of speech and other systems that enable it. Give people the ability/right to say anything without censorship and you'll get bullies, trolls, extremists, and other horrible people taking advantage of it. But it's better to live with that in order to protect 'normal' people's freedom of speech than to censor it and set a bad precedent.
In both freedom of speech and privacy, you have to be willing to support the rights of those whom people consider 'horrible' in order to help the greater population as a whole.
Freedom of speech means freedom from the government interfering in what you have to say (e.g., Snowden).
Letting people say whatever they want on the Internet does result in bullies, trolls, extremists, and other horrible people shouting louder than more thoughtful voices, and in the end it silences reasonable people.
There is no law that forces me as an individual to respect what you say or to not try to silence you.
Censorship can only be done by the government, not by individuals; no one should trust the unfair bully who cries unfairness at being silenced.
***
It's the silencing of obnoxious voices
that allows thoughtful voices to be heard.
***
To those downvoting, the above is an honest question. If a powerful group has significant control over public speech (say, Google- and Facebook-like power), why shouldn't we be wary of their power to censor their opponents?
Merely because they don't (directly) have the power of law behind them doesn't make them less dangerous than a government.
I find your voice obnoxious and unthoughtful. Be silent.
See how that works? "Horrible," "thoughtful," "reasonable"--as defined by you. Might-makes-right and the-ends-justify-the-means. Astounding hubris and a stubborn refusal to learn from history. And an apparent inability to reason from abstract principles.
This is why national borders (and federalism within national borders) are good: compartmentalization prevents damage from spreading. Without bulkheads, flooding easily sinks the whole ship.
> The things that makes TOR useful for people avoiding prosecution also makes it useful for people involved in malicious and criminal activities ... Everything from spam and network attacks to trafficking people and contraband.
It is OK if someone who doesn't understand it says those sorts of things, but when an organization like Cloudflare[1] jumps on the goof-troop bandwagon, it really does make a difference.
I hope I can make you more skeptical of the "Cloudflare is the bad guy" trope. Cloudflare is light-years ahead of any other CDN when it comes to supporting Tor.
They specifically built controls so that web sites can remove CAPTCHAs for Tor users completely.[0]
They also do not block/CAPTCHA Tor users automatically. They treat Tor IPs like any other IPs: if they detect abuse from an IP, they start serving CAPTCHAs.
Finally, Cloudflare has stated publicly[1] that they would like to set up .onion sites for their customers automatically. But they cannot do so until the Tor project upgrades the hashing algorithm used for .onion addresses. If the two organizations could work together, this could be game-changing for online anonymity. Imagine millions of web sites automatically supporting Tor!
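For context, here's a rough sketch (my own illustration, not Cloudflare's or Tor's code) of how the legacy v2 .onion addresses are derived; the 80-bit truncated SHA-1 below is the hashing algorithm in question, and next-generation (v3) addresses replace it with full ed25519 public keys:

```python
# Legacy v2 onion addresses: base32 of the first 80 bits of the SHA-1 digest
# of the service's DER-encoded RSA-1024 public key: 16 characters total.
import base64
import hashlib

def v2_onion_address(der_public_key: bytes) -> str:
    digest = hashlib.sha1(der_public_key).digest()  # 160-bit SHA-1
    return base64.b32encode(digest[:10]).decode().lower() + ".onion"

# Dummy key bytes purely for illustration; a real service hashes its actual key.
print(v2_onion_address(b"\x30\x81\x89\x02\x81\x81\x00" + b"\x00" * 129))
```

Presumably it's this short, SHA-1-based address format that the upgrade is meant to replace.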
I can't understand why the HN crowd is so anti-Cloudflare. This Tor thing seems to be one of the major misconceptions.
Disclaimer: I'm not affiliated with either Tor or Cloudflare in any way.
I think a lot of people on HN are anti-Cloudflare because of the way they portray themselves in blog posts. Every blog post makes it sound like they are saving the internet.
In addition, their response to the memory leak issue a few months back left a bad taste in a lot of people's mouths. They attacked Google unfairly for not purging their leaked content fast enough, while trying to downplay the severity of the mistake they made.
My link was a direct response to your second link.
I do not believe Cloudflare on your first link (that they treat Tor IPs like any other IPs).
I can tell you from experience that I have never connected to a Cloudflare-backed site over Tor that didn't require multiple CAPTCHAs. So every Tor IP is hostile to Cloudflare sites? If so, how is that practically different than just blocking Tor?
I think that if you read the response at your first link again, you can see that they are implying what you are saying, but they are not actually saying it. I think they are blocking Tor, but explaining it in a diplomatic way.
> My link was a direct response to your second link.
Yes, and they only take issue with the claim that 94% of Tor requests to Cloudflare are malicious. It's a shame that Cloudflare hasn't responded with the data they requested, and it's fair to hold that against them. But I'm also not aware of a response from Tor regarding Cloudflare's desire to make automatic SSL certificate generation possible for .onion addresses.
As a huge fan of both organizations, I wish they would act like adults and work together, rather than spend so much time pointing fingers.
> If so, how is that practically different than just blocking Tor?
Because Cloudflare allows their web sites to disable CAPTCHAs for Tor if they choose to.
> I think they are blocking Tor, but explaining it in a diplomatic way.
We'll have to disagree on that. The Cloudflare post outlines not one, but two ways that the two organizations could work together to solve the problem.
But again, I agree it would be great if Cloudflare would release more detailed data about the attacks they see from Tor.
The OP's blog post claims 96% of the traffic going to their Tor hidden service is hostile. It doesn't seem unreasonable to me at all that every Tor IP is hostile.
It is their definition of "hostile" that is the problem. They do not explain. I suspect it means "I can't track you, so you are hostile." Otherwise, where is the data for this?
> I can tell you from experience that I have never connected to a Cloudflare-backed site over Tor that didn't require multiple CAPTCHAs. So every Tor IP is hostile to Cloudflare sites? If so, how is that practically different than just blocking Tor?
So you manually looked up the provider of every site you visited?
Sounds like 100% of Cloudflare sites that are configured to require CAPTCHAs require CAPTCHAs.
TOR is a source of lots of DDoS attacks and the like, which happen to be the exact thing Cloudflare takes money from people to protect against. From the blog posts I have read in the past, it seems they appreciate TOR's existence but recognize that in its current state, it is a thorn in their side.
Mitigating abuse while supporting the TOR ecosystem is an open problem and they have certainly done more than any other CDN afaik to explore ways to allow legitimate TOR users past their firewall. Unfortunately, if I remember correctly the solution involves tracking IDs which can deanonymize users.
I had an idea a while back of a distributed, anonymous reputation system with rotating tokens. I still believe this is a better solution than the permanent tracking IDs currently used and maintained by other companies. It would return control to the user.
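To make the rotating-token idea a bit more concrete, here's a toy sketch (entirely my own, not anything Tor or Cloudflare has shipped; real schemes in this spirit, like Privacy Pass, use proper padding and elliptic-curve constructions) of the blind-signature trick such a system could build on: the issuer signs a token it never sees, so redeeming the token later can't be linked back to the client who earned it.

```python
# Toy sketch of unlinkable one-shot tokens via textbook RSA blind signatures.
# NOT secure as written (tiny primes, no padding or hashing); it only shows
# why the issuer cannot link the token it signed to the token later redeemed.
import secrets
from math import gcd

# Issuer key pair (deliberately tiny so the numbers stay readable).
p, q = 104723, 104729
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

# Client: pick a random token and blind it with a random factor r.
token = secrets.randbelow(n)
r = 0
while gcd(r, n) != 1:
    r = secrets.randbelow(n)
blinded = (token * pow(r, e, n)) % n

# Issuer: signs the blinded value without ever seeing `token` itself.
blind_sig = pow(blinded, d, n)

# Client: strips the blinding factor, leaving a valid signature on `token`.
sig = (blind_sig * pow(r, -1, n)) % n

# At redemption time the signature verifies against the issuer's public key,
# but the issuer cannot match (token, sig) to any blinded value it signed.
assert pow(sig, e, n) == token
print("redeemed token:", token, "signature:", sig)
```

A CDN could hand out a batch of these after one CAPTCHA and accept one per request, getting some abuse control without a persistent identifier.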
I'm not too sure what "jumping on the goof-troop bandwagon" means to you, or what difference you are referring to.
Cloudflare /did/ invest a lot of time communicating and trying to remedy the situation with the DDoS. This is evident in the amount of communication that can be found in the bug tracker.
As larger proportions of the infosec community get hired or contracted by law enforcement, intelligence, and other government agencies, you'll find that the respect for liberty that the hacker manifesto espoused is increasingly hard to find.
The prescient Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Are you sure it's not the other way around? It appears that the commercial opportunity for infosec has increased drastically in the last 10 years.
I mean, yeah, it's not the early 1990s anymore, where mostly academics and enthusiasts were on the internet. But is the hacker manifesto a standard to hold infosec practitioners to, when they likely were in diapers during the halcyon days you refer to?
I don't understand how a professional in infosec couldn't see the necessity of anonymity in myriad use cases. Overlay networks seem to be the only viable option in many scenarios....?
Yes, Tor isn't secure for anyone unless it's secure for everyone. I mean, just look at what's been leaked from the CIA and NSA. There's no way to guarantee that any backdoor won't get leaked, or be discovered independently.
Your argument sounds a lot like "criminals will always find a way, no point in gun control." The thing is, that argument can be applied to all laws. Why even have laws? Criminals will always find a way.
Look, I agree that anonymity is important. I agree that the world is better if some things are kept absolutely private, from everyone, to whatever extent is possible. But let's not pretend that there aren't trade-offs.
> "criminals will always find a way, no point in gun control"
You could partially refute this argument by claiming that gun control is hard to ignore or bypass (although not that hard, speaking as someone who knows a fair amount about fabrication and guns).
On the other hand, this argument ported over to Tor doesn't make any sense; if Tor intentionally cripples its functionality, criminals will move over to non-crippled solutions like I2P. The best you can hope for is to mildly and temporarily inconvenience criminals while really hurting innocent people who need Tor.
A possible compromise: build "plugins" for Tor that can be downloaded and installed freely, similar to browser extensions. Some of these plugins could include traffic-monitoring functionality that blacklists certain types of traffic or known malicious actors; if the majority of users chose not to forward that traffic, it would greatly reduce the effectiveness of the network for criminals.
The author is correct. HTTPS adds more vectors which can be used to form a fingerprint, such as the TLS version or the preferred cipher suites of the browser.
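To make that concrete, here's a tiny sketch (my own, unrelated to the article) of the kind of signal a passive observer gets from the TLS handshake alone; JA3-style fingerprints are essentially a hash over these ClientHello fields as they appear on the wire:

```python
# Print the ordered cipher-suite list the local TLS stack would offer.
# That list (plus extensions, versions, etc.) is visible in the ClientHello
# to anyone on the path and differs across TLS libraries and versions.
import hashlib
import ssl

ctx = ssl.create_default_context()
offered = [c["name"] for c in ctx.get_ciphers()]  # ordered preference list

print(f"{len(offered)} suites offered, starting with {offered[:3]}")
print("toy fingerprint:", hashlib.md5(",".join(offered).encode()).hexdigest())
```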
That's what you use when you don't have the richer information offered by HTTP sniffing.
TLS versions tell you someone is one of a billion users of iOS version X; with plain HTTP, an observer can piece together session cookies across every site and service you use, or, with active attacks, use things like the Verizon-injected tracking header that uniquely identifies you across devices.
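And on the HTTP side, a small sketch (mine; the hosts and cookie values are invented) of what a passive observer can trivially extract and correlate when the traffic is plaintext:

```python
# Group plaintext HTTP requests by session cookie to show how an on-path
# observer links one user's browsing across unrelated sites.
from collections import defaultdict

def index_http_requests(requests):
    """Map each observed Cookie header to the set of Hosts it was sent to."""
    profile = defaultdict(set)
    for raw in requests:
        headers = {}
        for line in raw.decode(errors="replace").split("\r\n")[1:]:
            if ": " in line:
                name, value = line.split(": ", 1)
                headers[name.lower()] = value
        if "cookie" in headers and "host" in headers:
            profile[headers["cookie"]].add(headers["host"])
    return dict(profile)

# Two requests carrying the same session cookie are trivially linkable:
reqs = [
    b"GET / HTTP/1.1\r\nHost: site-a.example\r\nCookie: sid=abc123\r\n\r\n",
    b"GET / HTTP/1.1\r\nHost: site-b.example\r\nCookie: sid=abc123\r\n\r\n",
]
print(index_http_requests(reqs))
```

With HTTPS, the same observer sees little more than the destination and timing.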
That's true. If you're talking about your ISP sniffing your traffic, HTTPS is a win. But since the author was talking about using Tor, I figured he was focusing on fingerprinting by the websites themselves, where HTTPS is easier for them to fingerprint than HTTP.
> fingerprinting by the websites themselves, where HTTPS is easier for them to fingerprint than HTTP.
Could you explain your reasoning on that? If they're using the Tor browser every user is going to be very similar on crypto suites, user-agent, etc. — it's a rebadged Firefox distributable so it's going to be using their HTTPS implementation and you won't even get the OS version variations unless someone at the Tor project massively screws up.
The bigger problem is that if you are being targeted by the website, there are far more interesting attacks they can try – convince the user to turn on JavaScript and do all of that profiling for WebGL/canvas rendering, local fonts, network resource timing to look for cached content from other sites, etc.
Yes, great point. But I said _easier_, not _easy_. Over HTTPS, a user running an outdated version of the Tor browser may present different cipher suites than everyone else. Over plain HTTP, that can't happen.
> unless someone at the Tor project massively screws up
And that is what the author of the article is claiming.
My comments here aren't in agreement with the author of the article, and I'm not claiming "HTTPS is bad" or anything like that. It's simply a categorical fact that HTTPS has more vectors to be fingerprinted than HTTP.
But of course, as you mentioned, features enabled by Javascript are the bigger problem, which is why users who wish to be anonymous should completely disable it!
Adding HTTPS plausibly adds a single low-cardinality signal but it removes a ton of other ones for network-level observers. When it comes to hostile site owners, realistically you're screwed but from a privacy perspective it's a question of whether you're one of the small percentage of users who have a) failed to install updates and b) disabled JavaScript.
That's a pretty small percentage of users for whom HTTPS isn't an across-the-board win for privacy.
I don't doubt that HTTPS fingerprinting can be useful, but you can see the contents of their communications with HTTP. That seems much more useful for getting intelligence about Tor users.
Also me, so I wondered what experience he could have had to lead him to this conclusion. I found this:
> His research focus on anti-anonymity technologies combines fields as vast as ergonomics and child development to artificial intelligence and theoretical biophysics.
I suspect it was because the author thought the key used to connect could uniquely identify that browser, or that keys are somehow reused across sites?
True, almost every remotely important page uses HTTPS now so the TOR traffic doesn't really stand out as much as it did a couple of years ago, but it still might look suspicious once you look at the overall bandwidth.
> Seriously: if a TOR hidden service offers hard-core drugs or human trafficking or fake IDs, then they should be shut down.
If it were possible to do this, TOR would lose any shred of value it has for people using it to fight oppression.
Ok, let's say we put technology in place to "shut down" sites that sell fake IDs to teenagers (god forbid!).
Well now, Mr. Lawman from the U.K. or China is going to come in and say "hey, wait a minute, you can shut down websites that illegally peddle fake IDs, so you obviously have the ability to shut down websites that peddle illegal extremism (meaning Falun Gong, anti-government groups, etc.)." The only defense against the TOR project and its supporters being forced to do this is that it's not technically feasible.
It's really bad that this isn't manifestly obvious to someone who is apparently involved with the TOR project to a substantial degree.
> If it were possible to do this, TOR would lose any shred of value it has for people using it to fight oppression.
Most people in that field mess up their opsec often enough that this is entirely possible; see Silk Road and its successors.
When it comes to the kiddyfuckers, I'm a bit torn myself when I ask whether child pornography (and apparently people even shared videos of raped toddlers) is an excuse for hacking and exposing actually innocent TOR users. It's the classic 4chan/reddit dilemma: what kind of content justifies which measures, and when is it worth limiting the right to free speech?
For the record, I support anything done to bring child porn offenders to justice, but I also recognize that this opens dangerous doors - from the issue of "now it's an excuse for the Chinese/Russians/Iran/Saudi-Arabians to crack down on legitimate activities" to "people are actually already planting fake child-porn evidence, including in scareware/ransomware".
Yeah, certainly removing child porn from TOR reduces the amount of background noise in traffic. But there should still be vast quantities of people using TOR for file-sharing to provide significant noise...
Why target one illegal activity and not the other? It would serve no purpose to allow some crimes to take place, but not others. I believe that organizing drug sales, small amounts as well as large, is just as horrible and can ruin just as many lives as distributing images and videos (I wonder how many are duplicates) of exploited children.
Because it's usually the drug consumers themselves who decide what to buy and consume. And given that most of the sellers apparently don't cut their products with weird stuff, from rat dung and lead to fentanyl and other adulterants that sometimes cause dozens of ODs (fentanyl-contaminated heroin batches are well known for this, and a plague for ERs because the victims always come in a bunch), one might argue that clean, vetted drugs via TOR/Silk Road are better for society than users hitting the streets. Also, drugs bought on the streets directly finance the street mafia and contribute to gang violence, as well as to the bad reputation of the "dealer" quarters of a city. Internet drug shopping kills off that part of the chain entirely.
Child porn is just ... inexcusable no matter how you think about it. Fine, if some porn stars make themselves look young, okay, but those are consenting adult performers. Abusing toddlers and children for porn is not just violent in itself; it leaves its victims as wrecks.
You cannot defend one type of crime because another type is much worse. If anything, one could argue that child pornography is less horrible than hitman services and human trafficking, given that the majority of the content shared is not new content of new victims.
Let's not try to justify serious crime, because other crime may be seen as more serious.
I believe that I'm within my rights to buy drugs. If I were a vendor I would be proud of my job, and I would take any jail time as if it were for homosexuality in the '40s.
People ruin their lives, some by using drugs. It's true that drugs, like alcohol, may be an existential risk to the life and potential of a small number of people. So are casinos, fast food, extreme sports, and video games.
"Panopticlick will analyze how well your browser and add-ons protect you against online tracking techniques. We’ll also see if your system is uniquely configured—and thus identifiable—even if you are using privacy-protective software."
The title is clickbait. There are no exploits involved. He's not dropping NITs on users, for example. Fingerprinting is not a huge issue. The main defense is preventing adversaries from learning one's ISP-assigned IP address. Maybe Tor Project does encourage too much confidence in the "all users look alike" feature. They certainly do, in my opinion, regarding the security of Tor browser in Windows, with no protection against exploits and Tor bypass.
This is little more than an opinion piece. AFAICT, the only "vulnerability" is that with JS on, it can detect the operating system.
At some point it claims it can also detect screen size:
> However, there are not too many people using the same OS and same screen size and visiting the same sites at around the same time. You will likely stand out.
but both my tests and their own text contradict that:
> On a normal desktop browser, the Window Size is smaller than the Screen Size. (Mobile devices may show a Windows Size that is larger than the Screen Size.) To prevent screen profiling, the TOR-Browser sets them to be the same size.
Note that detecting Tor Browser is doable from the User-Agent, so there's no point in setting Window Size = Screen Size.
Definitely not "exploiting", and I suspect that's why it couldn't get a reply from security MLs, which see a lot of these. Flagged.
Let's say event X happens and Tor shuts down. What will happen to all those horrible people using the Tor network? Will they stop doing horrible activities? Probably not. Maybe they'll have more difficulty in their activities since they'll be geographically isolated.
Tor is just a channel that horrible people use, but the horrible stuff they do happens in real life.
There is one approach that is sure to solve the problem of the horrible things people do. Put a camera in every house. Put a camera on every corner. Then you can monitor every person and check whether they're doing horrible things.
A little condescending and obnoxious for my taste, but most of this is well known, and while the TOR Browser devs could implement everything this guy wants, it would end in a moral dilemma over what counts as "malicious". IMHO the Tor Browser is not suitable for the truly paranoid, but usually this won't matter, because the truly paranoid aren't using the TOR network as-is (with the integrated browser), and those who aren't truly paranoid can live with those risks.
For asynchronous messaging, agl had something really promising with Pond, but for "reasons" decided to abandon it, and nobody bothered to continue its development.
All in all not a really in-depth article, but the author has some valid points. All parameters exposed by the browser should be varied, especially the scroll bar size, which seems to be the real offender here.
Not sure how the Tails [0] distribution handles it, but IIRC it notified me of the screen size / viewport size problem when I maximized the browser.
You can still target the screen size using CSS - you can even track changes by creating a CSS file with literally thousands upon thousands of media queries, where each media query sets e.g. the background-image property of a hidden div.
Thanks to gzip compression, this shouldn't even take much data to transfer.
Oh, and as I think of it, would this here still work?
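For anyone who wants to see what that stylesheet would actually look like, here's a short sketch of the media-query trick described above (my own; the /log endpoint, element id, and bucket size are made up):

```python
# Generate a CSS file with one media query per viewport-width bucket.
# Whichever query matches makes the browser fetch a width-specific "image",
# so the server log reveals the window width without any JavaScript.
def tracking_css(min_px=200, max_px=2200, step=1):
    rules = []
    for w in range(min_px, max_px, step):
        rules.append(
            f"@media (min-width:{w}px) and (max-width:{w + step - 1}px)"
            f"{{ #t {{ background-image: url('https://example.com/log?w={w}') }} }}"
        )
    return "\n".join(rules)

css = tracking_css()
print(f"{len(css.splitlines())} rules, about {len(css) // 1024} KiB uncompressed")
# The output is extremely repetitive, so it gzips down to almost nothing,
# which is the point about transfer size made above.
```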
Of course, but that also requires accommodating the fact that after N queries the circuit/endpoints change, and unless you have a unique tracking system per "attacked" user (like a randomly generated ID per CSS file served) or a way to persist state across sessions, screen size alone won't be able to identify everyone on the TOR network. And that's my point: if you disable JS (raise the security level to the max, in the case of the Tor Browser), you will be fine against the most common attacks. Against "APTs" you pretty much don't stand a chance unless you go "off the grid", i.e. don't use the integrated browser at all.
You have lots of signals to gather without JS: the system font base (this alone should provide a fairly unique identifier!), screen aspect ratio, DPI value, the "pointer" media query, the relationship between width and device-width... and in non-TOR scenarios you can fingerprint the supported TLS versions and cipher suites plus the user agent. Oh, and you can also passively fingerprint based on the presence of an ad blocker.
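As a back-of-the-envelope illustration of why stacking weak signals like these is enough to single people out (the frequencies below are invented purely for the arithmetic): each signal contributes about -log2(frequency) bits, and roughly independent signals add up.

```python
# Rough entropy math for combining fingerprinting signals.
from math import log2

# Hypothetical match frequencies: the share of users sharing your value.
signals = {
    "font set":            1 / 500,
    "aspect ratio + DPI":  1 / 20,
    "ad blocker present":  1 / 4,
    "TLS cipher ordering": 1 / 50,
}

total_bits = sum(-log2(p) for p in signals.values())
matching_fraction = 2 ** -total_bits

print(f"{total_bits:.1f} bits -> roughly 1 in {round(1 / matching_fraction):,} users")
```

Tor Browser's whole strategy is to push those frequencies toward 1, i.e. make everyone's values identical.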
If you can identify _a single person_ on the TOR network with the TOR Browser, across several sessions, with just the data you are describing and without false positives, that would probably make a case. But those attacks have been well known since 2008, and so far no one has demonstrated the claims you firmly believe, so unless you know something no one else in the world does, the common "TOR user" on the integrated TOR Browser is in good standing just by disabling JS alone.
If you are THAT paranoid, you should already know that you shouldn't use the integrated browser itself, since you are losing half the battle right there by giving your adversaries a well-known attack vector.
Hardly good advice. Tor Browser is specifically built with privacy and security in mind. Using any "mainstream browser" and then expecting to make all those tweaks is neither practical nor feasible advice.
As far as using a mainstream browser goes, you'd want to disable a smattering of features to get the most security/anonymity, like plugins and WebGL. Any recommendations for a well-thought-out list?
And while I understand the motivation to avoid an exploit magnet, what do you mean by "inferior security design"?
I have a habit of browsing with random window sizes (usually shrunk to just fit what I want to see). I guess if I ever need to use the TOR browser, I'll need to get out of that habit! This researcher disclosed his simplest approach to fingerprinting TOR browsers, but if he really has 12 factors, you'll only be safe browsing sites with huge amounts of traffic.
> He would respond that it was the onion routing, the original program of projects from NRL. It was Rachel Greenstadt who noted to him that this was a nice acronym and gave Tor its name. Roger then observed that it also works well as a recursive acronym, ‘Tor’s onion routing’. It was also his decision that it should be written ‘Tor’ not ‘TOR’. Making it more of an ordinary word in this way also emphasizes the overlap of meaning with the German word ‘Tor’, which is gate (as in a city gate).
> To sum up, “Tor: The Second-Generation Onion Router” is about the design of onion-routing systems, not just onion routers themselves. Tor is the third generation of onion routing, not the second. And the ‘r’ in ‘Tor’ represents ‘routing’ not ‘router’. In hindsight we probably should have spent a bit more time on the paper title.
You have a significantly different definition of "not an acronym" than I do.