If you are running an onion service but don't need to hide the server's IP, as when you also provide clearnet access to the same server, you should enable single hop mode [0] to reduce load on the Tor network and also speed up connections. This way your server connects directly to the introduction and rendezvous points while the client still stays anonymous with a 3-hop circuit.
From my understanding, establishing a connection with a hidden service usually involves two separate Tor circuits: one circuit for the visitor, and another full circuit for the hidden service.
This “Non Anonymous Mode” effectively omits the second circuit and allows relays to connect directly to the hidden service’s IP address, thus significantly improving latency and reducing strain on the Tor network?
Ah right, so what you’re saying is that hidden services don’t need exit relays at all, and as such don’t have the bottleneck issues that usually plague exit nodes.
I'm using Tor to access my local network services through hidden services. Since I don't need to hide my IP address, I'm going to follow your advice gratefully. I didn't know that was possible.
That's sort of like having backdoor access to your internal network (similar to Teredo). Others may use it to gain access to that network. If it's your home, that may be OK with you, but if it's an employer's network, you may want to obtain approval to do that and be sure all of your hidden services use keys or strong passwords for access.
This is completely incorrect. It is physically impossible to make a connection to a hidden service without the hidden service's onion address (I am talking about the current v3 onion addresses, the ones that are 56 characters long). This is because the onion address itself is the hidden service's public key.
If you keep your onion address private then nobody can connect to your hidden service or even know that it exists. Simple as that.
It's also "physically impossible" for someone to gain access to a well configured IPSec endpoint, yet we still consider this a point of access that needs appropriate controls and security oversight. There are many, many ways that people collect key material to use to access tunnels to corporate networks. No matter how confident you might be in the technology, you should never provide an access point to a private network without full consideration of the security and compliance implications.
Perhaps the bigger issue though is that Tor at least used to be frequently used by botnets for C2, I'm not in a SOC environment any more so I'm not sure how much that trend has changed. But it's very common for corporate security programs to configure IDS to report on Tor traffic since it's associated with some sort of compromise a good percentage of the time. This does mean you get occasional false positives from normal Tor use to e.g. anonymously access public materials but that's life in a SOC. The point though is that most corporate environments ought to notice this kind of thing happening whether or not it's done with the approval of IT/security.
Could you explain this a bit more? How would this be more open than port forwarding? I don't see how someone could leverage this without exploiting whatever app is hosted as the hidden service?
No, because it is possible to establish a token required for access to an onion service, on top of the obscurity of having to actually discover the service's public address.
It is also extremely likely that said adversaries control most of Tor, considering that the main mechanisms for tracing Tor circuits do not require control over any nodes of the Tor network whatsoever: snooping on IXPs and on as many autonomous systems and undersea cables as possible.
There are an almost infinite number of ways to host services from behind firewalls without port forwarding or any network admin approval. Tor is not special in this regard, nor dangerous due to supposed bad people controlling the network.
This is incorrect. A Tor hidden service is fundamentally different from port forwarding. If you don't have the hidden service's onion address (v3 address), then you physically cannot make a connection to the hidden service. This is because the onion address is the hidden service's public key.
You can scan the entire internet for open ports; you can't scan the Tor network for hidden services to connect to unless you already have their onion addresses.
When you create an onion address, does that address get leaked at any point? As in, are there nodes or servers in the Tor network that know that xxxx.onion is a valid address at the time of creation or afterwards?
With the old v2 hidden services (16-character onion addresses) it was possible to recover the onion address of any service running on the Tor network while the v2 hidden service was running.
However, that issue was only present in v2 hidden services. v2 has been deprecated in favor of the new v3 hidden service protocol (56-character onion addresses), which is not vulnerable to this issue. The new protocol embeds a full ed25519 elliptic curve public key in the onion address. The key in the onion address is used to derive what are called "blinded keys". These blinded keys are then announced to the Tor network in such a way that nobody can recover the original public key without prior knowledge of it, leaving them unable to establish a connection with the hidden service.
I have only briefly elaborated on how v3 hidden services work. If you are interested in a more in depth and technical explanation I encourage you to read:
The onion address is not unguessable; it's stored on a DHT shared by relays with the HSDir flag (which they earn after ~7 days, IIRC).
I think this changed slightly with v3 addresses, so my comment might be out of date, but I think the general premise remains the same. (EDIT: Apparently with V3 addresses, there is still a DHT, but client uses key derivation so that the HSDir only stores a daily-rotated identifier known as a "blinded public key." [0])
Although your hidden service address is not hidden, you can require that any client connecting to it present a valid authorization key (I think this is also new in V3?).
Also, it obviously depends which service you're exposing — if you are exposing an SSH server that only allows key-based authentication, then it shouldn't matter if people can simply connect to it — assuming you trust the SSHD software, and your threat model doesn't depend on avoiding detection completely.
Hidden services are very easy to configure (the basic config, at least; if you want to be as anonymous as possible you have to do more). Install tor, add a few lines to the config, done. And: you don't have to change your firewall settings at all. Nothing is exposed to the clearnet.
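For anyone curious, the "few lines" really are just a couple of torrc directives. A minimal sketch (the directory path and port numbers below are placeholders, not recommendations):

```
# map port 80 of the onion address to a local web server on 127.0.0.1:8080
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8080
```

After restarting tor, the generated onion address can be read from the hostname file inside the HiddenServiceDir.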
You can also make your service be accessible only to certain clients which have a certificate. I consider this very secure.
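Nit: for v3 onion services this is done with key-based client authorization rather than certificates. If I remember the mechanism right, the server keeps one .auth file per allowed client, and the client stores the matching private key. A sketch with the key material shown as placeholders:

```
# server side: <HiddenServiceDir>/authorized_clients/alice.auth
descriptor:x25519:<base32-encoded x25519 public key>

# client side: a file in the directory named by ClientOnionAuthDir in torrc,
# e.g. <ClientOnionAuthDir>/myservice.auth_private
<56-char onion address without .onion>:descriptor:x25519:<base32-encoded x25519 private key>
```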
I guess I can understand that from an ease-of-configuration standpoint. Having said that, I had no trouble setting up ZeroTier VPN, which is also very easy to configure.
I do the same, but you still need to be careful when running ZeroTier to listen only on the IP addresses the ZT link is assigned. I run a private mailserver and I've made sure that there are no sockets listening on any non-ZT externally routable IP address. (I guess for good measure I could have nftables drop traffic coming in on those ports on my WAN link.) But with Tor you just point it to a service listening on 127.0.0.1 or [::1] and you're in business. For me ZT is fine, but for folks who want to muck around a bit less, I can see the appeal of Tor.
Could you please post a source for this? The only thing I could find is from the man page: "However, the fact that a client is accessing a Single Onion rather than a Hidden Service may be statistically distinguishable." But I'm not sure what exactly the impact of that is.
Tor is not anonymous, just like VPNs are not anonymous when you have Five Eyes oversight of the network. It's like watching trucks navigating the road network: you can see the junctions they take and you can see where they start and end, but you can't see the contents of the truck.
The road network and the internet have an awful lot in common!
> Using onion services mitigates attacks that can be executed by possibly-malicious “Tor Exit Nodes” — which, though rare, are not nonexistent
Is there any evidence that the majority of exit nodes aren't malicious? There's only 300 or so in the US, 300 or so in Germany, and in other countries even less. What would it take for three letter agencies to compromise most of it?
I mean, suppose all of the existing nodes weren't malicious. Could a government agency plausibly run 1000 exit nodes in a way that doesn't give away they are government-run? This would make the majority of exit nodes malicious.
Effectively they set up a honeypot and used clear text passwords to log in, and plenty of exit nodes picked up on this and those credentials were later used to (attempt to) log in into the honeypot.
The article talks about the research stumbling upon exit nodes performing MITM and other sniffing but does not give the exact details. Is there a paper for this?
Well, I was referring to the research indicated by the title of the article: a honeypot set up to detect malicious exit relays.
Yes, that's the one. Interesting: it seems they caught 15 unique relays harvesting logins. There seems to be scope to improve reporting and detection of malicious actors like this. They also have a block list in Tor's GitLab repo, but it doesn't seem to be up to date.
There were slides in the Snowden leaks that laid out the NSA's strategy for dealing with Tor, and compromising exit nodes was a big part of it. They have had the last 10 years to work on it; one might expect they have results.
The behaviour of not always using the same exit means that, over time, you will almost assuredly use a malicious exit should more than zero exist. It's reckless to suggest that anybody should be using this system; your situation is almost always going to be worse than without it.
The only attacks an exit alone can perform are sniffing and modifying traffic. The Tor Project runs constant checks to detect bad exits that modify traffic, but sniffing is of course not detectable. Both of those attacks are mitigated by HTTPS, which most sites support nowadays. Firefox, and therefore the Tor Browser, also has an option to disable HTTP. [0] And using an .onion service removes this attack vector altogether.
> But both of those attacks are mitigated by https which most sites support nowadays.
Unfortunately, not as much as you might hope.
For good reasons, the Tor browser doesn't store your browsing history - so there's no 'recently visited sites', no address bar autocomplete, no cached redirects, no cached HSTS, and no colour-changed 'visited' links.
So if you're visiting a site that isn't HSTS-preloaded - for example bitcoinknots.org - you'd better remember to type in the https:// explicitly, as that's your sole protection against getting MITMed.
> So if you're visiting a site that isn't HSTS-preloaded - for example bitcoinknots.org - you'd better remember to type in the https:// explicitly, as that's your sole protection against getting MITMed.
>Tor Browser already comes with HTTPS Everywhere, NoScript, and other patches to protect your privacy and security.
Caveat: HTTPS Everywhere relies on a manual whitelist of HTTPS-enabled sites. If the website you’re visiting isn’t popular enough to be on their list, you’re out of luck.
It's mentioned below that HTTPS by default (which is an option in ordinary Firefox) is intended to become mandatory in Tor Browser.
I use this on my main PC. Once or twice a day I might visit some old or especially cantankerous site that doesn't do HTTPS, I get a full page interstitial explaining the problem, I can decide if I'm OK with that. Otherwise every single link, typed URL, etc. is HTTPS regardless of whether that was what was originally written.
I wouldn't recommend it in its current state for my mother, but it's definitely what someone using Tor would want, and it's only getting more ubiquitous.
If they're not checking everything, any sort of non-general modification of traffic will obviously go completely unnoticed. The bad exit flag really is only ever going to catch the most obvious, ham fisted bad behaviour.
You're correct that this isn't really a solution, but Tor Browser has already merged HTTPS-only mode [0], so this should become less of an issue in the near future.
Even if every exit node in the US is operated by private people or organizations, courts can compel the node owners to work with the government and not talk about it.
Courts can't compel you not to talk. They can merely punish you after-the-fact.
So if you're talking about "everyone in a giant group of people" and doing it routinely, existence of those secret subpoenas seem like they'd get leaked eventually. Especially if it's hard to tell which of the 300 people leaked it.
Can you just shut down your nodes, or can they force you to continue? Best practice for relay operators is to stop operating altogether if the authorities force you to attack the users.
They could probably compel you to continue, or forcibly take over the node. Once you're in NSL "we can do anything we want and you can't tell anyone about it" land, being prevented from shutting down your own business or service isn't terribly far-fetched.
This seems obvious but it is a constitutional right that has been cited as a reason to not comply with extra-judicial pressure to assist the government with an investigation.
This is why some projects do not accept donations and have a canary.
Had the authors of TrueCrypt been paid, they could have been compelled to modify their source code to the government's will.
By not accepting payment, they are protecting themselves.
> Aren't there warrant canaries set up to prevent this?
No, because the police will tell you to not tell anyone about the court order. If you do so (for example using a warrant canary), you will be in big trouble. Those canaries were always a convenient fiction, almost to the point of it being entirely in question whether or not this fiction was created in good faith.
You can always be put in trouble by the police for any reason or no reason at all. The question is one of law. The notion of a warrant canary is that the police cannot compel you to state that you are not under a court order. They can pressure you to lie, and they can always break the law, but they cannot legally force you to. To claim that, regardless of this, the police can compel you to lie is tantamount to saying that the rule of law has failed.
> To claim that regardless of this, the police can compel you to lie, is tantamount to saying that the rule of law has failed.
Your life quality will take a sharp negative dive if you don't conform to the spirit of what they ask you. Whether or not such things are legal really is immaterial: You will be in trouble anyway. As such, I dislike advice that leans on what the law says.
The reason to use the tor<->plaintext bridge is to route around censorship, eg the great firewall of china, or various western ISPs blackholing the pirate bay, and also to prevent the server from learning much about the client’s identity
If you don’t want any MITM possibilities, that’s what the onion services are for (both client and server are speaking over tor connections)
Because it switches over to the clearnet there, the operator could do stuff like intercept non-https traffic or use a malicious DNS to attempt to MITM https traffic.
It is possible to advertise your .onion address and offer automatic redirect to it for Tor Browser users using the "Onion-Location" HTTP header. Example with my personal home page:
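(For reference, on the server side this is a single response header. A sketch assuming nginx, with the onion address left as a placeholder:)

```
# nginx: advertise the onion mirror to Tor Browser users
add_header Onion-Location http://<your-onion-address>.onion$request_uri;
```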
It would be interesting to see if the Tor Browser has a TOFU policy and warns its user if the onion address changes after they visited the site once.
If that is the case, then you combine the ease of access of typing a normal domain name with onion security, through an HSTS-equivalent mechanism.
Is there some sort of attack possible here where you could hand out unique onion addresses to each visitor, so when they connect with Tor you could fingerprint their Tor connection and match it to their cleartext connection? *takes off his black hat*
I think the avoiding exit nodes part is probably the most important to me. Exit nodes have always been problematic - from memory about 20% of relays have an exit flag but most of the traffic is directed to the most performant relays. Tor actively discourages using the network for file sharing because of the exit node bottleneck.
I think there are probably some uses of the Tor network that aren't fully realised yet - file sharing (something similar to I2P) which avoids the exit node using onion addressing and chat applications (like Briar which uses onion addresses, or Secure Scuttlebutt).
As for web traffic, it is nice to offer an onion address. I wonder if websites could offer an "upgrade" to onion addresses, similar to how IPFS does?
I think some comments here are misunderstanding the intent of the
article. For those saying TLS already solves... it is not advocating
Tor as a replacement for transport layer security, indeed most Tor
users also use TLS (and site certs) with little overhead.
No, the article is asking how you could, as a website owner, make
things easier on Tor users and yourself! It starts with the assumption
that you care, and want to help users who require better privacy.
It answers, though not in detail, the many HN readers who invariably
post replies concerning Tor that "All my abuse comes through Tor".
Creating an .onion address mitigates that significantly.
I'm not clear from the article how having an onion address helps website operators who receive abusive traffic through Tor. Perhaps some of that abusive traffic will come in via the onion address instead, but presumably such an operator will want to continue serving their regular site to Tor exit nodes as well, so I don't see how it would actually mitigate anything, nor make the malicious traffic easier to segregate from valid traffic over Tor. What am I missing?
> I'm not clear from the article how having an onion address helps
website operators who receive abusive traffic through Tor.
No, it's not clear. Also "abusive traffic" is vague. Are you mainly
concerned with shitposters, trolls, DOS attacks?
> What am I missing?
Maybe you're not missing it, but essentially it's a behavioural/social
rather than technical challenge. Most abusers, ones that technical
changes can address, operate at scale over HTTP/S and use Tor simply
as a free VPN via regular exit nodes to hide their IP. The author
calls this the "Wheat/chaff problem". Viewed this way, it's easiest
for a site owner to just block all of Tor and kill all legitimate users
too.
Most of those bulk abusers cannot be bothered to deal with marginal
cases like using an overlay network with .onion addresses whereas
those who _need_ Tor are highly motivated.
Other kinds of abusers, like persistent troll posters, are better
dealt with by other means even if you're using HTTP/S.
Back when I was staff on (pre-madness) freenode providing an onion address was pretty much the only way we could afford to support tor at all given the moderation resources available.
Smaller networks often (usually regretfully) end up blocking tor entirely if they don't have the capacity to set up such infrastructure.
That's how we did it and so far as I'm aware is the most common answer to "How do we keep Tor while keeping the abuse mitigation efforts required within the resources available?" for services in general.
> The first benefits are authenticity and availability: if you are running Tor Browser and if you click/type in exactly the proper Onion address, you are guaranteed to be connected to what you expect — or not at all.
What? Writing raw onion addresses is like writing raw IPv6 addresses. Nobody can remember them and check them.
> you are guaranteed to be connected to what you expect — or not at all.
Exactly the same guarantees are also achieved by putting your clearnet address on HSTS preload lists, or by writing https:// in front of the URL on the user's side.
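(For reference, getting onto the preload list requires serving an HSTS header along these lines over HTTPS; hstspreload.org requires a max-age of at least one year plus the includeSubDomains and preload directives:)

```
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
```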
But then you are relying on the CA system, which is a huge risk. A significant benefit of onion addresses is that the key is distributed with the address. So as long as you get the address over a secure channel, you are safe.
With HTTPS you need to get the address over a secure channel and hope that no CAs are compromised. The secure channel might be easier (because you can quickly memorize twitter.com), but to avoid the second you need some complicated and not officially supported certificate pinning.
It’s been mandatory since 2018. Browsers will reject certificates which have not been publicly logged.
Perhaps next you’ll wonder if it’s as simple as compromising a CA and a CT log? Nope, as browsers require cryptographic attestations from multiple CT logs. If you’re using Chrome, one of those logs has to be the one operated by Google.
Yeah, so in the case of Tor, people use DDG, which is the default. And DDG, handling SEO spam worse than Google, often returns the wrong onion address. (Which has happened to me several times.)
And you cannot really check if it's the correct one.
At least on regular net, you have a chance to spot nytime5 is fake.
It's very easy to think that things we do ourselves are universal because they seem so intuitive and natural for us. I for one type addresses from scratch all the time.
I think the only legit reason (assuming your clearnet site is using HSTS) is that a .onion site reduces the risk of users screwing up. And I suppose better performance if you don't have to use exit bandwidth (I would guess; I don't actually know).
Users are bad at security. If they fail to set up tor, .onion links don't work, so it acts as a barrier against users shooting themselves in the foot.
> This is counterbalanced by higher phishing risks
I would argue that this is the much bigger footgun for users. Just look at how much money darknet users are losing to the big industry of .onion phishing pages.
It's a fair argument. I think incorrect Tor setup is a bigger risk for things like SecureDrop leaker scenarios, where it is likely the first and only time the user will use Tor.
I agree about securedrop, but the blog post seems to discuss “platforms such as Facebook, the BBC or NYT”.
Also in the case of securedrop it might make sense to have that separate from the rest of your infrastructure, so the “hidden” part of “hidden services” suddenly becomes useful.
> https://hstspreload.org/ offers the same benefits. You are guaranteed to be connected to what you expect - or not at all.
TLS/HSTS is still subject to CA attacks, e.g. diginotar.
CA/X.509 is a complex stack too.
> TLS mitigates attacks that can be executed by malicious exit nodes (or WiFi networks, or ISPs), that is the whole purpose of TLS.
A malicious exit node could refuse to serve some websites. This seems a minor risk though.
Reducing load on exit nodes is a technical benefit that's in that blog post.
Another benefit to using Tor onion services for large sites is that the Tor circuit ID can be used as an additional key in an IP rate limit cache. This helps block Tor bots (on the basis that establishing a Tor circuit is expensive).
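If I remember right, tor can hand this to the backend via the HAProxy PROXY protocol, encoding a per-circuit identifier into a synthetic source address. A sketch (option name per the tor manual; paths and ports are placeholders, and the backend must be configured to accept PROXY protocol):

```
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8080
# expose a per-circuit identifier on connections handed to the backend
HiddenServiceExportCircuitID haproxy
```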
>TLS/HSTS is still subject to CA attacks, e.g. diginotar.
Largely solved by Certificate Transparency. If you compromise a CA, you can issue certificates. However, you can't issue new certificates without broadcasting that fact to the whole world as browsers will not accept certificates without SCTs.
>Reducing load on exit nodes is a technical benefit that's in that blog post.
This hasn't been a real benefit for years. Exit nodes are running at something like 10% capacity.
>Another benefit to using Tor onion services for large sites is that the Tor circuit ID can be used as an additional key in an IP rate limit cache. This helps block Tor bots (on the basis that establishing a Tor circuit is expensive).
This is just another problem with hidden services. Opening circuits costs malicious clients far less cpu time than it costs the server.
Most of the technical points listed here are pretty much entirely mitigated by TLS. Exit nodes can of course deny access to specific sites, but hidden services suffer from comparable (or worse) issues.
There are no other practical attacks that malicious exit nodes could execute against sites using TLS and HSTS preload lists. If you’re a website administrator, fixing those things should be your priority before implementing onion addresses.
Onion addresses also come with slight drawbacks. They’re difficult for users and more vulnerable to phishing. Hidden services are also extremely vulnerable to CPU-based DoS attacks.
It's much harder to deanonymize people who are connecting to hidden services because they don't have to use exit relays which are often illegal to run.
[0]: Search for HiddenServiceSingleHopMode on https://2019.www.torproject.org/docs/tor-manual.html.en or just use the following config options
SOCKSPort 0
HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1