Why offer an Onion Address rather than just encourage browsing-over-Tor? (alecmuffett.com)
250 points by kettunen on March 10, 2022 | 130 comments


If you are running an onion service but don't need to hide the server IP, as when you also provide clearnet access to the same server, you should enable single hop mode [0] to reduce the load on the Tor network and also speed up connections. This way your server connects directly to the introduction and rendezvous points while the client still stays anonymous behind a 3-hop circuit.

[0]: Search for HiddenServiceSingleHopMode on https://2019.www.torproject.org/docs/tor-manual.html.en or just use the following config options:

    SOCKSPort 0
    HiddenServiceNonAnonymousMode 1
    HiddenServiceSingleHopMode 1
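Putting those together, a complete single-onion torrc might look like this (directory and ports are illustrative, not prescribed by the manual):

    # Non-anonymous "single onion" service: the server's location is not
    # hidden, but clients still connect over their own 3-hop circuits.
    SOCKSPort 0
    HiddenServiceNonAnonymousMode 1
    HiddenServiceSingleHopMode 1
    HiddenServiceDir /var/lib/tor/my_onion_service/
    # Expose port 80 of the onion address, forwarded to a local web server.
    HiddenServicePort 80 127.0.0.1:8080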


If I understand correctly, establishing a connection with a hidden service usually involves two separate Tor circuits: one circuit for the visitor, and another full circuit for the hidden service.

So this “Non Anonymous Mode” effectively omits the second circuit and allows relays to connect directly to the hidden service's IP address, significantly improving latency and reducing the strain on the Tor network?


That is correct, but note that not all Tor nodes are exit nodes, so latency will be increased but throughput may even be better.


Ah right, so what you're saying is that hidden services don't need exit relays at all, and as such do not have the bottleneck issues that usually plague exit nodes.


Yes that's correct.


I'm using Tor to access my local network services through hidden services. Since I don't need to hide my IP address, I'm going to follow your advice gratefully. I didn't know that was possible.


That's sort of like having backdoor access to your internal network (similar to Teredo). Others may use it to gain access to that network. If it's your home, that may be OK with you, but if it is an employer's, you may want to obtain approval to do that and be sure all of your hidden services use keys or strong passwords for access.


This is completely incorrect. It is physically impossible to make a connection to a hidden service without the hidden service's onion address (I am talking about the current v3 onion addresses, the ones that are 56 characters long). This is because the onion address itself is the hidden service's public key.

If you keep your onion address private then nobody can connect to your hidden service or even know that it exists. Simple as that.


It's also "physically impossible" for someone to gain access to a well configured IPSec endpoint, yet we still consider this a point of access that needs appropriate controls and security oversight. There are many, many ways that people collect key material to use to access tunnels to corporate networks. No matter how confident you might be in the technology, you should never provide an access point to a private network without full consideration of the security and compliance implications.

Perhaps the bigger issue though is that Tor at least used to be frequently used by botnets for C2, I'm not in a SOC environment any more so I'm not sure how much that trend has changed. But it's very common for corporate security programs to configure IDS to report on Tor traffic since it's associated with some sort of compromise a good percentage of the time. This does mean you get occasional false positives from normal Tor use to e.g. anonymously access public materials but that's life in a SOC. The point though is that most corporate environments ought to notice this kind of thing happening whether or not it's done with the approval of IT/security.


Security through obscurity isn't bad, but it's certainly not infallible.


Could you explain this a bit more? How would this be more open than port forwarding? I don't see how someone could leverage this without exploiting whatever app is hosted as the hidden service?


It's a tunnel into your internal network.

If nation states and/or cyber criminals do control most of tor, then you are opening your internal network to those groups.


No, because it is possible to establish a token required for access to an onion service, on top of the obscurity of having to actually discover the service's public address.

It is also extremely likely that said adversaries can watch most of Tor, considering that the main mechanisms of tracing Tor circuits do not require control over any nodes of the Tor network whatsoever: snooping on IXPs and on as many autonomous systems and undersea cables as possible.


So is port forwarding or running any other service; Tor is special in absolutely no meaningful respect here.


The OP said "employer," and most people can't port forward at the office without talking to IT.


There are an almost infinite number of ways to host services from behind firewalls without port forwarding or any network admin approval. Tor is not special in this regard, nor dangerous due to supposed bad people controlling the network.


Yes, it's exactly like port forwarding.


This is incorrect. A Tor hidden service is fundamentally different from port forwarding. If you don't have the hidden service's onion address (v3 address) then you physically cannot make a connection to the hidden service. This is because the onion address is the hidden service's public key.

You can scan the entire internet for open ports; you can't scan the Tor network for hidden services to connect to unless you already have their onion addresses.


When you create an onion address, does that address get leaked at any point? As in, are there nodes or servers in the Tor network that know that xxxx.onion is a valid address at the time of creation or afterwards?


With the old v2 hidden services (16-character onion addresses) it was possible to recover the onion address of any service on the Tor network while that service was running.

However, that issue was only present in v2 hidden services. v2 has been deprecated in favor of the new v3 hidden service protocol (56-character onion addresses), which is not vulnerable to this issue. The new protocol contains a full ed25519 elliptic-curve public key in the onion address. The key in the onion address is used to derive what are called "blinded keys". These blinded keys are then announced to the Tor network in such a way that nobody can recover the original public key without prior knowledge of it, leaving them unable to establish a connection with the hidden service.

I have only briefly elaborated on how v3 hidden services work. If you are interested in a more in depth and technical explanation I encourage you to read:

[0] - https://gitweb.torproject.org/torspec.git/tree/rend-spec-v3....
[1] - https://gitlab.torproject.org/legacy/trac/-/wikis/doc/NextGe...
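To make the "address is the key" point concrete, here is a minimal Python sketch of the v3 address derivation described in the spec above (function name is mine):

    import base64
    import hashlib

    # onion_address = base32(PUBKEY | CHECKSUM | VERSION), per rend-spec-v3,
    # where CHECKSUM = SHA3-256(".onion checksum" | PUBKEY | VERSION)[:2]
    def v3_onion_address(pubkey: bytes) -> str:
        version = b"\x03"
        checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
        return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

The whole 32-byte ed25519 public key is right there in the 56-character address, which is why knowing the address is a precondition for connecting at all.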


You can set up a token that is required to actually make the connection[0].

[0] - http://xmrhfasfg5suueegrnc4gsgyi2tyclcy5oz7f5drnrodmdtob6t2i...


Are you sure? Doesn't an attacker need knowledge of the onion address, which is almost unguessable?

But with client authentication that wouldn't be a problem anyway, because only chosen clients get access.


The onion address is not unguessable: it's stored in a DHT shared by relays with the HSDir flag (which they earn after ~7 days, IIRC).

I think this changed slightly with v3 addresses, so my comment might be out of date, but I think the general premise remains the same. (EDIT: Apparently with v3 addresses there is still a DHT, but the client uses key derivation so that the HSDir only stores a daily-rotated identifier known as a "blinded public key." [0])

Although your hidden service address is not hidden, you can require that any client connecting to it present a valid authorization key (I think this is also new in V3?).

Also, it obviously depends which service you're exposing — if you are exposing an SSH server that only allows key-based authentication, then it shouldn't matter if people can simply connect to it — assuming you trust the SSHD software, and your threat model doesn't depend on avoiding detection completely.

[0] https://blog.torproject.org/v3-onion-services-usage/


Any reason you don't use some kind of VPN solution for that instead?


Hidden services are very easy to configure (the basic config, that is; if you want to be as anonymous as possible you have to do more). Install Tor, add a few lines to the config, done. And you don't have to change your firewall settings at all. Nothing is exposed to the clearnet.

You can also make your service accessible only to certain clients which have a certificate. I consider this very secure.


Only recently has there been an easy-to-set-up and secure alternative with the same properties: Tailscale.

It is centralized, yes, but it is way, way faster if you care about latency

https://tailscale.com/

(you can also self-host it with the open source “headscale” project)


+1 for tailscale, it is an absolute joy to use.


> You can also make your service accessible only to certain clients which have a certificate. I consider this very secure.

Are you talking about this? https://community.torproject.org/onion-services/advanced/cli...


Yes, it's called client authorization.
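For anyone curious, the setup from that page boils down to two small files (filenames illustrative):

    # Server: <HiddenServiceDir>/authorized_clients/alice.auth
    descriptor:x25519:<base32-encoded-x25519-public-key>

    # Client: <ClientOnionAuthDir>/myservice.auth_private
    <56-char-onion-address-without-.onion>:descriptor:x25519:<base32-encoded-x25519-private-key>

Without a matching key pair, the client cannot even decrypt the service descriptor, let alone connect.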


Thanks for mentioning it, I would have overlooked that feature entirely, otherwise.


I guess I can understand that from an ease-of-configuration standpoint. Having said that, I had no trouble setting up ZeroTier VPN, which is also very easy to configure.


I do the same, but you still need to be careful when running ZeroTier to listen only on the IP addresses assigned to the ZT link. I run a private mailserver and I've made sure that there are no sockets listening on any non-ZT externally routable IP address. (I guess for good measure I could have nftables drop traffic coming in on those ports on my WAN link.) But with Tor you just point it at a service listening on 127.0.0.1 or [::1] and you're in business. For me ZT is fine, but for folks who want to muck around a bit less, I can see the appeal of Tor.


Not only is it easier to configure, it also provides better security. The onion address works as a server certificate.

1) You don't have to pay or trust a VPN provider

2) It works on dynamic IP addresses and without relying on DNS

3) It exposes only one TCP service


You are not just reducing load; you are also reducing anonymity for other participants. The extra hops make it harder to analyze the data overall.


Could you please post a source for this? The only thing I could find is from the man page: "However, the fact that a client is accessing a Single Onion rather than a Hidden Service may be statistically distinguishable." But I'm not sure what exactly the impact of that is.


Citation needed!

Any timing correlation attack carried on against entry and exit nodes is independent from the number of hops.


Tor is not anonymous, just like VPNs are not anonymous, when you have 5-eyes oversight of the network. It's like watching trucks navigating the road network: you can see the junctions they take and you can see where they start and end, but you can't see the contents of the truck.

The road network and the internet have an awful lot in common!


If I use Onionshare, where do I set this?


> Using onion services mitigates attacks that can be executed by possibly-malicious “Tor Exit Nodes” — which, though rare, are not nonexistent

Is there any evidence that the majority of exit nodes aren't malicious? There are only 300 or so in the US, 300 or so in Germany, and even fewer in other countries. What would it take for three-letter agencies to compromise most of them?

I mean, suppose all of the existing nodes weren't malicious. Could a government agency plausibly run 1000 exit nodes in a way that doesn't give away they are government-run? This would make the majority of exit nodes malicious.


It’s not even about three-letter agencies; many exit nodes scan traffic for passwords, if you happen to go through HTTP instead of HTTPS.

Here’s research conducted years ago about this matter: https://www.vice.com/en/article/mgbdwv/badonion-honeypot-mal...

Effectively they set up a honeypot and used clear text passwords to log in, and plenty of exit nodes picked up on this and those credentials were later used to (attempt to) log in into the honeypot.


I will also point out that even if they are only observing ciphertext, they can still glean a lot:

http://web.cs.wpi.edu/~claypool/papers/yt-crawler/final.pdf


The article talks about the research stumbling upon exit nodes performing MITM and other sniffing but does not give the exact details. Is there a paper for this?

I only found this paper going over a systematic process of exposing bad relays: http://www.cs.kau.se/philwint/spoiled_onions/pets2014.pdf


It’s not related to that particular article, but you might find this interesting, they write about a lot of similar research: https://nusenu.medium.com/tracking-one-year-of-malicious-tor...


What research are you talking about? The article talks about at least two different researchers working on separate projects.

Here's the link for the first one: https://web.archive.org/web/20150705184539/https://chloe.re/...


Well, I was referring to the research indicated by the title of the article: the honeypot setup to detect malicious exit relays.

Yes, that's the one. Interestingly, it seems they caught 15 unique relays harvesting logins. There seems to be scope to improve reporting and detection of malicious actors like this. They also have a block list on Tor's GitLab repo, but it doesn't seem to be up to date.


There were slides in the Snowden leaks that laid out the NSA's strategy for dealing with Tor, and compromising exit nodes was a big part of it. They have had the last 10 years to work on it; one might expect they have results.


The behaviour of not always using the same exit means that you will, over time, almost assuredly use a malicious exit should more than zero exist. It's reckless to suggest that anybody should be using this system; your situation is almost always going to be worse than without it.


The only attacks an exit alone can carry out are sniffing and modifying traffic. The Tor Project runs constant checks to detect bad exits that modify traffic, but sniffing is of course not detectable. Both of these attacks are mitigated by HTTPS, which most sites support nowadays. Firefox, and therefore the Tor Browser, also has an option to disable HTTP. [0] And using an .onion service removes this attack vector as well.

[0]: https://support.mozilla.org/en-US/kb/https-only-prefs
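(For reference, the pref behind that option is dom.security.https_only_mode; a one-line user.js sketch, if you manage prefs that way:)

    // Firefox: refuse plain-HTTP connections (see the support article above)
    user_pref("dom.security.https_only_mode", true);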


> But both of those attacks are mitigated by https which most sites support nowadays.

Unfortunately, not as much as you might hope.

For good reasons, the Tor browser doesn't store your browsing history - so there's no 'recently visited sites', no address bar autocomplete, no cached redirects, no cached HSTS, and no colour-changed 'visited' links.

So if you're visiting a site that isn't HSTS-preloaded - for example bitcoinknots.org - you'd better remember to type in the https:// explicitly, as that's your sole protection against getting MITMed.


> So if you're visiting a site that isn't HSTS-preloaded - for example bitcoinknots.org - you'd better remember to type in the https:// explicitly, as that's your sole protection against getting MITMed.

>Tor Browser already comes with HTTPS Everywhere, NoScript, and other patches to protect your privacy and security.

https://www.torproject.org/download/


Caveat: HTTPS Everywhere relies on a manual whitelist of HTTPS-enabled sites. If the website you’re visiting isn’t popular enough to be on their list, you’re out of luck.


HTTPS Everywhere doesn't have a working rule for bitcoinknots.org.

And more broadly, neither HSTS preloading nor HTTPS Everywhere includes a list of every single site on the internet.


It's mentioned below that HTTPS-by-default (which is an option in ordinary Firefox) is intended to become mandatory in Tor Browser.

I use this on my main PC. Once or twice a day I might visit some old or especially cantankerous site that doesn't do HTTPS, I get a full page interstitial explaining the problem, I can decide if I'm OK with that. Otherwise every single link, typed URL, etc. is HTTPS regardless of whether that was what was originally written.

I wouldn't recommend it in its current state for my mother, but it's definitely what someone using Tor would want, and it's only getting more ubiquitous.


I’m surprised to hear it’s not just using the native HTTPS-only mode now.


"... you'd better remember to type in the https:// explicitly ..."

You should use a slug[1] if you need assurance that you're staying on your VPN/protocol/exit.

It would be very simple to create a "Tor only" network slug.

[1] https://john.kozubik.com/pub/NetworkSlug/tip.html


If they're not checking everything, any sort of non-general modification of traffic will obviously go completely unnoticed. The BadExit flag really is only ever going to catch the most obvious, ham-fisted bad behaviour.


You're correct that this isn't really a solution, but Tor Browser has already merged HTTPS-only mode [0], so this should become less of an issue in the near future.

[0]: https://gitlab.torproject.org/tpo/applications/tor-browser/-...


Even if every exit node in the US is operated by private people or organizations, courts can compel the node owners to work with the government and not talk about it.


Courts can't compel you not to talk. They can merely punish you after the fact.

So if you're talking about everyone in a giant group of people, and doing it routinely, the existence of those secret subpoenas seems like it would leak eventually. Especially if it's hard to tell which of the 300 people leaked it.


Any of these TLAs would love figuring out who leaked it, and it usually isn't hard.

And knowing this, the jail time or personal life destruction that would almost inevitably occur isn’t worth it for almost anyone.


Can you just shut down your nodes, or can they force you to continue? Best practice for relay operators is to stop operating altogether if the authorities force you to attack the users.


They could probably compel you to continue, or forcibly take over the node. Once you're in NSL "we can do anything we want and you can't tell anyone about it" land, being prevented from shutting down your own business or service isn't terribly far-fetched.


The government cannot compel anyone to work without pay.

This seems obvious, but it is a constitutional right that has been cited as a reason not to comply with extra-judicial pressure to assist the government with an investigation.

This is why some projects do not accept donations and have a canary.

Had the authors of Truecrypt been paid, they could have been compelled to modify their source code to the government's will.

By not accepting payment, they are protecting themselves.


https://www.cfr.org/in-brief/what-defense-production-act grants the US government significant control over the US economy and businesses. As it turns out, we're still at war :-)


> The government cannot compel anyone to work without pay

But they can force people to work with pay: https://www.wbay.com/2022/01/20/thedacare-seeks-court-order-...


> it is a constitutional right

Ahem... "terrorism!"

Poof! Now your Constitutional rights no longer exist.


I know this is possible for organizations in the US, but for private persons too?


Aren't there warrant canaries set up to prevent this? Every website that can be compelled to behave that way should have one.


How could a warrant canary work when you don't know which exit node you will connect to?


> Aren't there warrant canaries set up to prevent this?

No, because the police will tell you not to tell anyone about the court order. If you do so (for example via a warrant canary), you will be in big trouble. Those canaries were always a convenient fiction, almost to the point where it's questionable whether the fiction was created in good faith at all.


You can always be in trouble with the police for any reason or no reason at all. The question is one of law. The notion of a warrant canary is that the police cannot compel you to state that you are not under a court order. They can pressure you to lie, and they can always break the law, but they cannot legally force you to. To claim that, regardless of this, the police can compel you to lie is tantamount to saying that the rule of law has failed.

( https://www.eff.org/deeplinks/2014/04/warrant-canary-faq )


> To claim that regardless of this, the police can compel you to lie, is tantamount to saying that the rule of law has failed.

Your life quality will take a sharp negative dive if you don't conform to the spirit of what they ask you. Whether or not such things are legal really is immaterial: You will be in trouble anyway. As such, I dislike advice that leans on what the law says.


Has this ever been tested in court?


Not to my knowledge.

Which can mean "no company has tried" or "no TLA has tried".


They cannot compel someone to re-authorize a dead man's switch canary.

If the canary doesn't receive a signed message within X days, the canary sings.

Nobody can force someone to do work or self-incriminate.


Can warrant canary holders who follow police/court orders be sued for false advertising?

Seems like a catch-22; it's a lose-lose.


If being sued for false advertising was a real risk, advertising would be a lot more truthful :)


What does compromising the exit do? I thought the layering means you would need to compromise the entire path to do anything.


Exit node is where the tor-encrypted path ends and traffic goes to the clearnet.


What’s the point of any of it then to a paranoid user?


The reason to use the Tor<->plaintext bridge is to route around censorship, e.g. the Great Firewall of China or various western ISPs blackholing The Pirate Bay, and also to prevent the server from learning much about the client's identity.

If you don’t want any MITM possibilities, that’s what the onion services are for (both client and server are speaking over tor connections)


Got me; it does look like a decently effective honeypot, though, for ‘paranoid but hasn’t thought it all the way through’.


The exit node has no knowledge of your IP address, and your traffic should still be protected by HTTPS.


Because it switches over to the clearnet there, the operator could do stuff like intercept non-https traffic or use a malicious DNS to attempt to MITM https traffic.


Or just use HSTS. Problem solved.


It is possible to advertise your .onion address and offer an automatic redirect to it for Tor Browser users using the "Onion-Location" HTTP header. Example with my personal home page:

    $ curl -I https://pablo.rauzy.name/
    HTTP/1.1 200 OK
    Server: nginx/1.14.2
    Date: Thu, 10 Mar 2022 14:04:44 GMT
    Content-Type: text/html; charset=utf-8
    Content-Length: 2843
    Last-Modified: Sun, 23 Jan 2022 22:21:41 GMT
    Connection: keep-alive
    Onion-Location: http://c2fk5i7jqn7am7nfo7eb7hwrkclyj3jj4qcwgdh6ievp7v5ie4gd3mid.onion/

It would be interesting to see whether the Tor Browser has a TOFU policy and warns its users if the onion address changes after they have visited the site once.

If so, you combine the ease of access of typing a normal domain name with the security of the onion address, through an HSTS-equivalent mechanism.
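For nginx specifically (as in the response above), serving that header is a one-liner; a sketch using my onion address:

    # Advertise the onion mirror to Tor Browser users on every response
    add_header Onion-Location http://c2fk5i7jqn7am7nfo7eb7hwrkclyj3jj4qcwgdh6ievp7v5ie4gd3mid.onion$request_uri;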


Is there some sort of attack possible here where you could hand out unique onion addresses to each visitor, so when they connect with Tor you could fingerprint their Tor connection and match it to their cleartext connection? *takes off his black hat*


No, since the redirect only works in the Tor Browser, in which case the cleartext connection is still a Tor connection.


Doing so you would only identify Tor exit nodes.


I think the avoiding-exit-nodes part is probably the most important to me. Exit nodes have always been problematic: from memory, about 20% of relays have an exit flag, but most of the traffic is directed to the most performant relays. Tor actively discourages using the network for file sharing because of the exit node bottleneck.

I think there are probably some uses of the Tor network that aren't fully realised yet - file sharing (something similar to I2P) which avoids the exit node using onion addressing and chat applications (like Briar which uses onion addresses, or Secure Scuttlebutt).

As for web traffic, it is nice to offer an onion address. I wonder if websites could offer an "upgrade" to onion addresses, similar to how IPFS does?


Yes, there is the Onion-Location HTTP header to upgrade from clearnet to .onion [0].

[0]: https://community.torproject.org/onion-services/advanced/oni...


The Tor network has 1Tbps+ of real exit capacity available, real usage is a small fraction of that.

Exit capacity as a significant bottleneck has not been a realistic issue for many years.


I think some comments here are misunderstanding the intent of the article. For those saying TLS already solves this: the article is not advocating Tor as a replacement for transport-layer security; indeed most Tor users also use TLS (and site certs) with little overhead.

No, the article is asking how you could, as a website owner, make things easier on Tor users and yourself! It starts with the assumption that you care, and want to help users who require better privacy.

It answers, though not in detail, the many HN readers who invariably post replies concerning Tor that "All my abuse comes through Tor".

Creating an .onion address mitigates that significantly.


I'm not clear from the article how having an onion address helps website operators who receive abusive traffic through Tor. Perhaps some of that abusive traffic will come in via the onion address instead, but presumably such an operator will want to continue serving their regular site to Tor exit nodes as well, so I don't see how it would actually mitigate anything, nor make the malicious traffic easier to segregate from valid traffic over Tor. What am I missing?


> I'm not clear from the article how having an onion address helps website operators who receive abusive traffic through Tor.

No, it's not clear. Also "abusive traffic" is vague. Are you mainly concerned with shitposters, trolls, DoS attacks?

> What am I missing?

Maybe you're not missing it, but essentially it's a behavioural/social rather than technical challenge. Most abusers, ones that technical changes can address, operate at scale over HTTP/S and use Tor simply as a free VPN via regular exit nodes to hide their IP. The author calls this the "Wheat/chaff problem". Viewed this way, it's easiest for a site owner to just block all of Tor and kill all legitimate users too.

Most of those bulk abusers cannot be bothered to deal with marginal cases like using an overlay network with .onion addresses whereas those who _need_ Tor are highly motivated.

Other kinds of abusers, like persistent troll posters, are better dealt with by other means even if you're using HTTP/S.


Back when I was staff on (pre-madness) freenode providing an onion address was pretty much the only way we could afford to support tor at all given the moderation resources available.

Smaller networks often (usually regretfully) end up blocking tor entirely if they don't have the capacity to set up such infrastructure.


So you'd offer an onion address, but then block Tor traffic that didn't use it?


That's how we did it and so far as I'm aware is the most common answer to "How do we keep Tor while keeping the abuse mitigation efforts required within the resources available?" for services in general.


This article beats around the bush but never explains why Onion addresses solve these issues.

From Wikipedia:

> Addresses in the onion TLD are […] automatically generated based on a public key when an onion service is configured.

> 256-bit ed25519 public key along with a version number and a checksum of the key and version number

That's all you need to know.


> The first benefits are authenticity and availability: if you are running Tor Browser and if you click/type in exactly the proper Onion address, you are guaranteed to be connected to what you expect — or not at all.

What? Writing raw onion addresses is like writing raw IPv6 addresses. Nobody can remember them, let alone check them.

What is easier

> https://nytimes.com

or

> ej3kv4ebuugcmuwxctx5ic7zxh73rnxt42soi3tdneu2c2em55thufqd.onion


You can use the onion location header[0] to redirect the user, as mentioned in another comment thread.

0: https://community.torproject.org/onion-services/advanced/oni...


That has all the problems listed. The header could be modified or the response blocked by anyone who could modify or block the plain HTTP(S) response.


> you are guaranteed to be connected to what you expect — or not at all.

Exactly the same guarantees are achieved by putting your clearnet address on the HSTS preload list, or by writing https:// in front of the URL on the user's side.
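(Preload eligibility requires serving the standard HSTS header first, something like:)

    Strict-Transport-Security: max-age=63072000; includeSubDomains; preload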


But then you are relying on the CA system, which is a huge risk. A significant benefit of onion addresses is that the key is distributed with the address, so as long as you get the address over a secure channel you are safe.

With HTTPS you need to get the address over a secure channel and hope that no CAs are compromised. The secure channel might be easier (because you can quickly memorize twitter.com), but to avoid the second risk you need some complicated and not officially supported certificate pinning.


Thanks to certificate transparency the CA system is really not a huge risk.


Are all the CAs in your browser's trust list (and in those of your site's users) doing proper public logging now?


It’s been mandatory since 2018. Browsers will reject certificates which have not been publicly logged.

Perhaps next you’ll wonder if it’s as simple as compromising a CA and a CT log? Nope, as browsers require cryptographic attestations from multiple CT logs. If you’re using Chrome, one of those logs has to be the one operated by Google.

Also such collusion will soon be defeated by SCT auditing https://www.hardenize.com/blog/certificate-transparency-sct-...

https://docs.google.com/document/d/16G-Q7iN3kB46GSW5b-sfH5MO...


Neither. Either can be mistyped. Nobody enters addresses directly anymore. Either you google them or you get them from bookmarks.


Yeah, so in the case of Tor, people use DDG, which is the default. And DDG, being worse than Google at handling SEO spam, often returns the wrong onion address. (This has happened to me several times.)

And you cannot really check whether it's the correct one.

At least on the regular net, you have a chance to spot that nytime5 is fake.


It's very easy to think that things we do ourselves are universal because they seem so intuitive and natural for us. I for one type addresses from scratch all the time.


A mistyped onion address is almost certainly invalid.

It is not possible to squat onion domains for typo errors like you can with clearnet addresses.

Similar to Bitcoin addresses, one swapped character breaks the checksum: the 2-byte checksum catches all but about 1 in 65,536 corruptions, and even a corruption that slips past it yields a key that almost certainly belongs to no running service.
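A quick Python sketch of that check, mirroring the derivation in the v3 spec (function name is mine):

    import base64
    import hashlib

    # Accept only addresses whose 2-byte checksum matches, per rend-spec-v3.
    def is_plausible_v3_onion(addr: str) -> bool:
        try:
            raw = base64.b32decode(addr.removesuffix(".onion").upper())
        except Exception:
            return False
        if len(raw) != 35 or raw[34:] != b"\x03":
            return False
        expected = hashlib.sha3_256(b".onion checksum" + raw[:32] + b"\x03").digest()[:2]
        return raw[32:34] == expected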


> “.onion” address demands that the person is using a TorBrowser

Actually this is not true. Tor runs as a SOCKS5 proxy, and you can use any browser or application with it.


I think the only legitimate reason (assuming your clearnet site is using HSTS) is that a .onion site reduces the risk of users screwing up. And, I suppose, better performance if you don't have to use exit bandwidth (I would guess; I don't actually know).

Users are bad at security. If they fail to set up Tor, .onion links don't work, so it acts as a barrier against users shooting themselves in the foot.

This is counterbalanced by higher phishing risks.


> This is counterbalanced by higher phishing risks

I would argue that this is the much bigger footgun for users. Just look at how much money darknet users are losing to the big industry of .onion phishing pages.


It's a fair argument. I think incorrect Tor setup is a bigger risk for things like SecureDrop leaker stuff, where it is likely the first and only time the user will use Tor.


I agree about SecureDrop, but the blog post seems to discuss “platforms such as Facebook, the BBC or NYT”.

Also in the case of securedrop it might make sense to have that separate from the rest of your infrastructure, so the “hidden” part of “hidden services” suddenly becomes useful.


It's good motivation to start using client certs instead of passwords.


Hiding the server IP is probably quite important when you want to get around nation states trying to blockade factual information.


Another good reason is that Twitter launching an onion address has given Tor a lot of positive press in the mainstream media for a change.


Heh. An article for a two-word answer: Tor exits.


One reason that hasn't been mentioned today: forcing users to use Tor by only publishing a .onion address.




> https://hstspreload.org/ offers the same benefits. You are guaranteed to be connected to what you expect - or not at all.

TLS/HSTS is still subject to CA attacks, e.g. diginotar.

CA/X.509 is a complex stack too.

> TLS mitigates attacks that can be executed by malicious exit nodes (or WiFi networks, or ISPs), that is the whole purpose of TLS.

A malicious exit node could refuse to serve some websites. This seems a minor risk though.

Reducing load on exit nodes is a technical benefit that's in that blog post.

Another benefit to using Tor onion services for large sites is that the Tor circuit ID can be used as an additional key in an IP rate limit cache. This helps block Tor bots (on the basis that establishing a Tor circuit is expensive).
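As a sketch of that idea: tor can hand the backend a per-circuit identifier via the HAProxy PROXY protocol (assuming a PROXY-protocol-aware backend), so rate limiting can key on circuits instead of meaningless source IPs. Paths and ports here are illustrative:

    HiddenServiceDir /var/lib/tor/my_service/
    HiddenServicePort 443 127.0.0.1:8443
    # Map each client circuit to a synthetic source address for the backend
    HiddenServiceExportCircuitID haproxy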


>TLS/HSTS is still subject to CA attacks, e.g. diginotar.

Largely solved by Certificate Transparency. If you compromise a CA, you can issue certificates. However, you can't issue new certificates without broadcasting that fact to the whole world as browsers will not accept certificates without SCTs.

>Reducing load on exit nodes is a technical benefit that's in that blog post.

This hasn't been a real benefit for years. Exit nodes are running at something like 10% capacity.

>Another benefit to using Tor onion services for large sites is that the Tor circuit ID can be used as an additional key in an IP rate limit cache. This helps block Tor bots (on the basis that establishing a Tor circuit is expensive).

This is just another problem with hidden services. Opening circuits costs malicious clients far less cpu time than it costs the server.


> This hasn't been a real benefit for years. Exit nodes are running at something like 10% capacity.

https://metrics.torproject.org/bandwidth-flags.html corroborates what you say about utilisation.

Exit nodes are legally difficult to host in many countries, which reduces diversity, which is a risk.

Onion services avoid the need for exit nodes, and thus the diversity risk.

> This is just another problem with hidden services. Opening circuits costs malicious clients far less cpu time than it costs the server.

I had a look for this but couldn't find it.


Most of the technical points listed here are pretty much entirely mitigated by TLS. Exit nodes can of course deny access to specific sites, but hidden services suffer from comparable (or worse) issues.

There are no other practical attacks that malicious exit nodes could execute against sites using TLS and HSTS preload lists. If you’re a website administrator, fixing those things should be your priority before implementing onion addresses.

Onion addresses also come with slight drawbacks. They’re difficult for users and more vulnerable to phishing. Hidden services are also extremely vulnerable to CPU-based DoS attacks.


But, but … BUT TLS man-in-the-middle at the exit node isn't fully mitigated … UNLESS TLS client mode is used as well.

We all should know how infrequently this TLS client mode gets invoked, right, right? Yeah, righto.


What real attacks would that enable?


Wat? That's not true (I assume by client mode you mean client certificates, aka mutual TLS).


ThAt … mutual part, yes, of Client-side TLS.


It's much harder to deanonymize people who are connecting to hidden services because they don't have to use exit relays, which are often illegal to run.



