Onion names reserved by the IETF (torproject.org)
161 points by finnn on Oct 28, 2015 | hide | past | favorite | 66 comments



It's the sensible, pragmatic thing for them to do. Allowing .onion to be allocated as a "real" TLD would just be disruptive and confusing at this point.

The implications with regard to SSL certificates are interesting, though, and I'm curious how long it'll take for SSL providers to start supporting that. :)


DigiCert has already issued at least one certificate for a .onion address[0]. See [1] for additional info.

[0]: https://blog.digicert.com/anonymous-facebook-via-tor/

[1]: https://blog.digicert.com/the-current-state-of-onion-certifi...


blockchain.info also has an SSL cert on their .onion site


Fun fact: Cyph's certificate from DigiCert is the first EV cert ever deployed to a .onion address (https://cyphdbyhiddenbhs.onion).


What are the benefits to using SSL/TLS certificates on a hidden service?

Perhaps a better question is: are there any benefits other than just providing an additional layer of encryption that a potential attacker would have to defeat -- there's already end-to-end encryption when using hidden services (even if there isn't any encryption at the application layer)?

ETA: I just remembered that hidden services use 1024-bit RSA keys, and there have been some arguments lately that that may not be enough bits. For some sites, using (at least) a 2048-bit key may be necessary.


EV certificates add a green indicator with the company name. The CA has checked that the company actually exists and is legally registered, so you can be sure you didn't mistype the URL. And some users may simply be more comfortable with the HTTPS scheme.

Another concern is that major browsers might deprecate the HTTP scheme, so their UI will warn users about insecure connections. I'm not sure whether browsers will be able to distinguish between onion sites and regular sites in that respect.


> EV certificates add a green indicator with the company name. The CA has checked that the company actually exists and is legally registered, so you can be sure you didn't mistype the URL. And some users may simply be more comfortable with the HTTPS scheme.

Yeah, but those are companies that might as well have non-hidden websites, or be serving .onion mirrors of their regular domains.

The main "use case" for .onion domains are people for whom EV certificates would defeat the whole point.


Washington Post, Gawker, and others who use the SecureDrop software for soliciting leaks are a great use case for .onion+EV. You get the assurance that your connection is encrypted and, as much as possible, anonymous from the person you're connecting to; you get the assertion of the person you're leaking to that they are who they claim to be; and you get a 3rd-party validating with high confidence that someone authorized to represent that company set up the domain you're leaking to.


> be serving .onion mirrors of their regular domains.

Which is an excellent usecase for a cert to make sure that you actually are connected to the mirror. They could even have a cert that works for both domains, linking those together.
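As a sketch of how such a dual-name cert would work: the certificate simply lists both hostnames in its subjectAltName extension. A hypothetical openssl extension fragment (both names invented):

```
# Hypothetical openssl req extension config: one certificate whose
# subjectAltName covers both the clearnet domain and its onion mirror.
[ san_ext ]
subjectAltName = DNS:example.com, DNS:exampleonionaddrxy.onion
```

A browser reaching either hostname would then see the same certificate, linking the two identities to a single key holder.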


That's not true. One of the points of Tor hidden services is that they can't be (easily) blocked by governments and/or ISPs. There's a huge, huge difference between a regular TLD and .onion.


Well, it would encrypt the traffic from your Tor endpoint to the application. Usually these are on the same box, so it's not a big deal, but not always.


All traffic to Tor hidden services is encrypted end to end; SSL can add an additional layer of authentication, though.

If it's not a hidden service then you can't really use an .onion address anyhow.


You're correct, but I think his point was that the Tor endpoint (i.e. the host connected to the Tor network) and, e.g., the host actually serving up the content aren't necessarily one and the same (although they usually are).

In those instances, an SSL certificate would provide encryption all the way from the "Tor client", through the Tor network, the rendezvous point, and the Tor endpoint, to the actual application server. Without additional encryption in use at the application layer, the link between the Tor (hidden service) endpoint and the actual server would not be encrypted and would thus be vulnerable.

To (perhaps) explain better, this is similar to how Cloudflare offers SSL for all sites: while the path from the end user to Cloudflare is (or can be) encrypted, the link from Cloudflare back to the origin server isn't necessarily encrypted. Alternatively, think of the link from an SSL-terminating device to the backend web servers. Again, in most cases this is a non-issue, but there certainly are instances in which it applies (and it becomes more likely the bigger a site or hidden service is).
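To make the scenario concrete, here is a hypothetical torrc fragment (paths and addresses invented) in which the machine running Tor forwards the hidden service's virtual port to a backend on a different host:

```
# torrc on the Tor endpoint box (hypothetical paths and addresses)
HiddenServiceDir /var/lib/tor/hidden_service/
# Virtual port 443 forwarded to a backend on another machine; without
# TLS at the application layer, this last hop crosses the local
# network unencrypted.
HiddenServicePort 443 10.0.0.5:443
```

Pointing that forward at an HTTPS port on the backend is what keeps the final hop encrypted.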


Authentication. You're talking to foo.onion, not a man in the middle - either at the start, or the end of the Tor pipeline.


I thought each onion layer is already encrypted and authenticated?


Yes: within Tor, if you access a given hidden service, you are pulling data from the node that holds the corresponding private key. I guess the threat model is a MITM between the user agent and the Tor client, perhaps via some type of malware (because your browser can't tell whether you are really accessing the hidden service or not). At the least, verifying the certificate coming from the hidden service would confirm the traffic is coming from the intended host.


Usually the Tor client runs on the same machine as the browser, so if you have a MITM, you probably already lost (e.g. the malware probably could have simply injected its own CA root cert into your browser).


It's somewhat rhetorical, indeed. But it does add at least one additional check. (It also potentially helps prevent a certain class of hidden-service vulnerability, whether caused by a bug or by an attack.)


Facebook does Tor - load balancer - SSL terminator - servers, and the load balancers might not be trusted. So here using SSL protects against malicious load balancers.


Measuring the Leakage of Onion at the Root: "A measurement of Tor's .onion pseudo-top-level domain in the global domain name system"

https://www.petsymposium.org/2014/papers/Thomas.pdf


Isn't SSL on .onion domains redundant? It makes sense for onion -> open web, but shouldn't onion -> onion connections be already both authenticated and encrypted?


I'm not too knowledgeable about Tor, but I'd imagine there's a benefit to well-known CAs issuing certificates that are trusted by existing browser infrastructure. As far as I gather, Tor authentication merely verifies that the owner of the server you're connecting to also has ownership over the .onion domain, not necessarily that that owner is who they say they are.

Granted, I'm not sure the HTTPS cert infrastructure guarantees that either. I'd love to be more informed about this.


Regular certs are Domain Validated, meaning the CA only verified that you do, in fact, own the domain in question.

EV (Extended Validation) certificates actually require the CA to verify that you are who you claim to be. This is mostly used by banks and payment processors, as it costs more money. Most browsers will identify an EV cert by turning the URL bar green, and/or displaying the name of whoever owns the cert.

    https://en.wikipedia.org/wiki/Domain-validated_certificate
    https://en.wikipedia.org/wiki/Extended_Validation_Certificate


Thank you!


You could use a cert that is valid for both a clearnet domain and an onion domain to "prove" that they belong to the same entity.


In addition to this, Facebook does Tor - load balancer - SSL terminator - servers. This way the cert protects you against untrusted load balancers.


Why does Facebook have untrusted load balancers?


Outsourced?


I think you're right. But I also think this is more important, symbolically. This gives .onion a status which prevents outbacksteakhouse.onion from being plausible, and ensures that "Any domain ending in .onion needs to be routed to Tor" is consistently true.


I would expect Outback Steakhouse to go with "blooming.onion" instead.


Somewhat. However there are some marginal benefits such as SSL supporting stronger encryption than what Tor provides, availability of extended validation and the browser being aware of the encryption (secure cookies, same-origin policy, etc.).


I asked the same question and the answer is yes. However, I recalled that hidden services use 1024-bit RSA keys and there's been some question lately as to whether that's enough bits. For some sites/hidden services, (at least) a 2048-bit key may be desired.


Both Tor and I2P are actively working on upgrading their cipher suites


"I2P"

People still using this insecure garbage?


Can you elaborate on what is insecure about I2P?


Are you talking about anything other than that one XSS bug in the console a while ago?


I believe the exit node may still be able to view traffic in plaintext. This is part of the reason that running an exit node is so "dangerous" in the US.

edit: Though with a quick Google, I'm led to believe that an exit node is only important when you are leaving the onion network (i.e. when entering into the Internet), and thus it sounds like SSL on a hidden service would indeed be superfluous to me.

However, SSL also proves authenticity, not just encryption. It would let you know that the hidden service you are accessing is indeed who you think it is.


> However, SSL also proves authenticity, not just encryption. It would let you know that the hidden service you are accessing is indeed who you think it is.

So do .onion addresses; they are a hash of the key pair you get when you generate a new one, and the client verifies that the server it's connecting to does in fact control the associated private key.

By forgoing readable domains, the Tor hidden service system eliminates the need for external authentication mechanisms like CAs; the address is all you need.

https://www.torproject.org/docs/hidden-services.html.en
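For reference, a (v2) onion address is derived from the service's public key roughly like this: take the SHA-1 digest of the DER-encoded RSA public key, keep the first 80 bits, and base32-encode them. A sketch in Python, with placeholder bytes standing in for a real DER key:

```python
import base64
import hashlib

def onion_address(der_public_key: bytes) -> str:
    """v2 onion address: base32 of the first 80 bits (10 bytes) of the
    SHA-1 digest of the DER-encoded RSA public key, plus ".onion"."""
    digest = hashlib.sha1(der_public_key).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

# 10 bytes of digest -> exactly 16 base32 characters, hence the
# familiar 16-character onion hostnames.
addr = onion_address(b"\x00" * 140)  # placeholder, not a real key
```

Since the address itself commits to the key, a client that checks the served key against the hostname gets authentication "for free".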


Assuming a .onion's key were brute-forced or stolen, however, an attacker would also need to steal the SSL private key in order to continue to appear authentic.

I'm not saying Tor doesn't cover authenticity, but that SSL provides an additional authenticity check on top of that.

edit: On the topic of bruteforcing, the linked Stack Exchange post leads me to believe it's not entirely infeasible.

Additionally, stealing the .onion's key would likely expose the SSL private key as well (as you'd likely have access to the server at that point), unless the .onion's key is exposed due to misconfiguration or another form of human error.

I also think, lastly, that the point about the browser knowing it's dealing with a secure connection and enforcing the usual browser SSL rules has merit.

edit 2: Forgot the link - https://security.stackexchange.com/questions/29772/how-do-yo...


14 characters: 2.6 million years


With a single core.


So a million cores still takes years. What would you consider infeasible, may I ask?

Also, you're wrong that bruteforcing the domain would let you decrypt traffic if not for SSL. If you brute force (for millions or billions of years), you won't get the same key; you'll get a different key whose hash shares the first 80 bits with the real key's hash. So you can use it to MITM or impersonate the site, but you can't use it passively to decrypt connections to the onion.
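The scaling being argued about can be sketched as follows; each base32 character of the address fixes 5 bits, and the per-core attempt rate is a made-up assumption for illustration only:

```python
# Back-of-the-envelope cost of brute-forcing an onion-address prefix.
# The attempt rate is a hypothetical figure, not a measurement.
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_bruteforce(chars: int, attempts_per_sec: float, cores: int) -> float:
    """Expected years to find a key whose address matches a given
    `chars`-character base32 prefix (5 bits per character)."""
    return 2 ** (5 * chars) / (attempts_per_sec * cores) / SECONDS_PER_YEAR

# The full 16-character (80-bit) address, at a hypothetical
# 1e7 attempts per core per second:
single_core = years_to_bruteforce(16, 1e7, 1)       # billions of years
million_cores = years_to_bruteforce(16, 1e7, 10**6)  # still thousands of years
```

Even under generous assumptions, a million cores only divides the exponent's cost linearly, which is why partial-prefix collisions are feasible but full-address ones are not.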


Aren't bruteforces of .onion plausible in the mid-future by powerful actors?


That's why the Tor project is planning a cipher suite upgrade in the near future


Exit nodes are not involved when making connections to Hidden Services. See https://www.torproject.org/docs/hidden-services.html.en


I edited my comment a minute after posting to reflect that. Thanks.


That's onion -> web. If you are connecting to an onion address, your packets do not enter the plaintext internet, unless you are using some sort of "entry node" gateway.


I've always thought that these addresses should have a different scheme, not a different TLD. For example, onion://aoeusnth instead of http://aoeusnth.onion

Is there any reason in particular why the TLD approach was settled upon instead of a scheme-based approach?


Because xxx.onion is essentially a hostname. One could conceivably have https://example.onion, ftp://example.onion, telnet://example.onion, and so on.


The reason it has to be done at the DNS level, rather than at the URI scheme level, is that any protocol can be routed over Tor.


Well, there's nothing keeping a well-defined Tor scheme from including the protocol information in it, is there? For example, I could imagine specifying a Tor URI in my git config: onion:ssh:pcl@aoeusnth or onion:http:aoeusnth


The point is it's still HTTP.

Think of Tor as acting like a VPN or point-to-point tunnel. You can conceptually think of it as another network interface plugged into your machine; the policy for what you route over it is your own. It doesn't affect how any other protocols function.

I can still access regular sites over Tor, just as I can access regular websites over a VPN. openvpn+http:// isn't exactly useful either, for the same reason.

And there are other special TLDs. Your multicast DNS domain (e.g. .local) is also special: your DNS resolver sees the TLD and resolves it specially. But once again, doing multicast DNS doesn't impact HTTP, git, ssh, etc., so it would be silly to have to write mdns+http://... as well.

And if you were to join them, then you'd have to define what behaviour should happen if, for example, on openvpn+http://foobar.tld you hit a hyperlink to http://baz.tld. Do I rewrite it to prepend openvpn+? Fail? Etc.


Because you then would have to patch all your software to understand the scheme. Which might actually be a good idea from a security POV, but is harder.


I think the simple answer is that they started doing it a certain way and then didn't want to change it.


Now we just need to find a Tor user with enough money to buy EV certificates...



What's Facebook (a privacy enemy) doing in there?


They're not the enemy of privacy in general. They're the enemy of privacy between you and them.

If they can make sure that not only can they get your data, but also that no one else can, that's a win in their book.


facebookcorewwwi.onion

You can use FB from Tor, so they have an interest in that staying the case. The reason was that their fraud detection kept going off on Tor users, so they just made a hidden service: https://www.facebook.com/notes/protect-the-graph/making-conn...


A leaky data silo isn't a silo anymore


Damage control?


Tor gets an upvote from the establishment. Is that furthering the cause of privacy? So we'll now get more exit nodes?

Personally, I believe that nothing less than a wholesale transport-layer alternative to the internet is necessary to maintain communications freedom. It's not far-fetched to suggest that rooftop antennae running peer-to-peer mesh networks will gather momentum in coming years. Not to replace the internet, just as a backup, to keep centralised government interests and moneymen at bay.


It's not just far-fetched to suggest people will start seriously running peer-to-peer mesh networks; it's ridiculous. You won't be able to convince even the people who know about them to run them, and you'd need to convince far more people than that to get mesh networks running reasonably well.

This is ignoring the technical and legal challenges, not to mention the fact that people have been trying this for a very long time now and have failed to get anywhere significant for just as long.


If your criterion for "getting anywhere significant" is being even within two orders of magnitude as performant as the current internet, then you are correct. If however the aim is to build a distributed, no-single-point-of-failure/control, kbps-class, backup communications system, one that is primarily community-based and NOT controlled by any corporation, ISP, or government, then there are many such networks already in existence. They're based on wifi, and completely legal. Clearly they're hobby projects, but that doesn't make them uninteresting or indeed, potentially extremely useful in low-delta scenarios of societal breakdown and/or centralised oppression.


I'm not talking about bandwidth, I'm talking about coverage alone. In Berlin, for example, there are many people who are into Freifunk and similar networks, and many more into technology and the culture and politics associated with it. Nevertheless, they can't even get decent coverage.

The network isn't worthless because it works badly; it's worthless because they fail to create an accessible one to begin with.


> many people into Freifunk .....

I rest my case. There is demand for such a thing even if it's far from perfect. Not everybody is willing to trust the increasing encroachment of the authorities into the mainstream internet. They're willing to experiment to ensure that technologies permitting independence from potential authoritarianism (corporate or government) are being worked on.



