Tor 0day: Finding IP Addresses (hackerfactor.com)
257 points by dyslexit on Sept 17, 2020 | 117 comments



This article is obviously written by someone that doesn't know what they're talking about.

>0day

This is not a "0day".

>As it turns out, this is an open secret among the internet service community: You are not anonymous on Tor.

Careful there with the big assertions.

>The last hop is the exit node. It can see all of your decrypted network traffic.

I thought we were talking about onion services here, why the subtle context switch? Does the author even know that onion services don't use exit nodes at all?

>(Don't assume that HTTPS is keeping you safe.)

Why?

>One claimed to see over 70% of all internet traffic worldwide. Another claimed over 50%

The key word here is "claimed".

>If you're a low volume hidden service, like a test box only used by yourself, then you're safe enough. But if you're a big drug market, counterfeiter, child porn operator, or involved in any other kind of potentially illegal distribution, then you may end up having a bad day.

I like how the author assumes that these are the only two uses of Tor.

>you simply need a list of known onion services

Good luck getting that with v3 addresses (unless the author of the service has poor OPSEC).

Not to mention that Tor has provided many fixes for the DDoS issues, but the author obviously didn't mention them.


>>(Don't assume that HTTPS is keeping you safe.)

>Why?

Because HTTPS depends on certificate authorities, CAs depend on coercible companies, and those companies depend on governments not molesting them.

The existence of QUANTUM INSERT and FOXACID attacks shows that CA-based authentication is weak (whether because CA keys are compromised or because CAs are coerced). DigiNotar also got pwned.

Strong authentication is one of the unrivaled advantages of onion addresses in Tor.

The CIA also advocates to not solely rely on TLS for transport encryption: https://news.ycombinator.com/item?id=24426818


That's all true. But surely that's not specific to Tor, is it?


The author is salty that Tor would not fix the "bug" of being able to tell that a publicly listed Tor node is a Tor node.


> Does the author even know that onion services don't use exit nodes at all?

Does this mean that traffic correlation and confirmation attacks cannot be performed on users of hidden services?


No, those attacks still work. Traffic correlation and timing attacks don't actually interact with Tor at all; while a Tor exit relay is a good spot to be in for one (if you're targeting streams that use it), all you need is two vantage points that uniquely identify the network route as a whole. So e.g., the client and server's respective ISPs, or in the case of an onion service, both guard relays. GP is correct though, onion service connections are e2e encrypted, there's no vantage point on the network that sees any plaintext or TLS client traffic. The author of this blog clearly has no idea what they're talking about.
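
To make "correlation" concrete, here's a toy sketch (my own illustration, nothing Tor-specific; function names and the demo data are made up): bucket the packet timestamps seen at each vantage point into fixed windows and correlate the two volume curves. Real attacks are more sophisticated, but this is the core of it.

    # Toy traffic-correlation check: assumes you already hold packet
    # timestamps captured at two vantage points (e.g. both guard relays).
    import numpy as np

    def volume_series(timestamps, window=0.5, duration=60.0):
        # Bucket packet timestamps into fixed windows -> traffic volume curve.
        bins = np.arange(0.0, duration + window, window)
        counts, _ = np.histogram(timestamps, bins=bins)
        return counts

    def correlation_score(side_a, side_b):
        # Pearson correlation of the two volume curves; values near 1.0
        # suggest both vantage points are watching the same flow.
        return np.corrcoef(volume_series(side_a), volume_series(side_b))[0, 1]

    # Toy demo: the "service side" sees the same packets ~80 ms later.
    rng = np.random.default_rng(0)
    client_side = np.sort(rng.uniform(0.0, 60.0, 2000))
    service_side = client_side + 0.08
    print(f"correlation: {correlation_score(client_side, service_side):.3f}")

With enough traffic and a distinctive burst pattern, a score near 1.0 is strong evidence the two vantage points sit on the same route, no Tor internals required.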


> The last hop is the exit node. It can see all of your decrypted network traffic.

Didn't see it clarified in the article, but IIRC for onion services like OP's, the traffic doesn't exit through traditional internet exit nodes and is end-to-end encrypted. Not only can the last relay before the onion service not see all of your decrypted network traffic, I don't believe it can even tell it is the last relay.

Traffic analysis has been a known issue as long as Tor has existed. What I'd like to see are solutions. Can Tor be used with some kind of fixed-rate noise protocol (I toyed with a rudimentary fixed-rate traffic algorithm once[0])? Or is it too broken, and do we need another P2P (fixed-transfer-rate) protocol? I2P, Tribler, etc. haven't gained mass adoption.

0 - https://github.com/cretz/deaf9/blob/master/mask/context_read...


This is a very good point. There is a huge difference between a Tor client chatting with a Tor hidden service, and a Tor client chatting with a clearnet service.

Furthermore, it's not as simple as 'see all of your decrypted network traffic'. Perhaps the Tor client is talking with the clearnet server over TLS 1.3. This presents much more difficulty for the malicious exit node.


Interesting article, but I don't see why it's titled '0-day' when he references a research paper from 2012.

"Although these are old, they are classified as zero-day attacks because there is no solution."

They are?


This is the same person who had a blog post on the front page ~2 months back, rambling about a JavaScript scroll bar issue, the fact that an ISP could block entry node traffic, and a few other things along those lines ("omg, 0 day!"):

https://news.ycombinator.com/item?id=23929312

Both blog posts probably ended up on the front page and got quite a bit of attention because the words "0 day" and "tor" were used in close proximity, something the author is apparently very fond of doing (the posts are part of a series titled "Tor 0day").



I even remember you wrote that. I come here too often.


Here is a technique that is used to uncover hidden services:

1. purchase VPS products at a bunch of providers who accept bitcoin / crypto

2. ddos your target

3. see if you notice any of your hosted boxes go down

4. once you know the provider, pop them (they're usually running some shitty WHMCS or similar homebrew solution, old cPanel, etc., and they're almost always resellers and amateurs) and move laterally to your target

When the feds do it against online drug markets (and they have been for years), they have the bonus of decent network insight by working with backbone providers.

There is just no way to hide multi-Gbps of traffic.
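
Step 3 is the trivial part. A toy uptime poller along these lines (the provider names and IPs below are placeholders) is enough to notice which rented box goes dark during the flood:

    # Sketch of step 3 only: poll your own rented boxes and record which
    # become unreachable during the test window.
    import socket, time
    from datetime import datetime, timezone

    BOXES = {"provider-a": "203.0.113.10", "provider-b": "198.51.100.20"}

    def reachable(ip: str, port: int = 22, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    while True:
        now = datetime.now(timezone.utc).isoformat(timespec="seconds")
        for provider, ip in BOXES.items():
            if not reachable(ip):
                print(f"{now} {provider} ({ip}) is DOWN")
        time.sleep(30)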


That's why you split your drug empire into a server for images and static stuff, and another server to run the rest. The server for the rest can be proxied through the image server, and through another tor connection. That way the feds only find your image server...


Most of them do that now - the better markets have gateways that are CAPTCHA'd and spread out over other providers that then proxy to the real hidden service (you can do this with a second step over Tor again)

Checkout this project which is now being more widely deployed to prevent a lot of these attacks:

https://github.com/onionltd/EndGame


Very cool. I clicked a bunch of links from there and got to here: https://github.com/mikeperry-tor/vanguards/blob/master/READM...

Good overview of threats.


Pretty cool, but you'd think they would just have a Docker image ready to deploy, plus a network driver or something similar for these types of challenges.


Hope this password advice isn't taken seriously

> TORAUTHPASSWORD - Password which is used for your Tor Control Port Authentication with NGINX. Alphanumeric without spaces (example: passwordIcanremembertyping)

> KEY - Alphanumeric Key for the shared front session key. Random between 64-128 would do fine. (example: isthis64charactorsalreadyicantbelieveitwowsocoolwaitnotyetohdarn)


And perhaps throw some needles of steganography into the haystack.


Are you really saying that feds DDoS hosting providers and actively hack control panels for lateral access? More than once? Have you got any documented cases of this happening? Because it really sounds... Unlikely.


Hitting a hosting provider (indirectly) with a ddos for a few minutes doesn't sound that difficult to do, and could easily pass as a routing glitch or something like that. If it's the only way to track down the server it seems reasonable that it would get a rubber stamp approval. After all, who's going to be able to prove the feds did it?


You too have the capability to do it - should we assume it's you? I didn't disagree that it's viable, just that the parent said "when feds do it", which is a big leap from "it's possible". How do you know the feds do it?


> and actively hack control panels for lateral access

I would definitely see this under the umbrella of various secret agencies, especially those with an "offensive" position in cyber-warfare.

You won't ever see any of this documented due to the "parallel construction" method. And for what it's worth, almost all of the Wikipedia content on that one (https://en.wikipedia.org/wiki/Parallel_construction) deals with DEA usage of this.

Besides, the general way of "hacking services to distribute malware" has already been done, most famously in 2013 when the feds burned a Firefox 0-day to unmask a child porn ring: https://krebsonsecurity.com/2013/08/firefox-zero-day-used-in...


Why not? They're not above sending literal malware to people's computers in order to compromise them:

https://en.wikipedia.org/wiki/Network_Investigative_Techniqu...

https://www.eff.org/pages/playpen-cases-frequently-asked-que...

A DoS attack is nothing to these people. The only difference between them and cybercriminals is the fact the law legitimizes their work.


That's not a documented case of DDoSing a company to get location.


They're examples of a US federal agency using malware to compromise the computer systems of suspects and collect evidence to use against them. If they're willing to exploit vulnerabilities in order to break into computers, surely they won't balk at denial of service attacks. They even let the suspect go free in order to avoid revealing the vulnerability they exploited.


They are not. Zero counts from the time the vulnerability is disclosed, not when it's fixed. This is not a "zero day" vulnerability.


So if 10 years have passed and this can still be exploited using tactics discussed a decade ago, how do you classify it?


You classify it as a threat that is outside the threat model that tor can defend against.

Tor is a tool like any other. It has certain strengths and certain weaknesses. When you're evaluating any security product you always have to determine if the security properties the tool provides match up with the security properties you need. Tor is no different.


it's a "known issue" maybe it should be mentioned in the documents with more clarity. it seems the author of this post stumbled over what is well known to many people and then went on a diatribe. looks like he hit a nerve so maybe it's really an issue with documentation. on the other hand people should learn about how to think via threat models. not sure if the tor project should be expected to cover every hypothetical scenario, and if they did would people study it?


A whole set of mitigations were published on Tor Project's blog in 2018.

https://blog.torproject.org/announcing-vanguards-add-onion-s...


As a PhD student, my main topic was the period just after the disclosure of a vulnerability. This is the most threatening stage of its life cycle, as mitigations have not yet been disseminated through the community (a patch may be available but nobody has installed it yet, etc.).

We struggled to find a commonly accepted term for vulnerabilities at this stage of their life cycle, but we finally settled on n-day vulnerability. This term has been relatively well accepted by the vulnerability research community.

The exact length of this period depends entirely on how quickly the community adopts a mitigation such as a patch. Heartbleed and Shellshock were massively mitigated in a matter of days or weeks, but EternalBlue-based attacks still caught a lot of production systems off guard more than a year after disclosure.


For what it's worth, until the widespread dissemination of auto-updaters in the mid-to-late 2000s, that described most vulnerabilities. Most things stayed highly exploitable for a very long time. We didn't have a special name for them.


I see a consistent definition for the arguments of others: "You have zero days until this can and will be exploited. You have zero days until you need this patched."

A zero day starts with its exploit or public disclosure and ends with a released patch. It's not a zero day for private disclosure.

Edited based on child comment about clarity


For clarity: "private disclosure", even to the vendor, doesn't mean anything. At the point a vulnerability is publicly disclosed, patch or no patch, users can mitigate it (if only by ceasing their use of the affected software). "Zero" refers to the interval elapsed since the public, meaningful, disclosure of the vulnerability.

If I find an RCE in Cisco IOS and report it to Cisco, who sits on it for a few dozen months, and you later find the same RCE and circulate it amongst your friends, who exploit it, your friends are exploiting a zero-day vulnerability.


So back a long time ago, we counted days from the time an exploit was 'known'. Now people seem to use it for the time since a patch has been released? I don't know, but every time I ask I get downvoted.


A 0-day generally means either one person, a single organization, or a small group of people, know about the exploit in question. As soon as an exploit is widely known or published, it's not a 0-day since anyone can find it, even if the exploit is for abandoned software that'll never receive a security fix.


Yep, that's what I said, I think (by 'known' I meant publicly). Once it's widely known, it's a 0-day for 24 hours, then it's not anymore. That's what we used to mean by 0-day, anyway. Other people will tell you that it's a 0-day until there is a patch against it. I think different people use this phrase so divergently that it isn't a useful way to communicate anymore.


What is it called after there is a patch against it? Does it just stay at like a 27-day if it takes 27 days to patch?

I feel like in common parlance, calling something a 0-day implies it's something the manufacturer didn't expect and has no solution for, which is a big problem. I guess whatever communicates information best. I kind of feel like we just use 0-day to mean big problems; everything else is just a bug that has some age, and fixed stuff doesn't get remembered. Right?

That seems fairly useful, at least in communicating to the general tech media.


If you know about the bug but the manufacturer does not provide any patches, you can still mitigate it, put detection measures in place, or just stop using that software. You can't (necessarily) do any of those if you don't even know about the bug because it hasn't been published. That's why 0day is a useful term.


Not "known", but "disclosed" to the public, or at least those responsible for a patch.

Zero days are known, exploited and used all the time by all sorts of black hats, govt institutions etc.


At this point bloggers seem to mostly use 0-day to mean: i havez cool 1337 hackorz skillz, regardless of merit.


I've always understood it to mean the time/version since the software became vulnerable: 0-day, from the first release of the software, meaning all versions are vulnerable.

Which was/is more relevant when commercial software is updated at most once every year or two.

But it appears no one can agree anymore, making the term useless.


Hold the line! Let's make the term useful. An 0-day is an 0-day before public disclosure and for 24 hours after.


At one point in history a "0day" referred to a pirated release of software on/before it hit the shelves (and/or the crack that was released simultaneous to protected software aka "warez").

It was years later that a "0day" went from a copy protection removal/crack ("0day warez") to its more general modern usage in computer security.

See: https://www.google.com/books/edition/_/8ETRQhDytIsC?hl=en&gb...


Got us reading so that's good


It started out as a 0-day in 2012, and since it has remained unpatched, it continues to be called a 0-day. That is how it is commonly used.


Disagree. An 0-day is only a 0-day for 1 day after public disclosure. (and before)

It's a useful distinction. 0-days are special because your target has no idea such a vulnerability even exists. This makes them very different than known but still unpatched vulnerabilities.


How does that make them very different? The latest version of the software is still exploitable in either case. In my opinion, that is why it's useful to call them 0-days until they are patched.


One difference: With an 0-day you know your target can't have done anything to specifically mitigate that vulnerability. If a vulnerability is well known but still unpatched by the vendor, a potential target can take their own steps to protect themselves.

For example, if you're running some ancient mailing list software that you know has an unpatched XSS vulnerability, you can have your front end servers scan for attempts to exploit that and abort the requests. Or lock it down with a CSP policy. Or if you know your image manipulation library has tons of vulnerabilities, you could run it in a locked-down sandboxed environment where exploitation doesn't get the attacker much of anything.


This is simply absurd. Please stop making things up.


Known vulnerabilities or weaknesses that don’t have patches are not 0-days. A 0-day is a vulnerability that you don’t know exists yet. That’s how the term is used in risk management and threat modelling. You don’t have 0-days that you’ve known about for 8 years. They’re just known risks.


No, that's not how the term is used.


I spoke to Adam Levine on the same topic in the summer of 2013, right after Snowden told us (for the nth time; credit also of course to Mark Klein et al) about the large-scale passive monitoring of network traffic.

This is a known issue - one which, like Gmail being accessible to the US government without a warrant, a lot of people simply need to block out to go on with their daily lives. It's difficult to emotionally integrate the fact that you can't travel anywhere while holding a cellphone without the military knowing exactly where you are, and exactly where you've been, for the entire time you've had a cellphone.

I encourage you to watch the interview, where I describe this precise attack:

https://youtu.be/9k4GP3Evh9c?t=2018


According to this [0] there are only about 1.5k exit nodes and over 6k relays total. It's a pretty small network. I'm not an expert on tor internals but it sounds to me like a sufficiently dedicated player could easily control a big chunk of this. Don't even need crazy money.

I understand that they have mechanisms preventing obviously fake new servers from flooding the network. But still at these numbers it doesn't seem that tough to play the long game.

[0] https://metrics.torproject.org/networksize.html


Yeah, it happens: https://blog.torproject.org/bad-exit-relays-may-june-2020

One reason why it's not devastating to the network as a whole is that the process for getting your relays to make up such a large fraction of the network is social. If you run a ton of capacity, especially if added all at once, people are going to notice, and reach out to find out who you are (and if they can't, expect to get removed). This means that while yes, you can do this (and as above, people have), once it's detected, all of your resources are dropped at once, and you have to start a pretty expensive and time-consuming process over again. It's also the case that adversaries generally don't collude, so e.g., the above attack was for cryptocurrency theft, and those adversaries likely aren't working with the FBI or China to deanonymize circuits. This means you only have to worry about a few of these happening at a time, which makes it easier to detect (pull on one thread, and the rest start to unravel).

That said, just based on the blog post above, it's something that TPO seems to be thinking about new ways to address, and Sybil detection has a long history of research in the academic community as well, with plenty of space left to explore. Something like Salmon[0] is in the process of being implemented by TPO for bridge distribution[1], and the constraints for this reputation problem are far less onerous than in that setting.

[0] https://content.sciendo.com/view/journals/popets/2016/4/arti...

[1] https://gitlab.torproject.org/tpo/anti-censorship/bridgedb/-...


Are you planning to add any relays?


Is strong anonymity even theoretically possible on IP based networks?


Yes (under certain, decently reasonable assumptions), but all solutions have very significant tradeoffs.

High-latency mixnets (e.g. https://en.m.wikipedia.org/wiki/Anonymous_remailer ) have the drawback that their latency makes them unusable for interactive protocols.

Dining cryptographer networks have much lower latency but scale very poorly. (https://en.m.wikipedia.org/wiki/Dining_cryptographers_proble... )

Tor (a low latency mixnet) trades a weaker threat model for low latency and scalability.

So basically you can pick two of: scalability, low latency, and resistance to global passive adversaries.


You can get pretty strong anonymity. That actually got built out pretty well in 2000 by Zero Knowledge Systems, and they open sourced all the components into the Linux kernel back in 2000: https://adam.shostack.org/zeroknowledgewhitepapers/Freedom_S...

They actually dealt a lot with the traffic analysis problem, and had both a technical and financial model to encourage defense against it. It wasn't perfect, but it would be more resilient to this stuff than Tor IMHO. It just had the disadvantage that latency was atrocious (for obvious reasons), and ultimately it turns out people don't care about anonymity.


If your adversary has a god level view of the network then it's really hard to achieve strong anonymity. The article mentioned large network operators that can monitor a significant fraction of all traffic in the country. If you were the Chinese Government you would have an even better view into the network. Especially if you send a large file somewhere which makes it easy to correlate the TCP session.

For real anonymity you need something that scrambles and delays your traffic to make it harder to track. Something that breaks big transfers up into a bunch of small transfers, sends them via different routes, and generally makes your experience miserably slow.


You need a system which sends constant numbers of bytes/second along every network link.

It would actually be pretty easy to implement for Tor (either for the whole network, or for individual nodes or routes), but as far as I can see nobody wants to work on it.
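
Roughly, the idea is something like this toy sketch (my own illustration, not Tor's actual cell scheduling; names and constants are made up): pull a real cell if one is queued, otherwise emit a dummy, and do it on a fixed clock either way.

    # Minimal sketch of constant-rate link padding with fixed-size cells.
    import queue, threading, time

    CELL_SIZE = 512        # bytes per fixed-size cell
    RATE = 100             # cells per second, whether or not there's real data

    outbound = queue.Queue()

    def send(cell: bytes):
        pass  # stand-in for writing the cell to the network link

    def padding_loop():
        interval = 1.0 / RATE
        while True:
            start = time.monotonic()
            try:
                cell = outbound.get_nowait()   # real traffic, if queued
            except queue.Empty:
                cell = b"\x00" * CELL_SIZE     # otherwise a dummy cell
            send(cell)
            # The link always carries CELL_SIZE * RATE bytes/s; from the
            # outside, real and dummy cells are indistinguishable.
            time.sleep(max(0.0, interval - (time.monotonic() - start)))

    threading.Thread(target=padding_loop, daemon=True).start()
    outbound.put(b"real cell payload".ljust(CELL_SIZE, b"\x00"))

The obvious cost is that every link burns its full rate 24/7, which is why nobody runs this at network scale.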


It sounds like you're almost describing the Nym mixnet:

https://nymtech.net/#protocol

https://youtu.be/_2DQ_iYZi5U?t=1580

The tradeoff is that you necessarily need to smooth traffic bursts out to meet the fixed rate, and that introduces high latency. Unfortunately, most user traffic is bursty, not continuous.


> Something that breaks big transfers up into a bunch of small transfers, sends them via different routes, and generally makes your experience miserably slow.

Bittorrent has the "break into a bunch of small transfers" part solved. Just need to modify Bittorrent to somehow transfer each piece over a different route.


"Browsing the web over BitTorrent" is an interesting problem statement.


Sending data via several routes should increase throughput.


Why wouldn't it? IP only provides best-effort delivery anyway, yet TCP which is built on it makes the connections reliable. Similarly you can use IP which isn't anonymous to build strongly anonymous systems.


Of course, and Tor is an excellent example of an accessible turnkey solution to the problem. But you should never rely solely on Tor or any single measure. 50% of anonymity is your own opsec.



Could you explain why i2p works better than tor for the type of attacks from the article?


It's not and it's much less anonymous.



That link doesn't support your statement. Tor is more anonymous than I2P.


Almost everything in the section "Benefits of Tor over I2P" is about the size of the network. Technically, I2P is more secure; it just lacks users.

I2P lets you increase the number of hops to any value.


The fundamental flaw in Tor (and, by extension, all other anonymity clients) is that its traffic patterns make you stand out from everybody else.

Just using it makes you automatically interesting to state actors.


This is not a theoretical assumption: leaked XKEYSCORE “selectors” target anyone (1) searching for Tor on search engines or (2) using Tor.

Code:

https://daserste.ndr.de/panorama/xkeyscorerules100.txt


Tor relays are publicly known so you don't need traffic pattern analysis to know if someone is using Tor. Or were you referring to something else?
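
E.g., the Tor Project publishes the bulk exit list, so checking whether an address is a Tor exit takes a couple of lines (quick sketch, error handling omitted; the endpoint is the real published list, the rest is illustrative):

    import urllib.request

    EXIT_LIST = "https://check.torproject.org/torbulkexitlist"

    def is_tor_exit(ip: str) -> bool:
        # The list is newline-separated exit relay IPs.
        exits = urllib.request.urlopen(EXIT_LIST).read().decode().split()
        return ip in exits

    print(is_tor_exit("8.8.8.8"))  # False; Google's resolver is not an exit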


You could be using a VPN or a proxy, making it harder to be matched based solely on the IP address you connect to. Traffic pattern analysis would still work.


Who are the providers referred to in this as "God"? Is that providers like Akamai/Cloudflare/L3 that have big pipes and route lots of traffic?

Edit: I'm assuming Tier 1 network providers like AT&T/CenturyLink (aka L3), etc., as per this list: https://en.wikipedia.org/wiki/Tier_1_network


It looks like they are specialized monitoring companies. They aggregate traffic data from many ISPs and give them back a global picture, to mitigate DDoS attacks at the network level.


Just thinking: from a client's standpoint, could the "large download" traffic correlation be avoided if the client split the large download into multiple HTTP requests (assuming the server supports that), each about as large as a regular webpage request, with random delays in between so that it looks like normal web noise? Of course it'd take way longer to download, but wouldn't this make the traffic indistinguishable from page requests?
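
Something like this client-side sketch, assuming the server honors Range requests (the URL, chunk size, and delay bounds are placeholders):

    import random, time
    import requests

    URL = "https://example.com/big-file.bin"   # placeholder target
    CHUNK = 64 * 1024                           # roughly webpage-sized requests

    def fetch_in_chunks(url: str) -> bytes:
        total = int(requests.head(url, allow_redirects=True).headers["Content-Length"])
        data = bytearray()
        for start in range(0, total, CHUNK):
            end = min(start + CHUNK, total) - 1
            r = requests.get(url, headers={"Range": f"bytes={start}-{end}"})
            data += r.content
            time.sleep(random.uniform(1.0, 10.0))   # mimic gaps between page loads
        return bytes(data)

Whether this actually blends in would depend on the adversary's model; the total byte count over the session is still there to correlate, just smeared out over time.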


Excellent read. If you are going to be using Tor and want to stay off the grid (e.g. journalism, keeping sources hidden), you need a laptop (a netbook with no personal info, pre-loaded with your own bridge node as first hop) and a wifi stick. Only connect from remote wifi sites, don't create any patterns in visiting your physical locations, and don't sit in front of security cameras. Swapping the wifi stick between each use will make you virtually invisible.


Why swap wifi sticks? Why not just change the MAC address via the OS between each use?
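
For reference, "via the OS" is a couple of commands on Linux. A sketch using iproute2 (assumes root, and that your interface is named wlan0; both are placeholders):

    import random, subprocess

    def random_mac() -> str:
        # Locally administered (0x02) unicast address, rest random.
        octets = [0x02] + [random.randint(0x00, 0xFF) for _ in range(5)]
        return ":".join(f"{o:02x}" for o in octets)

    def set_mac(iface: str, mac: str) -> None:
        subprocess.run(["ip", "link", "set", "dev", iface, "down"], check=True)
        subprocess.run(["ip", "link", "set", "dev", iface, "address", mac], check=True)
        subprocess.run(["ip", "link", "set", "dev", iface, "up"], check=True)

    set_mac("wlan0", random_mac())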


I don't think anyone in the net-privacy realm would be surprised by anything written in this blog. A better title would have been: "An overview of traffic-analysis intelligence leaks on Tor".

All Tor does is provide onion addressing and strong authentication, along with increased observation costs for passive observers. Anything beyond that is a user's myopic extension of crypto-is-a-panacea. Cryptography can provide protection against observability; it cannot provide protection against identifiability. Mixnets like remailers, or modern traffic mixing like Nym, attempt to address identifiability.

>I read off the address: "152 dot" and they repeated back "152 dot". "19 dot" "19 dot" and then they told me the rest of the network address. (I was stunned.) Tor is supposed to be anonymous.

It's hard to tell what the author genuinely understands about Tor versus what is hyperbole. How surprising is the quoted feat? IPv4 is roughly 2^32 addresses in size. There are roughly 2.4 million Tor users [1], so an observer would need ~21.2 bits to exactly identify one of them.

The author gives away at least ~16 bits of entropy (two octets: log2(256)*2), assuming uniformly random IP address distribution, which isn't the case. That leaves their counterparty roughly a 1-in-37 chance, i.e. 2^-(21.2-16), of guessing the rest of their IP. Unless your IP space is chock full of Tor users, it's not surprising an exit node was able to autocomplete the rest of your IP address. PS: if we know the country of their IP, we need at least ~15.3 bits and at most ~19.7 bits.
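
Back-of-the-envelope check of those numbers (the user count is from [1]; uniform IP distribution is, again, an assumption):

    from math import log2

    users = 2_400_000                    # approximate Tor user count [1]
    bits_to_identify = log2(users)       # ~21.2 bits to single one out
    bits_revealed = 2 * log2(256)        # first two octets = 16 bits
    remaining = bits_to_identify - bits_revealed
    print(f"{bits_to_identify:.1f} bits needed, odds after two octets: "
          f"1 in {2 ** remaining:.0f}")  # ~1 in 37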

The trick is akin to living on a street with a unique name and a retailer autocompleting your address and customer details because you've ordered from them before and gave them your street name.

If I was a Global Passive Adversary, I would be probing and rerouting traffic to see how systems responded: https://www.ndss-symposium.org/wp-content/uploads/2017/09/ND...

https://www.muckrock.com/foi/united-states-of-america-10/req...

[1] https://metrics.torproject.org/userstats-relay-table.html


How is this a 0-day... it's old news that's been public for a long time. Sounds more like a n00b on Tor.


https://fingerprintjs.com/demo shows that Tor can still be fingerprinted and uniquely identified across IP addresses. Your JavaScript (navigator) user-agent and timezone are some of the dead giveaways, as they leak the true values.


Tor will hide/obscure your true identity (i.e. the one that gets fingerprinted when you use a non-Tor-proxied browser) behind a separate, distinct "Tor identity"; but Tor makes no claim to be fit-for-purpose for obscuring/garbling the persistent aspects of said "Tor identity." (The Tor Project has this as a long-term goal, but they're nowhere near there yet.)

There's a "new Tor circuit for this site" button in the Tor Browser, but it's for circumventing dumb WAFs who've blacklisted a Tor exit node's IP. It's not for OPSEC.

> Javascript

Nobody who cares about doing anything secretive is using Tor with Javascript enabled. (Fun fact: most of the "dark web" stuff operates using early-2000s-era phpBB forum tech, which works perfectly fine without JS.)


5 minute exercise to try -

Download Tor https://www.torproject.org/download/

Go to your site, and see if you think it works - https://fingerprintjs.com/demo

(Also notice how Tor changes the screen size every time you open it.)


And JavaScript is disabled, the rest spoofed.


Tor Browser != Tor


Yes, the network is different from the browser, but the point is that IP-based anonymity is already obsolete.


Use whois. It's a good online tool for looking up IP addresses.


Theory: Silk Road's DPR, as sloppy as his opsec was, was parallel construction.


DPR advertised for / discussed Silk Road using public accounts with email associations and usernames which led to his public persona. It was just bad opsec.


That's the public explanation. It doesn't preclude parallel construction nor does it mean that's how they caught him.


Why would you need parallel construction when the process of finding out Ulbricht's identity was painstakingly simple using basic OSINT?

He posted stupid things in very public and monitored places, and it only took a little research in the right places to put the pieces together. The economics of the parallel-construction theory are simply untenable. Anyone can search for keywords on Shroomery and other forums. It's grunt work, and loads easier than actual hacking.


If it's "painstakingly simple", why did it take 2.5 years for them to find him?


OSINT is a simple process, it just takes time. In hindsight the red flags were obvious but you don't just immediately know where to look when investigating or what to look for.

Your argument doesn't hold water because if parallel construction is so much easier then why did it take them 2.5 years?


I don't think it was easy to find him at all. Parallel construction is also not easy, they have to figure out how to unmask the server through whatever 0day they choose, then they have to issue NSL or, more likely, get diplomatic assistance to clone the server's hard drives.

Do you know how long it takes to get another country to cooperate with an investigation, even if you're buddy-buddy with the country?


All I can say is that the information surrounding the investigation, the case, the court proceedings, and all of the evidence is largely publicly available. Instead of making guesses as to the legitimacy of the ostensible reasons for Ulbricht's capture, you can do what I've done and read up on all of it. There's nothing I can say or do to convince you if your attitude is such; only direct knowledge of the case will satisfy you.


I think the known construction only required access to his public writings posted to Silk Road, Google, and patience. He posted uncommon, unique phrases both to Silk Road and to public websites using accounts linked to his real identity. If anything, I think it shows quite a bit of incompetence that it took so long to figure out his real identity.


This is absolutely plausible


His opsec was so poor that even if parallel construction was a possibility, I'm not sure why they would bother.


Not so sure. I don't remember the exact details, but it involved a post on the Shroomery website and another on Stack Overflow, and he was using the same handle on various sites. It seemed much easier to unravel than exploiting the Tor network; it's almost surprising it took them that long.


This is not news.


> I read off the address: "152 dot" and they repeated back "152 dot". "19 dot" "19 dot" and then they told me the rest of the network address.

This line seems like the big deal. It doesn't matter whether it's from 2012, or not a 0-day, or about previous posts from this author: how is this possible in 2020 for anyone, let alone a corporation?

Is it this line? - "They just didn't know that this specific address was mine."

Tor should have shut down onion services if this line is true as it seems to read.


So, what I understand is that hidden services can easily be deanonymized. People using Tor only as a proxy are safe, provided they do not download big files (who wants to do that anyway, given how slow it is?).

As a Tor user, I'm quite glad to read that, actually. Tor hidden services are the reason Tor has this ugly and well-deserved reputation of being a tool for everything illegal and morally unacceptable. As someone who just wants strong privacy, I see hidden services as problematic neighbors, and I would be glad to see them go.


Well, sample size of 1 and all that, but I can attest to at least my legit use case. I'm hosting some small friends-and-family services from home and front them with a couple of hidden services. That gives me some measure of mutual privacy between me and my users, as well as strict access control via auth baked into the transport protocol.

The alternative would have been providing a dynamic-DNS-type URL, mucking around with Let's Encrypt and the DNS provider periodically, and then implementing all access control in the servers. I'm lazy, Tor works for this use case, and I'm lucky my users understand the 3 steps to configure their Tor Browser, so I'm sold on the usefulness of this mechanism!


Oh yes, indeed. I'm not implying that all hidden services are objectionable, but those that are objectionable are numerous and a major reputation problem - to the point where we would be better off without hidden services.

I love how you used it for relatives group privacy, though, that sounds cool.


This has been well known for a long time. It's even vaguely referenced in the Snowden leaks.

I wish Tor had never become an activist project. There are a lot of groups with nice-sounding names like "Human Rights Watch" - which seem less nice once you find out who funds them and some of the things they support - that started offering loads of money, around 2010, to groups producing this kind of technology.

Tor took the money and transformed from an academic project into an activist one, in terms of both staff and marketing, and I think a lot of people are now using technology they have been told will keep them safe but which is actually only a few steps away from bunkum.


Do you have a better alternative to Tor? All of my understanding and reading has led me to see it as a great solution for anonymous web browsing.


I mean, I still like Tor. It depends entirely about what your threat models and goals are.

I2P never gets any attention compared to Tor, but it keeps chugging along. In many ways, it's a lot better. Their most recent release was on 2020-08-24.

https://geti2p.net/en/


I can't assess the merits of I2P, but in the comment section of the post I found this:

> i2p is substantially worse.

> It is worse BECAUSE every user is also a relay. I can sit and watch the connection, allowing me to map out each user's address. If your server is up long enough, you should see everyone eventually.


Really, it's just a different thing. It's an entirely P2P network, so yes, that's what you get. But anybody can run a Tor node too, as the article points out. (Practically speaking, I2P is better for torrents.)


I've heard good things about Nym. https://nymtech.net/docs/overview/


I have a bookmark on GNUnet but never dug deeper into it.


The post seems to be focused on hidden service operation. Traffic confirmation can be used to attack normal users but it is a lot harder.



