Tribler Makes BitTorrent Impossible to Shut Down (torrentfreak.com)
178 points by Garbage on Feb 8, 2012 | 60 comments



For people interested in the more "anonymous" parts of filesharing, I wrote my Bachelor's thesis on the subject: http://blog.marc-seeger.de/2008/07/23/the-current-state-of-a...

It's a bit dated at this point, but it still gives an OK overview of networks like Gnutella, eDonkey and BitTorrent, and also goes into the more exotic ones.

As far as I can tell, the currently 'usable' clients that use darknets and turtle hopping are OneSwarm (http://oneswarm.cs.washington.edu) and AllianceP2P (http://code.google.com/p/alliancep2pbeta/).

P.S. Sorry about the language; it was my first major paper written in English, so I'm a bit heavy on passive constructions and run-on sentences.


Sadly there's no mention of I2P, even though it was working quite well even back then: http://www.i2p2.de/

You can use BitTorrent inside it, and there is an eMule port, Gnutella, etc. It works quite well; I often see BitTorrent download speeds of ~50 KB/s. It is a self-contained network with some trackers of its own.


True, if I had more time I would have added I2P too. Since the thesis was only worth 12 out of 30 credits that semester, I opted to look at the anonymizing properties of 'regular' file-sharing software and at software with anonymity as its main focus, rather than at generic overlay networks as well. That would have been the next step, though :)


What do you think of the Phantom Protocol? It's very focused on the anonymity part, but also aims to be "indestructible":

http://code.google.com/p/phantom/


I remember adding it to my "read later" list at some point in the past, but that's an ever growing stack of PDF files :)

While I can't say anything about it from a networking point of view, a big problem is that there is no easy "how can I use this" guide on the project page. I firmly believe that a well-packaged implementation is the basis for anything :)

It also seems to be an overlay network. I personally think that an encrypted friend-to-friend darknet approach is more useful as a first step than trying to do it at internet scale. "Social" darknets offer a good balance between transfer speed, security, and people being OK with their upload being used (since it's by their direct friends). I'm still waiting for usable software that integrates with existing social networks (Twitter, Facebook, ...) to gather peers.


A thesis on anonymous P2P networks that fails to mention basically any of the notable anonymous P2P networks (e.g. Freenet, Share, Perfect Dark, ...) or protocols (WASTE).


This wasn't a thesis on anonymous P2P networks themselves; I was only looking at actual, end-user-friendly implementations of file-sharing software rather than at overlay networks. I should probably have clarified that a bit more.

I was also mostly focusing on networks with active communities that are still in development.

- WASTE isn't developed any more and hasn't been for quite some time

- Share and Perfect Dark are both closed source and only active in Japan. This makes them pretty much useless (closed source) and I wouldn't call them "notable" if they're limited to a single country and some scattered anime fans. Users of these networks have also been arrested in Japan for copyright violation, so I'm not sure how good their implementation actually was.

As for Freenet: that's true, I probably should have spent the time on it. Sadly, the thesis was only worth 12 of 30 credits that semester, so most of the time I had to work on other things, and I was a bit limited when it came to depth and breadth.


Thanks - I'm reading your thesis now. I like how you discuss the impact of anonymity on each of the layers of the network stack.


Very nice, but the concept is not new. Don't you remember the Kad/Kademlia network, which was introduced together with eDonkey/eMule and invented back in 2002?

It didn't get any traction then but it was completely decentralized (based on distributed hash tables).


Kademlia did get traction; it's the DHT protocol behind BitTorrent's magnet links, trackerless torrents and other features.

It was not designed to act as a gossip-like overlay network for indexing content (the way eDonkey/eMule do); it was designed to act as an overlay network for looking up the metadata of known keys stored in the DHT.
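For anyone who hasn't run into Kademlia before, here's a minimal sketch of its XOR distance metric, which is the idea behind DHT lookups; the helper names here are mine for illustration, not taken from any real client:

    import hashlib

    def node_id(seed: bytes) -> int:
        # Derive a 160-bit ID (SHA-1 sized); real clients pick node IDs randomly.
        return int.from_bytes(hashlib.sha1(seed).digest(), "big")

    def xor_distance(a: int, b: int) -> int:
        # Kademlia's notion of distance is simply the bitwise XOR of two IDs.
        return a ^ b

    def closest_nodes(target, nodes, k=8):
        # Return the k known nodes closest to a key such as an infohash.
        return sorted(nodes, key=lambda n: xor_distance(n, target))[:k]

    infohash = node_id(b"some-torrent")
    known = [node_id(bytes([i])) for i in range(50)]
    print([hex(n)[:12] for n in closest_nodes(infohash, known)])

Lookups simply walk toward nodes with ever-smaller XOR distance to the key, which is why no central index is needed.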

The BitTorrent checklist is as follows:

1. [X] peer to peer file transfer

2. [X] Kademlia/magnet links for decentralized metadata

3. [X] Kademlia/peer exchange for decentralized tracking

4. [X] Gossip protocol for indexing magnet links

5. [ ] integrated data proxying for anonymity/plausible deniability

Tribler solved step 4; OneSwarm and some others are trying to solve step 5.

We're indeed on the cusp of an impenetrable file sharing network.
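To make step 4 a bit more concrete, here is a hedged sketch of epidemic ("gossip") dissemination of magnet-link metadata between peers. It illustrates the general technique only; it is not Tribler's actual protocol or wire format, and the class and field names are made up:

    import random

    class Peer:
        def __init__(self, name):
            self.name = name
            self.index = {}  # infohash -> human-readable title

        def publish(self, infohash, title):
            self.index[infohash] = title

        def gossip_with(self, other):
            # Exchange indexes; after one round both peers hold the union.
            merged = {**self.index, **other.index}
            self.index, other.index = dict(merged), dict(merged)

    peers = [Peer("peer%d" % i) for i in range(20)]
    peers[0].publish("c12fe1c06bba254a9dc9f519b335aa7c1367a88a", "ubuntu-12.04.iso")

    # A handful of random gossip rounds spreads the entry to most peers.
    for _ in range(6):
        for p in peers:
            p.gossip_with(random.choice(peers))

    print(sum(1 for p in peers if p.index), "of", len(peers), "peers now index the torrent")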


>impenetrable

Until they just block the protocol with deep packet inspection. (Encryption alone won't help; you can analyse packet size/timing to fingerprint protocols pretty accurately.)


If packet timing or sizing gives away the protocol, then it seems to me that one could re-rig it to mimic the timing and sizing of another protocol.
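As a toy illustration of reshaping traffic, here is a sketch that just normalizes packet sizes into fixed buckets and jitters the send timing (rather than mimicking a specific protocol); the bucket sizes and delay ranges are arbitrary assumptions, and real traffic-morphing systems are far more sophisticated:

    import os
    import random
    import time

    BUCKETS = (256, 512, 1024, 1460)  # pad every payload up to a "typical" size

    def pad(payload: bytes) -> bytes:
        # Pad to the next bucket so packet lengths stop identifying the protocol.
        size = next((b for b in BUCKETS if b >= len(payload)), len(payload))
        return payload + os.urandom(size - len(payload))

    def send_obfuscated(sock, payload: bytes) -> None:
        # Jitter the inter-packet timing, then send a length-prefixed, padded frame.
        time.sleep(random.uniform(0.005, 0.050))
        sock.sendall(len(payload).to_bytes(2, "big") + pad(payload))

The receiver reads the two-byte real length, derives the bucket size from it, and discards the padding.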


Interesting. Has there been any success in disguising encrypted BitTorrent traffic as another protocol? I guess it'd have to be a protocol whose packets contain a large amount of random data and a large number of connections with different foreign addresses. Hmmm.


Skype would seem to fit the bill.


That was my best guess too, and Skype is also ubiquitous and would be extremely difficult (popularity wise) to censor. But I doubt video data is truly random -- it probably has some structure to it.


Encrypted video/voice data looks like encrypted Bittorrent data.


Oops, forgot Skype video was encrypted. Duh. In that case it'd do fine.


Rogers already does this (cripples downloads and limits uploads to ~80 KB/s). They've forced me to switch to Usenet, as even popular torrents slow to a crawl on their Ultimate plan (50 Mbps down / 2 Mbps up).


There's also the possibility of a 'seed box': a remote server that downloads your torrents, which you then download directly to your local machine from the server. I've heard of them (mostly through reading articles/comments on TorrentFreak), but I've never been curious enough to investigate the economics of them.


Curious why you would move to Usenet unless you're using a free service. The remaining DDL services (such as DepositFiles and RapidShare) are cheaper and tend to have faster download times. Is there something I'm missing?


I pay less than $7/month for my Usenet connection, which is relatively cheap if you use it frequently. The connection is over SSL, so it's encrypted between me and the provider. My average download speed is 9 MB/s (saturating my line), which means I can download a 720p HD movie in just under 13 minutes. There's also (as with torrents) the ability to set up TV show RSS feeds, so my shows automatically download to a folder, ready to watch, as they're released. It comes down to ease of use.
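For the curious, the RSS automation boils down to something like the sketch below: poll a feed, match titles against a watch list, and hand matching links to a downloader. The feed URL and show names are placeholders, and a real setup would queue the link with its download client instead of printing it:

    import time
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.com/nzb-feed.rss"   # placeholder feed
    WATCHLIST = ("Some Show", "Another Show")        # placeholder titles
    seen = set()

    def poll_once():
        with urllib.request.urlopen(FEED_URL) as resp:
            root = ET.fromstring(resp.read())
        for item in root.iter("item"):
            title = item.findtext("title", default="")
            link = item.findtext("link", default="")
            if link in seen or not any(show in title for show in WATCHLIST):
                continue
            seen.add(link)
            print("queueing", title, "->", link)     # hand off to the downloader here

    while True:
        poll_once()
        time.sleep(15 * 60)                          # poll every 15 minutes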


Megaupload was taken down and people are leaving DDL services in droves. Usenet is relatively safe, very fast, and has been around for years.


It's important to note that, at least in the U.S., the Supreme Court ruled in Reno v. ACLU that blanket censorship of online content simply because it might be illegal is unconstitutional.

It was further affirmed in MGM v. Grokster that a file-sharing service is only accountable if it advertises or promotes the use of the service to violate the law.


Wouldn't it be possible to randomize packet padding and delays to get around this?


This is absolutely inevitable. Attacking file sharing will just lead to better file sharing technology. We should be thanking the RIAA and MPAA really.


Oddly enough, the top levels of the piracy chain still use FTP and IRC. Even though security improved right after the early-2000s busts, it has remained relatively unchanged since then (even taking into account the RELOADED-related busts ~8-12 months ago). But I guess it all comes down to scale: it's easier to control the security of 100 users than the security of millions.


Sort of like the rabbit thanking the eagle for the evolutionary pressure to make them faster and faster?


I was being sarcastic


Sarcasm or not, it's a good point. It's hard to see a viable endgame for the MPAA/RIAA if newer file-sharing technology simply routes around the obstacles they throw up.

The broad trend has been toward easier, faster, more secure file sharing. Does anyone really believe that trend is going to reverse course?


I don't see how this is any different (practically, not technically) from eDonkey/KaZaA/all the other file sharing networks of old. It sounds like it will suffer from the same problem of low SNR as the others did.

Basically, when anyone can add any file they want (for example, eMule and friends scanned directories and added whatever they could find), noise is invariably much higher. Torrent trackers act as curators, ensuring a high quality of uploads (especially on private trackers), thus increasing the SNR.

I'd love it if someone could prove me wrong, though. Does anyone have any insight on this?


Doesn't Skype have a similar "superpeer" architecture? They showed with their outage last year that it is still possible for a decentralized network to go down.


Skype is a hybrid. They rely on central servers for authentication, contact lists and, I think, seeding the peer list - and, obviously, interchange with POTS. Only the "heavy lifting" (actual voice/video/chat traffic) is P2P.



It's less likely that someone will be able to take out all (or most) superpeers at the same time... at least not as effectively as the core software malfunctioning.


Less likely for sure, but it doesn't mean that a network relying on superpeers is invulnerable. The risk of a centralized dependency is spread out among many, many nodes, but the risk still exists. (I'm just annoyed at articles like this that make the decentralized approach sound bulletproof; it's hyperbole.)


The network itself is still resilient.

Skype's issue was getting the P2P network back up and running ASAP because 1) people pay them for a service that was down and 2) (voice|video|text) chat generally has a higher priority than file sharing. People wouldn't be as ticked off if the P2P filesharing network went down for a few days (or a week) before rebuilding itself.

What happens when all superpeers go offline at once is that the 'normal' peers start DDoSing any remaining superpeers (or the few superpeers that manage to get back up and running quickly). As people settled down from trying to reconnect, the network would eventually rebuild itself (possibly with completely different superpeers). Once you knock out all superpeers at once, the network becomes quite a bit less distributed.

That said, once all superpeers are down, it could be easy to keep manually DDoSing superpeers as they appeared on the network, turning it into a targeted attack. A malicious actor could keep up the DDoS even as regular users let up on their inadvertent one following the sudden change in network makeup.
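For what it's worth, the standard way clients avoid that kind of inadvertent reconnect stampede is randomized exponential backoff. A minimal sketch follows; nothing here is claimed about what Skype or Tribler actually implement:

    import random
    import time

    def reconnect(try_connect, base=1.0, cap=300.0):
        # Retry with exponentially growing, jittered delays ("full jitter"),
        # so thousands of clients don't hammer a recovering superpeer in lockstep.
        attempt = 0
        while not try_connect():
            attempt += 1
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))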


Every time I start the application, it creates a "TriblerDownloads" folder on my desktop (I'm on a Mac). I've even changed the default downloads folder to ~/Downloads/TriblerDownloads... it still creates the desktop folder at launch.

Also, is there a way to force outgoing encryption (you can do this in uTorrent)?


I remember testing Tribler a few months ago, and while I think the idea is brilliant, I didn't keep it for long, for the reason you mention above. One of the reasons I switched to OS X in the first place is the high quality of the applications. I would love to try Tribler again if they create a native version for each platform, or at least a version which looks and behaves according to the standards of each platform.


Something I've always wondered about this sort of thing (but have been too lazy to research on my own) is: in the absence of any centralized point, how do new users even find the swarm? My assumption is that all of these networks have at least one centralized server (or a handful of them) that acts as a stepping stone into the swarm. Is there something obvious I'm missing?


That's the typical approach. Either the client ships with a list of IPs/hostnames for seed servers, or you update a single well-known DNS name with a bunch of A records. For maximum resilience you can put a whole series of DNS names into the client, registered under different national registrars. This is often how botnets receive command-and-control messages: look up RANDOM_HEX.ch and download a signed script to run.
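A minimal sketch of that bootstrap step, assuming a placeholder list of seed hostnames shipped with the client (nothing here reflects Tribler's real seed list):

    import socket

    SEED_HOSTS = ("seed1.example.net", "seed2.example.org", "bootstrap.example.com")

    def bootstrap_peers(port=6881):
        # Resolve each shipped hostname and collect whatever addresses answer.
        peers = []
        for host in SEED_HOSTS:
            try:
                for info in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
                    peers.append(info[4][:2])   # (ip, port)
            except socket.gaierror:
                continue                        # seed unreachable; try the next one
        return peers

    print(bootstrap_peers())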


I figured that's what existing P2P solutions did. Does Tribler do that too? Because it can't really claim not to rely on central servers at all if it still needs them as a stepping stone.


From the article: "One thing that could theoretically cause issues, is the capability for starting users to find new peers. To be on the safe side the Tribler team is still looking for people who want to act as so called bootstraptribler peers. These users will act as superpeers, who distribute lists of active downloaders. 'Together with software bugs and a code cleanup, that is now our last known weakness,' says Pouwelse."

I guess that to solve this problem the distribution must be decentralized and the superpeer addresses delivered with the client (as stated in the comment above).


What's special about the .ch?


Switzerland is neutral.


Does Tribler contain countermeasures for the Sybil attack?


Sybil attack: insightful question. Yes, we protect against it: http://www.asci.tudelft.nl/media/proceedings_asci_conference... - Tribler founder


"The BarterCast mechanism was designed by Meulpolderet al. to distinguish free-riders and cooperative peers in file-sharing environments. After the first release, Seuken et al. [?] proposed an improvement to make it more resilient against misreporting attacks. Their solution is based on ignoring some of the feedback reports. Also, this solution could cut down the severity of the attack, but on the other hand it increases the feedback sparsity."

Do you have the cite handy for the paper referred to in section 2? I'm curious about the problem of network degradation due to pollution by an adversary whose clones attempt to maximize their reputation so as to isolate non-clones before initiating the attack.
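As a much simplified illustration of the idea in that quote (not the actual BarterCast computation, and with none of the misreporting defences modelled), a peer's score can be thought of as its net contribution across reported transfers:

    from collections import defaultdict

    # Each report: (uploader, downloader, megabytes transferred).
    reports = [
        ("alice", "bob", 500),
        ("bob", "carol", 100),
        ("carol", "alice", 50),
    ]

    def net_contribution(reports):
        score = defaultdict(int)
        for uploader, downloader, mb in reports:
            score[uploader] += mb      # credit for uploading
            score[downloader] -= mb    # debit for downloading
        return dict(score)

    print(net_contribution(reports))   # free-riders end up with negative scores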


I wonder why people seem to ignore the fact that lawmakers already know it is impossible to shut down file sharing. It is a power play. Just as it is impossible to stop marijuana use, yet that has no effect on the repercussions of the laws and the outcomes for certain corporations and those accused.


Honest question here: I always thought that Kazaa and eMule were decentralized. How was that set up?


They were; the problem was that the protocols weren't very good, and that they were closed source, so there was a single point at which to cut them off: the point of distribution. It seems to me that the same reasoning used in the lawsuits against Kazaa (which ultimately caused Kazaa's demise) could be applied to this product. Time will tell, I guess, when it gets big enough to target; for now the centralized torrent trackers are a lot easier to use and more widely known. Tribler being open source might make it harder to suppress with legal tools. I think the logical step (if it were to become mainstream) would be to sue uploaders individually.


Tribler might be helped by its veneer of respectability: it's funded by P2P-Next (http://www.p2p-next.org/) as part of an EU-wide program to create P2P distribution channels.



Sorry about that; our central servers are flooded. We're bringing a few more online now for dedicated .EXE downloading. It should be running again fairly quickly.


Tribler.org is down; is there an alternative link?


Not everyone wants to seed all the time, so splitting the program into a BT core and a BT file manager might be an idea.

The BT core program looks after the online user and search data, sharing this with other clients. This could be always on.

The BT file manager, meanwhile, handles seeding/leeching through the BT core program. This lets you participate in the group, acting as a superpeer with up-to-date search data, without having to run a full-blown BT client.
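A hypothetical sketch of that split: an always-on core answering search queries over a local socket, with the file manager connecting only when the user wants to seed or leech. The command names, port and wire format are made up for illustration:

    import json
    import socketserver

    class CoreHandler(socketserver.StreamRequestHandler):
        # The always-on "BT core": answers queries from the file manager.
        def handle(self):
            request = json.loads(self.rfile.readline())
            if request.get("cmd") == "search":
                # A real core would consult its gossip/DHT index here.
                results = [{"title": "example result", "infohash": "ab" * 20}]
                self.wfile.write(json.dumps(results).encode() + b"\n")

    if __name__ == "__main__":
        with socketserver.TCPServer(("127.0.0.1", 52780), CoreHandler) as server:
            server.serve_forever()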


Coral Cache for Tribler Downloads: http://dl.tribler.org.nyud.net/download.html


This sounds to me a lot like DC++, but using P2P. Does anyone remember that DC client that could download from multiple peers?


StrongDC++ was the one I used to use. There are many with swarming enabled.


If this mattered, people would be using Gnutella.


What do people do with this?



