BitTorrent v2 (libtorrent.org)
1227 points by jakobdabo on Sept 7, 2020 | 560 comments



I make P2P tools too. [0] Let me tell you this: Bittorrent is one of the few things in the space that actually ... works.

It works not in the sense that there is a white paper saying it should work. Not in the sense that there are a few company-made swarms hosted on industrial servers that keep everyone up and alive, so that the thing gives the impression that the 'P2P' network works. Not in the sense that there is a very well-oiled marketing machine talking about Web 3.0 that allows its founder to go on TechCrunch and talk about the upcoming distributed paradise.

Bittorrent works in the sense that you install it on your computer and it does something for you. And for that alone, it has my immense respect and attention.

It's a tool that doesn't pitch that it's a P2P tool - it doesn't try to convince you with sob stories about how using P2P helps fight against the big bad evil web. Instead, you use it because it's genuinely the best at what it does: it being P2P is not a selling point, it's just how it happens to work, and that is exactly what it should be.

That is something all P2P developers should aspire towards.

[0] Aether P2P: https://getaether.net


> It's a tool that doesn't pitch that it's a P2P tool - it doesn't try to convince you with sob stories about how using P2P helps fight against the big bad evil web. Instead, you use it because it's genuinely the best at what it does: it being P2P is not a selling point, it's just how it happens to work, and that is exactly what it should be.

This should be something that every creator who markets or sells products should learn. Consumers don't care how it works; it just has to be better than the solution before it. Moralistic appeals to how things ought to be don't work (for the vast majority of consumers). They just don't care; they want a solution to their problem.


This is exactly the problem with all cryptocurrency currently. It’s a massive user experience issue, in the sense that users have to experience the technical bullshit of how the currencies work, completely missing the brilliant part of real money: it just works. I hand people money, they give me things. I swipe my credit card, I get things.

I can’t remember who said it originally, but there’s a great test you can give to any statement or idea: just immediately ask “who cares?” The best product demos I’ve ever seen answer “who cares?” in each part of the pitch. The worst ones just rattle off mumbo jumbo forever.


It has been a few years, and despite a few attempts, I still don't really understand bitcoin. Yes yes ledgers and proof of work and yada yada, but I still don't know what I would need to do to buy something with bitcoin right now.

As I understand it, I need to do something that's pretty much exactly like opening a bank account, but transactions take forever, they cost money, and the currency is not accepted by any business I use, unlike my free credit card.

I have 55€ in my wallet and everyone on this continent knows what to do with it.


It's simple: it is an abject failure as a currency. It is fairly successful as a security for the purpose of speculation. If you aren't a financial speculator, you really don't need to pay attention to it.


It's also fairly successful as a way to transfer money.


I am pretty lucky that I only have to transfer money between 2 countries that have heavily linked banking systems (US -> Canada), but I am not entirely sure what Bitcoin would get me in terms of transferring money. The fees are quite small on Transferwise, and also, I have… I don’t know what you want to call it… accountability? Reversibility? Reliability? Whatever fee I have to pay, the fact that I get a known third party with a paper trail is peace of mind.


It’s great to live in the developed world during peacetime.

Sometimes you have to transfer money in and out of a country at war, with its currency in free fall and capital controls in place. At times like that, the “normal” ways take about 30% of the sum as transactional overhead, while bitcoin doesn’t.

Sometimes the “normal” ways just don’t work normally. Sometimes your government is actively working against your ability to transfer your money for whatever reason — be it drug laws or capital controls or what not. Sometimes you enter an account number into the “normal” system and the field just turns red for no reason at all. Sometimes your access to the “normal” system is severely restricted because of your legal status and residency rights.

So yeah, transferring money between Canada and the US as a resident of either of them, while not doing anything funny — that is not a use case for Bitcoin.


Does this hypothetical situation you describe exist anywhere outside of the hypothetical situation posited by a character in a Neal Stephenson novel?

Seriously, in reality the specifics of any sort of wild situation like the one you describe matter. Which countries? Which currency? Which kind of illegal behavior?

And (I ask out of ignorance), in the scenario you describe, why would whatever group managing whatever bitcoin exchange is being used remain virtuous and not charge a 30% markup themselves?


> Does this hypothetical situation you describe exist anywhere outside of the hypothetical situation posited by a character in a Neal Stephenson novel?

Those are indeed a bunch of real situations that happened to me in 2014-2015, and some of that is still a thing. Check your privileges, sweet summer child.

Like seriously — for the first 30 years of my life it was outright illegal to have a bank account in a foreign country in my name. Not enforceable in practice, except that I can’t make wires between the two.

So, 2014, fucking Russia is being Russia again. USD wired from foreign clients goes in... and is sold for UAH at the government-mandated rate. I sit in a third country and withdraw it through an ATM. Guess what? It is converted back to USD at the market rate. Of course you can still bring cash or BTC in and sell it at the market rate plus some margin, but not 30 damn percent.

Sometimes it’s less dramatic and the web interface of your bank doesn’t have a regexp for the IBAN format of the destination country. Ridiculous, but it has happened as well, so I have a proxied IBAN that starts with GB instead of UA, and that always works.


This exact situation is happening now in Argentina. If someone sends you USD, they get converted to pesos at a ridiculously low rate. That's why most freelancers use crypto as a way to get their earnings.


Fair. I don't know much about this world.

But, like, the classic "society could collapse" fear-and-doubt, doom-and-gloom, grab-the-gold-and-the-burner-laptop-and-the-ammo argument can be used for just about anything.

Heck, even when society actually is collapsing, the argument isn't actually that great an argument!

I'm not even arguing that cryptocurrency couldn't have value in the scenario you describe. Just that genuine, specific, examples are more compelling than generalized scenarios. Generally speaking ; )

For example, all of the more specific examples folks talk about below (Argentina, etc).

Plus the parent poster made fair points, based on my limited knowledge, about accountability issues with cryptocurrency. And trust. Especially for nontechnical folk, and even for technical folk. Even (especially!) when society is collapsing, there's always someone out to scam you ...

For argument's sake, if someone is in the scenario you describe, what are good initial resources for a nontechnical person? For instance, should I use something like a Coinbase wallet?

https://www.reddit.com/r/BitcoinBeginners/comments/fdykz3/is...


Well, I haven’t really talked about the collapse of society here. I guess it’s difficult to even define what it means precisely. I could imagine that the Sarajevo siege and a few years around 1991 in the former USSR could qualify.

Then again, the people I talked to in 2014 were still a bit in denial till the start of the school year, and only then left the area directly affected by the war.

My point is simple — you take the ability to access the banking system for granted, when it’s far from universal. It’s also not as unrestricted and interconnected as bitcoin or the internet itself. Sometimes you have a bank and a card, but your ability to transact is restricted. It doesn’t always mean that you sit in a basement hiding from daily artillery strikes or wander a post-nuclear desert on a cool bike.


For sure. And that makes sense.

I guess I was more pointing out that right now it seems highly scenario-specific whether or not cryptocurrency can help in the way you described.

Doesn't it depend on the amount of money you're dealing with, what you may or may not need to do with said money in the near-future, your own technical competence/facility, and perhaps even the current state of cryptocurrency itself?


The government doesn't have to collapse for BTC to be useful for this (indeed, if it did, it'd likely bring down the infrastructure that BTC runs on!).

You just need to be in a situation where the law prohibits, or somehow restricts or severely taxes, the money transfer itself, or the use of that money to procure something you need.


> Check your privileges, sweet summer child.

Please don't do this here. It is needlessly condescending and insulting.


It is, but maybe start by telling the OP?

"Does this hypothetical situation you describe exist anywhere outside of the hypothetical situation posited by a character in a Neal Stephenson novel"


That's a legitimate question.


I think they're arguing about tone, not accuracy.


Some people feel insulted when someone thinks that their real problems are imaginary, so a hostile reaction is understandable, I guess.


My bad. The comment I replied to was so tone-deaf that I mistook myself for being on reddit.


Your 'real' situation contains fallacies. How is Russia involved in money transfers to Ukraine? UAH is the hryvnia; Ukraine has had its own currency since 1991. RUB is used in Russia.


Totally nothing happened in 2014 that involved Russia, da. Definitely not a war.


Argentina is literally in that situation now. The government charges an extra 30% tax on any foreign currency purchase (cash, Netflix, anything outside the country), and only allows you to buy 200 USD a month legally. The inflation rate there is over 50%; Bitcoin is more stable than the Argentinian peso. Basically, the government has made it illegal to save money in any currency other than the Argentinian peso.


and yet, bitcoin use is still negligible.


How would you even know that? Most of the value in terms of pesos would be from a small number of wealthy people who have a very strong interest in secrecy, and who are likely using VPNs and other means to shield their activities. My guess is that, for wealthy people who are somewhat tech-savvy or who know someone trustworthy who is, it's a very good option. Certainly better than sending relatives on a plane with $9k in USD cash to get it to a relative abroad for safekeeping.


Depends on whether we measure use by the number of people moving money, the amount of currency moved, or something else.


Sounds like it's actually quite popular there now https://www.nasdaq.com/articles/economic-uncertainty-restric...


Never said it wasn’t, I just mentioned a case where it’s useful.


Fearing return to drachma, some Greeks use bitcoin to dodge capital controls https://www.reuters.com/article/uk-eurozone-greece-bitcoin/f...

WikiLeaks may have amassed more than $46 million in Bitcoin based on the number of coins held by its known wallet address. https://bitcoinist.com/wikileaks-has-received-more-than-46-m...


Yes, what you call a hypothetical situation is my daily life. The country where I live, Argentina, frequently imposes heavy restrictions on buying and selling foreign currencies. There is a hard limit of $200 per person per month, and there are talks of reducing the limit even further.

Sometimes crypto is the only way of sending/receiving money from other countries reliably and cheaply.

Other countries are much worse. Venezuela, for instance, has destroyed the value of its money, so badly that people use it as wallpaper.


I, for example, have friends that use Bitcoin to move money in and out of Zimbabwe.


A few years ago my company had to get US dollars out of Nigeria from the sale of the products we manufactured, to pay the suppliers we have in other countries. Perfectly legitimate reason, but due to currency restrictions in Nigeria the finance guys had to use a scheme involving a mix of cryptocurrencies (mostly bitcoin, I think, but not only). I don't know the cost of the transaction, but we barely covered it from the sales.

I heard there was a similar problem in Egypt, no idea if and how it was solved.


Iraq, Afghanistan, Pakistan, Syria, Libya, Iran, Kosovo, Venezuela, El Salvador: these are all places where war and strife and economic turmoil and the West's sanctions have led a significant number of their citizens to use Bitcoin - and other cryptocurrencies - to manage their lives and feed themselves and their families.

It beggars belief that someone would so callously call the situation in these countries 'hypothetical'. Please, inform yourself about the war and strife that is occurring outside your own bubble. It's very important that we in the West stop ignoring the plight of the people our military-industrial complex is targeting with its weapons.

Bitcoin has held things together in many such places.

Also: Russia.


> Please, inform yourself about the war and strife that is occurring outside your own bubble.

User adamsea is informing himself, that's why he's asking a question here.


Yeah, I didn't see the questioning as "is there really war someplace right now?" but rather "is there really a place where you (or your grandparents) can get bread and milk for bitcoins when normal money collapsed?"


We were specifically talking about BTC as a way to transfer money, as opposed to being used as money itself. I can't think of any place where you'd buy bread with BTC, but there are a few where you might be receiving BTC transfers, cashing them out locally, and using that money. A hawala based on blockchain rather than pure trust, so to speak.


You can buy anything, from anyone, with bitcoin. All it takes is two bitcoin users.

Just because people are starving behind economic blockades and sanctions and mafia hell, doesn't mean they don't RTFM.

(Oh, and you can use BTC on the street, all over the world.)


> You can buy anything, from anyone, with bitcoin. All it takes is two bitcoin users.

Bitcoin users are a tiny, tiny percentage of "anyone".


> You can buy anything, from anyone, with bitcoin. All it takes is two bitcoin users.

Shouldn't we amend that statement to be "You can use bitcoin to buy anything which is being sold by some other bitcoin user?" ;)

Which sure, is potentially anything/anyone, but will obviously vary on circumstances.


Some time ago I had a bunch of Chinese yuan in a Chinese account that I wanted to transfer to Europe. I couldn't do a wire transfer, and I couldn't bring them as cash since my German bank didn't want them. I could either pull 500 € a day via ATM (with large fees), or just send it via Bitcoin.


Why couldn't you do a wire transfer?


Before 2009, export of RMB was completely prohibited. Afterwards, some restrictions were lifted, but AIUI they do not apply to J. Random Laowai.


Nowadays, can people even have bank accounts in their names in China without Chinese ID?


Fiat financial systems are full of the most arbitrary restrictions. My anecdote: I once wanted to deposit money, came to a bank, gave them a bank card:

They: What is this?

Me: Money.

They: No, bring money.


I don't know if "reliable" is a good word for the legacy banking system. The banks in Cyprus for example siezed money from peoples accounts only a few years ago... its much harder for banks to seize bitcoin that you self-custody. Many wealthy people and now companies are beginning to view bitcoin as a safe haven asset because of its decentralized security model and censorship/seizure resistant properties.

https://en.wikipedia.org/wiki/2012%E2%80%932013_Cypriot_fina...


Argentina did exactly that in 2001. Life savings stolen forever. To this day, most people keep their dollars in their houses because no one trusts banks in the long term.


It sounds like A) you don't deal with very much money B) you don't travel much outside the first world.

Bitcoin was the backstop that saved my ass when I went to SE Asia and none of my first-class Australian or US bank cards would let me get cash.


And until recently, a highly successful way to _launder_ it.


Some other cryptocurrencies designed around anonymity are still very effective for that purpose. The only weaknesses are at the spots where the funds enter and exit the "laundry". (Converting to/from fiat or a less anonymous cryptocurrency.)


Why did it stop being that?


Because more secure cryptos, e.g. Monero, have picked up pace.


However "transfer money" is a market need, and there are a number of companies that are doing that. I had to switch after I ran the costs of the service vs transaction fees of bitcoin. It's not much, but it adds up.


> it is an abject failure as a currency.

In the US perhaps. In at least one other country however...

https://www.bbc.com/news/business-47553048

https://www.somagnews.com/bitcoin-becomes-the-dominant-curre...


The second article makes a pretty strong claim but does not seem to have any proof other than "Venezuela uses crypto a lot", does it?


I feel it's also a pretty decent store of value. Now some people will probably want to tell me that it's a bad store of value, since the value can fluctuate quite a bit on a day-to-day basis. Sure, that's true, but in the long run I am confident the value will only go up, whereas many fiat currencies will only go down, especially now due to QE (basically printing money) happening in the Eurozone, the US, and many other regions [0][1] of the world as well.

So in my view, it's a pretty nice alternative to holding gold. Gold can be nice as well of course, but it's difficult to transport. If you keep gold in your home, at some point someone might break in and steal it. You could pay some company to store your gold, but perhaps in times of crisis a government might step in and claim some of it.

Cryptocurrencies can be easily held in a digital wallet, or people can even just memorise the keys to their wallet. It's hard for governments to control cryptocurrencies, and I'm sure some of the wealthiest people in the world now keep a small percentage of their wealth in crypto as a hedge against fiat currencies, stock market crashes, etc...

---

[0]: https://news.bitcoin.com/venezuela-bitcoin-use-hyperinflatio...

[1]: https://www.financemagnates.com/cryptocurrency/the-impact-of...


>As I understand I need to do something that's pretty much exactly like opening a bank account

Not really. If I want a bank account I need to fill out a page of paperwork requiring my name, address, social security number, citizenship information, income, occupation, date of birth, and scans of various pieces of ID and/or other documents. Compare this to bitcoin, where you only need to install an app, and new identities can be generated on demand. It's true that trying to buy/sell bitcoins via an exchange will subject you to AML requirements, but in-person transactions won't.
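
To make that concrete, here is a minimal sketch (Python; the function name is mine, and the third-party ecdsa package is an assumption) of what "opening an account" amounts to in Bitcoin: generating a keypair locally, as many times as you like.

    # Sketch only: a Bitcoin "identity" is just a locally generated keypair.
    # Assumes the third-party 'ecdsa' package; ripemd160 availability
    # depends on your OpenSSL build.
    import hashlib
    import ecdsa

    def new_identity() -> str:
        sk = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)  # private key
        pubkey = b"\x04" + sk.get_verifying_key().to_string()  # uncompressed pubkey
        # A P2PKH address is derived from RIPEMD160(SHA256(pubkey)),
        # then base58check-encoded with a version byte (omitted here).
        return hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).hexdigest()

    print(new_identity())  # no forms, no ID checks, repeatable on demand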

>transactions take forever, they cost money [...] unlike my free credit card.

The cost and time it takes are mostly a function of how much demand there is for blockchain space. During periods of high transaction volume, you'll either have to pay high fees or wait a long time. You can see in this chart https://jochen-hoenicke.de/queue/#0,30d that there are periods of very high transaction fees (the yellow peaks correspond to a $3 fee for a typical transaction), but also periods with low transaction fees (the blue troughs correspond to a $0.12 fee for a typical transaction). People in a hurry pay more and people who can wait pay less. It's classic microeconomics.
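
For a back-of-the-envelope check on those numbers (the transaction size and exchange rate below are assumptions for illustration):

    # Rough fee math: total fee = fee rate (sat/vbyte) * transaction size.
    TX_VSIZE = 250            # vbytes, a typical simple transaction (assumed)
    BTC_USD = 10_000          # assumed exchange rate
    SATS_PER_BTC = 100_000_000

    def fee_usd(fee_rate_sat_per_vb: float) -> float:
        return fee_rate_sat_per_vb * TX_VSIZE / SATS_PER_BTC * BTC_USD

    print(fee_usd(120))  # congested mempool: ~$3.00
    print(fee_usd(5))    # quiet mempool: ~$0.13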

Credit cards aren't free either. The cost (around 2-3% in the US) is borne by the merchant. Cash has fees for handling, transportation, and storage. "Free" bank transfers (e.g. Venmo, SEPA) are typically restricted to consumers and are often part of a bigger banking package, which has fees or minimum balance requirements.


> It's true that trying to buy/sell bitcoins via an exchange will subject you to AML requirements, but in-person transactions won't.

Nobody does bitcoin transactions in person, at least not in America and not anyone I know. I have several bitcoins, and the only way to use them is through an exchange like Coinbase, which requires the same identity documents that a bank does.

You could argue “but you just have to find vendors who accept bitcoin!” But with 3 bitcoin I won’t find enough vendors to spend it on.


>Nobody does bitcoin transactions in person, at least not in America and not anyone I know.

https://localbitcoins.com/

>You could argue “but you just have to find vendors who accept bitcoin!” But with 3 bitcoin I won’t find enough vendors to spend it on.

There are also bitcoin-for-gift-card stores, which vastly expands your vendor options.


How is buying gift cards not just a more restrictive way of converting your Bitcoin to the currency on the gift card?


It is, but the restrictiveness is mostly mitigated because you can either 1) buy gift cards to stores you visit on a regular basis (e.g. a nearby supermarket) or 2) buy gift cards only after you've decided what to buy. It's also advantageous compared to converting to cash directly, because there's usually a discount on the gift card (versus having to pay a fee for cash), and AML regulations are laxer.


And Overstock accepts BTC and isn't famous for over-working and under-paying their employees.


localbitcoins has KYC now


The Tor Project does. Coinbase is not the only way to use bitcoin.


> Credit cards aren't free either. The cost (around 2-3% in the US) is borne by the merchant.

In the EU, regulation caps interchange fees at 0.2% of the transaction value for debit cards and at 0.3% for credit cards.

> "Free" bank transfers (eg. vemo, SEPA) are typically restricted to consumers, are often part of a bigger banking package, which have fees or minimum balance requirement

A bank that I use has free SEPA transfers, no monthly fees, and negligible minimum balance requirements, both for consumer and business accounts.


The parent commenter is European, so they probably have a much better experience with banking than you might have in the US.


> I still don't know what I would need to do to buy something with bitcoin right now.

but I think you do. You find a seller who is willing to accept the currency you're using, and you use whatever interface to the bitcoin network you've chosen to do "send x money to y identifier". No different to a regular EFT.

You identified the exact problems. It's not that bitcoin transfers are some weird foreign concept, it's that they take forever, have high fees and aren't accepted universally.

The major users of bitcoin as an actual currency rather than a speculative vehicle are places that can't use the more convenient forms of currency. Guaranteed they'd rather use USD if they could.


> You find a seller who is willing to accept the currency you're using

Well that's a non-starter. If I wanted to buy a beer with bitcoin right now, I'd probably end up drinking tap water.


Same deal if you tried to pay for a beer in the US using Vietnamese dong.

However if you wanted to purchase "illegal" drugs online you'd be hard pressed to find anyone who will accept your USD.


The problem as I see it is that it is nothing like opening a bank account... You don’t need to understand the banking system to use a bank account. Meanwhile, normal people without a serious networking/distributed-systems background will not really grasp what they are doing when using bitcoin, and have to resort to googling steps and following them blindly. The fact that ETH gas prices are going through the roof because of DeFi would be a huge leak in the abstraction for something like a bank.


> You don’t need to understand the banking system to use a bank account.

Same for Bitcoin, you install a wallet app and use it. It's actually much easier than opening a bank account.

> Meanwhile normal people without a serious networking/distributed systems background will not really grasp what they are doing

They won't grasp what they're doing on their bank account without a serious economic/financial/law background either.


Offtopic -

I saw the Bitcoin project on GitHub's Arctic Code Vault[1], advertised heavily. But I couldn't find libtorrent in it, despite it also being hosted on GitHub. Is this because the project owner hasn't enabled it, or is there some other reason behind it?

[1]https://archiveprogram.github.com/


You don't REALLY need something like a bank account (an exchange account) except to convert to/from fiat currency. You only need an electronic wallet. The transactions are where it gets really weird though.

That said, unfortunately, short of a large nation state or a collection of banks allowing exchange/transactions, I don't think it will ever take off. Even then, transaction times, as you mention, are not fast, and transactions are often controlled by a limited number of organizations.

What the technology does offer, is a means for multiple parties that don't trust each other to proceed with a transaction. This is often where "smart contracts" come into play, and also why banks are considering a cryptocurrency system(s) for inter-bank exchanges.

In the end, those that win will largely be the incumbents imho.

I set up bitcoin on a few machines really early on... and had a couple of coins... but I had no way to "use" them and it was just costing electricity at the time... I deleted it all and didn't look back until they cleared $20k a coin. Kind of wish I had at least saved a zip file of the wallets somewhere. Though if I'd kept mining, it may have been worth something.


Most people have no use for most currencies. I, for example, have no use for the Argentine peso.

That doesn't make it useless for someone else, who might for example live in Argentina.

The day you want to hire someone who wants their payment in Bitcoin for some reason, that's when you'll have use for it.


The Argentine peso is also useless to people living in Argentina.


Sad but true.


PGP suffered from a similar problem.


> pretty much exactly like opening a bank account

You don't have to go anywhere, you don't have to provide any kind of ID or documentation, and you don't have to wait around for a salesperson to become free.

> but transactions take forever

Bitcoin transactions are broadcast instantly. Look up "settlement cycle" if you want to make a direct comparison. Bitcoin final settlement usually takes about an hour, compared to anywhere from several days to several months for traditional payments. Credit card settlement is usually 60-90 days.

> they cost money

It costs less than a dollar to move ~any amount of money anywhere in the world with ~1hr settlement. A credit card usually costs something like $0.25 + 3%.

> the currency is not accepted by any business I use... I have 55€ in my wallet and everyone on this continent knows what to do with it.

I have AU$80 in my wallet and I'm pretty sure no one on your continent would take it.

In fact, last time I traveled to SE Asia none of my debit cards worked for cash withdrawal and the only way I was able to get cash was with a Bitcoin ATM. So it worked very well for me!


Communities can create value. The Bitcoin community believes Bitcoin has value, so it does. It has unique tokenomics with a limited 21m supply and is backed by math and cryptography. That's all there is to understand. Look into the history of money. Crypto just allows communities to have value. Soon you will see many communities online creating governance tokens, something like "reddit coins" but better, and you will be able to sell them for another coin like DAI or USDC.

You can already purchase Reddit MOONS from the r/cryptocurrency subreddit here: http://xmoon.exchange/


> Crypto just allows communities to have value.

So do stamps, coins, Star Wars figurines in original packaging, baseball cards, yada yada.

At least you can look at those other things while you possess them.


You can transact billions of dollars across the globe securely with a minimal transaction fee via Bitcoin. Don't see how that's possible with Star Wars cards. Guess some people prefer government fiat currency with unlimited supply. I'd rather store my wealth in something that was launched more fairly for all and deflationary, even if it is high risk. The future for blockchain tech is bright, a lot has happened during the past bear market and the DeFi giant is looming. https://defipulse.com


It's bottom-up; those in a "first world" bubble don't notice its effects. It's already a revolution, banking the unbanked and those at the bottom of the dollar/euro totem pole. Africa, for example:

https://www.reuters.com/article/us-crypto-currencies-africa-...

Adoption will establish itself in Africa, South America, and Asia first... and then it will compete head-on with traditional Western banking services.


The biggest problem that I have with crypto is its insane power usage. As soon as we have a commonly available cryptocurrency that's secure, doesn't use proof of work, and is efficient and stable, I'm in. Until then I can't take it seriously. Crypto is the digital equivalent of driving a Hummer.


TBH, I don't think that's a bad thing at the current moment - the tools and base layers (most notably ETH scalability) are not in a position to handle a huge influx of mainstream users with expectations that things Just Work.

There's great progress being made, but we'll not get there during 2020.


Interestingly, Bram Cohen, the inventor of BitTorrent (v1) is currently working on a cryptocurrency called Chia that is trying to solve all of these problems. Obviously exercise scepticism with all things cryptocurrency, but I'm pretty optimistic about it.


As far as I'm aware, many crypto exchanges also act as wallets and have smartphone apps that feature QR scanning for transfers / payments.

That largely obviates the need for any domain knowledge.


So at this point you're really no longer using the core feature of any blockchain-based currency; you're making changes to some SQL row in a central database that maybe later results in changes made to some blockchain on your behalf. Why even bother at this point? Might as well trade gift cards or airline miles.


[flagged]


There is no universe, neither real nor imagined, where the statement "the rest of the world is moving on from standard [fiat] money" does now or will forseeably parse as true.


I imagine that the pitch for Tesla answers "who cares?" quite easily. People who want to reduce emissions are a pretty big audience right now.


What are they moving to? Where is the data or transaction volume to support that the rest of the world except the US and EU are moving to alternative money?


UX has been a main focus of Bitcoin ever since Lightning made scaling possible. Check out the Strike app for seamless bitcoin (Lightning) payments https://strike.zaphq.io/

Who cares? Anyone who feels like their purchasing power isn't where it should be is either underpaid, or a victim of central banks diluting their dollars with printing.


Meanwhile 3 shady guys in a non-extradition offshore are printing 100 million dollars per month of their toilet-paper tokens, inflating the price of every other token; the only auditing firm dropped them immediately after looking at their dealings. And that is praised as the modern way of money. Revolutionary, even.


> victim of central banks diluting their dollars with printing.

So, nobody in the EU or North America, then.


It's true that there hasn't been much inflation in CPI, but there's no doubt that all the printed money has inflated asset prices.


Well, I think diluting is actually a pretty good word to describe what is happening. It's not really inflation. It's more about rising inequality. Money arrives on the stock market but it doesn't arrive in the job market. The result is an increasingly growing imbalance. Publicly traded companies and their owners greatly benefit meanwhile everyone else is being left behind.


Right, this is surely more disastrous than everyone holding onto their bitcoin because it will be “worth more” in a year by design. /s


In Germany, house prices have doubled over the last 10 years.

In the US, stocks are 187.8% market cap to GDP.

But please, tell us more about how there's no inflation.


Perhaps a better threat scenario would have been "victim of banks confiscating their deposits", which is a much more realistic risk for people in the EU:

https://www.cbc.ca/news/business/cyprus-bank-account-tax-put...

(Also, the grandparent post perhaps should have given some context to the statement "Lightning made scaling possible", such as the fact that Bitcoin originally supported 32 MB blocks of transactions, before being "temporarily" limited to 1 MB. Alternative blockchains have managed without this artificial restriction, and without relying on complicated second-layer "solutions" like Lightning.)


The probability of a bank run is null when compared to the probability of a BTC provider tanking/getting hacked/actually being a fraudulent criminal scheme.


Banks in Cyprus got recapitalized with some amount of Russian money during the 2011-2013 Eurozone crisis, or at least that's the commonly known narrative. It was basically a convenient crackdown on a fiscal paradise within the EU. If you read the last line of the article you linked, it says:

European leaders wanted to limit the size of the rescue loans — which are backed by European taxpayers — to €10 billion. Leaders were also reluctant to bail out Russian depositors whose funds may be the result of tax evasion, crime or money laundering.

Additionally, there are sanctions in place since the 2014 Crimean crisis.

https://www.themoscowtimes.com/2019/01/10/cyprus-no-longer-m...


>This should be something that every creator who markets or sells products should learn. Consumers don't care how it works.

I distinctly remember being a teenager, learning how bittorrent worked, thinking it was the coolest thing I'd heard of, and laboriously explaining how it worked to a friend who sounded interested. At the end of the explanation he asked, "So how do you download music with it?"

And I realized all he cared about was using it to pirate music.

It's really easy for programmers to get lost in how cool some code or technology is and lose sight of how the rest of the world views it.


That’s been exactly my experience with amateur radio.

I’ve given my friends the spiel about how cool it is that I can use the EM spectrum to send my voice through the air to the other side of town. And how other people have put up repeaters, which means my voice can actually go a lot further than that — maybe across the country!

And their response? “Sure, but I can already do that on my phone”


Wow I wish I had a friend interested in stuff like that. That sounds so cool!

After losing a lot of my curiosity the past few years, it’s coming back and my confidence is growing and I want to explore various fields and technologies deeper [and for the first time].

Do you think people don’t know about this beautiful stuff because inventions and products have become overly corporatized and use intellectual property and patents, which stifles curiosity?

I can’t help but dream of a fully open source world where we don’t dominate each other by keeping information about the inner workings of things artificially scarce like we do today - as evidenced by Aaron Swartz, Library Genesis, corporate fight back against right to repair etc.


> It's really easy for programmers to get lost in how cool some code or technology is and lose sight of how the rest of the world views it.

Story of my fucking life.


There's a clip from the first episode of the show "Halt and Catch Fire" (a historical fiction show, with each season hitting some breakthrough in personal computing, starting with IBM-compatible compact personal computers) where one character tells another, "Computers aren't The Thing. They're the thing that gets us to The Thing."

I share it occasionally with friends to remind them of this. Computers are just tools. The really interesting question is what those tools can do for us.


I have similar feelings... I often think of ways one might be able to have anonymous + secure + decentralized messages... unfortunately, any solution becomes easily DDoS-friendly. Then, thinking about which pillar to relax would be best. Then, thinking to hell with it, SMTP works well enough.


I think everyone aspires to this. They just fall short.


Perhaps. I see a lot of developers make stuff (even non-dev related) but then only list their technical specs and features rather than benefits, as if the programming language it's written in is supposed to entice (non-dev) consumers. A corollary is devs making software for their own enjoyment that doesn't solve an actual problem. I think it's more that they don't think with that entrepreneurial mindset yet, but once they realize no one is buying, they'll understand for the next time around.


It's a problem with the creative process itself, I've learned, and it isn't different in other mediums. It follows the McCloud "Six Steps to Art" - starting with surface changes, gradually learning the elements of the craft, and eventually settling on a purposeful idea in the medium or about the medium.

Most developers, and therefore most projects, get stuck somewhere in the middle area, where there is enough skill to create features, but no direction or vision for those features, other than a basic template based on other software.

Often, the project will be called "minimal" to excuse having few features, but only a few projects adhere to a genuine restriction like SLOC limits or eliminating dependencies.

Appeals to commerce as the goal often have a further narrowing effect of making one anxious with thoughts like "project must moonshot on day 1", but then you look at Bittorrent, an all-time software success, and it's like, no, this probably wasn't ever going to create the next Microsoft. Funds were raised to have a Bittorrent company, but the market success it had was always modest at best. And yet it did and still does represent an idea that people can believe in.

The actual thing that I see every great project do, whether it's defined artistically or not, is to define up-front some themes and principles that you believe cohere well, and then direct the specific elements around deeply studying and exploring them. This can manifest as a corporate mission statement, or as an artist's manifesto, or an academic research subject. In final form, it generally manifests as "do one thing exceptionally well", since if you have a very clear idea of the goal, you can direct all your energy upon it. But in the intermediate stages it is still trial and error to learn what fits the theme best, what the actual success metrics are.

The thing is, if you have coherence in the abstract, it's way easier to explain the reason for being: "This is just a manifestation of the principles". Being easy to explain in turn makes it marketable without resorting to sales tricks. And because it deals with general concepts it taps into a breadth of appeal that can't be found just by looking at any one feature. So adhering to principle sets you up for success in a general sense.


> Consumers don't care how it works

In the case of Bittorrent they should know a little bit about how it works before they start pirating software, music or movies using it, however.


Or distributing Linux ISOs or large magnetometer datasets or astronomical observations or...


I think everyone knows that; it is just much, much easier to talk about how it works with fancy buzzwords than to actually make something good.


Not sure how intentional this was, but for piracy, the killer feature is that curation and quality control could be centralized in well-managed trackers, instead of the previous bazaar-like situation on Napster/Kazaa/DC++/etc., where everyone shared their inconsistently named mess of files, with lots of slightly different versions of the same movie and so on. With torrents you can browse the tracker's website and see one canonical library of content, with dozens or more people seeding the same stuff.

The other big thing was seeding the already-downloaded bits while downloading the rest, and making this behavior very hard to disable. This, combined with the ratio systems of the various trackers, ensured that people played nice and gave back as much as they took.

The downside was the elimination of the long tail of very niche content that you could find on DC++ while browsing individual people's messy but unique collections.


One thing BitTorrent did that was smart was to completely ignore discovery. We already have the web, and putting tiny files up on web pages is a much better solution than some half baked built-in distributed search service.


It was good design to decouple the transfer protocol from the search protocol, to allow different technologies to be explored, but in practice people ended up relying on the "good enough" solution of using the web for search.

Setting the precedent that media companies can seize domain names and force ISPs to block access to search engines if they don't like some of the search results (while ruining the lives of the people running those search engines) is arguably a high price to pay for this improved search service.


Yes. For all the P2P distributed praise ITT, torrent as a whole, as it is used in the real world, is very centralized. If the Pirate Bay is shut down, that stuff is gone.

Uploading became a privilege, something that normal average people don't do. Earlier people ripped their own CDs, DVDs, magazine scans, digitized their VHS, recorded shows with their TV card etc. and shared these files. I don't know anybody who has ever created a torrent themselves. Everybody just downloads and seeds the pre-packaged stuff from the centralized sites, much like the walled garden philosophy of app stores.


It's fairly uncontroversial that the success of decentralized protocols can promote - and sometimes, even rely upon - centralized points of engagement.

But your experience sounds like what goes on in public trackers. It's reasonable that folks who go to the Pirate Bay to get their TV shows are not going to be as engaged as those who go to TVV, MS, or MTV. There are also the security risks of letting untrusted users upload files that may get consumed by thousands of people in the first hour.

In private trackers, I have often found that uploading is encouraged, going so far as to provide very detailed guides in their wikis showing how to rip the media, prepare the torrent, and upload it to the tracker. If a better quality version can be uploaded to trump the previous version, that is also encouraged. Learning, quality, and appreciation are cornerstones of the private tracker ecosystem, in my experience.


> earlier people ripped their own CDs, DVDs, magazine scans, digitized their VHS, recorded shows with their TV card etc. and shared these files.

For that, you want Direct Connect [0]. As a one sentence pitch: IRC for file sharing ;)

[0] https://en.wikipedia.org/wiki/Direct_Connect_(protocol)


Or just share magnet links over IRC…


There's still an XDCC warez scene, too. Most of the servers are hacked computers, though, and some people have ethical issues with that.


There must be 1000 Pirate Bay mirrors. Many of which served malware or scams on their web properties through ad networks, but faithfully reproduced the torrent database. It was centralization in precisely the way your post doesn't suggest: nothing was irrevocably lost but one wished for a trusted source with curated links to content.


Well, no. If The Pirate Bay shuts down, you still have the hashes and the files are invariably somewhere else too. That, and thousands upon thousands of backups of their database made by thousands of different people.


There are reencodes, translations and unofficial batches.


Can it be solved with IPFS-based trackers?


Unfortunately search is something that is extremely efficient when centralized but very inefficient when distributed.

A few years ago I was very interested in IPFS, but once I learned about the limitations it was pretty much not suitable for me. If you really insist on using IPFS, then your best bet would be to have a central tracker host an IPNS-based site and publish its database index as a series of JSON files. The JavaScript client would then have to query the index on its own in the browser.

The alternative would be to download the entire database and create a local index. In theory that's not a bad idea, but over time your database will grow to several gigabytes. That's not comparable to just going to whatever site exists today and submitting a search term, but it could be highly resilient.
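
As a sketch of that first approach (every name, path, and the gateway URL below are hypothetical), the index could be sharded by infohash prefix so a client only fetches the one shard it needs:

    # Hypothetical layout: the tracker publishes index shards like a3.json,
    # keyed by the first two hex characters of the infohash.
    import json
    import urllib.request

    GATEWAY = "https://ipfs.io/ipns/example-tracker-index"  # hypothetical

    def lookup(infohash: str):
        shard = infohash[:2].lower()  # e.g. 'a3f9...' -> shard a3.json
        with urllib.request.urlopen(f"{GATEWAY}/{shard}.json") as resp:
            entries = json.load(resp)  # {infohash: torrent metadata}
        return entries.get(infohash)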

The fundamental problem is that untrusted nodes can't provide search services; they could end up redirecting you to a fake listing. Blockchain-style consensus doesn't work for something like a search engine that requires immediate responses.

Also, bittorrent trackers are slightly more complicated than just a file store.


IPFS has an incentivization problem just like torrents.

The main reward for torrent uploaders is fame, would that translate well in IPFS?


Popularity is usually only relevant for individual files, but if you have an entire website hosted on IPFS that is very popular, then it is quite likely that some people are pinning it. The last time I used IPFS it just didn't have the necessary user-friendliness, though. Pinning felt more like an afterthought, but this was years ago, so they may have changed things.


BitTorrent was one of the few things that once I understood how it worked, I realized I was looking at the product of true genius.


The first time I ever felt that way was with the Gnutella network; I instantly wrote a client for it from scratch.

Bittorrent only became that level of magical for me when the DHT got factored in some time later and magnet links became workable.

The other time was Bitcoin.

I’m a big fan of lack of centralized/coordination/tracker nodes.


Really? It's just an iteration on how eDonkey2000/eMule worked, which actually also had a DHT-based 'trackerless' mode, many years before BT came around.


The tit-for-tat algorithm is what makes BitTorrent special and what solved a real problem with the previous gnutella-esque generation of P2P file-sharing programs. As far as I know, eMule did not have that at the time.


BitTorrent uses TCP properly instead of implementing a poor version of TCP using UDP, which is why it is much faster than eDonkey.


Emulating sorta-TCP via UDP is something modern protocols like QUIC also do. It's the difference between 'a poor version' and 'a decent version' that matters.


Except when you are using μTP...


How does tit-for-tat work? Is it still enabled? I don't think I've seen it much lately.


It's implemented in the choking algorithm. Basically, if bandwidth or upload slots are limited, your client will preferentially upload to peers that have reciprocated. AFAIK most clients implement this.
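
A toy version of that rule might look like the sketch below (a heavy simplification: real clients use rolling download-rate averages, re-run the choke round every 10 seconds or so, and rotate the optimistic slot roughly every 30):

    # Toy tit-for-tat choke round: unchoke the best reciprocators plus one
    # random "optimistic" peer so newcomers can bootstrap into the exchange.
    import random
    from dataclasses import dataclass

    @dataclass
    class Peer:
        id: str
        download_rate: float  # bytes/s recently received from this peer

    def choke_round(peers: list[Peer], upload_slots: int = 4) -> set[str]:
        ranked = sorted(peers, key=lambda p: p.download_rate, reverse=True)
        unchoked = {p.id for p in ranked[:upload_slots]}
        choked = ranked[upload_slots:]
        if choked:
            unchoked.add(random.choice(choked).id)  # optimistic unchoke
        return unchoked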


Interesting, thanks!


It isn't so important now most people have fast enough internet.

Back when people left things downloading for days, there were severe shortages of people willing to upload, and tit-for-tat encouraged uploading, solving the issue.


I know that feeling; I first got it reading this paper by some guy named Nakamoto.


Hacking around distributed hash table open source software gave me the utmost respect for the people who came up with this stuff.

Crypto currencies owe a lot of their successes to the pioneers behind technologies that also power bit torrent.


It's been said that git, bittorrent, and proof-of-work were the three necessary prerequisites for the discovery of blockchains.


Don't forget:

- Patricia-Merkle Trees proven in DC++

- DHT proven in eMule Kademlia


Many Gnutella clients were also using Merkle Trees by about 2002.


I wouldn't consider merkle trees to be a core component of Bitcoin. Full nodes don't benefit from them; they're only relevant for light SPV clients, which didn't even exist for a few years after Bitcoin was released.


Merkle trees are everywhere in Bitcoin (full node or otherwise).

The block header has the merkle root of all transactions that are a part of that block. The witness merkle root is stored in the coinbase transaction (if the miner is segwit enabled).

And proof of work is done for the block header, which includes all these merkle roots.


They have been used in Bitcoin since day one in the full node implementation, but full nodes don't benefit from the merkle tree structure in any way. The first client that did benefit from it was bitcoinj, which was released several years after Satoshi birthed Bitcoin.

If light SPV clients weren't a consideration, we could just concatenate all txids together and use the hash of that in the block header instead of a merkle root, and get the same effect.

What merkle trees give you is an efficient way to prove that a certain txid is committed to within a block, without the verifier having to fetch the full list of txids. Instead, he just needs a valid merkle path from the txid to the root, which is much smaller to communicate and to store.

For a full node that has the full list of txids regardless, this is basically meaningless. Full nodes don't (ever) verify merkle inclusion proofs, only that the merkle root in the header matches the full list of block txids.
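
To illustrate the size argument, here is a toy merkle-path check (simplified: real Bitcoin double-SHA256s 32-byte txids in little-endian order, which this sketch glosses over):

    # Toy SPV-style inclusion check: O(log n) sibling hashes instead of the
    # full txid list. 'path' is a list of (sibling_hash, 'L' or 'R') pairs
    # from the leaf up to the root.
    import hashlib

    def h2(a: bytes, b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(a + b).digest()).digest()

    def verify_inclusion(txid: bytes, path, merkle_root: bytes) -> bool:
        node = txid
        for sibling, side in path:
            node = h2(sibling, node) if side == "L" else h2(node, sibling)
        return node == merkle_root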

I would still consider Satoshi's invention to be an incredible breakthrough even if he didn't consider light SPV clients since day one and only described the full node operation mode, therefore I don't consider SPV to be a core component of the Bitcoin breakthrough.

(And also, we know today that SPV is not as great as it was once hoped to be. It puts users at the whims of the miners, with XT/Classic/Unlimited/S2X/BCash being marvelous examples of how that can go terribly wrong. The fraud proof concept that Satoshi described in the whitepaper as part of the SPV model (under the name "alerts") was discovered to not actually be workable due to the data withholding problem, giving this model much weaker security guarantees. And privacy is totally and utterly broken in traditional SPV -- though Neutrino is making good progress on that front.)


From #bitcoin-core-dev on Freenode (shesek is me, sipa is Pieter Wuille [0], one of the most veteran bitcoin core devs)

<shesek> does bitcoin core ever verify merkle inclusion proofs? (I assume not, it only verifies that the merkle root matches the set of txids. but maybe I'm missing some other ways its being used?)

<sipa> i don't think anything verifies them

<sipa> shesek: they don't even ever receive any

<sipa> though they were an essential part of BIP37 [related to light SPV clients]

<phantomcircuit> shesek, for a full node theres no real difference between receiving a merkle tree and a hash of a list

<sipa> yeah, for a full-blocks-only bitcoin like protocol, the "merkle root" stored in the block header could just be a flat hash of all txids

[0] http://pieterwuillefacts.com/


I pretty strongly believe that "blockchain" is a misnomer given how much the structure relies on Merkle Trees. Forks aren't some aberration in the data structure, but a direct exploration of branches in the Merkle Tree. Most blockchain algorithms, Proof of Work especially, are just very rigorous "rebase operations" in git terms.


Blocks are not organized into merkle trees in Bitcoin. Full nodes pick the longest (PoW wise) valid chain and discards any other competing chains and their blocks.

There are some alternative cryptosystem designs that do take blocks in "losing chains" into consideration using a DAG structure, like GHOST and its successor SPECTRE (by Aviv Zohar et al.). Ethereum also has a concept of "uncle blocks", which are rewarded and contribute to chain selection.


> Blocks are not organized into merkle trees in Bitcoin. Full nodes pick the longest (PoW wise) valid chain and discards any other competing chains and their blocks.

That's pretty much the definition of "rebase" and again, that's a functionality of the algorithm on top of the data structure (Proof of Work) not the data structure. The raw data structure is still a merkle tree even if in practice the algorithm suggests to people there is only one rebased trunk. But even that isn't entirely true in practice because there are still multiple rebased "branches" among the Bitcoin forks such as Bitcoin Classic, Bitcoin Gold, etc. All of those are branches that share the same conceptual merkle tree. Even if they aren't "Bitcoin" that's more of an algorithmic and political distinction at that point, not a technical one by means of data structure. It's not the data structure that makes it a chain, it's the algorithm and the politics, hence why I think blockchain is a misnomer for the data structure itself.


Are you sure you mean merkle trees specifically and not just a tree structure in general?

A merkle tree is a very specific type of hash tree, which Bitcoin only uses for transactions and not for blocks. Merkle proofs are used to prove that a txid exists within the root hash committed in the header block. What would be the reason to organize blocks into a merkle tree? What would that let you prove?

See this SE question for more information on how Bitcoin uses merkle trees: https://bitcoin.stackexchange.com/questions/69018/merkle-roo...

> there are still multiple rebased "branches" among the Bitcoin forks such as Bitcoin Classic, Bitcoin Gold, etc. All of those are branches that share...

Bitcoin, BCash and BGold each have incompatible rule sets; a full node will only accept chains that are valid according to its own local set of rules (embedded in its software), so chains of different coins will not even be considered for chain selection, regardless of the proof-of-work backing them. They just don't exist from the full node's PoV. Validity of blocks/transactions comes first; everything else is second.


Today's consumers and developers of "tech" have been trained and convinced to readily accept stuff that works "most of the time".

It is the software I download for free that is the most reliable IME. Spending more money does not make today's consumer computers any more valuable. And that's how it should be. The price of hardware (and software) should continue to fall.


There is something of an unpopular ideology in Computer Science that says you shouldn't build architecture to use cool tools; you should build the cheapest, easiest architecture that meets the user's needs. This ideology is unpopular because using cheap and easy tools isn't exciting, and we developers often can't stand writing boring applications.

Quite a few projects use P2P, machine learning, Kubernetes, NoSQL, or blockchain not because the project actually requires scalability, decentralization, advanced data analysis, or responsive big data, but because somebody got excited and refused to accept that using them was a waste of time and money.


Reminds me of one of the points Joel Spolsky made about peer-to-peer not being the reason Napster was successful:

https://www.joelonsoftware.com/2001/04/21/dont-let-architect...

> Your typical architecture astronaut will take a fact like “Napster is a peer-to-peer service for downloading music” and ignore everything but the architecture, thinking it’s interesting because it’s peer to peer, completely missing the point that it’s interesting because you can type the name of a song and listen to it right away.

> [...]

> If Napster wasn’t peer-to-peer but it did let you type the name of a song and then listen to it, it would have been just as popular.


Off-topic but it was fun reading your original HN submission from 2013 announcing your project: https://news.ycombinator.com/item?id=6787807


Ha, it's been a while. Funny how the world changes.


Aether looks so interesting. Do you have communities that have adopted it ? In what use cases? How do they use it daily?

I like how the data expiration is set as an intentional feature.


We do have a small but dedicated community. It’s quite nice, you should check it out.


> it doesn't try to convince you with sob stories about how using P2P helps fight against the big bad evil web.

A few years ago, that was precisely their marketing pitch.


Aether looks cool, but why do you provide only Snap builds on Linux?

I wanted to try Aether, but I avoid Snap as much as possible


We’re actually just in the middle of our move off Snap. Do subscribe to the mailing list and I can send an update once that’s out.


Given how vocal anti-snap people are, I just wanted to provide a voice in favour of snap. It's always my preferred method of install for anything not in the default archives.


You should keep snap and offer other options if you want, the noise you hear is not a majority, snap works very well for many people and they don't have posts that reach top of HN.


"there are a few company-made swarms hosted on industrial servers that keep everyone up and alive, so that the thing gives the impression the 'P2P' network does work"

What about the trackers?


DHT makes trackers simply an aggregation and redundancy measure.


It should, but in practice it doesn't. I've had torrents that I left sitting for weeks not being able to complete, until I added a list of trackers to them and found some seeders with 100% of the torrent.


Just because not everyone is using DHT does not mean that it doesn't solve the issue.


Same here.

BT only worked for new popular stuff.


I've yet to find something that's truly unavailable. Sometimes it takes a few different searches on different sites but I've always found the obscure Linux distros I've been looking for.


Totally right, though I wouldn’t underestimate the growing community of people for whom p2p is a feature. Being able to use something forever because no one can shut it down is important to some.


I'm confident Bitcoin was based on Bittorrent, there's a lot of overlap. And I feel like it has a lot of the benefits. The main problem with Bitcoin is that it's not purely decentralized (big companies can throw calculating power at it and sway decisions / dominate) and really, really expensive to run.


The actual computation in Bitcoin is cheap. The real challenge is spam prevention in a decentralized system. In bitcoin there is this rule that the longest chain wins. If a spammer can create 1000 blocks based on a chain that is 100 blocks behind the longest then he can "rewrite history" by making his chain win. So how do you prevent spam? Only allow a message every 10 minutes. How do you make sure nobody cheats? Force them to complete a cryptographic challenge that takes 10 minutes on average. Of course the side effect is that the more nodes you have, the more power has to be thrown away to maintain the 10 minutes timer. This is not a scaling problem though. You can just increase the block size to process more transactions or reduce the block time to have more frequent blocks.
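
A toy sketch of that challenge (made-up header bytes and a toy difficulty; real Bitcoin retargets the difficulty every 2016 blocks to hold the ~10 minute average):

    import hashlib, os, time

    def mine(header: bytes, difficulty_bits: int) -> int:
        # find a nonce so the double-SHA-256 of header+nonce falls below the target
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(hashlib.sha256(
                header + nonce.to_bytes(8, "little")).digest()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    header = os.urandom(80)   # stand-in for a real 80-byte block header
    start = time.time()
    print(mine(header, 18), f"{time.time() - start:.1f}s")  # ~2^18 hashes on average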

Bitcoin is kinda like San Francisco. They decided to stick with the current block size and that made any use case other than buying bitcoin to speculate very expensive.


>being P2P is not a selling point

It is kind of a selling point with regards to what most people actually use it for. If not P2P the movie / record industries would shut down the servers.


I think the point is it's an implementation detail, not an end in itself. Users care about the properties it provides, not the method (p2p) of obtaining them.


BT has a very strong theoretical underpinning based on FEC coding. It’s not like it was just bumbled into. It is also quite well engineered.


> not only uses a hash tree, but it forms a hash tree for every file in the torrent [...]

> Files that are identical can also more easily be identified across different swarms, since their root hash only depends on the content of the file.

Wait, content addressed blobs across swarms... does that mean torrents made by completely different people at different times that happen to contain one or more identical files can benefit from each other's peers? If so this feels like a significant feature that would boost the long term health of a lot of torrents and connect more peers that could be helping each other.


This right here is the biggest new feature in my opinion, and should have been discussed in more depth in the announcement. I also think they should have pursued this avenue further before releasing v2.

Content addressing is one of the big advantages of p2p applications, and IPFS has been pushing it for a long time.

Just imagine if this was done all the way down to the piece level. You have two different torrents (say, Linux ISOs) where 20% of the pieces overlap due to similarity (maybe a point upgrade or something and you want both versions). Rather than 100% of both, you only need to download the shared pieces once. Not only that, but say that the latest version has many more seeds/peers, you could download the pieces from that swarm instead, saving the bandwidth of the older torrent's swarm for the remaining 80% you need to download.

This could probably be done client side somehow, but it would be good to see actual protocol support for it so that bittorrent can move in that direction.
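
A client-side sketch of what such an index could look like, assuming v2-style per-file merkle roots (the function and variable names here are made up, not a real API):

    from collections import defaultdict

    # root hash -> set of infohashes (swarms) known to carry that exact file
    swarms_by_root = defaultdict(set)

    def register_torrent(infohash: str, file_roots: dict):
        # file_roots maps each file path to its v2 merkle root hash
        for root in file_roots.values():
            swarms_by_root[root].add(infohash)

    register_torrent("t1", {"ubuntu-20.04.iso": "ab12"})
    register_torrent("t2", {"isos/ubuntu-20.04.iso": "ab12"})

    # any file seen in 2+ torrents can be fetched from peers of either swarm
    print({r: s for r, s in swarms_by_root.items() if len(s) > 1})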


Only if the shared data sits at the same offset in each file and aligns with the block boundaries. Otherwise, different hashes would be generated. I don't expect that to happen a lot.


There are ways around this. See "content-aware chunking", e.g. implemented using rolling hashes [1]. This is for example what rsync does.

The idea is to make blocks (slightly) variable in size. Block boundaries are determined based on a limited window of preceding bytes. This way a change in one location will only have a limited impact on the following blocks.

[1] https://en.wikipedia.org/wiki/Rolling_hash


Rolling hashing is really only useful for finding nonaligned duplicates.

There isn't a way to advertise some "rolling hash value" in a way that allows other people with a differently-aligned copy to notice that you and they have some duplicated byte ranges.

Rolling hashes only work when one person (or two people engaged in a conversation, like rsync) already has both copies.


I think you misunderstood how the rolling hash is used in this context. It's not used to address a chunk; you'd use a plain old cryptographic hash function for that.

The rolling hash is used to find the chunk boundary: hash a window before every byte (which is cheap with a rolling hash) and compare it against a defined bit mask. For example: check whether the lowest 20 bits are zero. If so, you'd get chunks with about 2^20 bytes (1 MiB) average length.

As a good explanation, I'd encourage you to look at borgbackup's internals documentation: https://borgbackup.readthedocs.io/en/stable/internals.html
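
A minimal sketch of that boundary rule, using a simple polynomial rolling hash rather than borg's buzhash (the window size, mask, and constants here are arbitrary; a 16-bit mask gives ~64 KiB average chunks so the demo runs quickly):

    import hashlib, os

    WINDOW = 48            # bytes of context per boundary decision
    MASK = (1 << 16) - 1   # cut when the low 16 bits are zero -> ~64 KiB average chunks
    BASE, MOD = 257, (1 << 61) - 1

    def chunks(data: bytes):
        out, start, h = [], 0, 0
        pow_out = pow(BASE, WINDOW - 1, MOD)   # weight of the byte sliding out
        for i, b in enumerate(data):
            if i - start >= WINDOW:
                h = (h - data[i - WINDOW] * pow_out) % MOD
            h = (h * BASE + b) % MOD
            if i - start + 1 >= WINDOW and (h & MASK) == 0:
                out.append(data[start:i + 1])
                start, h = i + 1, 0
        if start < len(data):
            out.append(data[start:])
        return out

    # identical content at different offsets still yields mostly identical chunks:
    blob = os.urandom(1 << 21)
    a = {hashlib.sha256(c).hexdigest() for c in chunks(b"short header" + blob)}
    b = {hashlib.sha256(c).hexdigest() for c in chunks(b"a much longer header here" + blob)}
    print(len(a & b), "shared chunks")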


I think they understood just fine.

If I discover that the file I want to publish shares a range with an existing file, that does very little because the existing file has already chosen its chunk boundaries and I can’t influence those. That ship has sailed.

I can only benefit if the a priori chunks are small enough that some subset of the identified match is still addressable. And then I may only get half or two thirds of the improvement I was after.


that does very little because the existing file has already chosen its chunk boundaries

If they both used the same rolling hash function on the same or similar data, regardless of the initial and final boundary and regardless of when they chose the boundaries, they will share many chunks with high probability. That's just how splitting with rolling hashes works. They produce variable-length chunks.


The idea is that on non-random data, you are able to use a heuristic that creates variable-sized chunks that fit the data. The simplest way seems to be detecting padding zeros and starting a new block on the first following non-zero byte. There are probably other ways; knowing the data type should help.


That seems fairly unlikely. Not a lot of big files have zero padding, and if they did, they'd compress them. That will reduce your transfers more than any range substitutions ever will.


It will still definitely help some use cases:

> Identical files will always have the same hash and can more easily be moved from one torrent to another (when creating torrents) without having to re-hash anything.


Well if files in an ISO are aligned to some boundary it would help a lot, similar to how filesystems on disk have a sector size, where all files begin at the start of a sector. However, I don't know if this is true in ISO9660 or any of its extensions.


ISO is a mountable filesystem so this could be the case if it doesn't have any space saving optimizations that create variable block sizes.

also need to watch out for compression


Hopefully it would also be good for interop between the two, as anything with a small enough block size could be picked up by IPLD.


>Just imagine if this was done all the way down to the piece level.

I don't know about that. With a reasonably large number of users, wouldn't you start seeing hash collisions via birthday paradox? Might be better off keeping it at the file level. (Though this does incentivize malicious users to find hash collisions of particular files they want to defend, and seeding the swarm with garbage files)


Hashes are now 256 bits. In order to get a collision with probability 1% you need to produce about 4e37 hashes. For comparison, if you had a 5GHz computer that did a SHA-256 hash every cycle and gave 100 of these computers to every human on Earth, it'd take over 300 million years to produce 4e37 hashes.


Yep, basically while it's entirely possible (as per Murphy's Law) that there'll be an accidental collision, the likelihood is close to zero. It's a fair risk to be taking.


> In order to get a collision with probability 1% you need to produce about 4e37 hashes

Does "get a collision" here mean finding a particular collision with a known hash, or finding any collision between any two of the generated hashes?


The latter. I used the formula from the Wikipedia page on the birthday attack: p ≈ 1 - e^(-n^2 / (2H)), where n is the number of items you've picked and H is the size of the universe.
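
Plugging the numbers in (the machine counts are from the comment above):

    import math

    H = 2 ** 256                     # size of the SHA-256 output space
    p = 0.01                         # target collision probability
    n = math.sqrt(2 * H * math.log(1 / (1 - p)))   # invert p = 1 - e^(-n^2 / (2H))
    print(f"{n:.1e} hashes")         # ~4.8e37

    rate = 7.8e9 * 100 * 5e9         # people * machines each * hashes per second
    print(f"{n / rate / 3.154e7:.1e} years")       # ~3.9e8 years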


I see, thanks!


All git objects on GitHub had unique hashes as of 2017, even though git uses the relatively weak SHA-1. SHA-256 should be safe.

https://github.blog/2017-03-20-sha-1-collision-detection-on-...


the birthday paradox comes from the fact that there is an extremely small number of possible birthdays.

sha-256 was chosen because it provides a sufficiently large number of different hashes that this won't be an issue for the foreseeable future.

more details here: https://stackoverflow.com/questions/4014090/is-it-safe-to-ig...


> Wait, content addressed blobs across swarms... does that mean torrents made by completely different people at different times that happen to contain one or more identical files can benefit from each other's peers?

Yes. It also makes your files more discoverable for copyright trolls. They will be able to fully automate the process of sending takedown notices for each copy of zlib's README.txt, that was "stolen" from their software by evil pirates.


> this feels like a significant feature that would boost the long term health of a lot of torrents

Except for so many that are a single archive file. It seems logical that you'd want to compress what you're sending as much as possible before uploading... but in today's bandwidth environment, it actually makes sense to leave certain things uncompressed so you can benefit from swarm overlaps?!? Wild!


Do two identical compressions produce the same output? If so, wouldn't the same thing have just happened with the compressed files?

It's not uncommon to see multiple .torrent files with the same content FYI.


Normally no, compression formats like zip are not guaranteed to produce the same output for the same input. That's why things like torrentzip and torrent7zip were created, to ensure that the same input did create the same output.
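
You can see the nondeterminism with plain gzip (used here only as an illustration), where the header embeds a timestamp; pinning the metadata, which is roughly what the torrentzip-style tools do for zip, restores byte-identical output:

    import gzip, time

    data = b"identical input" * 1000
    a = gzip.compress(data)          # header embeds the current mtime
    time.sleep(1)
    b = gzip.compress(data)
    print(a == b)                    # False: same input, different bytes out

    # pin the timestamp and the output becomes reproducible (Python 3.8+)
    print(gzip.compress(data, mtime=0) == gzip.compress(data, mtime=0))  # True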


A client can do that optimization locally, but there's no discovery mechanism in the protocol to find additional infohashes that contain some of the same files. But I suppose torrent indexing sites could offer something like that.


I haven't looked at it for a while, but wasn't that the big feature of ipfs?


Yep. But the difference is that everyone uses BitTorrent and no one uses IPFS.


> completely different people at different times that happen to contain one or more identical files

Yes, this won't be useful for speed.

But it will be useful for health and appearance of health.

So a torrent that has no seeds because it's missing "RARBG_DO_NOT_MIRROR.exe" can be filled in quickly and have proper seed info.

Also large seasons can be filled in. They tend to get corrupted at the later episodes because people will prioritise the first episodes and then the seeds drop off.

So anyone filling in the later episodes through single episodes torrents might repair the season torrent.

It'll allow overlap between torrents differentiated only by a missing RARBG.txt, if they are both fragmented.

There's always been talk of allowing a torrent to have files added to it. It'd be interesting to see if this might be useful client side to do this in a Claytons way.

Personally I don't think it's useful for Linux isos. Not sure on this. Do Linux isos get unhealthy?


Patch releases, especially security patches, would only have a small number of changes, which might allow me to take a full new release but only download the changes relative to the release I already have.

But it would take some handcrafting to make sure the new files are at the end of the ISO so everything else stays in place.

Technically this should be possible, since ISO allows for adding files, given that it was designed for write-once media.


If you want to try a client with v2 (and v1+v2 hybrid) support, I've just released PicoTorrent v0.20 [0] based on Rasterbar-libtorrent 2.0 :)

[0] https://github.com/picotorrent/picotorrent/releases/tag/v0.2...


Nice. If anyone is wondering if they should try this, picotorrent is probably the most lightweight GUI torrent client for windows (no linux/mac version).


What are you using for the GUI layer, and what's your stance on accessibility? The fact it's Windows only gives me hope.


I'm using wxWidgets, so it's the Win32 API at the lowest layer. For accessibility I want to make it support the NVDA screen reader which I think would make it support other readers as well :)


wxWidgets is a decent choice. Not all of its controls are accessible though. Most infamously, the wxHTML control (wx's own HTML engine) isn't accessible, but the web view control (using the OS's web browsing engine) is accessible if you futz around at the native layer to force the keyboard focus into the right place. Also, on Windows, the list view control is accessible but the data view control is not. Frustratingly, the opposite is the case on Mac and (I think) GTK.

I suggest you test with Narrator rather than NVDA. Disclosure: I work on the Narrator team at Microsoft. But that's not why I say this. The reason is that NVDA and JAWS have some ugly hacks that they can use to make some GUI implementations (particularly using Win32 with GDI for graphics) accessible even if they're not accessible by design. For details, do some searching on the term "off-screen model". Narrator doesn't have this, so if your product works with Narrator, you know it's really accessible.


That's really valuable, thank you!


Thank you. Will definitely check it out, given uTorrent on Windows doesn't get many updates anymore.

I want to ask: were Rasterbar-libtorrent and libtorrent always the same thing? I thought they were different implementations? Did they merge, or does memory serve me wrong?

I couldn't Google anything useful so I just ask. In the era of streaming, it has been far too long since I looked at anything BT.


There are two libraries called libtorrent - Rasterbar-libtorrent (the one in this discussion, made by arvidn) and rakshasa-libtorrent (made by rakshasa).


Arh... that is why I keep getting confused with the naming. Thank You.


It feels like this isn't a large enough leap forward. It would be nice if BitTorrent v2 made it harder for ISPs to identify what is bit torrent traffic. AT&T artificially slows down upload speeds.


It's nearly impossible to obfuscate a protocol to work around filtering.

You'd want to look like some other protocol, and you'd want that protocol to be encrypted by default. Otherwise yours will get fingerprinted via deep packet inspection.

The most obvious choice is to run your protocol over TLS. But then they can just throttle long-lived bulky TLS connections where neither side is on 443.

You can then require the responding side be on 443 (which is already a big hassle), but they will then throttle down TLS connections towards residential IPs or throttle them down cumulatively, as a group.

Other choices here are OpenVPN, WireGuard and, possibly, IPsec. But, again, it will come down to defeating the throttling of multiple inbound bulky connections of the same type, which is doubly hard to bypass if you are on a residential IP and your ISP is really bent on throttling.

Any successful obfuscation technique is short-lived, so it has no place in the protocol itself. Instead it should be delegated to a transport layer and the clients should be coded to support BT tunneling over X or over Y.


This is all true. The best way to make your application work no matter what filtering is in place is to disguise it as something which the ISP has to make work in order to keep customers.

Fundamentally, nothing has the same characteristics as BitTorrent - almost nothing has the same high uplink requirements, which is an absolute signature of BitTorrent traffic.

Oh, also some of the shaping is implicit, not explicit. Most home internet connections are asymmetric - they dedicate more channels to downlink than uplink.


> almost nothing has the same high uplink requirements

Streaming video, whether that's a Skype conversation or showing your gameplay on Twitch, can be a pretty bulky outbound stream.

I wonder how fingerprintable those are, and if they have diverse enough endpoints to be able to disguise other traffic as them.


The problem is that most residential users don't care about upload speed.

This allows ISPs to throttle uploads with very low risk of pissing off the bulk of their customer base when their detection algorithm gives a false positive.


Presumably this is changing somewhat with so many people working from home.


I doubt working from home changes all that much. The pattern for most office workloads is still almost all pull. More VoIP is a not insignificant change but it still isn't the same kind of traffic that torrents put down.


Upload is pretty heavy with video chat. However, it's relatively easy to detect from an ISP's standpoint. Almost all the upload will be to a single IP address of a known video conference provider (Zoom, teams, whatever google is doing now, facebook, etc).


Yes, any decent ISP has already worked with zoom, google, teams, etc. to optimize the network.

The real question is whether they care enough about torrents to do anything about it.

https://iknowwhatyoudownload.com/en/stat/US/daily

Given that 0.31% of internet users torrent, maybe?


Zoom sends videos and some managers even ask to show faces.


>Other choices here are OpenVPN, WireGuard and, possibly, IPsec

Or, get a better ISP? Not all technical problems need a technical solution.


Not a viable solution in the US. Most ISPs operate on exclusivity agreements, and like 90% of users effectively have only a single choice of ISP for internet of a reasonable speed.

A better solution would be to have rules to make that kind of throttling illegal, but we've seen how that played out.


> 90% of users effectively have only a single choice of ISP for internet of a reasonable speed

Which is mainly because we have privatized roads here in the USA.

Yes, you read that right. The "gold standard" is underground fiber, which lies in the public right-of-way yet is 100% privately owned with no "duty to serve" like the electric utility has. Oftentimes the fiber owner doesn't even have to dig up the dirt -- if they lay fiber along a newly built highway the government does all the digging for them (to create the roadside drainage ditch). The phone company just unreels a spool of armored OS2 into the ditch and hey, it's Miller Time.

Until privatized right-of-way stops we will continue to get screwed.

[*] Transcontinental railroad rights-of-way are the exception to the above, but there are only four of them. And, frankly, that land was privatized through outright fraud. Read the book _Railroaded_ sometime; it's shocking.


New highways are going up everywhere - straight to each person's home!


BitTorrent has encryption and most clients use it by default. The problem here is that you can't really have something that's as wide open as the BitTorrent network but at the same time impervious to "bad guys" like copyright holders and ISPs.

In other words, how would you authenticate peers? How would you prevent ISPs from mitming your encrypted p2p connections if you have no authentication for them, and you in principle can't have any?


...by using bittorrent over i2p.


> It feels like this isn't a large enough leap forward.

The hash it relies on is broken, so their hand has been forced.


sha1 is not meaningfully broken for BitTorrent. It would require a second pre-image attack to meaningfully hurt it: you cannot take an existing hash you don't control and synthesize a matching set of incorrect/malicious data.

Second preimage attacks are MUCH harder to pull off, even md5 is still safe from them, many years after they were found to be broken in other contexts.

The only thing that sha1's weakness would let you do in a bittorrent context is create a torrent, then let you send fake data for a chunk, which really does not seem very useful, because you could just make the data malicious when you created the torrent.


It's not super easy, but you could create a legitimate torrent of some popular content, as well as a malicious version. Use the legit version to gain popularity and then distribute malicious chunks to select peers.

Certainly not a trivial attack, but not benign either.


The writing is on the wall, no need to wait for these attacks to actually exist before beginning migration


Everyone thought that for MD5...


> AT&T artificially slows down upload speeds

Do you have a source on this? I believe you I just want to know more. It explains a lot.

I have a gigabit link. I can download torrents at almost line speed. But I can barely get uploads past 5K/s. I spent hours at one point trying to tweak every possible setting and eliminate every bottleneck, and still couldn't get past 5K/s.


> I just want to know more.

There is little to know, it's largely a commercial preference (that has, over the years, driven technological research; see ADSL for example, the first A is for Asymmetric).

Average consumers are precisely that, consumers: they rarely upload anything, but they download tons of content (from webpages to streamed media). So it makes sense for residential ISPs to maximize downstream bandwidth, since it's what consumers will evaluate them on. One way to do that is to simply throttle upstream, to ensure resources stay available for downstream. (This has the side benefit of reducing headaches, i.e. less people distributing questionable material on your network...).

If you need good upload speeds, you simply have to talk to your ISP.


I already have a symmetric line. I can upload at full line speed to other things. It's only torrents that seem to have a problem.


Just first-hand experience, like yourself. I'm on gigabit duplex too. What's interesting is that while I'm downloading, my upload is far less throttled (I upload at 100mbps sometimes), but when I'm just seeding I tend to max out at 1mbps out of the 1000 I'm paying for.

Something else: the router AT&T requires will not completely open ports for you. Incoming connections are refused until a port has been hit multiple times, and only then does it open up, so my ports appear closed on AT&T. But even when I get around that (you can rip the certificates off of the router), AT&T itself limits the connection. If you transfer too much on one port on the backend, they will sometimes remotely block the port. (Transferring "too much" being very little here, around 50MB, maybe even 10MB.)

Good news is Comcast does not pull this crap and has fiber internet. Bad news is they start at 2gbps and start at $210 a month in most areas. I'm paying $60 for AT&T gigabit.


Fwiw on a French ISP network (gigabit too) I can upload at around 50 MB/s and saturate my Ethernet card in download.

I just got it last week, coming from the slowest ADSL option available, it blows my mind.


Why would that be the protocol's responsibility?


I'd assume that a protocol whose downloads get faster as more nodes use it, and as those nodes transmit faster, would try to solve the real-world issues holding back node counts and node speeds.

The word 'responsibility' that you used is of course too strong, but the comment that it 'would be nice' to have is definitely true, assuming we want downloads to be faster.


Maybe not 'responsibility', but I guess it'd be nice to have some kind of proxy protocol so that seeders can proxy requests for people with annoying ISPs.. or they could just use a VPN like everyone else =P


I'm not sure that makes sense for regular torrents (e.g. linux distros, large audio software packages, WoW updates, Adobe downloader, etc. etc.) even if it would be a nice-to-have for folks who use torrents for less than legal purposes. And as such, probably not something they can put in without it basically being a signal that they're "helping piracy" (something bittorrent has had to fight an uphill battle for already... I doubt they want to redo that fight)


may be "design goal" is more appropriate a term, rather than responsibility.


Would be nice if it made QUIC and/or TCP-FO heavily recommended.


why?


For multiple reasons. QUIC or HTTP/3 is going to be the next widely deployed web standard. Getting mandatory encrypted UDP traffic that looks like web traffic is a big benefit. The TFO recommendation is primarily because torrenting is big, the feature is not super well-known, and it would be a chance to increase its adoption and the need for its support, which would benefit the web in general.


Bittorrent wouldn't benefit much from TFO because that would require a cookie from a previous connection to that remote client, which is unlikely to happen among random peers. Plus connection setup is not really a limiting factor for current use-cases. Some low-latency applications of bittorrent might benefit, but that requires specialized clients tuned for low latency throughout the stack, a single change won't do it. QUIC's encryption could be beneficial, but bittorrent-over-TLS would work equally well (not standardized, but libtorrent supports it). QUIC's stream multiplexing would not be all that useful either since bittorrent's maximum message size already is fairly small so clients can already interleave and prioritize control messages as needed. Bittorrent clients usually don't run on roaming (mobile) devices due to bandwidth limits, so QUIC's path migration probably doesn't provide much of a benefit either. Being UDP-based might make NAT-traversal a bit easier, but so does µTP.

If you want QUIC's congestion controller benefits then you can also get that with TCP if you're on linux. Set BBR as congestion controller and set the TCP_NOTSENT_LOWAT socket option and bittorrent should work well over long fat pipes (i.e. international peers).
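
For reference, a sketch of those two knobs on a Linux peer socket (the numeric fallbacks are the constants from <linux/tcp.h> for older Pythons; the tcp_bbr module must be available in the kernel):

    import socket

    TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)
    TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # use BBR instead of the default (usually cubic) congestion controller
    s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, b"bbr")
    # cap unsent bytes buffered in the kernel so writes stay reprioritizable
    s.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, 128 * 1024)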

I'm not saying QUIC is terrible, but it's mostly designed to improve web traffic. Bulk transfers à la bittorrent benefit only marginally if you have a modern TCP stack and use it properly.


> which is unlikely to happen among random peers.

Yes, it'd be a minor suggestion at best. The biggest benefit would be to the internet as a whole, motivating middlebox makers into not breaking it and so on.

The low-latency idea seems interesting, I suspect WebTorrent people would appreciate it. But live streams over P2P sound too fancy for now.

> but bittorrent-over-TLS would work equally well (not standardized, but libtorrent supports it).

Which is where the suggestion to use QUIC would have the biggest effect and getting the swarms more encrypted.

> Bulk transfers à la bittorrent benefit only marginally if you have a modern TCP stack and use it properly.

Unfortunately very little software actually goes to the effort of figuring out each such detail, and the resulting defaults are often rather poor.

I guess this is more a question of "what are the defaults", not "what's possible."


What are you doing to help? Coding? Contributing money?


You're allowed to have an opinion on the quality of things that you're not involved in the production of. You should hear me talk about movies.


I see your point, but... movies are (largely by definition) well-funded, with people who make union wages whether or not they do a bad job. The parent seemed a little flippant to me considering that the work discussed is a labor of love used by millions but remunerated by (maybe) thousands.


I'm sure parent would be first in line to contribute if a random investor showed up and said "Hey I'll pay you more than your current boss to stop making apps focused on children gambling via microtransactions and do literally anything else."


Switch ISP or retire its management by any means necessary.


Switch to what?


Why don't ISPs bill by the gigabyte and be done with it?


Some thought experiments for you to try:

- Does this guarantee that you get the speed that you pay for? If not, then should the price during heavy congestion cost less per GB? I’ve had the unfortunate luck of having lived in areas where the internet is unusable for anything other than email and light browsing during peak hours. Bonus: Do you know why this happens? If not, I encourage you to research this.

- How will this affect the advertising model of the entire internet economy? If you, like me, have lived your whole life on a mobile data plan that is priced based on data usage, you’ll realize just how much bandwidth is consumed by advertisements (worst are the video/audio types that auto play).

- How will this affect "future" technology such as smart homes? Most devices send data back (Amazon Echo devices), and the growing security camera ecosystem that sends recordings to the cloud would suddenly cost users a ton more.

- Speaking of cloud, what will happen to the gaming industry that is heavily cloud based? Many games don’t even ship in a completed state anymore. Instead, part of the install requires downloading a ton of additional updates. Not to mention online play, DLC, etc.

There are many, many more everyday examples like this that would be heavily affected by pricing per GB. Your question sounds like a simple solution, but like many things in life, that is rarely the case.


I think the answer to most of these questions is, let the market decide. E.g. consider this: congestion during peak hours is bad now, but there is currently no penalty on an ISP for being slow. However if billing was based on bytes moved, for an ISP to underserve during times of demand means lost revenue. Thus ISPs would be incentivized to provide the best service.


With respect to the US, this answer is incomplete. The theory of letting the market decide only works when there is a free market. Most places I've lived have only had a single choice of ISP, effectively making them monopolies.


> With respect to the US, this answer is incomplete. The theory of letting the market decide only works when there is a free market. Most places I've lived have only had a single choice of ISP, effectively making them monopolies.

Furthermore, even in places where you do have some sort of choice for internet service, it is almost certainly limited to choosing between a cable monopoly and a phone monopoly as your ISP.


Because customers lose their minds with that model, largely as a result of conditioning that "internet" is an unlimited resource (which of course it is when instantaneous demand <= supply, but during high traffic periods that isn't the case).

Personally I understand the economics around it, but still don't like the idea of paying per GB. I would end up skipping some Netflix and would get mad at kids for playing Netflix to an empty room (much like I currently get mad when they leave the lights on in an empty room). It's nice on a personal level to avoid that.


I really don't like the idea of limited bandwidth or charging per gigabyte because I know ISPs will rip people off and still not upgrade their networks to handle more traffic.

But I think it would considerably change the Internet landscape. No more listening to the same exact song multiple times on Youtube or Spotify or whatever. No more downloading then deleting the same stuff over and over again. I think about it often, what a massive waste of bandwidth, I don't even know why. It's a lot of electricity used, I guess?

Even though I've got unlimited fiber, I'm looking for some sort of local "Internet cache" solution that would store everything so it would be re-downloaded from my home instead of across the ocean. Would be great for outages, too.


>It's a lot of electricity used, I guess?

Is it though? How much electricity is used to serve a 1080p Netflix movie several times vs. playing the same movie from a local storage medium?

If you asked me, I'd rather burn Bitcoin mining rigs if I wanted to get rid of electricity waste.


Yeah, I don't know. I just think about the bandwidth we all use sometimes and it seems extremely wasteful and it bothers me for some reason. I use around ~200-300 GB/month, which I thought was a lot until I saw how much other people use :D


>I use around ~200-300 GB/month

Pretty sure I use more than that in a day.


That's a lot of Linux ISOs to download per day.

Seriously, what would you use that much data per day for? Even grabbing full Blu-rays, you wouldn't be able to watch that many.


A couple years ago I participated in a private tracker and wanted to get a good ratio, I uploaded constantly at about 95Mbps

95 Mbit/s / 8 = 11.875 MB/s

11.875 MB/s * 3600 = 42,750 MB/h

42,750 MB/h / 1000 = 42.75 GB/h

42.75 GB/h * 24 = 1,026 GB/day

1,026 GB/day / 1000 ≈ 1.026 TB/day

In a two month period I uploaded about 50TB according to the tracker so my calculations seem about right.

It's easy to reach these numbers in a country where net neutrality is a thing.


I was working on some automation stuff once. I did, in fact, download 4TB worth of (the exact same) Linux ISO in a day. From my house.

Luckily it was over fiber and through a corporate VPN and I barely noticed it. But I was shocked when I saw the usage on my router later that month.


It's more up than down.


Bless you, good sir.


> I'd rather burn Bitcoin mining rigs

That's because you benefit from it. You're not currently paying the cost of that 1080p stream - Netflix (and the ISP/peering networks) are paying. Netflix recoups the cost from your subscription, and the ISP/peering arrangement is mostly cost-neutral to them (save for a small amount, I presume).

But it's still not "free".


Wait until you hear how much CPU time and RAM is being wasted...


It's actually the reverse that bothers me in that case! I have more RAM and CPU time than I can possibly use :D

Unless that's what you meant...


I think what they're getting at is how bloated software has become. Some websites download several megabytes of data to display a kilobyte or less of actual content.


> I have more RAM and CPU time than I can possibly use :D

Just open another tab; problem solved. :|


Yeah but your hardware requirements are driven by software bloat.


That would probably increase the amount of storage space people would end up buying, which would greatly increase the electricity demands on the end user.

I would bet the current way of working is already the most efficient.

You can setup a proxy service to cache internet requests too, but they're getting less useful due to greater use of HTTPS.


That kind of depends on the price per GB.

To use the power analogy: when I was young the price of power was relatively high, but it has since dropped to 0.6 NOK/kWh, so even charging the car from 10-80% (~61 kWh) costs 40 NOK, or $4. I do not get mad at the kids leaving the light on like my mother did to me.

I (or rather my job) currently pay about 1000 NOK/month for unlimited 500 Mbit symmetric fiber internet. I transfer maybe 500 GB/month, so for per-GB billing to be cheaper for me the price would have to be under 2 NOK/GB, or $0.20, which sounds really high but is what I currently pay at my usage.


You pay for peace of mind that little Johnny doesn't download some crazy amount by accident. Now you don't have to monitor family usage so that makes things easier as well.


I had per-GB pricing in New Zealand. The key feature that addressed your issue was that I could set an upper limit, so, say, start throttling the connection once I reach $50. That's the same thing other ISPs would do anyway once I reached a certain amount of usage, but with this ISP I decided the limit and I could change it at any time.

The price was competitive too. Other ISPs charged $70 per month and had a limit of 50GB (that was 15 years ago); my ISP charged $20 as a base fee and $1 per GB of usage. If I used less than 50GB, I saved money, if I used more then I could...


It's unfortunate that cellular providers don't offer this. I get around it by setting my data connection to prefer 3G. It helps with a 500MB limit (data is expensive on cheap plans), but it'd be nice to be able to throttle apps individually (no, I do not want video ads to be downloaded, but internet connectivity is required for the app to work online - an HN client with a webview, for example).


Years ago I came across per-GB pricing but the cost was insane. It was essentially: prepay your usage, or we charge you 10 times what the prepay would have cost you.


I mean I came across per-GB pricing for mobile.


I thought internet in Norway was cheaper than in Denmark. Interesting. For comparison, I pay 449 DKR/month for 1000/1000 unlimited fiber with a guaranteed bandwidth of minimum 950 (they are upgrading the network atm, hence no guarantee on the full 1000 yet, as some customers' equipment is too old).

Is it a normal connection or a business connection perhaps?


Bandwidth to where? Do they really have 1gbit per customer peering with say level3? And with cogent, and telia? And a full non blocking internal network? And enough packet buffers to ensure that microbursts don’t saturate any link?


Depending on the SLA, or lack thereof, they could have enough peering for a sustained +(x = 3?)σ demand spike without having a full dedicated peering for each customer, and still reasonably guarantee throughput availability.


Even ignoring peering, I'm still trying to picture the non-blocking network. An Arista 7368X4 isn't cheap, but will cope with 12,000 customers with appropriate switches downstream (128x100G ports with each port breaking out to 96 customers). Not sure how you'd scale beyond that without blocking.

Your peering will only scale to your (combined) interface speed regardless of your SLA. Our 2x 40G peer with Level 3 serves far more than 40 1G devices (I personally have 600 in one building alone, and that sets aside the rest of the users), but it's rare it's more than 30% utilised.

Clearly that’s not going to be a domestic isp architecture, so a reasonable question is what does uncontended actually mean.

I very much doubt an ISP with 10,000 customers has 10TB of peering physically available. Linx public peering is less than half that for the entire UK, and while private peering will likely increase that, the suggestion is it's well under tenfold, so the 50-million-plus internet users in the UK only use at peak times ~1M each, and that ignores all the non-domestic use.

Even a 100:1 contention ratio seems enough at the core level, so any ISP spending money on improving on 10:1 ratios seems to be just burning cash.

Clearly as you go the the edge contention ratio needs to drop - but 40G, or maybe even 20G uplink for 48x1G users would be reasonable to me.


It's a normal connection (Altibox). I'm allowed to run services and whatnot on it, though I mostly use it for data analytics and downloading large datasets.


> guaranteed bandwidth of minimum 950

Wow, really interesting to see such a high bandwidth guarantee (Mbps, I assume) for a consumer service. Does the guarantee really hold?


Because their costs aren’t based on usage, but installing and maintaining the infrastructure then collecting rent on it. Actual usage is an exponential curve and if you bill on it you have a few angry users paying for everybody else’s infrastructure.


Why won't ISPs stop advertising 'unlimited' service that is actually limited to N gigabytes a month, where N is substantially lower than what's possible given the speeds they provide?

Because they can get away with it.


Because it is insanely expensive to actually provide that. The nature of internet traffic is short bursts, not 100% utilization. In a commercial setting you can purchase fixed pipes that are entirely yours and they’re tens or hundreds of times more expensive.

Why wouldn’t you want to pool bandwidth with your neighbors so you could all get faster speeds when you were using it instead of rate limiting everyone?


The question was not "why won't they provide it." The question was why won't they stop advertising it?

I can't sell you a pony made out of diamonds because that's impossible. Consequently, there is no legitimate reason for me to be advertising the sale of a diamond pony!


>Because it is insanely expensive to actually provide that.

So, the defense of using a misnomer to name your service is that the service as advertised is actually impossible to supply given the margins?

Call me a fool, but that still seems like a company getting away with lying to the vast majority of people who don't bother reading the asterisk (like T-Mobile-style "Unlimited" plans that give X amount unthrottled and some arbitrarily low rate after).

Criminal issue? Of course not, that's why the companies present such things this way.

Dishonest? You bet.


Plenty of other countries have actually unlimited high speed internet with little issue and reasonable pricing. It's obviously not so "insanely expensive" that it can't be done.


The issue is that there's no way to know what this limit is. I understand the cost, but then tell us exactly how many GB I can use at full speed and when it starts to throttle. Instead I have to rely on internet anecdotes.


They do. I'm pretty aware of what my data transfer limits are on both my home and wireless connections. Granted my new ISP has no data caps and symmetric gigabit speeds so I'm a bit spoiled.


Unless you work for that ISP though, you don't know what the limits to their network are or how many people you actually share that line with.

It's gigabit speeds until all your neighbours suddenly want to download something too.


The problem is my fiber gigabit duplex internet throttles me from the first KB. Net neutrality is dead.


So watching Netflix and YouTube gets more expensive? No, thanks. I'm fine with European net-neutrality and pay for bandwidth.


Well, in New Zealand we used to charge per byte, but as things got cheaper this has largely gone away. Some of the cheaper home fibre accounts have something like a 100GB/month limit, but I guess it isn't worth the trouble to charge even here, where bandwidth is much more expensive than in the US.


Flashback: I was on Actrix which charged $5 per megabyte. Downloading Netscape Navigator 2 nearly bankrupted me. I used to browse the internet with images disabled, which ironically gave me an appreciation for accessibility issues which served me well later in life.

When Xtra came out at $2.50 per hour, it was a complete game changer.


I remember when Xtra brought out their $27.95/month unlimited dialup plan.

It changed our internet browsing habits so much. With hourly charges, you would try to plan out what you would do before connecting and instead of reading webpages, you would save them to disk for reading later.

With unlimited, you could just sit there and browse. Or leave it on overnight to download files.

At some point we got a second phoneline and I was downloading torrents on dialup all day and night for years before we finally moved somewhere with ADSL in 2006.


Netscape Navigator 2 is 8Mb, for anyone who was wondering.


100Gb/month is practically nothing. I primarily use a hotspot for internet access and routinely do way more traffic than that.


If you don't use video then it'll be plenty. Plenty of people get their TV the old-fashioned way but still want fast Internet.

Looking at pricing for Spark (one of the larger providers). [1]

For each of the fibre speed plans you can get 60GB/month, for another $10 you get 120GB/month and for a further $10 you get unlimited traffic.

BTW: "Mb" = Megabits, "MB" = MegaBytes

[1] https://www.spark.co.nz/shop/internet/plans-and-pricing/


I mean, the video games I play are 100-200gb each, with crazy large updates every couple days. I would probably go through 120gb in a couple days.


Easily. Between updates and video downloads I expect to use about 200GB today alone.


It would be horrible to live in a world like that. Imagine deciding to watch a shorter film on Netflix because you will pay less.


That's basically what happens with mobile data plans in India. Fixed speeds, pay X amount per 1 or 1.2 GB


It's also how Google Fi works. Every time I loaded something, in the back of my mind I'd think "this page is costing me two cents". It makes everything you load feel like an individual transaction, making it super uncomfortable to use.


Because then people will get REALLY grouchy about ads.

In addition, it gives people a solid measure over which to sue you. If you aren't delivering, you're headed to court.

The ISPs like all the vagueness.


I imagine the extra revenue from data whales would not cover the revenue lost from people who just use online banking yet currently pay the same amount.

I've used 15 TB in the last 5 weeks with my torrent client alone (and considering my Backblaze backup size, that's not nearly all of my traffic), but how many people like me are there in the general public?


Agreed, this would align incentives nicely. Maybe we would see more innovation in compression technology that way.


Because bill shock scares people away. It would set us back to the early '90s.


Very interesting. But I'm not totally clear -- what does this mean for compatibility with v1?

The article states that hybrid torrents that support v1 and v2 can be created. But what does this mean for end-users (clients)?

Will most torrent software be upgraded to support both v1 and v2? And will a client be forced to choose from the v1 or v2 swarm, or will it be able to download from and seed to both?

I mean it seems like clients would participate in both swarms -- I'd just like to know if that's confirmed.

Also, is it possible to add v2 to existing torrents "retroactively"? Who would do that? Or would this solely be for new torrents moving forward?


A v2-aware client that also supported v1 and hybrid torrents would tick all boxes and be able to participate in multiple swarms (for the same torrent).

The main issue that I see is the existence of millions of torrents in private trackers that would have to be manually updated for v2.

Are people going to bother? I think not. So for me, v2 is practically a new-torrents-only affair.


I think this is fine. Slow migration is often the right choice. The important part is to start early, and in a decade maybe the vast majority of torrents will be V2.


The sites could perform a bulk update for all their torrents if they so chose.


To update you need the data so you can rehash it. A typical tracker only has the torrent files and would take a very long time and spend a lot of possibly expensive bandwidth to download all the tracked torrents. And some torrents may not be seeded all the time.


If nobody has the data, the torrent is effectively dead and most private trackers will remove it in time. If it's not dead, they can incentivize seeders to re-hash/re-upload a v2 compatible torrent if they really wanted to.

Private trackers tend to gamify quite a few aspects of their sites (obvious example being the seed ratio itself).


They often just have the magnet links. I haven't looked into how that works, but I think the complete data is only on the seeders' machines, and the torrent propagates through DHT.


Private trackers do not use magnet links afaik.


> Also, is is possible to add v2 to existing torrents "retroactively"? Who would do that?

You would need the file data to do that, so torrent indexing sites could not do it. And since you need the full data you could only do it after downloading at which point it doesn't add that much value.

> Or would this solely be for new torrents moving forwards?

Indeed. v2 offers a few nice improvements but not world-changing ones. So there's no pressure to upgrade existing torrents. They'll keep working as-is.


Could a client that has all the files just advertise a v2 version via DHT, even though the actual torrent was only a v1?


The issue is that there would be no way for clients which are currently downloading the v1 torrent to verify that the v2 matches. You could only do that after you have the data, at which point you could compute it yourself.


I remember one of the original bittorrent guys asking for help/recruiting on some IRC channel. He said it was a transformative project. He was laughed off as the next "wanna create mmorpg" guy. For once they were wrong.


I've written a torrent client, and I'm skeptical that v2 will ever catch on. While it does solve some minor problems, it's not a large enough leap forward to justify the costs.


Minor problems? Isn't this a security issue? Somebody can modify a binary, still have it return the same hash, and distribute it to people who think they are receiving an authentic file. Is it even an option to keep going with SHA1? Even Git, for which this is less of an issue, has a plan for migrating to SHA2. https://git-scm.com/docs/hash-function-transition/


This isn't really true, sha1's weakness would require you to be the creator of the torrent, which if you are, you can just make the binary malicious to begin with.


The issue is that you can change it later on - after people have reviewed your torrent, breaking the immutability property of bittorrent.


> sha1's weakness would require you to be the creator of the torrent

Huh, why?


I'm not an expert here, but I'm thinking about it like this:

Creating a SHA-1 collision is doable, but it's still hard. If you want to serve someone a malicious piece of data, that's already one of the two colliding inputs used up. Now you have to create harmless or "benevolent" data that collides with the hash of your malicious data, so that you can build a positive reputation for your file from users who aren't your targets. That way, when your target inevitably goes to download the file, you wrestle into the swarm with a lot of speed and/or nodes and serve the malicious data to your target instead of the data you've been serving to everyone else.

If you don't need the positive reputation, and someone will just download and run whatever you put in the torrent, you don't need the collision in the first place.


It sounds like the perfect scenario for movie companies to target pirates.


If you feel like using centuries of computer time per torrent that nobody will download.


Because you can create two inputs whose hashes match, but you cannot create an input that matches an arbitrary hash you do not control.

Being able to do that would be a much more serious weakness, called a "second pre-image attack".


So, as I understand it, that's expected to happen in the foreseeable future. Otherwise, why switch from SHA1 if you can't create a collision with unaltered data?


It is not expected to happen in the foreseeable future. MD5, for instance, hasn't been broken in a second pre-image way more than a decade after it was known to be weak.

This class of attacks is MUCH harder to construct against a cryptographic hash.


Then, why did BitTorrent work on such a costly change if it's not vulnerable against it?


For the purpose of operating a bait-and-switch on the files, the torrent creator controls the two hashes somewhat, so it's an easier attack to pull off.


If you have control over the bait, I don't understand the reason to switch at all. Just make the bait evil and be done with it?


I don’t know much about protocol version pacing, but was lack of substantial changes part of the reason it has taken 12 full years to get a client to support v2?

http://bittorrent.org/beps/bep_0052.html


3 years, not 12. BEP 52 was based on BEP 3 and kept the same metadata, including the creation date.

http://bittorrent.org/beps/bep_0003.html


No. It is still a draft proposal.


If a couple major players adopt a hybrid system then it could perhaps


libtorrent backs a number of popular clients (Deluge, qBittorrent), so that's a good start.


From the perspective of the client, would it be a drop-in replacement, like changing a static library, or a ground-up rework?


If they used libtorrent, then it would be easy. If they wrote their own system, then it would require a moderate amount of work to change to v2.


I am going to take a guess: most people on Mac would be using Transmission, and people on Windows would be using classic uTorrent.

Neither is based on libtorrent.


uTP seemed like a small improvement to me but it happened.


It's backward compatible.


The use of SHA-1 in v1 seems like a major problem.

[edit]

That doesn't imply that BTv2 is the right successor protocol.


It’s not currently - collisions are very expensive and only in limited contexts. If you wanted to distribute malware via torrents your efforts are better spent elsewhere.


Attacks only get better with time.

Also, this seems quite bad: https://en.wikipedia.org/wiki/Collision_attack#Chosen-prefix...


That attack isn't relevant to bittorrent swarms. What you want is a preimage attack on SHA1.


Yeah this definitely feels like a Python 2/3 type situation. They solved one minor issue but require the whole world to update.

I'm not even convinced they couldn't have done it in a backwards compatible way. Why not stick with SHA-1 but also add SHA-256 for verification for clients that support it?


> Why not stick with SHA-1 but also add SHA-256 for verification for clients that support it?

Apparently you can do that with 'hybrid' magnet links.

The problem is the two swarms are different so as more people move over to v2 the v1 swarms will become smaller and smaller -- thus giving people an incentive to upgrade, I suppose.

...and they seem to have solved more than one minor issue since they were breaking things anyway...


Surely people will just sit in both swarms?


That's an interesting comparison, given that the first BitTorrent client was written in Python (2).


SHA1 has a collision, so now it uses SHA256. How long until a SHA256 collision? Shouldn't the new protocol just add support for many modern hash functions, and client updates can disable support for hashes that become insecure later? Or does this introduce its own headaches? That's what SSH does, right?


I don't think anyone is expecting a sha256 collision in the next 20 years. The crypto doesn't seem to leave much wiggle room for exploitation.


Don't people always say exactly that? And what are the chances of a collision if we simultaneously use multiple hashes? Does it become significantly less likely?


> don't people always say exactly that?

This is an understandable reaction, but the security margin on new crypto is way higher than old crypto. Roughly speaking, we went from "I guess if a state-level actor dedicated all their resources to this for a few decades, they could probably brute-force it" to "Even if you broke 9 out of 10 rounds in this algorithm, you'd still need to harness the energy of every star in the universe for 10 billion years to brute-force it."

Most algorithms today have been "attacked" in the sense that there are tricks we can do that allow us to recover the key faster than a simple brute-force attack. But "faster" usually means doing something like 2^100 operations instead of 2^128 -- still far beyond the realm of practicality.

It's telling that cryptographers are now seriously discussing reducing the security of various algorithms: https://eprint.iacr.org/2019/1492


This isn't wholly unreasonable. At some point we will stop worrying about state actors and start worrying about the Xeelee.


If they have access to such power, we're toast anyway.


> Don't people always say exactly that?

No, "they" don't. SHA1 collisions had been "in the wind" for a while, they had been in sight ever since MD5 started showing signs of clear weakness in the early '00s. Wikipedia has a Rivest quote about it from 2005. There is nothing like that for SHA2, although attacks are improving.

> What are the chances of a collision if we simultaneously use multiple hashes

Define "simultaneous". Shipping twice the hashes for each piece seems a big waste of space. If you mean re-hashing hashes, it's just a waste of cpu power, since an attacker only has to break one or the other to get in a position to poison data.


You are massively prematurely optimizing. The vast majority of torrents are greater than a few hundred megabytes; nobody cares about the overhead of a few KB of hashes. You are already hashing data when you download or upload it, so adding another hash while the data is fresh in the CPU cache is basically free.
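
To put a rough number on it, a quick and unscientific sketch in Python, timing one hash versus both over the same piece (exact numbers will vary by machine):

    import hashlib, os, timeit

    piece = os.urandom(256 * 1024)  # one 256 KiB piece

    one = timeit.timeit(lambda: hashlib.sha256(piece).digest(), number=1000)
    both = timeit.timeit(lambda: (hashlib.sha1(piece).digest(),
                                  hashlib.sha256(piece).digest()), number=1000)
    print(f"sha256 alone: {one:.3f}s, sha1+sha256: {both:.3f}s per 1000 pieces")

Even though the second hash roughly doubles the CPU time spent hashing, both together remain far cheaper than the disk and network I/O for the same piece.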


In twenty years, perhaps. So they have 10-15 years to safely do other things.


> How long until a SHA256 collision

Very unlikely that this is going to happen any time soon.

Most modern symmetric cryptographic primitives with sizes >=256 bits are considered safe even against quantum computers. SHA256 turned out to be even stronger than expected. SHA-3 adoption is delayed in many protocols because there is not much need for it, and hardware implementations of SHA256 are commonplace.


They use multi-hash [0] in magnet links, presumably for exactly this reason.

... but for consistency (like their narrowing of valid bencode), they’ve presumably chosen one main hash for now, so that every client and server doesn’t have to handle all of these cases as people provide a million variants of the same torrent.

[0]: https://github.com/multiformats/multihash
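
A multihash is just a self-describing prefix in front of the digest. A toy decoder, assuming single-byte varints (which holds for common function codes like 0x12, sha2-256):

    def parse_multihash(mh: bytes):
        # layout: <function code><digest length><digest>
        code, length, digest = mh[0], mh[1], mh[2:]
        assert len(digest) == length, "truncated digest"
        return code, length, digest

    # The "1220" prefix seen in v2 btmh magnet links: sha2-256, 32 bytes.
    code, length, _ = parse_multihash(bytes.fromhex("1220" + "00" * 32))
    print(hex(code), length)  # 0x12 32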


Isn’t the fact that OpenSSL et al allow so many arbitrary ciphers the reason of a whole load of problems?


Yep: https://en.wikipedia.org/wiki/Downgrade_attack

> Downgrade attacks have been a consistent problem with the SSL/TLS family of protocols; examples of such attacks include the POODLE attack.


Nope, the problem is that software never upgrades its SSL stack to support the newer ciphers. Especially Microsoft, which is easily 10 years behind on the current SSL version.

Without the ability to support multiple versions, it would be impossible to upgrade anything at all. That would be a whole load of other problems.


Why not BLAKE3? I am really curious. I mean, since it exists, why not that over SHA256? Because it is relatively new?


I would guess that the reasoning is similar to why Git is moving to SHA256 (from SHA1) rather than to BLAKE3 — SHA256 was around 5 years ago and the major design change has been in the works for a while (BEP 52 dates to 2017). BLAKE3 (2019) would be a fine choice today.


I see. Thank you! By the way, I have not thought much about it, so in case you know: would it not be possible to implement this in a way that allows swapping the hash function? So, for example, when we run into issues with SHA-256, we change the hash function to something else.


We already can: the "swap" will just be a v3 along the same lines.


Yeah, but wouldn't they have to create v4, v5 and so forth every N years, for different hash functions?


Sure, but this is not any more expensive than any other versioning scheme you might be thinking of. Consider also that they got 19+ years out of v1, and that there is no reason to believe SHA2 will be broken faster than SHA1.


Probably, but would it be possible to make it so that one could easily swap the hash function? Like I am curious about the details here. I think it would be. Clients probably will have to implement a couple of commonly used hash functions, and so forth. I am not sure how it would work in practice or if it is worth it at all. I am interested in all the details though.


In addition to loeg's point about timing, I'll add that BLAKE3 has its own internal tree structure. It would be unfortunate to have two tree structures layered on top of one another, both because it's "ugly" and because it won't do as good a job of parallelizing things. However, unifying the tree structures would be a big commitment. Every detail of the layout would need to be exactly as it is in BLAKE3. There wouldn't be any space for custom metadata on interior tree nodes, for example. I'm not familiar with the protocol details of BitTorrent myself, but I wouldn't be surprised if that unified approach turned out to be too limiting. (But for a file/tree project that does use the exact BLAKE3 structure, see github.com/oconnor663/bao.)


SHA2 is hardware accelerated on many new CPUs, Blake family not so much.


I know, but according to the graphs, it is much faster than SHA despite hardware acceleration, so I am not sure.


That depends on the platform, the size of the input, and whether multithreading is used.

On Ice Lake, where BLAKE3 benefits from AVX-512 and SHA-256 benefits from the SHA extensions, BLAKE3 seems to do better on both long and short messages. But maybe surprisingly, SHA-256 does better in a medium-length regime, where SHA-256's poorer startup time* has been mostly amortized out, but BLAKE3's chunk parallelism hasn't yet kicked in. See for example the 1536-byte results here: https://bench.cr.yp.to/results-hash.html#amd64-icelake . Using multithreading would exaggerate BLAKE3's advantage for very long messages (usually about 1 MiB and above), but it wouldn't improve the results for any of the message lengths measured there.

* I don't actually know where SHA-256's startup overhead comes from. Maybe someone who knows more could jump in here?

On ARM chips, the performance benefits of NEON are less dramatic than AVX-512, and the performance advantage of SHA-256 hardware acceleration is comparatively larger. I think it's rare for BLAKE3 to beat accelerated SHA-256 on ARM without at least some multithreading, but I've only personally benchmarked a few Raspberry Pis, and I want to be careful not to overgeneralize.
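
If you want to check this on your own hardware, a rough single-threaded sketch using hashlib and the blake3 package from PyPI (numbers vary wildly by CPU, and this won't show BLAKE3's multithreaded mode):

    import hashlib, timeit
    from blake3 import blake3  # pip install blake3

    for size in (64, 1536, 1 << 20):  # short, medium, long messages
        msg = bytes(size)
        t_sha = timeit.timeit(lambda: hashlib.sha256(msg).digest(), number=2000)
        t_b3 = timeit.timeit(lambda: blake3(msg).digest(), number=2000)
        print(f"{size:>8} B  sha256 {t_sha:.4f}s  blake3 {t_b3:.4f}s")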


What if the hashes differ?


I expect there would have to be other changes to allow this feature, such as the server advertising all the available/supported hashes for each chunk.


Collisions in the context of bittorrent are not a big deal. We could (and will, there are millions of torrents around that will never be updated) keep using SHA1 and the world is not going to end.


They were a big deal on other networks that used, for example, MD5: due to collisions, malicious clients would just send you garbage parts. You wouldn't notice until finishing the complete download, only to see that the file was corrupted.


It isn't? Is it not possible to be tricked into downloading a malicious binary that you then execute on your computer?


Unlike the Photoshop 2020 WareZ Cracked Unlocked 2020 Xvid Torrentz WZ FUN.torrent, that I just downloaded


Yes, because warez is the only valid use of bittorrent.

Most Linux distros offer an installation iso via torrent, large files with many blocks. If you can change just a small part of those files, you’ve got compromised machines before the install even begins.


Most of the practical hash attacks we've seen allow one to create two chunks of data that hash to the same thing, not to collide with an arbitrary other block. This greatly limits the attack scenarios we need to worry about.

(That is, we've got practical collision attacks emerging for SHA1, not pre-image attacks).


You would need to create a colliding pair (because the single existing one is so well known), itself not a simple thing, and create the two executables specifically with additional code discriminating between the two pairs to do two different things. You can't replace existing files with this attack, which means you'd have to create your own Linux distro with this extra "feature" and can't attack existing ones.


I don't think so.

1. The user trusts the source of the .torrent file.

2. A malicious peer performs a preimage attack on some block in an executable file, substituting contents that carry a malicious payload.

3. The executable wasn't signed, or the targeted block must include executable headers.

4. Some peers get the malicious exe, some don't.

Step (2) is still hard; preimage attacks on SHA1 are still expensive.

And it is probably much easier to bypass SHA1/SHA256 entirely by just uploading a malicious torrent directly and hoping (1) still applies.


Remember that for cryptographers, "practical" or "broken" doesn't really mean "everyone can do it", and AFAIK there has only been one publicly released collision pair for SHA1, which also took an enormous amount of time and money to find.

Even MD5, for which you can generate colliding blocks in seconds on an average PC, is still quite resistant to preimage attacks.

In other words, even after spending the resources to find a colliding block, you'd also need to create both files with the same hash, and can't simply collide existing torrents' files and replace them with malicious ones.


You're assuming a malicious peer, but what about a malicious seeder? One could take an existing unsigned executable, add in NOPs and no one would notice (same thing for certain noises in audio/video files).

Since there's some control over the original hash value, executing step (2) is not exactly a preimage attack; it should be a bit easier.


If the seeder is malicious, there's no need to attack SHA1 at all. The seeder can't control which peers get which versions of any identical-SHA blocks — your target peer may share the bad block — so it seems easier to just upload the malicious content to everyone.


They aren’t? So someone can craft a payload that contains a rootkit or similar and has the same name and hash as the thing you wanted, and this is okay? (Disclaimer: I don’t know all that much about how hashes are used internal to the protocol)


The weakness is a collision, not a pre-image attack. That means that a pair of payloads can be crafted together, one innocent and one malicious, that have the same hash. But it is not feasible to take a given file and make a new malicious payload with a matching hash.

So if you get your hash from whoever made the original binary, you can know that any binary with a matching hash is fine. But it could be a problem if someone creates a new pair of binaries and uploads the hash to TPB. You might see lots of good reviews from people who got the innocent version, but then the peers you connect to send you a malicious version with a matching hash.
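
You can verify this property yourself with a published collision pair such as the shattered.io PDFs: the two files share a SHA-1 digest but differ under any unbroken hash. A small sketch:

    import hashlib, sys

    def fingerprint(path):
        data = open(path, "rb").read()
        return hashlib.sha1(data).hexdigest(), hashlib.sha256(data).hexdigest()

    # usage: python check.py shattered-1.pdf shattered-2.pdf
    (s1a, s2a), (s1b, s2b) = fingerprint(sys.argv[1]), fingerprint(sys.argv[2])
    print("same SHA-1:       ", s1a == s1b)   # True for a collision pair
    print("different SHA-256:", s2a != s2b)   # True: the contents differ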


The vast majority of data shared over bittorrent is not executable.

So your scenario becomes an issue if you're downloading executable data that you deem "trusted" without any additional verification besides the hash itself. If that's the case, you have bigger worries than the hash.


Or if there's a vulnerability in a popular media player.


> per-file hash trees

This is huge. Now the protocol supports downloading identical files from multiple torrents simultaneously.
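
For a sense of why dedup falls out of this: each file's root depends only on its own bytes. A rough sketch along the lines of BEP 52 (16 KiB leaf blocks, SHA-256, zero-hash padding up to a power of two; the real spec has more detail around piece layers):

    import hashlib

    BLOCK = 16 * 1024  # v2 leaf block size

    def file_root(data: bytes) -> bytes:
        # hash each 16 KiB block of the file
        leaves = [hashlib.sha256(data[i:i + BLOCK]).digest()
                  for i in range(0, max(len(data), 1), BLOCK)]
        # pad the leaf layer to a power of two with zeroed hashes
        while len(leaves) & (len(leaves) - 1):
            leaves.append(bytes(32))
        # fold pairs upward until a single root remains
        while len(leaves) > 1:
            leaves = [hashlib.sha256(leaves[i] + leaves[i + 1]).digest()
                      for i in range(0, len(leaves), 2)]
        return leaves[0]

    # identical bytes -> identical root, whichever torrent contains the file
    assert file_root(b"same file") == file_root(b"same file")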


Has there been any progress on advancing BEP-46 (mutable torrents) [0] along the standards track? I didn't see any mention of it in this announcement, despite my hopes of seeing it as a flagship feature.

[0]: http://www.bittorrent.org/beps/bep_0046.html


The BEP itself is almost trivial. The difficult work is implementing it in a client that makes it useful for users and content providers.

In the wild west of the internet, "update" really only means "add", because you don't want a source you barely trust to issue an update that deletes everything previously downloaded from it. But you also want to avoid wasting storage, so some size caps and rehashing of old data to see if it's an incremental update will also be needed.
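
For reference, BEP 46 identifies a mutable torrent by an ed25519 public key rather than an info-hash; the current info-hash is resolved through a BEP 44 mutable DHT item signed by that key. The link shape (the key below is made up):

    # A mutable-torrent magnet link per BEP 46:
    pubkey_hex = "cc" * 32  # hypothetical 32-byte ed25519 public key, hex
    link = "magnet:?xs=urn:btpk:" + pubkey_hex
    print(link)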


What is the current relationship between BitTorrent Inc, which I think developed the original BT v1 protocol and client, and libtorrent, which I think is a 3rd-party BT library written in C++ because the official version was in Python and resource hungry?


You got your relationships wrong.

Bram Cohen developed BitTorrent and released a (Python) reference implementation in the public domain (later under MIT and then GPL licenses); he later founded BitTorrent Inc. and assigned this implementation to the company to maintain. Eventually BitTorrent Inc dropped this codebase altogether and became closed-source with a completely separate project.

Libtorrent is just one of many independent BT libraries that have been developed since Bram published the first BT client.


>You got your relationships wrong.

??

So there is no relationship between the two?

>Eventually BitTorrent Inc dropped this codebase altogether and became closed-source with a completely separate project.

That came with the acquisition of uTorrent.


Correct, no relation between the two.


Talk about a mobile-unfriendly website.


It even manages to evade the usual "Show simplified view" bottom bar that Chrome mobile shows when it detects that a page is likely hard to read.

Rather ironically, you get a much more readable view if you select "Desktop site" in the Chrome mobile menu and then double-tap the main text column.


It works just fine on Opera on Android thanks to text reflow.

Most of the time it's not websites but browsers that are unfriendly.


Works fine on my phone.


Since the article mentioned bencoding and provides sample .torrent files, I'd like to show my code for parsing and visualizing bencode structures: https://www.nayuki.io/page/bittorrent-bencode-format-tools
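
For anyone who hasn't seen the format: bencode is small enough to decode in a screenful of Python. A minimal sketch, with only basic error handling:

    def bdecode(data: bytes, i: int = 0):
        """Decode one bencoded value, returning (value, next_index)."""
        c = data[i:i + 1]
        if c == b"i":                    # integer: i<digits>e
            e = data.index(b"e", i)
            return int(data[i + 1:e]), e + 1
        if c == b"l":                    # list: l<items>e
            i, out = i + 1, []
            while data[i:i + 1] != b"e":
                v, i = bdecode(data, i)
                out.append(v)
            return out, i + 1
        if c == b"d":                    # dict: d<key><value>...e
            i, out = i + 1, {}
            while data[i:i + 1] != b"e":
                k, i = bdecode(data, i)
                v, i = bdecode(data, i)
                out[k] = v
            return out, i + 1
        colon = data.index(b":", i)      # byte string: <length>:<bytes>
        n = int(data[i:colon])
        return data[colon + 1:colon + 1 + n], colon + 1 + n

    print(bdecode(b"d4:spaml1:a1:bee"))  # ({b'spam': [b'a', b'b']}, 16)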


Can I suggest including an example .torrent file (for something legal) for those that don't have one to upload?


This is pretty cool. Per-file hash trees are something I've been missing for some time now.


According to the blog, BitTorrent v2 now tightens the possible block sizes to powers of 2. But does that still allow for the blocks to be variably sized and created by, for example, a rolling hash-based chunker, like Buzzhash or Rabin?

I'm asking, because this would allow for sharing of big files across swarms, even though the files might be slightly different.
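
For readers unfamiliar with the idea, a toy content-defined chunker (a Gear-style rolling hash, not Buzzhash or Rabin proper): cut points depend only on nearby bytes, so an insertion early in a file shifts offsets without changing most chunk hashes downstream.

    import random

    _rng = random.Random(0)  # fixed seed so chunk boundaries are reproducible
    GEAR = [_rng.getrandbits(64) for _ in range(256)]

    def cdc_chunks(data, mask=(1 << 13) - 1, min_size=2048, max_size=65536):
        start, h = 0, 0
        for i, b in enumerate(data):
            h = ((h << 1) + GEAR[b]) & ((1 << 64) - 1)
            size = i - start + 1
            # cut when the rolling hash hits the target bit pattern
            if size >= max_size or (size >= min_size and h & mask == 0):
                yield data[start:i + 1]
                start, h = i + 1, 0
        if start < len(data):
            yield data[start:]  # final partial chunk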


> All new features in BitTorrent v2 that are not backwards compatible have been carefully given new names, to allow them to coexist with the v1 counterparts.

This seems like a really cool approach that could be applied lots of other places where versioning can be tricky. But probably also hard to come up with new, good names.


> It is possible to include both a v1 (btih) and v2 (btmh) info-hash in a magnet link, for backwards compatibility.

The amount of effort into backward compat and keeping the general UX the same (ie: no overly-large new Magnet urls) is really appreciated.

Hopefully this will find quick adoption in both clients and submissions.


I remember World of Warcraft using the BitTorrent protocol internally to distribute updates, which was a smart use at the time, considering the horde of people downloading at the same time.


> Identical files will always have the same hash and can more easily be moved from one torrent to another (when creating torrents) without having to re-hash anything. Files that are identical can also more easily be identified across different swarms, since their root hash only depends on the content of the file.

As a question for someone who knows better - does this mean that you can download single files within torrents themselves? I imagine if each file is independently hashed, this should be possible.


>As a question for someone who knows better - does this mean that you can download single files within torrents themselves?

Most torrent clients can do this already, but depending on how the torrent was created and the size of the files involved, you might also download some pieces of the neighboring files on either side.


If I understand your question correctly, then yes, people do that all the time and most bittorrent clients make it super easy.


Does anyone know any tutorial, article or any learning material related to creating BitTorrent-powered applications? Something simple, like sharing pictures or even text files.


It seems like this breaking change would have provided an opportunity to improve compatibility with WebTorrent, which AFAIK requires a separate swarm (connected via WebRTC).


There are a few things I wish this addressed, but it doesn't. Off the top of my head:

1. The new hash function is going to be broken eventually - what happens then?

2. Support for "remixes". It would be nice to reference pieces from another torrent. Example use case: adding subtitles for a movie. Right now it requires either downloading the "main" version of the movie and getting the subtitles externally, or sharing the file from scratch.


> the new hash function is going to be broken eventually

SHA-2 will not be broken as easily as SHA-1. This seems to be a common misconception in this thread.

Wikipedia: "Since 2005, SHA-1 has not been considered secure against well-funded opponents".

This was 10 years after its introduction (1995) and 15 years ago. We had 15 friggin' years and we're finally switching git and bittorrent around. It took 2017-2005=12 years to get from first serious signs of issues to https://shattered.io.

SHA-2 is now 19 years old and the "uh oh, better switch before it's too late" recommendation has not come yet. It's withstanding the test of time better and there haven't been 12 years to mature any weaknesses. The theoretically known attacks for SHA-2 are fairly insignificant.

Since SHA-2 has a similar construction to SHA-1 (Merkle–Damgård), NIST figured they better launch the SHA-3 competition at the first sign of trouble and picked something with a very different operating principle. For now, however, it's still fine. Thomas Pornin put this a bit better than I can: https://security.stackexchange.com/a/21116/10863

As for "what happens if/when it will be broken": we could make BitTorrentv2 another multi-crypto soup like with TLS, but then you open up a can of downgrade attacks, potential null ciphers or other such tricks simply due to increased complexity, and you still can't switch that quickly because everyone needs to take manual action in changing configuration files. Much better if we can instead do apt upgrade and let the software take care of making security decisions rather than those who install the software. (Remember that SSL was designed in 1994, when LiveScript/JavaScript didn't even exist yet, DES was state of the art, and we wrote books with algorithms because cryptography was ammunition. Having multiple options for strong/weak ciphers was not yet a crazy idea.)


Does anyone care to summarize the changes?


As far as I can see, TFA is the summary of the protocol changes. If you want a summary of TFA, I suggest reading each section's heading and first sentence or two. Possible exceptions are the hash trees and directory structure sections, which are more technical than user-facing (read all or none of them).

Not trying to be rude – I found the blog post rather informative and concise!


Since we're on this topic, what's your torrent client of choice, HN? I plainly use BitTorrent nowadays.


qbittorrent, hands down :) Open source, cross platform and relatively small footprint (native), uTorrent-like UI.

I believe it uses libtorrent under the hood, so this might be integrated soon.


> I believe it uses libtorrent under the hood

Yep: https://www.libtorrent.org/projects.html


I use it too, the main reason I switched is it seems to be the only one capable of maintaining line speed downloads on 100+ megabit links on a Mac.


It seems they still can't be bothered to implement a proper "stop" button, despite many requests over the years. When the devs refuse to listen to their users, that's software I won't use.


Went to look up the issue - the dev's response seems reasonable to me.

https://github.com/qbittorrent/qBittorrent/issues/4965

The only difference between qbit's pause and uTorrent's stop is the check if files still exist.


The fact that users do not see the behavior as the same is the problem. When the users are telling you how they use your software and you respond by saying "that's bizarre, don't do that", then there's a serious disconnect. Developers should be responsive to user feedback, not dictate how their software should be used.


I think the serious disconnect is in user expectations. Developers of paid products should be responsive to user feedback. Developers of open source software are free (legally, ethically and morally) to care or not care about user feedback exactly as much as they wish.

Users of open source software are free to fork the software if the original developer isn't responsive to their particular needs. They're not entitled to demand that the original developers respond to their feedback. They're even less entitled to complain when the developer does respond with a specific reason why they won't act on their feedback.


I disagree. It's one thing to not have the time or bandwidth to change or add a feature. Certainly free users aren't entitled to a developer's time. It's another thing to disagree with a style of usage and refuse to accommodate users out of principle. The software is marketed as an open source replacement for uTorrent. If he wanted to treat it as his little fiefdom, he shouldn't position it as a replacement for uTorrent. If I had any expectation that a pull request for this feature would be accepted, I would fix it myself. But his demeanor suggests otherwise. That's not the way to approach software that positions itself as a community project.


Tried it, it's really nice. Thanks mate


Transmission. If you're looking for one on Android, LibreTorrent looks absolutely amazing.


Thanks. How's the Windows client? I mostly daily drive Windows whenever I'm not programming. I do intend on getting a separate Linux machine for NAS+torrent+Plex MS at some point, and it does seem promising


Transmission works on Android as well.


Well yeah but LibreTorrent's UI is where it shines IMO.


Deluge for a server/client model with optional web interface, if you for example want to command your Raspberry Pi to add some torrents. But for a more traditional one I prefer qBittorrent, after Ludde abandoned uTorrent.


Deluge is great. It has a mobile client that I can use to connect to my home server, using ZeroTier to make the home server accessible anywhere.


Been a fan of Transmission on macOS for many years. I just wish I could install Transmission on a Linux box and still keep using the fantastic Cocoa interface it has on macOS.


You mean https://github.com/transmission-remote-gui/transgui or another gui? You should be able to connect remotely, that is pretty much the entire idea behind transmission. A torrent daemon with various UIs that use the API.


I mean the native Transmission Cocoa GUI.


No idea what that is and any searches for those terms yield transgui. In any case the only way the transmission daemon can be controlled is through an API, which by definition works remotely.


I use deluge just because it has a nice client and separate server which I leave running on another box.


uTorrent 2.2.1, which unfortunately seems like it won’t be viable going forward if v2 adoption picks up.


Was using that for a very long time as it's the last version without ads. Unfortunately there were multiple vulnerabilities disclosed in 2018 and some private trackers started blacklisting it (workarounds for the vulns were discussed but not sure if any are confirmed as working).

https://bugs.chromium.org/p/project-zero/issues/detail?id=15...


This is news to me, thanks for pointing that out. (Not sure why I find it surprising, it’s a very old piece of software now.)


I'm still using uTorrent 1.8.4 inside a WinXP VM. It's what I've used for the past 10 or so years, because it worked and still works.


Why not 3.x?


It's basically malware.


You can disable every single ad/feature, then it's really good.


Better yet, which torrent client can consistently utilise 1 Gbps fibre Internet links?

uTorrent seems to have some efficiency problems, and in my experience it can't even do 1 Gbps with a cross-over cable between two peers, let alone across the Internet...


- Picotorrent mostly, because of the simplicity

- BiglyBT sometimes, because of Swarm Merging


LibreTorrent for Android.


uTorrent, it has always been fast and stable.


The Hefur tracker software is going to be adding support for BitTorrent v2 soon:

https://github.com/abique/hefur/issues/31


Won't the "per-file hash trees" make it easy to detect copyright infringement? (i.e. the same .txt signature file of a release group)


Adding noise and/or padding is trivial, it would just be done on a per-file basis. Of course, this would also lose the protocol's identical file feature, but that's not a regression vs v1.


That's a pretty fair point. There is no expectation of privacy on public torrents however.

How this works with private trackers is a different story. I guess since DHT isn't used for private torrents, this shouldn't be an issue.


What are people's thoughts on Transmission? I have never tried BitTorrent, but I am curious to hear from others who might have tried both.


Currently using transmission-remote for a headless Linux box living beside my desk, and a count approximating 1100 torrents (with the only public ones being Linux ISOs).

It works exceedingly well. I've never had it falter or slow, despite the number of torrents. My preferred client for desktop and server :)


Can anyone give examples regarding how they are currently using BitTorrent?


Linux ISO downloads, mostly.


Need to work on their mobile friendliness.


What are the use cases of BitTorrent?



TLDR

https://twitter.com/markopolojarvi/status/130324298806421094...

- new hash function

- more efficient and less error prone .torrent files

- each file in the torrent gets its own hash meaning deduplication across torrents is feasible

- backwards compatible with v1


Hmm, looks like there's still no support for data streaming :(


BitTorrent (v1) supports streaming if your client knows the appropriate way to prioritize packets. I've written a streaming client using libtorrent-rasterbar and it worked fine. What's the problem?


Nothing stops you from downloading pieces in a specific order that allows for streaming. It's the client that determines order, not the bittorrent protocol.
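
To make that concrete, here is a sketch against the libtorrent Python bindings (API names vary between 1.x and 2.x, and "video.torrent" is a placeholder, so treat this as an outline rather than copy-paste code):

    import libtorrent as lt

    ses = lt.session()
    h = ses.add_torrent({"ti": lt.torrent_info("video.torrent"),
                         "save_path": "."})

    # fetch pieces roughly in playback order instead of rarest-first
    h.set_sequential_download(True)

    # and request the first few pieces urgently so playback can start
    # (deadlines are in milliseconds from now)
    for piece in range(5):
        h.set_piece_deadline(piece, 1000 * (piece + 1))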


Popcorn time has some support to start playing before full file download.


well, I meant live streaming here.


That seems like such a fundamentally different architecture is needed, why would you want that to be part of BitTorrent rather than its own protocol?

What is your opinion of existing P2PTV apps, is something missing?

https://en.wikipedia.org/wiki/P2PTV

(I don't know much about the space.)


Um, now this is the third subcomment; you could have edited your original comment.


That would require mutable torrents correct?


yeah, that is one way to do that.


It would be nice to have a torrent system where no one that is uploading has all the data for a specific torrent (or at least isn't uploading the whole file to any specific person; that would probably be harder to prosecute in the event that some files become illegal)... You could take 1% of the billionaire's fortune and give it to the "victims" and everyone would be happy.


If you want to circumvent the law, you must find a legal technicality, not a technical technicality.

The key weakness in every smartass technical technicality to circumvent the law is forgetting that intent is a core concept when the law is interpreted in court.

Did the defendant have intent to do X? Did the defendant use some purpose-built technique to avoid doing the most literal interpretation of X? Yeah, the defendant is guilty of doing X.


"must find a legal technicality, not a technical technicality."

I think we are OK as long as there are no precedents? Isn't that how smartass laws work?


If you want to be the next David LaMacchia or Shawn Fanning I guess.


IANAL, but AFAIK this depends on whether the legislation is English law or Latin law in origin, where Latin law is more about the intent, and English law, and therefore US law, is more about the letter of the law. Perhaps the distinction is along a different axis, but this is how I remember it.


You are thinking about common law (originated in England) vs civil law (based on Roman law). In both systems, intent is very important in criminal cases.

The real difference is between criminal law and contract law. In contract law, the intention is fixed by the language of the contract document. It matters less if one party didn't intend to violate the contract. It is what it is.


Thank you for clarifying this. Do you have a link to some overview material on the differences between laws in different countries? Is the distinction between criminal and contract law the same in every region?


Uploading parts of illegal files is equivalent from a security perspective. It seems like there's no advantage to your proposal.

The Whonix wiki has an incredible amount of information on topics such as this: https://www.whonix.org/wiki/Documentation

Basically, it's very hard to protect the security of users when a government seeks to prosecute dissidents. There's a lot to take into account. A simple "we saw an upload from IP address x.y.z.w containing <illegal file>" will be enough to hang whoever it is.

On second thought, I guess it would be a nice addition to let users control which files they're seeding, rather than the whole torrent. I'm just not sure it's enough to get them off the hook in the event of legal troubles.


Ideally, you'd want to come up with some way to pack files such that many files share the same chunks. Perhaps you could make the act of sharing (all or most of the) legal and illegal files indistinguishable.


You could have a protocol where every block of the file is actually two "random" blocks XOR'd together, but this doesn't really work. If you create a new 1GB torrent, you'll need 1GB of new (never seen before) blocks with ~100% probability, so it will be obvious who's seeding the data.

Or you could make the block size smaller (e.g. 1 bit) and tell the lawyers to piss off because 0 and 1 are public domain.
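
The parent's point in a few lines of Python: splitting a block one-time-pad style makes both shares look random, but one share must be freshly generated entropy of the same size as the payload.

    import os

    def split(block: bytes):
        pad = os.urandom(len(block))  # brand-new random bytes, same length
        share = bytes(a ^ b for a, b in zip(block, pad))
        return pad, share             # both now look like noise

    pad, share = split(b"one torrent piece")
    assert bytes(a ^ b for a, b in zip(pad, share)) == b"one torrent piece"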



Off-topic meta discussion about that second link you posted. It is an example of a fourth-level municipal domain[0], which are a relic of the past internet. Sooke is a municipality on Vancouver Island.

Sadly, new domains of this sort were discontinued in 2010. A cool relic of the internet past and massive geek cred. I remember these URLs from my childhood and they make me very nostalgic.

[0] - https://en.wikipedia.org/wiki/.ca#Third-level_(provincial)_a...


> You could have a protocol where every block of the file is actually two "random" blocks XOR'd together

One implementation of this is the "Owner-Free File System"[0], but it is no longer being maintained.

> If you create a new 1GB torrent, you'll need 1GB of new (never seen before) blocks

If you XOR your 1GB file (X) with 1GB of blocks that already exist on the system (Y), you get a new set of 1GB blocks (Z), but it will be hard for an observer to prove that your Z blocks don't actually pre-date Y.

Your defence would be that someone generated Y based on your Z in order to frame you as having created X, when actually your Z, combined with another set of random-looking blocks Y2, produces a different 1GB file, X2, which is completely legitimate.

[0] https://en.wikipedia.org/wiki/OFFSystem


Even if people aren't scraping the network to record the order of block creation, it will be mighty suspicious when the Y blocks are scattered randomly across the network, and the Z blocks are conveniently hosted in one place.


> Even if people aren't scraping the network to record the order of block creation

I'm not sure if "scraping the network" is possible if chunks have unguessable names. Also, it's possible that someone could have a file shared among friends (over TLS), for years, without it being publicly announced.

> it will be mighty suspicious when the Y blocks are scattered randomly across the network, and the Z blocks are conveniently hosted in one place.

I suppose it depends on how blocks are distributed in the system. If the blocks of Y are all served from y.com and you host your Z blocks on z.com, then both hosts will look equally suspicious if Y XOR Z produces an infringing file X. The owner of z.com just needs to be able to credibly claim that y2.com was already hosting the Y2 that XORs with Z to produce a legitimate file.

Alternatively, a single node could host Z, Y, and Y2, all created by users, with no logs kept of when each block was created or requested (and no search/listing function). Such wilful ignorance may not impress a court, but is roughly equivalent to running a non-logging VPN, or a chat service that doesn't retain metadata, or a Tor node, or an online encrypted backup service. The service could even offer to follow a DMCA takedown procedure, in case non-XORed non-encrypted blocks were stored on it.


One change in the v2 protocol stands out here: "Files that are identical can also more easily be identified across different swarms". Wouldn't this per-file hash make it easier for such a government to find illegal material shared by users? It appears v2 is not suitable for sharing material which could get one in trouble.


I don't think this really matters in practice. The DHT is public and you can already scan it to find all the public torrents, all the files they contain, and what IPs are sharing them.


.zip


Can't this already be true? You do not need the entire torrent in order to seed downloads. Clients leeching know which blocks seeders are providing and will download what's available.


Sounds a bit like freenet: https://freenetproject.org


If you share significant parts of a copyright protected work, you're probably still on the hook. For most things, it's not actually the files that are copyrighted, it's the content encoded in them.

At some point, I suppose, things get very fuzzy, e.g. you're just uploading a single second of an album, but I don't know how practically useful that would be.


>probably would be harder to prosecute in the event that some files become illegal

Cynically, I'd assume they'd just use it to add a conspiracy-to-commit-a-crime charge to the list, since you technically need to coordinate with other people.


No one needs to have all the files, and BitTorrent works just fine. Though I don't see why anyone wouldn't want the whole file. What's the point if you can't actually use what you downloaded?


You could or could not have all the files, but you would never upload a whole file to a single user (i.e. part of a file is possibly meaningless if you only upload random chunks).


No reason you can't do this with the existing protocol. I'm not sure how much it helps from a legal POV, and I suppose it depends upon your threat model, but it's fairly trivially doable.


I'm not sure how that would work, technically speaking, if there is just a single seeder and leecher. Also, don't you become an accomplice for helping?


Every seeder could technically have all the files, but would only share random chunks with each user.


Great! Looking forward to using it.


To me, BitTorrent feels a lot like Blockchain: interesting tech that doesn't seem to be able to find a practical and useful application (proportionate to the attendant hype). Which is not to say that either can't, they just haven't yet. Why?

EDIT: while I don't (yet) consider myself delirious, responses to this comment have illuminated how little I know about the influence and application of BitTorrent. In other words, I retract the above opinion.


You're delirious.

Setting aside the fact that BitTorrent came about long before BitCoin was a glint in Satoshi's eye, it's still massively used for content distribution, on top of being a foundational technology to several industrial applications: Apache Spark, for example, uses BitTorrent to shuffle/broadcast data around cluster nodes.

Even when BitTorrent alternatives are used in its place (such as IPFS, Dat, or Kademlia), BT was still a massive influence in all of their designs, and to the design of any other DHT that came about after its release.

All of these, plus other systems that are (in abstract) similar constructions, have several industrial applications for asset distribution (Netflix uses IPFS to distribute container images internally [0], Uber and Alibaba do similar things) and network management (DNS for example, is really just a DHT, as are several other network protocols)

Ever since BitTorrent came about, it has unequivocally been a resounding success. It never for a moment had to look for applications, there were already plenty since day one.

[0] https://blog.ipfs.io/2020-02-14-improved-bitswap-for-contain...


> it's still massively used for content distribution

Is it, though? Youtube doesn't use it, neither do Netflix, Hulu, Amazon Prime Video, Disney+, Spotify, etc. Does anybody (except for Blizzard) use it? I don't think Steam or the Epic store or the App Store or the Play Store use it, either.

I'd be extremely glad to be proven wrong.


I believe Steam uses it. The network graphs they spout show behaviours consistent with BT-like activity.

I'm pretty sure the BBC iPlayer used it too, at some point (no idea if it's still the case).

Most Linux projects use it.

Any company offering a "download manager" for 1GB+ files, is likely to be implementing something similar to BT behind the scenes.

The thing is: if bandwidth costs are a real problem for you, BT is a killer solution to offload some of those costs. If they are little more than a footnote (like for the mega-services you list), then BT is unnecessary and it will probably slow down overall performance too.


v1 of iPlayer used bittorrent, but these days it's all streaming.


Windows has some sort of P2P update distribution system, although (1) I don't know that it is bittorrent and (2) it seems to be massively slower than just using MSFT's servers directly, so I'd suggest disabling it. On the order of 100x slower.


I've never managed to see it working on personal PCs. Unfortunate.

Still waiting for the day such a feature arrives on Linux, it'd be an instant hit everywhere where uplink is slow and expensive, especially in developing countries.


Wasn't there an `apt-transport-torrent` package a while back? It shouldn't be that hard to get going: just include a magnet link in the metadata for the package.


I remember reading about how Facebook uses it to distribute updates amongst servers.


Wasn’t the original Skype that Microsoft paid billions for also based on BitTorrent technology?

Not BitTorrent exactly, but close enough that if a blockchain spinoff were bought for that much, the blockchain folks would certainly consider it (rightfully, IMO) a victory in the blockchain column.


Skype was based on JoltID/Kazaa.


You mean beside being responsible for ~50% of the global internet traffic in 2009? https://en.wikipedia.org/wiki/BitTorrent


Yeah, but what's the current number? I'd be surprised if it were more than 5-10% these days. 2009 was 11 years ago, the internet was a much smaller and less regulated place.


Even if it were only 5-10% of internet traffic, that’d be huge! Think about how much the internet has exploded in use since 2009.


You think that if something once took up 50% of all traffic on the internet it never found a practical application?


> proportionate to the attendant hype

From the original comment.

Also, just because it was 50% at some point doesn't really matter today. Paraphrasing: a technology is only as good as its latest match result.

Perl once powered the web; nowadays you have to look at the web with an electron microscope to find it...


This is silly. Are you actually saying that bittorrent was hyped so much that constituting half the internet was a letdown? And that Perl didn't live up to its hype because it's only common today, and not as universal as it once was?


> This is silly. Are you actually saying that bittorrent was hyped so much that constituting half the internet was a letdown?

My point is that it was a fad: it's barely used anymore. It was 50% in 2009; now its usage is much, much lower. I've seen a lot of statistics of streaming services being 20-30% of the internet each (Netflix, Youtube), so I'd be amazed if BitTorrent is more than 10% in 2020 (and that's a very generous estimate), as I said previously. Anyway, it doesn't matter; there are a lot of techs that were super hyped and, in the end, faded away: SOAP, CORBA, XML databases, semantic web techs (RDF, etc.), RSS, ...

> And that Perl didn't live up to its hype because it's only common today, and not as universal as it once was?

Perl is not common, it's uncommon. Let's say you're a developer in one of 100 big cities around the world, how likely it is for you to find a job developing Perl web apps? I'd say that in 80-90 of those, you can't even find that job. That's not a common tech in 2020 ;-)


Pogs were a fad. Bittorrent took downloading large files from being an unreliable pain to being effortless.

Perl was not a 'fad', it was what powered the internet's dynamic sites in the early days. Foundational technologies that eventually are hidden from the average person is not what 'fad' refers to.

Just because you don't use something doesn't mean it isn't used. You keep digging into nonsense. (Also, total internet bandwidth usage has increased significantly over time.)


I use BT much more than 5 years ago since most popular content was removed from Netflix. Many folks here have said the same.


That makes zero sense. How much of the entire internet should have been bittorrent to live up to the "hype"? Also, what hype are you talking about? I only remember bittorrent gaining traction because it worked. Where is your idea of hype coming from?


Youtube is 37% of all mobile traffic. [1]

[1]- https://www.statista.com/chart/17321/global-downstream-mobil...


That is still way, WAY more than enough to justify its utility.


Insofar as the majority of that traffic is likely illegal (in terms of sharing pirated property) I would not consider it useful.


It's not legal, therefore it's not practical? That doesn't make any sense at all.


This falls apart when you examine prohibited goods. Guns are illegal in most countries because they are very useful.


Are they though? I'm 38 and can't remember a situation in my life where a gun would have been useful. Not saying that I can't imagine such situations, I just find it a slightly weird example to make this point.


I also don't think that a fishing pole is useful, but that's because I don't like to fish. With a gun I could

* defend myself

* rob you

* murder you very effectively

* enjoy target shooting

* hunt wild game

* provide peace of mind in troubled times when law and order break down

The fewer guns there are per capita, the more useful they become for criminals. They are so useful, in fact, that people will spend a lot of money and subject themselves to costly and invasive regulations just to buy one gun. If no one found them useful, no one would find it necessary to ban them.


> I would not consider it useful.

Perhaps you meant (and people should have charitably interpreted you as meaning) "I would consider it to have no legitimate uses" or "... no uses worth supporting".


useful != legal


Why...?

BitTorrent has been influential in lots of applications. If you really need proof that BitTorrent can move milestones: it helped perpetuate sites like thepiratebay.org and assisted in the advent of modern-day streaming services that both the music and movie industries neglected for years. It is also a central influence on p2p networking and file sharing.


Yeah, but it's not used anymore, from what I know. Few places use it for distribution, I guess they don't consider it worth the hassle. Blizzard was using it to distribute binaries.

I guess mobile internet was its downfall as few people want to seed from a mobile connection.


There are vast data archives exposed through private trackers that work on top of bittorrent. This isn't widely known due to their private, invite-only (and in some cases, entirely closed) nature but the fact remains that these exist and are fundamental in multiple, widely divergent communities.


Can you give some examples? It all sounds very... mysterious. At least, what kind of content are we talking about?


A public example is rutracker.org. Another is Sci-Hub. See Joe Karaganis for some canonical info: http://piracy.americanassembly.org/shadow-libraries/


Music, movies, tv, video games, books. Some trackers specialize in one type of content, some have everything.

Finding URLs of private trackers is trivial in google. Getting an invite usually requires knowing another user, unless they temporarily open signups for everyone.

Here's one of many lists of private trackers

https://hdvinnie.github.io/Private-Trackers-Spreadsheet/


There's also the Internet Archive, which provides torrents on the clearnet.


Lots of companies use BT in a datacenter environment to efficiently distribute binaries to a large number of hosts.


Again, this is in the context of the original BitTorrent hype. Which was huge. What you're mentioning is small potatoes.


Again it’s not small potatoes if the protocol is widely used. There is no strict definition of what kind of data can be used with BitTorrent. And because you only know of a few public examples doesn’t make the protocol unused, abandoned, or any less significant.


>I guess mobile internet was its downfall as few people want to seed from a mobile connection.

CGNAT has a lot to answer for, too


Well, both threaten the establishment. Blockchain would basically upend states, with unclear benefits. BitTorrent would allow for very efficient distribution of a lot of data, which can conflict with copyright.

At least BitTorrent is environment friendly :-)


I estimate that thousands of people have used torrents every day for the past 20 years. I use it very often. Maybe it's just that nobody uses it in your close network.


I estimate that a lot of people won't even know if they're using it. Especially if we take things like PopCorn Time into account.


Is there ever going to be a production-ready BitTorrent implementation in Python, or is it just not feasible with the GIL? It would be nice to be able to use BitTorrent for basic p2p capabilities in a pip-installable, fully-Python app.


I feel like a lot of the terminology is blockchain-like in nature, but they never specifically state it.


Bittorrent was pioneering distributed architecture while bitcoin was still a future thought in Satoshi's mind.


Blockchains are just unbalanced merkle trees.



