In hindsight, I think most would agree it was a good idea to force the smarter scaling solutions rather than brute-force blocksize scaling (which sooner or later would have hit a ceiling anyways).
The "smarter" scaling solutions still aren't here after years of waiting, fees are hitting all-time highs, and other chains can do >4,000 TPS on layer 1. So no, small blocks weren't and aren't a good idea.
read: other chains are centralized databases that have no business being blockchains in the first place and can be replaced by a single mysql instance.
the trick is not to have high TPS, bitcoin could have unlimited TPS too, it's just a single line of code change. the trick is to have a decentralized system that functions at saturation on commodity hardware.
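to put rough numbers on that (assumed figures, not a benchmark): the cap is a single constant in bitcoin core (MAX_BLOCK_WEIGHT = 4,000,000 in consensus/consensus.h), and throughput is just that constant divided by transaction weight and block interval:

    # rough throughput arithmetic with assumed typical values
    MAX_BLOCK_WEIGHT = 4_000_000   # consensus constant in bitcoin core
    TX_WEIGHT = 560                # assumed typical segwit transaction
    BLOCK_INTERVAL = 600           # average seconds between blocks

    tps = MAX_BLOCK_WEIGHT / TX_WEIGHT / BLOCK_INTERVAL  # ~12 TPS
    # raising the constant raises TPS linearly; the hard part is keeping
    # validation feasible on commodity hardware, not changing the line.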
the conservative minds prevailed during the scaling debate in 2015-2017, creating selection pressure for scaling solutions that optimize the limited resource (chain space) rather than populist, simplistic "solutions" pushed by cheap propaganda slogans like "we can do this much TPS!".
> read: other chains are centralized databases that have no business being blockchains in the first place and can be replaced by a single mysql instance.
This, too, has been a thought of mine, but I'm not exactly sure how I would block diagram the 'locking' mechanism for the MySQL instance.
I've come to believe the locking mechanism is the nonce: a proof-of-work solution meeting sufficient difficulty parameters, where the cost of finding that solution is what provides the defensibility.
So how do you get the 'strength' of the miners throwing ExaHashes at a solution with a single instance? We could easily snapshot the DB and sign the snapshots; however, the point of failure is then no longer a 50% attack but losing said key... which intuitively feels far less secure?
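As a sanity check on my own understanding, here's a toy sketch of that 'locking' mechanism (real Bitcoin headers are 80 bytes with a 32-bit nonce and a compact-encoded target; this only illustrates the hash-below-target idea):

    import hashlib

    def mine(header: bytes, target: int) -> int:
        """Search for a nonce such that sha256d(header || nonce) < target.
        Halving the target doubles the expected number of hashes needed."""
        nonce = 0
        while True:
            payload = header + nonce.to_bytes(8, "little")
            h = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
            if int.from_bytes(h, "big") < target:
                return nonce
            nonce += 1

    # toy target: roughly 1 in 2**16 hashes qualifies, instant on a laptop
    print(mine(b"toy-header", 2**240))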
locking is needed to order transactions; mysql already has acid transactions. i use mysql metaphorically to describe a single centralized database protected by a single entity that controls all the writes. we already have such payment systems: visa, mastercard, paypal, venmo, etc.
blockchain and miners only make sense if you need and actually maintain decentralization. if you don't and/or can't - miners (and validators in PoS systems) are useless overhead.
Thank you, I had never heard of ACID -- yeah, centralized vs. decentralized becomes fairly philosophical (eco vs. democratized arguments, etc.).
To build on the discussion, I'm more amazed at blockchain's capabilities as a cooperation engine outside of the normal channels. To that end, I'm really surprised we haven't seen significant attrition from the current legal system to smart contract systems. I guess the lack of adoption highlights just how 'customized' a 'standard' contract or dispute is in the real world.
blockchains are great in environments where parties can't audit and trust each other but need write access to the same database. so far it remains an open question whether blockchains will ever be useful for anything other than a settlement ledger (and things you can build on top of that).
i don't believe the hype of using blockchains to track logistical or supply chain data will prove justified, simply because all these international corporations already have a system to resolve conflicts and enforce contracts: law.
Right -- many friends are lawyers and they basically say law is a slow grinding gear. So, my naive thought is that any improvement in speed should be a deterrent for the mischievous.
And thinking out loud, it sounds like if blockchain were used to run financial operations for a company, we could make the infamous audits of the Luckin Coffees and GSXs of the world a thing of the past. Or at least, after one mistake (the LC coupon issue was an interesting hack to avoid detection), update the contract, and then all future instances are robust to the same issues.
GAAP Rules and Arm's Length Transactions could be factually monitored too... interesting
i'm not saying there will never be a valid use case for blockchain beyond bitcoin, i'm just skeptical about the hype of recent years where it's applied left and right to everything. so far all the use cases i've heard of could be replaced with a centralized database maintained by a consortium of interested parties.
They are here: the publicly known Lightning nodes alone have 1,200+ BTC locked in [1], and that's ignoring all the private channels, which form the majority (private channels are end-user channels that don't route transactions themselves and aren't broadcast publicly).
We have easy-to-use wallets [2] and easy ways to run lightning nodes [3].
And if you look at the average transaction value, you can see that the blockchain itself is acting more as a settlement layer than for "coffee transactions", as it should [4].
Becoming the most trusted decentralized worldwide settlement layer is considered a failure now?
Many of the cryptos that claim to scale to thousands of TPS on layer 1 are sacrificing decentralization to achieve it, at which point I might as well use Paypal/Visa. There's no point in being able to scale to those levels in theory if in practice no one uses them.
> Becoming the most trusted decentralized worldwide settlement layer is considered a failure now?
it is, if the original goal was to be something else.
If you start a marathon and stop to eat the greatest hot dog ever made, it wouldn't be considered a successful run, although you can say "but it was the best!" and be happy about it.
Not sure if it can be called a pivot when it was in the original release of the software.
Payment channels aren't in the white paper but they were in the first public release and had dedicated opcodes. That first implementation wasn't very practical nor was it secure.
The fact that modern payment channels are implemented with other opcodes may be a pivot in implementation but not in concept.
Startups have a duty to their shareholders to make money, whereas Bitcoin has at least a nominal duty to its inventor[0] and its whitepaper.
There's nothing wrong, though, with someone taking the open source code and creating LightningCoin from it. That would be the equivalent of a pivot.
Payment channels were invented by Satoshi and actually baked into the transaction format from day one-- it's why transactions have locktimes and sequence numbers.
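A toy model of that original design, on my reading of it (the field names here are illustrative; real transactions carry an nLockTime and per-input nSequence numbers):

    from dataclasses import dataclass

    @dataclass
    class ChannelState:
        # toy model: both parties keep re-signing one unconfirmed
        # transaction, bumping the sequence number with every update;
        # the locktime keeps any version from confirming before the
        # deadline, so only the newest version was supposed to settle.
        lock_time: int   # earliest block height at which the tx can confirm
        sequence: int    # a higher sequence was meant to replace lower ones
        balance_a: int   # sats to party A
        balance_b: int   # sats to party B

    def pay(state: ChannelState, a_to_b: int) -> ChannelState:
        # each off-chain payment is a newer version of the same transaction
        return ChannelState(state.lock_time, state.sequence + 1,
                            state.balance_a - a_to_b, state.balance_b + a_to_b)

The insecurity mentioned above is visible even in the toy: nothing forces miners to prefer the higher sequence number, which is part of why modern channels enforce revocation differently.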
The nice thing about using the blockchain for settlement is that (with lightning) you don't actually need to wait for the settlement to clear before you are certain you own the funds.
What is a sign of success, multicasting every transaction in the world and storing it on a public ledger? I don't remember that being in the white paper.
Can you elaborate on your perspective? I must admit that I don't use layer 2 solutions, and many people who think they do are not (i.e. users of Polygon's Proof of Stake network instead of its Plasma network), but I appreciate the progress on those layer 2 solutions and actively support the bridges to them.
A low-priority transaction costs $2.36; medium is $4.39 and high is $7.48.
To put this into context: you can send $10,000 worth of BTC anywhere in the world in an hour for $2.36 right now.
A bank wire transfer for $10,000 is going to be at least $35; often it's more, depending on which country you're sending to and whether you have to use an intermediate bank to get to the bank you're ultimately attempting to reach.
Bank holidays, weekends, etc. don't apply to bitcoin.
Bitcoin fees have ranged between $0.79 and $33.00 within the past week (feerates between 10 sat/vB and 250 sat/vB) [0].
There have been periods where the prevailing feerate has remained over 150 sat/vB ($12 - $24) for 24 hours.
And unless you're already holding $10K of BTC, you will pay exchange fees and are subject to the forex-equivalent rate. Your receiver will have the same burdens.
Wire transfers (SWIFT, FedWire, etc.) are effectively instantaneous, the fees are 100% predictable, and they are accepted everywhere worldwide for all legitimate business. As you note, they can be inconvenient or impossible overnight, on holidays, and on weekends.
I'm not disagreeing with you -- just noting that the quoted Bitcoin fees at any point in time are not reliable, and that the whole process is more complicated than it might appear.
[0] (background for other readers) Bitcoin fees are a function of the transaction size in (virtual) bytes (minimally either ~140 vB or ~240 vB, depending on the type of addresses used), multiplied by the feerate in satoshis per virtual byte. Converting to USD, you have to consider the BTC-USD exchange rate which has been bouncing around $55K lately. So, e.g.:
    240 vB * 250 sat/vB == 60_000 sat
    60_000 sat == 0.0006 BTC (100_000_000 satoshis per BTC)
    0.0006 BTC == 33.00 USD (at $55_000 USD per BTC)
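Or, as a few lines of Python with the same assumed numbers:

    def tx_fee_usd(vbytes: int, feerate_sat_per_vb: int, btc_usd: float) -> float:
        """Fee in USD: size in vbytes times feerate, converted at spot."""
        sats = vbytes * feerate_sat_per_vb
        return sats / 100_000_000 * btc_usd

    print(tx_fee_usd(240, 250, 55_000))  # 33.0, matching the figures above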
You cannot send $10,000 anywhere in the world via Bitcoin for $2-7, you have to first convert the $10,000 to BTC at a local exchange, wait 3 days, pay 1-2%, suffer slippage, pay $2-7, wait an hour, pay 1-2% at the remote exchange and wait for the money to deposit into a bank account. This is true because you can't spend Bitcoin for goods and services - generally speaking anyways. Bitcoin in this context is just the intermediary unit which is elided in a wire.
Re: wires, domestic US wires are offered free of charge by many institutions, and are instant during regular business hours. Obviously delays apply outside those hours. For reference, an ACH transaction costs the depository institution $0.002 in bulk; a FedWire costs $0.033 in bulk. [1]
Transfers outside the US cost more and take longer because of AML and KYC.
Not to mention that the move in the US from ACH to RTP makes domestic transfers ~free and instant, 24/7. No blockchain needed. Of course there isn't: the current system was shaped by policy, not by the technical limitations of MySQL. [2]
Those transaction cost estimates are extremely volatile. I recently tried to pay 50 sat/vB (~$4) for a transaction on a Monday and it didn’t clear until Saturday. Unfortunately mempool.space doesn’t seem to publish historical data, but it seems that weekends do affect Bitcoin, in that transactions are significantly cheaper.
My Schwab account does international wires settled in USD or in a number of foreign currencies for free. (The official fee is $15 but it’s always been refunded for me.)
As the other @garmaine mentions, the currency conversion is at mid-market rates, no fees. I've compared the rates to Wise and other options and it costs far less, and certainly less than the median tx fee of BTC by itself, let alone slippage or fiat conversions or exchange fees. I'm convinced anyone who thinks BTC has any advantage as a medium of exchange has never actually used it or compared it to existing options.
Schwab forex exchange really is without fees. It’s done at the current forex rate with no basis points taken. I’ve checked my transfers against the forex rate for the day.
Schwab operates these retail banking services as a loss-leader for their investment products.
I've never experienced a bank that hasn't silently gouged on forex, and this is the first I've even heard of one that doesn't.
Definitely good to continue to be vigilant about such claims.
As for BTC, I wouldn't use it for anything other than a store of value; there are far better crypto options for transferring value. Some cost fractions of a cent and take seconds.
Others cost single-digit dollars, but are fiat-pegged so there's no volatility risk at all.
I didn't do wires with Schwab but I'll trust it's cheaper than your average bank when it comes to conversion rates. (They do refund ATM withdrawal fees for international ATMs.)
Not sure what an accounting trick is, but it might be interesting to note that Bitcoin right now averages 1.4MB blocks.
Would 2MB be preferable? Would it be useful? Settling just the same magnitude of transactions that Visa settles would require blocks slightly above 500MB, and that's without any more sophisticated transactions such as atomic swaps, which could potentially be useful.
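Rough back-of-envelope with assumed inputs (Visa's daily average rather than peak, and a guessed typical transaction size):

    # assumed inputs for an order-of-magnitude check
    visa_tps = 2_900        # rough daily average, far below peak capacity
    tx_bytes = 300          # assumed typical bitcoin transaction size
    block_interval = 600    # seconds

    block_mb = visa_tps * block_interval * tx_bytes / 1_000_000
    print(block_mb)         # ~522 MB per block, the same order as above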
I believe the point of segwit is more than accounting. Essentially these signatures/spend-scripts are prunable once the blocks they are in are old enough.
After all, if a transaction had an invalid spend script, it would not have gotten 100 blocks worth of confirmations.
They only exist today as a mechanism to prevent nodes from being flooded by low-difficulty fork blocks, forking off from before height 230k, because the initial difficulty of Bitcoin (2^32 hash operations per block) is too low relative to multi-TH/s ASIC mining devices.
I'm not sure, I haven't been actively involved in development for a couple years.
There were a couple of distinct activities. One is rolling utxo hashes, which have no major engineering hurdles and can allow a compromised-security "bootstrap from a utxo set".
The other is schemes that allow nodes to not have the utxo set but still validate-- these have historically had unfavorable IO costs, and the bandwidth/storage tradeoff hasn't seemed that appealing-- e.g. would you find reducing storage from 10GB to 1MB at the cost of increasing bandwidth 10x appealing? In some applications it would be, in others not.
I believe work related to both has been ongoing, however.
> compromised security "bootstrap from a utxo set"
could you elaborate on what's compromised about including a sha256 of the utxo set in every block and allowing users to choose how far back they want to bootstrap from?
isn't it strictly better than current situation with assumevalid?
Depending on a utxo state in blocks is effectively the SPV security model-- it's an utter blind trust in miners to set the value honestly, something which is only theoretically sound on the assumption that someone else is checking.
If you're happy with the SPV security model-- perhaps you should be using SPV? :) I know this is a little trite, since it's not quite identical because of the "past", but the vast majority of the sync time is in the last two years in any case, and practical considerations mean you wouldn't be able to just arbitrarily choose how far back to sync from (as you need to be able to get the utxo set as of that height).
In the ethereum world effectively almost all synchronization is done using 'fast sync', which is essentially the committed-utxo, blindly-trust-miners model. Performance and storage considerations mean you can't go back more than a tiny amount of time (I believe it's normally 4 hours). Many commercial entities operate multiple nodes, and if they detect they've fallen behind they just auto-restart and fast sync to catch back up. Effectively this means that if miners commit invalid state, these nodes will just blindly accept it after a couple hours' outage.
All assumevalid is doing is asserting that the ancestors, two weeks back and further, of a specific block hash all have valid signatures. When you get a setting there as part of the software you're running, you're assuming that the software isn't backdoored (e.g. because of a public review process, or your own review). Assumevalid is strictly easier to review than pretty much any other aspect of the software's integrity: there are 100 places where a one-character change would silently bypass validation completely, while reviewing AV simply requires checking that the value set in it is an accepted block in some existing running node. AV as implemented also requires the blockchain to agree and have two weeks of work on top of it, so it's in every way harder to undermine validation by messing with it than by changing the code some other way.
On a technically pedantic point: it takes a minute or so to sha256 the UTXO set, so doing literally what you suggest would utterly obliterate validation performance. (Fortunately, rolling hashes accomplish what you mean without the huge performance hit.)
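For anyone following along, a toy sketch of the rolling idea, with a deliberately small modulus (the muhash construction works in a much larger group): map each utxo to a group element and keep the set hash as a running product, so an add is one multiplication and a removal is one multiplication by a modular inverse, rather than re-hashing the whole multi-gigabyte set every block.

    import hashlib

    P = 2**127 - 1  # toy prime modulus; NOT the production parameter

    def elem(utxo: bytes) -> int:
        # map a serialized utxo to a nonzero group element
        return int.from_bytes(hashlib.sha256(utxo).digest(), "big") % P or 1

    class RollingSetHash:
        def __init__(self):
            self.acc = 1          # hash of the empty set
        def add(self, utxo: bytes):
            self.acc = self.acc * elem(utxo) % P
        def remove(self, utxo: bytes):
            self.acc = self.acc * pow(elem(utxo), -1, P) % P  # mod inverse

    # per-block cost is proportional to the outputs spent and created in
    # that block, not to the size of the whole utxo set.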
> Depending on a utxo state in blocks is effectively the SPV security model-- it's an utter blind trust in miners to set the value honestly
if it was hardforked in as part of the consensus protocol, miners wouldn't be able to set an invalid utxo set hash any more than they are able to "produce" blocks with invalid signatures, or am i missing something?
as for storage and performance, maybe it would make sense to take the performance hit of maintaining a persistent immutable set such that you would be able to travel back as far as you like with minimal overhead.
do you know of any active PRs/branches where utxo commitment work is/has been happening?
They can produce blocks with invalid signatures, but they're stopped by nodes validating. If instead of validating, nodes skip blocks and use a commitment to the state, then they're not validating anymore. How that fails is why I gave the ethereum example, because I think the security has actually practically failed there-- it just hasn't been exploited yet.
> as for storage and performance, maybe it would make sense to take the performance hit of maintaining a persistent immutable set such that you would be able to travel back as far as you like with minimal overhead.
The cost of supporting that arbitrarily would be extremely high, over and above the cost of having the complete blockchain. I don't see why anyone would choose to run a node to serve that; I certainly wouldn't-- it's obnoxious enough just to have an archive node. Having some periodic snapshots would probably be fine... but not that many, since each would be on the order of 7GB of additional storage.
No, there is ongoing work I haven't been following closely. Sounds like you're more interested in the assumeutxo style of usage, so search for that and muhash.
2 MB doesn't solve the issue, does it? Bitcoin is growing exponentially. You don't go for a hard fork to just "double" capacity. If the hard fork promised 100x-1,000x capacity, I'd imagine it would be less contentious.
yes, in number of transactions. just realize that if bitcoin went unlimited, most of the altcoin activity would be happening on the bitcoin chain. blockchain space being a limited resource priced economically unviable activity out into shitcoins.
In hindsight it should be obvious that these "smarter" scaling solutions have utterly failed to keep Bitcoin functioning as peer-to-peer electronic cash, as was the original intention.