
Curious to know whether, in your opinion, the use of verifiable delay functions and proof of history represents a more reliable and scalable approach to decentralized time-ordering.

https://solana.com/news/proof-of-history---a-clock-for-block...



Need more time to think this through. A few comments:

1. The problem space being solved in proof-of-history needs special mention. It assumes byzantine faults. (My article does not).

2. If I understood it correctly, the idea behind proof-of-history is that you perceive the flow of time in "windows". Each window is linked to the state of the previous window (à la a blockchain) and has enough randomness built into it that predicting a window's attributes is a very-low-probability event. When a new event is generated, you declare that event to be part of a certain window. You could not have fraudulently backdated your event, because the later windows were already derived from the original window state (i.e., the state before you inserted your event). You cannot future-date your event, because you cannot predict the randomness. (A rough sketch of this follows after point 4.)

3. On the face of it, this is a clever idea. But since you mentioned "scalable", I wonder how you would deal with the ordering of events that are rightfully binned into the same window. Wouldn't you end up with "concurrent events" and find yourself back at square one?

4. At some level, this design can be classified as a causally consistent system. In a non-Byzantine world, causally consistent systems can tolerate network partitions. But in this world, you assume the systems have access to the windowed view of time flow.
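
As a concrete (and heavily simplified) sketch of point 2, assuming a SHA-256 hash chain with periodic checkpoints standing in for the "windows" (the tick counts, checkpoint interval, and event are invented for illustration; this is not Solana's actual implementation):

    import hashlib

    def next_state(prev_state, event=None):
        """Advance the chain one tick; optionally fold an event into the state."""
        data = prev_state if event is None else prev_state + event
        return hashlib.sha256(data).digest()

    state = b"genesis"
    checkpoints = []                  # published (tick, state) pairs ~ "windows"
    for tick in range(1, 1001):
        # an event arriving around tick 500 gets bound to that point in the chain
        event = b"my event" if tick == 500 else None
        state = next_state(state, event)
        if tick % 100 == 0:
            checkpoints.append((tick, state))

    # Backdating the event to an earlier tick would change every checkpoint
    # published after it; future-dating it would require predicting states that
    # have not been generated yet, which the one-way hash rules out.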

Apologies if I grossly misinterpreted the article.


Yes, 1 seems to be the most important difference, with 4 as one of its most important consequences. It's definitely intended to be a solution to Byzantine faults (or double payments as they would show up in practice if the transactions are financial!).

I think your summary in 2 is accurate. The answer to the scalability question in 3 seems to be that verifiable delay functions are always costly to compute the first time, but can be checked in parallel -- like a maze that takes time to solve the first time, but can be checked instantly simply by looking to see whether the proposed solution crosses any walls.
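
A minimal sketch of that asymmetry, assuming a plain SHA-256 hash chain as a stand-in for the delay function (illustrative only, not Solana's real protocol): generating the chain is inherently sequential, but once the intermediate states are published, each link can be re-checked independently, and therefore in parallel.

    import hashlib
    from concurrent.futures import ProcessPoolExecutor

    def tick(prev):
        return hashlib.sha256(prev).digest()

    def check_link(pair):
        prev, cur = pair
        return tick(prev) == cur        # one link can be verified on its own

    if __name__ == "__main__":
        # Generation is sequential: state N cannot be computed before state N-1.
        states, s = [], b"genesis"
        for _ in range(10_000):
            s = tick(s)
            states.append(s)

        # Verification is embarrassingly parallel: every (prev, cur) pair is
        # independent, so links can be checked across many cores at once.
        pairs = list(zip([b"genesis"] + states[:-1], states))
        with ProcessPoolExecutor() as pool:
            assert all(pool.map(check_link, pairs, chunksize=1_000))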

It seems to me like a reasonable technological solution to time-ordering transactions in a decentralized ledger subject to Byzantine faults, but I'm not an expert here. It's at the core of the Solana layer-1 blockchain. Solana took it on the chin with the FTX collapse, because FTX had chosen Solana as its main chain and made a heavy investment in its ecosystem. Wondering whether that was an exogenous shock that makes Solana a relatively good investment opportunity at the moment. The ability to do verification in parallel means verification of 100k transactions per second, which is on par with what the credit card companies can do.


net TPS of a distributed and BFT system is going to be bottlenecked by the cost of information exchange between participating nodes, which is a function of protocol design and, ultimately, network bandwidth/latency

any basic server-class hardware can saturate normal network capabilities with no special effort: a single core and a few hundred megabytes of system memory are all you need to push 1Gbps, probably even 10Gbps

but solana defines hardware requirements for validator nodes https://docs.solana.com/running-validator/validator-reqs that stipulate, at minimum, 12 physical cores, 128GB of RAM, and independent PCIe NVME SSDs for specific types of persisted state

even accounting for the additional work you need to do in a trustless/open/BFT/etc. protocol, these requirements are orders of magnitude beyond what should be necessary, and pretty clearly indicate that solana is very much at the "naive prototype" stage, technically

also, fwiw, solana is not doing the claimed theoretical maximum of ~700k TPS, or even 100k TPS, but just 3k TPS, as reported by https://explorer.solana.com


Can't speak for the OP, but can speak as just an observer - probably not. Blockchains are ultimately CP systems, not AP. From that link: "Data can be inserted into the sequence by appending the data to the previous generated state. The state, input data, and count are all published. Appending the input causes all future output to change unpredictably."

Meaning there is a single source of truth. That single source is decentralized (because blockchain), but it still requires a quorum to agree to accept a bit of data and to treat the new state as the valid state. I.e., multiple competing parallel writes must be serialized across a network of computers, which is always more expensive than if those competing writes could simply proceed in parallel.

Lamport clocks and the like are AP. They fully expect nodes to make decisions autonomously and to synchronize after the fact, and data can be lost without anyone realizing it.
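
For contrast, a textbook Lamport-clock sketch (generic, not tied to any particular system): each node advances its own counter without coordination and merges orderings only when messages happen to be exchanged, which is what makes this style of system availability-first.

    class Node:
        """Textbook Lamport clock: purely local counters, merged on message receipt."""
        def __init__(self, name):
            self.name = name
            self.clock = 0

        def local_event(self):
            self.clock += 1                # no coordination needed for local writes
            return self.clock

        def send(self):
            self.clock += 1
            return self.clock              # the timestamp travels with the message

        def receive(self, msg_ts):
            self.clock = max(self.clock, msg_ts) + 1   # reconcile after the fact
            return self.clock

    a, b = Node("a"), Node("b")
    a.local_event()          # a's clock: 1
    ts = a.send()            # a's clock: 2; message carries ts=2
    b.local_event()          # b's clock: 1, decided autonomously, no quorum
    b.receive(ts)            # b's clock: max(1, 2) + 1 = 3, ordering fixed up later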

(The above is all a bit of a simplification, but fundamentally, they're solving different problems)


Blockchains as commonly deployed seem to be more AP to me: you can get into split-brain, and while there's a way to select the canonical form after the partition is healed, that's after the fact. Whereas e.g. Paxos won't even let you do anything without knowing it has quorum.


alas, because blockchains allow open participation, there is no reliable definition of split-brain, or network membership, or anything at all, really

because there is no canonical definition of the set of nodes which represent a source of truth for the state of the blockchain

the blockchain state is a function of game-theoretical (read: unreliable and non-deterministic) algorithms that assume the result provided by a majority (typically 2/3) of participating nodes is valid

they are, I guess, CP based on these assumptions (and others), but those assumptions are probabilistic, they absolutely can be violated, and those violations create forks of the chain which invalidate even the weakest consistency models

and nobody seems to care

the notion that blockchains are "trustless" is a complete fiction: all state necessarily exists in the context of some trusted authority, and decentralization just means that this authority is not well defined, arbitrary from the perspective of its downstream consumers, and unaccountable


Yeah, I should probably say "they're treated as CP systems", in that they purport to offer CP guarantees when in reality they adhere to almost none of them (but they also adhere to none of the traditional expectations of an AP system, i.e., a way to heal partitions and restore some form of consistency).


Yup. Different problem space! Thanks for laying it out clearly.



