Hacker News | sougou's comments

Hi HN, this is a series on a way to generalize consensus protocols and how to adapt them to existing systems.

This is part 7: Discovery and Propagation. With this, we complete everything about the protocol part of Generalized Consensus. There are a few more advanced topics still to cover; they are needed for running in production.

One of our goals is to make this approach work for Postgres via Multigres. But the principles can also be used to implement your own solutions. This will make Durability and High Availability more accessible for users.


Hi HN, this is a series on a way to generalize consensus protocols and how to adapt them to existing systems.

This is part 6: It covers Revocation and Candidacy, the prerequisites for a leader change. We explain how to revoke previous leaderships and recruit for a new candidacy. It's a bit complex, but there are plenty of animations to help you along.
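
The revoke step can be sketched in a few lines of Go. This is a hedged illustration with invented names, not the protocol's actual messages: a candidate must collect renunciation promises from a majority of replicas before a new leadership can proceed, which guarantees the old leader can no longer assemble a quorum for its writes.

```go
package main

import "fmt"

// Replica remembers the highest candidacy term it has promised itself to.
type Replica struct {
	promisedTerm int
}

// Promise asks the replica to renounce any leadership older than term.
// It refuses if it has already promised an equal or newer candidacy.
func (r *Replica) Promise(term int) bool {
	if term <= r.promisedTerm {
		return false
	}
	r.promisedTerm = term
	return true
}

// Revoke succeeds once a majority of replicas has promised, which
// fences the old leader out of any future quorum.
func Revoke(replicas []*Replica, term int) bool {
	promises := 0
	for _, r := range replicas {
		if r.Promise(term) {
			promises++
		}
	}
	return promises > len(replicas)/2
}

func main() {
	replicas := []*Replica{{}, {}, {}}
	fmt.Println(Revoke(replicas, 1)) // true: a majority renounced the old leadership
	fmt.Println(Revoke(replicas, 1)) // false: term 1 was already promised away
}
```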

One of our goals is to make this approach work for Postgres via Multigres. But the principles can also be used to implement your own solutions. This will make Durability and High Availability more accessible for users.


Hi HN, this is a series on a way to generalize consensus protocols and how to adapt them to existing systems.

This is part 5: This one covers how to safely handle multiple coordinators racing to take action. We explore term numbers, coordinators as separate agents, and lock-free approaches.
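
The term-number idea can be sketched in a few lines of Go (the types and names here are my own, not the series' API): each node tracks the highest term it has seen and rejects anything older, so a coordinator that has been superseded by a newer term can no longer act.

```go
package main

import "fmt"

// Node remembers the highest coordinator term it has seen.
type Node struct {
	highestTerm int
}

// Accept returns true only if the coordinator's term is current.
// Seeing a newer term also advances the node, fencing out older ones.
func (n *Node) Accept(term int) bool {
	if term < n.highestTerm {
		return false // stale coordinator: reject
	}
	n.highestTerm = term
	return true
}

func main() {
	n := &Node{}
	fmt.Println(n.Accept(1)) // true: first coordinator
	fmt.Println(n.Accept(2)) // true: newer coordinator takes over
	fmt.Println(n.Accept(1)) // false: the old coordinator is fenced out
}
```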

One of our goals is to make this work for Postgres via Multigres. But the principles can also be used to implement your own solutions. This will make Durability and High Availability more accessible for users.


Hi HN, this is a series on a way to generalize consensus protocols and how to adapt them to existing systems.

This is part 4: This one has a nice animation that shows how requests get fulfilled.

One of our goals is to make this work for Postgres via Multigres. But the principles can also be used to implement your own solutions. This will make Durability and High Availability more accessible for users.

I'll post here as I release the subsequent parts.


Hi HN, this is a series on a way to generalize consensus protocols and how to adapt them to existing systems.

This is part 3: Governing Rules. This is the foundation for the rest of the series. We'll be repeatedly applying these rules to map out all parts of the protocol. Also, every system that I know of implicitly follows these rules.

One of our goals is to make this work for Postgres via Multigres. But the principles can also be used to implement your own solutions. This will make Durability and High Availability more accessible for users.

I'll post here as I release the subsequent parts.


Hi HN, this is a series on a way to generalize consensus protocols and how to adapt them to existing systems. I've released two parts so far. They explain the problem and the requirements. You can follow the link to the second part from the first part.

One of our goals is to make this work for Postgres via Multigres. But the principles can also be used to implement your own solutions. This will make Durability and High Availability more accessible for users.

I'll post here as I release the subsequent parts.


I oversaw this work, and I'm open to feedback on how things can be improved. There are some factors that make this particular situation different:

This was an LLM-assisted translation of the C parser from Postgres, not something written from the ground up.

For work of this magnitude, you cannot review line by line. The only thing we could do was to establish a process to ensure correctness.

We did control the process carefully. It was a daily toil. This is why it took two months.

We've ported most of the tests from Postgres, enough to be confident that it works correctly.

Also, we are in the early stages for Multigres. We intend to do more bulk copies and bulk translations like this from other projects, especially Vitess. We'll incorporate any possible improvements here.

The author is working on a blog post explaining the entire process and its pitfalls. Please be on the lookout.

I was personally amazed at how much we could achieve using an LLM. Of course, this wouldn't have been possible without a certain level of skill. This person exceeds all expectations listed here: https://github.com/multigres/multigres/discussions/78.


"We intend to do more bulk copies and bulk translations like this from other projects"

Supabase’s playbook is to replicate existing products and open source projects, release them under open source, and monetize the adoption. They’ve repeated this approach across multiple offerings. With AI, the replication process becomes even faster, though it risks producing low-quality imitations that alienate the broader community, and people will resent the appropriation of their work.


An alternative viewpoint which we are pretty open about in our docs:

> our technological choices are quite different; everything we use is open source; and wherever possible, we use and support existing tools rather than developing from scratch.

I understand that people get frustrated when there is any commercial interest associated with open source. But someone needs to fund open source efforts and we’re doing our best here. Some (perhaps non-obvious) examples:

* we employ the maintainers of PostgREST, contributing directly to the project - not some private fork

* we employ maintainers of Postgres, contributing patches directly

* we have purchased private companies like OrioleDB, open sourced the code, and made the patents freely available to everyone

* we picked up unmaintained tools, like the Auth server, and maintained them at our own cost, upstreaming changes until the previous owner/company stopped accepting contributions

* we worked with open source tools/standards like TUS to contribute missing functionality like Postgres support and advisory locks

* we have sponsored adjacent open source initiatives like adding types to Elixir

* we have given equity to framework creators, which I’m certain will be the largest donation these creators have ever received (or will ever receive) for their open source work

* and yes, we employ the maintainers of Vitess to create a similar offering for the Postgres ecosystem under the same Apache2 license


And I'm not sure about their ability to release said code under a different license either.

Postgres has a pretty permissive license, but that doesn't mean you can just ignore it.


We absolutely need this!!! I vote yes :).


You won't believe what a nightmare it was to work with transactionless DDLs in MySQL. Transactional DDL will be a dream come true for Vitess: we can throw away all the hacks we had to do for MySQL's sake.

I also see such a clean 2PC API. This was another huge mess on the MySQL side.
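
For the curious: Postgres exposes two-phase commit directly in SQL via PREPARE TRANSACTION / COMMIT PREPARED / ROLLBACK PREPARED. Here's a rough sketch (my own names and shape; a real cross-shard coordinator is far more involved) of how those statements could drive a cross-shard commit:

```go
package main

import "fmt"

// Shard is a stand-in for a real database connection on one shard.
type Shard interface {
	Exec(sql string) error
}

// CommitCrossShard prepares the transaction on every shard, then commits.
// If any prepare fails, the shards already prepared are rolled back.
func CommitCrossShard(shards []Shard, txID string) error {
	var prepared []Shard
	for _, s := range shards {
		if err := s.Exec(fmt.Sprintf("PREPARE TRANSACTION '%s'", txID)); err != nil {
			for _, p := range prepared {
				p.Exec(fmt.Sprintf("ROLLBACK PREPARED '%s'", txID))
			}
			return err
		}
		prepared = append(prepared, s)
	}
	for _, s := range shards {
		s.Exec(fmt.Sprintf("COMMIT PREPARED '%s'", txID))
	}
	return nil
}

// fakeShard records the SQL it receives, for demonstration.
type fakeShard struct{ log []string }

func (f *fakeShard) Exec(sql string) error {
	f.log = append(f.log, sql)
	return nil
}

func main() {
	a, b := &fakeShard{}, &fakeShard{}
	CommitCrossShard([]Shard{a, b}, "tx1")
	fmt.Println(a.log) // prints [PREPARE TRANSACTION 'tx1' COMMIT PREPARED 'tx1']
}
```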


The Vitess architecture was traditionally built in a database-agnostic fashion. Most of its features should port smoothly over to Postgres.

The cool features that I can think of: a formal sharding scheme based on relational foundations, a fairly advanced query analyzer and routing engine capable of cross-shard functionality, HA and durability, the ability to reshard safely, seamless migrations, etc.

It's a pretty big list of capabilities.
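
As a taste of the sharding idea, here's a hedged Go sketch in the spirit of Vitess's keyspace IDs (the hash function and shard layout are illustrative, not Vitess's actual vindex code): the sharding key hashes to a keyspace ID, and each shard owns a contiguous range of that ID space, which is what makes resharding a matter of splitting ranges.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// keyspaceID hashes a sharding key into a 64-bit keyspace ID.
func keyspaceID(shardingKey string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(shardingKey))
	return h.Sum64()
}

// shardFor maps a keyspace ID onto one of n equal-width shard ranges.
func shardFor(id uint64, n int) int {
	width := ^uint64(0)/uint64(n) + 1
	return int(id / width)
}

func main() {
	for _, user := range []string{"alice", "bob"} {
		id := keyspaceID(user)
		fmt.Printf("%s -> keyspace ID %016x -> shard %d of 4\n", user, id, shardFor(id, 4))
	}
}
```

Because the route depends only on the hash of the key, any gateway can compute it without consulting the data, which is what allows the routing engine to fan out cross-shard queries.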

