I work mainly in Python, and it has always seemed really bad to me that there are three main implementations of Protobufs, instead of the C++ one being the canonical implementation and other platforms just dlopen'ing and using it (there are a million software engineering arguments around this; I've heard them all before, have my own opinions, and have listened to the opinions of people I disagree with). It seems like the velocity of a project is the reciprocal of the number of independent implementations of a spec, because any one of the implementations can slow down all of them (like what happened with proto3 around required and optional).
From what I can tell, a major source of the problem was that protobuf field semantics were absolutely critical to the scaling of Google in the early days (as an inter-server protocol for rapidly evolving things like the search stack), but protobuf is also being used as a data modelling toolkit (as a way of representing data with a high level of fidelity). Those two groups, along with the developers of the many language bindings who don't want to deal with native code, do not see eye to eye, and each wants to drive the spec in their preferred direction.
(FWIW nowadays I use pydantic for type descriptions and JSON for transport, but I really prefer having an external IDL unrelated to any specific programming language)
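For concreteness, here's a minimal sketch of that workflow, assuming pydantic v2 (the SearchRequest model is just a made-up example):

    from pydantic import BaseModel

    class SearchRequest(BaseModel):
        query: str
        page: int = 1
        results_per_page: int = 10

    # Serialize to JSON for the wire...
    payload = SearchRequest(query="protobuf history").model_dump_json()

    # ...and parse/validate it back on the receiving side.
    request = SearchRequest.model_validate_json(payload)
    assert request.page == 1

The trade-off is that the schema lives in Python itself, which is exactly what I mean about preferring an external IDL.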
> It seems like the velocity of a project is the reciprocal of the number of independent implementations of a spec, because any one of the implementations can slow down all of them (like what happened with proto3 around required and optional).
Velocity and stability/maturity are in tension, sure. I think for a foundational protocol like protobuf you want the stability and reliability that come from multiple independent implementations more than you want it to be moving fast and breaking things.
Additionally, calling C++ from other languages is a pain: you're forced to write a bridging C API. Doing so from Go is even less ideal, from what I gather; it requires cgo, forces Go to interface with C call stacks, slows down compilation, etc.
You'll be thrilled to hear about upb, then: it was designed to be embeddable so it can power other languages without a from-scratch implementation, and it now powers Python protos.
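If you want to see which backend your install is actually using, something like this should work (api_implementation is an internal module, so treat it as a debugging aid rather than a stable API):

    from google.protobuf.internal import api_implementation

    # Recent protobuf releases typically report "upb" here; older ones say "cpp" or "python".
    print(api_implementation.Type())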