
I've been testing gRPC a bit. It's promising, but the tools are clearly alpha quality, so if you're going to build apps on top of it, you're in for some early-adopter hurt.

For example, you need the latest Protobuf 3.0 beta release, which no OS distro currently ships, and you have to build gRPC from source. The language-specific packages (Ruby, Node, etc.) have releases that lag behind gRPC itself, so you'll probably have to build those from source, too (linking directly to HEAD from a Gemfile or package.json doesn't work, last I checked). Performance also seems decidedly lackluster, though it's been a few months since I did any casual benchmarking.

As for the API and the feel of the library, it's similar to Thrift, and much like CORBA without the OO and the attempts at location transparency. With the current generators, you get low-level, not very friendly wrappers generated from the Protobuf declarations, which don't attempt to hide the fact that you're writing RPC requests and responses as Protobuf structs.
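To illustrate, this is roughly what calling a generated Go stub looks like, assuming a hypothetical Greeter service; the pb import stands in for whatever protoc generates from your .proto:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"

        pb "example.com/greeter/pb" // hypothetical protoc-generated stubs
    )

    func main() {
        // Nothing hides the plumbing: you dial the connection yourself...
        conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := pb.NewGreeterClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // ...and the request and response are plain Protobuf structs.
        resp, err := client.SayHello(ctx, &pb.HelloRequest{Name: "world"})
        if err != nil {
            log.Fatal(err)
        }
        log.Println(resp.Message)
    }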

One disappointing aspect of gRPC at this point is the lack of discoverability. Clients have to connect to a specific host and port, and you have to build your own glue based on ZooKeeper/etcd/Consul/DNS/whatever. Since no fault tolerance is built in, things like retries and load balancing are left as an exercise for the reader.
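To make that concrete, here's a minimal Go sketch of the kind of glue you currently have to write yourself; the hard-coded endpoint list stands in for whatever a ZooKeeper/etcd/Consul/DNS lookup would return, and the actual RPC call is stubbed out:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // In real life these would come from ZooKeeper/etcd/Consul/DNS.
    var endpoints = []string{"10.0.0.1:50051", "10.0.0.2:50051"}

    // callWithRetry picks a random endpoint per attempt and backs off
    // exponentially: the fault tolerance gRPC doesn't give you yet.
    func callWithRetry(attempts int, call func(addr string) error) error {
        var err error
        for i := 0; i < attempts; i++ {
            addr := endpoints[rand.Intn(len(endpoints))]
            if err = call(addr); err == nil {
                return nil
            }
            time.Sleep(time.Duration(1<<uint(i)) * 100 * time.Millisecond)
        }
        return fmt.Errorf("all %d attempts failed, last error: %v", attempts, err)
    }

    func main() {
        err := callWithRetry(3, func(addr string) error {
            // A real client would grpc.Dial(addr) and issue the RPC here.
            return errors.New("connection refused")
        })
        fmt.Println(err)
    }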




I think your frustration with the release process reflects Google's weakness in this area. Inside Google, everything is built and statically linked from the moving head of a single gigantic source code repository, so library releases, versions, and dependencies are not something the average Googler ever thinks about.


Retries and load balancing are on their way (expect them in a release cycle or two), and we're starting to figure out the discoverability part, especially on the client side.


Regarding the language-specific packages, we also have releases of the repository as a whole (https://github.com/grpc/grpc/releases), and the published per-language packages correspond to those releases.


That is good to know. Do you have any recommendations for a more mature alternative to gRPC (cross-platform communication with a high-speed encoding)? Thrift?


I've never used Thrift. One option I have evaluated is NATS [1], a non-persistent, distributed message queue written in Go, with client libs in all sorts of languages. It's extremely fast and supports RPC-style request/response.

You get load balancing and discoverability for free, since NATS distributes messages evenly across consumers in the same queue group: just fire up new consumers and they'll start receiving messages on the topics they subscribe to, and all a client needs is the host/port of the NATS server, which is the same for all parties. Couple that with some hand-coded serialization (Protobuf, MessagePack, or even JSON) and you have a fairly resilient, fault-tolerant RPC layer.
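For a sense of how little code that takes, here's a minimal sketch using the Go client; the "greet" subject and "workers" queue group are made-up names:

    package main

    import (
        "log"
        "time"

        "github.com/nats-io/nats.go"
    )

    func main() {
        nc, err := nats.Connect("nats://localhost:4222")
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Close()

        // Subscribers in the same queue group split the work: start more
        // of these processes and NATS load-balances across them for free.
        nc.QueueSubscribe("greet", "workers", func(m *nats.Msg) {
            // The payload could be Protobuf, MessagePack, JSON...
            nc.Publish(m.Reply, []byte("hello, "+string(m.Data)))
        })

        // RPC-style request/response with a timeout.
        resp, err := nc.Request("greet", []byte("world"), time.Second)
        if err != nil {
            log.Fatal(err)
        }
        log.Println(string(resp.Data))
    }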

You could trivially do streaming RPC if you handled the request/response matching yourself. If you look at the clients, NATS's RPC is handled entirely on the client side, using ordinary messages with a unique topic name generated for each request/response pair. Extending it to support multiple replies per request would be simple.
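Here's a sketch of that pattern with the Go client: subscribe to a unique inbox, send the request with that inbox as the reply subject, then keep reading replies until a timeout. Subject names are again made up.

    package main

    import (
        "log"
        "time"

        "github.com/nats-io/nats.go"
    )

    func main() {
        nc, err := nats.Connect(nats.DefaultURL)
        if err != nil {
            log.Fatal(err)
        }
        defer nc.Close()

        // Server side: nothing stops you from replying more than once.
        nc.Subscribe("stream", func(m *nats.Msg) {
            for i := 0; i < 3; i++ {
                nc.Publish(m.Reply, []byte("chunk"))
            }
        })

        // Client side: the same trick the clients use internally for RPC,
        // done by hand so we can accept multiple replies per request.
        inbox := nats.NewInbox()
        sub, err := nc.SubscribeSync(inbox)
        if err != nil {
            log.Fatal(err)
        }
        nc.PublishRequest("stream", inbox, []byte("start"))

        for {
            msg, err := sub.NextMsg(time.Second)
            if err != nil {
                break // timeout ends the stream
            }
            log.Println(string(msg.Data))
        }
    }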

There are other, more feature-rich message queues, of course, such as RabbitMQ. NATS' advantage is that it's extremely simple.

Another option is ZeroMQ, but it's a bit lower-level and doesn't solve the discoverability part. You'll end up writing much more glue for each client and server.

[1] http://nats.io



