jadeklerk's comments

+1 to this. You have to realize that over a billion users use Gmail. You can't possibly imagine that prioritizing the 0.0000000001% of users who know about something like filtering on custom headers is remotely good decision-making, weighed against all the other work being done that helps the majority of users.


I have Gmail at work, and I can't filter on list ID in the header.

Gmail is not just simple users; it's also corporations paying for it.


It's not like 99% of corporate users know about headers and header filters either.


But you can. Isn't the list name filter the list ID? It works for me.


I'm a fan and daily user of golang, but I thought his understanding seemed very solid and his reasoning concise.


A decade of growth...? Where were you in 2008?


How long ago was 2008? It's 2018, so about ten years. Now, what's another word for ten years? It's on the tip of my tongue.


I said almost a decade.

Take a look at the 10-year S&P 500 chart:

http://www.macrotrends.net/2488/sp500-10-year-daily-chart

It bottomed out in early March 2009. The period from March 2009 to right now can roughly be described as "growth." We describe it as "growth" because the overall movement is upwards. 9 years and 1 month is "almost a decade" because a decade is 10 years and 9 is pretty close to 10.


Import paths are nicer with a flat structure. E.g. the difference between import "github.com/whomever/orm" and import "github.com/whomever/orm/fiz/bang/whiz/resolvers".


Because you didn't want to set an env var?


The problem is not the env var: whether it was set explicitly or left at its default, you still have to clone the project https://github.com/foo/bar.git at ~/go/src/github.com/foo/bar, name the imports github.com/foo/bar accordingly, and run the go commands from there. I used to (automatically) just export GOPATH="${PWD}/.gopath", but that stopped working around 1.8 or 1.9.

People† at large just want to clone https://github.com/foo/bar.git wherever they see fit, like directly at ~/Workspace/bar, ~/work/bar, ~/projects/contrib/bar or even /tmp/bar.

† "People" includes CI systems that expect the typical "clone and cd straight into" thing to work, resulting in a lot of boilerplate and/or symlink hacks to work around such expectations.


> Because you didn't want to set an env var?

You haven't needed to set a GOPATH since 1.8, which was released over a year ago (we're now at 1.10). Since 1.8, the Go toolchain will use a default GOPATH; the environment variable is only needed as an override.


GOPATH has a default: $HOME/go on Unix, %USERPROFILE%\go on Windows.

But on CI servers it can be awkward.
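For what it's worth, you can check the effective value (default or override) with:

    go env GOPATH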



You mentioned problems with gRPC, but I think every one of your problems is with protobuf. Is that correct?

Also, regarding point 3, I'm confused with two things:

- You want "free form" data, but you're talking about protos in the context of Go. How would you define this "free form" data in Go?

- You explain that "free form structured data is important for systems that accept foreign data ... where the schema isn't known". Why are you using protobufs for this use case? Protobufs are specifically meant to make the schema known and have it enforced by serialization.


True, but gRPC inherits these problems as it's based on Protobuf.

As for free-form data, it should be representable as map[string]interface{}. Our specific use case is a document store that stores documents on behalf of clients. The format of the documents cannot be known by the store, but the API to the store is gRPC. We also want documents to be able to contain higher-level types such as dates, but we're forced to use google.protobuf.Value for this and treat dates as strings, since Value cannot contain Proto types.

(Our next step is probably to model this explicitly, by defining our own Value message that uses "oneof" to represent all the possible types we need to support, and then using a map of these. But it would be nicer if Protobuf had first-class support.)
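To illustrate, the self-defined Value would look roughly like this (a sketch only; all message and field names are made up for the example):

    syntax = "proto3";

    import "google/protobuf/timestamp.proto";

    // Illustrative self-describing value type. oneof can't hold
    // repeated or map fields directly, hence the wrapper messages.
    message Value {
      oneof kind {
        bool bool_value = 1;
        int64 int_value = 2;
        double double_value = 3;
        string string_value = 4;
        google.protobuf.Timestamp date_value = 5;  // first-class dates
        ListValue list_value = 6;
        Document document_value = 7;
      }
    }

    message ListValue {
      repeated Value values = 1;
    }

    message Document {
      map<string, Value> fields = 1;
    }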


Can I ask why you want JSON with gRPC? The benefits of protobuf are tremendous, with little to no downside.


On the other hand, plain URLs with JSON are much easier to work with without writing any code. You can do everything you want with curl from the shell, and often an API allows doing almost anything from a browser (Elasticsearch comes to mind). The simplicity of it all comes in handy when you want to do something trivial — load a small piece of data into the server, do some diagnostics, run some ad-hoc queries, etc. — without really wanting to write a program.
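For example, an ad-hoc Elasticsearch query is a one-liner (the index name and query are illustrative):

    # Query straight from the shell; no client code needed.
    curl -s 'http://localhost:9200/logs/_search?q=status:500&pretty'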

Debugging with lower-level tools like strace and tcpdump is also something that's trivial with JSON-over-HTTP/1.1, but virtually impossible with gRPC. (I mean, you could grab the payload and run it through gRPC deserialization, but would you?)

I'm a big fan of gRPC, but it is pretty top-heavy — lots of code to accomplish relatively little. If you have a language that can build client bindings dynamically from .proto files without recompilation, that would ease things a lot, but if you're using Go, for example, the bar is pretty high going from zero to productive.



To complement the grpc_cli recommendation, I've been using grpcc for 1-2 years now: https://github.com/njpatel/grpcc
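For the record, grpc_cli gives you a curl-like workflow against a reflection-enabled server (the address, method, and payload here are illustrative):

    # List services, then call a method, using server reflection.
    grpc_cli ls localhost:50051
    grpc_cli call localhost:50051 SayHello "name: 'world'"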


I think the only RPC mechanism I've been happy with, one that required little work and didn't constantly get in the way, was Stubby, the precursor to gRPC used inside Google.

For a few years inside Google I experienced zero discussions about almost any aspect of RPC. It took a trivial amount of time to implement interfaces, clients, and servers in multiple languages, and it was trivial to understand the interface of, and implement a client for, other people's code.

I didn't necessarily like everything in Stubby, but I absolutely loved not needing to have pointless discussions about RPC mechanisms or protocol design.

Since I left Google, anything even remotely resembling RPC (including REST) has been an utter waste of time, mostly spent bickering over this crap solution or that one, often with people who don't care about the same things you care about.

REST is a crap solution in my eyes because it invites absolutely endless discussions on an endless list of subtopics, from the dogmatic/fundamentalist HATEOAS end of the spectrum to the RPC-using-HTTP-and-JSON-and-let's-call-it-REST camp. Not to mention that on top of that you need to have an IDL and toolchain discussion. (Of course, none of the toolchains or ways to describe interfaces are very good. In fact, they all suck, in part because attention is spread across so many efforts that none of them gets the job done.)

I have yet to see an IDL that works better than a commented .proto file from a "get stuff done" point of view.

I completely understand where you are coming from when it comes to having human readable wire format. For 20 years I was a strong believer in the same, and for some systems I still believe in human readable formats.

But RPC and RPC-like mechanisms are no longer among them. RPC is for computers to talk to each other, not for humans trying to manually repair stuff.

(I'm a pragmatist, so I'm allowed to both change my mind and have seemingly inconsistent opinions :-))

For RPC you should encourage the creation of tools. If you need to look at the traffic manually: fine, make a Wireshark plugin or a proxy that can view calls in real time. That's annoying, but cheaper than going off and inventing yet another mechanism. And once it is done, it is done, and there's one more thing that is sane.

We should really encourage people to build tools so we can automate things and have more precise and predictable control over what we are doing, without having to reimplement parsing (which is what happens when people think they understand your protocol, and they often don't).

Also, make sure it works for a large enough set of implementation languages and understand how to work in mechanical sympathy with build systems. I don't care if Cap'n Proto is marginally better than Protobuf if it lacks decent support for languages I have to care about.

I have no idea how much time we wasted trying to get Thrift to work in a Java project that needed to build on Windows, Linux and OS X back in the day, but I was ready to strangle the makers of Thrift for not paying attention to this.

At this point I'm beyond caring about the design of RPC systems. I just want something that works for software development and doesn't have to be a discussion. Hence, I get annoyed every time I see a new RPC mechanism, instead of attempts to make one of the existing mechanisms work by making just one aspect of it a bit more sane, with a bit more empathy for programmers rather than for the egos of the protagonists of various libraries, frameworks and formats.


How does Stubby compare to gRPC?

I imagine part of the lack of friction around Stubby was that Google was the only consumer, and could maintain client and server bindings/tools for the strict subset of the languages that Google standardized on.


It was pretty similar, but gRPC is a bit simpler, since Stubby had a lot of other machinery for dealing with authorization and the like.

I wouldn't say the lack of friction was mostly due to Google being the only consumer. It was mostly because there was a clear path from A to B when you wanted to give something an RPC interface, and that path was made efficient.

Or at least more efficient than trying to use REST-like stuff in a large organization with lots of different teams using different technologies.

It also helped that it wasn't a democracy. You had to use it; if you didn't like that, you were free to leave. As a result, people focused more effort on making the tools better and making friction points go away.

In practical terms: we can spend weeks getting a REST-like interface to work with other projects, because everyone has an opinion on every bit of the design and everyone uses different, quirky libraries and tools. With Stubby at Google back then, it was mostly about defining the data structures and the RPC calls and discussing semantics; the mechanics were taken care of. This is far, far, far from the actual case for many other technologies.

(And while I appreciate HATEOAS as a design philosophy, and I've tried to make use of it several times, it just is not worth the effort. It takes too much time to do right and to get everyone on the same page. Most proponents are more keen on telling everyone how they are using it wrong than on writing good tools that actually help people use it right. There's very little empathy with the developer.)


We ran into problems where we had embedded Ruby and Python interpreters (Chef/SaltStack), which made it a big pain to ship new libraries. It was much easier to use the grpc-gateway (HTTP/JSON) for those clients and the generated gRPC bindings (HTTP2/proto) for services.
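To give a flavor, exposing a method through grpc-gateway is mostly one annotation in the .proto (a sketch; the service, method, and route are made up for the example):

    syntax = "proto3";

    import "google/api/annotations.proto";

    // Illustrative service: the same RPC is served natively over
    // gRPC and, via grpc-gateway, as plain HTTP/JSON.
    service NodeService {
      rpc GetNode(GetNodeRequest) returns (Node) {
        option (google.api.http) = {
          get: "/v1/nodes/{id}"
        };
      }
    }

    message GetNodeRequest {
      string id = 1;
    }

    message Node {
      string id = 1;
      string role = 2;
    }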


One reason is to be able to call a gRPC service from a web browser. Native JSON support makes that much easier.


There has recently been a lot of work toward a native gRPC client in the browser. It's not fully there yet, but it's looking really promising!

https://github.com/improbable-eng/grpc-web


Also, gRPC-Web is coming along:

https://github.com/grpc/grpc-web


Link is dead


Fantastic to see open-source alternatives. At a previous job, I was repeatedly frustrated by AWS Lambda's closed, proprietary environment.


I've used both, and ECS was definitely the bigger pain. I deployed to and managed an ECS cluster for 8 months; the lack of local replicability (which sucks for testing and CI/CD), the opaque and limited management options, and the lack of community interest and discussion all turned us off.

I think hosted Kubernetes was easily the better option for us.

