The State of Go (golang.org)
231 points by xkarga00 on Feb 14, 2015 | 263 comments


Edit: This comment was written in reply to this slide from the original submission: https://talks.golang.org/2015/state-of-go.slide#7

--------------

It seems like these people simply don't understand Github very well.

    Can only view diffs on a single page (can be very slow).
    Cannot compare differences between patch sets.
    Accepting a patch creates a "merge commit" (ugly repo history).
Don't use the merge button; just add the requester's repo as a remote to yours and use your familiar tools. If a fast-forward merge is possible, doing one and pushing will also close the PR.

    Comments are sent as they are written; you cannot "draft" comments.
How is that different from pull requests via emails? (Also, on the website itself the comments can be edited.)

    To create a patch one must fork the repository publicly
    (weird and unnecessary).
What exactly is weird about that? It makes it possible for the requester to craft their changes with full control, without requiring upstream to give them write access. This point is just entirely nonsensical.

    In general, pull request culture is not about code review.
A strong claim, but one without any justification, and which is, in my experience, as far from the truth as possible.

Edit:

In light of their complaints about the need to fork, i have to say that their current contribution process can in its entirety be described as weird, unnecessary and baroque:

https://golang.org/doc/contribute.html


> Don't use the merge button, just add the requester's repo as a remote to yours

Actually, you don't need to do that. GitHub provides a special pulls remote "namespace" on the upstream repo, so you can add it as a fetch pattern to your .git/config like so:

    [remote "upstream"]
      url = https://github.com/neovim/neovim.git
      fetch = +refs/heads/*:refs/remotes/upstream/*
      fetch = +refs/pull/*/head:refs/pull/upstream/*
Then when you `git fetch --all`, you will have ALL pull requests available in your local repo in the local pull/ namespace. To check out PR #42:

    git checkout -b foo refs/pull/upstream/42


Wow, you are my new favorite person. What I was going to say was that the GP was correct in that it just takes a bit more work on the maintainer's part to keep the history clean, but even that is unnecessary, apparently :-)


Good tip.

It's interesting that other systems have a nicer implementation of this.

In Phabricator, it's:

  $ arc patch D12345
and it checks out revision 12345 into an appropriately named branch.


Yes! I've been wanting to do this and didn't know it was this easy.


That is great, and i thank you for bringing that to my, and everyone else's attention! :)


This is... amazing. Thank you so much.


is this documented somewhere on github?


Yeah, I have to google the precise path every time I need it: https://help.github.com/articles/checking-out-pull-requests-...


WOW, this has been missing from my life for way too long!

Thank you :D


You rock!


>> Comments are sent as they are written; you cannot "draft" comments.

> How is that different from pull requests via emails? (Also, on the website itself the comments can be edited.)

With github PRs, notifications are sent as soon as you write your first comment; in systems like gerrit and rietveld (and email) they are not sent until the reviewer chooses to send them. This leads to either awkward interactions if you start replying to comments while the reviewer is still reviewing, or unresponsiveness if everyone introduces hysteresis to avoid this situation.

This is IMHO the biggest problem with Github PRs; I don't necessarily agree with the Go team's decision to abandon the PR system but on smaller projects I have pushed for rietveld over PRs for this reason.


My favorite are the comments like:

    1:12pm   line 33: Why are you doing this?
    1:13pm   line 33: I see, sorry, ignore my earlier comment.
A system that lets you draft comments and then send them out in a batch can avoid a bunch of noise. On the other hand, though, people who aren't expecting it can get stuck with draft comments they don't send out.


I think we have the technology to show that a piece of text is a draft and hasn't yet been sent.

Imho, the end user should be able to choose between live commenting vs draft+send.


Usually if this happens, it's a good hint that this piece of code could use a good comment. Not always, of course.


Half your post is saying we should just use different tools, but we are: we're using Gerrit, which lets us work the way we want.

On the contributor instructions: note that if you just want to use plain git (and not our git-codereview tool) then you can stop at "Register with Gerrit."

We created the tool to provide a more familiar review process for the people that used our previous system.

If we took your advice and used offline diff tools, etc, we would need a similar page explaining how to do all of that. None of this convenience comes without an up front cost.


Thanks for the direct answer. I think my stance wasn't clear enough, so i'll reword what i wrote in another comment here.

My issues are two-fold:

You could have just said on that slide: "We want a central code review system, so people do not have to learn Git. Github isn't terrible, but Gerrit is much better."

Instead you ended up putting up a list of points that make github seem like some kind of fatally flawed thing, while frankly putting people off with inaccuracies/subjectivities.

Secondly, by not allowing contributions via github you're forcing people to learn something else. Most developers experienced with Git will also be very familiar with Github. You're telling those people to instead go and learn something else. That will result in some people deciding it's not worth the trouble. I'm fully aware it's up to you to decide whether you're willing to pay that price, but personally i find it a bit odd that you can't simply do both.

And as for, apparently, most of the documentation on your contribute page being safely ignorable: If that is truly the case, i recommend rewriting that to make it obvious, because right now it's anything but. :)

If the last bit is the only good thing that comes out of this, then i'll be happy.


If they need to learn something new, they will. It's part of being a programmer. For me, github's way of doing things is way more flawed than gerrit's.


> What exactly is weird about that?

I have no idea what they specifically mean, but I can tell you that the github requirement that PRs come from public repositories on github is something that bothers me occasionally. There are lots of reasons why I may not want my github fork to be public, or may not want to have a github fork at all but do want to contribute to a repo hosted there. This is a distributed version control system we are talking about, after all.

As for their claim about pull request culture, again I'm not sure what they specifically mean, but I find the github code review tools to be very rudimentary and suspect that lots of other people with experience with more sophisticated code review workflows feel the same way. Github is a great service for some things but it certainly is not centrally about code review.


Honest curiosity here. Given that there's nothing stopping you from deleting your fork after the PR is done, what reasons can you name for this:

    There are lots of reasons why I may not want my github
    fork to be public or I don't want to have a github fork
    at all


I also am scared off by the public aspect, and find it a very awkward requirement. If there was an option to create a private branch on Github, I'd probably use it often. I'd love if there was an easy way to submit a quick patch using only the Github interface. As it is, I've clicked once in my life on the fork button, realized that it created something visible to the outside world, and have never used it again. Instead, I make a local checkout, and generate patch files.

It's difficult for me to explain why I feel this way. The original stigma of 'fork' is definitely part of it, but most of it is just my distaste for visibility: it feels horribly immodest to associate myself with a project with which I probably have only a passing association. I can't defend my attitude, but it's probably worth noting that people like me (and apparently the author of the Go article) will be dissuaded if public affiliation is a requirement.


Now this is an interesting answer and i thank you for it. Yours isn't a made-up problem in search of a solution: you have a real reason why github's publicity can actually be a bit of a hindrance.

And i am even more grateful because it's truly a way i have not yet considered. See, for me github is a tool that i approach as dispassionately as my garage door opener. It is a thing with which i personally solve problems. Those problems are broken software or broken documentation. I use github to fix things. And i'm almost always doing so with a sense of nearly full confidence that my issues or PRs will either fix a problem, or allow me to gain information so i can fix it. The only doubt i have is when i recognize the maintainer is not very skilled or otherwise mentally a little off the beaten path and might need some convincing.

The least concern i ever have is "someone might see what i'm doing", which is why i've ended up with 210 forks in my account. I didn't think that concern could ever be a thing, since i started using Github long after i last had reason to feel truly self-conscious about the code or documentation i create; you reminded me of the time before that point, so i can understand you now.

Maybe with time it will get better for you, maybe not. Please keep sending emails with patch files if that's what you feel most comfortable with.

And thank you. :)


I think that it's a little bit of a problem with a lot of the modern "social web" mindset; it all implies that you should feel comfortable doing everything in public, with the whole world watching you.

The problem is, there are people for whom this really doesn't work well. Some might just personally feel uncomfortable about it. Some might be concerned about future career prospects. Some might be women, who are worried about online harassment. Some might be people with stalkers, who are trying to avoid any kind of traceable online public trail.

This whole "share all the things" mentality leads to somewhat creepy exposure of everyone's private lives to governments, corporations, the general public (which can act as a mob on occasion), and also specific private individuals who you might be trying to avoid exposing things to.


So, if I understand correctly, it's the user-oriented approach of Github that puts you off? Gitorious is project-oriented; does that suit you more?


The simplest reason is if what I was contributing was done on my employer's time and they have strict IP restrictions. It is much, much easier to vet a single changeset than it is to a) verify a whole repo history and b) coordinate around my public/private github identities.


I'm not sure I understand. Vetting a changeset is basically the same thing as verifying the repo. Just look at the hash of repo A (your clone), check that it matches B (upstream), and check that your repo A + "patch" matches the pull request.

What am I missing?


Branches and the ref log.


I was hoping for more reasons, since the very first and simple one is usually the one that has the most and simplest workarounds. You could for example make a patch with git on your local machine and attach that to an issue.


And then I have to follow a completely different workflow and inconvenience the maintainers.


If PRs were an instantaneous thing, I don't think it would be a problem. However, if you've ever looked at a network graph of a repo that requires rigorous code review or doesn't have a diligent maintainer, some of those forks can sit in limbo for a really long time. And, as more pull requests pile up, it's very difficult as a user to figure out which version I should be using for my code to run correctly.

I just ran into this issue the other day when I was using Node with a lot of dependencies that still haven't been patched for issues that cropped up in OSX Yosemite.


> it's very difficult as a user to figure out which version I should be using for my code to run correctly.

I've rarely had trouble with assuming master (or ideally a tagged release) on the original repo (not a fork) is the one I should be using. Assume all forks are forks.

In any cases where this hasn't been true, it's been clear to me the blame is due to poor release management (or poor communication of release management, which is the same thing), rather than somehow the fault of github's PR or forking system. Although you can hypothetically argue that certain UI's encourage poor release management and others support it, I personally have not seen this to be an issue in github's UI.


    There are lots of reasons why I may not want my github
    fork to be public or I don't want to have a github fork
    at all
The authors of the Lua language develop in private. They like to experiment freely with ideas, many of which never see the light of day.

They said if people saw what they were doing in real time, it would probably cause mass panic among the community. (Oh no! You're removing feature X?)


Github didn't always distinguish well between repos you'd substantially authored and long-lived off-hand pull request forks. This made it annoying as a showcase of your work. Fortunately, they've made things much better now.


Creating the public fork and pushing to it is among the last steps of my contribution process. At that point you're making your changes public anyway, so I don't understand this concern at all.


> Don't use the merge button, just

This is the smell of git plumbing again. Don't use the obvious UX that's been presented to you, do some other workflow that doesn't appear in the documentation.


Git != github.

(Does this really need to be said?)


Sadly it does. There is probably a significant number of developers who have never used git without Github.

As a funny aside, when Github breaks the thing to say is "I wish one day someone would invent a decentralized version control system" ;-)


For many developers the only reason we use git is github. I for one much prefer mercurial but to collaborate with others (and now for my job) github is a de facto standard.


Well, it depends on what documentation you mean. The alternative is using standard git commands that are in the git documentation, which is exactly what OP's alternative presumably involves?

(But I am entirely sympathetic to the opinion that git itself exposes too much plumbing and has a pretty baroque end-user interface for doing certain things. And it's also certainly legit to wish or suggest that _github_'s UI worked differently than it does, although the merge commits don't really bother me, and some people prefer them, it's a point of some contention).


I see you've been downvoted and I think unfairly. I agree with you though.

Looking back at arguments people make about how to "do it properly", it looks like something is broken. Git is broken, maybe Github is broken, documentation, UI, marketing. Something is though. When an obvious UI element is there, seemingly designed to do merges, and then everyone says "no, no, do this other thing", like send emails, then rebase here on top of that, make a ref pattern in your ~/.gitconfig ...


But really... stop using the merge button.


Just out of curiosity, have you ever used a tool such as Gerrit or Atlassian Stash? There are different mindsets, and both have their advantages.


I must admit i have not, and to make clear: I am not taking offense to their implication that Github is worse at some things than other tools (it is). My post above is merely objecting to the opinions that were borne out of some lack of knowledge (which is acceptable), yet are posited as facts (which is not).


Mm, I'm not sure. These are slides, presumably designed to go alongside a talk, which could have fleshed out the points better. If you consider them as being summaries of what a speaker's saying, it makes much more sense.

Fwiw, of the six bullets, I see three which are clearly subjective, one which is a clear advantage of gerrit, and two which I don't really know enough about to refute.

When it comes down to build systems, I think it's very easy to spend a lot of time moving sideways or backwards - and if the tool you're moving to has deficiencies compared with what you had before, it can be very frustrating. Certainly, if the tool people want you to move to offers few advantages over your current infrastructure, a quick dismissal is reasonable.

Getting set up with their current contribution system might seem a little clunky, but I don't think it's particularly hard to do, and definitely seems like a 'run-once' thing. Once it's set up, it seems to integrate into a workflow well.

Their contributors will have their current workflows set up nicely - and unless github's issues system has compelling advantages, it's definitely not worth them switching due to the temporary loss in productivity.


I'll concur that the GitHub code-review UX is atrocious. Having used a small number of other code review tools, I can't fathom how GitHub's tooling is so brain-damaged.

It does a few cute and clever things, but I totally agree with this slide deck's gripes. Email and GitHub are not really compatible.


These are all common issues with people who came from a different VCS and didn't learn what was different about git. They just learn enough to reach parity with the "everything is linearly developed" workflow.

When you live with that assumption, things like merge commits and pull requests seem silly and overkill since you live in the "patch is developed/reviewed in isolation" world.


It's not exactly the first time the Go team has shown that they haven't spent much time researching current practices, theory, research, and tools in the field of software engineering.

Everything about the language screams of coming from minds who stopped learning new things in the late 90s.


I think you mean s/researching/adopting/, since you are only referring to the outcome, not the study or decision making that arrived at the outcome. The Go team generally knows about modern things; we deliberately choose not to use them when we don't think they suit Go.


This is so colossally misguided a comment it's hard to know where to begin. So I won't.


See my comment about "not wanting to learn".

Happy to engage in a discussion about modern software engineering.

Are you?


The comments here represent one of the best examples of bikeshedding I've ever seen. The deep details of go are too complex for some people to really have an opinion on but god damn does everyone want to get their voices heard about github!


Agreed. However, the original submission called attention to that slide specifically, not the whole State of Go presentation. So people kept on topic :)


What do you expect when someone injects pretty strong opinions into a slide and presents them as factual in a major project like this?

What do you think would happen if there was a slide that mentioned that the culture around UTF was mainly about misogyny so they weren't going to support it?


Right? Instead of discussing the rest of the content of the slides, let's all discuss one slide.

Out of context these comments read like they're in response to a poorly written article with an inflammatory title like "Why GitHub sucks".


Yeah, I'd prefer all those comments to be about the bootstrapping and GC (they can even keep the same tone, it'd be funny at least).


Personal opinion here, but I much prefer merge commits to fast-forwarded commits. With a merge commit, there's one commit to revert if something breaks, and one commit per PR to step through with git bisect, and more importantly it maintains history, which to my eyes is more useful than a "pretty" repository.

Drafting comments... if you want to draft comments, can't you do that in a separate editor? Better than relying on the browser as an editor.

I agree, though, the public repository requirement for making a PR is a bit awkward.


This is actually one of the things I miss about SVN: in addition to a record of the commit being generated on a branch, there's a record of the commit being accepted onto trunk, and when and why.

Rust has an integration robot that does all the merges into master, that records who reviewed it, what the link to the PR (and the code review on that PR! it exists!) was, etc. It was a little weird as a complete newcomer not to see humans in the `git log --first-parent` view, but I really like it now, because if I want to know why some commit was merged the way it was, that discussion is recorded nicely. (The merge commit has the subject/body of the PR, which doesn't need to match the subject/body of any commit in the PR.) Compare with, like, the Linux kernel, where the best you can do is Google for LKML threads with the same subject line. There's even a convention of [PATCH 0/10] for summaries, but those summaries are nowhere to be seen in the git repository.

https://github.com/rust-lang/rust/commits/master


> which to my eyes is more useful than a "pretty" repository.

A pretty history is extremely important when you need to attract new contributors to your repository. I've maintained extremely clean git logs and extremely nasty ones too. New contributors are immediately turned off in the latter case, because a good developer will generally git log extremely early when discovering a new project.


It is a false dichotomy. You can have a dirty repository history with fast-forward merges, and you can have a clean repository history with explicit merges. As I wrote in a sibling comment, feature-branch commits should be rebased to look clean and meaningful (squashed in some cases if needed) but then merged into the main/master branch using the --no-ff flag in order to generate a merge commit.


Here's my specific issue with that: even in such a merge it would be possible for someone to hide a change in the merge commit, and due to the nature of git it is quite difficult to figure out with confidence whether a merge commit contains additional changes or not. I much prefer the rebase + merge --no-ff approach over the common "just merge whatever the fuck whenever", but due to the possibility of hidden details in merge commits, i prefer a linear history.


Who is doing the merge commit? In every project I've contributed to, that's the automatic part by the code review system. If you can't trust that, then you are screwed in any workflow.

If you're talking about merges into an outstanding pull request, then that's a non-issue because any changes will still show up in the diff against the target branch/repo.


It's extremely important if you want to attract cretins who favor "git history prettiness" over code quality.


Opinions do vary a lot. I iterate over history logs quite a lot, and IMHO nothing beats a clean repository. Again, IMO merge commits have no reason to exist whatsoever.


> I much prefer merge commits as opposed to fast-forwarded commits

That's cool, but it's still constraining a flexible system away from someone else's preferences. I prefer "--no-ff" myself too, but think that's irrelevant.


For integration branches it is nice to have cleanly rebased changesets which are then merged with "--no-ff" option so it generates an explicit merge commit.


You're leaving out the alternative, which is the important part.

If you pull rebase, then that merge commit will be whatever the author of the commit decides it should be.

If they currently have ten files open, the merge commit will be ten files, regardless of how connected these files are, which is a lot of noise. With a pull-rebase, when they are ready to commit, they will decide to break it down nicely into 2+3+3+2 files.

The only person who should decide how to merge their files should be the person that modified these files, not git.


What is the better alternative Go is using?

"Can only view diffs on a single page"

You don't need to use GitHub to view diffs.

"To create a patch one must fork the repository publicly (weird and unnecessary)."

I don't think it's weird.

"Accepting a patch creates a "merge commit" (ugly repo history)."

You don't need to have a merge commit, although I don't think that creates an ugly repo history.

"In general, pull request culture is not about code review."

I have no idea what this means.


> "In general, pull request culture is not about code review."

Companies/organizations have different requirements for code reviews. Yes, you can review code in pull requests, but in my opinion, it was not built for code reviews. At my previous company, we had a very rigorous code review process, which included pre- and post-commit code reviews. We used Collaborator [1] and ReviewBoard [2] for code reviews, and I totally understand why the Go team would use Gerrit over pull requests for code review.

[1] http://smartbear.com/product/collaborator/overview/ [2] https://www.reviewboard.org/


> "In general, pull request culture is not about code review." > I have no idea what this means.

Me either. To me, the entire point of pull requests is about triggering (code) reviews. If I don't want to bother with review, why not just grant direct push access and skip the rubber stamp ceremony?


Pull request culture is about enabling distributed collaboration, i.e. I don't have push permission to your repo and you don't have push permission to mine, but we can still collaborate by sending each other pull requests and sharing our work that way. It's all about collaboration while not trusting the other person with write access to your repo.


> It's all about collaboration while not trusting the other person with write access to your repo.

If I don't trust someone enough to grant write access to my repo, I certainly don't trust them enough to rubber stamp their pull requests.


Nor should you. You treat their pull requests just as you should any code review: you study them and decide whether to approve (accept) them or not.


They're probably using Gerrit.

Although I agree, I don't think merge commits are ugly. I think people coming from SVN/CVS where history is strictly linear have this obsession with keeping it that way.

In fact I find lots of developers just have an obsession with "clean" history, and fetishes for particular tools. It baffles me.


> I think people coming from SVN/CVS where history is strictly linear have this obsession with keeping it that way.

This has absolutely nothing to do with those, especially since they didn't even allow cleaning up histories. There is one very simple reason for why some developers prefer a linear history of master:

It makes debugging very easy.

With branch merges, especially when the branch lines cross, or the merge is an octopus merge, the complexity of the code necessitating inspection to find the root cause of a bug straight-up explodes. Meanwhile with a linear history it's not only easy, but automatable to find a commit that breaks a thing.

I do realize that you may not yet have had the displeasure of being in a situation to learn these things. Please feel free to consider yourself fortunate, but please also do try to understand that the things i just wrote are in fact simple observations of reality.


git bisect is a great tool for automating this sort of debugging in non-linear histories. That said I agree in general linear histories are much easier to deal with.


Linearized histories are much harder to find bugs in. You end up looking through revisions that never existed; in SVN you merge the remote history into your local history without it ever showing up as a merge. The explicit git approach tells you something much closer to the truth.

Your condescension is only hurting yourself.


Your building strawmen does not help your credibility much, nor your attempt to convince me of your view. "Revisions that never existed" do exist, and if you put a rebase of a branch on master without verifying the rebase, then you may end up in a mess, but it's your fault. Code review is a thing that is done for a reason.

Frankly, i find your style of argument through implication, and through trying to disregard something because it doesn't fit your definition of truth to be much more condescending than anything i wrote before.


> "Revisions that never existed" do exist

"Revision that were never built nor tested" is probably closer to the truth. Do you rewind through all your history and rebuild and retest every commit in a branch every time you rebase? Sure, they're similar, and you probably didn't mess up the merges. There's likely no subtle lingering bugs that QA's only going to catch weeks down the line. Probably.

> then you may end up in a mess, but it's your fault. Code review is a thing that is done for a reason.

Sure. But I've missed so many things in code reviews, had so many things in my own code missed in code reviews, and generally make mistakes and messes.

I do agree that simpler branch topology tend to be easier to reason about. But branches do have their advantages... and I've also found that preserving the original branch topology has helped me untangle merge mistakes that were missed, committed, and then only discovered a year or more later.


> Do you rewind through all your history and rebuild and retest every commit in a branch every time you rebase?

Yes.


I hope that's all automated with short build+test iteration times! That'd easily eat a day for minor feature branches for me - I can't afford that kind of turnaround time.


Local history is not a concept in SVN. There is only one history which is the same for everyone.


GitLab CEO here. I completely agree with your observation that people coming from SVN have a hard time adjusting to non-linear history. At GitLab we recommend embracing it: https://about.gitlab.com/2014/09/29/gitlab-flow/ But we also believe in the freedom to do what you want; there is a version of GitLab with automatic rebasing, to allow a clean linear history without having to rebase by hand: https://about.gitlab.com/2014/12/22/gitlab-7-6-and-ci-5-3-re...


One of the previous slides says they are using Gerrit.


> Can only view diffs on a single page (can be very slow).

I've seen GitHub take five seconds to render a 100K line diff. In my experience all of the other tools I've used, including some of the ones listed, can take longer to render individual file sections of such a diff. It's fast enough.

> Comments are sent as they are written; you cannot "draft" comments.

I'm a bit puzzled as to why you would need to draft comments inside the PR interface, especially in light of the fact that they can be edited.

> Cannot compare differences between patch sets.

You most certainly can. Just use branch/revision compare.

> To create a patch one must fork the repository publicly (weird and unnecessary).

Also immaterial.

> Accepting a patch creates a "merge commit" (ugly repo history).

This comes closest to being a legitimate complaint. Because GitHub emphasizes recognition of contributors, it doesn't let you rewrite the commits in the PR as you merge them with the merge button, but there are multiple ways to merge on the command line that avoid the merge commit and play nice with the PR.

> In general, pull request culture is not about code review.

WTF?

Sounds like a strong case of NIH.


> I'm a bit puzzled as to why you would need to draft comments inside the PR interface, especially in light of the fact that they can be edited.

This scenario happens often: I read through a change, making comments as I go. Then I reach some part of the change and realise "Oh, that explains why they did that in that other file!" So I go back and delete or alter my comments.

In Gerrit or Rietveld, the reviewee never sees those earlier comments.

On GitHub, the reviewee has already received the comment notifications and started responding before I have a chance to make the changes. The reviewee wastes time responding to questions to which I already know the answer. It's clunky, inefficient, and unnecessary.

It seems like you haven't used a tool like Gerrit or Rietveld. You should check them out.


A lot of people commenting here probably haven't had the experience of working with mature pre-submit code review tools. It's funny how much of an opinion people seem to have about things they don't know about.

I'm shocked at how bad GitHub's PR review UI is given how much funding they have had for so many years. Even abandoned side projects like Rietveld have vastly better review UIs and workflows for larger patch sets.


I don't see a problem. Google people use what their NIH-filter-bubble tells them and the rest of the world uses Git(Hub).

Everyone is happy.


I probably sounded overly dismissive of github. Which probably isn't fair. But I'll leave my post un-edited for posterity.

I use github every day for work. There are lots of things it gets right. But if you want to work on a project where you more or less have a central repository, take contributions from external and internal contributors, and have a strict policy of pre-submit reviews for individual commits to master that vary in size and complexity, then you want a tool that is 'code review centric'. That is, something that supports comment drafts, back and forth exchanges across many files, and notions of iterations as the reviewee responds to feedback. Github is lacking in this regard, for all the reasons the Go team pointed out. I mean... they just recently shipped side by side diffs!

The main thrust of my comment was to point out that lots of people in this thread are criticizing the Go team's choice to use Gerrit instead of Github, and the points they make seem to stem from an ignorance of both tools like Gerrit, and of workflows that work well that aren't pull requests. It's a bit of "what you see is all there is" where all they've seen is Github. And statements like "github is good enough" seem overly dismissive of the decisions of a lot of very smart people on the Go team.

For folks that are familiar with Gerrit and still poopoo their decision to use it. Well... we can agree to disagree :).

I'm rooting for Github. I want to see it get better so I can stop running a separate code review tool for my own projects! But it's got a ways to go still.


@piotrkaminski Comment nesting seemed to run out. So replying to myself.

Our team is currently using Rietveld. But I've used Gerrit in the past, as well as internal tools of the same flavor back when I was at Google.

I don't particularly love Rietveld. But it's simple to maintain and does the job. That being said, I'm genuinely looking forward to one day being able to just use Github for this.


Heh, that's almost exactly like me: used internal tools at Google, brought up a Rietveld instance after I left. Except that I got frustrated with Rietveld and built https://reviewable.io -- you might want to check it out. :)

FullStory looks awesome, BTW, I just wish I could afford it.


Shoot me an email and let's see what we can do.

Also, I'll definitely have to check out Reviewable!


I'm curious -- which code review tool are you using for your projects? Gerrit, or something else?


You might also want to check out https://reviewable.io. It's got most of the useful features of Rietveld (and, to a lesser degree, Gerrit), but with a UI that isn't stuck in the '90s and one-click integration with your GitHub project. Draft comments, threaded discussions that don't disappear, explicit tracking of resolution, quick diffs between (multi-commit) revisions, support for rebasing/merging without losing review history, etc. It trades off the deep customization possible with Gerrit, but is still much better than GitHub's PRs (IMHO -- I'm the author).


Thanks for replying directly. I have; I've used Gerrit and Phabricator (which also does inline comment draft keeping) a lot, and I've clicked around Rietveld. I just never have seen much use for the comment draft feature; keeping comments in draft is not what I naturally do. I can see where others might like that, so it could be a nice to have (though you can also use your browser to keep draft comments).

I think GitHub and to a lesser extent Phabricator have enormous UI/UX upsides over Gerrit and Rietveld, which perpetually look like internal tools that never underwent proper UX review. What's more important is that GitHub has done so much for open source software and has achieved such critical mass that a decision to not use it now starts to also shut you off from a population of developers for whom the activation energy is too high.


You're talking about GitHub commit comments, right? Can you not simply click to comment, compose your comments one by one, and then not send them immediately? You can send them all together after you finish reviewing the entire commit.

Although it would be nice to have a 'Send All Comments' button at the bottom of the commit page so you didn't have to click through them one by one.


> Better bindings for calling Go from Java.

If that is available for general Java code (i.e. uses JNI) and not just for Android, that could be really huge for Go. Writing performance-sensitive low-level code in Java is still fairly painful, whereas Go still isn't great for writing big programs. I can imagine (for example) Hadoop, Lucene/Elasticsearch and PrestoDB all using this.


You jest?

Go is GC'd just like the JVM. The only possible benefit -- even if Go catches up at runtime -- is the compact form of memory objects in Go vs Java objects. But then again, if you are writing such systems (in either language) you are very likely to spend quite a lot of time in 'unsafe' land.


> if you are writing such systems (in either language) you are very likely to spend quite a lot of time in 'unsafe' land.

I don't think that's necessarily true. Go does a much better job than Java at letting you manage your allocations and re-use memory. You can write tight, performance-critical code in Go without resorting to 'unsafe'; it just requires care, as it does in any language.
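
To make that concrete, here's a minimal sketch (names made up, assuming a hot path that needs scratch space) of the kind of reuse I mean; a sync.Pool of preallocated buffers keeps the steady state nearly allocation-free:

    package perf

    import "sync"

    // bufPool hands out reusable 64 KB scratch buffers, so the hot path
    // allocates (almost) nothing and puts little pressure on the GC.
    var bufPool = sync.Pool{
        New: func() interface{} { return make([]byte, 64<<10) },
    }

    func process(data []byte) {
        buf := bufPool.Get().([]byte) // reuse an existing buffer when one is free
        defer bufPool.Put(buf)        // hand it back for the next caller
        copy(buf, data)               // ... do the real work in buf ...
    }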


I don't know if I completely agree with this. To write truly performance-critical Go, you end up throwing away many of the language's qualities. Channels are slow, defers add overhead, interfaces add overhead (e.g. I2T), etc.

Don't get me wrong, I like Go, but in my (and others I work with) experience, it's not the right choice for performance-critical systems.
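
Don't take my word for it, either; the overhead is easy to measure yourself with the standard testing package. A rough sketch (benchmark names are mine; run with `go test -bench .`):

    package perf_test

    import (
        "sync"
        "testing"
    )

    // Cost of handing one item over an unbuffered channel.
    func BenchmarkChannelSend(b *testing.B) {
        ch := make(chan int)
        go func() {
            for range ch {
            }
        }()
        for i := 0; i < b.N; i++ {
            ch <- i
        }
        close(ch)
    }

    // Baseline: the same hand-off guarded by a plain mutex.
    func BenchmarkMutexIncrement(b *testing.B) {
        var mu sync.Mutex
        n := 0
        for i := 0; i < b.N; i++ {
            mu.Lock()
            n++
            mu.Unlock()
        }
        _ = n
    }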


I mean, Go is (or was) in the loop whenever YouTube hit its MySQL clusters (vitess) and on the server providing downloads of Chrome and so forth (dl.google.com). CloudFlare has it sitting in the middle of every request for some sites (in the form of Railgun, their delta compression for pages). I'm not exactly a perf ninja and got some Go code packing Wikipedia history dumps at >100 MB/s. Dropbox uses Go for, their words, "our performance-critical backends" (https://blogs.dropbox.com/tech/2014/07/open-sourcing-our-go-...). So folks manage to do heavy lifting with it.

I see "I couldn't possibly use X for performance reasons" here a lot, as an almost immediate response about a wide range of different tools, and two things come up in my mind: 1) you can always set the standard arbitrarily high. If you're doing AAA shoot-'em-up games or something, yes, use something else. 2) you're often not as good at tools that're new or different. It's possible you could go further with just a few more tricks about profiling or using free pools or whatever, or a little more info about how your code executes or what the runtime does. FWIW, if you hit a specific wall that's a problem for your app, folks on golang-nuts (or StackOverflow, where I've hung out sometimes) are often happy to try and help.

Hope this is more helpful than fussy. Mostly just don't want folks to be discouraged into thinking certain things are impossible when in some cases they're really being done in production out there.


No one doubts that Go is production ready at this point. But the performance requirements of a systems programming language are just very different from the requirements for a web server or applications language.

From my point of view, and the industry I'm in, the latency requirements of a web server stack look completely laughable. That's not to say there aren't challenges in a web stack, because many web servers have to handle orders of magnitude more I/O than we do, and efficient load balancing over a distributed system is extremely difficult and I am fortunate to rarely have to think about that kind of thing. But a requirement like "99% of requests need to have a response time of 200ms or lower" looks like fiddlesticks compared to a requirement like "a round-trip-time of more than 100us will be a large problem and get noticed".

My point is that Google's, CloudFlare's, and Dropbox's "performance-critical backends" don't require the same things as some other workloads. As a more mundane example, the latency requirements of Google's search engine are probably less stringent than the latency requirements of your text editor.

None of this should be taken to mean Go is slow or can't handle 99% of the world's performance requirements. But you can't just say "Go is high performance and fits in all of these company's critical paths. Try harder and it will work." Sometimes it might just not be cut out for it.


> I mean, Go is (or was) in the loop whenever YouTube hit its MySQL clusters (vitess) and on the server providing downloads of Chrome and so forth (dl.google.com).

Not as impressive as it sounds. Most of it is cached in memory anyway...


Agreed with setting the standard arbitrarily high. It depends on what problem you're trying to solve—just like deciding when to use any other tool. Context matters.

Where I work, we've been trying to build some super-fast systems and things like the GC and even calling interface methods matter. We open sourced some data structures that show just how optimized we're trying to get (https://github.com/Workiva/go-datastructures), but in hindsight, Go might not have been the right tool for the problem.


The JVM is in use in a lot more performance-critical paths than Go. Go just seems new, so it is easy to list the 2-3 places that use it at scale.


The JVM is awesome for long running programs with minimal memory constraints. For transient programs, not so much.


I disagree with "channels are slow". They may be faster than you think. Defers needn't add as much overhead as they do today; this can be fixed.

I think it really depends on your definition of "performance-critical". I agree Go isn't suitable for all performance-critical tasks, but it covers a vast swathe of them quite comfortably.


> this can be fixed.

I believe this and am counting on it. Go2.0 and beyond should be a solid choice.

You should also note that it is entirely understood that more mature tech, e.g. the JVM, has had the benefit of multibillion-dollar investment by Sun, IBM, Oracle, etc.

I feel it is regrettable that (imo valid and reasonable) criticism of what is currently not up to par with this tech always seemingly requires a disclaimer that "I love Go". I have been using this language since the day it was released. I know it fairly well. I like it. But excessive hype and sensitivity around it is frankly somewhat irritating.

peace out and happy v. day Go <3


Valentines day was yesterday for me. You wouldn't see me spending my time responding here if it were today!


Not an uncommon view: https://twitter.com/kellabyte/status/564531804837654528

(That's not me, to be clear.)

We all like Go. Just wish we could discuss these matters without unduly raising temperatures. "It's just code".


Sommers is an absolute beast, does totally astounding work and is an asset to the community. But even when we say "performance-critical" most of us don't mean getting 3M req/sec from one box like she did w/Haywire (https://twitter.com/kellabyte/status/547063455048404992). In other words, pushing the hardware absolutely to its limits is a different game (Sommers' game :) ) than simply getting a tricky part of your app performing at the level you need. And, confusingly, "high-performance" or "performance-critical" are terms you might use to describe either task.

For context, the person whose presentation triggered Kelly's comment, MIT scholar Neha Narula, was experimenting with an 80 core machine, and she replied "it's kind of amazing I could push Go that far," and "to be fair they weren't really optimizing for my use case :)" -- her whole presentation is on YouTube at https://www.youtube.com/watch?v=Mbg1COjhsJU (some wild stuff--she got improvements for >48-core machines pushed into the Go GC) and those replies I quoted are at https://twitter.com/neha/status/564569903219634176 .


> But even when we say "performance-critical" most of us don't mean getting 3M req/sec from one box like she did w/Haywire

We are not all writing web apps. Some of us are in machine learning, NLP, signal processing, etc., where squeezing out as much performance as possible does matter.

In those fields Go is still weak. No autovectorization, no OpenMP, no direct CUDA integration, GC overhead, etc. Luckily, this can often be worked around since cgo is so good. One can write performance-intensive parts in C or C++, compile with the latest gcc or clang, and link it with the Go code that drives it. This is an often understated advantage of Go compared to Java, where JNI calls are expensive. But the Go camp always advocates porting everything to Go (because of fast compile times).


Indeed, we agree that Go is suitable for some things people call high-performance but not others. I do not advocate porting everything to Go. Not only is it not always a good fit, but there's a very high bar to clear to justify revamping any perfectly good codebase. (I use Python at work and wouldn't advocate for porting anything.)


What does OpenMP do for you that Go's built-in concurrency doesn't?


OpenMP is for parallelisation, not concurrency. OpenMP is designed for embarrassingly parallel loops that you typically encounter in numeric code, where parallelization is as simple as tagging a for loop with a pragma and (if necessary) marking variables that should be synchronized and/or critical sections.

E.g. you can make the libsvm library parallelized and scale up to many cores by adding two pragma statements.

I have a Go package that attempts to bring some of this functionality to Go [1]. But it's definitely not the same as having OpenMP.

[1] http://godoc.org/github.com/danieldk/par
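
To give a feel for the difference, here's roughly what you write by hand in Go to get what `#pragma omp parallel for` gives you in one line (a sketch only; parallelFor is a made-up helper, not my package's API):

    package par

    import (
        "runtime"
        "sync"
    )

    // parallelFor runs body(i) for every i in [0, n) across all CPUs,
    // a hand-rolled equivalent of OpenMP's parallel-for pragma.
    func parallelFor(n int, body func(i int)) {
        workers := runtime.NumCPU()
        chunk := (n + workers - 1) / workers // ceil(n / workers)
        var wg sync.WaitGroup
        for w := 0; w < workers; w++ {
            lo, hi := w*chunk, (w+1)*chunk
            if hi > n {
                hi = n
            }
            wg.Add(1)
            go func(lo, hi int) {
                defer wg.Done()
                for i := lo; i < hi; i++ {
                    body(i)
                }
            }(lo, hi)
        }
        wg.Wait()
    }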


If you read her tweets she states C++ would have been a better option, though.


I don't believe that this is true. You use the exact same patterns and tricks in both for perf-critical code: off-GC primitives and object pools rule the day. What are you considering a "much better" option available in Go?

(I considered stack-allocated structs in Go, but honestly that doesn't strike me as a particularly major thing; it may be slightly more terse, but fundamentally the same behavior.)


Go gives you data structures with fewer pointers than Java, which helps cut down on GC pressure.

    type point struct {
        x int16
        y int16
    }
    points := make([]point, 1e6)
The value points uses 4 MB of memory and contains one pointer, not a million.
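
For contrast, the pointer-heavy layout that Java-style objects force on you looks like this in Go (reusing the point type above), and it's exactly what the GC has to trace:

    // One million separately heap-allocated objects: the slice now holds
    // a million pointers, and the collector must visit every one of them.
    pointers := make([]*point, 1e6)
    for i := range pointers {
        pointers[i] = &point{x: 1, y: 2}
    }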


Sure. Now do this in Java (I have no idea why I'm getting downvoted for this thread because I think this stuff is fairly straightforward and nobody's actually making an argument to the contrary, but whatever):

  short[] x = new short[1000000];
  short[] y = new short[1000000];
4MB of shorts. Now you can use the argument of locality of reference, to be sure--but you're already throwing that out the window by keeping around a million items.

This is why I'm saying that in what I would call the most perf-critical cases, value-type structs are a convenience much more than a tool to realize significant benefits. They are nice-to-haves. They don't make your code magicfast.


Values ("off-GC primitives?") are first class-citizens in Go. Stack-allocated structs have much more value than you think.


More value than I think? I'm pretty sure that I know exactly their value. I'll maintain that stack-allocated structs are a nice-to-have on top of the sort of stack-allocated primitives that Java provides; I'd like to have those structs as a convenience when I write Java, which is why I alluded to terseness in my prior post. But there's very little (or should be, I suppose, depending on implementation) performance difference to speak of between a struct and parallel declaration of the same values.


You said:

> I considered stack-allocated structs in Go, but honestly that doesn't strike me as a particularly major thing;

But for me they're one of the major ways in which I control memory use in Go programs. So, yeah, I think you underestimate them. No condescension implied. Apologies if it came across that way.

I think we probably agree more than we disagree.


Yeah, I edited it out because after I took a breath I figured you didn't mean it that way. =) Hugs all around.

But I really don't think it's a major thing, because there's no meaningful difference, in terms of performance, between "struct { int x; int y; } A" and "int Ax; int Ay;". I'm suspicious of claims that Go, as a language fundamentally not that different from a JVM language, is going to yield significant performance benefits. I'm not saying they aren't a nice convenience (value types are one of the things I love about C# when I'm writing code under MonoGame), I'm saying that I can't think of a reasonable way, barring bugs or short-sighted implementations, that they make for faster code.


If it were not a major thing, JVM architects would not be deliberating on this issue in such detail. http://cr.openjdk.java.net/~jrose/values/values-0.html

Java value types are going to be in Java 10, i.e. years away. So maybe it is not a big deal for you, but JVM developers think it is going to be a big deal for a lot of performance-sensitive code.

This is despite the fact that the most advanced GCs are available in Java.

See the overhead of Java data structures:

https://www.cs.virginia.edu/kim/publicity/pldi09tutorials/me...

Go does not have this heavy overhead.


Are you aware that Java already has value types? Do you realize that the JVM has primitive collections, which completely ignore all of the boxed types that are, yes, a performance suck, via Trove? Nobody writing perf-conscious Java is using bleeping HashMap<K, V> or even ArrayList<T>. They're using Trove's TIntIntMap, TLongSet, etc. and getting the. exact. same. thing you're saying they're not.

Or they're using arrays. And here's the one difference that I have acknowledged since my first post, but there's a but to it: there is one material performance-relevant difference between parallel arrays-of-members and arrays-of-structs, and that's locality of reference. But any multithreaded (or cooperative, for that matter) system of nontrivial size is already chucking cache coherency out the window to the point where I'm very, very skeptical of the claims of magicfastness because two int members are next to one another. If you can prove that cache coherency is killing you and you need to run more consistently to avoid eviction, then you can push the problem into a minimal process without much going on and `nice` it to keep your cache lines for longer, but you're still in the land of Things That Are Not Made Easier In Go, Either.

Those JVM architects are considering structs--using the CLR term for "stack allocated aggregate types" because they're already there and I've done this side-by-side comparison in that environment, which is as close a one to the JVM as exists that supports them--as a convenience and, in rare cases and in extremity, a legitimate performance improvement. A good idea to have. But it's such a corner case that even they feel comfortable pushing it, and its ramifications, to Java 10. (If you want to see why it's a corner case: again, go look at the CLR and how rarely structs are used. I'm almost as comfortable on the CLR as the JVM, and I make video games. I use structs. I've never, ever seen them in the wild in somebody else's non-library code, where you can encapsulate your perf grossness anyway.)

Go still has the heaviest performance overhead of all: having a garbage collector in the first place. The same things that cause memory pressure in Java cause memory pressure in Go. Which is what I am saying and getting downvoted for my troubles--that there is so very little daylight between the Go VM (yes, it's compiled to native code, it still has a frigging VM, go look at its bogus ART with ALWAYS CAPITALIZED INSTRUCTIONS because Rob Pike and company think not-actually-assembly programming is a "fraught endeavor" and you can see it yourself) and the JVM that claims about performance are real, real sketchy.

I've been down this road. I've looked. I don't see it. Linking to corner-case proposals (again: good ones, but marginal) from Java architects who are in the unenviable boat of trying to create bullet-point equivalence between the JVM and the CLR--that's not actually an argument.


Thanks for the tip on the Trove library. Even though I hate the JavaXML language, the JVM is really hard to beat for long running processes on a dedicated server. And the kids gotta eat, so JavaXML it is...

(have an upvote from an otherwise Java lang hater for being convincing)

Aside: I propose renaming the primary language for enterprise apps to just one word, "JavaXML" (zha VOX em el), since the two are essentially inseparable anyway. I wish the other JVM languages would get more traction in BigCo development.


Please don't mistake any of this as defense of Java, I think the Java language is an inexpressive slog (though for my money superior to Go, the lack of generics really is that much of a problem when you write modular and composable code where the HTTP server is not the IPC layer). I use Scala or JRuby when I use the JVM, except when I need to know exactly what the compiler is going to be spitting out, like when writing performance-sensitive code. This is rare, and I think the last time I wrote any Java at all was in a Google interview where they wanted me to juggle byte buffers. I use Ruby for things I don't care about or where a type system actively works against me. I use Scala for things I do care about or where a type system can help me. I use C++ for things where a garbage collector is antithetical to my purposes. (I have some hope that Rust will be a good candidate to replace both of the latter.)

Though, food for thought: I have written a fairly decent amount of Java in the past, and in what I would consider "modern practice" it has very little to do with XML. With Play, Dropwizard, and similar, you have no obligation to put up with something like Spring herpderp anymore. Or even Maven; SBT or Gradle are fine.


Anyway--what grinds my gears, and why I posted at all, is that I have noticed in the Go community--not, I hasten to mention, enneff, as he said I think he and I are probably more in the same place than not--a really weird unwillingness to credit other environments for anything, whether from stubbornness or ignorance. If I can speculate--and I can--I think that comes from two places. One is that many Go advocates originally came from Python and Ruby, which are both former new-hotness ecosystems that themselves don't encourage breadth or depth in the programming languages space; in the Ruby community at least Java is often held up as this inscrutable "enterprise" thing that can't possibly have any real benefits, and I feel like that's leaked into Go. The other is the cultural origination of Go in Plan 9--Keith Wesolowski's views on the second-system effects of Plan 9 and the epistemic closure and cult-of-personality effect of its developers and community are good ones and I don't need to repeat them here.

Right now, to me, Go is a mishmash of Java 1.3 and Java 1.4, right down to the overuse of green threads and the too-simple type system that forces you to trade safety when you want code reuse. And that's totally fine for people who like it. But it's nothing special, and the breathless hype around it from people who plainly haven't gotten their hands dirty with what came before makes me want to boil my head. Or their heads. After all, I like my head.


If you're mmaping a file and processing the majority of it, you really are typically doing that in C today if you care about performance. You can get acceptable performance in Java with ByteBuffers, but because of the lack of value types it doesn't feel like Java any more. Go should be able to get much closer to C's performance, while still being closer to idiomatic Go code. And Go can plug in small pieces of C code / assembler for the really performance critical stuff.

All this can be done with JNI, but JNI is so un-fun that I can see Go making big inroads here.
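
For the curious, a minimal sketch of the mmap-and-scan pattern in Go, assuming Linux and the syscall package (error handling abbreviated):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        f, err := os.Open("data.bin")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        fi, err := f.Stat()
        if err != nil {
            panic(err)
        }

        // Map the file read-only; data is a []byte backed by the
        // page cache, so nothing is copied onto the Go heap.
        data, err := syscall.Mmap(int(f.Fd()), 0, int(fi.Size()),
            syscall.PROT_READ, syscall.MAP_SHARED)
        if err != nil {
            panic(err)
        }
        defer syscall.Munmap(data)

        // Process the mapping like any other []byte.
        lines := 0
        for _, b := range data {
            if b == '\n' {
                lines++
            }
        }
        fmt.Println("lines:", lines)
    }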


People use mmapped files all the time in performance critical Java. Typically using the unsafe packages. In fact, if you want to communicate with C/assembler level things, this is the way people who do fast Java do it because JNI is very slow.

If you want to use an abstraction around ByteBuffers that feels like value types, take a look at the Javolution structs.

As a counter to your argument, C# has had value types for quite a while and has not achieved Java levels of performance, so those in and of themselves aren't enough. Mostly that's because if you are doing any allocation in fast code, you are doing it wrong regardless of language. Even in C, object pools and arena allocation are standard for performance-critical work. The gap between Java and C (or really C++) right now is almost entirely about control of the memory model, not allocation (that said, I'd love the JVM to have value types and am glad that Go started with them).
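
As a rough Go illustration of the object-pool idea (a sketch, not a benchmark; note that sync.Pool may be emptied at any GC):

    package main

    import "sync"

    // bufPool hands out reusable 4 KB scratch buffers so the hot
    // path only allocates when the pool happens to be empty.
    var bufPool = sync.Pool{
        New: func() interface{} { return make([]byte, 0, 4096) },
    }

    func process(data []byte) {
        buf := bufPool.Get().([]byte)[:0] // reuse capacity, reset length
        buf = append(buf, data...)
        // ... do the actual work with buf ...
        bufPool.Put(buf) // hand it back for the next caller
    }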

If anything will allow Go to achieve better performance than Java, it's that it can incorporate the lessons learned from Java without the support burden. I do think the constrained nature of Go gives it a very good chance at impressive performance.


I don't think C# vs Java performance can always be directly compared. C#'s SuppressUnmanagedCodeSecurity attribute for P/Invoke interfaces, when you can use it, reduces C#'s call overhead to something like 20% over using a Managed C++ library as a bridge. If you don't need to pass complex numbers (doubles) it isn't bad at all. Just depends on your needs.

Then again, depending on your needs, being able to scale out or up may be far more important than raw performance characteristics.


That is exactly my point. Value types vs lack of value types is largely immaterial to the performance story.


JNI is un-fun, so JVM developers are working on the solution. I think Panama will arrive sooner than Go making any inroads.

http://openjdk.java.net/projects/panama/


Good thing. Though Go has made inroads into all kinds of Web / Cloud infrastructure, even though Java should have been the first (only?) choice, with production-grade application servers etc. already present. I think Panama shows Java was late to realize that easy access to native code is getting more important, even with better hardware.

Where I work, a ~16GB Java heap makes GC pauses huge and unpredictable. I think Java performance is great in benchmarks, but the way code is written in most enterprises, Java is hugely memory-hungry and slow.

IMO Java's position remains secure as long as management is on Java's side. Technical evaluations are limited to comparing different Java technologies, not Java vs non-Java technologies.


Just curious, but why would the same type of programmers that make Java "Enterprise" do better when given Go instead? Especially taking into account that Go's GC is not better than the Java ones, and you currently can't solve it by throwing money at it either, e.g. no Azul Zing or IBM Realtime.

Let's be honest here: seeing the popularity of Ruby a few years ago, we know that performance is not everything either. And knowing where Java was in 2000, we know the same ;) First languages need to be used, and then they get fast (even if it takes quite a while before that is true).


Doesn't Go have the potential to be faster than Java since it's compiled to native code (rather than compiled to byte code)?


A JIT has the potential to be faster than compiled native code, because it has better runtime information. Compiled code needs to cover all potential options, and this means more instructions to execute.

Say a variable's value is set through a command-line option. Compiled native code has to assume the value is dynamic, but a JIT can optimize it away, effectively hardcoding it for that particular invocation. The same applies to more complicated software: some configuration and invocation parameters are effectively static during a particular run, and JITs can capitalize on this fact.
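
To make that concrete, here is a tiny Go sketch (the names are made up for illustration) of the kind of branch an AOT compiler must keep, but which a JIT could specialize into a no-op after observing that the flag never changes:

    package main

    import (
        "flag"
        "fmt"
    )

    var verbose = flag.Bool("verbose", false, "enable verbose output")

    func debugLog(msg string) {
        if *verbose { // runtime check baked into the AOT-compiled code
            fmt.Println(msg)
        }
    }

    func main() {
        flag.Parse()
        debugLog("starting up") // a JIT could inline this away entirely
    }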

JITs also have a better chance to adapt to the exact hardware they're running on. Compiled code is forced to make one, or a limited number, of assumptions about the available CPU configuration.

In the end, both options are running compiled native code. JIT just does it a bit before running.

Of course current reality is the opposite, but the key word here is potential.


I'm glad you emphasized potential and mentioned reality.

One other aspect that has become increasingly important is power consumption and heat. Huge data centers now have to worry about enormous electricity consumption and keeping all the equipment cool. JIT code must do more work to compile (and recompile to optimize) on the fly which means more power and more heat.

On the consumer end, Android just switched to Ahead of Time compilation instead of JIT because its JIT performance wasn't that good and it required more power thus sucking battery life.


Same on Windows Phone. .NET is also compiled to native code ahead of time.


Thanks, that's a very clear (and thorough) explanation.


There are quite a few JVMs that allow compilation to native ahead of time.


Java is also compiled to native code, but only after it runs for a bit.


Android does AOT compiling with ART, it became the default in Lollipop.


Android does the right thing for a transient GUI client program on a device with little memory. HotSpot JIT does the right thing for a (long running) server program on big, dedicated, hardware.


Yeah, I dunno, the github PR 'culture' I've engaged in has often in fact been about code review. I don't see any problems with forking a repo publicly to make a PR (what is there to hide?), am not really bothered by merge commits (in some cases they are actually quite useful, and some people prefer them; it's a point of some contention), and I don't really understand what they mean by 'Comments are sent as they are written; you cannot "draft" comments'.

But to each their own -- I'm curious what alternative system (if any) of accepting patches they have. If it's emailed git patches on a listserv, then I would definitely find it a barrier to submitting patches, myself, compared to github PR's.

I think in general, github PRs have proven successful at soliciting code contributions from a wider field, which seems to be the goal of their UI (over command-line git itself). Of course, this can seem a downside too, as committers have to spend time dealing with those submissions.



Moreover, you can fork a repo, then work on the actual repo in whatever form you want. Most of my forks are just to maintain public branches, and I have any number of private or unfinished stuff on my home machine.


> In general, pull request culture is not about code review.

This is not true. There are many projects on GitHub which do extensive code reviews on pull requests. It may not be as nice as Gerrit for the type of project like Go (where you often have many iterations or the diffs are large). But for many other projects the UI that GitHub provides is sufficient (and arguably more efficient than Gerrit).


> where you often have many iterations or the diffs are large). But for many other projects the UI that GitHub provides is sufficient

GitHub is painful for non-trivial reviews. Biggest WTFs:

- No comment threading (or at least collapsing). On a PR with 100 comments[1] it is unlikely that those revisiting the thread need to see (and download, and render...) the first bazillion comments.

- Source "annotations" are lost after a force-push (why not keep around a read-only view of old comments? We have lost some valuable discussions on GH pull requests)

Yes, we try to keep PRs small. But they also need to be meaningful, and sometimes they require (many) more reworks than expected.

[1] https://github.com/neovim/neovim/pull/1820


> GitHub is painful for non-trivial reviews.

That may very well be the case. But note how you said for non-trivial reviews, whereas in the presentation about Go they said in general (see the line I quoted in my original comment). And I'd argue that the vast majority of pull requests on GitHub are simple ones which don't need much discussion, so in general the GitHub UI works just fine. I don't have any hard numbers to back up my claim.


> And I'd argue that the vast majority of pull requests on GitHub are simple ones which don't need much discussion, so in general the GitHub UI works just fine.

Right. And what you say validates my exact point: "pull request culture is not about code review." If all you want to do is cast your eye over it and click "merge", it works great. That's not how we work, though.


If you said "GitHub pull requests don't fit our culture" or "The GitHub pull request UI is insufficient for our needs" then it'd accept that. Both are perfectly valid reasons for preferring Gerrit. But placing this blanket statement about the whole "pull request culture" is just wrong.


We've spoken to GitHub about this. They're not happy with how PRs work, either. I stand by my statements.


I hope they are going to do something about it and someday see all the Go development happening on GitHub :)


> Source "annotations" are lost after a force-push (why not keep around a read-only view of old comments? We have lost some valuable discussions on GH pull requests)

Yes, that is incredibly annoying. I discovered that if you add your comments in the "Files changed" tab (which shows the diff of the entire pull request) instead of the "Commits" tab (which shows the diff commit-by-commit), then the comments aren't lost when you force-push.

Just FYI, might make life a bit easier if you're stuck with Github.


Wait. Force pushes? That's a terrible habit to get into; force pushes have decimated more than a few open source projects' repositories.

https://news.ycombinator.com/item?id=6713742


What? Pull requests should be on their own branches; if you want to keep one commit per PR but need to edit it, you must rebase it and thus force-push. It's a terrible habit if you're sharing your branch with others; it's completely normal if not.
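
Concretely, the one-commit-per-PR workflow looks something like this (the branch names are hypothetical):

    # rework the commit on your own PR branch
    git checkout my-feature
    git rebase -i upstream/master   # squash/edit down to one commit

    # rewrite the published branch; safe because only you work on it
    git push --force origin my-feature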


Force pushes to the PR fork. We have set receive.denyNonFastForwards=true on the upstream master.


For a code review tool more powerful than GitHub's but with a much better UI than Gerrit, check out https://reviewable.io (disclosure: my project). It integrates smoothly with GitHub and doesn't require setting up your own host.


Agreed. What I like most about GitHub pull requests is that they encourage splitting commits into smaller ones. Gerrit is focused on doing a change in one big commit.


Actually, the Gerrit workflow is pretty agnostic about whether you use one or many commits.

In general, it's poor form to send massive commits anyway. Those poor reviewers!


I'm late to the hate-parade but I did want to chime in to say how much I appreciate all the work the Go team has put into the language and tools. Go is a wonderful tool to get things done with. I recently rewrote a cross-platform `enterprise` app from Java (25k LOC) to Go (8k LOC) and saw improvements in readability, memory footprint, and overall quality. Some notes on my experiences so far:

Go binary size is a non-issue for most software. The Java rewrite I mentioned above went from an 80MB-or-so binary to an 8MB executable. That said, there have been a few occasions when I've really wanted to use Go for an embedded project, but couldn't due to its size.

I read somewhere, someone said of Go, "you'll come for the concurrency, but you'll stay for the interfaces." This is very true for me.

Generics: Go's red herring. Sure, there have been a handful of occasions where generics would have saved me some boilerplate, but it's not been a pain point for me.

Tooling, from fmt and vet to unit testing, is all first rate. However, I wish there were a better debugger option for Go. I know that gdb works with Go (with a great deal of difficulty if you develop on OS X), but I'm probably not alone when I say I really dislike GDB.

Overall, I've found the community to be friendly both online and in person.

As an aside, I've noticed much of the recent vitriol towards Go seems to come from the Rust crowd, which I think is too bad. I enjoy both. Languages are not a zero-sum game. Who knows, maybe the hate means Go has finally arrived.


There were some great new tools mentioned in there - like "callgraph" for plotting the call graph.

I can't, however, find any instructions on how to download and install them. Anyone able to help me?


    go get golang.org/x/tools/cmd/callgraph
Or similar for any of these commands: http://godoc.org/golang.org/x/tools/cmd
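
Once installed it lands in $GOPATH/bin. Usage is roughly as follows -- the -algo flag here is from memory, so treat it as an assumption and check `callgraph -help`:

    # rta is one of the supported call-graph construction algorithms
    callgraph -algo=rta your/package/import/path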


This presentation looks horrible on the iPhone screen. I wonder if they couldn't spend a few minutes to point phone users to a working version or at least not lock the viewport size so mobile users could pinch-zoom out.


Each slide was okay when viewed horizontally, but advancing then shows the damn address bar. I had to annoyingly advance, rotate, rotate back, and then read. After a while I just gave up on the titles...


It's hijacking Alt+Left/Alt+Right (back/forward in the browser) too. Nasty.


I tried to mitigate some of the pains with GitHub code review UI by writing a helper userscript to track the progress of big code reviews. Some of the items listed in the link can be fixed in this way.

The idea is to expand/collapse files, and to store the progress of the review and the collapse status of the files in the browser's local storage, so you can stop and resume at any time (I also have in mind serializing this stuff into a hash in the URL so you can forward it to another machine, for instance, and recreate the progress there).

I work on it every now and then and have a number of items in the backlog. If someone is interested in contributing I'll be happy to accept pull requests (sic!) :)

https://github.com/jakub-g/gh-code-review-assistant


> In general, pull request culture is not about code review.

A lot of people here seem to insist that this is even remotely true. It is not. Look through Docker's pull requests on GitHub; look at their CI hooks.


Gerrit still produces merge commits unless it's configured to cherry-pick onto master, which is insane because you are changing the commit SHA at that point.


That's exactly what we do, and we need to change the commit hash because we want to include the review information in the commit message.

Not sure why this is "insane."


It's essentially rewriting history. As a contributor it's nice to know a commit went in exactly as you wrote it, which is not the case when the commit hash changes.

Git is powerful because it was written with the ability to merge trees. The cherry pick workflow is throwing all of that in the trash. Why not use SVN at that point?

Because of cherry-picking in Gerrit, dependent patches are a nightmare to maintain. Say patch c depends on b, which depends on a. Now say that patch c requires a change that merged into master. Because you can't merge into your development branch, you have to rebase c AND b AND a. This really pissed off the owners of b and a because it shows up as a new changeset and wipes out votes. God forbid you depend on two different patches that each have separate dependencies.

You can use the merge commit to add the review information if you want. Then you don't have to molest the code change commit.
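
A sketch of that alternative in plain git, with a hypothetical review URL, would look something like this:

    # keep the contributor's commit untouched; record review metadata
    # in the merge commit instead of rewriting the change
    git checkout master
    git merge --no-ff change-branch -m "Merge change 12345

    Reviewed-on: https://example.com/r/12345
    Reviewed-by: A. Reviewer"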


One nice thing about Git is it lets you choose your workflow.

Our general workflow for the Go project is to review single commits, and sometimes do major new work in feature branches. When we submit a single change we cherry-pick. When we merge trees, we create a merge commit.

We don't write commits that depend on other pending work. That's overly complicated (IMO) even if you always use merge commits.


It's not complicated at all if you use merge commits. That's exactly what you are doing with a feature branch. Failing to realize merge commits are useful is what makes them complicated.


It doesn't matter what the mechanism is; writing code that depends on code that changes is more complex than simply not doing that.


You can't "simply not do that" if your project is fast-moving. That's essentially the "you're holding it wrong" defense for a terrible design flaw.

Imagine you are developing a plugin framework for something and would like to develop a reference plugin at the same time to flesh out the API. Neither belongs as part of the same change but the plugin certainly depends on the framework. This is basically impossible in Gerrit because of the awful way dependencies work. The only way it can work is with a feature branch, which is basically giving up on Gerrit anyway and using git in the way it was intended.

Gerrit ultimately becomes a choke point on the throughput a given project can have, unless you have an extremely small set of contributors who can coordinate well (i.e. not a large open source project). Maybe this isn't a problem for Go since there is a high barrier to entry for contributors, but it's something to keep in mind.


The Go project is sufficiently modular that this isn't an issue for us.


Didn't you say that you used feature branches? If so, that was a case where Gerrit failed and you had to use git in an almost normal way.


How is that Gerrit failing? Gerrit is designed to support all kinds of workflows, just like Git. As far as I can tell, we're using our various tools the way they were intended. Just because it doesn't line up with your exact view on how Git should be used, doesn't mean we're doing something wrong (or "insane").

This is a boring conversation.


If it's boring, why were you trying to defend such a suboptimal use of git?


The "don't rewrite history" train has left a _long_ time ago, when mercurial and monotone were still relevant.


Does GitHub still not support "fast-forward only" commits? The "ugly merge" thing is easily avoided with a rebase before committing.


GitLab CEO here. Both GitHub and GitLab normally create a merge commit when accepting a merge request in the web UI. GitLab EE has a rebase feature where you can accept merge requests by automatically rebasing them just before merging; for more information see https://about.gitlab.com/2014/12/22/gitlab-7-6-and-ci-5-3-re...


I see this is down-voted (after being up-voted initially). I thought it was interesting that GitLab allows you to accept pull/merge requests without creating a merge commit. Should I not have included the url? Edit: Thanks for re-upvoting it


From the web interface, no, but you can close PRs with `git push` just fine.


I mean the option to explicitly block any changes that generate a merge (that isn't a simple fast forward). You can set this on a git repo you fully control, but IIRC you can't do this on GitHub. So yes, it works fine until someone accidentally hits the big green button and generates a "real" merge.
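
On a repo you fully control, the guard looks something like this (a sketch in plain git, since GitHub exposed no such setting at the time):

    # server side: reject any push that isn't a fast forward
    git config receive.denyNonFastForwards true

    # maintainer side: merge a PR branch only if no merge commit is needed
    git merge --ff-only pr-branch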


Breaking the mouse entirely probably isn't the greatest of frontend design patterns.


How is the mouse broken? I can

* See my mouse pointer

* Right-click and bring up the right-click menu

* Click on the link in the second slide

* Drag to select text

Chromium 40 on Linux


Besides the arrow keys, you can move forwards or backwards by clicking on the left or right edges of a slide.


The left and right buttons also do the trick.


You can also click on the vertical area between slides to go forward/back.


Also known as "easter egg navigation."


It's a tool for giving presentations, not necessarily for reading them.

But I've just sent a change to add some help text to these pages: https://go-review.googlesource.com/4910

edit: The change is now live. No more easter egg navigation. Yay!



> To create a patch one must fork the repository publicly (weird and unnecessary).

I think it's very fair to demand that a contributor sync and build the entire app before they're allowed to submit a patch.

Interestingly, I note the Go team says it's "unnecessary" but doesn't provide their alternative.


We expect contributors to sync and run all the tests. The public forking is the weird and unnecessary part.

Gerrit is the alternative used, and contributors have a full local clone of the git repo, with their commit, and that's private on their machine until they push it to Gerrit for review.


What is so disturbing about the "public" part that it's worth discarding the whole approach?


No-one said "disturbing". It's weird if you've come from an environment where you do your individual work in private, and only show it to people when it's ready to be committed/merged.

No-one said that that point alone is worth discarding the whole approach. You're nitpicking a single bullet point from around a half dozen points that stacked up.


[flagged]


This probably reveals more about you than anything else.


Thank you, I agree


Grow up.



It would seem no one got the joke that this is a pencil drawing by Renee French, illustrator of the Go Gopher.


I got it, and laughed :-)


[flagged]


Rob Pike has nothing to do with the Plan 9 port of Go. It has been entirely driven by the outside community. I don't think Rob has used Plan 9 in years.

> Most Googlers I've spoken to despise this guy's reactionary lordship over the project,

This is pure nonsense.


Google is a huge place. I don't pretend to know what everybody, most people, or even a sizeable fraction of people there think. But a significant majority of the people I have spoken with strongly disagree with the choice not to invest more in the type system. Take it with salt, but that is the noise I hear from my position on the outside. It may not match your experience, but it isn't nonsense. Just another datapoint.

On the flip side, I'm glad that the Plan 9 port was contributed by the outside community, and that my suspicions about a poor engineer being put up to overtly political work were untrue.


When first you say

> Most Googlers I've spoken to despise this guy's reactionary lordship over the project

and then water it down to

> a significant majority of the people I have spoken with strongly disagree with the choice not to invest more in the type system.

then it seems like you're editorialising. Bear in mind that you're talking about real people and real relationships. Fanning the flames of hatred is unnecessary at best and sociopathic at worst. (Yes, I know people do this on the Internet all the time. That doesn't make it right.)


Editorialization is a trivial truth of human expression. Real people really hate decisions that significantly increase the amount of work it takes them to express complexity. This particular influencer of the project happens to have been blamed fairly harshly by internal users for doing so. As someone who is partially responsible for the progress of this tool, it's probably a good idea to try to improve users' productivity rather than claim that it's nonsense for people to have strong opinions against its decisions.


Did anyone expect all commenters to be hard-facts-only journalists here? I come here for the rumors.

And this is the 3rd time in a few months that I've heard that Googlers generally don't like Go and don't use it. So, that's kind of interesting.

Anyway, I don't think you can call an activity sociopathic if the vast majority of humans take part in it. (Having opinions and spreading rumors didn't start with the Internet either.)


Lots of Googlers like Go and use it. I'm using it for a hobby project that I'm working on right now.


> I don't think you can call an activity sociopathic if the vast majority of humans take part in it.

If you think the vast majority of people go around saying negative shit about people they don't know, then you need to reevaluate your worldview.

Maybe you're experiencing some observation bias, as the people who do this are highly visible, but the reality is that most people do not relish publicly spreading negativity.

There's a reason why people become known as the "town gossip" and are looked down upon for it.


I don't think it's true that "Googlers generally don't like Go". Some like it, and some don't. From my vantage point, the majority (maybe 80% or so) who have actually tried to do real work in Go seem to like it.


> Most Googlers I've spoken to despise this guy's reactionary lordship over the project

Wat‽ Talk to more of us. From where I sit (ie. not related in any way, just an observer who happens to be a Googler) the go team is fantastic.


That's really nice of you to say. Thank you.


As a Xoogler with some exposure to Go, let me second:

* Go is an excellent tool for getting things done.

* Go is great for reading large amounts of other people's code.

* The Go team is truly impressive. I expect Go to only get better.


The Plan 9 support was contributed by other people. I'm sure that if there was nobody left to maintain it, it would have been dropped long ago.


> Where we're at in February 2015

Still producing 1.3M hello world executables.

I wonder if rewriting the linker from C to Go will be purely a rewrite, or whether they will start fixing it somehow.


http://golang.org/doc/faq#Why_is_my_trivial_program_such_a_l...

The linkers in the gc tool chain (5l, 6l, and 8l) do static linking. All Go binaries therefore include the Go run-time, along with the run-time type information necessary to support dynamic type checks, reflection, and even panic-time stack traces.

A simple C "hello, world" program compiled and linked statically using gcc on Linux is around 750 kB, including an implementation of printf. An equivalent Go program using fmt.Printf is around 1.9 MB, but that includes more powerful run-time support and type information.
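
If you want to reproduce the comparison locally, something along these lines works (sizes will vary by compiler version and platform):

    # Go side
    go build -o hello_go hello.go && du -h hello_go

    # C side, statically linked against glibc
    gcc -static -o hello_c hello.c && du -h hello_c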


> A simple C "hello, world" program compiled and linked statically using gcc on Linux is around 750 kB

    diet gcc -o hello hello.c; strip hello

2280 bytes on my system.

There are reasons why using glibc results in executables so big, and why it is tolerated (kind of). Those reasons hardly apply to a new language being actively developed. Yet said language produces executables almost twice the size.

"Run-time support and type information", why is it linked into a an executable that never allocates memory and does no introspection of any kind?


> "Run-time support and type information", why is it linked into a an executable that never allocates memory and does no introspection of any kind?

fmt.Print does use reflection.

Besides, bickering over the size of hello world is pretty pointless; better to compare the size of programs that actually do something.

We do recognise that Go binaries can and should be smaller, but probably not as small as you might hope.

https://github.com/golang/go/issues/6853


It is a valid point considering the lack of dynamic linking. Go (as is) strongly suggests having lots of "small programs" compose a larger (modular) system on a node. So those 1.9MBs do add up.


This makes me laugh, considering at Google we regularly deploy statically linked C++ programs that are two orders of magnitude larger.

"You call that a big binary? THIS..." etc


I was generating 7-15MB binaries out of Delphi in the late 90s (it had a similar kitchen-sink approach), and it simply wasn't an issue then, and it certainly isn't an issue now.

I'm actually racking my brain for a case where a 500kB vs 5MB binary would be a deal breaker; outside of embedded stuff I can't think of much.


Just as a minor point -

One of the major blockers to Clojure on Android is that the lack of tree-shaking/dead-code elimination makes for 10-second-plus startup times in most environments.


It matters when you have to download the file over a slow network.

"The ideal size is 10-15MB globally. Idea size for an app for tier 2/3 countries (like India) is below 5MB. 500MB+ is a non-starter. At 50MB+ the conversion rates fall off dramatically."

http://time.com/3589909/internet-next-billion-mobile/


Go binaries compress well in my experience. For example, I have a Go program that compiles to an 8.2 MB binary when compiled with gc 1.4.1. With bz2 compression, which takes about half a second on my box, it can be shrunk down to 2 MB.
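
For reference, the compression step is just the standard tool (the numbers above are my own measurements, not something you should expect exactly):

    # writes program.bz2 alongside the original binary
    bzip2 -k program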


If Google keeps Go away from Android as a first-class language, with only the Go team doing NDK-related support, it doesn't really matter.

I don't foresee any changes in this regard at Google I/O.


From what I hear you have the funds and resources to operate at that scale.


Part of the reason Go exists is because of such issues. Believe me, it's not desirable.

My point is, on typical machines today, even a 10mb binary is not an issue at all.


And this is a reason why, though I like the language, it's pretty useless for embedded work. Not all the world is lacking in resource constraints.


> Besides, bickering over the size of hello world is pretty pointless; better to compare the size of programs that actually do something.

It is not about the size of the hello world executable; that is just a symptom. A code smell, if you like. There is something badly broken in the dead code (or dead data) elimination area. And I hope that code is in fact dead, because if it is not, add code generation to the list of smelly things.

What I suspect I see here is a kind of C++ vtable problem built deep into the language design somewhere. And the reaction is, let's talk about large executables so that it would kinda become not so visible. Or maybe let's take a look at glibc, because glibc is definitely a paragon of clear design befitting a new language.

> fmt.Print does use reflection

There are two problems with this. The lesser one is why does it need reflection to print a string. The bigger one is why do I see about 600 reflect.* entries in the resulting ELF instead of a single one for the string type.


> The bigger one is why do I see about 600 reflect.* entries in the resulting ELF instead of a single one for the string type.

There is definitely work that can be done to improve dead code elimination in the Go tool chain. The transition to Go will make this easier to achieve.

> What I suspect I see here is a kind of C++ vtable problem built deep into the language design somewhere.

Don't suspect. Dig into the problem and make some informed commentary. Idly speculating on HN is just spreading FUD, and benefits no-one.

You should read about Go's implementation of interfaces. It's not the same as C++'s vtable issue. http://research.swtch.com/interfaces


> The lesser one is why does it need reflection to print a string.

It doesn't print just strings. It can print anything. http://golang.org/src/fmt/print.go?s=6420:6467#L221

After doing your research, you're welcome to submit a magical CL and PR that brings down the binary sizes, then you won't need to argue anymore.


> After doing your research, you're welcome to submit a magical CL and PR that brings down the binary sizes, then you won't need to argue anymore.

The original link is titled "The State of Go". The first thing I want to know about a state of a new language is whether it works. Then how well it works. Then, maybe, how to fix it and which VCS to use. There is an issue I think is within the range of these two questions, but it is not even mentioned there.

So, to make life a bit easier for people who, like me, expect that issue to be discussed first, I posted a comment summarizing (in my opinion) the state of Go.

My thoughts on how to fix it are hardly relevant to the current state of Go.

> It doesn't print just strings. It can print anything.

The fact that it is just a string should be statically (build-time) inferable in a strongly typed language. Reflection, at least as I understand it, implies run-time type information. So the question does make sense. Yes, I understand why it may be needed for a particular implementation of printf, which is why I called it a lesser issue.


> I posted a comment summarizing (in my opinion) the state of Go.

Not trying to be rude, but your uninformed opinion is less valuable than you think.


Wow, Googlers being Googlers.


> A simple C "hello, world" program compiled and linked statically using gcc

Statically. Care to check "ldd hello" of your binary?


"not a dynamic executable"

In case you wonder, that's dietlibc, which is typically built with no dynamic linking capabilities whatsoever.


Are you sure that fmt doesn't allocate memory or do introspection?


Yeah, what an outrage this is.

1.3 megabytes. That's like $0.00004 USD worth of hard drive space.

Does the go team think we are all rich or something?


Plenty of CPUs out there with cache sizes smaller than that, or other things running on the machine that would also like to use the cache.


The size of the binary is completely irrelevant when considering if the code fits in the CPU cache. What's important is the size of code that actually executes. Go binaries have huge DWARF tables, and a lot of the code is dead code.

The code that actually executes is not bloated. It's not the most efficient code in the world, because the compilers don't have an optimizer as advanced as gcc's, but it's not unreasonably large.


I think the point he was trying to make is that if a mere "hello world" produces a 1.3-megabyte executable, the file size of a fairly complicated program in Go will be significantly larger than the same program implemented in another language.


Which makes no sense anyway. The reason a hello world program in Go is large is that it must include the baseline runtime support that goes into any Go program. A 10-line program won't be 13MB.


That's like trying to predict the cost of a flight by cost per mile using a quote from SFO to SJC as a baseline.


I'm looking at an executable of a medium program I'm working on, probably around 5k LOC of my own code, using tons of standard library modules and linking to probably another 10-20k LOC of 3rd party libraries, all debug symbols in place. Weighs 5MB. I don't think this program in C++ would weigh much less.

Anyway it can be improved, but to me there are far more important things to be improved about Go than the binary size of small programs.


A dependency parser (in other words, a serious program) in Go, linking in some external dependencies and a C++ machine learning library (statically):

  % du -k eval/eval
  2040	eval/eval

  % strip eval/eval
  % du -k eval/eval
  1864	eval/eval
Don't just assume. Measure.
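
If you measure and still want smaller binaries, the gc linker can also drop debug info at link time (a sketch; exact savings vary by program):

    # -w drops DWARF debug info, -s drops the symbol table
    go build -ldflags="-w -s" .

After that, `strip` has little left to remove.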


You're right - it will be significantly larger. But then again, it doesn't depend on your system having all of the necessary libraries of the correct version and the overhead of dynamic linking.

It's a tradeoff, and worth it in my opinion.


That's not a reasoned point then.


Most of my career has been spent doing web programming, so maybe I'm ignorant, but if you have to build Go with a Go binary instead of C, doesn't this break OSS's value proposition? (Obviously I'd have to start with a C compiler, but that feels a bit purer in my mind, as I can start with a pretty bare-bones OS.)


I don't understand the basis of your question. What does the bootstrapping process for a compiler have to do with "OSS's value proposition?" (By OSS I'm assuming you mean "Open Source Software"?)

To build gcc, you need a C compiler. To build Go, you need a Go compiler. To compile anything you need to start with some kind of compiler.


I think bdcravens means that to install Go from source on, say, a new Linux box, we will have to download a binary version of Go first, even though we probably already have gcc.

What happens if this catches on, and PyPy replaces CPython, and other languages do the same?

Hassles for those who prefer to install from source, and potentially a lot of duplication of effort writing compiler backends in every language.


> What happens if this catches on

I don't understand your comment, because we already live in your nightmare scenario: http://en.wikipedia.org/wiki/Bootstrapping_%28compilers%29#L...

In practice, it isn't a big deal. When was the last time you worried about the language your compiler was written in?

> potentially a lot of duplication of effort writing compiler backends in every language.

That's also the state of things for every compiler that doesn't use LLVM or compile to another language.


I agree the installation issues are fairly trivial; they were just a response to the original comment.

But the duplication of effort (which won't end when the "GoGo" compiler is ready to replace cgo), and a potential freeze in Go while "GoGo" catches up, are more serious drawbacks.

Yes, that is a trade-off other compiler teams have made, and it may be the right choice for Go as well, but it should be (and no doubt has been) considered.


So you think all compilers should be written in C exclusively?


No, I was just pointing out a drawback. Of course there are advantages too.

In this particular case, though, I wonder why the Go team is planning to spend a lot of effort rewriting an already existing compiler. Rewriting a popular project from scratch can be a dangerous temptation.


They explain why in detail here: https://docs.google.com/document/d/1P3BLR31VA8cvLJLfMibSuTdw...

Also, you don't have to download a binary of Go to compile the latest Go. You can download the Go source back when it was compiled by C, compile it, and then compile the later, Go-sourced versions, if for any reason you need to go that far.



They want things in C unless they are tools for a language other than C, in which case they can be in that language.


I think Go has failed, compared to Rust:

1. Not memory safe across threads.

2. No generics support.

3. Error handling is painful compared to Rust's Result<T>, Option<T>, and try!.

4. The GC can't be disabled.

5. No RAII support; defer can be forgotten.

Lightweight threads (spawn) and channels (thread-safe FIFOs) already exist in Rust, which makes Go rather pointless. If you still see any advantage in Go, please tell me.


Go is stable, whereas Rust is not. Go also targets different use cases than Rust, so the two are not directly comparable.

1. Rust cannot guarantee multi-threaded memory safety in all cases. Care, as in Go, must still be taken.

2. I agree with you here; I think Go would be better with generics. But apparently most people don't have a problem with this.

3. Rust's and Go's error handling is based on the same principle: force the user to deal with errors instead of ignoring them. While Go's approach can be a little more verbose, in principle I don't see a big difference (see the sketch at the end of this comment).

4. A lot of people will argue that this is a good thing. Using a GC by default makes certain things easier, like writing data structures (especially immutable ones) without reaching for 'unsafe' code. Having a GC also means you don't have to mind memory fragmentation, as you do with manual malloc/free-based allocation.

5. RAII is great, but it loses some of its usefulness in a GC-based language. defer is, IMHO, a much better option than the solutions I've seen in other GC languages (Java's try-with-resources and C#'s using).

Just because you can do the same thing in language X doesn't make language Y obsolete.

The advantage of Go, for me, is less verbose code (implicit interfaces for the win) and a fantastic, stable, and huge standard library. I also like the strict compiler. And having a GC by default makes certain things easier; for most tasks I don't need the predictability you get from manual memory management.
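
For readers newer to Go, a tiny sketch of two of the points above -- errors as plain values you must handle, and implicit interface satisfaction (the names here are made up for illustration):

    package main

    import (
        "fmt"
        "os"
    )

    // Opener is satisfied by any type with a matching method set;
    // nothing ever declares "implements Opener".
    type Opener interface {
        Open(name string) (*os.File, error)
    }

    type diskOpener struct{}

    func (diskOpener) Open(name string) (*os.File, error) {
        return os.Open(name) // the error is a return value, not an exception
    }

    func main() {
        var o Opener = diskOpener{} // compiles because the method set matches
        f, err := o.Open("config.toml")
        if err != nil {
            fmt.Fprintln(os.Stderr, "open failed:", err)
            return
        }
        defer f.Close()
        fmt.Println("opened", f.Name())
    }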


While I do generally agree with what you've written, I do want to take issue with point (1). Rust is actually intended to guarantee multi-threaded memory safety in safe code. Period. If you can get it to behave otherwise, it's a bug in Rust.


True. What I really meant is that safe code can interact with unsafe code, and thus cannot guarantee everything works correctly. Of course, this is a bug in the unsafe code; my bad.


You are, of course, entitled to your opinion. But "go is failed" is a little strong, don't you think? Wouldn't it be more accurate, and less ornery, to say that you personally don't like Go?

For my part, I like Go because it offers a great, if nascent, alternative to PHP. That's the gist of it, anyway.


Just downvote if that makes you feel good, because your poor head can't prove Go is better anywhere :P



