
Sourcegraph is one piece of the puzzle, but there is still a long way to go to bring a Google-quality dev experience to the universe that exists outside of Big-G!

* Consistent build structure

* Rigorous code reviews

* Ginormous company-wide shared filesystem holding all code

* Point-click-drag report building

* BigQuery

I'm sure I'm missing additional awesome internal aspects of being a developer at Google, and some parts are also horrendously bad.




* Build farm consisting of (tens of?) thousands of servers

* Automated presubmit/bisect queues

* Well-built code review UI (unlike GitHub's)

On this topic, most companies will give up on “rigorous code reviews”.


I’d be curious what these super secret tools do. Most “rigorous code reviews” I’ve seen are junior devs fighting over “tabs vs spaces” or “doing a for loop vs a for each” - or something else borderline meaningless.

I’d love a tool that helped with context in a larger system-design way. Coming in cold on some project and being expected to know all the design decisions, and whether the code structure is drifting away from some higher-level goal, is where most of the problems I’ve seen come in.

I don’t see how any of the current tools help with that, and I somewhat doubt these secret tools do either - but would love to see them if they do.


One thing Google has going for it to make code review more efficient is the concept of a uniform per-language style guide. The style guides are absolute and comprehensive, meaning it’s not permitted to diverge from them, and equally importantly, that any style topics not covered by the style guide are not considered legitimate style issues to raise in code review.

Formatting can be almost entirely automated with automatic formatting tools plus pre-submit checks that verify the code is already formatted.
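
As a rough illustration (not Google's actual presubmit tooling), a check like this gates submission on the formatter having nothing left to do; I'm assuming gofmt here, but the same pattern works with clang-format, black, and friends:

    import subprocess
    import sys

    # gofmt -l prints the names of files whose formatting differs from
    # gofmt's output, without rewriting anything.
    unformatted = subprocess.run(
        ["gofmt", "-l", "."], capture_output=True, text=True
    ).stdout.split()

    if unformatted:
        print("Presubmit failed; please run gofmt on:", *unformatted, sep="\n  ")
        sys.exit(1)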

So for your two examples, this practice of adopting a style guide prevents them from arising. The “tabs vs spaces” argument is simply settled, because the style guide has an answer. The “for loop vs for-each loop” is either addressed by the style guide, in which case there is an answer, or it’s not, in which case it is not a legitimate style complaint one way or the other, and the reviewer is making a mistake by bringing it up.

https://google.github.io/styleguide/


On top of that, reviewing is considered an important and valuable skill. You have to demonstrate a deep understanding of the language and go through a months-long meta-review process before you can even start doing (language, as opposed to business logic) reviews, and being an authorized reviewer is a legit achievement when you're going for promo.

Basically: reviews are actually treated as important.


Yeah, sorry, I was just trying to pull something silly out of the air - we mostly do Go and gofmt (or clangd or whatever).

I was just trying, a bit flowerily, to highlight that most “in-depth code reviews” I’ve seen are very focused on surface-level things. I’d dig a tool that focused more on structural changes.


Well. One could argue that if you have to discuss these very deep topics in a PR, you’d better have brought them up before coding that stuff wrong in the first place. Oh, that is not considered agile, you say?

Discussing these deep topics is also socially very challenging, and you need some quite grown-up people to discuss them (that late, after the damage has been done) without awkward "but I spent all this time" moments and at least some personal animosity. But the grown-ups typically think and discuss first, which also might be why you don't see them discuss those topics in PRs...


It's really nice to have these discussions in person with the person sitting next to you. That's probably what I missed the most during the pandemic--impromptu code/design discussions. Screen sharing works but it's less spontaneous and natural.


> One could argue that if you have to discuss these very deep topics in a PR, you’d better have brought them up before coding that stuff wrong in the first place.

Fair point, and 100% on the nose. The times when this comes in handy are when you have to PR on a project you don’t know a lot about (which could be argued is a process problem, but I am getting way off trail now).

However, with your excellent point in mind, I am even more confused on what a “culture of rigorous code reviews” is.


It is a culture thing. People who receive reviews that address issues of design, structure, maintainability, and abstraction will learn to do the same for others. This is hard to establish but once it is there it sticks. Early stories about how some Googlers managed to shift the entire company to seeing code review as normal and expected are fascinating.

Google also has documentation, guidance, and training on how to review code effectively.


I would say any style guide that is in prose form and not machine-enforced is deficient. Modern linting and formatting tools are the best, most efficient means of enforcing style. No religious arguments are needed if the tooling decides what is correct.

Shameless plug for https://trunk.io/products/check - which will handle universal enforcement of all the tooling for all of the pieces of your tech stack.


One random thing I've been impressed with (that I know is public) is mutation testing: https://testing.googleblog.com/2021/04/mutation-testing.html , https://research.google/pubs/pub46584/

To pull an example from the paper, if your line of code says

    if (a == b || b == 1)
you might get a comment that says something like

> "Changing this line to

    if (a != b || b == 1)
> does not cause any test to fail."

Page 4 lists other mutations, like replacing logical ANDs/ORs with just `true` or `false`, switching arithmetic plus to minus, etc.

This isn't a larger system design context thing, though, just a testing one.
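
For a toy illustration of the idea (a hand-rolled sketch, not Google's tooling), suppose the code and its only test look like this:

    def access_allowed(role, owner, user):
        # A mutation tool might flip "or" to "and" on this line.
        return role == "admin" or owner == user

    def test_admin_is_allowed():
        # The only test: it passes for both the original and the "and"
        # mutant, so the mutant survives and gets flagged in review.
        assert access_allowed("admin", "alice", "alice")

Adding a case like `assert access_allowed("admin", "alice", "bob")` kills that mutant: the original returns True where the mutant returns False.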

Disclosure: I work at Google.


This is for test coverage? I get it technically, but it's not clear to me whether this adds value. Covering all the internal states/code paths in tests seems very hard and potentially of diminishing returns? Isn't time better invested in other things?

I'm also guessing this ties into Google's tooling of being able to tell which line of code is hit by which tests and running those as part of the mutation test?

EDIT: The blog has some discussion about the correlation between some of these mutation failures and real bugs. I remain a little suspicious to be honest. Also

"I also looked into the developer behavior changes after using mutation testing on a project for longer periods of time, and discovered that projects that use mutation testing get more tests over time, as developers get exposed to more and more mutants. Not only do developers write more test cases, but those test cases are more effective in killing mutants: less and less mutants get reported over time. I noticed this from personal experience too: when writing unit tests, I would see where I cut some corners in the tests, and anticipated the mutant. Now I just add the missing test cases, rather than facing a mutant in my Code review, and I rarely see mutants these days, as I’ve learned to anticipate and preempt them."

Now this one is expected: if you build automated tooling that comments on your code review, you expect people to try to avoid triggering it. However, there's still the question of what the quality improvement is and what the productivity impact is.


> Covering all the internal states/code paths in tests seems very hard and potentially of diminishing returns?

100% test coverage is the easiest goal to achieve (in Python). It's a pretty mindless activity. Writing proper tests that actually check the business logic is a much harder task.

From my experience it gives the biggest value for dynamic languages where you must be sure that every line is touched before releasing code.


Production impact: if you can replace:

  if (a == 10)

With

  if (true)
Then you're not testing the behavior of your application. If in production `a != 10` and you've never tested that case, it might be a problem.

Mutation testing can also do stuff like removing entire statements, etc.


The question (well, one question) is why would we suspect the lack of a test indicates a bug? For most non-trivial software the possible state space is enormous and we generally don't/can't test all of it. So "not testing the (full) behaviour of your application" is the default for any test strategy; if we could test everything, we wouldn't have bugs... Last I checked, most software (including Google's) has plenty of bugs.

The next question would be let's say I spend my time writing the tests to resolve this (could be a lot of work) is that time better spent vs. other things I could be doing? (i.e. what's the ROI)

Even ignoring that, is there data to support that the quality of software where mutation testing was added improved measurably (e.g. fewer bugs filed against the deployed product, better uptime, etc.)?

Is this method better than just looking at code coverage? Possibly none of the tests enter the if statement at all?

EDIT: where I'm coming from is that it's not a given this is an improvement to the software development process. It seems like there was some experimentation around the validity of this method which is good but like a lot of software studies somewhat limited. It also seems there's a lot of heuristics based on user feedback, which is also good I guess, but presumably also somewhat biased.

The related paper has a lot of details including: "Since the ultimate goal is not just to write tests for mutants, but to prevent real bugs, we investigated a dataset of high-priority bugs and analyzed mutants before and after the fix with an experimental rig of our mutation testing system. In 70% of cases, a bug is coupled with a mutant that, had it been reported during code review, could have prevented the introduction of that bug."

Which should imply(???) that 70% of "high priority" bugs can be eliminated during the review process by using this sort of mutation testing. Seeing data to that effect would be cool (i.e. after the fact) and if it's real that'd be pretty incredible and we should all be doing that.


> The question (well, one question) is why would we suspect the lack of a test indicates a bug?

It's not the lack of a test, but the fact that the existing test doesn't cover a branch.

Rarely taken paths are exactly where bugs often hide, because they likely weren't well exercised in human-driven testing either, and because they're hit rarely in general, so most users don't run into them often. If the condition is not easily reproducible, it can be very hard to figure out what to even put in a bug report.

Say you're working with images and have a branch for a 16 bit color (565) image. Such images are rarely used today, but they still exist. Among many users of your code it's possible only 1% ever hit that branch -- and those end up experiencing weird issues nobody else sees.

Or another example of such things is error handling. An application that works perfectly fine on a LAN can be hell to use on a flaky connection.


(Opinions are my own)

> is why would we suspect the lack of a test indicates a bug?

I can only speak for my experience but the code is not better because it is mutation tested. It is better because we have thought about all of the edge cases that could happen when inputting data into the system.

Mutation testing, as a tool, helps you find statements that are not being exercised when parsing certain data. For example, if I write an HTML parser and I only ever provide test data that looks like `<a href=....` as an input string, and a mutation testing tool replaces:

   if (attrs.has("href"))
      return LINK;
with:

   if (true)
     return LINK;
It is clear to a human reader that this conditional is important, but the test system doesn't have visibility into this. This means in the following situations you can be screwed:

1. Someone (on the team, off the team) makes a code change and doesn't fully understand the implications of their change. They see that the tests pass if they always `return LINK;`.

2. If you are writing a state machine (parser, etc) it helps you think of cases which are not being tested (no assertion that you can arrive at a state).

3. It helps you find out if your tests are Volkswagening. For example if you replace:

   for (int i = 0; i < LENGTH; i++)
with:

   for (int i = 0; i < LENGTH; i += 10)
Then it is clear that the behavior of this for loop is either not important or not being tested. This could mean that the tests that you do have are not useful and can be deleted.

> For most non-trivial software the possible state-space is enormous and we generally don't/can't test all of it. So "not testing the (full) behaviour of your application is the default for any test strategy", if we could we wouldn't have bugs... Last I checked most software (including Google's) has plenty of bugs.

I have also used (set up, fixed findings from) https://google.github.io/clusterfuzz/ , which uses coverage + properties to find bugs in the way C++ code handles pointers and other things.

> The next question would be let's say I spend my time writing the tests to resolve this (could be a lot of work) is that time better spent vs. other things I could be doing? (i.e. what's the ROI)

That is something that will depend largely on the team and the code you are on. If you are in experimental code that isn't in production, is there value to this? Likely not. If you are writing code that if it fails to parse some data correctly you'll have a huge headache trying to fix it? Likely yes.

The SRE workbook goes over making these calculations.

> Even ignoring that is there data to support that the quality of software where mutation testing was added improved measurably (e.g. less bugs files against the deployed product, better uptime, etc?)

I know that there are studies that show that tests reduce bugs but I do not know of studies that say that higher test coverage reduces bugs.

The goal of mutation testing isn't to drive up coverage though. It is to find out what cases are not being exercised and evaluating if they will cause a problem. For example mutation testing tools have picked up cases like this:

   if (debug) print("Got here!");
Alerting on this if statement is basically useless and it can be ignored.

> Is this method better than just looking at code coverage? Possibly none of the tests enter the if statement at all?

Coverage does not tell you the same thing that mutation tests tell you. Coverage tells you whether a line was hit. Mutation tests tell you whether the conditions that got you there were appropriately exercised.

For example:

   if (a.length > 10 && b.length < 2)
If your tests enter this if statement and also pass when it is replaced with:

   if (a.length > 10 && true)
Or:

   if (true || b.length < 2)
You would still have the same line coverage. You would still have the same branch coverage. But, if these tests pass, it is clear that you are not exercising cases where a.length <= 10 or b.length >= 2.

> where I'm coming from is that it's not a given this is an improvement to the software development process

In my experience if I didn't write a test covering it, it was likely because I didn't think of that edge case while writing the code. If I didn't think of that edge case while writing the code then I am leaning heavily on defensive programming practices I have developed but which are not bulletproof. Instead of hoping that I am a good programmer 100% of the time and never make mistakes I can instead write tests to validate assumptions.

> Seeing data to that effect would be cool (i.e. after the fact) and if it's real that'd be pretty incredible and we should all be doing that.

Getting this kind of data out of various companies might be challenging.


There's a pretty good Ruby gem I've used for this before:

https://github.com/mbj/mutant


Fighting over style is resolved via an eng-wide style guide and by not having juniors approve each other’s code ;)


I’m curious if you can think of any tools with a well built code review UI?


Our company uses codeapprove.com to get a Critique-like (Google's internal review tool) experience. It's not all the way there, but it's miles ahead of GitHub.


Based on the screenshots on codeapprove.com it looks similar to gerrit[1]. It also looks like gerrit is used for several google projects (eg. android, chromium), so overall it looks like a potential replacement as well. Are there any features that are in codeapprove.com/critique that aren't in gerrit?

[1] https://en.wikipedia.org/wiki/Gerrit_(software)


I have never used Gerrit, so I can't speak to it directly. My memory is that folks who have worked on both seemed to prefer Critique, but I am not sure if that's because of the tight integration with other tools like code search, linters, etc., or because of something more fundamental to the review tool itself.


Gerrit UX has historically been subpar compared to critique, however it's seen a lot of investment in recent years afaict. A lot of the remaining benefits of critique are largely due to its tight integrations with other Google tools. Critique is fairly malleable so it's possible with enough effort to get a similar experience by integrating with your own tools, but it's definitely something that needs to be staffed to work that well.


Thanks, I’ll check it out!


CodeApprove founder here! If you want to chat, my email is in my HN profile.


Thanks!


I can highly recommend Gerrit. It may lack eye candy, but from a UX perspective it's the most effective tool I've tried. It also enforces a patch-oriented workflow on the Git side, and you get a nice linear history in the end.


Gerrit was a rewrite/partial open-sourcing of a previous generation of Google's code review tool (Mondrian), so IMO it's the closest thing to the real deal.


I don't think an enormous company-wide filesystem matters when you're not an enormous company and you're not committing to fixing downstream dependencies maintained by other teams (or, often, by nobody) when you break them. Go's approach to versioning and dependencies works better when you don't have or want a monorepo managed by a single organization. Let unmaintained code sit, using the dependencies it was using at the time, until someone decides to maintain it.


One obvious benefit of the internal tools at Google is that they're all integrated with each other. So any Googler can quickly:

1. Search another team's code

2. Quickly jump into a Cloud IDE to make a change

3. Build and test the code, because everyone uses one build system

4. Send that code off for review and have the review tool automatically select the right reviewers and run the right tests

For a simple change (let's say a typo fix) all of the above can happen in a single browser tab in under 5 minutes even if you've never been in that part of the code before.

----

While I'm here a shameless but relevant plug: if you're looking for a Google-like code review experience on GitHub check out https://codeapprove.com


The hardest parts of this cycle for most teams are steps 2 & 3; getting a consistent development environment and build system across projects just does not happen. The loop is interrupted by the process of reviewing outdated docs, finding the people with the knowledge, and chatting with them to get a development environment running and to help with deploys to various QA/test environments before merging.

At Coherence, we agree that code search and code review, along with automated code intelligence, are a huge step forward. Tools like sourcegraph, graphite, codeapprove are all solving important problems in the cycle. But we believe that the general approach of “building” vs “buying” development infrastructure for these key steps is holding teams back.

Check us out at withcoherence.com if these sound like problems your team faces, too! (Disclosure, I’m a cofounder)


I agree! But —

- BigQuery is available on GCP

- Bazel is an open source Blaze

- google source formatting (for java at least) is open source

There are probably more…


For those of us stuck on AWS it's sad not having BigQuery, but the thing that really gets me is not having Dataflow

Most of industry still seems unaware that no-knobs data query and pipeline systems even exist. If I only had a dollar for every time I saw a PR tweaking the memory settings of some Spark job or hive query that stopped running as the input data grew....

I'd love to see more people write their workflows using the Apache Beam API so they'll have the option to switch to a no-knobs, scalable pipeline engine in the future even if they're not using one today.
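
For anyone who hasn't seen Beam, a minimal pipeline in Python looks roughly like this (a sketch assuming the standard apache_beam package); the point is that the same code runs unchanged on the local runner or on a managed, auto-scaling service like Dataflow:

    import apache_beam as beam

    # Count occurrences of each element; note there are no cluster sizes,
    # memory settings, or executor knobs anywhere in the pipeline code.
    with beam.Pipeline() as p:
        (
            p
            | "Create" >> beam.Create(["spark", "beam", "beam", "dataflow"])
            | "Count" >> beam.combiners.Count.PerElement()
            | "Print" >> beam.Map(print)
        )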


Blaze/Bazel actually sucks imo. The only good thing about it is that all of google uses that one piece of shit, making things nice and uniform and consistent.

There's a reason it isn't popular outside of elgoog.

IRL, every project tends to do a little bit or a lot of its own thing.


Bazel can be clunky, but not having some bazel equivalent can have very significant costs that are easy to get accustomed to or overlook.

Things like engineers losing time wondering why their node dependencies weren't correctly installed, or dealing with a pre-commit check that reminds them they didn't manually regenerate the generated files, or having humans write machine-friendly configuration that's not actually human-friendly because there's no easy way to introduce custom file transformations during the build.
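
For example (a hypothetical BUILD sketch, not anyone's real config), a custom file transformation in Bazel is just another declared step whose inputs and outputs are tracked like everything else:

    genrule(
        name = "render_config",
        srcs = ["config.tmpl"],
        outs = ["config.json"],
        # Any command can run here; Bazel caches the output and re-runs it
        # only when config.tmpl (or the command itself) changes.
        cmd = "sed 's/@ENV@/prod/' $(location config.tmpl) > $@",
    )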

Bazel doesn't spark joy for me and I wouldn't say I look forward to using it, but personally I would still always choose it for a codebase that's going to have multiple developers and last a long time. It's vastly easier to go with bazel from the beginning than to wish you could switch to it and realize people have already introduced a million circular dependencies and it's going to be a multi-month or multi-year process to migrate to it.


In my experience, Bazel is a net negative for most teams. Pretty much every JS engineer is familiar with npm; a tiny fraction are familiar with Bazel. Ditto with pip, cargo, etc. And it doesn't solve the hard part of the build process, which is distributed builds. Most of the user-perceptible value of Blaze is making builds fast by farming them out to a zillion machines — that's why it's called "blaze," because it's fast! — and Bazel doesn't do that for you.

And it's clunky, and you need to teach every new hire how to use it. The juice just isn't worth the squeeze. Just write an adapter in your build infra for the well known tools and be done with it. You'll get much more value putting work into something else, like code review tools, testing, dev environments, staging...


Being familiar with npm doesn't save you the time its clunkiness loses you. I see for myself and the other engineers around me that we waste real time every week clearing our yarn caches, waiting for "yarn install", realizing we forgot to run "yarn install" after syncing so we have to re-merge and re-upload, etc.

Here's a real-world example from last week: I spent several hours doing creative hackery trying to convince a particular compiler to compile the same file using different dependencies depending on context. That would have been a few minutes of work with bazel, but when your build process consists of "run this off-the-shelf compiler" and the compiler doesn't have any native support for building two different versions of the same thing with slightly different dependencies, then you're in trouble.
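
In Bazel that would just be two targets over the same source with different deps (hypothetical labels, and cc_library standing in for whatever language rule applies):

    cc_library(
        name = "parser",
        srcs = ["parser.cc"],
        deps = ["//third_party/codec:v2"],
    )

    # Same file, compiled again against the older dependency.
    cc_library(
        name = "parser_legacy",
        srcs = ["parser.cc"],
        deps = ["//third_party/codec:v1"],
    )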

Teaching new hires bazel is a one-time cost, and the only thing they'll need to get started on their first day is "bazel build <target>" and "bazel test <target>". When you don't have bazel, on your first day every new hire has to read through a gigantic wiki page explaining how to set up their dev environment (with different sections for mac and linux and special asides describing what to do if you're on a slightly outdated version of the OS, and then the wiki gets out of date and you have new engineers and the infra team all wasting their time debugging why the instructions suddenly stopped working for people on slightly newer machines, etc.)


There are many remote build backends available for Bazel - some open source and some proprietary (EngFlow and Google's RBE). Even without a build farm it will still be a huge performance boost due to caching - something make and co. (recommended in a sibling thread) cannot do, along with a bunch of other stuff that Bazel does for you.


I've heard the "huge performance boost" argument before, and in my experience it's often kinda marginal, or even negative because the native build tools have been optimized for their specific task. (Sure, I'll give you that it's better than Make, but is it faster than the Go compiler?)

I wasn't familiar with EngFlow; it looks like it's a startup that raised seed funding less than a year ago. I think what you're referencing with "Google RBE" is a Google Cloud project now rebranded to "Cloud Builds" — which supports the underlying native tools like npm, pip, etc without requiring you to switch to Bazel.

Bazel is better than Make for building large C/C++ projects (although it's hardly the only game in town for "better than Make"). But aside from that use case, in my experience it's not really worth the hassle. You can get most of the benefits you want without using it, and people are already going to be familiar with the tools native to the ecosystems they work in like pip, npm, etc.


> (Sure, I'll give you that it's better than Make, but is it faster than the Go compiler?)

It depends on what you're doing. If you're compiling pure go code that's truly only go, no go build will be faster. But if you have cgo or generated code or generated data or config files or...then well maybe you want something that is more flexible. And of course if you aren't building just go, then things get complicated fast. What if you have a backend and a frontend? `go test` probably isn't running your webdriver tests that compile your typescript somewhere in the build pipeline. Having a unified toolchain with one command (`blaze test //...`) is valuable compared to various test.sh (or make layered on top of n independent build systems or...)

And of course if you're like me and need to do things that involve reasoning or searching about code and dependencies, blaze is super necessary. "Find all of the dependencies of this file and test only those" is a question that most build tools aren't even remotely equipped to answer.
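
Concretely, that question is a one-liner with bazel query (standard query syntax; the target label here is hypothetical):

    # List the test targets that transitively depend on this one file,
    # then run only those.
    bazel query 'kind(test, rdeps(//..., //my/pkg:parser.go))' | xargs bazel test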

So in polyglot situations, bazel and similar prevail I think, but there's absolutely a point below which you don't care (and that point is going to be basically only hit by applications, not libraries).


Depends on which Make we are talking about; commercial Make tooling like ClearMake has certainly been able to do it since the late 1990s.


I actually worked with clearmake and a bunch of other Rational tooling in one of my early gigs and don't remember much in the way of cache improvements. As soon as you're doing some non-sandboxed IO, which make-based tools totally allow, it's out the window anyway.


Curious how ClearCase works, I found https://www.ibm.com/docs/en/rational-clearcase/7.1.2?topic=s.... They appear to be hijacking the open() and write() syscalls, so in some way they actually do have a sandbox that provides more accurate knowledge about the build graph than what the makefiles say! Otherwise, yes, makefiles themselves are very unsafe and frequently have errors such as underspecified dependencies or implicitly produced outputs, with no guardrails to prevent them.

Whether or not that sandbox blocks incorrectly declared dependencies is unclear. Last time I used ClearCase, many eons ago, it surely did not. Our project had tons of classic makefile issues like not depending on included headers. Remote builds were also magnitudes slower than local builds (our network was maybe not the best), and reading on the page above how the "shopping" algorithm works, you can imagine it being fairly slow anyway. Maybe that was for the best; imagining how incorrect dependencies mixed with remote caching would turn out gives me nightmares.


All object files and libraries would be cached and shared across builds using the same views (Derived object sharing).


Given how Borg, unleashed on the world as K8s, came to dominate orchestration, Bazel's lack of similar uptake outside of the Google walled garden indicates that it's not solving problems for non-Google teams significantly better than existing build tools.

Look at how much pain many companies have gone through to move to K8s from existing infrastructure; there is perceived value driving that.

Bazel lacks that perception of value.


> there's no easy way to introduce custom file transformations during the build.

Every real-world build system I’ve seen provides that functionality. In particular a standard make(1) can do it just fine.


make(1) has no native support for giving each build rule its own sandboxed view of the filesystem like bazel does.

If I could have a wish to upgrade file transformation with make(1), I'd probably want a widely-available, standard, simple command to make a rule-specific virtual filesystem that overlays a configurable read-only/copy-on-write view of selected existing files or directories behind a writable rule-specific output directory.


Why would you want it to mess with sandboxes? It’s a build system. There are other mechanisms for sandboxing, no need to reinvent the wheel.

Bazel is as non-standard as it gets - essentially yet another case of Google NIH - but apart from that, how is an ad-hoc, single-use pseudo-language better than reusing a standard mechanism? To me it's just bad design.


I don't particularly want to "mess with sandboxes", but I do want my builds to be relatively fast, correct, reproducible, extendable/customizable, with bonus points for being secure (meaning a compiler shouldn't be able to tamper with parts of the output it has no business tampering with) and more bonus points for supporting distributed builds and/or distributed caching

If someone wanted to make a new build system to compete with bazel and have those kinds of features, it's probably a safe bet the competing system would use some kind of sandboxing as well

Even if you ignore everything else, just the security part is a big deal: supply chain attacks are an increasingly big concern for companies of all sizes. If your build system allows any script invoked during any part of build process to secretly read or modify any input or output file, hackers are going to love it.

Almost all tech companies (even the multi-billion-dollar ones) that aren't doing something in the spirit of `bazel build` to generate their binaries have wide-open, planet-sized security holes in their build systems, where if you get one foot in the door you can pretty much do anything.


Are you calling bazel a nonstandard single use pseudo language, but make a standard tool?

That's just an argument from tradition.

And you want sandboxing because that's what gets you good caching. The value of bazel is never having to run make clean because artifacts aren't correctly being built from cache. Having no distinction between clean and incremental builds is really nice.


That is an argument from tradition - which you yourself have brought up, calling Bazel a "standard way".

> The value of bazel is never having to run make clean because artifacts aren't correctly being built from cache.

You can get that with make(1) too; check out FreeBSD's META_MODE for one example. And it didn't require reinventing the wheel.


> which you yourself have brought up, calling Bazel a "standard way".

I didn't do any such thing. My point is simply that make and bazel are similarly "nonstandard single use pseudo languages". In many ways, bazel is superior to make from a language perspective (it resembles other languages more closely, being a dialect of python, and avoids the loadbearing tab issue), so I think I could make the argument that bazel is in many ways less nonstandard, but make is certainly more common than bazel, so it could go either way.

> You can get that with make(1) too, check out FreeBSD's META_MODE for one example.

This suffers from the same issues that natural make does (notably the whole mtime thing). See https://apenwarr.ca/log/20181113 for a much better explanation than I can provide as to why make's entire model of "change" is irreparably broken, and why hash (+sandbox!) based approaches (which bazel and redo and nix and cargo and nearly every other modern build tool use) are far superior.

> And it didn't require reinventing the wheel.

You call inventing a new syscall to not-even-fully fix a limitation of the tool not reinventing the wheel? I guess it's not; it's more like building a weird grand shrine around the broken wheel. It's far worse. I don't want to have to change my operating system just to make make work better, yet still worse than the alternatives. That's simply not a compelling argument.


As a googler, I agree with this. Blaze itself is not that remarkable. What makes it good is that it is the only tool you need to use to build anything, in any language, in Google's monorepo, and that build failures are (usually) aggressively corrected. Plenty of build systems these days support varying levels of hermetic builds. The important thing to copy from Google here is the discipline and consistency around using one build tool and fixing build failures, not the specific tool itself.


It isn't popular? Like "Uber and a dozen similarly sized companies and some major OSS projects using it" not popular?


Uber engineering has a few gems but for every 1 gem they've hired 10 or more greedy idiots, so maybe not a great example?

If you love fighting with your hands tied behind your back, choose Bazel.

Otherwise, be pragmatic: Learn Make, Maven, and Gradle; then you'll be well-equipped for 95-99% of cases. Thankfully pip and npm are as straightforward as it gets.


> Uber engineering has a few gems but for every 1 gem they've hired 10 or more greedy idiots, so maybe not a great example?

What an odd sprinkling of something entirely personal.

> Otherwise, be pragmatic: Learn Make, Maven, and Gradle; then you'll be well-equipped for 95-99% of cases.

There's a time and a place for Bazel. Very large monorepos like those at Pinterest and Uber, with cross dependencies, and written in multiple languages benefit a lot from the remote backend and distributed cache of built artifacts.

Make, Maven, and Gradle, even just for JVM-based projects, don't seem to be entirely comparable.


Can you explain why you bother with pip and npm if php and plain js already cover 95-99% of cases?


Because I do whatever it takes to get the job done.

I also appreciate what Python and Javascript offer, there are some amazing libraries and tools tied to those ecosystems.



