It might be noticeable at extreme levels, which I've never come anywhere close to noticing.
I tend to only use `log := clog.FromContext(ctx)` once at the top of a method, and not `clog.InfoContext(ctx, "...")`, but that's mostly for style reasons and not performance.
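For anyone who hasn't seen it, a rough sketch of the two styles side by side (assuming `clog` here is github.com/chainguard-dev/clog; the function is made up):

```go
package main

import (
	"context"

	"github.com/chainguard-dev/clog"
)

// doWork shows both styles; the work itself is invented for illustration.
func doWork(ctx context.Context) {
	// Style 1: pull the logger out of the context once, at the top.
	log := clog.FromContext(ctx)
	log.Info("starting work")
	log.Info("finished work")

	// Style 2: pass the context on every call instead.
	clog.InfoContext(ctx, "starting work")
}

func main() {
	doWork(context.Background())
}
```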
I think Bazel can be a good fit for larger polyglot organizations that need to manage large codebases in many languages in a uniform way. Basically Google-circa-2010-sized organizations, coincidentally!
For smaller teams, adopting Bazel too early can be a real productivity drain, where you get all of the downsides of its constraints without as many of its benefits. Bazel is overkill for a project of ~10 Go apps, for example. Ko was actually created to help such a project (Knative) migrate off of Bazel's rules_docker to something better, and I think it achieved the goal!
I do agree with you in general. However, while containers were being invented to solve this problem, Go was also solving this problem.
For the most common simple Go applications, if you build the same code with the same version of Go installed, you'll get the same dependencies and the same artifact.
Building Go applications in containers is not necessary in general, and doing so makes it much more complicated to share a build cache. You can of course do it with enough additions to your Dockerfile, but why bother?
If your developer team and CI environment are all using a similar recent Go, they shouldn't see different behavior when using ko.
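As a quick sketch of that claim for a typical pure-Go app (the importpath is a placeholder):

```sh
# Same code + same Go toolchain => the same bits, on your laptop and in CI.
# -trimpath strips local filesystem paths so they can't leak into the binary.
CGO_ENABLED=0 go build -trimpath -o app ./cmd/app
sha256sum app   # same digest on both machines
```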
See, now that right there is the issue. _IF_ your developer team and CI environment are all using a similar recent Go. Maybe you can keep this going with a single application and a small team that is actively maintaining it, but even then I've run into issues with people using different computers with different architectures. Just this past week I had to solve a problem with a Go repo that wasn't building a container, caused by the fact that whoever built out the development configuration was using an Intel Mac, and certain aspects of the build didn't work on M1/M2 Macs. I solved this by containerizing the entire thing, and now the entire setup for the repo is a single command. No need to install any dependencies aside from Docker on your machine, no wondering if you're following the instructions correctly; just clone the repo and `make run`.
Now extrapolate this out to larger orgs, where months might pass between changes to repos and commits might come from disparate teams. Try to mandate specific Go versions AND versions of any developer tooling you might need to run your Go application locally, and you are inviting chaos.
Always build in the container. Local system dependencies are the enemy. Docker configuration for Go is actually extraordinarily simple compared to most languages, and even something like build caching is easy to handle via volumes.
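For example, a sketch of the volume-based caching I mean, assuming the official golang image (the tag and paths are illustrative):

```sh
# Build inside the Go image; named volumes keep the module and build
# caches warm across runs, so rebuilds stay fast without any host toolchain.
docker run --rm \
  -v "$PWD":/src -w /src \
  -v go-mod-cache:/go/pkg/mod \
  -v go-build-cache:/root/.cache/go-build \
  golang:1.22 \
  go build -o app ./cmd/app
```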
I agree with this being an issue, although it's certainly less pronounced with Go compared to other languages. I don't think containers solve the issue, though: none of the mainstream approaches to building containers actually make the builds (and by extension the images) inherently reproducible. It just shifts the issue from differing host systems to differing base layers (granted, the issue is less pronounced there). As long as not every dependency is fully pinned, your builds can break on any given day, and can differ arbitrarily between machines unless you build at exactly the same time.
To solve the issue fully you need a more comprehensive approach to packaging. nix or guix can provide that.
Containers are more useful as a software distribution mechanism.
I would say that containers are the foundation of the practice of reproducible builds. They don't solve the problem on their own, but containerization is a core element of the best practice of reproducible builds, along with utilization of lockfiles and infrastructure as configuration. It's certainly a hell of a lot easier to get a containerized application to a reproducible state than one run on a local machine with an unknown architecture and OS.
> I would say that containers are the foundation of the practice of reproducible builds.
If you mean containers as in the isolation features that are utilised by docker et al. to provide their flavor of compartmentalization, then yes, those are pretty useful for reproducibility (although not 100% necessary, I think).
If you mean containers as in a bundled Linux userland (so, e.g., a docker container running some image), then no, that is entirely orthogonal to reproducibility, as demonstrated by nix and guix, which AFAIK use the low-level isolation features for their build sandboxes but do not have anything resembling a container image involved.
Hey, ko maintainer here! I'd love to answer any questions or hear any feedback folks have.
Ko's simplicity comes from focusing on doing exactly one thing well -- it doesn't have to run containers to build code in any language; it just builds minimal Go containers, and that means it can focus on doing that as well as possible.
Another powerful benefit of focusing on Go (IMO) is that ko can be used to transform a Go importpath to a built image reference.
A common simple use: `docker run $(ko build ./cmd/app)`
This is also how Kubernetes YAML templating[1] works, and it's also how the Terraform provider[2] works -- you give it an importpath, and ko transforms it into a built image reference you can pass to the rest of your config. I don't even "use ko" day-to-day most days, I mostly just use terraform. :)
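Roughly, the YAML flow looks like this sketch (the importpath and output reference are placeholders):

```sh
# deployment.yaml refers to the Go importpath with a ko:// prefix:
#   image: ko://github.com/example/repo/cmd/app
#
# ko builds and pushes the image, then emits the YAML with the
# built image reference swapped in:
ko resolve -f deployment.yaml
# => image: example.dev/app@sha256:...
```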
Ko works ok but lacks a lot of configurability compared to something like jib, and that does end up being important. There are a few open issues about it, but the commit history is basically dependabot; not much action on UX. Better than nothing, I guess, but it really does give off the feeling of a project spun off from Google and left to die.
Despite that, it's probably still the best tool for building containers in Go; it's just not pleasant.
That works, but it also means maintaining a Dockerfile that does the COPYs, when all you're really doing is assembling image layers. This is more or less exactly what ko does, without using the Dockerfile language to describe it.
Not having to run a Docker daemon is just a nice bonus! :)
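For comparison, the Dockerfile being replaced is typically just something like this sketch (the base image is only an example, not necessarily ko's default):

```dockerfile
# Assemble the image from a prebuilt binary: copy it onto a minimal
# base layer and set the entrypoint. This is the part ko automates.
FROM gcr.io/distroless/static
COPY app /app
ENTRYPOINT ["/app"]
```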
We eventually built our own registry in Go running on Cloud Run, which now serves all our images on cgr.dev.
Zero egress fees is really a game changer.