I think you just proved my point. We're all of us running around with our pants down because we think Docker is taking care of this stuff but it's merely a bunch of features that look like they should be fit for that purpose but aren't.
And this is why I'm stuck with separate build and package phases: I need that separation between the data available at build time and what ends up shipped. But even there I'm pretty sure I'm making mistakes, because of design decisions Docker made thinking they were helping, when they actually made things worse.
For instance, there's no really solid mechanism for guaranteeing that none of your secret files end up in your docker image, because they decided that symlinks were forbidden. So I have to maintain a .dockerignore file and I can never really be sure from one build to the next that I haven't screwed it up somehow. Which I will, sooner or later.
I'm always one bad merge away from having to revoke my signing keys. It's a backlash waiting to happen.
You should know there was a pretty big bug fixed in .dockerignore in just the last release. [edit] That bug was in the logic for whitelisting files, which is generally the safest way to keep from accidentally publishing things (that is, when it works).
And a similar issue may still exist in docker-compose; that one is still open.
.gitignore keeps me from checking my files into git, but it doesn't keep me from publishing them in a docker image. So now I have a second way to screw up.
Can you link to this bug? I thought .dockerignore specifically didn't allow whitelisting and only allowed for blacklisting files that weren't to be included.
Are you saying that docker would include files that should have been excluded by .dockerignore? I'd be interested to learn more. Thanks in advance.
You could probably whitelist with a .dockerignore like
# exclude everything
*
# include the README
!README.md
# include the initiation script
!run.sh
(Note: .dockerignore only treats lines that start with # as comments, so the comments have to go on their own lines rather than after the patterns.)
You would want to check exactly what the globbing rules are for the .dockerignore file, though. I don't know whether '*' will catch .dotfiles, for instance.
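If you don't want to depend on the answer, one hedged option is to exclude dotfiles explicitly as well, so the whitelist works either way (the included file names here are just examples):

```
# exclude everything, dotfiles included, regardless of how '*' behaves
*
.*
# then re-include only what the image actually needs
!README.md
!run.sh
```

Order matters: the last matching pattern wins, so the `!` exceptions have to come after the broad excludes.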
There are a couple of frameworks where all of the production files end up in a known place, for instance /dist and one other directory. Rather than having to constantly blacklist everything, you just say "ignore everything except X and Y".
I'm sorry, things got hectic and I bailed on the discussion. I thought I had a handy link to the bug I was thinking of, but I couldn't find a back-link from the issue I'm watching to the one in docker/docker.
Some day I'm sure .dockerignore will be solid, but my confidence level isn't high enough yet (it's getting there) to rely on it.
My point was that there are other ways the directory structure and what is visible to COPY could have played out where vigilance is less of a problem. It's usually immediately obvious when a file you actually needed is missing from a build, but much less obvious when a file you categorically did NOT want to be there is present.
Because the system runs in one of those scenarios and dies conspicuously in the other.
The build-time environment variables were not designed to handle secrets. For lack of other options, people are planning to use them for this. To prevent giving the impression that they are suitable for secrets, it was decided to deliberately not encrypt those variables in the process.
How would they "encrypt" them that wouldn't be trivial to undo?
I think people aren't concerned about it because it doesn't make sense to try to put secrets into container images. Whatever you're using to deploy your Docker containers should make those secrets available to the appropriate instances at runtime. This is how Kubernetes handles secrets and provides them.
(For example, what if you have two instances of a service and they need to have different SSL certs? Are you going to maintain two different containers that have different certs? Or would you have a generic container and mount the appropriate SSL cert as a volume at runtime?)
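As a sketch of that second approach (service names, image, and cert paths are all made up): one generic image, with each instance getting its own cert as a read-only bind mount at runtime:

```yaml
# docker-compose.yml fragment -- everything here is illustrative
services:
  web-a:
    image: example/web
    volumes:
      - /etc/ssl/private/a.example.com.pem:/run/secrets/tls.pem:ro
  web-b:
    image: example/web
    volumes:
      - /etc/ssl/private/b.example.com.pem:/run/secrets/tls.pem:ro
```

The image itself stays generic and never contains either cert.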
I've actually read that. For context, it's a comment made before the feature was complete. Said feature, according to the manual, doesn't persist the value, and thus is probably suitable for passing a build-time secret.
From my testing though, as long as you set the build-arg and consume it directly, it doesn't seem to persist. That said, it's super easy to fuck that up if the tool you consume it with then goes on to save the secret somewhere.
Thus it's no doubt best to use expiring tokens or keep your build separate. Also, don't use it to seed a runtime secret, since that would force you to treat the image itself as a secret.
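The consume-it-directly pattern being described looks roughly like this (the ARG name and URL are hypothetical); the point is that the token is only ever used inline and never written to a file:

```dockerfile
# Dockerfile fragment -- ARG name and URL are made up
ARG OAUTH_TOKEN
# fetch a private artifact; curl doesn't write the token to disk here,
# but a tool like git clone with the token in the URL would save it in
# .git/config -- exactly the kind of slip described above
RUN curl -fsSL -H "Authorization: token ${OAUTH_TOKEN}" \
      https://example.com/private/artifact.tar.gz | tar -xz -C /opt
```

It's still worth inspecting `docker history` on the resulting image yourself, since builders can differ in what they record about the build args a RUN step used.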
I linked to that because it cross references to the PR where the build-args feature was added. If they're out of sync that's 1) news to me and 2) confusing and should be fixed.
I think one of the things we're seeing is that Docker is opinionated, a number of powerful dev tools and frameworks are also opinionated, and us poor developers are stuck between a rock and a hard place when those opinions differ.
For instance I'm still not clear how you'd use the docker-compose 'scale' argument with nginx. Nginx needs to know what its upstreams are, and there's IIRC still an open issue about docker-compose renumbering links for no good reason, and some Docker employee offering up how that's a feature not a bug. I could punch him.
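For concreteness, the rock-and-hard-place here is that nginx wants its upstreams named statically, roughly like this (service names and ports are illustrative), so anything that renames or renumbers containers underneath you invalidates the config:

```nginx
# nginx.conf fragment -- upstream names/ports are illustrative
upstream app {
    server app_1:8000;
    server app_2:8000;
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```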
Single use auth tokens and temporary keys sure would fix quite a few things, to be certain, but those opinions keep coming in and messing up good plans :/
I'm not sure we should really be having a go at them for what's in their git discussions versus what's in their documentation. I'd presume the documentation is canonical, and I'd rather they weren't muting their discussions just to remain consistent.
That said, as I said previously, --build-args are dangerous; it's trivially easy to store and then publish a secret, so it makes sense that they weren't jumping for joy about implementing it. I'd say it is needed, though, thus it's now a thing.
docker build --build-arg OAUTH_TOKEN=blah -t example .