Docker-slim: Minify your Docker container image without changing anything (github.com/docker-slim)
379 points by Bender on Dec 10, 2019 | 99 comments


Most people would benefit from the distroless base image: https://github.com/GoogleContainerTools/distroless

It's a base image with binaries from Debian deb packages and necessary stuff like ca-certificates, and absolutely nothing else, while still being glibc-based (unlike Alpine base images).

Example images I built with the base image:

- C binary, <10MB https://hub.docker.com/r/yegle/stubby-dns

- Python binary, <50MB https://hub.docker.com/r/yegle/fava

- Go binary, 5MB https://hub.docker.com/r/yegle/dns-over-https
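A minimal sketch of how such an image is typically put together, assuming a static Go binary and the gcr.io/distroless/base image (stage names and paths here are placeholders, not from the images above):

    # build stage: compile a static binary
    FROM golang:1.13 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /server .

    # final stage: glibc, ca-certificates and little else
    FROM gcr.io/distroless/base
    COPY --from=build /server /server
    ENTRYPOINT ["/server"]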

Another trick I use is https://github.com/wagoodman/dive to find the deltas between layers and manually remove them in my Dockerfile.


What do you do when you need to debug an issue and the container contains no utils?

I expect someone will leave a comment saying "But you shouldn't be entering containers, you should be using Ansible/Kubernetes". Yes, that is how I manage changes but sometimes you just have to log in and see what is going on with htop/etc


My dream solution to this issue is a one liner command like:

    docker exec -it --augment=ubuntu my_container bash
This would start bash in the container, but also layer the rest of a standard ubuntu image into the filesystem just for my tools, without affecting the application in the running container.

I'm pretty sure that's possible with current linux kernel mount namespace/overlayfs infrastructure used by docker - all that's needed is the command line tool to support it.


The new ephemeral container support in kubernetes lets you do essentially that. You bring the filesystem from another container image into the PID/network namespace of a running container in a pod.
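In later kubectl versions this became roughly a one-liner (a sketch; the exact command depends on the Kubernetes version, and the pod/container names are placeholders):

    # attach an ephemeral busybox container to a running pod,
    # targeting the app container's process namespace
    kubectl debug -it my-pod --image=busybox --target=my-app-container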


lol, it's fun to imagine going back in time to explain what you just said to my 2004 sysadmin self. back when I used to build servers, and colo them, and physically maintain them.


Which version of bash would you expect to run in this example?

(1) If it's the bash version from the standard ubuntu image, you will need to specify where to mount your application's filesystem inside the ubuntu filesystem.

(2) If it's the bash version from your application, then it's the other way around: you will need to specify where to mount the ubuntu filesystem inside your container.

Option (1) seems more practical. My point is that you will need to specify a mountpoint either way, and your commands will need to take this mountpoint into account.


Mount both filesystems at the same place as an overlay - if a file exists in both, I don't care which I see.


I see. That will work as long as your app is built on the same distro as your tools image (in your case, ubuntu).


You can debug such containers by running another debugging container that joins their corresponding namespaces. For example, the most frequently used namespaces are pid and network; with these two namespaces of the target container joined, you can see its pids and binaries as well as its network traffic.

For docker and k8s, there are two helpful tools which implement what I said with a simple and intuitive UI:

* https://github.com/zeromake/docker-debug

* https://github.com/aylei/kubectl-debug

Edit: Add links for the helper tools.


The github page for Docker-slim goes into some detail on how this can be done using a side-car container[0].

[0] https://github.com/docker-slim/docker-slim#debugging-minifie...


A good pattern is to build an image specifically containing troubleshooting tools which can be run and attached to a problem container's namespace. That gives you a standard set of tools without having to bake them into every image.
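A rough sketch of the idea with plain docker (the container name is a placeholder, and nicolaka/netshoot is just one example of a tools image):

    # run a throwaway tools container inside the target container's pid and network namespaces
    docker run -it --rm \
      --pid=container:my_app \
      --network=container:my_app \
      nicolaka/netshoot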


Is there more doc on this idea somewhere? In particular, being able to strace without root (etc) would make life a lot easier.


FYI, you don't need any special permissions to strace in a docker container - you just need to disable the default seccomp profile (docker run --security-opt seccomp=unconfined), which blocks use of many unusual-in-production syscalls including ptrace: https://docs.docker.com/engine/security/seccomp/

One common workaround floating around the internets is to use --cap-add SYS_PTRACE. This has the side effect of permitting the ptrace syscall, but it also gives you the ability to ptrace processes owned by other users etc. That's more than you need and it's kind of dangerous in a production-ish container.
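For example, something like this (a sketch; it assumes strace is actually present in the image, and the image/command names are placeholders):

    # with the default seccomp profile, even strace'ing a child process fails;
    # disabling the profile is enough, no extra capabilities required
    docker run --rm -it --security-opt seccomp=unconfined my_app_image \
      strace -f my_app_command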


I might be thinking of a different scenario (and I'm generally using Singularity rather than Docker). I want to start my container under 'strace' and see everything. This is not generally possible in the obvious way, as there's a setuid-root binary in the process tree that blocks further strace'ing.

(One can still attach after everything's running, but that's not always good enough.)


For running network-related debug tools: use nsenter (https://github.com/jpetazzo/nsenter/blob/master/README.md), which allows you to run network-related tools inside the network namespace of the container.
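A sketch of that, assuming the util-linux nsenter on the host (the container name is a placeholder, and this needs root):

    # find the container's init pid on the host, then enter only its network namespace
    PID=$(docker inspect --format '{{.State.Pid}}' my_container)
    nsenter --target "$PID" --net tcpdump -i eth0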

For simple shell access, use the :debug variant of distroless images which include a shell.

For more complex troubleshooting, I think other people have recommended many ways. I haven't had the need to do such troubleshooting, but if I needed to I would mount an image with the necessary binaries into the container. This is where distroless becomes handy: I can mount a Debian image and not worry about ABI compatibility.


Use a Docker sidecar; it's in fact explained in the docker-slim repo:

https://github.com/docker-slim/docker-slim#debugging-minifie...


And with Kubernetes you can use Ephemeral Containers for that.


Answer to "How do you debug an issue in a running container?" is "you read logs or you don't."

Generally you build a special debug image that has busybox or whatever. In the case of distroless, the debug image has busybox and everything that comes with it.

Also, what are you trying to see with top/htop? In an ideal world you will see a single process, pid 1, which is your entry point. There shouldn't be more than one process running in it.

You can get the resource consumption of a container without logging into it, just like you can list its running processes without getting into the container.
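For instance (a sketch; the container name is a placeholder):

    # host-side inspection, no shell inside the container required
    docker stats --no-stream my_container   # CPU, memory, network and block I/O usage
    docker top my_container                 # the container's processes, as seen from the host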

There is nothing else you can do without dragging in a whole lot of dependencies:

- Anything java related will require a JDK

- Debugging any native code will require a whole debugger

- Debugging python/ruby will either work or will require dev dependencies

Sidenote: Who the fuck uses ansible to debug containers?


yum install


I went to a presentation on Sysdig thinking that would be some kind of solution. Not really; not unless you want to hunt down or write syscall filters (or find some online) or pay for the Enterprise version.

I just wish there was a way to do the basics:

   1. Look at files within my running container (maybe even modify them, without needing vim or nano installed inside it).
   2. Ping/ICMP something from within the container (again, without ping being in the container itself)
   3. DNS lookups from within the container
   4. Connect to a port on an IP or DNS name from within the container
   5. Inspect the contents of a dead container that won't start without having to commit it first.
I did a post a while back on how I feel about debugging within containers, and I should probably write another one because I don't think I cover those 5 things (a few of them can be approximated from the host, as sketched below):

https://battlepenguin.com/tech/my-love-hate-relationship-wit...
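A rough sketch of items 1, 2/4 and 5 from the host (all names are placeholders; item 3 works the same way as 2, with nslookup in place of ping):

    # 1. copy a file out of a running container, edit it locally, copy it back
    docker cp my_container:/etc/app/config.yml .
    docker cp config.yml my_container:/etc/app/config.yml

    # 2/4. run network tests from inside the container's network namespace
    docker run --rm --network container:my_container busybox ping -c 3 example.com

    # 5. inspect a dead container's filesystem without committing it
    docker export dead_container | tar tvf - | less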


For point one you can grab the running container's image tag and then add a layer on top with any tools you need.

You obviously won’t get the same operational state but if you want to poke around a container you’ve built and see what’s in it, you can just extend it.
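Something along these lines (a sketch; image names are placeholders, and it assumes a Debian/Ubuntu-based image so apt-get is available):

    # extend the image the container was built from with a few debugging tools
    docker build -t my_image:debug - <<'EOF'
    FROM my_image:latest
    RUN apt-get update && apt-get install -y --no-install-recommends procps curl vim
    EOF
    docker run -it --rm my_image:debug bash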


I'd love to have 50 upvotes to give to this particular comment. With Docker being so fashionable, too many applications which have strictly no business being containerized are shoehorned into containers (e.g. Confluence).

My rule of thumb is: as soon as I have to `docker exec` into a running container because something's wrong, this container needs to be stopped and a VM should be used instead.


I just became familiar with the Alpine base images.

Curious to know what the benefit conferred by it being 'glibc-based' is exactly?


Also, I beg you not to use Alpine. It is horrible from a security perspective. The Alpine team doesn't have enough security staff to upgrade packages when vulnerabilities are found. Most popular Linux distro vendors publish OVAL data[1] which can be used to find and fix vulnerable packages. But not Alpine [2].

[1] Example- https://www.redhat.com/security/data/oval/

[2] This is the closest you can get- https://github.com/alpinelinux/alpine-secdb


Alpine uses musl instead of glibc. That works well for most things, but not all things. For example, there are some JVM implementations that don't like musl unless you do quite a lot of work.


> Alpine uses musl instead of glibc. That works well for most things, but not all things.

It works well until it doesn't. And screws up your whole deployment pipeline.


I never understood the whole Alpine thing. I would say most sysadmins doing Docker work don't understand what a libc is, or what its variants imply, well enough to grasp the consequences of their choices. Hell, I work in embedded and have worked with uClibc and musl and have been bitten quite hard. Some calls not working as expected are loads of fun. All for a few MB, when it could be done just by using a proper small base image like the ones suggested above.


How so? If you have a deployment pipeline, wouldn't you just detect failures?


The standard benefits of using something mainstream vs. something niche: 100x more scenarios tested in the wild.

In my limited experience, some things don't compile when targeting musl. If everything works, Alpine is fine. Otherwise, it may be fairly difficult (and mostly unjustified) to fix.


For one: with musl you can't reuse manylinux Python packages from PyPI (e.g. lxml) and you'll have to have GCC to build and install those packages.


Mostly compatibility. GNU's libc and Alpine's libc implementation should expose the same ABI but might not be 100% identical.


Yeah, because of that, it's really hard to get Alpine-based Docker images to work in some situations, for instance:

this: https://github.com/grpc/grpc/issues/18150#issuecomment-47999...

And this: https://github.com/pypa/manylinux/issues/37

I use Python. Anytime you need to compile from source to build a pip package and the upstream package developer has decided not to support Alpine's libc implementation (aka musl), you will have a big problem, unless you can control your dependencies and either include as few pip packages that require compilation as possible or find binary builds.


Some of the examples in the repo include compressing distroless-based images.

For example:

from python2.7:distroless - 60.7MB => 18.3MB (minified by 3.32X)


speaking of dive... there's a bit of overlap between docker-slim and dive. docker-slim produces similar container image reports (with more details about the layers, but no diffs, for now... the layer diff support is coming)


Have you had a better experience using this versus Alpine?


Being able to use manylinux PyPI packages is a huge benefit.


Nothing beats

    FROM scratch


This is misleading. Even for very basic things you'll need timezone files and ca-certificates for many Linux tools/libraries to work properly.


Yeah, and all of these can be copied in from another container. I've slimmed down lots of sidecar containers and, on top of that, used UPX [0] on the binaries as well. Premature optimization is the root of all evil, true, but sometimes, for example for Prometheus exporters you need to run on a bunch of nodes as sidecars, it totally makes sense to go the extra mile.

[0]: https://upx.github.io/
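A sketch of that copy-the-needed-bits approach for a static Go binary (stage names and paths are placeholders; the ca-certificates and zoneinfo locations assume a Debian-based build stage):

    FROM golang:1.13 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /app .

    FROM scratch
    # copy just the runtime bits the binary actually needs
    COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
    COPY --from=build /usr/share/zoneinfo /usr/share/zoneinfo
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]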


I wish they would add more Node versions to the Node distroless image.


Does it go like this in terms of size?

scratch < distroless ~= alpine < debian-slim


Well, the distroless base is 3x Alpine. Which is a clickbaity way of saying it's 10MB bigger.

The distroless nodejs image is... 10MB bigger than the same thing on Alpine.

The main purpose of distroless is less attack surface rather than size. Without a package manager, mutating a container is a PITA. Not having `ps` or `cat` makes it hard to read secrets that you injected into the container one way or another.


Alternatives if you don't want to risk missing some file that only gets loaded 10 minutes in:

1. Start with a small base image, e.g. for Python there's "python:3.7-slim". For Python I'm not a fan of Alpine, but for Go that gives you an extra small base image (see https://pythonspeed.com/articles/base-image-python-docker-im...).

2. Don't install unnecessary system packages (https://pythonspeed.com/articles/system-packages-docker/).

3. Multi-stage builds (in Python context, https://pythonspeed.com/articles/smaller-python-docker-image...).

You can find similar guides for non-Python as well. Basic idea being "don't install unnecessary stuff, and in final image only include the final build artifacts".


I think the importance of small Docker images is generally oversold. I regularly deploy multi-GB images on Google Cloud, and startup even on a fresh node only takes 60 seconds or so. If the node's already hot (i.e. has cached the image), starting a container takes no more than a few seconds.

I think what's more important is layering the Dockerfile in the right order. You should be putting most of your large, infrequently changing assets in the lowest layers. Then put smaller, more frequently changing assets in the top layer. If you have a 4GB image, but only change the top 10MB layer, then it only requires caching 10MB of new data when you update the container. But if you change a lower layer, then it requires re-building and re-caching everything above it.
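A sketch of that ordering for a Python app (package and path names are placeholders):

    FROM python:3.7-slim
    # large, rarely-changing layers first: system packages and pinned dependencies
    RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
        && rm -rf /var/lib/apt/lists/*
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt
    # small, frequently-changing application code last, so only this layer gets re-pushed/re-pulled
    COPY . /app
    CMD ["python", "/app/main.py"]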


Adding a minute to some deployment process can be fairly significant. The docker push and pull is probably 70% of my deployment process, and my images are already ~100MB or less.

I don't think the concern is 'how long does deployment take' but 'how fast can we iterate?'. Building and loading the images on a local dev machine to test a 2-second change would take much longer with larger images. Getting feedback on a PR merge from a CI build agent would take minutes longer.

I don't think the importance has been oversold.


> I don't think the concern is 'how long does deployment take' but 'how fast can we iterate?'

And for iteration, the only thing that matters is the size of the top layers that are being iterated on. Not the overall image size itself.

You can put `RUN apt-get [kitchen sink]` at the beginning of the Dockerfile and it pretty much won't matter. When you change anything in the project repository, that bigass giant base layer doesn't get re-pulled because it doesn't change.

To validate a layer, the Docker daemon just compares the hashes. So, for unmodified layers, docker pull is effectively constant time with respect to image size. The Docker daemon only downloads from the registry starting at the bottom-most modified layer relative to its cache.

If anything, throwing the kitchen sink in the base layer is better for fast iteration. When you're being parsimonious about third-party libraries and packages, you'll frequently have to rebuild the base image. If you `RUN apt-get [everything]`, then you'll hardly ever need to rebuild/re-push/re-pull that layer, because you'll always have whatever you need already available.


It depends on the size of the project and what you want to optimize. The time spent on push and pull of Docker images was a real deal breaker once upon a time, when I tried to improve the deployment speed of a project by switching from Heroku builds to a Docker-based solution.


In a fresh pod/vm it still needs to pull all the layers, even the big, old one.


In our environment, we update way more frequently for security updates of dependencies than we do for application updates. These are usually updates to base images.

A couple of gigs is not bad one time. But it's multiplicative, even without updating our application at all, we're transferring: [avg size] * [# of images] * [# of security updates] * [# of nodes]

Given that we are usually able to get small images just by using slim base images, it's a no-brainer for us. No, we don't pull our hair out trying to save a meg, but we're not inheriting Ubuntu as a base image for a 1MB microservice.

By using slimmer images we have fewer security updates to apply as well. Since we usually give security updates some human attention, that's fewer man-hours we have to spend on this too, which is way more expensive than bandwidth.


Having needed to build a 25GB container for production, I'm all for containers being as small as possible :) For the curious, we had a business case that required in cluster file storage with an AWS S3 compatible API which was better than the alternative options at the moment.

I learned a lot about how to keep the container image to "only" 25GB. I had to download files into the container, start Minio (the object store) at build time, upload the files to Minio to generate some needed metadata, and then delete the downloaded copy of the file. All of this on a build server that had 80GB of storage. I tell myself I have embedded storage at scale :)


I've never seen a container image anywhere near that big, so still curious :)

What exactly did the 25GB include?


It came preloaded with a bunch of different versions of software packages for various operating systems and architectures. This was for an on-premise deploy of Kubernetes where downloading files from outside the cluster was not an option. We were in a rush and this was the best idea anyone had at the time.


Multi-GB images still seem... big. Though to be fair in many situations those 60 seconds are meaningless, so might not be worth the time to optimize.

Often images that large are the result of including build toolchains, which you can omit with multi-stage builds (e.g. for Python https://pythonspeed.com/articles/multi-stage-docker-python/).


> startup even on a fresh node only takes 60 seconds or so

There are very complex web applications that download and start in < 10s.

It's nice when things are fast not slow.


Small images are useful in many scenarios. Developers iterating locally don't have to wait as long to produce them; when you deploy them to many machines, you don't have to worry about as much bandwidth due to the thundering herd problem; and you're paying less for storage in your Docker registry. This all adds up. I've got thousands of Docker images in my registry currently, which is SaaS Artifactory, and the transfer and storage costs are significant.

A tiny image for running your app, such as an Alpine Linux base with a static executable in it, is also much more secure since your attack surface is significantly smaller.


I agree it’s oversold for most use cases, clouds do have pretty decent networking after all.

Having said that as an Australian with sub par internet at home, I do appreciate not making images gratuitously large. Just because SDK images won’t ever be deployed to prod doesn’t mean you shouldn’t strip out unnecessary junk.


Similarly, just yesterday I had to go home from the cafe I was working in because I had to push a docker image. The wifi just wasn't fast enough for that, though plenty fast for googling documentation.


Clearly you've never deployed a 30 GB image, where 26 GB is the final, application building layer. And that's with staged building and working to minimize the size of overlay layers, redundancy, etc. Okay, so that might be a particularly pathological case, but still, resources are resources. I'm fine with a multi-gig machine learning image where the model and most of the heavy stuff is in a middle layer and the only redeploy is a few MB of python code. But that architecture isn't always easy, either.


It seems this is basically analyzing the running application inside the container and only packaging what's needed to make it work, at a more granular level than OS-level packages.

Interesting concept. I wonder how it is expected to cover 100% of the app usage if certain things aren't triggered during the analysis phase.


I guess the burden is on the app dev to ensure 100% functional coverage? That seems a little "yikes".


Yes, the coverage might not always be ideal. The tool will have additional static analysis to figure out the coverage. For now it relies on you to create custom probes to ensure better coverage if you need it...


Is that any different than anything else? Compilers, asset pipelines, and build tools all work the same way: they make assumptions about how a system works and try to optimize on those assumptions. Test your app, run QA, etc. Most licenses make no promises that the software will work, so this tool doesn't seem any different.


Wow, that's an oddly defensive response.

My point is merely that this is quite a significant risk. If you fail to exercise 100% of your code paths via functional testing (so you've got to have comprehensive positive and negative functional testing, which is pretty rare in my experience), you risk producing an image with docker-slim that will break. You've got to think about exercising every single possible interaction with every other component running on an OS. That's no small feat.

Think about it. That's not just 100% of _your_ code paths, that's 100% of the code paths that you could possibly ever trigger in any library that you consume, and you have to think about what might influence those circumstances. There's all sorts of angles to consider, e.g. Does latency of DNS response matter? Does time of day matter? Does IPv4 vs IPv6 matter (answer is likely yes in this case, so you might need to think about running the functional tests coming from both address stacks).

docker-slim is a neat idea, but it seems to come with significant risk.


> Is that any different than anything else?

Yes. In practice test coverage tends to be well below 100%. This is fine if you're just running tests but if you're deciding which part of your package should be pruned based on this sort of analysis then it's very likely that this will cause problems.


Yes, it is a potential problem, but in most real-world cases it's good enough, and if you have decent test coverage to begin with (which you should have :-)) you can run those tests against the minified container afterwards to confirm that it's working as expected.


Isn't it more or less the same with or without minification? If one wants to be sure one more or less has to have a good test suite. In this case one would simply run those tests against the minified image.


There is a stage during the analysis where you interact with the container. So you can do something to trigger the resource usage that you want it to pick up on.


It seems like this tool is built upon the assumption that the containerized program will load libraries and read files while this tool is tracing it.

It seems like the biggest FAQ item is missing from the readme: what happens if my container reads a file only every so often and this tool doesn't capture it?

Also, do I need to keep the container running for a while for this tool to minimize the files in the rootfs? It seems impractical, especially in headless environments like CI/CD.


The temporary containers usually don't run for too long. The probes (http, for now) are there to ensure that the app/services gets to do something useful exercising different code paths. Still, it is possible that something could be missing or you might want to keep something extra. For those cases you can tell docker-slim what else you want to keep in your container image (it has a few flags for that).


It seems they do try and solve this, but potentially not completely depending on what triggers your file load. It seems to be able to simulate HTTP traffic.


Their FAQ doesn't answer what it is that they are removing. Can someone shed light on that? As others have said, it seems to watch your application running and then remove anything it doesn't see your application using. Seems like a very high-risk + high-reward method.


At this point in time the tool is mostly relying on dynamic analysis though there's a bit of static analysis too when you want to include extra artifacts. The dynamic analysis part is done using a couple of different monitors that look at what files are accessed in the container and what system calls are made. Yes, there's a potential risk that something is missing, but you can mitigate this risk in a number of ways. First, you can run your own container test to ensure better coverage. Second, you can include additional artifacts by explicitly telling docker-slim what you want to keep regardless of what it sees.


It also has application-specific magic hidden deep in the source. Example: https://github.com/docker-slim/docker-slim/blob/master/inter...


https://github.com/docker-slim/docker-slim/blob/master/inter...

    func fixPy3CacheFile(src, dst string) error {
    ...}
Yeahhh, that is your classic example of "things you definitely DO want an explanation comment for"


Different application runtimes have interesting hidden behaviors. With Python, for example, the runtime will generate cache files from your .py files and then use them instead of those .py files. However, it still checks that the original source code is there. If it's not there, the runtime will refuse to use the cache files.


Yes, this needs a "what is the catch?" section in readme.


The main catch is that it targets application container images and not generic base images. This is the most common gotcha many people encounter. There has to be an app/service in the container that does something specific.

And, of course, it is possible that not all artifacts will be identified. There are a couple of ways to mitigate this. First, you can create custom probes for your app/service to make sure the app container can be analyzed much better. Second, you can explicitly tell docker-slim what you want to keep in your container image (you can specify files or executables)
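A sketch of what that looks like on the command line (flag names as I remember docker-slim's CLI; check the project's --help, and the image name and paths here are placeholders):

    # probe the app over HTTP during analysis, and keep some artifacts regardless
    docker-slim build --http-probe \
      --include-path /etc/ssl \
      --include-bin /usr/bin/curl \
      my-app-image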


Exactly what I thought... there's nowhere a 1-2 sentence summary of what they're doing.


Another commenter above seems to think it’s dynamically determined


One of the monitors leverages FANOTIFY in Linux to determine what files your application is using. This is the same thing many AV tools use to detect malware :-)

The future version will get more different runtime monitors and it will do much more with static analysis too. Right now its static analysis is limited to LDD-like dynamic library inspection for extra artifacts you want to keep in your image. There's a lot more that can be done there...


Very interesting, but I worry this will just break a lot of applications that are run through it in subtle ways. For example, removing system packages can have negative effects not noticed except in subtle edge cases like DNS resolution.


I've done a similar yet simpler hobby-scoped project:

https://github.com/tzickel/docker-trim

Last time I checked, some of the cases from their open issues where theirs doesn't work did work on mine.

Also, mine is a few lines of Python, if you want to learn how to trim a Docker image.


Is there somewhere a good walkthrough of the constructs that Docker containers default to? Not these slim ones, the default Docker ones.

As someone who works with docker containers on a somewhat daily basis I only have a vague idea of what they do under the hood and don’t have much of a reference point when comparing this slim impl to the default one.


I'm not completely sure what it is you want to know, but basically there are two pieces to Linux container tech (Docker and others). The first is a set of Linux kernel features that lets us isolate various aspects of processes in separate namespaces. The second is layered images, using filesystem features like overlayfs and copy-on-write to avoid having to duplicate everything. These two features of the Linux kernel are the real "container technology"; Docker and others are basically just user interfaces for these.

A link about namespaces:

http://ifeanyi.co/posts/linux-namespaces-part-1/

A nice little introduction to overlays/etc:

https://jvns.ca/blog/2019/11/18/how-containers-work--overlay...

And if you really want to learn how it all works, write your own "rubber-docker" in Python:

https://github.com/Fewbytes/rubber-docker
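To get a feel for the layering half, here's a tiny sketch of an overlay mount done by hand (run as root; the directory names are arbitrary):

    mkdir lower upper work merged
    echo "from the read-only base layer" > lower/base.txt
    mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merged
    echo "written to the top layer" > merged/new.txt   # ends up in upper/, lower/ is untouched
    ls merged/                                         # shows both base.txt and new.txt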


In general, the bottom layer is a small Linux distro (Ubuntu, Debian, Alpine), just enough to get you going.

Ideally you choose a base image that is getting security updates on regular basis, and that doesn't have significant changes over time for the same tag.

So e.g. `fedora` is bad base image, because one day you'll jump from Fedora 30 to Fedora 31. Likewise `ubuntu` isn't great. But `ubuntu:18.04` is the Long-Term Support release and so it'll be fairly stable.

In the context of Python packaging I've written a more detailed guide to choosing a base image: https://pythonspeed.com/articles/base-image-python-docker-im...


Well, you're always coming FROM something, unless you're coming FROM scratch, which is literally empty.

A popular base image is Debian Buster. What's in the Dockerfile?

https://github.com/debuerreotype/docker-debian-artifacts/blo...

Wow, simple, it just adds a "rootfs" archive. What's in there?

https://github.com/debuerreotype/docker-debian-artifacts/blo...

Aha. So what we have in rootfs.xz is essentially the output of a Debian system with all of those packages installed in it, and the minimal configuration needed to tie them together. No init system, no kernel, just a big filesystem full of the usual stuff an absolutely minimal Debian install would have.

Now, when you run FROM debian:buster, what you get is a base layer with enough to be useful. `apt-get` is there with a default repository list that works fine, and you can install things to your heart's content.


docker-slim also gives you useful info about the original container image in addition to minifying your container (you can actually run it without the minification part if you want). It will "reverse engineer" the Dockerfile for your fat/original container image and it will produce a detailed report about the fat/original image layers and what's in them (diffing between the layers is an upcoming feature). I'll be happy to do an overview if you are interested. Please ping me offline.


Until you're more comfortable with the container model, I recommend sticking with Ubuntu LTS images, such as ubuntu:18.04. That way if something breaks, you have a familiar environment to work with.

Once you have gathered a mental model, feel free to tune your images by basing them on something lighter.


Instead of starting with a bulky image and guessing at and removing unnecessary stuff at the end, I would prefer the opposite: only include the packages the executable needs in the first place.

I am happy with my current distroless + docker multi-stage build.


I wrote something similar but way simpler a while ago: https://github.com/ak-1/sackman


i'll be happy to answer any questions about the tool (i'm the main author) :-)


This looks pretty amazing, thanks for sharing. I've just been playing with this on a couple containers. The first went from 197MB to 71.5MB - not bad! I did some testing and it didn't work without including a few extra pieces but pretty painless.

The second container went from 176MB to 4.7MB!! I've not really tested that so there's a pretty good chance things aren't going to work too well in practice (but we'll see).

If it all continues as well as it's started then we'll definitely be using it.


Do you mind sharing a bit of information about your container images? What's the app language for both? What kind of application is it? How do you init your apps in the containers? docker-slim is definitely not perfect and there's lots of room for improvement. Can you ping me offline (on github/gitter, twitter or email, my email is kcq.public@gmail.com)?


Please add a better description of what the tool does, like "docker-slim removes all files which your container doesn't touch when running".

Considering CI/CD attacks these days, I spent a while checking whether this tool was some kind of joke trying to show that developers will download anything from the Internet.

PS: Thanks for this :)


Thank you for your feedback! It's very important for a tool like this to be clear and transparent about what it's doing. The docs definitely need to be improved. There's already a lot of documentation, enough for some info to get lost, yet at the same time not enough in other areas.

By the way, if you don't feel comfortable with the minification functionality you can still use docker-slim as a Docker image inspection and profiling tool (take a look at the report files it generates). Additional package level reporting is on the todo list. Image and Dockerfile linting functionality is coming soon too.


Would be nice to add support for native docker flags (-t for --tag, -f for --from-dockerfile, etc.)


This is a great idea! Thank you for your feedback. Using native docker flags as-is will definitely reduce friction and simplify the use. Do you mind creating a Github issue if you have a specific list of flags you'd like to see supported first?


Results are interesting. How are Go containers only 1.5MB, and Rust's ones 15MB? I would have expected Rust to be on par with Go (both compiled languages)


[flagged]


I see you, but your account is flagged as new and your comment grayed out.



