It seems this is basically analyzing the running application inside the container and only packaging what's needed to make it work, at a more granular level than OS-level packages.
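(For context, the basic invocation looks something like this; the image name is hypothetical, and the slimmed output is written to a new image, e.g. my-app.slim:)

    # build a minified image by observing the app at runtime;
    # --http-probe sends HTTP requests to exercise the app during analysis
    docker-slim build --http-probe my-app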
Interesting concept. I wonder how it's expected to cover 100% of the app usage if certain things aren't triggered during the analysis phase.
Yes, the coverage might not always be ideal. The tool will eventually add static analysis to improve coverage; for now it relies on you to create custom probes if you need better coverage...
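(For instance, if you already know the analysis will miss something, docker-slim lets you force-include it; the paths below are made up for illustration:)

    # keep files the dynamic analysis might never touch
    docker-slim build --http-probe \
        --include-path /etc/ssl/certs \
        --include-path /app/templates \
        my-app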
Is that any different than anything else? Compilers, asset pipelines, and build tools all work the same way: they make assumptions about how a system works and try to optimize on those assumptions. Test your app, run QA, etc. Most licenses make no promises that the software will work, so this tool doesn't seem any different.
My point is merely that this is quite a significant risk. If you fail to exercise 100% of your code paths via functional testing (so you need comprehensive positive and negative functional testing, which is pretty rare in my experience), you risk docker-slim producing an image that breaks. You've got to think about exercising every single possible interaction with every other component running on the OS. That's no small feat.
Think about it. That's not just 100% of _your_ code paths, that's 100% of the code paths you could possibly ever trigger in any library you consume, and you have to think about what might influence those circumstances. There are all sorts of angles to consider: does DNS response latency matter? Does time of day matter? Does IPv4 vs. IPv6 matter? (The answer to that last one is likely yes, so you might need to run the functional tests from both address stacks.)
docker-slim is a neat idea, but it seems to come with significant risk.
Yes. In practice test coverage tends to be well below 100%. This is fine if you're just running tests, but if you're deciding which parts of your package to prune based on this sort of analysis, it's very likely to cause problems.
Yes, it is a potential problem, but in most real-world cases it's good enough, and if you have decent test coverage to begin with (which you should have :-)) you can run those tests while the container is being minified and again afterwards to confirm that it's working as expected.
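(A rough sketch of that workflow; the test-runner script and port are hypothetical:)

    # minify, letting the HTTP probe exercise the app during analysis
    docker-slim build --http-probe my-app
    # run the slimmed image and point the existing test suite at it
    docker run -d --name app-slim -p 8080:8080 my-app.slim
    ./run-tests.sh http://localhost:8080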
Isn't it more or less the same with or without minification? If one wants to be sure, one has to have a good test suite either way. In this case one would simply run those tests against the minified image.
There is a stage during the analysis where you interact with the container, so you can trigger the resource usage that you want it to pick up on.
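(That's what the --continue-after flag controls, as I understand it; in "enter" mode the tool pauses so you can exercise the app by hand, then finishes the analysis when you press enter:)

    # pause after the container starts so you can interact with the app manually;
    # press enter when done and docker-slim builds the minified image
    docker-slim build --continue-after=enter my-app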