I would rather run software directly than wrap it in something (and in this case, the wrapper isn't thin!), unless there are complex system dependencies. And that may be a smell of its own. Depends on the application.
But that's a personal preference. I was asking how Docker makes things slower by default. Does the mere act of using Docker mean software delivery will necessarily slow down?
Adding an additional layer also means that layer needs to be managed at all times, and additional setup is required to use it. This starts with installing Docker-related tooling and extends to extra work to access logs inside containers, additional infrastructure to manage and maintain (e.g. a private registry), Docker's compatibility between versions (which it isn't very good at maintaining), etc.
The build/deployment time difference is maybe the least relevant, but it's there most of the time, because Docker performs more work than a simple zip+scp plus an scp of the version to an archive somewhere. Docker needs to copy far more than just the application files. An extra copy of ~100MB of data (OS + required env) during deployment, when the application files are only ~1-2MB, tends to add quite a few seconds to the process, although how much it matters depends on network speed of course. For example, on my modest connection it'd be ~8-10 seconds vs <1 second.
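Roughly, the two paths I'm comparing look like this (host names, paths, and image tags here are made up for illustration):

    # zip + scp: only the ~1-2MB of application files move over the wire
    zip -r myapp-v42.zip build/
    scp myapp-v42.zip deploy-host:/srv/myapp/releases/
    scp myapp-v42.zip archive-host:/backups/myapp/

    # Docker: the image also drags along the base OS + runtime layers,
    # so a cold pull can be ~100MB instead of ~1-2MB
    docker build -t registry.example.com/myapp:v42 .
    docker push registry.example.com/myapp:v42
    ssh deploy-host 'docker pull registry.example.com/myapp:v42'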
There are of course great reasons to use Docker, such as a larger team that needs a common environment setup, or languages that don't have a great dependency management story (e.g. builds that aren't transferable between systems), but it is something "extra" to maintain.
Sure, though in the grand scheme of things I wouldn't call a few seconds a legitimate slowdown. I just really struggle to buy into the argument that "non-Docker" is superior and that introducing Docker is a problem. It's _another_ way to do deployments, and it's not strictly worse. There are tradeoffs on both sides, although I would argue Docker has far fewer than just using systemctl and SSH.
> Adding an additional layer also means that layer needs to be managed at all times, and additional setup is required to use it. This starts with installing Docker-related tooling and extends to extra work to access logs inside containers, additional infrastructure to manage and maintain (e.g. a private registry), Docker's compatibility between versions (which it isn't very good at maintaining), etc.
Docker is available on every major distribution, and installing it once takes seconds. Accessing logs (docker logs mycontainer) takes just as long as with systemd (journalctl -u myservice). Maintaining a registry is optional; there are dozens of one-click SaaS services that give you a registry instantly, many of them free. Besides, I'd consider a registry a significant time-saver in its own right, because it lets you properly track builds.
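To make the build-tracking point concrete, something like this is all it takes (the registry URL and tag scheme are made up):

    # Push every build under an immutable tag; the registry itself becomes
    # your artifact history, with no extra bookkeeping needed
    docker build -t registry.example.com/myapp:git-abc1234 .
    docker push registry.example.com/myapp:git-abc1234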
> Docker needs to copy far more than just the application files. An extra copy of ~100MB of data (OS + required env) during deployment
This is only partially true. Images are layered, and if the last thing you do is copy your binary into the image (standard Docker practice), then deployment can take essentially the same time, since only one new layer (roughly the size of the application) is downloaded. Only on brand-new machines (an irrelevant category to consider) is it fully true.
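As a minimal sketch of what I mean (the base image, paths, and registry are assumptions, not recommendations):

    # A Dockerfile whose base image layers get cached on the deploy host,
    # with a small final COPY layer that is the only part changing per release
    printf '%s\n' \
      'FROM debian:bookworm-slim' \
      'COPY build/ /app/' \
      'CMD ["/app/myapp"]' > Dockerfile
    docker build -t registry.example.com/myapp:v43 .
    # Only the changed COPY layer (~the size of the app) is actually transferred;
    # the cached base layers are skipped on push and on pull
    docker push registry.example.com/myapp:v43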
I think the point is that not using Docker is easier, simpler, cheaper and better than using Docker.
Unless it is not, in which case you should use Docker.
But many (all?) of us have had the experience of a manager insisting on some "new thing" (LLMs are the current fad of the day) and claiming that if we are not using it we are falling behind. That is not true, as we all know.
It is very hard for the money people to manage the tech stack, but they need to; it is literally their job (at the highest level). We desperately need more engineers who are suited to it (I am not!) to go into management.
But this assumes that Docker provides _no_ time-saving advantages, which is simply false. The person who recently responded to me noted that themselves. There are several scenarios where Docker is superior, especially in cases with external dependencies.
My point is that the universal argument that Docker is inferior to manually copying binaries is flawed. It's usually put forward by people who fit into the narrow scenario where that happens to be true. If we can agree that both options have trade-offs, and that a team should pick the option that best fits its constraints, then that's pretty much where most of the world already sits. There are extremists on both sides, but their views are just that: extreme.
I'm confused by this reasoning. How does Docker make things slower by default? Why would you look favorably on a company that doesn't use it?
Or at least with the second one, you get automatic artifact tracking for easier rollbacks.
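Something like this, assuming each deploy is a tagged image in a registry (names and tags are hypothetical):

    # Rolling back is just running the previous tag again,
    # instead of hunting down an old zip somewhere
    docker pull registry.example.com/myapp:v42
    docker stop myapp && docker rm myapp
    docker run -d --name myapp registry.example.com/myapp:v42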