The problem with this is that if Docker Inc goes under, you can say goodbye to Docker Hub: https://hub.docker.com/
Sure, there are alternative repositories and for your own needs you can use anything from Sonatype Nexus, JFrog Artifactory, GitLab Registry or any of the cloud-based ones, but Hub disappearing would be 100 times worse than the left-pad incident in the npm world.
Thus, whenever Docker Inc releases a new statement about some paid service that may or may not get more money from large corporations, I force myself to be cautiously optimistic, knowing that the community of hackers will pick up the slack and work around those tools on a more personal scale (e.g. Rancher Desktop vs Docker Desktop). That said, it might just be a Stockholm Syndrome of sorts, but can you imagine the fallout if Hub disappeared?
Of course, you should never trust any large corporation unless you have the source code that you can build the app from yourself. For example, Caddy v1 (a web server) essentially got abandoned with no support, so the few people still using it had to build their own releases and fix the bugs themselves - which was only possible because the source code was available - before eventually migrating to v2 or something else.
Therefore, it makes sense to always treat external dependencies - be they services, libraries or even tools - as if they're hostile. Of course, you don't always have the resources to do that in depth, but it's encouraging to see, for example, that VS Code is not the only option and that we also have VSCodium (https://vscodium.com/).
Docker Hub going down would be a disaster for sure, but I consider "pull image/library from a 3rd party hub over the internet on every build" to be an anti-pattern (which is considerably worse with npm, compared to Docker). That said, if this is where the value is being provided, perhaps they ought to charge for this service? I guess it's difficult because it's easily commoditized.
> but can you imagine the fallout if Hub disappeared?
I wish that would actually happen - not forever, but if it went down for a day or two with no ETA for a fix, the thousands of failed builds/deploys would force organizations to rethink their processes.
I think Go's approach to libraries is the way forward - effectively having a caching proxy that you control. I know apt (the package manager) also supports a similar caching scheme.
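For illustration, a rough sketch of what that looks like in practice - the hostnames here are made up, and Athens/apt-cacher-ng are just two common choices for the proxy itself:

    # Go: route module downloads through a proxy you control,
    # falling back to the source if the proxy is unavailable
    go env -w GOPROXY=https://goproxy.internal.example.com,direct

    # apt: point the package manager at a caching proxy such as apt-cacher-ng
    echo 'Acquire::http::Proxy "http://apt-cache.internal.example.com:3142";' \
      | sudo tee /etc/apt/apt.conf.d/01proxy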
Large orgs started hitting the rate limits since many devs were coming from the same IP. Most places probably put in a proxy that caches to a local registry.
That's what we did, put a proxy in front that caches everything. Now that Docker Desktop requires licensing, we're going down the road of getting everyone under a paid account.
I'm sure Rancher is great for personal desktop use, but there's no reason large companies can't pay for Docker.
Or even small. At work, I advised that we just pay for Docker Desktop. We got it for free for a long time. Our reason for not paying is that we're an Artifactory shop, so their Docker Enterprise offering wasn't really attractive to us. But we're easily getting $5/dev/mo worth of value out of Docker Desktop.
And I don't really see this as an open source bait and switch, either. Parts of Docker are open source but Docker Desktop was merely freeware.
That said, I believe in healthy competition, and so it was quite worrisome to me that Docker Desktop seemed to be the only legitimate game in town when it came to bringing containerization with decent UX and cross-platform compatibility to non-Linux development workstations. So I'm happy to see Rancher Desktop arrive on the scene, and very much hope to see the project gain traction. Even if we stay with Docker, they desperately need some legitimate competition on this front in order to be healthy.
> but can you imagine the fallout if Hub disappeared?
> I wish that would actually happen - not forever - if it'd go down for a day or 2 with no ETA for a fix
Do people not run their own private registry with proxying enabled? If Docker Hub went down at this point, I think my company would be fine for _months_. Only time we need to hit Hub is when our private registry doesn't have the image yet.
You can already cache Docker Hub very easily via the official registry container. In fact, given the number of builds, it would be foolish not to do this, to avoid GBs of downloads all the time.
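For anyone who hasn't set it up, a minimal sketch with the official registry image - the mirror hostname is hypothetical, and the proxy settings can also live in the registry's config file instead of env vars:

    # run the registry image as a pull-through cache for Docker Hub
    docker run -d --name hub-mirror -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2

    # point each Docker daemon at the mirror (assumes TLS in front of it;
    # a plain-HTTP mirror may also need an insecure-registries entry)
    echo '{ "registry-mirrors": ["https://hub-mirror.internal.example.com"] }' \
      | sudo tee /etc/docker/daemon.json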
> Hub disappearing would be a 100 times worse than the left pad incident in the npm world
This is really overdramatic. If Docker Inc. went out of business and Docker Hub was shut down, the void would be filled very quickly. Many cloud providers would step in with new registries. Also, swapping in a new registry for your base images is really easy. Not to mention the tons of lead time you'd get before Docker Hub goes down to swap them. Maybe they'd even fix https://github.com/moby/moby/issues/33069 on their way out, so we could just swap out the default registry in the config and be done with it.
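In practice the swap mostly looks like retagging plus a find-and-replace, something like this (registry.example.com standing in for whatever replacement you'd pick):

    # copy a base image you depend on into a registry you control
    docker pull alpine:3.14
    docker tag alpine:3.14 registry.example.com/mirror/alpine:3.14
    docker push registry.example.com/mirror/alpine:3.14
    # then update the FROM lines in your Dockerfiles to the new location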
> Also, swapping in a new registry for your base images is really easy.
This is the exact problem! Sure, MySQL, PHP, JDK, Alpine and other images would probably be made available, but what about the other images you might rely on, whose developers might simply no longer care about them or might not have the free time to reupload them to a new place?
Sure, you should be able to build your own from source and maintain them, but in practice there are plenty of cases where non-public-facing tools don't need updates and are good for the one thing that you use them for. Not everyone has the time or resources to familiarize themselves with the inner workings of everything in their stack, especially when they have other circumstances to deal with, like business goals to be met.
In part, that's why I suggest that everyone get a copy of JFrog Artifactory or a similar solution and use it as a caching proxy in front of Docker Hub or any other registry. That's what you should be doing in the first place anyway, to avoid the Docker Hub rate limits and speed up your builds, instead of downloading everything from the internet every time.
Otherwise it's like saying that if your Google cloud storage account gets banned, you can just use Microsoft's offering - when it's the actual data that was lost that's the problem, everything from your Master's thesis to pictures of you and your parents. Perhaps that's a pretty good analogy, because the reality is that most people don't, or simply can't, follow the 3-2-1 rule of backups either.
The recent Facebook outage cost millions in losses. Imagine something like that for CI/CD pipelines - a huge number of companies across the industry would not be able to deliver value, work everywhere would grind to a halt, and shareholders wouldn't be pleased.
Of course, whether we as a society should care about that is another matter entirely.
Its only job is to run containers on a particular schedule, no more, no less. There are very few attack vectors for something like that, considering that it doesn't talk to the outside world, nor does it process any user input data.
Then again, it's not my job to pass judgement on situations like that, merely acknowledge that they exist and therefore the consequences of those suddenly breaking cannot be ignored.
If you depend on it, you should keep a local copy around that you can host if needed.
Things get abandoned all the time. When you make them part of your stack, you're now on the hook for keeping them alive yourself, until the point at which you free yourself from that burden.
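The low-tech way to do that for images is just to archive them yourself - a sketch, with a made-up image name:

    # export the image (all layers plus metadata) to a tarball you keep
    docker pull somevendor/internal-tool:1.2.3
    docker save -o internal-tool-1.2.3.tar somevendor/internal-tool:1.2.3

    # later, restore it on any host without involving a registry at all
    docker load -i internal-tool-1.2.3.tar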
If only we could have a truly distributed system for storing content-addressed blobs ... perhaps using IPFS for Docker images. That way you could swap the hosting provider without having to update the image references.
I'd love for others who are more knowledgeable to chime in, since this feels close to the logical end state for non-user-facing distribution.
At a protocol level, content basically becomes a combination of a hash/digest and one or more canonical sources/hubs.
This allows any intermediaries to cache or serve the content to reduce bandwidth and increase locality, and it could have many different implementations for different environments to take advantage of local networks as well as public networks, in a similar fashion to recursive DNS resolvers. That way you could transparently cache at the host level as well as at e.g. your local cloud provider, to reduce latency/bandwidth.
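Worth noting that half of this arguably already exists: image digests are content-addressed, so the same sha256 identifies the same image regardless of which registry happens to serve it - what's missing is decoupling the content from a single canonical source. Roughly:

    # print the content-addressed digest of an image you already have locally
    docker image inspect --format '{{index .RepoDigests 0}}' alpine:3.14

    # pulling by digest is exact - any registry holding the same bytes
    # serves the same sha256 (placeholder digest below)
    docker pull alpine@sha256:<digest-from-above>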
I'm not super well versed, but I thought BitTorrent's main contribution was essentially the chunking and the distributed hash table. There is perhaps a good analog in the different layers of a Docker image.
Hub disappearing would be the best thing that happened to Docker in years. People really shouldn’t be running the first result from Hub as root on their machines.
I miss a version of hub with _only_ official images.
Given that it is extremely trivial to run your own container registry, I think the focus on this as some great common good is overstated. As it is, 99% of the containers on it are, for lack of a better word, absolute trash, so it is not very useful as it stands.
VSCodium doesn't add anything other than building the VS Code source without telemetry and providing a real FOSS build of VS Code. If VS Code development stopped, then VSCodium would stop too.
> The problem with this is that if Docker Inc goes under, you can say goodbye to Docker Hub: https://hub.docker.com/
So you think that Docker Hub is Docker Inc's entire value proposition?
And if Docker Inc is nothing more than a glorified blob storage service, how much do you think the company should be worth?
Oh, not at all! I just think that it's the biggest Achilles' heel around Docker at the moment, one that could have catastrophic consequences for the industry:
- you can no longer use your own images that are stored in Hub
- because of that, you cannot deploy new nodes, new environments or really test anything
- you also cannot push new images or release new software versions; what you have in production is all there is
- the entire history of your releases is suddenly gone
I don't pass judgements on the worth of the company, nor is there any actual way to objectively decide how much it's worth, seeing as they also work on Docker, Docker Compose, Docker Swarm (maintenance mode only though), Docker Desktop and other offerings that are of no relevance to me or others.
Either way, I suggest that anyone have a caching Docker registry in front of Docker Hub or any other cloud-based registry, for example the JFrog Artifactory one. Frankly, you should be doing that with all of your dependencies, be it Maven, npm, NuGet, pip, gems etc.
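For completeness, the client side of that is usually a single config line per ecosystem - for example (the URLs are placeholders for whatever your proxy actually exposes):

    # npm: point installs at the caching proxy instead of registry.npmjs.org
    npm config set registry https://proxy.example.com/npm/

    # pip: same idea for PyPI
    pip config set global.index-url https://proxy.example.com/pypi/simple/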