> The whole idea that you'd need a container like environment to install an application
Simple example. App A wants tensorflow 1.10, CUDA 8, and python 3.7. App B wants tensorflow 2.2, CUDA 10, and python 3.8. You want App A and B installed at the same time but the two versions of tensorflow are neither forward nor backward compatible. The two pythons will fight with each other for who gets to be "python3". How do you deal with this without containerization?
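To make the conflict concrete, here's a minimal sketch using the third-party `packaging` library (the exact pins are made up for illustration): no single shared environment can satisfy both apps' tensorflow requirements at once.

```python
# Minimal sketch of why App A and App B can't share one environment.
# Requires the third-party "packaging" library (pip install packaging).
from packaging.specifiers import SpecifierSet
from packaging.version import Version

app_a = SpecifierSet("==1.10.0")   # App A pins tensorflow 1.10
app_b = SpecifierSet("==2.2.0")    # App B pins tensorflow 2.2
combined = app_a & app_b           # what one shared environment would have to satisfy

# No release of tensorflow can satisfy both pins at the same time.
for candidate in ("1.10.0", "2.2.0"):
    print(candidate, Version(candidate) in combined)   # both print False
```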
I don't think it violates the principles of open source at all, it's just making sure each application gets the exact versions of libraries it wants without messing up the rest of your system.
> obviously python 3.8 should be backwards compatible with 3.7
I think so too, but HN downvoted me to oblivion the last time I advocated for that. That's part of the problem, I guess: the dev community doesn't actually agree that 3.8 should be backwards compatible with 3.7.
Would you argue the same if they were called Python v37.0 and v38.0? Let's imagine that they are and move on. The problem is using the alias "python3" as if it were the executable name.
At the risk of getting kicked off HN for all these downvotes for trying to have a discussion ... (thanks free speech haters, enjoy your echo chamber after I'm kicked off)
My understanding of semantic versioning is that:
- (x+1).0 and (x).0 don't necessarily need to be able to run code written for the other
- 3.(x-1) doesn't need to be able to run code written for 3.(x)
- 3.(x+1) should always run code written for 3.(x)
Hence, you should be able to point "python3" at the latest subversion of 3 that is available and continually upgrade from 3.6 to 3.7 to 3.8; as long as you have a higher subversion of 3, you shouldn't break any code written for an earlier subversion of 3. That's why it is supposed to be okay to have them all symlinked to "python3". If a package install candidate thinks the currently running "python3" isn't recent enough for the feature set it needs, it can ask the dependency manager to upgrade "python3" to the latest 3.(x+n), with the understanding that this won't break any other code on the machine.
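As a rough sketch of that rule (purely illustrative; Python itself makes no such promise), semver-style compatibility would look something like this:

```python
# Sketch of the semver-style rule above, treating versions as (major, minor)
# tuples: same major version, equal-or-newer minor version.
def satisfies(installed: tuple, required: tuple) -> bool:
    """True if 'installed' should run code written for 'required'."""
    return installed[0] == required[0] and installed[1] >= required[1]

assert satisfies((3, 8), (3, 7))       # 3.(x+1) should run code written for 3.(x)
assert not satisfies((3, 6), (3, 7))   # 3.(x-1) doesn't need to run 3.(x) code
assert not satisfies((4, 0), (3, 9))   # a new major version may break things
```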
Unfortunately that isn't true between 3.7 and 3.8. There are lots of cases where upgrading to 3.8 will break packages and that violates semantic versioning.
Python doesn't use semantic versioning, so you can't really expect them to follow it. As GP insinuated, if you just pretend that 3.7 is 37, and 3.8 is 38, you'll pretty much be able to apply semver thinking, though.
Right, so because Python doesn't cooperate, we end up needing containerization, which is what I was trying to explain in GGGGP. Because apt will upgrade 3.7 to 3.8 and unfortunately break anything that was written for 3.7 (and vice versa).
An app needs to be able to say "I'm ok with python3>=3.7" and be fine if it gets 3.8, 3.9, or 3.20, if we want to be able to run it without a container. (And likewise for all its other dependencies besides python)
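Something like the following runtime guard is all the app should have to care about (a sketch, assuming the app just checks whatever interpreter "python3" handed it):

```python
# Minimal sketch of an app expressing "I'm ok with python3>=3.7" at runtime;
# any interpreter that passes this check is acceptable, be it 3.8, 3.9, or 3.20.
import sys

if sys.version_info < (3, 7):
    sys.exit("this app needs python3 >= 3.7, found %d.%d" % sys.version_info[:2])
```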
If appA needs python3.6, then call it with `python3.6`, not `python3`. It can exist in your /usr/bin in parallel with 3.7.
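For example (a hedged sketch; the /usr/bin paths and entry scripts are made up for illustration), a launcher can just name each interpreter explicitly and never touch "python3":

```python
# Launch each app with the exact interpreter it wants; both interpreters
# coexist under /usr/bin and "python3" is never involved.
import subprocess

subprocess.run(["/usr/bin/python3.6", "appA/main.py"], check=True)  # App A on 3.6
subprocess.run(["/usr/bin/python3.7", "appB/main.py"], check=True)  # App B on 3.7
```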
The standard Python used by your distribution is what "python3" points to. I think that's currently the way it's done.