> But if we have tried to define 'compatible' in a rigorous way, I haven't seen it
My understanding (and the way I've versioned my projects) is that a major version denotes a stable API and behavior you can safely rely on. What functions are exported, what arguments they take, etc. That's a pretty easy thing to not break.
Things that _aren't_ covered are implementation details such as the location of files in the package, the reuse of objects internally, or other not explicitly-defined behavior.
A couple of examples:
1. In JS, you can import from sub-folders: `const c = require('somePkg/a/b/c')`, but that depends on the package's internal organization. That's not guaranteed across versions.
2. You can also add properties to an object. If we export a function that takes a request object and returns a response, we don't guarantee those are the same object. We _do_ ensure that the object returned from the function has properties x, y, z, but not that any extra properties you set before calling it will still be there (see the sketch below).
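To make example 2 concrete, here's a minimal, hypothetical sketch — `handle`, `x`, `y`, and `z` are invented names, not anyone's real API:

```js
// Hypothetical illustration of example 2: the contract covers the
// documented properties on the returned object, not extras the caller
// attached to the input.
function handle(request) {
  // Internally we may build a fresh object and copy only what we need --
  // an implementation detail that can change in any patch release.
  return {
    x: request.url,   // documented
    y: 200,           // documented
    z: 'ok',          // documented
  };
}

const req = { url: '/users', myCustomFlag: true };
const res = handle(req);

console.log(res.x, res.y, res.z);  // safe: these are guaranteed
console.log(res.myCustomFlag);     // undefined today, and never promised
```

Whether the function mutates and returns the same object or builds a new one is exactly the kind of detail a patch release is free to change.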
We would have this back-and-forth about how to version bug-fix releases: "Well, if someone was depending on this buggy behavior, this update will break their code." Ultimately, we said, "This is what we guarantee will work. For anything else, run your tests anytime you upgrade."
We're not perfect (all software has bugs), but this has worked pretty well.
The problem is that just about any change can break someone's use of the API. I have had situations where obscure bug fixes broke my app because I was depending on the behaviour of that bug.
The answer, IMO, is to just do a full test of your stuff after updating. I update many packages at once to make the most of my time.
Then the problem is with you. You wrote code that depended on a specific non-guaranteed feature (the bug) and then felt jilted because they changed that non-guaranteed feature. What you should have done was write defensive code and tests around it, so that when it changed your tests would catch the change and you could either hold off on upgrading or fix your code.
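For instance, one way to do that is a pin-down test that documents the quirk you depend on. A rough sketch, assuming a made-up dependency and quirk (`some-widget-lib` and `parseWidget` are not real):

```js
// Pin-down test for a non-guaranteed behaviour we rely on.
// The dependency and its quirk are hypothetical; the point is that the
// assumption is written down and an upgrade that changes it fails loudly.
const test = require('node:test');
const assert = require('node:assert');
const { parseWidget } = require('some-widget-lib'); // hypothetical package

test('pin: parseWidget still trims trailing whitespace (undocumented)', () => {
  // Our app depends on this quirk. If a patch release "fixes" it,
  // this test flags the upgrade before production does.
  assert.strictEqual(parseWidget('abc  ').name, 'abc');
});
```

Run it as part of the "run tests anytime you upgrade" step; a failure is the signal to either hold the upgrade or adapt your code.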
I don't harbor any ill will toward the library developers. I have my versions locked, so these unexpected surprises don't happen. In this case it would have been extremely difficult for me to notice that the behaviour I saw was not intended or supported by the developers.
My point is not that this is an unsolvable problem, but that it's not a good idea to go "semver says this is non-breaking, let's just chuck it in production".
I once wrote a Python module that applied diff patches as part of the deployment process. This let us 'fix' libraries we depended on, without having to maintain a full fork, until the maintainer fixed the bug. If a new version came down, we could test the patch and update it if needed.
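The original was a Python module, but the idea ports anywhere. Here's a rough JavaScript analogue that applies a vendored unified diff to an installed npm dependency during deploy — the package name and patch path are invented, and it assumes the `patch` CLI is on the PATH:

```js
// Apply a vendored diff to a dependency after install, so we get the fix
// without maintaining a full fork. Typically wired up as a "postinstall"
// script; drop the patch once upstream ships the real fix.
const { execFileSync } = require('node:child_process');
const fs = require('node:fs');
const path = require('node:path');

const target = 'node_modules/some-lib';                                  // hypothetical package
const patchFile = path.resolve('patches/some-lib-fix-null-crash.patch'); // hypothetical patch

if (fs.existsSync(target)) {
  // -p1 strips the leading path component (git-style diff);
  // -d applies the patch inside the installed package directory.
  execFileSync('patch', ['-p1', '-d', target, '-i', patchFile], { stdio: 'inherit' });
  console.log(`applied ${path.basename(patchFile)} to ${target}`);
}
```

(The npm ecosystem's patch-package automates essentially this workflow.)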