B. Without swap, the culprit is immediately identified and killed. Your system is perfectly usable, aside from a killed process.
This is not my experience. I ran my desktop for a few years on this theory, and in practice I found the system behaviour far worse when it ran out of memory: it would lock up completely, just as you describe for case A. With swap enabled I would still eventually reach an unresponsive state, but not immediately: there was a gradual slowdown during which I could take action to resolve the problem. I believe this is because of the thrashing of the file pages backing the running executables (point 3 in the article). In practice, if you want behaviour B you need to run something like earlyoom, which kills processes before the kernel starts thrashing the disk.
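For the curious, the core idea behind earlyoom fits in a few lines. Here's a toy sketch, not earlyoom itself (which is far more careful about whom it kills); the 10% threshold and the use of the kernel's own /proc/<pid>/oom_score to pick a victim are illustrative choices:

    #!/usr/bin/env python3
    # Toy earlyoom-style watchdog: kill a process *before* the kernel
    # starts thrashing. Needs root to kill arbitrary processes.
    import os, signal, time

    THRESHOLD_PCT = 10  # illustrative: act when <10% of RAM is available

    def meminfo():
        # /proc/meminfo lines look like "MemAvailable:  123456 kB"
        with open("/proc/meminfo") as f:
            return {line.split(":")[0]: int(line.split()[1]) for line in f}

    def biggest_victim():
        # Pick the PID the kernel itself considers the best OOM candidate,
        # using the kernel-maintained /proc/<pid>/oom_score.
        best_pid, best_score = None, -1
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/oom_score") as f:
                    score = int(f.read())
            except (FileNotFoundError, PermissionError):
                continue  # process vanished or is off-limits
            if score > best_score:
                best_pid, best_score = int(pid), score
        return best_pid

    while True:
        m = meminfo()
        if m["MemAvailable"] * 100 // m["MemTotal"] < THRESHOLD_PCT:
            victim = biggest_victim()
            if victim:
                os.kill(victim, signal.SIGKILL)  # SIGKILL: no chance to allocate more
        time.sleep(1)

The real earlyoom additionally protects init, itself, and anything you whitelist, which is exactly the hard part.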
There's also B-2: the kernel kills an unrelated process that happened to request a bit of RAM at the moment of OOM. The system becomes responsive for a while, and then the kernel starts looking for a new scapegoat, which might be the same as the old scapegoat that got automatically respawned by your monitoring tool. This poor program keeps crashing for no apparent reason and you're tearing your hair out trying to find out why. :(
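If you suspect this is what's happening to a crashing program, the kernel does leave a paper trail: every OOM kill is logged to the kernel ring buffer. A quick way to check, assuming a systemd box where journalctl -k shows kernel messages:

    #!/usr/bin/env python3
    # Scan kernel messages for OOM kills, so a "mysteriously" dying process
    # can be traced back to the OOM killer. Assumes journalctl is available.
    import subprocess

    log = subprocess.run(["journalctl", "-k", "--no-pager"],
                         capture_output=True, text=True).stdout
    for line in log.splitlines():
        # OOM reports contain lines like "Out of memory: Killed process ..."
        if "Out of memory" in line or "oom-kill" in line:
            print(line)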
Yes, in my experience those are the actual trade-offs. With swap, things slow down; that alerts you to the problem, and you can go manually kill the right thing before the kernel picks for you. Without swap, the kernel "randomly" kills the wrong thing without fail, often leaving a system you have to reboot to get back into a sane state, which has spawned a small industry of trying to tune the OOM killer to never do that.
Back in my day (which was a long time ago) we did tune the OOM killer in prod to hit only the right processes first (e.g. apache httpd, or whatever software was deployed on that server). That usually led to self-recovering behaviour: one bad request that caused an OOM would get its own process killed, and the server would recover. That was only in prod, though, where we understood exactly what software ran on which instances.
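The modern knob for this is /proc/<pid>/oom_score_adj, which ranges from -1000 (never kill) to +1000 (kill first); the old oom_adj interface is deprecated. A rough sketch of that prod setup, with the process names and values purely illustrative (run as root):

    #!/usr/bin/env python3
    # Bias the OOM killer toward the app and away from essentials by
    # writing /proc/<pid>/oom_score_adj. Names/values are illustrative.
    import os

    ADJUSTMENTS = {
        "httpd": 500,    # prefer killing the web server workers
        "sshd": -1000,   # never kill our way back into the box
    }

    def pids_of(name):
        # Match against the kernel's short process name in /proc/<pid>/comm.
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/comm") as f:
                    if f.read().strip() == name:
                        yield pid
            except FileNotFoundError:
                continue  # process exited while we were scanning

    for name, adj in ADJUSTMENTS.items():
        for pid in pids_of(name):
            with open(f"/proc/{pid}/oom_score_adj", "w") as f:
                f.write(str(adj))

These days you'd more likely set OOMScoreAdjust= in the service's systemd unit so the bias survives restarts, but the mechanism underneath is the same.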
So I'd tend to suggest running with little to no swap in prod and tuning the OOM killer, because you know which processes are likely to be the issue there. On a desktop/laptop, it's better to have swap, because the random workloads you throw at it make tuning the OOM killer impossible.
I appreciate the argument that you should still have some swap in prod for paging out under-utilized anonymous memory, but I'd like to see some solid numbers on that versus running swapless, and to weigh them against the fiddliness of managing the swap files. I suspect that in the majority of cases it makes no measurable difference in actual performance, and that you shouldn't be running so close to the edge that it would matter. But measure for your own situation.
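At least the "is swap doing anything" half of those numbers is cheap to get: the kernel exports cumulative swap-in/swap-out page counts in /proc/vmstat (the same data behind the si/so columns of vmstat). A minimal sampler, with the 5-second interval an arbitrary choice:

    #!/usr/bin/env python3
    # Sample swap-in/swap-out page counts from /proc/vmstat to see whether
    # swap is actually being touched under your real workload.
    import time

    def swap_counters():
        with open("/proc/vmstat") as f:
            stats = dict(line.split() for line in f)  # "name value" pairs
        return int(stats["pswpin"]), int(stats["pswpout"])

    prev = swap_counters()
    while True:
        time.sleep(5)
        cur = swap_counters()
        print(f"pages swapped in: {cur[0] - prev[0]:6d}  "
              f"out: {cur[1] - prev[1]:6d}  (last 5s)")
        prev = cur

If those stay at zero through your peak load, swap is costing you nothing but also buying you nothing beyond the slow-degradation safety net discussed above.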
I was also running in a mostly-HDD era, not SSD, so things may have changed. If anything, that suggests less of a penalty for swapping now, and that the right answer for desktop/laptop loads is to use swap to avoid the randomness of the OOM killer. It may also argue for using more swap in prod, since a swapping system may degrade and recover more gracefully these days instead of the absolute catastrophe that swapping to an HDD was back in the day.