I think you've both got a piece of it. I've written PC software, embedded software, and mobile software, and my gut feeling (without data) is that shipped software quality is inversely proportional to update frequency and ease of updating. It had nothing to do with how smart or skilled the developers and testers were, and nothing to do with management's priorities. Update frequency and ease changed immensely once we could feasibly deliver patches over the Internet. Before easy updates, you'd actually quality-check every corner of the application; you'd actually fix those P2s and P3s. You'd do exploratory testing off the test-plan rails to find things. There was even a concept of "done" in software, as in, you eventually stopped constantly jamming in features and tweaking the UI in maddening ways.
Now it's just "LOL, ship it, users will deal with it until the next release!" Now it's "Run experiments on N% of users in prod and use end users for A/B testing. If something's broken, we'll push an update!"
In several industries, it's fully expected that v1.0 of the application simply won't work at all. It's more important for these companies to hit the deadline with non-working software than to miss it and ship something that works! Because who cares? Users will bear the cost and update.