This sounds right to me. As a DBA, containers look like a nightmare. I'm employed as an absolute expert in my product. A developer may know how to use Docker, but are they an expert? Now...
* Who is going to look after the middle ground when the database is in the container?
* Who is going to be responsible for rewriting enterprise tools to discover those instances to gather metrics? Because none of the traditional methods (WMI, registry keys, etc.) are going to work. You've just broken SCCM, ServiceNow, and everything else under the sun.
* Who owns the patching? Because WSUS can't discover it and isn't going to be able to patch inside a container.
* Who owns the backups? You know backups are complicated, right? They're not just an on/off switch. You have to schedule the backup, but also make sure you're scheduling the backups in a standard way across your hundreds of hosts (now containers), then validate that those backups are actually being taken, and test those backups regularly. Developers couldn't care less about this stuff - it's someone else's problem - my problem - until it's in a container and then nobody is going to do it.
* And when something breaks in between, and the business suffers a massive loss of data, who are they going to sue? A liability-free open source project? I don't think so.
There's more to being a DBA than just stuffing it in a container and saying, "she'll be right mate".
These are all valid concerns, but none seem specific to databases, and - for companies that have moved even parts of their infrastructure to containers - they have evidently been deemed acceptable.
(Also, while the tools you mentioned are all Microsoft ones, the same issues apply to Linux-based containers.)
Most of your arguments come down to "this doesn't fit in my world, where Windows is king, so it won't work for me". That, however, is not a problem with containers.
While I don't consider myself a pure DBA, I do know Postgres quite well, and I manage quite a few instances, both "classic" deploys in a VM and containerized ones. I was the one who created the default Postgres setup/image/config that our devs use; when it's used correctly and as documented, managing it in production is exactly the same as managing a normal instance.
For the devs it's simple: their local env is a checkout of a sample env. They copy that into their new project, run docker-compose up, and they have a database running with pretty much the same config they would get in test, acceptance, and production. No surprises; we both know what to expect.
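To give an idea, the workflow looks roughly like this (a sketch; the directory names are made up, and the compose file is whatever the sample env ships with):

```sh
# Hypothetical names: the sample env is just a directory containing a
# docker-compose.yml and our documented Postgres config.
cp -r sample-db-env/ my-new-project/db/
cd my-new-project/db/
docker-compose up -d        # postgres comes up with the same config as test/acc/prod
psql -h localhost -U app    # connect and start developing
```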
Backups? Still the same. Patches? I tell my config management to pull a new postgres image on the servers and restart the db containers during a maintenance window. This actually makes it a lot easier than updating the non-containerized services.
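Roughly, what the config management run boils down to on each host is something like this (a sketch; the `db` service name is a placeholder):

```sh
# During the maintenance window, per host:
docker-compose pull db                           # fetch the updated postgres image
docker-compose up -d db                          # recreate the db container from it
docker-compose exec db pg_isready -U postgres    # sanity check before moving on
```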
> and the business suffers a massive loss of data
This scenario should be recoverable in the first place, and that should be tested on a regular basis. I'm actually setting up a process to automatically verify database recovery using containers, which makes stuff like this a lot easier and more convenient: spin up a container, restore the backup into it, run a full vacuum analyze and pg_check, select counts from every table, select random records from every table, and if possible spin up a test instance of the application (again, very easy if that also runs in a container) so we can run unit tests against the restored database.
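For illustration, the core of such a check could look something like this (a sketch, not our actual script; it assumes custom-format pg_dump backups under /backups, and `appdb` stands in for whatever the dumped database is called):

```sh
# Throwaway container just for the restore test
docker run -d --name restore-test \
  -v /backups:/backups:ro \
  -e POSTGRES_PASSWORD=throwaway \
  postgres:15
until docker exec restore-test pg_isready -U postgres >/dev/null 2>&1; do sleep 1; done

# Restore the latest backup; --create recreates the database from the dump
docker exec restore-test pg_restore -U postgres -d postgres --create /backups/latest.dump
docker exec restore-test psql -U postgres -d appdb -c 'VACUUM (FULL, ANALYZE);'

# Per-table row counts to compare against the source instance (n_live_tup is
# an estimate after ANALYZE; exact figures would need SELECT count(*) per table)
docker exec restore-test psql -U postgres -d appdb -c \
  "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY relname;"

docker rm -f restore-test
```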
> who are they going to sue? A liability-free open source project?
So when did you last hear about someone successfully suing MS or Oracle after a data loss? I suggest you read your license agreements... Our entire business runs on such "liability-free" open-source projects: Linux, Postgres, the GNU userland, Python, GCC, clang, Boost, Wildfly, Java, ... and it has worked out pretty well for us. We're not some hipster startup with nodejs, angular, and mongodb "cloud" apps; we provide mission-critical services for clients that are banks, oil companies, governments, ... with corresponding SLAs. The attitude of our (very tech-focused) management is simple: we don't need liability umbrellas when we _own_ the technology and know what the hell we're doing. If something does go wrong, then yes, we would be responsible; no point in hiding.