This is a bit naive. If you've run a production system with any decent traffic and never needed to SSH into a machine, congrats. I haven't, and I don't know anyone who has. You might need to go in for anything from auditing to troubleshooting, even if it's rare.
PS: How is your automated provisioning system reaching your cluster if not by SSH?
SaltStack is either using SSH to communicate or opening its own port. I'd much rather trust an open SSH port for secure provisioning/management than allow any other piece of software to keep a port open (up to and including TLS-based protocols).
SaltStack has an agent that communicates with a master on a different server. The agents on the clients don't need any open inbound port (only egress).
This lets me run one central, well-secured master that accepts ingress from the remote hosts, while all the clients reach out to it to pick up their tasks.
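Concretely, the pull model is just a one-line pointer on each client (hostname here is a placeholder); by default the minions dial out to the master on TCP 4505/4506, so nothing has to listen on the clients themselves:

    # /etc/salt/minion -- agent config on each client
    # The minion connects out to the master; no inbound port needed here.
    master: salt-master.internal.example.com

    # On the master, only two ports need ingress (ZeroMQ transport):
    #   4505/tcp - master publishes jobs to minions
    #   4506/tcp - minions send back results and fetch files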
Logging and metrics are shipped elsewhere to be consumed and queried.
I build machine images with Packer that get provisioned during the deployment pipeline. That single image is then launched into a cluster as X identical copies. If one dies I don't care; the cluster provisions another automatically.
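For illustration, a minimal Packer template in that spirit might look like the sketch below (AWS assumed; the base-image filter, instance type, and install script are placeholders, not my actual pipeline). Packer does SSH into the temporary build instance, but that instance is disposable; the copies running in the cluster never need it:

    packer {
      required_plugins {
        amazon = {
          source  = "github.com/hashicorp/amazon"
          version = ">= 1.2"
        }
      }
    }

    locals {
      # e.g. 20240101123456, used to version the image name
      timestamp = regex_replace(timestamp(), "[- TZ:]", "")
    }

    source "amazon-ebs" "app" {
      region        = "us-east-1"
      instance_type = "t3.micro"
      ssh_username  = "ubuntu"          # used only during the build
      ami_name      = "app-${local.timestamp}"

      source_ami_filter {
        filters = {
          name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
          virtualization-type = "hvm"
          root-device-type    = "ebs"
        }
        owners      = ["099720109477"]  # Canonical
        most_recent = true
      }
    }

    build {
      sources = ["source.amazon-ebs.app"]

      # Bake the app into the image; deploys then just swap images.
      provisioner "shell" {
        script = "./install-app.sh"     # hypothetical install script
      }
    }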
>PS: How is your automated provisioning system reaching your cluster if not by SSH?
Not sure about moondev, but Terraform + cloud-init + a container orchestrator means that I basically never need to SSH into my nodes, except in rare/extreme circumstances.
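A rough sketch of that shape (AWS-flavored; the names, sizes, and cloud-init file are illustrative assumptions, not a real setup): Terraform launches copies of a prebaked image, cloud-init does the first-boot wiring, and the autoscaling group replaces dead nodes without anyone logging in:

    variable "ami_id" {
      type        = string
      description = "Image baked by Packer"
    }

    variable "subnet_ids" {
      type = list(string)
    }

    # Launch template: prebaked image plus first-boot config.
    resource "aws_launch_template" "node" {
      name_prefix   = "node-"
      image_id      = var.ami_id
      instance_type = "t3.medium"

      # cloud-init runs once at boot (join the orchestrator, etc.);
      # after that there's nothing left to configure by hand.
      user_data = base64encode(file("${path.module}/cloud-init.yaml"))
    }

    # The ASG keeps N copies alive and replaces any that die,
    # so nursing a sick node back to health over SSH is rarely worth it.
    resource "aws_autoscaling_group" "nodes" {
      min_size            = 3
      max_size            = 6
      desired_capacity    = 3
      vpc_zone_identifier = var.subnet_ids
      health_check_type   = "EC2"

      launch_template {
        id      = aws_launch_template.node.id
        version = "$Latest"
      }
    }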
I said "I basically never need to". Not that I never need to. But yeah, I basically never, ever need to. Short of needing to take a coredump or docker shits the bed, I don't really ever need to log into my nodes.
I guess that's an offensive thing to point out, judging by my score...