Swarm works, but has poor support for volumes - which makes it tricky to run legacy applications on Swarm (ones that, e.g., upload files to local disk rather than S3, or keep state/cache on disk rather than in a database).
Ingress is also more complicated/bespoke - the best setup I've found is Traefik with labels for routing/config.
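For reference, the Traefik-on-Swarm label pattern looks roughly like this (a sketch assuming Traefik v2 label syntax; the service name, network name, hostname, and port are placeholders):

```yaml
# docker-compose.yml fragment for a Swarm stack behind Traefik.
# In Swarm mode Traefik reads labels from the deploy section, not the service root.
services:
  myapp:
    image: myorg/myapp:latest
    networks:
      - proxy          # shared overlay network that the Traefik container also joins
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.myapp.rule=Host(`app.example.com`)"
        - "traefik.http.services.myapp.loadbalancer.server.port=8080"

networks:
  proxy:
    external: true
```

The routing rule and backend port live on the service being exposed, so adding a new service to the ingress is just a matter of attaching it to the shared network and adding labels.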
My advice today would be to scale Docker compose vertically (eg: on a dedicated server) - then move to Kubernetes.
The swarm middle ground isn't really worth it, IMHO.
> Swarm works, but has poor support for volumes - which means it's tricky to run legacy applications on swarm (which eg uploads files to local disk, not s3 - or keeps state/cache on disk, not a database).
One way round that is to use an NFS volume. However, I've hit problems with too many client NFS connections on a docker swarm and so found it better to mount the NFS volume on each host and use a bind mount instead.
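Concretely, the two approaches look something like this (a sketch; the NFS server address, export paths, and image name are placeholders):

```shell
# Option 1: let Docker manage the NFS mount per volume. Each volume gets its
# own NFS client connection, which is where the connection count can blow up:
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.5,rw \
  --opt device=:/exports/appdata \
  appdata

# Option 2: mount the export once per host (e.g. via /etc/fstab) and
# bind-mount the path into containers instead:
#   /etc/fstab:  10.0.0.5:/exports/appdata  /mnt/appdata  nfs  defaults  0  0
docker service create --name myapp \
  --mount type=bind,source=/mnt/appdata,target=/data \
  myorg/myapp:latest
```

With option 2 every container on a host shares the single host-level NFS connection, at the cost of having to provision the mount on each node yourself.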
My general feeling about adding an NFS dependency to a multi-node swarm is that it effectively adds a single point of failure to a system which is otherwise somewhat robust against single-node failure...
Porting a ten year old app from local file storage to s3 might not be trivial.
For a new app, one generally can and should embrace the twelve-factor principles and delegate state to stateful services (managed databases, key-value stores, S3, etc.).
Do note that for simple services, local disk can be very hard to beat for low complexity, extremely high performance - with the caveat that horizontal scaling might be tricky.
Edit: also, depending on privacy requirements, self-hosted S3 (e.g. MinIO) might be tricky for a production load. OTOH, self-hosted Kubernetes is no walk in the park either!