Containers of course add some overhead, but it is negligible in the modern world.

I honestly don't know what you would have to do to get 20 gigs of overhead with containers. Dozens of full Ubuntu containers with dev packages and stuff?




> Containers of course add some overhead, but it is negligible in the modern world.

I feel the people most enthusiastically trying to convince you of this are the infrastructure providers who also coincidentally bill you for every megabyte you use.

> I honestly don't know what you would have to do to get 20 gigs of overhead with containers. Dozens of full Ubuntu containers with dev packages and stuff?

Maybe 5 GB from the containers alone: they were pretty slim containers, but there were a few dozen of them in total. I also ran microk8s, which was a handful of gigabytes, and I could have dropped Kibana and Grafana, which accounted for a bunch more.


> I feel the people most enthusiastically trying to convince you of this are the infrastructure providers who also coincidentally bill you for every megabyte you use.

They don't, though? Cloud providers sell VM tiers, not individual megabytes, and even then, on Linux there is barely any overhead for anything but memory, and the memory overhead, if you optimize far enough, is again negligible.

> I also ran microk8s, which was a handful of gigabytes

My k3s with a dozen pods fits in a couple of gigs.


> They don't, though? Cloud providers sell VM tiers, not individual megabytes, and even then, on Linux there is barely any overhead for anything but memory, and the memory overhead, if you optimize far enough, is again negligible.

They don't necessarily bill by RAM, but they definitely bill for network and disk I/O, which are also magnified quite significantly. Especially considering you need redundancy/HA when dealing with distributed computing

(because of that ol' fly in the ointment, the binomial distribution: p^r (1-p)^(n-r) · n!/(r!(n-r)!))
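
A rough sketch of what that term means for replica counts (hypothetical numbers, assuming independent node failures; p_fail and the replica counts here are made up purely for illustration):

  # Availability of an n-replica deployment, assuming independent
  # node failures with per-node failure probability p_fail.
  from math import comb

  def p_exactly_r_up(n: int, r: int, p_fail: float) -> float:
      """Probability that exactly r of n replicas are up."""
      p_up = 1.0 - p_fail
      return comb(n, r) * p_up**r * p_fail**(n - r)

  def p_at_least_r_up(n: int, r: int, p_fail: float) -> float:
      """Probability that at least r of n replicas are up."""
      return sum(p_exactly_r_up(n, k, p_fail) for k in range(r, n + 1))

  # e.g. 3 replicas, each down 1% of the time, need at least 2 up:
  print(p_at_least_r_up(3, 2, 0.01))  # ~0.9997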

> My k3s with a dozen pods fits in a couple of gigs.

My Kibana instance alone could be characterized as "a couple of gigs". Granted, I produce quite a lot of logs, although that's not a problem with logrotate and grep.


> They don't necessarily bill by RAM, but they definitely bill for network and disk I/O, which are also magnified quite significantly. Especially considering you need redundancy/HA when dealing with distributed computing

Network and disk I/O come with basically zero overhead in containers.

> Especially considering you need redundancy/HA when dealing with distributed computing

Why? That's not an apples-to-apples comparison then. If you don't need HA, just run a single container; there are plenty of other benefits besides easier clustering.

> My Kibana instance alone could be characterized as "a couple of gigs". Granted, I produce quite a lot of logs, although that's not a problem with logrotate and grep.

How is Kibana(!) relevant to container resource usage? Just don't use it and go with logrotate and grep. Even if you decide to go with a cluster, you can continue to use syslog and aggregate it :shrug:
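
For instance, a minimal sketch in Python (the aggregator host logs.example.com is a made-up placeholder for whatever rsyslog/syslog-ng box you point it at):

  # Ship app logs to a central syslog instead of running an ELK stack;
  # the aggregator writes plain files you can rotate and grep.
  import logging
  import logging.handlers

  handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
  handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

  log = logging.getLogger("myapp")
  log.addHandler(handler)
  log.setLevel(logging.INFO)

  log.info("pod started")  # ends up in the aggregator's files, greppable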



