IIUC it's not specifically banned by name in any law; it's more that it isn't on the whitelist of approved food additives. Industrialized food manufacturing in late-20th-century Japan was wild, and as a result additives are managed on an approvals basis rather than by bans.
I got a DM from API Brew. It reminds me of Firebase (remember? Not the current bloated mobile platform Google acquired, but the original realtime-database product), and it looks promising to me.
Good luck to the API Brew team.
How about Kubernetes CronJobs on Spot VMs on GKE Autopilot?
https://cloud.google.com/kubernetes-engine/pricing
If your job only consumes 1 vCPU and 1 GB of memory, it costs about $10/month to run continuously. If the jobs only run for 1/100 of the month, the cost should be around $0.10/month.
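A minimal sketch of what that could look like. The job name, image, and schedule are placeholders; `cloud.google.com/gke-spot: "true"` is the node selector GKE uses to request Spot capacity, and on Autopilot you're billed for the pod's resource requests only while it runs:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report        # hypothetical job name
spec:
  schedule: "0 3 * * *"       # once a day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            cloud.google.com/gke-spot: "true"  # ask Autopilot for Spot capacity
          restartPolicy: Never
          containers:
          - name: report
            image: us-docker.pkg.dev/my-project/jobs/report:latest  # placeholder image
            resources:
              requests:
                cpu: "1"      # the 1 vCPU from the estimate above
                memory: 1Gi   # and the 1 GB of memory
```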
CircleCI's remote Docker has a restriction that only one job can access a given remote Docker engine at a time. Say job A builds an image, then jobs B and C try to use the same remote Docker engine, but only one of them gets the cache.
Yup, the caches for each architecture are available in parallel, and multiple builds for a single architecture can simultaneously use the same build machine for a given project. So we don't limit concurrency.
I believe Cloud Build has no persistent caching, so you're forced to save and load a remote cache, which incurs network latency that can slow the build to some extent. Cloud Build with Kaniko also expires the layer cache after 6 hours by default.
GitHub Actions is similar, except that it can store the Docker cache via GitHub's Cache API using the `cache-to=gha` and `cache-from=gha` directives. However, this has limitations, like a total cache size of 10GB per repository, and you still pay network latency to load/save that cache.
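For reference, the `gha` cache backend wired into a workflow step might look like the sketch below; the action version tags are assumptions, so check the current releases:

```yaml
# Sketch of a build step using GitHub's Cache API as the Docker layer cache.
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v5
  with:
    push: false
    cache-from: type=gha           # restore layers from GitHub's Cache API
    cache-to: type=gha,mode=max    # save all layers, not just the final stage's
```

`mode=max` exports intermediate layers too, which is what eats into that 10GB per-repository limit fastest.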
With Depot, the cache is kept on a persistent disk, so there's no need to save/load it or incur network latency doing so. It's there, ready to be used by any builds that come in for a given project.
This one?
https://github.com/kubernetes/kubernetes/blob/master/CHANGEL...
> autoscaling/v2beta2 HorizontalPodAutoscaler added a spec.behavior field that allows scale behavior to be configured. Behaviors are specified separately for scaling up and down. In each direction a stabilization window can be specified as well as a list of policies and how to select amongst them. Policies can limit the absolute number of pods added or removed, or the percentage of pods added or removed. (#74525, @gliush) [SIG API Machinery, Apps, Autoscaling and CLI]
Yep. In the version I'm on (1.15) there are only global flags and config[1] that apply to all HPAs, but not all apps should scale the same way: our net-facing glorified REST apps can easily scale up with, say, a 1-2 minute window, but our pipeline apps sharing a Kafka consumer group should be scaled more cautiously (since a consumer-group rebalance is a stop-the-world event for group members).
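On a newer cluster, the per-HPA `spec.behavior` field from the changelog above would let those two kinds of apps scale differently. A sketch for the cautious Kafka case, with illustrative names and numbers:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: kafka-pipeline        # hypothetical consumer-group app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kafka-pipeline
  minReplicas: 3
  maxReplicas: 12
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0      # scale up promptly
    scaleDown:
      stabilizationWindowSeconds: 600    # wait 10m of low load before shrinking
      policies:
      - type: Pods
        value: 1                         # remove at most 1 pod...
        periodSeconds: 300               # ...per 5 minutes, limiting rebalances
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

The REST apps could keep the defaults (or a short scale-up window) while this one only sheds a pod every few minutes, so each rebalance pause is spread out.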