
So first off, don’t use Python in prod. Second off, don’t use async because of complexities in multi-threading?

A lot of the world runs just fine on Python (see Django); async is mature and stable.


> A lot of the world runs just fine on Python (see Django); async is mature and stable.

I never claimed that one should not use `async`; I am only suggesting that one be careful about using `async`.

In my experience, a median Go programmer is more comfortable with Go routines than a median Python programmer is with async functions. YMMV.
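
To make that concrete, here's a minimal asyncio sketch (handler names are mine) of the footgun that trips people up: a blocking call inside an `async def` stalls the entire event loop, whereas a goroutine doing the same thing only blocks itself.

    import asyncio
    import time

    async def blocking_handler():
        # Looks async, but time.sleep() never yields: the whole
        # event loop (and every other request) stalls for 2 seconds.
        time.sleep(2)

    async def correct_handler():
        # Yields to the loop while waiting, so other work proceeds.
        await asyncio.sleep(2)

    async def main():
        start = time.monotonic()
        await asyncio.gather(*(correct_handler() for _ in range(10)))
        print(f"10 awaited sleeps:  {time.monotonic() - start:.1f}s")  # ~2s

        start = time.monotonic()
        await asyncio.gather(*(blocking_handler() for _ in range(10)))
        print(f"10 blocking sleeps: {time.monotonic() - start:.1f}s")  # ~20s

    asyncio.run(main())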


It sounds like the author makes a blanket recommendation to avoid asynchronous functions, even when using an inherently asynchronous web framework (FastAPI), which would negate any of FastAPI's asynchronous concurrency gains.

> It sounds like the author makes a blanket recommendation to avoid asynchronous functions, even when using an inherently asynchronous web framework (FastAPI), which would negate any of FastAPI's asynchronous concurrency gains.

FastAPI is faster than the other popular framework, Flask (I posted data supporting that), even without using async.

I only suggested that one should use their discretion when using async.

When did I suggest not to write async code?
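
For what it's worth, FastAPI itself hedges this for you: a plain `def` endpoint is run in a worker threadpool, so you get request concurrency without writing `async` at all. A minimal sketch (endpoint names are mine):

    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/sync")
    def sync_endpoint():
        # Plain `def`: FastAPI runs this in a threadpool, so a
        # blocking call here doesn't stall the event loop.
        return {"style": "sync"}

    @app.get("/async")
    async def async_endpoint():
        # `async def` runs on the event loop itself; any blocking
        # call here holds up every other in-flight request.
        return {"style": "async"}

Run it with uvicorn to try it; under concurrent load both styles overlap requests, the difference is just where a blocking call hurts.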


WASM solves a very different problem to Kubernetes, and you can happily run, scale and orchestrate WASM binaries on Kube.
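
For anyone curious, the usual pattern is a RuntimeClass pointing at a Wasm-capable containerd shim. A sketch; the handler name ("spin") depends on which shim is actually installed on your nodes, and the image is hypothetical:

    # RuntimeClass mapping to a Wasm containerd shim on the nodes
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: wasm
    handler: spin        # shim name is node-dependent (assumption)
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: wasm-app     # hypothetical
    spec:
      runtimeClassName: wasm
      containers:
        - name: app
          image: ghcr.io/example/wasm-app:latest   # hypothetical image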


I don’t think scale is the only consideration for using Kubernetes. The ops overhead in managing traditional infrastructure, especially if you’re a large enterprise, drops massively if you really buy into cloud native. Kubernetes converges application orchestration, job scheduling, scaling, monitoring/observability, networking, load balancing, certificate management, storage management, compute provisioning - and more. In a typical enterprise, doing all this requires multiple teams. Changes are request driven and take forever. Operating systems need to be patched. This all happens after hours and costs time and money. When properly implemented and backed by the right level of stakeholder, I’ve seen orgs move to business day maintenance, while gaining the confidence to release during peak times. It’s not just about scale, it’s about converging traditional infra practices into a single, declarative and eventually consistent platform that handles it all for you.
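
As a concrete (hypothetical) illustration of what "declarative and eventually consistent" buys you: one manifest covers replica count, health checking and resource provisioning, and the control plane continually reconciles reality toward it, rather than a team working through tickets:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                       # hypothetical app
    spec:
      replicas: 3                     # desired state; controllers converge on it
      selector:
        matchLabels: {app: web}
      template:
        metadata:
          labels: {app: web}
        spec:
          containers:
            - name: web
              image: example/web:1.2.3               # hypothetical image
              readinessProbe:                        # only healthy pods get traffic
                httpGet: {path: /healthz, port: 8080}
              resources:
                requests: {cpu: 100m, memory: 128Mi}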


Can you qualify this statement? It's 2024; Kubernetes is old tech, and bulletproof at that.


Does Kubernetes have reliable and predictable cronjobs already?



We've seen a multitude of issues, like jobs failing to start or getting too delayed (also the infamous "if your cronjob fails too much, it will stop working forever").

Though it seems they rebuilt the controller to address most of the issues https://kubernetes.io/blog/2021/04/09/kubernetes-release-1.2...
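
For context, the "stop working forever" behaviour is the documented case where, with no starting deadline set, more than 100 missed schedules make the controller give up scheduling the job entirely. A sketch of the knobs that mitigate it (names and image are hypothetical):

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report            # hypothetical
    spec:
      schedule: "0 2 * * *"
      # Bounds how far back the controller counts missed runs; without
      # this, >100 misses stop the job from ever being scheduled again.
      startingDeadlineSeconds: 600
      concurrencyPolicy: Forbid       # don't stack up overlapping runs
      jobTemplate:
        spec:
          backoffLimit: 3             # pod retries before the Job is marked failed
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: report
                  image: example/report:latest   # hypothetical image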


Having watched the infrastructure side of things evolve from the late 90s/early 2000s, when every HP/IBM rackmount was a snowflake, configuration and releases were hand rolled, and debugging server/OS/package dependency issues (not to mention scaling and managing load balancers) was exclusively manual, to where we are today with Kubernetes, I would select Kube all day, every day. A consistent and now very stable substrate and API I can expect pretty much everywhere, which handles rollouts, resources, health checking/auto-healing and scaling for me, and pretty much lets me sleep while infra is failing? Good luck debugging that hand-rolled bash script to pull a container after whoever wrote it has left (and good luck scaling it).


Which doesn’t really make sense, as drinking ACV alone will definitely not result in a body recomp/the addition of muscle (every meathead lifter, myself included, would be drinking a few bottles a day if it did).


Yeah, especially with containerisation and orchestration / Kubernetes, I get that perhaps not everything is viable to containerise, but in 2023 this feels archaic and like a lot of (potentially unnecessary) engineering work.


We had our main GCP account suspended because we were running a Lightning node, and some Google automation flagged us as mining.

We couldn’t get hold of any actual person at Google, and were told by our Google reseller to buy a fairly expensive support package to have our issue expedited after raising multiple appeals/objections. Suffice it to say, we run out of AWS now. I’ve heard GCP support (in terms of reaching an actual human support person/engineer) has only gotten worse since then, and our experience occurred a good few years back.


This is kind of a problem with non-paid support I think. I had an AWS account with some credits that were going to expire and decided to use them to spin up a bare metal instance. My account was immediately flagged and I got locked out. I finally managed to reach someone who made me jump through hoops changing my password and rotating my API keys. It still took a few days before I was able to log in again, and by then my credits were expired. Luckily I wasn't doing anything important in my account, but someone running a business would have been screwed.


I can see these popping up all over the show at Burning Man. Maybe the guru dude just knows his audience?


There are already better "hexayurt" designs for BM, designed to block/reflect all sunlight and made with foilized foam boards.

https://en.wikipedia.org/wiki/Hexayurt


> There are already better "hexayurt" designs for BM, designed to block/reflect all sunlight and made with foilized foam boards.

Have you ever actually occupied a hexayurt throughout a full summer day at burning man?

My limited experience was that the typical one as executed @BM is just a dark, sweaty oven. Excellent insulation by itself is a trap. Once you insulate a space with polyiso, you need good ventilation and heat removal. It's not really a good match for something so ad-hoc.


A hexayurt with a standard window air conditioner powered by a generator is glorious. Ours was tall enough to stand in and also had designer wallpaper. A 12V-powered filtered ventilation system meant the AC didn't need to be turned on until close to noon.


How did you affix the wallpaper?


Wallpaper paste.


I am surprised that it sticks! We'll have to try that with ours; the manufacturer logos are not such a pleasant look.


You do need ventilation, but that's not hard; a small swamp cooler (plus a box fan) kept us as cool as we wanted to be:

http://www.yurtcooler.com/


Negative, but they appear popular. My guess is that they buy you a few extra hours of coolness in the morning if you open/close openings at the right time. Obvs you don't want to be in a hermetically sealed box ever.


These aren't ideal, as they look fairly permanent. I don't see the labor involved, or a time-lapse video of assembly or takedown.

