This might speak to the craziness of the gstreamer plugin ecosystem - good/bad/ugly might be a fun maintenance mnemonic, but `voaacenc` is actually in `bad` - not `ugly`. Most plugins you'd want to use aren't in `good`. How are you supposed to actually use "well supported plugins" with gstreamer? Is the answer just not to use gstreamer at all?
I was recently in the market for one of these! I ended up going with https://github.com/dbohdan/recur due to the nice stdout and stdin handling. Though this has stdout/stderr pattern matching for failures which is nice too!
Cool, I hadn't seen this one yet! Using Starlark is a very good idea. I ended up writing some tiny DSLs to specify certain things like status code patterns and durations; using an off the shelf DSL like Starlark would've saved a lot of effort.
I'll confess I've never used `expect`, but I think `expect` is for interactive commands. I think if you were going to write retry logic in bash you would pipe it to `grep` and examine the return code. If `grep` doesn't find any matches, it'll exit with a status code of 1.
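A minimal sketch of that bash approach (the `flaky` command and the "ERROR" pattern are made up for illustration): rerun the command until `grep` stops finding a failure pattern in its output, relying on `grep` exiting 1 when nothing matches.

```shell
#!/bin/sh
# Retry-until-clean sketch: treat any output matching "ERROR" as a failure
# and rerun. `flaky` is a stand-in command that fails twice, then succeeds;
# a counter file is used because the pipeline runs `flaky` in a subshell.
count_file=$(mktemp)
echo 0 > "$count_file"

flaky() {
  n=$(cat "$count_file")
  echo $((n + 1)) > "$count_file"
  if [ "$n" -lt 2 ]; then echo "ERROR: not ready"; else echo "OK"; fi
}

attempt=1
while flaky 2>&1 | grep -q 'ERROR'; do  # grep exits 0 when it finds a failure
  attempt=$((attempt + 1))
  if [ "$attempt" -gt 5 ]; then echo "giving up" >&2; exit 1; fi
done
echo "succeeded on attempt $attempt"
```

The key trick is that the pipeline's exit status is `grep`'s, so the `while` condition reads "keep looping while the output still contains a failure marker".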
I'd never heard of the `wish` command shell (discussed briefly in that document) but you can always rely on the Tcl community to find a great pun.
Facebook’s wormhole seems like a better approach here - just tailing the MySQL bin log gets you commit safety for messages without running into this kind of locking behavior.
IMO mermaid is awesome, but for two somewhat indirect reasons:
- There’s an almost-WYSIWYG editor for mermaid at https://www.mermaidchart.com/play . It’s very convenient and appropriately changes the layout as you draw arrows!
- Notion supports inline mermaid charts in code blocks (with preview!) It’s awesome for putting some architecture diagrams in Eng docs.
LLMs (I use ChatGPT) can take a generic process description and spit out the result in mermaid, which can then be imported and refined in something like draw.io. Yes, you’ll have to correct a few things by hand, but it drastically speeds things up. Last I checked, draw.io is supported in Obsidian.
I recently tried this, but the import to draw.io did not go well. It imported as a single static image rather than an editable diagram. Maybe I did something wrong?
In such cases, either copy and paste the error message from draw.io or screenshot it and upload it to ChatGPT; it will debug it for you.
There’s also a specific sequence of steps to import mermaid scripts, I don’t remember the menu location by heart, ChatGPT can also give you the steps needed to do this.
I don't like the color scheme, and in some of the snippets I don't understand the correlation, but in others I think the structural highlighting is very nice.
I don’t know about tires, but for brakes we already know how to make lower dust brakes - use drum brakes instead of disc brakes. The friction material is enclosed on drum brakes so much less of it just flies away.
There are also EVs, which generally do most of their braking on the regenerative whatsit, which causes no wear on the brake pads. A lot of brake dust can also be prevented by education / driving style, and by improving road designs to allow for smooth driving.
If you look at which cars of this type are produced and who drives them, it quickly becomes clear where the road is heading. Huge off-road vehicles, albeit with electric drive, are missing the mark. These things are advertised with sporty performance, comfort and so on. In my opinion, energy is being thrown out the window to satisfy the buyer's ego. These people are buying themselves a clear conscience.

Even if the cars are electric, where can they be charged? Not everyone lives in the houses you see in the advertisements, and not everyone can simply go into debt for something like this.

I drive an economical petrol car with 200k kilometers on the clock. I don't need anything newly produced, no rare earths or extra energy. Even with electric cars, the plastic for the door panels has to be made from crude oil, and the cost of installing all the electronics is high.

I'll drive this car until it gives out. I mostly use public transport, but sometimes I need the car for the weekly shop. I'm also staying in the city because I'm getting older and depend on doctors and shops nearby; at the moment I work outside the city, like many others, and people simply need a car for that. Not everyone's life looks the same.
Pretty much every EV does regenerative braking, because it (greatly) extends range. Even hybrids have done this since the very earliest mass-market models (the 1997 Prius has it). EV brakes see a lot less wear and tear than ICE brakes.
Drum brakes are far more prone to failure: the heat can't be dissipated, the dust is still produced, and they deliver too little of the braking power the law requires.
If we switched fully to trams and buses, each one would produce the dust of, let's say, 100 cars. And if public transport had to carry all the inhabitants of a city, we would need up to 200 trams running day and night.

Who is supposed to drive them? Most younger people don't want to work shifts, weekends, or nights. My town's drivers are grey-bearded, between 50 and 60 years old. There are no younger applicants for the job, so some keep driving even in retirement to meet demand. They get paid extra for it, which makes tickets more expensive.
Presumably, because it'd be annoying waiting for lame duck mode when you actually do want the application to terminate quickly. SIGKILL usually needs special privileges/root and doesn't give the application any time to clean-up/flush/etc. The other workaround I've seen is having the application clean-up immediately upon a second signal, which I reckon could also work, but either solution seems reasonable.
Using SIGTERM is a problem because it conflicts with other behavior.
For instance, if you use SIGTERM for this then you have a potential for the app quitting during the preStop, which will be detected as a crash by Kube and so restart your app.
We don't want to kill in-flight requests - terminating while a request is outstanding will result in clients connected to the ALB getting some HTTP 5xx response.
The AWS ALB Controller inside Kubernetes doesn't give us a nice way to specifically say "deregister this target"
The ALB will continue to send us traffic as long as we return 'healthy' to its health checks.
So we need some way to signal the application to stop serving 'healthy' responses to the ALB Health Checks, which will force the ALB to mark us as unhealthy in the target group and stop sending us traffic.
SIGUSR1 was an otherwise unused signal that we can send to the application without impacting how other signals might be handled.
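As a rough sketch of that mechanism (the flag-file name and wiring are illustrative, not the article's actual code): trap SIGUSR1 and flip a flag that the health-check handler reads, so the ALB starts seeing unhealthy responses while the app keeps serving in-flight traffic.

```shell
#!/bin/sh
# Illustrative only: a process that reports "healthy" until it receives
# SIGUSR1, then flips to "unhealthy". A real app would have its HTTP
# health-check handler read this state instead of a flag file.
health_file=$(mktemp)
echo healthy > "$health_file"

on_usr1() {
  # Health checks now fail; the ALB drains us from the target group
  # while we continue to serve the requests we already have.
  echo unhealthy > "$health_file"
}
trap on_usr1 USR1
```

The point of picking SIGUSR1 is exactly what the comment says: it carries no default meaning for the runtime, so SIGTERM/SIGINT semantics stay untouched.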
I might be putting words in your mouth, so please correct me if this is wrong. It seems like you don’t actually control the SIGTERM handler code. Otherwise you could just write something like:
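For what it's worth, a handler along those lines might look like this sketch (`DRAIN_SECONDS` and the flag file are assumed names, not anything from the article):

```shell
#!/bin/sh
# Sketch of a lame-duck SIGTERM handler: mark ourselves unhealthy, wait
# out the drain window so the ALB stops sending traffic, then exit.
health_flag=$(mktemp)
echo healthy > "$health_flag"
DRAIN_SECONDS=${DRAIN_SECONDS:-20}

on_term() {
  echo unhealthy > "$health_flag"  # health checks start failing here
  sleep "$DRAIN_SECONDS"           # give the ALB time to deregister us
  exit 0
}
trap on_term TERM
```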
I don't think it matters the framework, it's an issue with the ALB controller itself, not the application.
The ALB controller doesn't handle gracefully stopping traffic (by ensuring target group de-registration is complete) before allowing the pod to terminate.
Without a preStop, Kube immediately sends SIGTERM to your application.
This is actually a fascinatingly complex problem. Some notes about the article:
* The 20s delay before shutdown is called “lame duck mode.” As implemented it’s close to good, but not perfect.
* When in lame duck mode you should fail the pod’s health check. That way you don’t rely on the ALB controller to remove your pod. Your pod is still serving other requests, but gracefully asking everyone to forget about it.
* Make an effort to close http keep-alive connections. This is more important if you’re running another proxy that won’t listen to the health checks above (eg AWS -> Node -> kube-proxy -> pod). Note that you can only do that when a request comes in - but it’s as simple as a Connection: close header on the response.
* On a fun note, the new-ish kubernetes graceful node shutdown feature won’t remove your pod readiness when shutting down.
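The keep-alive point above comes down to a single response header; as a raw HTTP sketch (hand-built bytes, not any particular framework's API), a lame-duck response that tells the client to drop its pooled connection looks like:

```shell
#!/bin/sh
# A minimal HTTP/1.1 response whose only lame-duck change from a normal
# response is the "Connection: close" header, which tells the client not
# to reuse this keep-alive connection for the next request.
response=$(printf 'HTTP/1.1 200 OK\r\nConnection: close\r\nContent-Length: 2\r\n\r\nok')
printf '%s\n' "$response"
```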
More likely they mean "readiness check" - this is the one that removes you from the Kubernetes load balancer service. Liveness check failing does indeed cause the container to restart.
Yes, sorry for not qualifying - that’s right. IMO the liveness check is only rarely useful - but I've not really run any bleeding-edge services on kube. I assume it’s more useful if you’re actually working on dangerous code - locking, threading, etc. I’ve mostly only run web apps.
Talking of trees and caches, back in school I remember learning about splay trees. I’ve never actually seen one used in a production system though, I assume because of the poor concurrency. Has anyone heard of any systems with tree rebalancing based on the workload (i.e. reads too, not just writes)?