lol guy makes a fair point. Open source software suffers from this expectation that anyone interested in the project must be technical enough to be able to clone, compile, and fix the inevitable issues just to get something running and usable.
I'd say a lot of people labor under the expectation that just because I made a tool for myself and put it up on GitHub, in case someone else might also enjoy it, I'm now obligated to provide support for you. Especially when the person in the screenshot is angry over the lack of a Windows binary.
Thank goodness; solving this "problem" for the general internet destroyed it.
Your point seems to be someone else should do that for every stupid asshole on the web?
But will this run inside another docker container?
I normally hate things shipped as containers because I often want to use them inside a docker container, and docker-in-docker just seems like a messy waste of resources.
Docker in Docker isn't a waste of resources; it just makes the same container runtime the container is running on available inside it. Really a better solution than a control plane like Kubernetes.
No, you're still running docker inside a docker container. The container exposes a docker socket that just forwards the connection to the same runtime on the host. It's not running two daemons, but you are still running docker inside docker.
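For anyone curious, the usual trick is just bind-mounting the host's /var/run/docker.sock into the container. A rough sketch using the Python Docker SDK (the image and command are only examples, not any particular project's setup):

    # Sketch only: assumes the host socket was mounted in, e.g.
    #   docker run -v /var/run/docker.sock:/var/run/docker.sock ...
    # and that the "docker" Python SDK is installed.
    import docker

    # from_env() falls back to the default unix socket, so inside the
    # container this talks to the *host's* daemon, not a nested one.
    client = docker.from_env()

    # Anything started here becomes a sibling container on the host
    # runtime, not a child of this container.
    logs = client.containers.run("alpine", "echo hello from a sibling", remove=True)
    print(logs.decode())

The practical gotcha is that volume-mount paths are interpreted on the host, not inside the "outer" container.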
These days, knowing that GPT could code up a far-less-elegant-but-still-working solution in about 5-10 minutes of prompting, instead of my spending hours artfully crafting one, has all but solved this.
That makes me feel even more guilty for not solving them, now that I realize the solution is one or two orders of magnitude easier to do.
Not joking with orders of magnitude. At this point, I regularly encounter a situation in which asking ChatGPT/Claude to hack me a little browser tool to do ${random stuff} feels easier and faster than searching for existing software, or even existing artifacts. Like, the other day I made myself a generator for pre-writing line tracing exercise sheets for my kids, because it was easier than finding enough of those sheets on-line, and the latter is basically just Google/Kagi Images search.
Yeah but if you let go of your years of coding standards / best practices and just hack something together yourself, it won't be much slower than chatgpt.
Make it timing-based and randomized. Sync the 'seed' during device init, so the listener knows when to listen for the AirTag. The AirTag then turns on for a random duration (between some min/max time), and the listener picks it up.
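Something like this, as a rough sketch in Python (the slot length, min/max durations, and seed mixing are all made-up placeholders, not anything Apple actually does):

    import random

    SEED = 0xC0FFEE                 # exchanged once during device init (placeholder)
    SLOT_S = 60.0                   # one scheduling slot per minute (assumption)
    MIN_ON_S, MAX_ON_S = 1.0, 5.0   # beacon duration bounds in seconds (assumption)

    def beacon_window(slot_index: int) -> tuple[float, float]:
        # Both the tag and the listener run this with the same seed, so the
        # listener knows exactly when to wake its radio for each slot.
        rng = random.Random(SEED * 1_000_003 + slot_index)  # deterministic per slot
        offset = rng.uniform(0.0, SLOT_S - MAX_ON_S)        # when in the slot to transmit
        duration = rng.uniform(MIN_ON_S, MAX_ON_S)          # how long to stay on
        return offset, duration

The trade-off is clock drift: both sides need clocks close enough that the listener's window still overlaps the tag's transmission.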
This hasn't been my experience; I see much higher sequential read results compared to random reads on a wide range of storage from low-end home PC SSDs to high end NVME flash storage in large servers.
It's certainly not true on actual hard drives, and never has been. A seek is around 10ms.
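Back-of-the-envelope to illustrate the point (ballpark figures, not measurements):

    seek_ms = 10.0                           # ~10 ms per random seek
    block_kb = 4                             # typical "random 4K" benchmark size

    rand_iops = 1000.0 / seek_ms             # ~100 random reads per second
    rand_mb_s = rand_iops * block_kb / 1024  # ~0.4 MB/s of 4K random reads
    seq_mb_s = 150.0                         # ballpark HDD sequential throughput

    print(f"random 4K ~{rand_mb_s:.1f} MB/s vs sequential ~{seq_mb_s:.0f} MB/s")

On spinning rust the gap is two or three orders of magnitude, not "random is nearly as fast".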
By what metric? I think this is close to true for identical block sizes, but most benchmarks test sequential transfers with large 1M blocks and random ones with small 4K blocks. Measured that way, the fastest NVMe drives are more than twice as fast for sequential transfers as for random ones.
I don't like comparing the two; they're completely different workloads. IMO it's better to look at IOPS for random transfers, which is where newer, faster SSDs truly excel and where most people "notice" the performance.
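To make that apples-to-apples point concrete, here's a rough sketch of normalizing both figures to IOPS (the throughput numbers are made-up examples, not any specific drive):

    def iops(throughput_mb_s: float, block_kb: float) -> float:
        # Convert a throughput figure at a given block size into operations per second.
        return throughput_mb_s * 1024 / block_kb

    seq_1m_mb_s = 7000.0    # hypothetical sequential result at 1M blocks
    rand_4k_mb_s = 3000.0   # hypothetical random result at 4K blocks

    print(f"sequential 1M: ~{iops(seq_1m_mb_s, 1024):,.0f} IOPS")   # ~7,000 ops/s
    print(f"random 4K:     ~{iops(rand_4k_mb_s, 4):,.0f} IOPS")     # ~768,000 ops/s

Quoted as MB/s the sequential number "wins", but per operation the drive is doing vastly more work on the random side, which is why IOPS is the more honest comparison.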
I work in broadcast TV in San Francisco and am very good friends with one of the engineers who is responsible for the care and maintenance of some of the facilities up there. We talked about him taking me up there for ten years before we finally got around to it. :-)
GCP got this reputation because it’s a second class citizen within Google. Google’s own internal infra (Borg, Blaze) is top-notch.
If Meta can pull off a public cloud correctly, I’d trust them greatly - they’ve shown significant engineering and product competence so far, even if they could use a more consistent and stable UI.
I disabled IPv6 because my little ISP hasn't yet figured out how they want to bill for it or assign/segment it out for static use. I have multiple static IPv4 addresses. I only use static IPs, but that's a requirement specific to me. The firewall is very simple: it just forwards packets and does a simple IPv4 SNAT. The only time I've set it up to be more complicated was when a guest was abusing P2P, so I had to block it using string matches on the unencrypted commands.
My setup is honestly simple enough that a write-up would not benefit many. My Unbound setup for blocking many malicious sites is also fairly well documented by others. The null routing of commonly used DoH servers is straightforward (a rough sketch follows below). My Chrony setup would just annoy people, as I only use stratum-1 servers and the options would look like cargo-culting to some.
About the only thing not commonly discussed is the combination of sch_cake (the CAKE qdisc) and some sysctl options to keep bufferbloat low, but OpenWRT has their own take on that topic already.
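For anyone who wants the gist of the DoH null routing mentioned above, it's roughly this (a sketch assuming a Linux router with iproute2 and root; the resolver list is illustrative, not my actual list):

    # Blackhole routes silently drop traffic to well-known DoH resolver IPs,
    # forcing clients back onto the local (filtering) DNS server.
    # The addresses below are examples only.
    import subprocess

    DOH_RESOLVER_PREFIXES = ["1.1.1.1/32", "1.0.0.1/32", "8.8.8.8/32", "8.8.4.4/32"]

    for prefix in DOH_RESOLVER_PREFIXES:
        # check=False so a pre-existing route doesn't abort the loop
        subprocess.run(["ip", "route", "add", "blackhole", prefix], check=False)

Blocking by IP obviously also blocks plain DNS to those resolvers, which in this setup is the point.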
What if instead you bound your own DNS server to localhost:53 inside the network namespace? I suppose you'd still have to mess with /etc/resolv.conf in case it points at hardcoded public resolvers, like mine does.
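Something along these lines, as a minimal sketch (a bare UDP forwarder rather than a real resolver; the upstream address is just an example):

    import socket

    UPSTREAM = ("9.9.9.9", 53)  # assumption: any resolver reachable from the namespace

    listen = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    listen.bind(("127.0.0.1", 53))          # needs root or CAP_NET_BIND_SERVICE

    while True:
        query, client = listen.recvfrom(4096)
        upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        upstream.settimeout(2.0)
        upstream.sendto(query, UPSTREAM)    # DNS messages are relayed as opaque bytes
        try:
            answer, _ = upstream.recvfrom(4096)
            listen.sendto(answer, client)   # relay the answer back unchanged
        except socket.timeout:
            pass                            # drop on timeout; the client will retry
        finally:
            upstream.close()

In practice you'd run Unbound or dnsmasq inside the namespace instead, but the point is that anything listening on 127.0.0.1:53 there makes a resolv.conf of "nameserver 127.0.0.1" work, since loopback is per-namespace.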