
Good friend built dockerc[1] which doesn't have this limitation!

[1]: https://github.com/NilsIrl/dockerc


That screenshot in the readme is hilarious. Nice project.

Instead it requires QEMU!

I can't tell what this does from the readme. Does it package a container runtime in the exe? Or a virtual machine? Something else?

Looks like macOS and Windows support is still being worked on.

lol, the guy makes a fair point. Open source software suffers from the expectation that anyone interested in a project must be technical enough to clone it, compile it, and fix the inevitable issues just to get something running and usable.

I'd say a lot of people suffer from the expectation that just because I made a tool for myself and put it up on GitHub in case someone else might enjoy it, I'm now obligated to provide support. Especially when the person in the screenshot is angry over the lack of a Windows binary.

Thank goodness; solving this "problem" for the general internet destroyed it. Your point seems to be that someone else should do that for every stupid asshole on the web?

But will this run inside another docker container?

I normally hate things shipped as containers because I often want to use them inside a Docker container, and docker-in-docker just seems like a messy waste of resources.


Docker in Docker is not a waste of resources: it just makes the same container runtime the outer container is running on available inside it. Really a better solution than a control plane like Kubernetes.

Aren't you describing docker-out-of-docker rather than docker-in-docker?

No, you're running docker inside a docker container. The container provides a docker daemon that just forwards the connection to the same runtime. It's not running two dockers, but you are still running docker inside docker.

https://medium.com/@moshedana058/understanding-docker-in-doc...
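Concretely, the usual way to hand a container the host's own runtime is to mount the Docker socket into it (sketch; /var/run/docker.sock is the default path, and docker:cli is just a convenient client image):

    # Containers launched from inside end up as siblings on the host's
    # runtime, not as nested containers.
    docker run --rm \
      -v /var/run/docker.sock:/var/run/docker.sock \
      docker:cli docker ps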


Docker is not emulation, so there's no waste of resources.

Doesn't podman get around a lot of those issues?

Aw hell, more band-aids because people don't want to get software distribution done right.

Can we please go back to the days of sudo dpkg -i foo.deb and then just running /usr/bin/foo?


I am still using "ar x" and "tar xvf" for .deb files on Void Linux, because some projects only release .deb files!
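For the curious, a .deb is just an ar archive wrapping a couple of tarballs, so the whole dance is (the compression suffix varies by package):

    ar x foo.deb          # yields debian-binary, control.tar.*, data.tar.*
    tar xvf data.tar.xz   # the actual payload (often .xz, sometimes .gz or .zst)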

These days, knowing that instead of spending hours artfully crafting a solution to something, GPT could code up a far-less-elegant-but-still-working one in about 5-10 minutes of prompting has all but solved this.

I went down this road and it doesn't free up time, you just get to fix many many more problems.

I should clarify - I don’t mean I use GPT to write these solutions, I leave them unsolved knowing that they’re solvable in a very inelegant way.

That makes me feel even more guilty for not solving them, now that I realize the solution is one or two orders of magnitude easier to do.

Not joking with orders of magnitude. At this point, I regularly encounter a situation in which asking ChatGPT/Claude to hack me a little browser tool to do ${random stuff} feels easier and faster than searching for existing software, or even existing artifacts. Like, the other day I made myself a generator for pre-writing line tracing exercise sheets for my kids, because it was easier than finding enough of those sheets on-line, and the latter is basically just Google/Kagi Images search.


Yeah, but if you let go of your years of coding standards / best practices and just hack something together yourself, it won't be much slower than ChatGPT.

For some value of "working".

Is 4 on 1 off really the best strategy? Seems like it just makes it a 20% chance that the thieves detect the AirTag, right?

Yes, I'm thinking of offering various set-ups in the future if I see that people are interested.

Make it timing-based and randomized. Sync the 'seed' during device init, and then the listener knows when to listen for the AirTag. The AirTag then turns on for a specified duration (random between some min/max time), and the listener picks it up.

Bonus points if the 'seed' is volatile.
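Something like this, as a toy sketch; bash's seedable $RANDOM stands in for whatever PRNG the firmware would actually use, and all names and intervals here are made up:

    #!/usr/bin/env bash
    # Tag and listener both run this schedule from the same shared seed,
    # so the listener knows exactly when the radio will wake up.
    RANDOM=42421                       # 'seed' exchanged at device init
    MIN_ON=5; MAX_ON=15                # beacon on-duration bounds, seconds
    while true; do
      sleep $(( 60 + RANDOM % 240 ))   # randomized quiet period: 1-5 minutes
      on=$(( MIN_ON + RANDOM % (MAX_ON - MIN_ON + 1) ))
      echo "radio on for ${on}s"       # stand-in for powering the beacon
      sleep "$on"
    done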


On an SSD, random and sequential reads have nearly the exact same performance. Even on large arrays of spinning rust this is essentially true.


This hasn't been my experience; I see much higher sequential read results compared to random reads on a wide range of storage, from low-end home PC SSDs to high-end NVMe flash storage in large servers.

It's certainly not true on actual hard drives, and never has been. A seek is around 10ms.


By what metric? I think this is close to true for identical block sizes, but most benchmarks test sequential transfers with large 1M blocks and random ones with small 4K blocks. In that case, the fastest NVMe drives are more than twice as fast on sequential transfers as on random ones.

I don't like comparing the two; they're completely different workloads, and it's better IMO to look at the IOPS for random transfers, which is where newer, faster SSDs truly excel, and where most people "notice" the performance.
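If you want to check this on your own hardware, compare like-for-like with fio by keeping the block size identical across both runs (filename and size are placeholders; libaio assumes Linux):

    # Sequential vs. random reads at the same 4K block size, with direct
    # I/O to bypass the page cache.
    fio --name=seq  --rw=read     --bs=4k --size=1G --direct=1 --ioengine=libaio --filename=testfile
    fio --name=rand --rw=randread --bs=4k --size=1G --direct=1 --ioengine=libaio --filename=testfile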


How’d you get this opportunity?


I work in broadcast TV in San Francisco and am very good friends with one of the engineers who is responsible for the care and maintenance of some of the facilities up there. We talked about him taking me up there for ten years before we finally got around to it. :-)


What issues? I'm not aware of any Java build process that checks timestamps.


JARs are archives, and archives have timestamps.

You can remove those with some extra work.


Just add a post-process step that normalizes the output artifact's timestamps (including the entries inside it)?

Wouldn't that work?


Yes, just add that.
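For instance, a quick sketch with unzip/zip (app.jar is a placeholder name; 1980-01-01 because ZIP timestamps can't go any earlier):

    mkdir unpacked && cd unpacked
    unzip -q ../app.jar
    find . -exec touch -t 198001010000 {} +   # clamp every entry's mtime
    zip -q -r -X ../app-normalized.jar .      # -X drops extra platform metadata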


GCP got this reputation because it’s a second-class citizen within Google. Google’s own internal infra (Borg, Blaze) is top-notch.

If Meta can pull off a public cloud correctly, I’d trust them greatly - they’ve shown significant engineering and product competence so far, even if they could use a more consistent and stable UI.


Don't all new projects go to GCP within Google?


Depends on what you mean by new. None of the new features for the mature product I work on have touched it.

On the side I have had GCP work, but it's been isolated, as if I were moonlighting.


Yeah at this stage it doesn't make sense to have two tiers within Google.


What hardware do you use for your router?


I use a https://www.pcengines.ch/apu2.htm with a separate wifi access point.

That's EOL now, so nowadays I'd look to ARM, e.g. https://radxa.com/products/network-computer/e52c


I use ancient+cheap Netgear SOHO routers (WNDR3700 v1 and v2 from ~2012) which can route 940 Mbps on ethernet (with software flow offloading enabled).

For a wireless AP I have a MediaTek MT7621 device; they are very well supported and provide proper wifi throughput.


https://protectli.com/ Good quality devices. Real serial consoles to allow recovery when you make a networking configuration mistake ;-)


Same here. Alpine Linux on top of that + Unbound DNS, dnsmasq for DHCP, netfilter, chronyd for time. I've never been able to make them break a sweat.


Curious: how did you set up the firewall (nftables?) and IPv6 delegation, both ULA and the public prefix? Happy to read if you have a write-up somewhere.


I disabled IPv6 as my little ISP has not yet figured out how they want to bill for or assign/segment it for static assignment. I have multiple static IPv4 addresses. I only use static IPs, but that is a requirement specific to me. The firewall is very simple and just forwards packets and uses a simple IPv4 SNAT. The only time I've had it set up more complicated was when a guest was abusing P2P, so I had to block it using string matches on the unencrypted commands.

My setup is honestly simple enough that a write-up would not benefit many. My Unbound setup to block many malicious sites is also fairly well documented by others. The null routing of commonly used DoH servers is straightforward. My Chrony setup would just annoy people, as I only use stratum-1 servers and the options would look like cargo-culting to some.

About the only thing not commonly discussed is the combination of sch_cake and some sysctl options to keep bufferbloat low, but OpenWRT has their own take on that topic already.
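For reference, a "simple IPv4 SNAT" in nftables looks roughly like this (wan0 is a placeholder interface; with static addresses you'd use 'snat to' instead of masquerade):

    nft add table ip nat
    nft add chain ip nat postrouting '{ type nat hook postrouting priority srcnat; }'
    nft add rule ip nat postrouting oifname "wan0" masquerade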


What if instead you bound your own DNS server to localhost:53 inside the network namespace? I suppose you'd still have to mess with /etc/resolv.conf in case it points at hardcoded public resolvers, like mine does.
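FWIW, ip-netns already has a convention for exactly that: files under /etc/netns/<name>/ get bind-mounted over their /etc counterparts inside the namespace, so the global resolv.conf stays untouched. Sketch, assuming a resolver is listening on localhost:53 inside the namespace ('sandbox' and the final command are placeholders):

    ip netns add sandbox
    mkdir -p /etc/netns/sandbox
    echo "nameserver 127.0.0.1" > /etc/netns/sandbox/resolv.conf
    ip netns exec sandbox getent hosts example.com   # resolves via the local server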


Read the whole blog post that quote was taken from:

https://wasmer.io/posts/wasmer-and-trademarks-extended

I don't think this is as much of a smoking gun as it is made out to be.


Attention is all we need.

