I love this! I really wanted to go down this road when my kids were younger, but the paucity of floppies and their low storage space sent me down the road of Avery business card printouts with RFID stickers on the back and a Raspberry Pi with an RFID reader inside. Of course, the author is using the floppies as hooks instead of as storage media... what a great idea. The tactile response and the art you can stick to them make them ideal for this purpose.
QR codes on cards would work as well, if I'm understanding what this project is. The floppy disk approach has some nostalgia maybe but seems quite fragile. I quickly learned to never let my kids handle CDs/DVDs (one of the worst physical media designs ever; they are totally unprotected) as they would quickly become damaged and unplayable. Floppy disks are at least sort of protected but the same idea applies.
I still have a large number of working CDs from when I, myself, was a kid. DVDs too but they were later and more durable.
I’ve always wondered what people are doing to them? Maybe I just got lucky. Maybe I was just careful with them. Maybe I don’t remember the ones that failed.
I don’t think kids are less careful now, although being screamed at for making the CD or record skip was probably a deterrent.
Some people really get the idea of only handling it around the edges. Lots of other people just handle them however they want, touching the media anywhere without a second thought. Especially kids, who often don't have the cleanest hands at any given moment.
Lots of kids will handle them however they want. They'll pick them up with greasy, sticky hands right on the media section. They won't necessarily care about ensuring they're properly in the drive tray. They'll jam all kinds of things into the drive slots. They'll drop them on the floor and step on them, toss them in a toy box when told to clean their room, etc.
Obviously not all kids will be this way, but many will.
I don't think I can get my hands on a floppy drive, but I still have an ancient computer somewhere with a DVD drive in it. While not as cool, I had been considering turning it into a simple media station for the specific purpose of letting my kid pick what music to play or video to watch by herself, without needing a screen to navigate it.
Like you, it never occurred to me that I can also just use specific DVDs or CDs as hooks for videos to be streamed, or media downloaded on a hard drive. So that suddenly makes the whole project a lot more interesting, and possibly easier too.
Buying a large pack of burnable DVDs is a lot cheaper and more sustainable than using SD cards like other commenters suggested.
I use Opus 4.5 and GPT 5.2-Codex through VS Code all day long, and the closest I've come is Devstral-Small-2-24B-Instruct-2512 running on a DGX Spark, hosted with vLLM as an "OpenAI Compatible" API endpoint that I use to power the Cline VS Code extension.
It works, but it's slow. Much more like set it up and come back in an hour and it's done. I am incredibly impressed by it. There are quantized GGUFs and MLX builds of the 123B that I haven't tried yet, which could fit on my 36GB M3 MacBook.
But overall, it feels about 50% too slow, which blows my mind, because we are probably 9 months away from a local model that is fast and good enough for my script kiddie work.
Does anyone know how this actually was done? Like, did they export every frame as a PNG and then run them each one by one through the model? Or did they somehow "load" the video into the model directly (which then internally somehow steps through each frame?)
Do yourself a favor and study for both your technician and general at the same time (I’m assuming you live in the US). HF is exponentially more fun than just VHF/UHF.
The US ham test question pools are fully public. Your test will be a mixture of questions from the pool. HamStudy basically lets you churn the question pool, and then will offer explainer text / references to back up each question and correct answer.
I went on a vacation and used their phone app any time I was standing in a line. You can set it to just keep spinning through the questions, with a bias towards ones you're getting wrong.
You need to get 37+ correct to pass. Another way to think of that is you can get up to 13 wrong and still pass.
Within each category there are subcategories. "Antennas and Transmission Lines" for example has 8 subcategories. The 8 questions in "Antennas and Transmission Lines" are one from each of those subcategories. The question pools for these subcategories each have 10-14 questions.
If you compare them to the closest corresponding categories/subcategories from the General and Technician exams, you'll probably find a few distinct cases:
1. The Extra is just more of the same. It's not harder per se. "Commission Rules" for example.
2. The Extra goes deeper and also adds new material that is more advanced.
3. The Extra is in new territory.
If you get to the point where the Technician and General are going to be no problem, then you will probably have no trouble getting to the point where case #1 is also no problem, and case #2 is also well in hand. It is case #3 where you might have trouble.
But remember that you can get 13 wrong and still pass!
Pick, say, 10 subcategories in case #3 that look like they would be the hardest to get good at, and just write them off.
For example, in "Antennas and Transmission Lines" you might decide that the "Smith chart" subcategory, which has a pool of 14 questions, would take a lot of time to get good at. So skip it. That's 14 fewer potential questions you have to be prepared to answer, leaving more time to study for things in case #2 and the case #3 things that look most doable.
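To make the arithmetic concrete, here's a quick sketch. It assumes, as described above, that the exam draws exactly one question per subcategory; the "skip 10" figure is just the example number from the strategy:

```python
# Back-of-the-envelope for the "write off hard subcategories" strategy.
# Since the exam draws one question per subcategory, each subcategory you
# skip costs you at most one question on test day.
EXAM_QUESTIONS = 50
PASSING_SCORE = 37
MAX_WRONG = EXAM_QUESTIONS - PASSING_SCORE  # 13

def worst_case_margin(skipped_subcategories: int) -> int:
    """How many questions you can still miss elsewhere, assuming every
    skipped subcategory's question comes up and you get it wrong."""
    return MAX_WRONG - skipped_subcategories

print(worst_case_margin(10))  # skip 10 subcategories -> 3 to spare
```

So even writing off 10 whole subcategories leaves a 3-question cushion for honest mistakes.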
It doesn't cost extra to take the Extra test at the same session that you take the Technician and General tests, and there is no penalty for failing, so might as well go for it.
Huh, I never heard that one. Extra gets you more frequency privileges (nice not having to worry so much about band edges) but IMO the real benefit is being able to enjoy reciprocal operating under CEPT when traveling abroad.
Only the entry level license (Technician in the US, covering UHF/VHF) is substantially different from the German one, and it's also much more restricted. Germany in general is a much better country for radio, especially if you ever wanted to do high-power broadcasting.
I think the one perk to the US is that the FCC has basically stopped caring about all but the most important frequencies. This makes HF particularly fun, since HF pirate radios are often the best listening stations in the entire RF spectrum. I have no idea what that's like in Germany, but I would imagine, given the general Ordnung culture and veneration of rules, German hams are much less tolerant of flagrantly unauthorized broadcast stations and your regulatory bodies are more proactive in shutting them down.
These drugs require constant monitoring by physicians who understand what to look for and are actively involved in adjusting dosage for them to be even remotely safe. Muscle atrophy is considered an "uncommon side effect", but that's just because most studies aren't performed over long enough periods of time. People tend to be kept on statins permanently, and that changes the safety factor considerably, as it does with any medicine taken over years.
My anecdotal evidence is based on multiple family members on these drugs and how, even in Norway, with our fairly good healthcare system, it's not monitored sufficiently to avoid issues. There are a lot of strong feelings around statins, as is apparent in the other response to my comment, but the claim that they're safe except in outlier cases is simply not true.
The numbers suggested in this article (and to be clear, what you have linked is not a study; it is a discussion by graduate students in a PT/exercise science program. Not to disparage them, but this is not primary research) do not match the results we see in actual RCTs or meta-analyses of them.
See the bottom half of my comment at https://news.ycombinator.com/item?id=45446913 - significant muscle damage is exceedingly rare, and ~90% of the muscle pain reported in the RCTs ended up not being related to the statins at all.
Your claims are simply not backed by the data. These are some of the most prescribed and studied drugs on the planet.
I don’t disagree with your point, but I would just like to point out that there are over 100 known post-transcriptional modified RNA bases [1]. In fact, tRNAs contain more modified bases than canonical ones if taken as a whole. AND! The ribosome can’t function without all of its modifications. If I were to put money toward “targeting an RNA to make a drug”, rRNA is where I’d aim…
Yeah you can, so long as you're hosting your local LLM through something with an OpenAI-compatible API (which is a given for almost all local servers at this point, including LM Studio).
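As a concrete sketch, "OpenAI-compatible" just means the server speaks the same `/chat/completions` JSON over HTTP. The URL and model id below are assumptions: LM Studio's local server defaults to `http://localhost:1234/v1`, and the model name is whatever you have loaded.

```python
# Minimal stdlib-only sketch of talking to a local OpenAI-compatible
# endpoint. BASE_URL and the model id are assumptions for illustration.
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default port

def build_chat_request(prompt: str, model: str = "local-model") -> dict:
    # Same JSON shape the hosted OpenAI chat endpoint expects, which is
    # why editor extensions can point at a local server unchanged.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def chat(prompt: str) -> str:
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In practice you'd just paste the base URL into the extension's settings rather than write this yourself; the point is only that the wire format is identical.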
That said, running agentic workloads on local LLMs will be a short and losing battle against context size if you don't have hardware specifically bought for this purpose. You can get it running and it will work for several autonomous actions but not nearly as long as a hosted frontier model will work.
Unfortunately, IDE integration like this tends to be very prefill intensive (more math than memory). That puts Apple Silicon at a disadvantage without the feature that we’re talking about. Presumably the upcoming M5 will also have dedicated matmul acceleration in the GPU. This could potentially change everything in favor of local AI, particularly on mobile devices like laptops.
Cline has a new "compact" prompt enabled for their LM Studio integration which greatly alleviates the long system prompt prefill problem, especially for Macs which suffer from low compute (though it disables MCP server usage, presumably the lost part of the prompt is what made that work well).
It seemed to work better when I tested it, and Cline is supposedly adding it to the Ollama integration too. I suspect that type of alternate local configuration will proliferate into the adjacent projects like Roo, Kilo, Continue, etc.
Apple adding hardware to speed it up will be even better, the next time I buy a new computer.
The author says 36GB unified RAM in the article. I run the same-memory M3 Pro and LM Studio daily with various models up to the 30b parameter one listed, and it flies. I can't differentiate between my OpenAI chats and local ones aside from modern context, though I have a Puppeteer MCP which works well for web search and site-reading.