He has access to the same resources that everyone here does, probably more, actually. Instead of retweeting the first "interesting" thing that pops up in his feed, hear me out on this: maybe he could have quickly Googled/DuckDuckGo'd/Kagi'd/Yandex'd it before he retweeted it.
No, they don't teach "meme coins" at university, but I don't think that's really relevant.
A 12 V power supply … an SDR that jams ELRS? It's like they don't even know what ELRS is or how it works. ELRS frequency-hops across its whole band, so you'd have to jam the entire band at once, and an SDR that could jam that wide a swath of spectrum all at once would be very, very expensive.
Also, you can just buy a purpose-made 300 W jammer on AliExpress.
It supports AMD CPUs because, if I understand correctly, AMD and Intel cross-license x86, so OpenVINO's CPU plugin targets the same x86-64 instruction set on AMD chips as on Intel's.
Go look at CPU benchmarks on Phoronix; AMD Ryzen CPUs regularly trounce Intel CPUs at OpenVINO inference.
Or use the underlying open-source models directly; this is just several existing open models packaged with an Intel-specific deployment framework and wrapped as Audacity plugins.
There are existing frontends for these models that aren't tied to Intel hardware. They may be somewhat less convenient than Audacity plugins, but they certainly exist for people who want to use the models without being limited to Intel hardware.
This is the fault of the regulators. There's no reason new discoveries couldn't be put in a queue for training a new AI; once there are enough to make a training run worthwhile, you do the run, then give doctors both the old model and the new one, have them run both, and compare the results.