I think Raspberry lost the magic of the older Pis, they lost that sense of purpose. They basically created a niche with the first Pis, now they're just jumping into segments that others created and are already filled to the brim with perhaps even more qualified competition.
Are they seeing a worthwhile niche for the tinkerers (or businesses?) who want to run local LLMs with middling performance but still need a full set of GPIOs in a small package? Maybe. But maybe this is just Raspberry jumping on the bandwagon.
I don't blame them for looking to expand into new segments, the business needs to survive. But these efforts just look a bit aimless to me. I "blame" them for not having another "Raspberry Pi moment".
P.S. I can maybe see Frigate and similar solutions driving the adoption for these, like they boosted Coral TPU sales. Not sure if that's enough of a push to make it successful. The hat just doesn't have any of the unique value proposition that kickstarted the Raspberry wave.
Yep. RPi foundation lost the plot a long time ago. The original RPi was in a league of its own when it launched since nothing like it existed and it was cheap.
But now if I want some low power Linux PC replacement with display output, for the price of the latest RPi 5, I can buy on the used market a ~2018 laptop with a 15W quad core CPU, 8GB RAM, a 256GB NVMe SSD and a 1080p IPS display, which is orders of magnitude more capable. And if I want a battery powered embedded ARM device for GPIO over WiFi, I can get an ESP32 clone, which is orders of magnitude cheaper.
Now the RPi at sticker price is only good for commercial users, since it's still cheaper than dedicated industrial embedded boards, which I think is the new market the RPi company caters to. I haven't seen an embedded product company that hasn't incorporated RPis in the products it ships, or at least in its lab/dev/testing stage. So if you can sell your entire production stock to industrial users who will pay top dollar, why bother making less money selling to consumers? Just thank them for all the fish. Jensen Huang would approve.
I still use Pis in my 3D printers. A laptop would be too big, and an ESP could not run the software. A "China clone" might work, but the nice part of the Pi is the images available. It just works™
I'm also currently building a small device with a 5" touchscreen that can control a MIDI FX pedal of mine. It's just so easy to find images, code and documentation on how to use the GPIO pins.
Might be niche, but that is just what the Pi excels at. It's a board for tinkerers and it works.
You can run Klipper on any Linux SBC with a USB port. RPi works, but so does an old router that supports OpenWRT, a cheap Android TV box that can be flashed to run Linux, or any of the OrangePi/Banana Pi/Allwinner H3 boards. You don't really need a hardware UART because most of the printer boards you'd be using have either native USB or USB-to-UART converters. For that pedal, would an old Android tablet that supports USB OTG work? Because that's got to be much cheaper, and with a much better SDK.
Correct. But when I looked into it a few years back for OrangePi, it was not as easy as downloading Raspbian. All the images made for the Pi would not work; you had to download a kernel from somewhere else or something like that? Sorry, I don't remember the details, but it was not as easy as a Pi.
How much cheaper than 50 bucks can a tablet get? With the Pi I can quickly, in a hacky way, connect rotary encoders with female-to-female DuPont cables and use a Python GPIO library made for the Raspberry Pi.
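For the curious, the core of that hacky encoder setup is small enough to sketch in plain Python. This is just the quadrature (Gray-code) decoding logic that a Pi GPIO library callback would feed; the pin reads are abstracted out so it runs anywhere, and on a Pi you'd wire `update()` to `GPIO.input()` reads of the two encoder channels (pin numbers and wiring are up to you, not from the thread):

```python
# Sketch: quadrature decoding for a rotary encoder.
# Valid Gray-code transitions move the count +1 or -1;
# anything else (contact bounce) is ignored.
_TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

class QuadratureDecoder:
    def __init__(self):
        self.state = 0b00   # last seen (A, B) levels packed into two bits
        self.position = 0   # net detent count

    def update(self, a: int, b: int) -> int:
        """Feed the current levels of channels A and B; returns net position."""
        new_state = (a << 1) | b
        self.position += _TRANSITIONS.get((self.state, new_state), 0)
        self.state = new_state
        return self.position

d = QuadratureDecoder()
# One full clockwise cycle 00 -> 01 -> 11 -> 10 -> 00 advances by 4.
for a, b in [(0, 1), (1, 1), (1, 0), (0, 0)]:
    d.update(a, b)
today_position = d.position
```

Libraries like gpiozero wrap exactly this kind of state machine behind a `RotaryEncoder` class, so in practice you rarely write it yourself.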
See my other comment: the Pi 5 2GB is about ~10% more expensive than the 3 or 3B at release when factoring in inflation. ~60 EUR including 25% VAT.
With PSU it's 77 EUR including 25% VAT right now.
4 GB version + case + 64GB SD + PSU = 135 EUR, but I don't need that much ram, disk space or the case. When I put it into a 3d printer I also don't need the PSU.
> When I put it into a 3d printer I also don't need the PSU
Unless you want your printer to power up on demand, then you need a separate PSU and an SSR (and you still need a buck converter because printers don't supply 5V at required amperage).
Yeah, the Pi 5 2GB is ~20% more expensive compared to the Pi 3B at release, factoring in inflation (both including VAT and local prices).
It's 10 bucks more. ¯\_(ツ)_/¯ Still half the price I see Intel NUCs selling for. Which of course are way more capable. But still, I don't mind the price that much.
I could go with a cheaper alternative, but then AFAIK you might have to fiddle with images, kernel and documentation. For me that is worth 10 bucks.
2. Testing software on ARM64 Linux. Pis are still cheaper than used Apple Silicon Macs, and require less fiddling to run Linux. I currently have a free Oracle Cloud instance that would work just as well for this, but it could go away at any time and it's a PITA to reprovision.
3. Running Mathematica, because it's free on Pi, I only use it a few times a year, and a fully-loaded Pi 5 is cheaper than a single-year personal license to run it on any other platform.
4. Silly stuff like one Pi 3 I have set up to emulate a vintage IBM mainframe.
> Yeah, the Pi 5 2GB is ~20% more expensive compared to the Pi 3B at release, factoring in inflation (both including VAT and local prices)
I don't really care how it compares to past models or inflation to justify its price tag. I was just comparing it to what you can buy on the used market today for the same price, and it gets absolutely dunked on in value proposition by notebooks, since the modern full-spec RPi is designed to be more of an ARM PC than a cheap embedded board.
60 Euros for 2GB and 100 for 8GB models is kind of a ripoff if you don't really need it for a specific niche use case.
I think an updated Pi-zero with 2GB RAM and better CPU stripped of other bells and whistles for 30 Euros max, would be amazing value, and more back to the original roots of cheap and simple server/embedded board that made the first pi sell well.
Yeah, but it wasn't as good an alternative to a Pi back then, since 8 year old notebooks 10 years ago (so 18 year old notebooks today) were too bulky and power hungry to be a real alternative. Power bricks were all 90W and CPU TDP was 35-45W. But notebooks from the 2018 era (Intel 8th gen) have quite low power chips that make good Pi alternatives nowadays.
The mobile and embedded x86 chips have closed the gap a lot in power consumption since the Pi first launched.
Now you can even get laptops with broken screens for free, and just use their motherboard as a home server alternative to a PI. Power consumption will be a bit higher, but not enough to offset the money you just saved anytime soon.
You can get a 5-year-old laptop with a perfectly working screen for free if you're on good terms with the owner of a company who has a stack of them sitting in a storage closet waiting for disposal. :)
Which is basically just cutting out the middlemen in a transaction that might cost $100 on eBay.
Used corporate laptops are particularly cost-effective if you're interested in running Windows, as unlike Intel NUCs and most SBC products, they typically include hardware-locked Windows 10 Pro licenses which can be upgraded to Windows 11 Pro for free.
5-year-old laptops for free aren't really a thing in most of Europe, unless maybe you're in Norway or some super rich country where $100 is pocket change. In most large places I worked in Europe, laptops are leased from a service provider, not owned by the company. When they're obsolete they get sold in bulk locally or abroad. But never for free.
That comparison was true back in 2012 when the first version was released, too.
Things like used PCs and forgotten closet laptops were running circles around brand-new Raspberry Pi systems, in performance per dollar, for as long as we've had new Raspberry Pis to make that comparison with.
Those first Pis didn't even have wifi, and they were as picky about power supplies and stuff back then as a Pi 5 is today.
The primary aspects that are new are that the featureset of new models continues to improve, and the price of a bare board has increased by an inflation-adjusted ~$10.
(Meanwhile: A bare Pi 3B still costs $35 right now -- same as in 2016. When adjusted for inflation, it has become cheaper. $35 in 2016 is worth about $48 today.)
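The arithmetic behind that figure is just a cumulative-CPI multiplication; the multiplier below is an assumed round number (the exact value depends on which months' BLS CPI data you compare):

```python
# Back-of-envelope inflation adjustment for the $35 Pi 3B launch price.
# cpi_multiplier is an assumed ~1.35 cumulative 2016 -> today CPI ratio;
# look up exact CPI data for a precise figure.
cpi_multiplier = 1.35
launch_price_2016 = 35.00
today_equivalent = launch_price_2016 * cpi_multiplier
print(round(today_equivalent, 2))  # ~47, in the ballpark of the ~$48 quoted
```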
I had just gotten into Arduinos when the first Raspberry Pi came out.
I noticed I can do 90% of the stuff I'd use an Arduino for with a RPi, except I had the full power of an internet connected Linux machine available. The Arduinos are still collecting dust somewhere =)
But now we have the ESP32 filling the same niche along with the Pi Zero W, so I don't really understand the purpose of the RPi 4 and 5. They're neither cheap for what they offer nor very powerful by any measure.
You don't even need a full laptop, any Chinese miniPC will blow the RPi5 out of the water AND some of them have expandable storage+RAM, while also having 5-20x more CPU/GPU oomph. They do consume a few watts more power, so there _might_ be a niche for the Raspberry Pi, but it's not a big one.
You don't buy the Pi for its price: performance as a desktop replacement, you buy it for the incredible stability of the platform as a target, the support, and the addon ecosystem. If you want to screw around with taking a motherboard out of a laptop, go right ahead. €160 for a 16GB Pi5 that I know for sure will be available, replaceable, and supported for the next decade is more than worth the small investment.
> I don't really understand the purpose of RPi 4 and 5. They're neither cheap for what they offer nor very powerful by any measure.
They are good for commercial installations like smart displays in stores (think big screens with menus behind fast food counters) and information kiosks. The extra HDMI port lets you drive two screens with one Pi, and the extra processing power keeps the UI smooth at high resolution. They also have hardware-accelerated video decoding for shops wishing to play hi-res promo videos, and for hobbyists building media terminals.
Cost is not a major concern here because the installation volume is low and there are far bigger expenses anyway. Just take a look at how much commercial displays are. The Pi company’s future supply guarantee is also nice because you know that within a given number of years if something breaks or you need another screen, you can just buy another identical pi and be done with it. Good luck sourcing a Chinese mini pc with compatible footprints, port orientations etc five years down the road.
However, I think it is way closer to their original vision than anything else; i.e. it is a lot like the 1980s computers, the magic they were trying to capture.
The Clockwork Pi experience with the CM4 is not good. 10 months to ship. Horrible WiFi performance, can't hold a link, and it only has around 50 minutes of battery life. I regret my purchase and it's sitting in my rack next to a bunch of old ham radios.
> I don't think I could get an RPi as cheaply once parts and power supply etc. are taken into account
The RPi Zero 2W costs $15 and runs HA just fine. One can splurge on a pricey case, microSD, and high-amp GaN charger, and still be under 50% of your spend. You don't have to buy the flagship RPi.
500MB is just not enough RAM for a good HA experience. I would go with at least 2GB. If you have a few add-ons running I would go with 4 or more.
If it's an option I would always go with an SSD for HA. It makes a big difference in usability. Writing often and a lot to SD cards, like HA does, kills them way too fast
> 500MB is just not enough RAM for a good HA experience
Strong disagree: my experience has been great; my HA has been running on a Zero 2W for more than 2 years! I have several HACS plugins enabled - just not any of the video or AI stuff. The same Pi concurrently runs Pi-hole. For a while, it also acted as a git mirror via iSCSI-backed Gitea, but I had to migrate Gitea off of it since it was memory-hungry. You can do a lot with 500MB in headless mode.
What moving parts do competitors have to be less mechanically reliable?
In fact, a NUC or used laptop would be even more reliable since you can replace NVME storage and RAM sticks. If your RPI ram goes bad you're shit out of luck.
>RPi will still have lower power consumption and is far more compact.
Not that big of an issue in most home-user cases as a home server, emulator or PC replacement. For industrial users where space, power usage and heat are limited, definitely.
>I'm in the market to replace my aging Intel NUCs, but RPi is still cheaper.
Cheaper if you ignore the much lower performance and versatility vs an x86-64 NUC as a home server.
I don't want a used laptop. I have my NUC mounted inside a small enclosure on a bracket, with PoE for power. It's a single-purpose device, only used for HomeAssistant and nothing else. It also has to be located centrally in my house for better ZigBee/ZWave networking.
Unfortunately, it's close to dying. The heat from the CPU disintegrated the plastic of the SATA cable header on the motherboard. I fixed it for now with a bit of glue, but it's not going to hold indefinitely. And NUCs were pretty pricey.
RPi with a SATA/M.2 disk and a PoE hat is not that much cheaper than Intel, but it uses much less power. They also tend to not have cables that are kept under mechanical strain. I have a single-purpose RPi that's been running since 2014, and it's doing just fine.
It feels like you think that the parent hasn't really considered their options and doesn't know what they really want.
> Not that big of an issue in most home-user cases as a home server
I don't know what "most home users" want, but I can understand wanting something more compact and efficient (also easier to keep cool in tighter or closed spaces), even at home.
> Cheaper if you ignore the much lower performance and versatility vs an x86-64 NUC as a home server.
Or maybe they noticed they don't need all the performance and versatility. Been there. It's plenty versatile and can run everything I need.
I agree completely - the NUC segment has a gaping hole post 2023, and faster raspberry pis can probably fill a lot of it especially for small scale commercial stuff.
There are dozens and dozens of NUC-style / form factor machines available these days, especially cheap ones from China. Not sure what you mean by a gaping hole post-2023. I'm running 3 of them with N97 and N150 CPUs, all bought within the last 18 months.
There are very few which are suitable for integration into other products - I currently build a scientific instrument that needs a fairly powerful SBC to run. Intel NUCs were well supported and documented: all of their firmware was updatable on linux without any issues, they had data sheets with power specs so you could run them off DC predictably, and you could buy boards without a case. There are plenty of small NUC shaped mini-PCs but few that are suitable for integration (at the price point intel was at).
Cheap Chinese mini PCs just aren't well documented and don't have predictable supply.
After you buy a case and a real disk for the Pi, the cost savings are gone.
Meanwhile you can pick up a used 8th-gen Intel 1L form factor PC for about 100 bucks. You can pick up one that will take a PCIe card for 150-ish bucks, with remote management.
The 8th-gen or better Intel has all sorts of extra features that may make it worthwhile (transcoding/video support).
Not just laptops, but also the used enterprise micro PCs from Dell, HP, and Lenovo. All the same small form factor with very low TDP. You can have up to 32 or 64 GB of RAM depending on the CPU, dual or even triple disks if you want a NAS, etc.
yeah, depends on what the used market looks like where you live. Here I see way more laptops for sale for cheap than those enterprise thin clients.
And the thin clients, when they are for sale, tend to have their SSDs ripped out by IT for data security, so then it's a hassle to go out and buy an extra SSD, compared to just buying a used laptop that already comes with display, keyboard, etc.
In the Netherlands the first-generation RPi was only sold to users with a Chamber of Commerce registration, so I figured this was always the typical end user for it. Like schools, universities, prototyping for companies. Was the RPi in the rest of the world targeted towards home users?
What? I ordered the original Pi in May 2012 from Farnell/Element14 without a Chamber of Commerce registration (KvK nummer). A couple of my colleagues did too.
>I can buy on the used market a ~2018 laptop with a 15W quad core CPU, 8GB RAM, 256 NVME and 1080p IPS display, that's orders of magnitude more capable..
But it won't be as reliable; most of those motherboards won't last long.
Don't know what your source is for that, but that's not my experience, and i've had dozens of laptops through my hands due to my hobby.
The ticking-timebomb lemons with reliability or design issues will just die in the first 2-4 years like clockwork, but if they've already survived 6+ years without any faults, they'll most likely remain reliable from then on as well.
Why not 50 more years, if we're just making up numbers? I still have an IBM ThinkPad from 2006 in my possession with everything working. I also see people with MacBooks from the era with the light-up Apple logo in the wild and at DJ sets.
In your comment you didn't say Apple computers or Thinkpads. Those are different. I was talking about plain old vanilla business class laptop (because we are talking about raspberry alternative).
I have computers that are ~20 years or even more and still work fine. My main computer which I just replaced is ~14 years old (with some components even older than that), was used every single day, and is now a perfectly functioning server. I have stacks of SFFs and minipcs from eBay going back to 2008 but most from 2012-2015, which have been running virtually uninterrupted for a decade, and still working fine. I have several laptops from different OEMs, business and consumer lines, that are as old as 2008 and have been used regularly for at least 10 years, all still fine.
I understand what you're saying but saying it isn't enough. There's nothing to support your claim.
>I can buy on the used market a ~2018 laptop with a 15W quad core CPU, 8GB RAM, 256 NVME and 1080p IPS display, that's orders of magnitude more capable..
If it is a generic laptop, yes, 10 years is a stretch. Components used in the motherboard are probably not high quality enough to last more than 10 years. A manufacturer does not have an incentive to put high quality stuff (that is probably costlier) in a laptop whose only selling point is being cheap for the features, not reliability or longevity.
One might get lucky with such a laptop, but I won't count on it.
Again, is that just a feeling, or do you have some data to actually show this? In my experience even old basic Acer laptops easily last more than 10 years - probably without the battery and married to the charger forever by now, but they will work fine. But I don't go on the internet and tell everyone laptops last more than 10 years just because I know of a few Acers lasting that long. Likewise, do you have any statistics on the longevity of Raspberry Pis?
Sure, but $200 on eBay will get you something along the lines of a Dell Latitude, with decent build quality, cost-optimized more than a flagship workstation-class laptop, but certainly not designed to squeeze out the last penny at the expense of reliability or repairability like the cheapest consumer models.
And if you buy a 5-year-old corporate laptop in very good condition with minimal visible wear on the keyboard and touchpad, it was likely only used as a desktop replacement connected to a dock, so unlikely to have suffered abuse not apparent from visual inspection alone.
If you're planning to use it as an actual laptop, price out a replacement battery before purchase, as battery capacity will degrade over time, even if the laptop is exclusively used on AC, so will always be something of a crapshoot.
Otherwise, I'd expect the rate of component failure to be no higher than for any other lightly-used laptop of similar vintage, which is low.
One thing is that a Raspberry Pi has fewer of them, so there's less chance of one becoming faulty.
Regarding higher quality components: I think for the use case of the Raspberry Pi (I mean the kinds of things it is supposed to be used for), reliability is more important.
> Regarding higher quality components: I think for the use case of the Raspberry Pi (I mean the kinds of things it is supposed to be used for), reliability is more important.
That you think reliability is more important for a Raspberry Pi use case than for a laptop doesn't somehow magically make it a fact that its components are of higher quality than your average laptop's. You only speculate, and then speculate further on the basis of your original speculation. That's not how you arrive at a basis for a factual claim or an estimate.
Sure, there's other numbers to find as well, but I'd suggest that they're pretty comparable in the way they handle environments. If one would fail, so would the other.
The Raspberry Pi probably still has the advantage of an actually robust firmware/software ecosystem? The problem with SBCs has always been that the software situation is awful. That was the Raspberry Pi's real innovation: Raspbian and a commitment to openness.
Fragmentation in the non-x86 world really hurts adoption. RPi presents a very well documented configuration that can be used as a target for development.
RISC-V is going through this exact same problem right now. All of the current implementations have terrible documentation, and tailoring Linux for each of these is proving to be difficult. All of these vendors include on-board devices that have terrible doc and software support.
ARM has a mitigation for this called SystemReady. It's basically "does your board support UEFI enough to usefully boot a battery of generic ARM Linux images". The Raspberry Pi can be made SystemReady, and Radxa also makes SystemReady-compliant SBCs you can buy.
RISC-V would do well to adopt and promote a similar spec.
Whenever the Pi gets brought up people on HN will tell you you are wasting money if you are not buying a Chinese Pi clone with beefier specs instead. But at least for me, "Being able to boot into a Linux system without having to dig through outdated wikis and Chinese language support forums to hunt down a google drive link to an OS image from 2021 that has since then received zero updates" is definitely worth paying ten extra dollars for.
really good point, but I actually don't think the Pi's biggest competitor is Chinese clones, but just regular laptops and mini-pcs that have dramatically lowered in price over the years. I do still think Pis have a purpose though, it's just harder to justify in certain cases.
It depends on what you use your Pi for. If you are self hosting anything beyond what a Pi Zero can handle then a mini PC would most likely be a better choice than a Pi 4 or 5. If you are building embedded devices or are a hobbyist then the Chinese ARM SBCs are definitely in the competition.
You don't need an ARM, and for the same price, before the RAM crisis, you could get many tiny PCs for the same amount of money, with proper SSDs and power supplies and in tiny form factors.
The problem here is the "something like this". I may buy one of these today. Then in 6 months, I have to buy a different one, only to find out that its firmware has a bug. Or it uses a different Ethernet chip that is a problem for me for some reason. Or something like that.
The RPi I can depend upon to be shitty as well, but in the exact same way. So it stays fit for purpose.
> [...] if I want some low power linux PC replacement with display output, for the price of the latest RPi 5, I can buy on the used market a ~2018 laptop
Nah, they released products better suited to what people were already using Pis for.
The Picos are great for the smaller stuff, new Pis are great for bigger stuff, and old Pis and Zeros are still available. They've innovated around their segment.
The AI stuff is just an expression of that. People are doing AI on Pi5s and this is just a way to make that better.
The original Pis are still for sale, are cheap, and still do everything you need. This doesn't conflict with an expanded product line. The whole reason for Pi is still GPIO plus general purpose computing. AI is now a part of general purpose computing, so it only makes sense to adopt it too.
The things you can do locally with AI now are amazing. For several years there's been multiple open source products that can do both audio and visual processing locally using AI models. Local-only Home Assistant is almost equivalent to Siri. The more things you throw at it, the more computing power it needs (especially for low latency), and that's where the dedicated GPUs/NPUs (previously ASICs) are needed. And consider the expanded use cases; drones and robots can now navigate the world autonomously using a $150 SoC and some software.
I think this is a miss on what the Pi is: an experiment. Sure, it stood on the shoulders of other SFF boards that came before it - but it broke into the general computing landscape targeting makers and builders. If the AI hat doesn't work out, so be it. The use cases for this type of hat may yet to be seen. On one hand it may feel shortsighted to bringing hardware to market with no explicit use case, but that's part of the Pi brand.
As someone else mentioned: if the hat could efficiently be leveraged with the YOLO models on Frigate for a low volume camera setup that could be a nice niche use case for it.
Either way I hope the RPi org keeps dropping things like this and letting the users sort out the use cases with their dollars.
> I think Raspberry lost the magic of the older Pis, they lost that sense of purpose. They basically created a niche with the first Pis, now they're just jumping into segments that others created and are already filled to the brim with perhaps even more qualified competition.
I don't think you will find anything on the market enabling you to create your own audiophile quality AMP, DAC, or AMP+DAC for a pretty attractive price except a Pi 3/4/5 with a HifiBerry (https://www.hifiberry.com/) HAT.
This is why I keep buying 3B+ and Zero 2 W and not any of the newer versions...it's much more in keeping with the relatively low cost board with GPIO and reasonable compute. It's kind of the last one they made that does what I kind of expected out of a Raspberry Pi at a reasonable price point. If I needed more compute I would have skipped the travesty that is ARM and just bought an x86 system.
I still remember the email I got telling me that they were going to upgrade the RAM of the 256MB Model B I ordered, and that I would receive a brand new model with 512MB for no extra cost. Hard to believe that was nearly 14 years ago.
It was very helpful in my learning Linux. The only alternatives I had at the time were a few old Pentium 4 machines, which were very noisy, and my parents didn't like me leaving them turned on for a long time.
Not everything needs to be for everyone. I think this is super cool - I run a local transcription tool on my laptop, and the idea of miniaturising it is super cool.
I wouldn't dare suggest that. The RPi was never for everyone yet it turned out it was for many. It was small but powerful for the size, it was low power, it was extremely flexible, it had great software support, and last but not least, it was dirt cheap. There was nothing like that on the market.
They need to target a "minimum viable audience" with a unique value proposition otherwise they'll just Rube-Goldberg themselves into irrelevance. This hat is a convoluted way to change the parameters of an existing compromise and turn it into a different but equally difficult compromise. Worse performance, better efficiency, adds cost, and it doesn't differentiate itself from the competing Hailo-10H-based products that work with any system not just RPi (e.g. ASUS UGen300 USB AI Accelerator).
> the idea of miniaturising
If you aren't ditching the laptop you aren't miniaturizing, just splitting into discrete specialized components.
This is the problem with this generation of "external AI boards" floating around. 8, 16, even 24 GB is not really enough to run much that's useful, and even then (i.e. offloading to disk) they're so impractically slow.
Forget running a serious foundation model, or any kind of realtime thing.
The blunt reality is fast high memory GPU systems you actually need to self host are really really expensive.
These devices are more optics and dreams (“itd be great if…”) than practical hacker toys.
I feel like if RPI doubled down heavily into education, they would be in a much better spot. They really could never win on price in the long run. But having a bit of K-12 and university budgets going to RPIs every year, especially during the "teach the kids programming" era, would I think make them a much healthier business.
It's actually not great since Ethernet is over USB on the pi 4 (edit this is not true, confused with pi 3). Not that it doesn't work, but I'd rather have an N100 minipc.
OTOH with ram prices being where they are and no signs of coming back down in the foreseeable future a second hand pi 4 may be a very wise choice.
The thing with the Chinese SBCs is that (like every other player in the embedded world) they don't give a flying fuck about upstreaming their code to the Linux kernel.
Of course, Raspberry Pi just like everyone else has their custom patches, but at least to my knowledge you can use a straight Linux kernel and still have a running system.
My biggest issue is the lack of really good cases. There are all those fancy peripherals you can buy, but it's really hard to find simple case that works without overheating and no cables sticking out on all four sides.
People have for quite some time been using Google's Coral TPU to accelerate AI workloads on the Pi. I doubt that anyone runs LLMs on Pis, but stuff like security cameras with object detection...
I read somewhere that Google has largely abandoned support for their Coral TPUs? I still use a Coral TPU for Frigate NVR. But not sure how long they'll be supported.
IMO this is a consequence of Raspberry Pi going for-profit and IPOing. Now they are incentivised to chase the same hype trains as every other public tech company. I can't see them having another "Raspberry Pi moment", those are too risky now.
That said, more options at the (relatively speaking) low end of the AI hardware market probably isn't a bad thing. I'm not particularly an AI enthusiast generally, but if it is going to infest everything anyway, then at least I would like a decent ecosystem for running local models.
Oh, I did not realize it was getting that first class treatment, I thought (from only reading the article) that this was just a hat made by a third party and sold for the ecosystem.
In regard to their niche, their niche is a ridiculously well-documented ecosystem for SBCs. Want to do something with your RPi? You can find it on Google, and the LLM of your choice is probably trained to give you the answer on how to do it. If you're just tinkering or getting a POC ready, that's a big help.
Of course, if you're in the business of hardware prototyping, and have a set of libraries and programs you know you're going to work with, you don't need to care as much.
Reclining seats are more expensive and heavier. The target customer for a low-cost flight is cost-sensitive and more resistant to "punishment". The expense would be hard to recoup.
I still see ashtrays on older planes, trains, and boats. Sometimes older stuff is left there because it's not financially advantageous to replace it. You can use the recline button to your liking, but it can be inconsiderate to do it. Traveler discretion is advised.
A question you can always ask yourself is "should I do it just because I can do it?". It will stop you from being needlessly inconsiderate many times, and maybe even make you a better person.
> I'd rather expose a Wireguard port and control my keys than introduce a third party like Tailscale.
Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.
> I am not sure why people are so afraid of exposing ports
It's simple: you increase your attack surface, and with it the effort and expertise needed to mitigate that.
> It's the way the Internet is meant to work.
Along with no passwords or security. There's no prescribed way for how to use the internet. If you're serving one person or household rather than the whole internet, then why expose more than you need out of some misguided principle about the internet? Principle of least privilege, it's how security is meant to work.
Ah… I really could not disagree more with that statement. I know we don’t want to trust BigCorp and whatnot, but a single exposed port and an incomplete understanding of what you’re doing is really all it takes to be compromised.
Same applies to Tailscale. A Tailscale client, coordination plane vulnerability, or incomplete understanding of their trust model is also all it takes. You are adding attack surface, not removing it.
If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.
If you are exposing a handful of hardened services on infrastructure you control, Tailscale adds complexity for no gain. If you are connecting machines across networks you do not control, or want zero-config access to internal services, then I can see its appeal.
I'll take this to mean that you think arbitrary access to a computer's capabilities will require licensure, in which case I think this is a bad metaphor.
The point of a driver's license is that driving a ton of steel around at >50mph presents risk of harm to others.
Not knowing how to use a computer - driving it "poorly" - does not risk harm to others. Why does it merit restriction, based on the topic of this post?
1. "Unpatched servers become botnet hosts" - true, but Tailscale does not prevent this. A compromised machine on your tailnet is still compromised. The botnet argument applies regardless of how you access your server.
2. Following this logic, you would need to license all internet-connected devices: phones, smart TVs, IoT. They get pwned and join botnets constantly. Are we licensing grandma's router?
3. The Cloudflare point undermines the argument: "botnets cause centralization (Cloudflare), which is harm", so the solution is... licensing, which would centralize infrastructure further? That is the same outcome being called harmful.
4. Corporate servers get compromised constantly. Should only "licensed" corporations run services? They already are, and they are not doing better.
Back to the topic: I have no clue what you think Tailscale is, but it does not increase security, only convenience.
The comment I was replying to was claiming that using your computer 'poorly' does not harm others. I was simply refuting that. Having spent the last two decades null routing customer servers when they decide to join an attack, this isn't theoretical.
As an aside, I dislike tailscale, and use wireguard directly.
Back to the topic: Your connected device can harm others if used poorly. I am not proposing licensing requirements.
Most inadequate drivers don't think they're inadequate, which is part of the problem. Unless your acquaintances are exclusively PMC, you most likely know several adults who've lost their licenses because they are not adequately safe drivers; and if your acquaintances are exclusively PMC, you most likely know several adults who are not adequately safe drivers and should've lost their licenses, but knew the legal tricks to avoid it.
From the perspective of those writing the regs, speeding, running lights, and driving carelessly or dangerously (all fines or crimes here) are indeed indicators of whether someone is a safe driver.
Understand, I am not advocating this. I said I did not like it. Neither of those statements has anything to do with whether I think it will come to pass or not.
This felt like it didn’t do your aim justice, “$X and an incomplete understanding of what you’re doing is all it takes to be compromised” applies to many $X, including Tailscale.
Even if you understand what you are doing, you are still exposed to every single security bug in all of the services you host. Most of these self hosted tools have not been through 1% of the security testing big tech services have.
Now you are exposed to every security bug in Tailscale's client, DERP relays, and coordination plane, plus you have added a trust dependency on infrastructure you do not control. The attack surface did not shrink, it shifted.
I run the tailscale client in its own LXC on Proxmox. Which connects to nginx proxy manager, also in its own LXC, which then connects to Nextcloud configured with all the normal features (passkeys, HTTPS, etc.). The Nextcloud VM uses full disk encryption as well.
Any one of those components might be exploitable, but to get my data you'd have to exploit all of them.
You do not need to exploit each layer because you traverse them. Tailnet access (compromised device, account, Tailscale itself) gets you to nginx. Then you only need to exploit Nextcloud.
LXC isolation protects Proxmox from container escapes, not services from each other over the network. Full disk encryption protects against physical theft, not network attacks while running.
And if Nextcloud has passkeys, HTTPS, and proper auth, what is Tailscale adding exactly? What is the point of this setup over the alternative? What threat does this stop that "hardened Nextcloud, exposed directly" does not? It is complexity theater. Looks like defense in depth, but the "layers" are network hops, not security boundaries.
And Proxmox makes it worse in this case, as most people won't know or understand that Proxmox's networking is fundamentally wrong: it's configured with consistent interface naming set the wrong way.
For every remote exploit and cloud-wide outage that has happened over the past 20 years my sshd that is exposed to the internet on port 22 has had zero of either. There were a couple of major OpenSSH bugs but my auto updater took care of that before I saw it on the news.
You can trust BugCorp all you want but there are more sshd processes out there than tailnets and the scrutiny is on OpenSSH. We are not comparing sshd to say WordPress here. Maybe when you don’t over engineer a solution you don’t need to spend 100x the resources auditing it…
If you only expose SSH then you're fine, but if you're deploying a bunch of WebApps you might not want them accessible on the internet.
The few things I self host I keep out in the open. etcd, Kubernetes, Postgres, pgAdmin, Grafana and Keycloak but I can see why someone would want to hide inside a private network.
Yeah any web app that is meant to be private is not something I allow to be accessible from the outside world. Easy enough to do this with ssh tunnels OR Wireguard, both of which I trust a lot more than anything that got VC funding. Plus that way any downtime is my own doing and in my control to fix.
SSH is TCP, though, and the outside world can initiate a handshake. The point is that wireguard silently discards unauthenticated traffic - there's no way they can know the port is open and listening.
Uh, you know you can scan UDP ports just fine, right? Hosts reply with an ICMP destination unreachable / port unreachable (3/3 in IPv4, 1/4 in IPv6) if the port is closed. Discarding packets won't send that ICMP error.
It's slow to scan due to ICMP ratelimiting, but you can parallelize.
(Sure, you can disable / firewall drop that ICMP error… but then you can do the same thing with TCP RSTs.)
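To make that concrete, here's a minimal stdlib sketch (Python, Linux semantics assumed: a connected UDP socket surfaces the ICMP port-unreachable as ECONNREFUSED):

```python
import socket

def probe_udp(host: str, port: int, timeout: float = 1.0) -> str:
    """Crude UDP probe. A closed port typically bounces an ICMP
    port-unreachable (3/3 in IPv4, 1/4 in IPv6), which a connected
    UDP socket surfaces as ECONNREFUSED. Silence means open,
    filtered, or a service (like WireGuard) dropping the packet."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        s.send(b"\x00")
        s.recv(1)                # an actual reply: something is listening
        return "open"
    except ConnectionRefusedError:
        return "closed"          # the ICMP error came back
    except socket.timeout:
        return "open|filtered"   # no reply at all
    finally:
        s.close()
```

Note the irony: against a host that answers ICMP for closed ports, the one port that stays silent is the one worth a closer look.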
Wireguard is explicitly designed to not allow unauthenticated users to do anything, whereas SSH is explicitly designed to allow unauthenticated users to do a whole lot of things.
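You can see that design difference from any unauthenticated client: per RFC 4253, an SSH server volunteers its identification string before any auth happens, while WireGuard returns nothing without a valid handshake. A quick stdlib sketch:

```python
import socket

def grab_ssh_banner(host: str, port: int = 22, timeout: float = 2.0) -> str:
    """SSH servers send an identification string like
    'SSH-2.0-OpenSSH_9.6' to any TCP client before authentication,
    so the service (and often its version) is enumerable."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(256).decode(errors="replace").strip()
```

There is no WireGuard equivalent of this function: without a valid handshake you get zero bytes back, so there's nothing to fingerprint.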
Interesting product here, thanks, although I prefer the P2P transport layer (VL1) plus an Ethernet emulation layer (VL2) for bridging and multicast support.
Headscale is only really useful if you need to manage multiple users and/or networks. If you only have one network you want access to and a small number of users/devices, it only increases the attack surface over a single wireguard instance listening, because it has more moving parts.
I set it up to open the port for a few seconds via port knocking. Plus another script that runs on the server and opens connections to my home IP address, resolved from a domain my router updates via DynDNS, so devices at my home don't need to port knock to connect.
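For anyone curious, the client side of a knock can be a few lines; the port sequence below is hypothetical, and a server-side daemon like knockd is assumed to be watching for the pattern:

```python
import socket
import time

def knock(host: str, ports: list[int], delay: float = 0.3) -> None:
    """Touch each TCP port in order. The connection attempts are
    expected to fail; the SYN packets themselves are the signal
    the knock daemon looks for."""
    for port in ports:
        s = socket.socket()
        s.settimeout(0.5)
        try:
            s.connect((host, port))
        except OSError:
            pass  # refused/filtered is fine, the knock was sent
        finally:
            s.close()
        time.sleep(delay)

# hypothetical secret sequence, then connect to the real service:
# knock("vpn.example.com", [7000, 8000, 9000])
```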
I think the most important thing about Tailscale is how accessible it is. Is there a GUI for Wireguard that lets me configure my whole private network as easily as Tailscale does?
This is where using frontier models can help: you can have them assist with configuring and operating wireguard nearly as easily as you can have them walk you through Tailscale, eliminating the need for a middleman.
The mid-level and free tiers aren't necessarily going to help, but the Pro/Max/Heavy tier can absolutely make setting up and using wireguard and having a reasonably secure environment practical and easy.
You can also have the high tier models help with things like operating a FreePBX server and VOIP, manage a private domain, and all sorts of things that require domain expertise to do well, but are often out of reach for people who haven't gotten the requisite hands on experience and training.
I'd say go through the process of setting up your self-hosting environment, then after the fact ask the language model: "This is my environment: blah, a, b, c, x, y, z, blah, blah. What simple things can I do to make it more secure?"
And then repeating that exercise - create a chatgpt project, or codex repo, or claude or grok project, wherein you have the model do a thorough interrogation of you to lay out and document your environment. With that done, you condense it to a prompt, and operate within the context where your network is documented. Then you can easily iterate and improve.
Something like this isn't going to take more than a few 15 minute weekend sessions each month after initially setting it up, and it's going to be a lot more secure than the average, completely unattended, default settings consumer network.
You could try to yolo it with Operator or an elevated MCP interface to your system, but the point is, those high-tier models are good enough to make significant self hosting easily achievable.
> Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.
Wireguard is distributed by distros in official packages. You don't need time, money, and expertise to set up unattended upgrades with auto-reboot on a Debian- or Red Hat-based distro. At least it's no more complicated than setting up an AI agent.
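On Debian the whole thing is roughly the following (package name and config keys per the unattended-upgrades package; adjust the reboot time to taste):

```shell
# install and enable automatic security updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# /etc/apt/apt.conf.d/50unattended-upgrades (excerpt), enabling auto-reboot:
#   Unattended-Upgrade::Automatic-Reboot "true";
#   Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```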
What about SMTP, IMAP(S), HTTP(S), and the various game servers the parent mentioned having open ports for?
Having a single port open for VPN access seems okay to me. That's what I did, but I don't want an "etc." involved in what has direct access to hardware/services in my house from outside.
> I need to install a bunch of drivers from HP with dubious names, like sp1234
This is just how HP names their software deployment packages. Lenovo will have something like "u6chp70us17" or "83wo12ww". You go on your product's page, download the driver, install it. I understand complaining about a device that doesn't work out of the box, but about the name of the driver installer?
To be honest I've never seen an EliteBook that needed any drivers for the common components (I also own quite a few Elites, oldest from 2012), and in general any laptop that needed a touchpad driver to work in well over a decade. And I've played with a lot of different laptops, business models in particular. Not saying it doesn't happen, just that I don't think it's common.
I have two similar laptops, 840 g8 and 845 g8. The first is intel, the other amd.
The intel one had some kind of new touchpad, which doesn't work during the windows install. It was apparently some new thing introduced with intel's 11th gen, don't remember the specifics, but apparently other models had the same issue. Windows needs to connect to the internet to fetch drivers once installed, even 25h2 which I installed two weeks ago. Bonus points for the AMD ethernet dongle not being recognized, even though it's some random realtek, so I have to type my wifi password (my AP doesn't support wps). The AMD one works, even though the touchpads seem similar.
For a long time, the AMD's webcam didn't work. There's some USB doohickey that wasn't recognized (showed up with an exclamation point in device manager), and even installing all the drivers from HP's webpage didn't solve it. It solved itself somehow after some windows update two or three years later. Out of the box, it did have the webcam working, but the display brightness was somehow limited to "pretty dim". I was ready to write it off as just another crappy enterprise pc panel, but then I rebooted it into the bios and the thing burned a hole through my eyes. Installing windows manually fixed the backlight. Sleep on windows more often than not hangs for some reason, even now, 5 years in.
The intel one had a long-standing issue with 4k output over its usb-c ports. At one point, installing the gpu driver from intel fixed it, but windows update would helpfully update it to an older, broken version. Nowadays we have 5k panels at work. I can only get 5k if the driver is initialized with the monitor connected, so if the screen goes to sleep, it won't run at 5k anymore when it wakes back up. Newer models don't seem to have this issue anymore.
Fortunately I'm only an occasional windows user, so don't care all that much. Everything worked perfectly under linux since day one, so apart from the comically bad display quality, I'm generally a happy camper.
> They like it given a chance. My daughters for example far prefer Linux to Windows.
The two topics are orthogonal. GP talks about "local computing" vs. "black box in the cloud", the difference between running it vs. using it. You're talking about various options to run locally, the difference between running it this way or that way.
Linux or Windows users probably understand basic computing concepts like files and a file system structure, processes, basic networking. Many modern phone "app" users just know what the app and device shows them, and that's not much. Every bit of useful knowledge is hidden and abstracted away from the user. They get nothing beyond what service the provider wants them to consume.
It's most certainly not. Phones connect to a cell tower even without a SIM to make emergency calls. The phone can still be tracked and it's not a difficult leap from there to identify the owner of the phone.
> the Gazan government strategically uses humans shields
This just means Israel knows they're hitting women and children every time they send a bomb their way.
> the majority of Palestinians still support starting this war
Palestine isn't a democracy with well documented preferences. Israel is though, so why don't you say that a majority of Israelis are fine with the killing of women and children in Gaza?
elcritch, you're beating around the bush but strongly suggesting there's a reasonable justification (not just an explanation) for killing women and children if it suits someone's needs. Does this apply just to Israel killing people in Gaza, or is it universally valid? Because I distinctly remember the US going to war over WMD that never existed. So elcritch, are you saying US women and children are fair game now?
> there's a reasonable justification (not just an explanation) for killing women and children if it suits someone's needs
The Law of Armed Conflict specifies exactly when it considers such a reasonable justification to exist, which is not "never". You don't get to plop down women and children in front of military installations and go "neener neener" like you're a child on the school playground.
Sure Eli, and I'm sure you're not biased at all, but when you find so many "reasonable" reasons to kill thousands and thousands of civilians, women and children included, and you never ask yourself any questions, there's nothing more anybody else needs to know about you.
The comparison writes itself and when it doesn't, you make it obvious. You wouldn't be the first person who finds justification for something like this.
1) The average death per bomb was less than 1. Strikes mostly hit things which had already been evacuated.
2) When human shields get hit we blame the side that put them in harm's way, not the side that harmed them. Just look at the criminal trials in police actions--a hostage dies when SWAT hits a place, the murder rap lands on the person who took the hostage even if it turns out to be a police bullet in the hostage.
And your note about WMD--said WMD existed. On paper. We read the paper, didn't realize it was underlings lying to Saddam.
> a hostage dies when SWAT hits a place, the murder rap lands on the person who took the hostage even if it turns out to be a police bullet in the hostage.
The murder rap doesn't fall on the SWAT shooter even when they shoot completely unarmed, innocent people in their own home. So all your example says is that SWAT gets a free pass for murder no matter what. All it takes is for someone to anonymously say "LorenPechtel is a terrorist, he's planning to blow up some children at this address right now" and your chances are slim.
The other thing OP presents is very different from any eID scheme in terms of anonymity. You'd show an ID to a human at the counter and even if the seller stores your info somehow, it can't be linked to the token they sold to you. The required infrastructure is minimal and relatively simplistic. The only drawback is that being anonymous means it's easy to resell tokens.
An eID system links your real-life identity to any use of the eID online. Anyone who thinks there's math or technology that fixes this misses the fact that the trust in the humans (companies, institutions, governments) who operate these systems is what's misplaced. Math and technology are implemented by people, so there are many opportunities to abuse these systems. And once it's in place, I guarantee without a shadow of a doubt that sooner or later, fast or slow, it will be expanded to cover any online action.
I will take anonymity and the small minority of kids who will find a loophole to access some adult-only stuff over the inevitable overreach and abuse against the large majority of people whose every online move will be traced and logged.
> The only drawback is that being anonymous means it's easy to resell tokens.
That’s a pretty major flaw. These tokens will be sold with markup on black markets, rendering the whole system unfit for its intended purpose.
Additionally, in the same line of drawbacks, buying porn scratch cards will be stigmatised, because everyone will (think they) know what you'll use them for. Are you comfortable doing it in front of your teenage child, neighbor, crush, grandma, or spouse?
> Math and technology are implemented by people so there are many opportunities to abuse these systems.
And yet we have functioning asymmetric cryptography systems that enable secure encryption for billions of people, despite malevolent actors having a clear incentive to subvert that, much more so than age verification tokens.
> […] the inevitable overreach and abuse against the large majority of people whose every online move will be traced and logged.
This is happening right now already, at a scale that's hard to imagine.
> These tokens will be sold with markup on black markets,
Black markets catering to minors aren't very large or profitable. No adult needs to buy from this black market. How big is the black market for beer for teenagers? Yes, some reselling will happen, just as minors sometimes get alcohol or tobacco from older friends and siblings. Prosecute anyone involved. It doesn't have to be perfect. It just has to be good enough without sacrificing privacy.
> buying porn scratch cards will be stigmatised
There was once a time, in living memory, when people had Playboy and Hustler mailed to their houses. You're overthinking it. And also why would the seller assume it's for adult content instead of social media?
> Are you comfortable doing it in front of your teenage child, neighbor, crush, grandma, or spouse?
So don't do it in front of them? You're allowed to go to stores alone.
For people who have no money to spare for games it really doesn't matter if games come with DRM or not. They wouldn't afford them anyway so "for free" is the only option that matters.
For people who have money for games but don't want to pay, the presence of DRM matters very little. 99% of games are usually trivially cracked, especially if you are willing to wait for some days or weeks after launch (an important sales window for the publishers).
For people who have money for games and are willing to pay, DRM turns out to be maybe an inconvenience, but definitely a guarantee that they don't actually own the game. The game can be taken away or even just modified in a way that invalidates the reason people paid in the first place.
> especially if you are willing to wait for some days or weeks after launch (an important sales window for the publishers).
“Important” is an understatement. Even for long-term success stories, the first three or four months often account for half of a game’s revenue.
And, despite so many people theorizing that “pirates don’t have money and wouldn’t pay anyway”, in practice big publishers wait in dread of “Crack Day” because the moment the crackers release the DRMless version, the drop in sales is instant and dramatic.
When the Nintendo Switch became hackable, i.e. able to play any game, Nintendo saw a massive decrease in sales in Spain. Incidentally, people in Spain pirate the most games in Europe. The decrease was at least 40%. The idea that this is a service issue and piracy doesn’t affect sales is just PR speak. If the game is offline, it’ll be pirated a lot.
Both you and GGP make concrete claims but fail to provide evidence. Can anyone cite published sales data or is this all mere conjecture?
We've been exposed to what seems like FUD about piracy killing sales since approximately forever - you wouldn't dOwnLoAd a cAR - but seemingly zero actual evidence to date.
My source is first and second hand reports from management of game companies having worked in the industry for decades. But, they don’t make numbers like that public.
The best public report I can find is https://www.sciencedirect.com/science/article/abs/pii/S18759... which shows a median difference 20% of revenue for games where Denuvo is cracked “quickly” but also no significant difference if Denuvo survives for at least 3 months.
What I’ve observed from internal reports from multiple companies is that, if you don’t assume an outlier blockbuster game, major game studios’ normal plan is to target a 10% annual profit margin with an expected variance of +/-20% each year.
So, assuming you have a solidly on-target game, DRM not just being there, but surviving at least a couple months is the difference between “10% profit moving the whole company forward on schedule” vs “10% loss dragging the whole company down” or “30% profit, great success, bonuses and hiring increases” depending on the situation.
Outside of games, I have seen many personal reports on Hacker News over the years from small-time ISVs saying they find it exhausting that they need to regularly ship a BS “My Software version N+1” just as an excuse to update their DRM. But every time they do, sales go back up. And the day the new crack appears on The Pirate Bay, sales drop back down. Over and over, forever. That's why we can’t just buy desktop software anymore. Web apps are primarily DRM, and only incidentally convenient in other ways.
> which shows a median difference 20% of revenue for games where Denuvo is cracked “quickly” but also no significant difference if Denuvo survives for at least 3 months.
So how did they measure the difference? They released one title with Denuvo then erased everyone's memories about it and released it again without?
Because if you compare different titles I don't know what you base that percentage on.
I've been saying that for decades at this point. Web apps trade post-release support issues with slightly higher development costs upfront (dealing with browser compatibility), but the real kicker is that the company is now in complete control of who gets to use what and when.
It's a vacuous argument. Even in the complete absence of piracy web apps would still have won out over desktop software due to turning a one time sale into a recurring subscription. That's what drove their adoption.
MMOs show the same thing. There are plenty of multiplayer games with centralized servers that are effectively impossible to pirate. But subscription based MMOs score a clear win in terms of revenue.
(It turns out free to play gacha is even more lucrative than subscription, but I digress.)
> My source is first and second hand reports from management of game companies having worked in the industry for decades. But, they don’t make numbers like that public.
As an aside, I find this kind of behavior on the part of companies rather irritating. It's like, if you want people to believe that something affects your sales, you need to publicly release the sales data (and do so in a way that people will trust). Otherwise there's no reason for anyone to believe you're not just making stuff up.
They just need law makers to support IP/DRM laws that allow them to continue to operate. (I made games for a while at a small studio; I understand some of the pressures that studios are under and don’t support piracy of games.)
And they can get that support without publicly releasing detailed time-series sales data.
It doesn't add up though. If they were actually dependent on DRM as described then broad public support would be a massive benefit to them. Yet seemingly none of the many studios out there publicize such data. And this comment section is full of hand waving about "well I can't provide actual data but I talked to someone who said ..." it sure looks like BS to me.
when i was younger there were more games i wanted to play than i had money to pay for... and i pirated.
then i had some money and i bought more games than i had time to play.*
now i neither buy nor play games.
*the point is that at this point, there is no point wasting time trying to pirate games. every humble bundle, every steam sale: you just click and it's yours. you don't even have time to play. why waste time pirating?