The Orange Pi 5 Plus (taoofmac.com)
156 points by rcarmo on Jan 20, 2024 | 174 comments



I'm a longtime Pi fan, but I'm really dubious of the single-board computer market.

At $5-$10 (pi-zero / W) and $35 (pi3 / pi 4) these little boards made a ton of sense.

Pushing into ~$120-$150 doesn't make any sense to me. You can get an 8GB/16GB N100 at that price point, complete with a case.

I see his point on per-watt performance, which is valid, but are people running a room full of these things? Why spend so much to save 10 watts?

Someone please enlighten me on how this segment still remains viable.


I used to think this way as well, but then I started doing the actual math.

My last power bill was ~$103 for exactly 256 kWh, or put another way, about $0.40/kWh. For context, this is in NYC. I'm sure other people have cheaper power elsewhere.

0.01 kW * 24 hours * 30 days * 0.4 $/kWh = $2.88 a month, or about $35 a year.

If something is going to be on constantly, the ROI on a 10-watt savings can quickly outpace the initial investment.

And that is for every 10 watts. Something using 100 watts continuously costs 10 times that.
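For anyone who wants to plug in their own numbers, here is a minimal sketch of that arithmetic (the rate is the ~$0.40/kWh quoted above; substitute your own all-in rate):

    # rough annual cost of an always-on load at a given all-in rate
    RATE_USD_PER_KWH = 0.40  # the NYC figure above; check your own bill

    def annual_cost(watts, rate=RATE_USD_PER_KWH):
        kwh_per_year = watts / 1000 * 24 * 365
        return kwh_per_year * rate

    print(annual_cost(10))   # ~35 USD/year for every extra 10 W
    print(annual_cost(100))  # ~350 USD/year for a 100 W box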

This affected a bunch of my other thinking as well. Having a Raspberry Pi in my home as an always-on server costs as much as a small Linode instance and is much less reliable.


Similarly, I upgraded the GPU in my server from a decade-old, formerly high-end gaming GPU to a modern lower-mid-range card because I wanted new video encoders and a smoother Linux driver experience. But it's idle 99% of the time. The difference in idle consumption (80% lower on the new GPU) works out to €50/year, which means even if I didn't use any of the other features of the GPU, it would pay for itself in 3 years.

Video encoding power draw is also 86% lower and even if I found something to max the new card out, it's still 40% lower than maxing out the old card (for a lot more compute power than the ten year old card).


Note that this trend isn't always downwards - upgrading from RDNA2 to RDNA3, or from Intel CPUs to AMD CPUs, can result in increased idle power, sometimes significantly so. Things like higher PCIe lane configurations, higher generation/transfer rate, or higher memory channel count/frequency are likely going to represent higher power expenditures at all levels. PCIe 5.0 is a very fast link (a PCIe 5.0 channel is about equivalent to a DDR5 memory channel), and the trend is already upwards, with PCIe 4.0 requiring much more than PCIe 3.0. And turning the link speed down doesn't mean you get all the power back either.

(From what I remember, ASPM is sometimes still a mess especially with consumer chips with their own frequency transitions and OC'd fabric/System Agents, and devices don't all wake up reliably all the time and return to high-power state, so iirc the enthusiast advice is "it's only a couple watts, it's a hassle, leave it off". I fear this may become "it's a hassle and it's also a significant power expenditure". I hope it's improved like other frequency transition things. :\)

Also, just in general, the product focus on efficiency varies between vendors. NUCs are quite efficient, because a lot of focus went into that to win those corporate contracts back in the day. A random Minisforum mini PC is probably less efficient than an old NUC. ConnectX-5 is much less efficient than ConnectX-4, even when running at lower PCIe speeds. SSDs generally pull much more power than the HDDs they replaced, and PCIe 5 pulls much more than PCIe 4, which pulls much more than PCIe 3. Most laptop or other power bricks don't hit an 80 Plus cert, and if you switch away from a fancy brick that does, you'll use more. Etc.

Not saying you didn't look at it, but you have to look at it specifically: newer is not automatically better, and sometimes product tiers creep upward over time too. It's certainly always a product-by-product thing. The exact way products are built or cut down doesn't just matter for performance; it affects power too.


> or put another way, about $0.40/kWh.

I pay $0.16/kWh net of everything (all taxes + fees). That's insane.


I seriously doubt that. "net of everything" would include HVAC.

* In the winter, every watt my computer puts out means I run my space heater that much less. It's effectively free.

* In the summer, the costs are several times higher. Every watt put out by my computer means several watts of running the AC.

It's actually a rather large difference. If I were in Alaska, computer power would be nearly free. If I were in Mexico, it'd be very, very expensive.


Wow, that's a lot of money. I thought my electricity was high because it's gone up a lot, to $0.12/kWh in South Texas.


That’s amazingly cheap to me if it’s a unit price per kWh.

In the UK for me unit prices from April to July last year were £0.50/kWh (USD$0.64) and that was with the government implementing a price cap and subsidising it.

It's since dropped to a more reasonable £0.29 (USD $0.37), but that's still with the government capping the maximum rates that can be charged, though it's no longer subsidising energy companies, as the UK wholesale rate is lower than the cap.


You have to remember that in the US we have deregulated electricity, so the power charge and delivery charge are separate. That 12¢ might only be the energy cost. The delivery charge is about 60% of the cost of the whole kWh for me. 40¢ is probably the delivery plus the cost of the electricity itself.


In central Texas, my power rate is about 9.4 cents per kWh, inclusive of "wires" and "energy" (as my bill breaks it down).


Up here in BC it is CAD 0.0975/kWh for the first 1,350 kWh per two-month billing period, and CAD 0.1408/kWh after that - and BC Hydro is publicly owned and almost exclusively hydroelectric.


Indeed. Everyone would do well to remember that kind of thing in the US, but only some people actually do. That's why these discussions almost always quickly devolve into irrelevant things like "What? I pay $0.05 per kWh! It says so right on my bill under 'Price to compare'!"

And then the discourse diverges into yet-another discussion about how billing works, and power deregulation, and other things.

But the only number that matters is the total cost per kWh -- including delivery, distribution, generation, taxes, fees, grift, and whatever else might be included in the bill total.

And that's easy to figure out: Total dollars billed divided by total power consumed, for any given billing period.

The result is a number, in US dollars (or whatever local currency), that neatly and inclusively expresses what a person was actually-charged, per kWh, for electricity in their home for that period.

---

So, for example: My most recent electric bill was, in total, $176.26.

I used 1,111kWh during that period.

$176.26 ÷ 1,111kWh = a cost of $0.158 per kWh, plus or minus a rounding error, for the power I used last month.

---

Only now can I extrapolate that if my total cost per kWh remains static over time (it will not, but it is likely to be close), then: a 10-watt difference in 24/7 power consumption is equivalent to ~$13.85 per year, for me.

I can now also evaluate that cost in more practical terms, wherein: I can see that using an extra 10 Watts on a 24/7 application costs me less than one decent beer per month, or one twelve-pack per year. And I'm pretty far from wealthy, but I can see that drinking one fewer beer per month isn't going to make me wealthier in any practical sense of the word. A 10-Watt difference in power consumption thus won't weigh heavily on my decision to use one system or another -- as long as it is just one such system.
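The same arithmetic as a minimal sketch, using the bill figures above (swap in your own totals):

    # effective all-in rate: total dollars billed / total kWh used
    bill_total_usd = 176.26
    kwh_used = 1111
    rate = bill_total_usd / kwh_used   # ~0.158 USD/kWh, all fees included

    # yearly cost of an extra 10 W running 24/7 at that rate
    print(0.010 * 24 * 365 * rate)     # ~13.9 USD/year, the figure above within rounding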


Yep. PG&E + EBCE/AVA runs around $0.52/kWh on the low end to $0.68/kWh on the high end out here. Generation is about a third of that; the rest is delivery. Using PG&E for both distribution and generation would add 5% to the total cost. PG&E got a 13% rate hike approved for this year, and they're asking for another hike in March.

As we race towards $1/kWh I wonder where the breaking point will be.

I'm on a time-of-use plan so the costs vary both on when and how much you use.


The most expensive states for electricity (except Hawaii), like Massachusetts and New Hampshire, are at about 18 cents per kWh. Hawaii is at about 30 cents per kWh, but it's an island, so I imagine that complicates things. In most states it ranges from 9 to 14 cents per kWh.


Connecticut and California really have some explaining to do regarding their electric rates. https://www.statista.com/statistics/630090/states-with-the-a...


California will vary quite a bit. The municipal power companies are quite reasonable (in the Bay Area: Alameda, Palo Alto, Santa Clara?).

PG&E (the dominant player in northern california):

- lobbied hard against muni power

- spent its safety budget on executive compensation

- is recouping its maintenance costs and criminal penalties from its ratepayers instead of from its shareholders

- got Newsom to stack the regulatory body (CPUC) with PG&E sycophants

- is back to paying its shareholders dividends

- has one of the most influential California politicians (Willie Brown) on their payroll

Yes, it's obscene that in a climate as mild as the Bay Area, $400+ utility bills are the norm. Short of some major revolt, rates are set to go up in March as well.


You're right to account for TCO. It's another good reason to try to put devices to sleep and use WOL.

Also minding the CPU power state of the device, making sure it's running efficiently and going to sleep whenever possible.


That's wild. I'm right outside DC, and last month I used 2,066 kWh for a total bill of $258.85. So, about $0.13/kWh.

I'm all-electric with a heat pump. That ran for about 300 hrs last month, so it accounts for around 800 kWh.


I don't understand how hours run equates to your calculation unless it was always pulling 2.7 kW. Is this some 30-year-old setup that doesn't have an inverter-driven compressor?

Also, wow your electricity is really cheap.


This resonates with me. I have a few Zeros, 3's, and 4's for random things (Nerves projects, DNS, bastion host, etc.), but the price point now puts it squarely in the "I better have a good use case for this" camp rather than the "I want to tinker around and if it ends up in a drawer, no big deal" camp.


I'm with you. I'm leaning toward moving them all to a Proxmox server.


I recently bought a mini pc with an AMD 5500u (6 cores, 12 threads, 15W), 16 GB DDR4, and a 512 GB nvme SSD for $225 (on sale a bit). I suspect it would run laps around the orange pi despite the similar price and wattage.


Vastly better hardware and software support too by virtue of being x86. Preferred flavor of *nix, Windows, Haiku, whatever, as well as the possibility of virtualization of any of the above.

Not to knock ARM SBCs — ARM is great and I think holds a lot of promise for the future, but there’s no arguing that the platform tends to be restrictive outside of mainstream devices (e.g. Samsung Galaxies and Macs).


I've got a Ryzen 5560U mini-PC in my k8s cluster at home, and it's great. It is faster than the OPi5's that are also in the cluster; those are around the same speed as a few-year-old Celeron or so (edit: originally I said a N5105 but I'm not actually that sure). But I have them in my cluster because CPU perf isn't the only axis that matters. They're also cheaper, they're fanless, they're physically small, they're ARM so I can use them for arm64 builds at speeds faster than qemu offers, and they use less power. I guess I'm here for heterogeneous hardware.


I was going to say that mini PCs were more powerful and cheaper, but then I realized that cheap mini PCs like yours use HDMI 2.0 (which can drive 2 or 3 4K@60Hz monitors, but not 8K@60 like this Orange Pi 5 with HDMI 2.1). Mini PCs with HDMI 2.1 are at least $400 apiece. At that price, the mini PC's CPU is much more powerful than this Orange Pi's.


Which mini PC was that? That sounds great.


Beelink sells a bunch of them, as do a few other vendors (who are all probably rebadging from the same manufacturers somewhere). If you're willing to spend more, the ones with Ryzen 7840HS processors are particularly impressive.


Don't Beelink mini PCs have terrible power supplies that make them randomly go silent?


Some of them have something weird on them, most of them are a normal barrel jack. I can't speak to the weird one, I don't have one of those.


The other avenue you can go down is to get a used mini PC from Lenovo/HP/Dell. These are great because the vendors deliver huge volumes meant for corporate America, which means there is competition and the build quality is good; you're not expecting the hacker market to throw together a printed case. I recently picked up a used HP with 16GB RAM, a 512GB SSD, an AMD 5-series chip, and an OEM Win10 license for ~$150. Untested, but the idle power draw on it is supposed to be 10-15 watts. My only complaint about the unit is that I wish these were powered by USB-C PD. More common is to be powered by a 19V barrel jack, which would make replacements more of a pain if the external power supply dies.

Serve the Home has an ongoing series about evaluating these: https://www.servethehome.com/introducing-project-tinyminimic...


I have two of those lying around, but care should be taken with some of the smaller HPs at least. I have an 800 G4 Desktop Mini. It has two SO-DIMMs, two full-length NVMe slots, and a 2.5" SATA bay. It supports Thunderbolt (which was rare at the time - it's an Intel 8th-gen-era model). It looked absolutely great on paper. I had them at work, and they worked perfectly with Linux. I bought one for my house, where I don't have the 24/7 hum of AC and random people talking all day. I quickly began to hate it, since the fan spins all the time and they figured there was no reason to attach it securely, so it rattles around in the case. Newer models (G6 at least) seem to have improved the design so that the fan is actually secured, but I've never tested one in a quiet environment.

> Untested, but the idle power draw on it is supposed to be 10-15watts.

I haven't tested the "mini" one I have, but I also salvaged an older "SFF" HP from work, an 800 G2 with an i5-6500. I've replaced its spinning drive with 2 SATA SSDs (Samsung 840 EVO 512 GB), increased RAM to 2*8+16 GB, and thrown in a 4-port Intel i350 network adaptor. According to some watt-meter off Amazon, it pulls around 16 W while idling under Linux running three VMs on top of KVM (OPNsense (= FreeBSD) and two Linux). The highest I could get it was around 50 W while booting up.

The watt meter seems accurate enough. I tested it with a laptop for which I have two adaptors, one specified as 45W, the other 65. With the battery drained, screen backlight at max and a compilation taking 100% CPU, it was reported as drawing 46W and 55 respectively on the two adaptors.

> My only complaint about the unit is that I wish these were powered by USBC PD

Which unit is that? I would actually love that, especially if the USB-C port can also be used as a regular "docking" port, handling DP out and USB in. I could connect my monitor and peripherals with a single cable to it and call it a day, instead of having to deal with random power adaptors lying around. Bonus points if it's actually Thunderbolt.

> More common is to be powered by a 19V barrel jack, which would make replacements more of a pain if the external power supply dies.

I don't think so, those look like pretty much standard fare to me. Although I've heard stories about Dell doing shady stuff with the adaptors, trying to talk to them and stuff, AFAIK HP doesn't do that.


Did not mean that the barrel jack power is vendor-proprietary. Only that it's not something I can pick up anywhere, but more or less have to source online. Unlike USB-C, where we are almost at the point where I could get a 100W charger from any corner store. If nothing else, the standard charger is laughably bulky vs the GaN chargers that are now available.


Those are pretty cool and I like that website. I have one, it's great. I'd probably have a bunch more if they had Type C power, that's too good of a product to come out of those environments I guess.


The brand is GenMachine. Bought from Newegg and shipped from China, so you can probably just order from their website.


Which one did you get?


> You can get 8gb/16gb N100 at that price point complete with a case.

The last time I read a post like this I immediately rushed to buy an N100 that I now have sitting in my living room doing literally nothing, lol

It's funny how huge the market is for people (like myself) addicted to having the latest and greatest tech gadgets.

I have multiple of every Raspberry Pi... doing nothing.


I end up eventually using them up in various DIY projects. I like to have a pile around because it lowers the inhibitions to get started with one! Nothing to break your new-whim momentum like a missing part and shipping times.


Isn't that the truth. I'm not a huge brick and mortar store guy but I miss Radio Shack when I don't want to wait days for the resistor I need now - even if it is cheaper per unit in 2024.


You're so right. Berlin has one store like that left, and I make sure to hit it as often as I can to help it survive. Because that's an ETA of an hour to the missing part, and not 2-3 days (local supplier) to weeks (AliExpress).


I also had multiple Pi's doing nothing, and took advantage of the Covid shortage to sell them to folks who may be doing something.

Now waiting for my Pi 5.


I also had multiple Pis doing nothing. I finally got Pi-hole going on the 3B with its DHCP server, hosted my blog on the 4, and have a 5 hooked up to a keyboard, mouse, monitor, and speaker, and when set to 1080p, it's actually a responsive, usable personal computer. I'm developing a three.js game on it that uses a per-frame-updated 512x512 soft shadow map and runs at about 50fps at 1080p. I'm planning on running a Node.js server on the Pi 4 (or maybe 5) and making it multiplayer, though maybe there's a better way to implement the server backend. Of course, if the Pi 5 can run it at 1080p50, then so can any random junk laptop from the last 10 years, which is a good feeling.


The point of the Pi is not the computer - it's the GPIO, the camera kit, the infrared camera, and the various HATs, such as GPS or motor control. It's for making a camera that opens a door when it detects a cat, or puts out more food when birds eat all the food in a feeder.

It’s not best for being a server in a closet


The point of the pi was for teaching computer science in educational settings.

Of course it largely failed at that (imagine running a lab full of Raspberry Pi 1 Model Bs with the original full-size SD cards and phone USB chargers), but everything else is a retcon/product pivot. It wasn't invented for hacker anything.


If it is running 24/7 for years, watts add up. Running cool and quietly is another benefit.

My Orange PI 5 has been running Nextcloud, Mastodon, Jellyfin, XMPP, Cryptpad, Vaultwarden, and about a dozen other services/sites for about a year. I love it. Some apps only run on X86, and I install those on a VPS.


> If it is running 24/7 for years, watts add up. Running cool and quietly is another benefit.

Indeed, but I think one would be best served by actually doing the math. I like having my own router on which I can segment stuff, for example, having my smart TV on the network but not allowing it to phone home and deliver me ads.

I have an old 6th-gen i5 I salvaged from work, which, I thought, must draw a lot of power, so I started looking at newer low-power models, like N5xxx or N100s. Well, if you want one with multiple network ports, it quickly costs around 200-300 Euros. You can also forget about having 10 Gb ports, although such Internet speeds are becoming common where I live, and you can get a 30 Euro dual 10Gb adaptor off eBay which fits in the PC I already have.

Anyway, I don't actually need 10 Gb right now, so it's not really an issue in practical terms. But 300 Euros isn't exactly pocket change for me, so I sat down and did the math. I'd need around 3 years to break even on electricity costs. Then I went and bought a watt-meter off Amazon. Turns out, the horror-stories of older computers drawing around 40-50 W at idle didn't apply to my specific box, and it only drew around a third of what I had estimated, which means an 8-9 year break-even. Of course, rates will most likely increase over that period, which would bring it closer.
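A minimal sketch of that break-even calculation, with placeholder numbers rather than the commenter's actual measurements (the real inputs are whatever your watt-meter and your bill say):

    # years until a more efficient box pays for itself in electricity
    new_box_cost_eur = 300.0   # placeholder upfront cost of the replacement
    old_idle_w = 45.0          # placeholder: the "horror story" idle draw
    new_idle_w = 10.0          # placeholder: idle draw of the low-power box
    rate_eur_per_kwh = 0.35    # placeholder local rate

    savings_per_year = (old_idle_w - new_idle_w) / 1000 * 24 * 365 * rate_eur_per_kwh
    print(new_box_cost_eur / savings_per_year, "years to break even")

If the old box turns out to idle at a fraction of the estimate, the savings shrink and the break-even stretches accordingly, which is exactly why measuring before buying pays off.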


It's an interesting segment, especially now that Intel N100 machines are out. But it essentially depends on what you want to do, and what use/need you have for ARM hardware.

Me, I write fairly low-level stuff that runs on ARM (for kicks), so having one of these as a server/development sandbox makes a ton of sense (even though I have Macs and VMs and whatnot, some things you can only do with hardware).

I grant that you won't find normal people filling their closets with these. But when you walk past, say, a phone exchange, a 5G base station, or any other of hundreds of invisible machines out there, they'll be running a variation of these boards (perhaps slower and dumber, but soon ramping up to this kind of thing), because Intel lost the embedded market years back.


It's a good question. Here's my specific use case, I'm sure there are plenty more.

I have a golf simulator running in the cloud (AWS) on a g4dn.xlarge EC2 instance.

At home, I use a raspberry pi 5 as a thin client. It plugs into a 4k projector and streams down the display of the cloud PC.

Because it's cheap and reliable, I can leave it in place, sitting up on the ceiling attached to the projector. I wouldn't want to devote a more expensive laptop to the job - the Raspberry Pi 5 is just man enough for it: powerful enough, but only just.


I did mostly the same for a while, but with a Pi 4: https://taoofmac.com/space/blog/2022/10/23/1700


Sounds interesting (but a bit above my pay grade). AWS uses a streaming tech called NICE DCV, which runs in a browser (Chromium) on my Raspberry Pi.

I was using an RPi 4 before, but the performance is noticeably better with the RPi 5.


I spent a couple grand[1] building myself an all-flash Pi CM4 NAS that I reckon will deliver RoI in ~25 years at €0.39/kWh. I did it for the sport, mainly, and because I wanted silence and minimal power draw. It works pretty well, though!

The Pi has a nice ecosystem, is better for hardware hackery than N100 mini PCs, and like all cheapish consumer products, people will kinda buy them just to buy them. I have a Pi5 sitting unused on my desk. Doing my part to pay for Eben's private island, I guess. I'll find a use for it eventually.

[1] https://github.com/theodric/NASty/blob/main/NASty-bill-of-ma...


I’ve set a reminder for this date in 2049 to make sure ROI was achieved


thank you.


There's some ARM-only stuff that's fun to work with, like RISC OS and AOSP. Also, if you're trying to fill out a support matrix, an ARM machine is a useful thing to have around.

It's also cheaper and probably easier to work with than Apple's ARM systems for these purposes (although the used market for the M1 will probably cross below a new Pi within 2-3 years).


If I had known the drawbacks of the Apple TV and the extra money I'd have to spend on apps just to get it somewhat functioning as a network media player, I'd have been better off with the Orange Pi 5+.

If it works as advertised... my previous experiences with their Zero line have been... trying...


> Pushing into ~ $120-$150 to me doesn't make any sense. You can get 8gb/16gb N100 at that price point complete with a case.

And the power supply. Current-day SBCs aren't like the first-gen raspberry pi, where you could just recycle your old phone charger and off you went.


There is huge demand for Pi clones in the industrial segment. Check RevolutionPi.com. $200 for a single Pi is cheap; the equivalent Intel box costs close to $2000, yes, 10x. It's even difficult to get a decent number of industrial Pis on time.


But the per-watt performance of these chips is usually terrible, made as they are on older silicon processes. Any recent-ish laptop CPU will have no problem beating them.


The Intel N100 is an Alder Lake-N chip, based on Intel 7. It's four of the small cores you find on 12th-gen laptop processors, made with the same process.

Its successor architecture, Meteor Lake, has only recently launched, so the N100 shouldn't be too far off in terms of efficiency.


these small intel cores aren't known for their power efficiency


Mostly because the high-end Intel processors are run at power limits where they're getting extremely diminishing returns, trying to eke out every last bit of performance.

In lower power scenarios the little cores can be more efficient than the large ones. And at 6W TDP the N100 is a low power scenario. https://chipsandcheese.com/2022/01/28/alder-lakes-power-effi...


Oh, sure. Still not anywhere nearly as good as the latest Zen parts but the n100 mini pcs are very cheap.


> Why spend so much to save 10 watts ?

That's the wrong mindset to have. How are we going to improve the environment if we're careless about energy?

As the saying goes: "If you look after the pennies, the pounds will look after themselves."


How much energy was spent on raw materials, components, assembly, and shipping?


Production and shipping are a one-time event and most likely in the same ballpark regardless of how much energy the device uses.

Whereas power consumption is recurring, and it adds up. Multiply the difference by hundreds of thousands or more devices and it is no longer trivial.


They might not consume a lot of power, but they're also not doing much work. How does their performance per watt compare to a modern low power laptop, an intel NUC, the N100, etc.?


Before someone starts the usual yadda yadda about the RPi's bigger community, the OS not having long-term support, etc., I would repeat one more time: do not rely on board-vendor-supplied images; this is valid for pretty much all boards. Just go to the Armbian or DietPi pages and you'll almost certainly find one or more images that work on your board, and forums to discuss them with very knowledgeable people.

https://www.armbian.com/download/

https://dietpi.com/#download

Those projects are well worth a contribution, as they don't have a giant like Broadcom behind them.


I agree they're worth a contribution, but sometimes I just want a device to work and even Armbian isn't as well supported as Raspberry Pi OS. For example 2 days ago I tried to get my Orange Pi 4 working. It turns out Armbian's Orange Pi 4 builds have had broken HDMI for months. There's value in having something just work.

There's already a post about this issue on the forums, and a fix: https://forum.armbian.com/topic/26818-opi-4-lts-no-hdmi-outp... . But the precompiled version offered isn't for my board (I have the non-LTS version), so I'll have to compile it myself.


Why not rely on them if they do the job? What is the objection?

Or why not take advantage of the absolutely trivial deployment that the Raspberry Pi Imager offers?

This is like the place the 3D printing world has been in for the last two or three years. Why is it not OK to want to just do stuff and not think about performance-tuning the hardware before you do stuff?

Some of us just want to make stuff, not tinker with the tools.


Just adding in a link to their Donation page at https://www.armbian.com/donate/

I'm not affiliated with them or anything, but also appreciate their efforts and have a small recurring donation set up in the hopes of seeing it continue. Especially for groups like this that have image hosting and hardware costs, even a few dollars can make the maintainers' load lighter and help them continue doing this kind of quiet, important work.


> I would repeat one more time: do not rely on board vendor supplied images; this is valid for pretty much all boards

It's not valid, it's not even an argument.


Armbian is great. Without it I would have never again bought non-Raspi SBCs.

Board vendors believe that it is OK to host their images on dubious download sites, with zero information on what the image is built with.


Why wouldn't you just use stock Debian? The first rpi used a weird CPU, which didn't map well to the CPUs supported by the stock OSs, but that hasn't been true since that very first one.


Heh, because normal Linux distros are finally somewhat usable with the RPi 5, let alone the older versions. Neat little headless servers? Sure. Capable desktops? Not quite there yet.


Lots of people have been using these systems on their desktops for decades. This meme really needs to die.


Yep. That is why I went with Armbian (even though I did test the OrangePi images while I waited for my NVMe). Can't wait for them to ship kernel 6.x support for this board.


Mind elaborating on this? I've always used Raspbian and am interested in hearing about the downsides.


I was referring to other boards, not the Raspberries which have a well supported OS, apologies for not being clear about that. Other board manufacturers often publish distros which have been cobbled together with old kernels and proprietary blobs, then they abandon them when the board is declared obsolete. This is not the case of the Raspberry Pi of course, but for other boards I would check first the above mentioned distros before installing anything by the vendors.


Ahhh, I understand what you mean and believe you to be 100% correct in this case!


Not to mention that if you really want to tinker you can use pi-gen to customise builds from a desktop without all that much difficulty:

https://github.com/RPi-Distro/pi-gen

Worth a play.


Pi 5 is the first Pi I did not buy. I was always the early adopter of all Pi’s but that ended with 4.

For me, the killer was a combo of 3 things: 1. Too expensive relative to performance. 2. Availability of quite decent N100 (and similar) boxes with expandable memory and storage. 3. Not interested in a Pi that needs to be actively cooled.

I always wished the Pi had eMMC storage but it never happened for the non-compute module versions.

I still have a few Pi’s around the house and they are plenty powerful for their purpose.


I suspect the Pi Foundation was getting really annoyed at other Pi-compatible boards capturing the high-end ARM SBC niche. Their flagship Pi 4's Cortex-A72 is now almost a decade old, not even fast enough to run more than a very basic lightweight desktop, and completely out of the question for that market segment.

Each Pi so far was about a 30% jump in power consumption; this time it's over 130%. They couldn't get the performance they needed, so they cranked the Pi 5 TDP beyond what was sensible to compensate. I mean, 5A over 5V USB-C is borderline non-standard and basically maxing out the current the port can deliver without needing a regulator. It's really funny seeing the N100, a CISC chip for fuck's sake, get 2-3x the performance while pulling 2 watts less under load. This is their AMD Bulldozer moment.


Checked out RISC-V yet? That's becoming more interesting than ARM for me.


I’m in a similar boat with the Pi 5 being the first I haven’t bought. I do have one of the VisionFive 2 boards, but I’m more interested in the Milk-V Oasis board that’s supposedly coming near the end of the year.

Software support for RISC-V isn’t as good yet either. I don’t know of any hardware that supports the hypervisor extension, there is no IOMMU spec today, many distributions don’t give it the same support as ARM64, etc. It is definitely improving rapidly, but I don’t think there have been any really compelling RISC-V SoCs just yet.


> I do have one of the VisionFive 2 boards, but I’m more interested in the Milk-V Oasis board that’s supposedly coming near the end of the year.

The Milk-V Oasis is out, but are you saying there's going to be another revision of that board?

I'm torn between the Lichee (7-node, 28 cores), the Milk (1 node, 64 cores), or the VisionFive 2.

If I was going to build out a lab for shared use across a number of engineers building packages and ci/cd for Linux, what would be the best option now, while we all wait for the hardware to improve in H2/2024?


64 cores sounds like the Pioneer [0].

Their Oasis [1] board is going to use SiFive's IP; the main cluster is 12 P670 P-cores and 4 E-cores. There are also 8 X280 cores as an "NPU", also RISC-V but with a slightly different ISA.

[0] - https://milkv.io/pioneer

[1] - https://community.milkv.io/t/introducing-the-milk-v-oasis-wi...


What's the "I have less than a hundred dollars and I want a reasonably performing Linux desktop" RISC-V SBC option?


I think this is a good question. I don't think you're going to find a RISC-V board with performance comparable to an RPi 4 or 5 in that price range currently.


Interesting. Using the RPi 5 as a benchmark, what if I was willing to spend $200? I guess what I'm wondering is how much you have to spend to get to that point.


Been following the progress. Exciting to see it catch up on the software side!

https://www.youtube.com/watch?v=1apoFXZ9ad8

https://www.youtube.com/watch?v=RhPKZ5JpbHw


I feel like these boards now have only one way to improve, and that is by adding AI capabilities. Like being able to load Whisper onto a board and use it for transcribing mic input.

Because the bigger-faster-hungrier race is putting them in direct competition with x64 boards, where you then ask yourself whether, for a couple of watts more, you could get a real PCIe slot or two to plug in whatever you want, and use the RAM you want.
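As a concrete illustration of the Whisper idea, here's a minimal sketch assuming the openai-whisper Python package and a pre-recorded file ("clip.wav" is a placeholder); feeding it live mic input would need an extra audio-capture layer on top:

    import whisper

    # "tiny" is the smallest model and the most plausible fit for an SBC
    model = whisper.load_model("tiny")
    result = model.transcribe("clip.wav")
    print(result["text"])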


Which reminds me to mention that the Radxa ZERO 3E [0], which was announced last November, is now for sale [1] (since last week).

It's basically a Raspberry Pi Zero with the difference that it has a gigabit ethernet port instead of WiFi+Bluetooth.

This is not an ad; I've ordered two because my OpenVPN server, which runs on a Raspberry Pi B+ (1st gen, 9 Mbit/s throughput on Bookworm), needs upgraded hardware.

In that context, it's remarkable that Bookworm still runs on a device as old and weak as the 1st-gen Raspi.

[0] https://radxa.com/products/zeros/zero3e/

[1] https://arace.tech/products/radxa-zero-3e


Perhaps you have reasons for avoiding it, but have you considered researching WireGuard for your VPN? The throughput you can get out of it compared to OpenVPN is night and day.

It's also really amazing when it's used in mesh VPNs like Tailscale.


It's a legacy setup and it's working. I did some tests last year with WireGuard, and among the problems I encountered was the lack of reconnection upon IP address change. I tested it with a couple of VPSes and devices on the LAN, and for some reason it failed to work reliably - as in, running for weeks at a time and surviving reboots of random machines.

My setup bridges two home networks into one with two different subnets.

Long term I definitely want to use WireGuard, but for now I'll continue using what works reliably.

Regarding Tailscale, I don't want to use 3rd party services for this.


If you want to avoid third party services Tailscale works with the community project Headscale that you can self-host. It's even gotten patches from Tailscale themselves. https://headscale.net/


Can't remember the exact name, but I remember seeing a similar Pi Zero-sized board with PoE Ethernet and an M.2 slot for Wi-Fi, and thinking it'd be perfect for a WAP.

Unfortunately it was an industrial vendor, so I don't think you can buy it in low quantities, and the price is probably way too high for what it is.

I feel like there must be a market for something like that though: a board with the bare essentials to make it cheap enough to have a few around the house / office, leaving it up to the customer to find a wireless card that could be upgraded down the line.


I got one for review in December. Had a few initial issues with it (this was way before release), hope to be able to test it fully soon.


What does one have to do to get such review devices?


I have been working in the space for a while...


I can't take any of the benchmarks seriously when he is using very different hardware across the tests. I can somewhat understand comparing Orange Pi NVMe to the RPi 4 SATA because that's what ships out-of-the-box (but there's an NVMe hat available), even though it'll be rate-limited to USB 3.0 speeds. But I can't understand comparisons to the u59 micro that are actually run on an Intel machine and then not using an NVMe in the Intel for comparison.

This abounds across all the tests, from the very first I/O tests that show the Orange Pi 5+ beating both Intel configurations, to the OnnxStream test that shows Intel beating the Orange Pi 5+ even though the Intel unit has to load/stream the model from its paltry SATA disk while the Orange Pi 5+ is outfitted with an NVMe drive.


Hi, author here. I tested with what I have, and with what I currently use to work and test with that is in the general price/performance range. If that isn't obvious, then I have to apologize and make an explicit note of that.

My u59 ships with SATA SSDs, as it happens.

I do have an Intel i7 13th Gen with PCIe 4.0 NVMes (and several modern Macs), but that would be so far off base (and so expensive) that it isn't even comparable. The i7-6700 is much closer in "value", if you will.

However, you are mis-reading the way the OnnxStream test works. It is still CPU-bound for the most part.


I did not understand your complaints, so I have searched the specifications of Beelink u59.

This is a small computer with the previous generation of Intel Atom CPUs (Jasper Lake), and it happens to support only SATA SSDs, so your suggestion of using an NVMe SSD would have been impossible.

Even with the current generation of cheap Intel CPUs, i.e. Alder Lake N, for instance N100, the CPUs have very few PCIe lanes and most cheap computers do not have an M.2 socket that works at the full PCIe 3 speed of 32 Gb/s like the SSD of the tested OrangePi computer, but they have sockets with only 2 lanes or only 1 lane, which work at half speed or at quarter speed.

Most computers with RK3588 have a full-speed M.2 type M SSD socket and this is one of their advantages over most other computers in this price range.

Since the OnnxStream performance depends both on SSD and on CPU performance, there is no surprise that an Intel Skylake CPU using AVX2 instructions is so much faster than Cortex-A76 with much lower clock frequency that it wins the benchmark despite the slower SSD.

The only benchmarks more informative than these would have included comparisons with a computer using the direct competitor of RK3588, i.e. Intel N100 (which is faster for CPU-limited tasks, but not necessarily for those involving I/O or video), but it appears that the author does not have such a newer computer.


You are 200% correct. I intended to use the N5105 for comparison, but it lacks the right instruction set--and I do end the article mentioning the N100 as something I'd like to compare with.

The RK3588 designs stood out to me as having a very nice PCIe layout (the RK3588s, for instance, doesn't), and that is one of the main reasons I wanted to test the Orange Pi 5+.


The idle power draw at 5W is a little surprising to me, actually - the M1 mac mini draws ~7 idle. The max is a whole lot higher, but I'd also suspect the max performance you're getting is a whole lot higher, too.

I really like the pi zeros for low power budget computing, but I think once you're getting into this kind of power envelope, you're kicking into "real computer" territory and I'm not sure how much benefit the SBCs are giving you.


Hi, author here. I live inside a Mac mini M2 Pro, and love the fact that it only draws around 21W while running Windows on ARM inside a VM with a Teams call, so yes, there is that. But they are different animals altogether.


> I live inside a Mac mini M2 Pro

How cozy is it :) ?


I gave it huge windows :)


Ahh the old classic, https://i.imgur.com/avyXeCN.png


You can bring it down to around 3W with a few config tweaks. Mine averages about that. (edit: wrong board! I was thinking of my OPi5, not the Plus. My OPi5+ looks like it's around the same as the author.)

An M1 Mac is more powerful for sure, but for my use cases (my OPi5+ is being used as a video capture relay with its onboard HDMI capture and my two OPi5's are k8s nodes running Github Actions jobs) it would also be a lot more expensive.


Did you change the CPU governor, or were there any more tweaks? I did measure this off the wall and there's always a bit more overhead in that situation, but I'm curious.


Nope, I just remembered the wrong one. 3W on the OPi5, not the Plus. Mea culpa.


I'm curious how much more expensive, though - the low-end mini is $600. I don't know how much each of the OPis is once you've added all the accessories to make them functional, but it wouldn't surprise me if all of performance/$, performance/watt, and actual total cost winds up being better with the mini once you've got more than one or two of the OPis running.


A 16GB RAM OPi5 is $140 and a 16GB OPi5+ is $180, outside of sales. Both of my OPi5's have a small microSD boot volume and a 1TB NVMe drive that cost about $50, because when they're not doing arm64 builds for my GitHub projects, they also do some Longhorn volume replication in my home k8s cluster. My OPi5+ has a 2TB NVMe drive for video recording-to-disk. (I designed and 3D printed my own cases, so I didn't factor that in.)

A Mac Mini that matches the important specifications here--and CPU performance isn't one of them, but memory capacity and disk storage are--is twelve hundred dollars. Before you add a capture card or the additional terabyte of storage for the video capture box. Also then I'd have to fight with Asahi Linux or something, because my workflows, while probably portable to macOS, already exist on Linux.

I have no problem buying Macs, I have plenty. The Mini is not a replacement for the needs I described. The more general Ryzen mini-PCs are better competitors, and if you need more and faster compute are a better call at ~$230 to $400--a far cry from the Mac mini's pricing.


Yeah, you’re right - the math is a lot different when you’re looking at the memory and storage side of the equation.


I won't lie, I've thought about 3D printing a 2U or a 3U rack and treating them like really cheap blades. ;)

Admittedly, as much for the hell of it as anything.


> Keeping in mind that the i7 runs at nearly six times the wattage, this is pretty good3, and the key point here is that the Orange Pi 5+ generated the output at a usable pace–slower than what you’d get online from ChatGPT, but still fast enough that it is close to “reading speed”–whereas the Raspberry Pi 4 was too slow to be usable.

> This also means I could probably (if I could find a suitable model that fit into 4GB RAM) use the Orange Pi 5+ as a back-end for a “smart speaker” for my home automation setup (and I will probably try that in the future).

This is pretty interesting for me. I had (wrongly, I suppose) assumed that hardware requirements for LLMs were “have a recent NVidia GPU” but this proves otherwise.


Hi! Author here. Mind that I had to test with relatively small, "dumb" LLMs. I have no doubt I can run whisper.cpp on the RK3588 _and_ a tuned LLM to handle intents, but it won't be a very smart one (I am hoping to find a good way to run quantized Mixtral, but given the RAM constraints on the 4GB board I didn't even try).

Edited to add: I'm looking for something like https://news.ycombinator.com/item?id=38704982 (LLM in a Flash) even if I find something with 16/32GB of RAM, which is why I looked at OnnxStream as well (but of course the inference in LLMs is different, so I can't leverage the NVMe just yet).


Thanks! How "bad" are these LLMs though, especially for smart home-esque basic tasks? I'd imagine them to be alright-ish?


Well, the ones I tried that would fit into 4GB RAM have trouble following directions and inconsistent output. I can't really tell you (yet) which would fit into, say, 16GB RAM and work consistently (and fast enough to, say, turn a light off faster than it would take you to get up and reach for the switch), but I'll eventually get to it...


Dolphin-phi (uncensored tweaked version of msft phi) has been pretty good, I've been testing for only a couple days. It's 2.7B params, so depending on how much RAM the os is using you might be able to run that on 4GB. I run it on 8GB windows using wsl2/ollama and the OS takes up around 5GB (I think), so maybe....


Thank you, that makes sense. I think I'll stick with my google hub for now haha


I've run larger models on mine - they work, but quite slowly.

You can get a 32 GB Orange Pi off AliExpress today already.


Inferencing can be done entirely in software (e.g. INT8), but it's very slow compared to a GPU or APU. Nvidia cornered the market because everything (TensorFlow and everything after) is optimized for it, but you can get good results on AMD now, and on Arc too in some cases. And slow results entirely in software (CPU + RAM), which for personal and non-constant use may be just fine too.


Thanks! Do you have any guides/websites/github repos for running these models on CPUs?


Ollama will (nearly always) work provided you have enough RAM. I was actually pretty surprised that it didn't work on my N5105 (which has 16GB) because it relies on AVX instructions...


Thanks! Someone else mentioned llama.cpp but it appears that ollama is just a gui frontend for llama (which is good because I find guis easier). I'll hopefully set it up soon!


It's not a GUI, it's a CLI, but it's very easy to use: "ollama run {model}". You can also run `ollama serve`, which serves an API, and then you can use or build a simple GUI on top.
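For anyone building that simple GUI, here's a rough sketch of calling the API once `ollama serve` is running, assuming the default local port and that the model has already been pulled (the model name is just an example):

    import json
    import urllib.request

    payload = json.dumps({
        "model": "phi",
        "prompt": "Turn off the living room lights.",
        "stream": False,
    }).encode()

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])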


Thanks, I’ll keep that in mind!


Next upcoming Ollama version will support non-AVX CPUs


Over the past year or so, various projects have made it possible to run LLMs on just about anything. Some GPUs are still better than others - Nvidia GPUs remain the best for token throughput (via TensorRT-LLM) - but AMD GPUs are competitive (via vLLM), and even CPUs can run LLMs at decent speeds (via llama.cpp).


Thank you!


I always found it odd that many seem to belittle the RPi and/or say it's lacking in power or features.

As I understood it since the beginning, the RPi is a teaching and learning tool, not your 32 GB home server running Git, Nextcloud, Plex, Portainer, and 15 other services. So faulting it for something it was never intended to be seems a bit unfair?


This thing would be great for a low-power NAS, but no mainline kernel support, no buy :/


Yeah, well, I do mention near the end that I would like a couple of SATA ports for it. I suppose one could stick a controller into the NVMe or Wi-Fi slot...



I couldn’t get armbian to boot on mine


Damn, that's a shame. Why are these hardware companies too lazy to upstream?


There was no mention that this costs >$100, which does limit its usage compared to a $35 RPi.


At current rates the best Pi you can get for $35 is a Pi4 with just 1GB of RAM, which isn't in the same league as the OP5+ 4GB.


On the other hand, you can in fact do a lot with a RPi4 with 1GB of RAM. For me, once you reach $100 I'd much rather get one of the NUC-like boxes.


The article mentions that the author wanted to see if the Orange Pi 5 Plus could be the successor to the Pi 4.


Not so much a successor but an alternate path. I didn't want to get into pricing because a Pi (regardless of number) doesn't ship with a PSU, an NVMe slot, or even two 2.5GbE interfaces. It's not something I want to directly compare with price-wise, really (although I suspect adding up all the bits might be comparable...)


The Pi 5 + heatsink + customs fees also comes up to just over $100.


People who think the Raspberry Pi 5 is underwhelming

a) don't understand the market or its needs particularly well

b) aren't really paying attention to the underlying trends from the foundation or the trading company

c) are willing to write off any absurdly arcane, poorly-documented things they had to do to get a competing board to offer a stable, supported alternative to the Pi 5.

But the most interesting thing about the Pi 5 is what it tells you about what is coming.

Look at the (astonishingly) successful RP2040, and then look at the Pi 5's RP1 Southbridge, and then scratch your chin and think for a bit.

It's not really incremental. Something quite big has happened here, we just don't see the product of it yet.


Care to share your opinion more clearly? PCIE is eating the world? More modular computing systems? None of that seems "big" to me so I am probably missing your point.


I agree with what the parent is alluding to; the introduction of the RP1 is very understated but perhaps it's more interesting to SBC engineers rather than the end users.

In other words:

1. The RP1 (implemented on TSMC 40LP) contains all the power-hungry/high-bandwidth IO that is difficult to do on smaller process nodes. This allows the main processor to be moved to smaller nodes or even a different vendor/architecture in future boards, making it easier to target better power efficiency in the future.

2. Going forward, the IO feature set will now be consistent and reliable, by reusing the RP1. It is no longer a requirement to try to get these peripherals onto the main processor.


Yes, it is this -- and what the sibling comment says.

It's clear that at least these things have changed:

1) there is now independence from the "old smartphone processor" model

Because the RP1 allows them to take control of the very bits of the puzzle that the Pi pioneered and apply them more broadly (including to x86 hardware if they chose to; they clearly did this in the development process)

2) nothing in particular stops them selling the RP1 as-is (except that they are not going to).

There have been some interesting allusions very recently as to what the success of the RP2040 and the RP1 might mean for a future microcontroller lineup, but my guess would be a mid-sized processor optimised for very small educational computers and emulating larger machines.

I would expect to see an RP2040 successor board based around something like the RP1 with USB-C and more concessions towards DVI/HDMI for one thing.

3) they now don't have all their eggs in the one basket (which is better for the foundation)

4) they could now choose a "partnership" model where something like the RP1 turns up in other people's hardware; there are already SBCs on the market using RP2040s for GPIO. [1,2]

Essentially, what has happened is not an incremental change. It's not even particularly incremental in the Pi 5, which is architecturally new.

It is a step change on the design level but also on the business level.

[1] https://www.tomshardware.com/news/thunderberry5-sbc-to-take-...

[2] https://linuxgizmos.com/low-profile-radxa-x2l-sbc-featuring-...


I mean, you are fundamentally describing a chipset/southbridge. It is a common approach and it has tradeoffs (much lower efficiency than monolithic SOCs and additional data movement power).

It is an “interesting” choice for an SBC (probably a bad one given the lower efficiency and higher BOM cost) but overall it hasn’t changed the fact that the N100 is still a faster, cheaper, more efficient device (despite its monolithic SOC design!) unless you actually need the GPIO.

It's really only an improvement vs the early RPi 1 boards, where everything was interfaced using a 500 Mbps half-duplex USB 2.0 connection as a system bus. That was an exceptionally bad design, particularly in the days when the (closed-source) kernel modules would drop USB frames under load. But the newer ones with SATA support etc. have already moved away from this.

It is more interesting that rpi is branching out into chip design etc, vs relying on third-party suppliers or pre-existing designs, than on an actual technical level.


Both the RP2040 and the Pi 5 southbridge are in-house custom silicon, and fairly good at that. I think the parent post is alluding to the Pi Foundation eventually building their own processors, and in the process hopefully shrugging off many of the Pi's longstanding limitations.


> in general, right now Rockchip images are mostly kernel 5.x based

No, they're mystery meat-based, merely pretending to be 5.x, like most of those Chinese SoCs. I'll care when it can run mainline and not Armbian.


I was interested in this board because of the HDMI input. However, I couldn't find anyone reviewing/testing that (last searched a few weeks ago).

I have another board (Khadas VIM4) with HDMI input. But the HDMI input only recently got support in their vendor-provided Linux image and is finicky. In the Armbian image I couldn't get it to work for more than a few frames of input video (I tried with GStreamer and ffmpeg).

Additionally, I couldn't find any information on HDMI input in Linux (seems like everyone uses USB capture cards that use uvc with v4l2).


Hi, author here. That is something I intend to test. I know it works under Android (which I've yet to test), and under Linux I can see the audio part of it using lshw:

  *-sound:3
       description: rockchiphdmiin
       physical id: 6
       logical name: card3
       logical name: /dev/snd/controlC3
       logical name: /dev/snd/pcmC3D0c
I can't see anything interesting in the USB bus:

    $ lsusb -t
    /:  Bus 06.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 5000M
        |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/4p, 5000M
    /:  Bus 05.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 480M
        |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/4p, 480M
    /:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=ohci-platform/1p, 12M
    /:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=ohci-platform/1p, 12M
    /:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
    /:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
(other than the ludicrous bandwidth available, that is)

...but I am using the Armbian 5.x image, so maybe I am missing some driver or ARM device tree (DTB).
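If/when the HDMI input does show up as a V4L2 capture node (which is not confirmed above), a quick way to probe it from Python is OpenCV's V4L2 backend; "/dev/video0" is a guess, and the actual node may differ or not exist with the current kernel:

    import cv2

    cap = cv2.VideoCapture("/dev/video0", cv2.CAP_V4L2)
    ok, frame = cap.read()
    if ok:
        print("captured frame:", frame.shape)
    else:
        print("no frame (wrong node, missing driver, or no signal)")
    cap.release()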


I use an Orange Pi 5+ as a read replica for all of my home logs, OpenTelemetry data, system metrics (gathered via SNMP), and environmental data.

I have it configured with 16 GB of RAM and a 2 TB NVMe, connected to my network at 1 Gbit and to my NAS running iSCSI at 2.5 Gbit.

It is a very nice little system and has been rock steady running Ubuntu 22.04. I plan on making it my primary database server, but that's a later project.

It's been in service for 8 months now and has been quite impressive. Highly recommended for those into small home database setups.


Incidentally, one of the LXC containers I have running on mine has a copy of my IoT metrics database (2 years, roughly 10GB in SQLite, a bit over that when imported into Postgres). Queries are lightning fast compared to the Pi, even doing aggregations--which is why I mention at the bottom of the article that I intend to move my home automation stuff to it.


What is your IOT metrics stack?


Homegrown. Essentially Node-RED and some Go/Python.


I just want something with sata ports and 10G ethernet so I can make a cheap, power efficient home NAS.

Right now I'm using an old office PC that costs $7/month to run. Using the USB port on my router is too slow (maybe I need more expensive HDD enclosures) and can't do RAID.


The ODROID-H3/H3+ has 2 SATA ports, eMMC, and an M.2 slot, but you have to be willing to do a tiny bit of hacking to get 10 GbE out of it. It supports a fair bit of RAM (64 GB) but no ECC, which is a dealbreaker for some.

The "hacks" with these are to use the M.2 port with an M.2-to-PCIe adapter to get 10 GbE or several more SATA ports.

NASes are tricky to spec because people have such a wide range of requirements. For example, a lot of people want their NAS to be able to do video transcoding, which wouldn't work well on this hardware.


I just need a fast file host.

The idea of video transcoding is pretty strange to me. What happened to the days where the client had the necessary codecs to play a video?

Even if you store the video in the destination format, adding subtitles immediately requires Plex (and similar) to transcode, rendering the subtitles into the video.

Personally, I have VLC installed on my Chromecast and, while it's not as pretty or convenient as Plex, the format of my files is irrelevant.



Well, it was there when I revised the draft early this morning... I'm betting having the link here won't help their uptime :)


I'm not removing it. With all due respect, without a working URL the article is kind of useless to me. Ultimately I want to buy it or see the official pages on the hardware, which currently doesn't seem possible.


I wasn't asking you to, just pointing out the impact :)


Orange Pis are officially available on Amazon or AliExpress. They do not have their own online shop: https://www.amazon.com/stores/OrangepiCreateNewFuture/page/F...

Unfortunately, there are too many sellers selling them on AliExpress for me to find their store there, so you will have to wait until they recover from the hug. Prices are the same on each, though, if you are purchasing from the United States.

(edited for clarity)



Is it running on an OrangePi perhaps?


Can I use a regular m.2 PC Wi-Fi card on these ARM SBCs? Suppose the SBC's CPU is mainline supported / armbian supported, and the PC Wi-Fi card works on x86 machines under Linux.


Generally yes. M.2 wifi cards are just PCI-E (except for Intel CNVi). Jeff Geerling tried a bunch of different PCI-E cards with the Raspberry Pi 4 Compute Module which does expose the PCI-E interface: https://pipci.jeffgeerling.com/


Warning: suspiciously, the article doesn't mention that the USB ports are only 2.0.

Was this a coincidence or was the article biased?


>there is going to be a 32GB version as well

I've got two in my k3s cluster, so it's very much a thing already.


Anyone know if the CPU is fast enough to saturate those 2.5GbE ports?


Saturate, probably. Switching/firewalling between the two ports, not so sure.


The RK3588 is great, but I really want a Dimensity 9300 SBC


Wow this chip does look good - when can we get our hands on it?


ISTR a couple of phones with it being released in China just before Christmas, so it's probably already available; here's one of them (albeit pretty meh apart from the CPU):

https://www.gsmarena.com/vivo_x100_pro-12694.php



