
I know it's an HN cliché to comment on this, but it's also true: I went several links deep, then looked at the homepage and the FAQ, and still have no idea what this product is or who it's for.


Short version: it's a custom board into which you can plug multiple RPi Compute Modules (and now some Jetson modules) to create a miniature version of a blade server system; this board is the backplane of that blade system. Use it to create your own edge cluster, I guess. It doesn't seem particularly useful beyond being a neat curiosity when you fill it with RPi CMs, but as a GPU/CUDA node filled with Jetson modules there is some interesting possibility for people looking for a cheap local cluster for training ML models.


4x Jetson Nano would cost $240 and the Turing board will probably cost around $100 (it looks like they haven't decided yet), and you get 1.88 TFLOPS; add 50 bucks and you can get a GTX 1060 with 4.4 TFLOPS, and you can play games on it too.
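
A rough back-of-the-envelope version of that comparison, using only the figures quoted above (the prices are my estimates, not official, and the GTX 1060 number is for the card alone):

    # FLOPS per dollar using the numbers quoted above. Prices are rough
    # estimates from this thread, not official specs, and the GTX 1060 line
    # ignores the rest of the PC you'd need around it.
    setups = {
        "4x Jetson Nano + Turing board": {"cost_usd": 240 + 100, "tflops": 1.88},
        "GTX 1060 (card only)": {"cost_usd": 240 + 100 + 50, "tflops": 4.4},
    }

    for name, s in setups.items():
        gflops_per_dollar = s["tflops"] * 1000 / s["cost_usd"]
        print(f"{name}: {gflops_per_dollar:.1f} GFLOPS per dollar")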


Not sure that's quite a fair comparison, because you'll need quite a bit more hardware to use the GTX 1060, and I think that the Turing board + Jetson setup would be all-inclusive (except a power supply and chassis, I suppose)?

I could be wrong about that though.


But can you do a 4x/cluster SYN flood with a GTX 1060?


And "RPi CMs" are Raspberry Pi Compute Modules, so apparently these: https://www.raspberrypi.org/products/compute-module-4/?varia...

Ironically, the adapter images that you use to plug in the RPis (which might give you a clue) don't currently load on the Turing Pi homepage.


Agreed, save for the caveat that a 4x RPi compute system could actually do quite a bit of edge ML. Even a single RPi is enough for >15 fps image recognition.
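
For anyone curious, here's a minimal sketch of how a figure like ">15 fps" is typically measured on a Pi: time a small quantized classifier through TensorFlow Lite. The model filename is a placeholder, and it assumes the tflite_runtime package is installed.

    # Rough timing sketch, assuming tflite_runtime is installed and a small
    # quantized model (placeholder filename) is on disk.
    import time
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="mobilenet_v2_quant.tflite")  # placeholder
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Dummy frame shaped like the model input; a real pipeline would feed camera frames.
    frame = np.zeros(inp["shape"], dtype=inp["dtype"])

    n = 50
    start = time.time()
    for _ in range(n):
        interpreter.set_tensor(inp["index"], frame)
        interpreter.invoke()
        _ = interpreter.get_tensor(out["index"])
    print(f"{n / (time.time() - start):.1f} inferences/sec")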


Their two use cases are edge infrastructure (horizontally scaled server applications on a power and cabinet-space budget) and a workstation for workflows that can benefit from distributed compute.

I could imagine the latter might be handy if you’re doing CAD with rendering in the Amazon rainforest and don’t have 5A of power for x86_64 + GPU. Maybe.

It definitely seems like a solution in search of a problem. Happy to be proven wrong though.

See “Use Cases”, here: https://turingpi.com/turing-pi-2-announcement/


There's a company here in Ireland, Cainthus, that does workplace wellbeing for dairy cows. It does so by continuously analysing video feeds and detecting behaviors that could indicate stress or other environmental factors that make the cows unhappy. Management could be done by the RPi while the inference could run on one or more Jetson boards. These little machines are very friendly for embedded work.


I think I'll wait for (and enjoy!) the obligatory Jeff Geerling[1] video[2] on it.

--

[1] https://news.ycombinator.com/user?id=geerlingguy

[2] https://www.youtube.com/c/jeffgeerling


Oh, I didn't know he had a YouTube channel. I knew him through his excellent Ansible modules.


I'm also here on a daily basis ;)

I'm hoping to get my hands on a board soon... I think they're still brushing up the prototype though, trying to get it to a more final production state.


Excellent. Your videos on the various Pi clusters have been fantastic. I'm looking forward to your review of this new updated board.


I have the earlier 7-node version, which runs Kubernetes, following Jeff Geerling's guide (https://www.youtube.com/c/JeffGeerling).

Actually I wish they hadn't reduced the number of slots to 4, because part of the "fun" is dealing with the fact that with 7 nodes, using ssh and individual node management is no way to manage a cluster, so you're forced to treat it as a real cluster. I feel with 4, I might be tempted to individually manage each node. But I also understand why changes in the Pi Compute Module 4 made this necessary. The CM4 is physically much larger than the CM3.
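
As an illustration of why: the hand-rolled version of "managing it as a cluster" ends up looking something like the sketch below, run one command on every node in parallel and collect the output. Hostnames here are hypothetical, and in practice you reach for Ansible or kubectl rather than writing this yourself.

    # Hypothetical hostnames for the 7-node board; run the same command everywhere.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    NODES = [f"turing-node-{i}" for i in range(1, 8)]

    def run(node: str, command: str) -> str:
        result = subprocess.run(
            ["ssh", node, command],
            capture_output=True, text=True, timeout=30,
        )
        return f"{node}: {result.stdout.strip() or result.stderr.strip()}"

    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        for line in pool.map(lambda n: run(n, "uname -a"), NODES):
            print(line)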

Edit: Actually my real wish is for a compute module that has more I/O channels. I would love to build a hypercube-style supercomputer (like the Meiko Computing Surface), but that requires 5+ high-speed I/O interconnects per node to build, say, a 32+ node cluster. I wonder if PCIe offers a solution?
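
The "5+" falls straight out of the hypercube topology: in a d-dimensional hypercube each node links to the nodes whose IDs differ by exactly one bit, so an N-node machine needs log2(N) interconnects per node, i.e. 5 for 32 nodes. A tiny sketch of the math:

    # Hypercube wiring: a node's neighbors are the IDs that differ by one bit,
    # so each of N nodes needs log2(N) direct links.
    import math

    def hypercube_neighbors(node: int, num_nodes: int) -> list[int]:
        dim = int(math.log2(num_nodes))
        return [node ^ (1 << k) for k in range(dim)]

    N = 32
    print(f"links per node for N={N}: {int(math.log2(N))}")       # 5
    print(f"neighbors of node 0: {hypercube_neighbors(0, N)}")    # [1, 2, 4, 8, 16]
    print(f"neighbors of node 5: {hypercube_neighbors(5, N)}")    # [4, 7, 1, 13, 21]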


I too would be interested in playing around with more processor-to-processor interconnects. Just for fun I built a 16-way SAMD21 board that used the serial interconnects to make a hypercube arrangement, and it was very cool to play with.

It would be possible to build an interconnect over PCIe, but of course it might just be better to use a 10G Ethernet PCIe interface chip for each node and a local on-PCB network.


They look to be great devices for edge compute. I can slap a Jetson in one slot and Pis in the other three. Cheap, easy to fix, no expensive support contracts. I'll be looking at throwing them out into rural locations/farms. I can get x86 NUCs, but tbh they lack the customisability this has. This is awesome for industrial applications.


Doesn't apply here. From the homepage:

----------

What can I do with Turing Pi?

Home server (homelab) and cloud apps hosting

Learn Kubernetes, Docker Swarm, Serverless, Microservices on bare metal

Cloud-native apps testing environment

Learn concepts of distributed Machine Learning apps

Prototype and learn cluster applications, parallel computing, and distributed computing concepts

Host K8S, K3S, Minecraft, Plex, Owncloud, Nextcloud, Seafile, Minio, Tensorflow


No, I saw that, I just don’t understand it. I can do all of that with a PC. Is this a PC? A more powerful Raspberry Pi? What does the ability to “learn concepts” even mean? I learn concepts from books, what does the hardware do?


This board is a very convenient way (maybe the most convenient one I've seen) to set up a bare-metal cluster of computers. Not just multiple cores, not just multiple VMs: four entirely separate ARM computers communicating over a real hardware network. One alternative to boards like this is to connect multiple SBCs together, with all the wiring and some mechanical support. Another (more powerful) alternative is to install some kind of server rack at home. More expensive, too. Using multiple virtual machines is also not quite the same.

What do people use it for? Mostly to learn how to deal with the problems that arise from managing a cluster and running software on it. Can you build a website that tolerates having one of the nodes or hard drives turned off?
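
A toy version of that check, with hypothetical node addresses and a health endpoint you'd have to expose yourself: pull a node's power and see how many still answer.

    # Hypothetical node IPs and /healthz endpoint; probe each node over HTTP.
    import urllib.request

    NODES = ["192.168.1.101", "192.168.1.102", "192.168.1.103", "192.168.1.104"]

    def is_up(host: str, port: int = 80, timeout: float = 2.0) -> bool:
        try:
            urllib.request.urlopen(f"http://{host}:{port}/healthz", timeout=timeout)
            return True
        except OSError:
            return False

    alive = [n for n in NODES if is_up(n)]
    print(f"{len(alive)}/{len(NODES)} nodes answering: {alive}")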

Some people use such setups for productive things, like a home server, but a store-bought NAS or a single PC is usually more performant. A Pi cluster might be less power hungry in some scenarios.

Some people use them as build/test platforms for code that should run on ARM architectures. Others have used them to host a website from their internet connection (I know...).

Some people just have fun tinkering with such things....


Don't worry, it's not just you. I learn concepts by building and tinkering (and reading specifications), so you'd think I'm a target market. But when I wanted to get some hands-on experience with a cluster file system, for a job, I spun up a cluster of 5 VMs on... my normal computer.

4 seems like a very useless number to me. 4 Raspberry Pis is more expensive and less useful than a used dual-Xeon box on eBay. I could imagine maybe there's a use for something with 16 slots? Or at least 8? But I don't get these cluster boards (or, for that matter, storage enclosures!) which presume I can do something fundamentally different with 4 small computers than with 1.


The original Turing Pi had 7 slots (wish it had even more!). I do feel that was better, because it really forces you to manage it as a cluster.

Spinning up VMs is sort of fine, but they don't have quite the performance or management characteristics of a real cluster. The network is slow, the nodes individually are not very powerful, you have to work out how to image each physical machine, nodes break or have I/O errors, ...


> maybe there's a use for something with 16 slots? or at least 8?

You can always connect 2 or 4 of these together.

But I understand what you mean. A project I want to build one day, when I have the time and have learned Ethernet interfacing through PCBs, is to build a single-board cluster of Octavo SoM modules. They are individually inexpensive, and it'd be relatively easy to build a board with a dozen of them connected to a switch chip.


Yeah exactly, my first thought is that a normal multicore PC is going to be not just more powerful, but more power efficient and cost efficient. It's a fun idea but I wouldn't be interested unless they publish some comparisons.

Basically everything here can be done on a single multicore computer (which is already a distributed system in many respects):

https://turingpi.com/12-amazing-raspberry-pi-cluster-use-cas...


More powerful in most cases - yes. Distributed - no. Learning how to do HA/scale-out via distributed systems is a really valuable skill, and projects like these make it sooo much more real, beyond even just basic networking.


I wonder if this video helps: https://www.youtube.com/watch?v=8zXG4ySy1m8 Jeff Geerling "Why would you build a Raspberry Pi Cluster?"


That says what it is for, but not what it is.


Ironically I thought just the opposite.


Raspberry Pi has a compute module version, the CM4, which is basically a pared-down RPi 4 with almost no I/O options. This is a board for the CM4 (and, apparently, Nvidia Jetson) that lets you power, network, and communicate with several CM4 boards in a cluster setup.


But... why? So much software doesn't work on ARM, so what's the point of an RPi cluster?


It's kind of this:

https://blog.fosketts.net/2012/02/21/cubix-ers-blade-server-...

but for Raspberry Pi's and Nvidia Jetsons.


Same here; their website should have a one-line description of what this board is and what it's for. Though I did find a short answer on potential use cases in the FAQ on one of the pages (I think it's the Turing Pi V2 product page?).



