
What would be a use case for using one of these FPGAs rather than something like a Raspberry Pi or a traditional microcontroller?

I'm genuinely interested. I've been really curious about FPGAs, but I don't know what a good hobbyist use case for them would be.



The use case for these boards is to learn how to use FPGAs.

Why would you use an FPGA? Mainly when you have specialized requirements that processors can't meet. FPGAs excel at parallelism: a designer can instantiate many copies of a circuit element, like a processor or some dedicated hardware function, and achieve higher throughput than a normal processor can. If your application doesn't require that, you might still like them for the massive number of flexible I/O pins they offer.

Lastly, using FPGAs as a hobby is rewarding, just like any other hobby. Contrary to popular belief, you don't "program" them in a programming language the way you do a processor. You describe the circuit's functionality in a hardware description language, and a synthesizer figures out how to map it onto digital logic components. You get better insight into how digital hardware works and how digital chips are designed. When you use them as a hobby, you get the feeling that you are designing a custom chip for whatever project you are working on. Indeed, FPGAs are routinely used to prototype digital chips.


Sounds like something that might be just as enjoyable on a simulator.


No, not really. FPGAs are by definition massively parallel. There is no way you can simulate them on a CPU at any reasonable speed: figure roughly 1 ms of CPU time to simulate one clock cycle, which caps your simulated clock at about 1 kHz. That sucks all the enjoyment out of it.


Ha! I often find myself wishing I had an FPGA. It's very common that I have to control external devices that would be straightforward in logic but require all sorts of hacks and tricks on a microcontroller.

Here's just one simple example: controlling servos. Sure, you can do that with most uCs simply enough, say using timer interrupts, but what if I need to control 100 of them? In logic, I can just instantiate 100 trivial pulse controllers, whereas this is typically impossible with a microcontroller, or at the very least leaves no cycles free for any computation.
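
To make the software side concrete, here's a hypothetical C sketch of the naive timer-interrupt approach. Everything in it (set_pin(), timer_tick_1us(), the 1 µs tick) is a made-up assumption, not any particular chip's API:

  /* Hypothetical sketch: one 1 µs timer ISR driving N hobby servos
     (a 1-2 ms high pulse at the start of each 20 ms frame).
     set_pin() is a made-up GPIO helper, not a real vendor API. */
  #include <stdint.h>

  #define N_SERVOS 100

  void set_pin(int channel, int level);     /* hypothetical GPIO write */

  volatile uint16_t pulse_us[N_SERVOS];     /* commanded width, 1000..2000 */
  static uint16_t frame_us;                 /* position within 20 ms frame */

  void timer_tick_1us(void)                 /* fires every microsecond */
  {
      frame_us = (frame_us + 1) % 20000;    /* 20 ms servo frame */
      for (int i = 0; i < N_SERVOS; i++)    /* 100 compares + pin writes, */
          set_pin(i, frame_us < pulse_us[i]); /* every single microsecond */
  }

At 100 channels that's on the order of 100 million pin updates per second. A real implementation would be cleverer, but the per-channel bookkeeping cost remains; on an FPGA, each channel is just its own counter-and-comparator, and all 100 tick in parallel for free.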

Another example: you want to implement a nice little cipher like ChaCha20. Even though ChaCha20 is efficient, it still costs a lot of cycles on a microprocessor, whereas an FPGA can implement it as a nice pipeline, potentially reaching speeds like 800 MB/s while still having ample resources left for other work.
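
For a sense of what "a lot of cycles" means, here is the standard ChaCha20 quarter-round in C (per RFC 8439), a minimal sketch of just the core mixing step rather than a full implementation:

  /* ChaCha20 quarter-round (RFC 8439): 4 adds, 4 xors, 4 rotates.
     A 64-byte block needs 80 quarter-rounds; a CPU runs them one
     after another, an FPGA can pipeline them. */
  #include <stdint.h>

  #define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))

  static void quarter_round(uint32_t *a, uint32_t *b,
                            uint32_t *c, uint32_t *d)
  {
      *a += *b; *d ^= *a; *d = ROTL32(*d, 16);
      *c += *d; *b ^= *c; *b = ROTL32(*b, 12);
      *a += *b; *d ^= *a; *d = ROTL32(*d, 8);
      *c += *d; *b ^= *c; *b = ROTL32(*b, 7);
  }

Each 64-byte block takes 80 of these (8 per double round, 10 double rounds), roughly a dozen ALU operations apiece, all strictly serialized on a CPU; an FPGA can, in principle, unroll them into pipeline stages and keep a new block in flight at every stage.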

I could go on.


Great comment and examples. CPUs are optimized to do everything reasonably fast, one step at a time, within specific (often legacy) constraints. FPGAs let us build just the right hardware for our computations, out of components that run in parallel with fewer constraints. The result is often some amazing performance/circuitry ratios.


Seeing the board run your design is a rare, beautiful moment where YOU, one hobbyist, designed the whole stack :).


I don't know if this is a stupid question, but could one design a very basic LISP machine on an FPGA? How about a diminutive JVM?


Why not? It was done on real hardware in the 80s, right? Hell, here's one:

http://www.aviduratas.de/lisp/lispmfpga/


It's very easy. Personally, I find Reduceron [0,1] far more interesting.

[0] https://www.cs.york.ac.uk/fp/reduceron/

[1] https://github.com/reduceron/Reduceron


I made a little FPGA LISP machine:

https://github.com/jbush001/LispMicrocontroller


A few ARM CPUs have, not a JVM, but the ability to accelerate one by directly executing Java bytecode (Jazelle DBX).

"The Jazelle extension uses low-level binary translation, implemented as an extra stage between the fetch and decode stages in the processor instruction pipeline. Recognised bytecodes are converted into a string of one or more native ARM instructions."


Cool, I hadn't thought about that. I probably need to get an FPGA. I really liked the book "The Elements of Computing Systems" [1][2], in which one builds a computer from NAND gates upward, then a compiler, a VM, and finally a simple OS with applications. The hardware part of the course seems to be on Coursera now as well. [3]

[1] https://mitpress.mit.edu/books/elements-computing-systems

[2] http://www.nand2tetris.org/

[3] https://www.coursera.org/learn/build-a-computer


That's some really neat stuff that I somehow missed in prior research. Thanks for the links. I'm particularly going to have to take another look at the paper that details their methodology for building systems from the ground up: the abstraction process and the samples.


Try interfacing one of them with DRAM, at a moderate speed.

You'll learn about pipelines and caches, and why cache misses are so painful; a whole host of CPU performance topics that "look" esoteric will become plain as day.


FPGAs are for pretending you have the money to fab every hardware design iteration. Small CPUs are...not?

Honestly the FPGA in data center stuff is probably mostly hype for most people, but toying with an FPGA is super fun.


Well... if you're going for something stupidly small/underpowered (ATtiny level of power consumption) but run out of cycles to handle multiple I/Os at the same time, an FPGA lets you cheat a little by doing things in parallel. For example, with 10 inputs on a standard CPU you have to spend cycles checking each one separately. With an FPGA you can have a block for each input and just get a signal propagated when something "interesting" actually happens. Then again, you could just invest in bigger batteries and a better CPU instead :)
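
As a sketch of that CPU-side cost (read_input() and handle_edge() are hypothetical stand-ins, not any real HAL), a polling loop pays for every input on every pass, even when nothing changes:

  /* Sketch of polling 10 inputs on a CPU: every input costs cycles
     on every pass. read_input()/handle_edge() are hypothetical. */
  #include <stdint.h>

  int read_input(int i);                  /* hypothetical GPIO read */
  void handle_edge(int i);                /* hypothetical event handler */

  static uint8_t last[10];

  void poll_once(void)
  {
      for (int i = 0; i < 10; i++) {      /* pay for all 10 inputs... */
          uint8_t v = (uint8_t)read_input(i);
          if (v != last[i]) {             /* ...even if none changed */
              last[i] = v;
              handle_edge(i);
          }
      }
  }

On an FPGA, each input instead gets its own tiny edge-detector block, and the "interesting" signal costs nothing while idle.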



