
I have to ask, what makes you say this: "and it will cause nausea in the typical functional programming supremacist" with regard to FPGAs?


Not only are FPGAs as far from the ideal of "computation as reduction" expressed in the paper as they could possibly be, but they're the hardest machine to fit into this sort of a paradigm.

The typical CPU will run functional code fairly efficiently if enough work is put into the compiler (to some functional programmers this sounds bad, because hardware should make writing functional language compilers easy. People who know about hardware draw a different conclusion - that the hardware is just fine; as Ken Thompson said, "the PDP-11 is a fine Lisp machine, it's not a language special enough to warrant a specialized machine.")

The typical accelerator is a harder target for a general-purpose functional programming language (it's a poor target for a general-purpose imperative language to begin with.)

The FPGA, with its explicit management of every imaginable resource, is the worst target for a general-purpose functional programming language.

In general, the farther a commercially significant machine is from the classical von Neumann architecture, the worse a target it is for functional languages. That has never prevented FP aficionados from talking about "the end of Moore's law bringing about the long-awaited demise of the von Neumann machine" - a "demise" that largely isn't happening, and that, where it is, doesn't mean what they think it means.


> to some functional programmers this sounds bad, because hardware should make writing functional language compilers easy. People who know about hardware draw a different conclusion - that the hardware is just fine; as Ken Thompson said, "the PDP-11 is a fine Lisp machine, it's not a language special enough to warrant a specialized machine."

Wow, really? C got a lot of things wrong, but the one thing it got right is its abstract model of the machine. As a functional programmer, what I want is not specialized hardware but functional languages designed to target the abstract C machine. (Though maybe I'm in a minority.)

> The FPGA, with its explicit management of every imaginable resource, is the worst target for a general-purpose functional programming language.

Explicit resource management is compatible with functional programming, but it requires a type system capable of understanding the ephemeral nature of resources, à la Rust.
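
A minimal sketch of the direction in plain Haskell (hypothetical API, names made up; a faithful "à la Rust" treatment would need GHC's LinearTypes to also forbid reusing a stale handle): a phantom type index tracks the resource's state, so touching it after release is a compile-time error.

    {-# LANGUAGE DataKinds, KindSignatures #-}

    data State = Open | Closed

    -- the phantom index records whether the handle is still live
    newtype Handle (s :: State) = Handle Int

    open :: Int -> Handle 'Open
    open = Handle

    -- only an open handle can be written to
    write :: Handle 'Open -> String -> Handle 'Open
    write h _ = h

    close :: Handle 'Open -> Handle 'Closed
    close (Handle n) = Handle n

    ok :: Handle 'Closed
    ok = close (write (open 0) "hi")

    -- bad = write (close (open 0)) "hi"  -- rejected: expected 'Open, got 'Closed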


> The FPGA, with its explicit management of every imaginable resource, is the worst target for a general-purpose functional programming language.

That's a great way to put it...

but perhaps there's a more suitable FPGA cell for functional programming (than lookup-table blocks with state)? Maybe something with a local stack, recursion assistance, an iterator?

Also, a good deal of the FPGA resource management is focused on optimization... What if the optimization-at-all-costs constraint were relaxed somewhat, or supplanted with meta-information to indicate the completion of a group of cells to allow a message to propagate?

With relaxed constraints, partial, on-the-fly, and self-modification could become regular features.


> Maybe something with a local stack, recursion assistance, an iterator? [...] supplanted with meta-information to indicate the completion of a group of cells to allow a message to propagate?

What you're describing sounds a lot more like many CPU cores connected by a network than like an FPGA. The point of an FPGA is that it gives you very precise and flexible control over things like timing and interconnects. Without that, FPGAs are actually pretty slow and you'd better use a CPU.

> With relaxed constraints, partial, on-the-fly, and self-modification could become regular features.

There's a reason why self-modifying software is restricted to niche use cases: it's hard to reason about. Dynamic partial reconfiguration is slowly becoming a thing in FPGA design, but it's typically used to work around resource limitations.


> The FPGA, with its explicit management of every imaginable resource, is the worst target for a general-purpose functional programming language.

Huh? The greatest strength of functional languages is to allow you to explicitly manage cross-cutting concerns, no?


What about PLAs or PALs? Such tech seems closer to functional programming than FPGAs.


To put it simply, FPGA design (as practiced in nearly 100% of commercial use cases, i.e. with VHDL or Verilog) is imperative programming on steroids - even a sequence of actions requires an explicit mutable state variable. If you need to access external memory (e.g., DDR3 RAM), you have to communicate with a memory controller module. So not only is there no garbage collection, there's not even a malloc! And no recursion.
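
For the flavor, here's a sketch in plain Haskell of what "a sequence of actions" costs you in RTL (hypothetical signals; it mirrors the equivalent VHDL/Verilog process): even "issue a read, then wait for the ack" needs an explicit state register.

    -- one clock cycle of a toy "talk to the memory controller" FSM:
    -- (current state, ack input) -> (next state, read-enable output)
    data Phase = Idle | IssueRead | WaitAck | Done deriving Show

    step :: Phase -> Bool -> (Phase, Bool)
    step Idle      _     = (IssueRead, False)
    step IssueRead _     = (WaitAck,   True)   -- assert read-enable
    step WaitAck   True  = (Done,      False)  -- controller acked
    step WaitAck   False = (WaitAck,   False)  -- keep waiting
    step Done      _     = (Done,      False)

    -- simulate a few cycles, with the ack arriving on the third
    main :: IO ()
    main = print (scanl (\(s, _) ack -> step s ack) (Idle, False)
                        [False, False, True, False])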

From time to time you'll read statements along the lines of "functional programming languages map really well to FPGA design", allegedly because the logic functions between register stages behave like declarative constructs. Such claims are excellent bullshit detectors, indicating that the person uttering them doesn't have the slightest clue what they're talking about. It's on par with believing that expressions like "a + b * c" make C a functional language.


What are your opinions on http://www.clash-lang.org/ ?

It's different from what you describe, because the language is explicitly aware of the fact that it's running on an FPGA. However, it seems pretty declarative and functional, and it has an FPGA as a possible target.


First impression, could be misinformed, but I'll bite.

A hardware circuit is a recursive function that takes a state of the hardware and produces a new state, starting from an initial state (and so is the universe, whose initial state was supplied by the Big Bang). So you can of course express a circuit straightforwardly as a recursive function, which I think is what they sometimes do:

    blinkerT (leds, mode, cntr) key1R = ((leds', mode', cntr'), leds)
      where
        -- clock frequency = 50e6   (50 MHz)
        -- led update rate = 333e-3 (every 333 ms)
        cnt_max = 16650000 -- 50e6 * 333e-3

        cntr' | cntr == cnt_max = 0
              | otherwise       = cntr + 1

        mode' | key1R     = not mode
              | otherwise = mode

        leds' | cntr == 0 = if mode then complement leds
                                    else rotateL leds 1
              | otherwise = leds
This is VHDL/Verilog spelled in Haskell. (Real hardware description languages won't make you spell out the old state - leds, mode, ... - and the new state - leds', mode', ... - explicitly, which gets annoying with 20 state variables. Of course this could be taken care of by a preprocessor or a macro system; my point isn't that it's verbose, but that it's VHDL spelled in Haskell - a bit awkwardly, and maybe you can do it less awkwardly, but the programming model is the same, so calling it Haskell is cheating, kinda.)
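
(For what it's worth, a record plus update syntax already removes most of the primed-state spelling. A sketch in plain Haskell - not the actual Clash API, and with leds as a Word8 for concreteness:)

    import Data.Bits (complement, rotateL)
    import Data.Word (Word8)

    data St = St { leds :: Word8, mode :: Bool, cntr :: Int }

    blinkerT :: St -> Bool -> (St, Word8)
    blinkerT s key1R = (s { leds = leds', mode = mode', cntr = cntr' }, leds s)
      where
        cntMax = 16650000
        cntr' = if cntr s == cntMax then 0 else cntr s + 1
        mode' = if key1R then not (mode s) else mode s
        leds' | cntr s == 0 = if mode s then complement (leds s)
                                        else rotateL (leds s) 1
              | otherwise   = leds s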

This is definitely not what TFA is about - you don't represent the blinking leds as graph reduction; rather, you generate VHDL from VHDL-like Haskell.

This is not to say they aren't doing cool things in this project. If indeed they compile this:

    fir coeffs x = dotp coeffs (window x)
      where
        dotp as bs = sum (zipWith (*) as bs)
...to efficient hardware description code, that's really cool. In general, I think that a sufficiently restricted functional language can compile very nicely to FPGA/hardware (and to DSPs and GPUs etc. etc.) I'm just saying that a general-purpose functional language is better targeted at a CPU, and that a functional language targeting something else is a DSL of sorts.


To me, the inability of these "Haskell to hardware" systems to handle general recursion shows that they are fundamentally missing the most important part of the mapping from functional computation to combinational logic plus registers. They don't easily permit the "unrolling" of recursively written code, so your recursive programs must squirrel away their state between iterations, even when the iterations could potentially be performed in parallel. When trying to sum a list in parallel in hardware, how do you know how many per-element operations to do at once? That number determines how many gates you'll need, and the buffer size for storing the rest of the list until those gates are free.
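
To make the "unrolling" concrete, here's a sketch (plain Haskell; a real flow would pin the size down at the type level, the way Clash's fixed-length vectors do, so the depth is known at synthesis time): once the input size is fixed, a recursive sum flattens into a balanced adder tree of logarithmic depth.

    -- pairwise reduction: for a known input size n, this is a fixed
    -- adder tree of depth ceil(log2 n) - a fixed number of gates
    treeSum :: Num a => [a] -> a
    treeSum []  = 0
    treeSum [x] = x
    treeSum xs  = treeSum (pairwise xs)
      where
        pairwise (a:b:rest) = (a + b) : pairwise rest
        pairwise rest       = rest

    main :: IO ()
    main = print (treeSum [1 .. 8 :: Int])  -- 36, via a depth-3 tree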

One way to get around this is to use termination proofs to bound the size of the input data, which is exactly what's done in total functional programming. In total programs, no term is a bottom, so laziness vs. strictness is moot (both give the same answer for every program, since no program throws an exception or loops forever under either evaluation strategy). That's what you would expect, since you're modelling hardware, which has a clock and will clock results out to its output pins each cycle anyhow.
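
The poor man's version of such a bound is explicit "fuel" (a sketch): the recursion depth, and hence the amount of hardware an unrolling compiler would have to lay down, is manifest in the program.

    -- recursion bounded by structural fuel: depth is at most 'fuel',
    -- so an unrolling compiler knows how much hardware to generate
    sumBounded :: Int -> [Int] -> Int
    sumBounded 0    _      = 0
    sumBounded _    []     = 0
    sumBounded fuel (x:xs) = x + sumBounded (fuel - 1) xs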

How can we bridge the mapping? Well, you could write a totality checker that searches for a well-founded recursion principle, and use it to bound data/iteration sizes and determine the shape of the necessary hardware. Or you could build a theorem-proving language whose constructs enable extraction of these size bounds, and have humans write the proof by applying tactics.

What I really think we should be doing is writing specifications (a.k.a. type signatures, per Curry-Howard) which don't impose a particular mode of execution on the program that fulfills them, but only completely specify its results (and effects, if in an effectful language) - and then searching for proofs of those theorems, with the search constrained by the desirable properties of the resulting programs/hardware designs.


SHard synthesizes hardware directly from Scheme programs. So, one can do it.

http://scheme2006.cs.uchicago.edu/05-saint-mleux.pdf


I'm a Haskell fan, so I'll bite: what do you use to program your FPGAs? VHDL or Verilog? Or do you have some other higher level language? I have not found VHDL or Verilog easy to use at all.


In addition to what yosefk has written: I really don't see the advantage of trying to fit the round peg of Haskell into the square hole of FPGA design - it even seems to add friction (see the bundle/unbundle stuff in the examples).

Besides, if you thought getting C programmers to use a pure functional language like Haskell was difficult, have fun convincing an FPGA designer (who, for historical reasons, typically comes from an electrical engineering background) to do this. Their UART example [1] uses monads, monad transformers, and lenses...

[1] http://hackage.haskell.org/package/clash-prelude-0.10.10/doc...



