How easy is it to take code written for an FPGA and transfer it to run on an ASIC (e.g. code written in Verilog)? Are they so different it would need to be rewritten from scratch, or could it be done with the equivalent of a recompile? Do people often use FPGAs as a stepping stone to a custom ASIC design? Would learning how to program FPGAs be of any use when it comes to designing ASICs?
I'm interested in getting into FPGAs, but I can't help but worry they are a bit of a dead end as any sufficiently popular use case will eventually be replaced by an ASIC.
FPGAs are definitely not a dead end. By virtue of being reconfigurable, they will never be obsolete as long as ASICs are a thing. Now, some whole new technology will come along eventually, supplanting present day ASICs and FPGAs... but until then...
Program as a term means something different with chip design than it does with software. An analogy is that to program an FPGA is to paint a canvas. The source code in chip design is instructions for how the canvas should be painted.
Another analogy would be to program an FPGA is to cook a meal. The source code is the recipe for the meal. But one doesn't run a recipe on a meal.
These analogies break down because a painting and a meal are passive... they don't do anything by themselves, or react to the outside world.
So another analogy would be building a car. Here "programming" and "building" are the analogous terms. The instructions for the assembly line to construct the car is the source code. Once built, the car responds to stimulus (steering wheel, pedals) and does stuff. Same with the FPGA. It has inputs, it responds and does stuff. If you painted a picture of a CPU in your FPGA, it could run software.
There is tremendous overlap in designing for an FPGA and an ASIC. Most ASICs start life as an FPGA simply to prototype an idea.
The difference between an ASIC and an FPGA, at a high level from a design perspective, is the difference between writing with a pen vs a pencil. Learning to write is equally applicable.
It's probably not helpful to think about right now, but an FPGA is actually an ASIC.
The LUT-based architecture is starting to run out of steam. I think a CGRA (coarse-grained reconfigurable array) sort of architecture is the future, but programmable logic startups will likely fail, and there's approximately a zero percent chance that Xilinx or Altera would try anything that new.
Problem is you still generally need simple logic to combine some coarse-grained blocks. Also, we already have a lot of FPGAs that include adders, RAM, DSP cores, and other coarse-grained devices.
Honestly, a LUT can be a pretty efficient structure for what it does. The biggest advantage of coarse-grained structures is that they are much faster, since their internal construction can use optimal routing.
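To make the "efficient for what it does" point concrete: a k-input LUT is just a 2^k-entry truth table, so a single piece of configured hardware can implement any boolean function of its inputs. A minimal Python sketch of that idea (my own toy model, not how any vendor's tools represent LUTs):

```python
# Toy model of a k-input LUT: the "configuration bitstream" is just a
# 2^k-entry truth table. Once configured, the same structure can
# implement ANY boolean function of its k inputs.

def make_lut(truth_table):
    """Return a function implementing the boolean function encoded by
    truth_table, a list of 2^k output bits indexed by the input bits."""
    def lut(*inputs):
        # Pack the input bits into a table index, LSB first.
        index = sum(bit << i for i, bit in enumerate(inputs))
        return truth_table[index]
    return lut

# Configure a 2-input LUT as XOR: outputs for inputs 00, 10, 01, 11.
xor = make_lut([0, 1, 1, 0])
assert [xor(a, b) for a, b in [(0, 0), (1, 0), (0, 1), (1, 1)]] == [0, 1, 1, 0]

# The same "hardware", reconfigured, becomes NAND.
nand = make_lut([1, 1, 1, 0])
assert nand(1, 1) == 0 and nand(0, 1) == 1
```

Reconfiguring is just swapping the table contents, which is why the LUT itself is cheap; the expensive part, as discussed below, is wiring LUTs to each other.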
The biggest issue with FPGAs is the programmable routing/connections. Ideally the LUTs would form a complete graph. However, the number of wires then grows as n(n-1)/2, where n is the number of LUTs. So instead the structure is more hierarchical. Still, the majority of the silicon on an FPGA is just used for routing.
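To put rough numbers on that growth (illustrative arithmetic only, not figures for any real device): a complete graph on n nodes has n(n-1)/2 edges, so full point-to-point routing blows up quadratically.

```python
# Wires needed to connect every LUT output directly to every other LUT
# (a complete graph) grow quadratically -- which is why real FPGAs use
# hierarchical/segmented routing instead of a full crossbar.

def full_crossbar_wires(n):
    """Edges in a complete graph on n nodes: n(n-1)/2."""
    return n * (n - 1) // 2

for n in [10, 100, 1_000, 100_000]:
    print(f"{n:>7} LUTs -> {full_crossbar_wires(n):>13,} point-to-point wires")
```

Even a 100k-LUT device, modest by modern standards, would need about five billion dedicated wires under this scheme, which is clearly hopeless; hence the hierarchical interconnect.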
However, I think an array of ALU's actually could be quite useful for some applications over an FPGA.
I think GPUs, FPGAs and scalar cores will all mix into a single fabric. As you mentioned, FPGAs are getting dedicated hard blocks, GPUs are getting scalar cores and CPUs are getting LUTs.
> However, I think an array of ALU's actually could be quite useful for some applications over an FPGA.
Well, this depends on the underlying routing architecture of either system. However, you are right in general, since finer-grained logic means more things that need routing.
Nothing stops you from treating LUT outputs in groups like a coarse-grained system, though. FPGA manufacturers could make chips with a different routing topology that works really well for certain applications.
However, we could be making lots of devices that fit certain dataflow patterns better. Doing so would make the devices simpler and faster.
Routing is pretty important. It's just that current FPGAs are built with quite flexible interconnect.
If you want to see really limited programmable interconnect, look at some old PLDs that you program by blowing fuses.
I would say that most digital ASICs have their logic proven in FPGAs first. While automated tools exist to go from FPGA to ASIC, they are nowhere near as good as hand layout.
FPGAs are not going away, as they fill the niche where a CPU/GPU is not fast enough but the quantity of units needed does not justify the cost of making an ASIC. Cheap FPGAs also replace glue logic in smaller-run products.
Many different FPGAs exist for different tasks. Some include multiple ARM cores so your FPGA design doesn’t have to waste FPGA space on a processor. A few even have programmable analog sections.
I was under the impression that automated layout had gotten a lot better over the past few years. In particular, it works very differently from hand layout: it does best when you let it go whole hog at an entire chip, but it doesn't really compare well to hand layout on small, isolated sections.
It can be done with essentially a recompile as long as it's all digital, but the result won't be great. If you want to use area well, get high clock rates and power efficiency, you really need to spend a significant amount of effort on physical design.
Technically speaking, you have to worry about these things on FPGAs as well.
But FPGAs and ASICs are quite different, so you'll have to adjust the design for those differences. In fact, you may have to adjust the design when switching fabs, because of process differences and resulting standard cell library differences.