The Economics of ASICs: At What Point Does a Custom SoC Become Viable? (2019) (electronicdesign.com)
123 points by amelius on May 22, 2020 | 52 comments



This article is just an advert. I have worked on practically the same project as the second example with a different manufacturer and can tell you it is a lot more expensive than 5 million. This project was 90nm, 3 years ago, so comparable to 55nm nowadays, and minus the MCU, so there were no licensing costs. NRE of 20 million for design, development, and bringing to production. And I think it ran over budget, with only 1 silicon revision. With a final component cost of $1.20, it ends up cheap compared to the cost of the individual components it replaces, but the up-front costs are much larger than they are suggesting. They only list mask sets here, without mentioning 12+ months of engineering time or design tool licenses, which can cost upwards of $1 million per seat. Then, on top of that, there's the PDK from the manufacturer. I would love to see the full cost breakdown and see where it becomes viable.


I work at a company that does these sorts of custom ASICs for a variety of customers all the time. $5M for the second example is a bit low but in the right ballpark. Even allowing for 2 mask sets (mistakes happen) and $1-2M for off-the-shelf Bluetooth IP, it would be difficult to see this going much over that.

I'd estimate a team size peaking at < 10 over 12 months to go from initial specification discussion to GDS-II. Another $1-1.5M for qual and support through production plus silicon validation.

Put that all together and <$7M sounds to me like a good estimate and I've seen a lot of more complex projects come back for less.
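
A rough roll-up of that estimate (my ballpark split, not a quote; the design-team line in particular is an assumption chosen so the items land near the total):

    # All figures in $M, an illustrative split of the <$7M estimate above
    costs = {
        "design team (<10 engineers, ~12 months, fully loaded)": 2.5,
        "licensed off-the-shelf Bluetooth IP": 1.5,
        "mask sets (allowing one respin)": 1.0,
        "qual, silicon validation, production support": 1.5,
    }
    print(f"total = ${sum(costs.values()):.1f}M")  # -> $6.5M, i.e. < $7M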

Not wanting to be a gobshite here but how did you manage to spend >$20M for something like this? It sounds like you were being seriously ripped off if you were paying $1M per engineer for design tools - you might want to push your tool vendors on that next time negotiations come round.


The second example is a medical IoT telemetry device. If the parent's project had antenna-in-package, I totally believe $20M for the project, especially as it includes RF validation and the bringing-it-to-market part. $1M per engineer is a lot, especially for typical VLSI CAD stuff, but possible for some "design" tools. I've used specialized field solvers that cost $500k per engineer. Not sure that AiP in 2017 would've required that, but maybe.


I agree, there probably was some wastage on our project, but I'm only a lowly engineer so cannot really comment on high-level decisions. There were a lot of engineers working on the project, wages tend to drive prices up quickly, and all the IP was developed from scratch. I don't work there anymore; I moved to a smaller, more nimble outfit. Thanks for your insight.


An anecdote: Allwinner, the company that once made MP3 and MP4 player chips, made the once thunderously popular A10 SoC with just $1,000,000 USD (a cap set as a condition by the investor).

Though, I admit, making any IC for under $1M requires really knowing what you are doing. It's not something for a team of green engineers, or for engineers whose only experience is doing cookie-cutter SoCs from hard macros.


Well, if you steal all the CAD tools, design is pretty cheap ...

EDA CAD tools are infuriatingly expensive. You can get a chip run on an older node (180nm or 250nm) for <$100K. Good luck finding a set of EDA CAD tools for under that per year.


I used xcircuit, magic and irsim in university to make plenty of ASICs at MOSIS. Many universities still do. One can even get the NCSU PDKs for MOSIS for free. Totally compatible with 0.18u: http://opencircuitdesign.com/magic/


Magic is sorta okay as long as you stick to low-performance digital or very small analog transistor counts.

The problem is that most of the "interesting" circuits for old tech nodes have a significant analog or RF piece--generally either ultra-low power (nanoamps), higher voltages (15V+), or higher frequencies (2GHz+).

Both the simulation models and the tools to extract parasitics are extremely weak (or non-existent) on the open source front for analog and RF circuitry.


Does this apply to the formerly commercial Bravo3VLSI from the '80s, now available for free and open-sourced as Electric at https://www.staticfreesoft.com, too?


Yes, unfortunately.

Anything based around Magic has to run on an extremely simplified set of rules so that the tiling and stitching mechanisms it uses don't get upset.

DRC and extraction are hard. They require line-sweep geometry engines of fairly significant sophistication. Extraction requires some notion of third dimension matching and/or analysis.

The problem is that Cadence will donate tools to practically any school but will take your firstborn if you're a company--and Cadence are one of the better companies in the EDA space. This cuts off any incentive for someone in academia to create a new VLSI tool.

Note the accepted papers at DAC 2020: https://www.dac.com/content/2020-dac-accepted-papers

Not a single mention of "DRC", "rule", or "extract." Even simulators are thin on the ground unless you include "quantum". You would think that extraction, DRC, RF simulation, etc. are all solved tasks, right? (I assure you that algorithms in these spaces that can exploit massive parallelism are quite rare and are very difficult to implement well) :(
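
For a sense of why: the textbook-naive version of even a single spacing rule is just an all-pairs distance check, something like the sketch below (using shapely purely for illustration; the rule value is made up). It is O(n^2) in the number of shapes, which is hopeless when a production layout has billions of polygons per layer--hence the need for sophisticated line-sweep engines and for algorithms that parallelize well.

    # Naive single-layer spacing check -- fine as a toy, useless at scale.
    from itertools import combinations
    from shapely.geometry import box

    MIN_SPACING = 0.28  # um, an illustrative metal-spacing rule

    # Rectangles on one layer as (xmin, ymin, xmax, ymax), in um
    shapes = [box(0, 0, 1, 5), box(1.2, 0, 2.2, 5), box(0, 5.1, 2.2, 6.1)]

    violations = [(a.bounds, b.bounds, a.distance(b))
                  for a, b in combinations(shapes, 2)
                  if 0 < a.distance(b) < MIN_SPACING]
    print(len(violations), "spacing violations")  # all-pairs: O(n^2)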

We should be living in the time of massive GPU and cloud acceleration of these tasks, and yet ... nothing.

This tells you what research is getting funded.

(Edit: Sorry, Largo, for some reason your comment downthread isn't getting a reply button...)

Edit: Magic, in this context, refers to the VLSI layout tool. Most open source EDA systems default to Magic as the thing that you use to draw/interpret physical geometry. This is good code reuse, but bad in that you inherit all of its limitations.


Thanks for the fast response. What do you mean by Magic in this context: are you referring to the program mentioned by the GP, or to 'magically' simplifying DRC etc. as offered by Electric?

edit: I wasn't even aware of https://www.dac.com/ -> one more fractal in the Tsundoku stack

edit: never mind :-)


The basic point that older nodes are still viable and exponentially cheaper than newer ones (and getting cheaper over time) seems very much correct. It's not just the physical masks either; the design rules get simpler and more openly available the further back you go, and entirely open toolchains with negligible licensing costs start becoming viable as well. In fact, this might end up being the 'real' Moore's law, not so much the availability of more and more advanced nodes at the leading edge.


Are there any resources I could read or watch to understand why it's so expensive to produce mask sets vs compiling code for an FPGA?


Sure, they're expensive to design and make, but with appropriate volume or margin you'll get the lowest-power device possible for your application. General SoCs are just that: general. Moore and Dennard stopped helping, so they're missing what you need and have what you don't, requiring a SW stack that ends up working around it all.

Example from HPC 10-15 years ago: Opteron had a fast start, but gen2 was really late. Sandy Bridge Xeon was Intel's first with PCIe 3.0, and it arrived 6-9 months minimum after adapter cards supported it. Both products caused headaches for interconnect and system builders because they couldn't ship systems around parts that didn't exist yet.

Apple can of course control the execution of their A series, but they can also be first to AirPods and a smartwatch that integrates Siri. This from a company that famously waited for a market to look like it was going to take off before getting in, letting other companies shake out the issues with merchant silicon before they'd integrate. Now they don't have to wait.


Does anyone know if anything ever came of the 'Bespoke Processor' [1] approach? Essentially you run your application on a simulated MSP430, and it prunes out all the gates you don't use. Seemed promising, but then I don't actually know this field.

[1] http://people.ece.umn.edu/~luo../jsartori/papers/isca17.pdf


Sounds like a horrible idea if your application ever needs to receive an update. Which is basically 100% of applications that aren't abandoned.


Ship new processor with update?


Man, getting software updates installed in a timely manner is already a problem for large parts of industry and government/administration.

No need to make it even harder by requiring a hardware update at the same time.

Don't even get me started about the incentives for the vendor: way easier to sweep a security bug under the rug than roll out an update that requires a new processor to be manufactured and rolled out.

Or about the economical and ecological impact. What do you do with the old processors? Just throw them out because they were optimized for an old, insecure version of the application?


> What do you do with the old processors? Just throw them out because they were optimized for an old, insecure version of the application?

We do the same shit with tablets and phones and no one bats an eye... instead of updating an existing phone and just charging for an update, we'd rather release a new $1000 phone every year and just 'recycle' the previous one.


Japanese companies seem to have a boner for custom ASICs. I never understood that. For example Roland uses custom chips [1] for their synthesizers and effects, where they could use almost any DSP on the market.

Likewise printer manufacturers go for a huge SoC where they could get an application processor that suits their needs and couple it with a specialized printer chip. [2]

What's the deal?

[1] https://www.flickr.com/photos/psychlist1972/37188679832

[2] https://news.synopsys.com/2020-05-21-Fuji-Xerox-Adopts-Synop...


Because it's cheaper.

An off-the-shelf AP is often going to have a lot of functionality you are not going to use but are still paying for. It's going to consume power driving signals back and forth to the specialised printer chip. It's going to cost money to control inventory and deal with two chips when one could do the job. It may be end-of-lifed at an awkward time, forcing you to redesign or stockpile inventory.

And it really isn't that much more expensive to develop one big SoC if you are already developing a specialised chip to go with the AP.


I suspect not all ASICs are "true" ASICs (by which I mean fully custom chips); sometimes they're just custom versions of existing chips, not built from zero, with only a custom marking on the package.

(someone with more experience feel free to give more detail or to correct me)


The article makes several huge leaps of faith, and the intended audience is definitely not the typical Y Combinator crowd. To clarify:

1. All his examples are large companies (e.g. Tesla and Amazon) with internal demand for components where a custom ASIC could provide some cost efficiencies. For a startup to compete for such business, they would have to be proven, well capitalized, and/or have unique IP. -- It is hard for a startup to land business with these giants. --

2. He ignores other SoCs as a potential alternative. The IoT segment is full of standard parts, and it would be a challenge for a chip design house to compete unless the company contracting the development has high enough volumes to offset the NRE cost. -- see point #1 above --

3. SoC is generally bad business for startups. There is a lot of IP that needs to be aggregated. Your value-add is small unless you have something fundamental of your own. And there is high risk in integrating someone else's IP, as it is not in your direct control. If you need cutting-edge IP like DDR6 memory controllers, SERDES interfaces, or PCIe Gen4, it's usually out of reach unless your startup has mid-7-digit bank accounts. -- Sourcing IP can be expensive and time-consuming. License terms are typically not favourable to small startups and require "large" upfront payments. --

4. The examples cited have long product life cycles. Most startups excel at greenfield or new-market opportunities, where the risk is higher and life cycles are much shorter.

5. I would not do a design in nodes larger than 45nm today. The PPA (Performance, power and area) differences are small compared to your engineering and EDA tools cost.

Basically, the article is an advertorial, self-serving and not appropriate advice for startups to follow. I agree with noone10101's comments below.


Stupid question: has the pandemic made creating custom (or any) HW in places like Shenzhen even more economical/cheaper?

If I have a product definition, would I be able to get prototypes for, say, $1000, assuming it's a fairly (seemingly) simple product?

Edit:

As one will probably ask "define simple":

- I want what is in effect a body cam, but with multiple cameras providing a 360-degree view, ideally affixed to the "button" location on a baseball cap. Or a small pole on a backpack.

I actually thought of this years before GoPro existed, but I couldn't convince any of my HW friends in Silicon Valley of its interest...

There are many iterations on this; for example, one iteration I would like is effectively what has been developed into "Google hike view/walk view".

Lidar on a backpack for mapping out your walk/hike in 3D...

Regardless, the most simplistic being a multi-cam 360 camera on a hat...

Could this be something done more cheaply in the pandemic climate?


You could attach a pair of 180 degree camera modules to a single board computer and have a proof of concept prototype with no custom software to make it easy to use, but if you want a polished custom solution with compact hardware, battery power, and associated software you’re looking at six figures of NRE minimum. If you want a production ready product, it’s into the millions.

How cheap do you think engineers will work? Even if they could pull it off with 1000 man hours, do you really expect them to charge $1/hr? Your estimates are off by many orders of magnitude.

But what you’re describing (Minus LIDAR) is available off the shelf from multiple action camera vendors. We’ve been using 360 cameras in the action sport world for years now.


A cool thing would be attaching one 180-degree camera to the front and another to the back of a necklace or collar, with some smart image-processing algorithm to stitch 360 degrees out of that. Paired with large storage, a battery, and Wi-Fi upload in the background, this might be the life-logging solution some people crave.


FITT360


> would i be able to get prototypes for say, $1000

This takes $500K+ of investment. You need expensive tooling to develop the design and license any commercial IP. You have to pay the engineers to do the design and preproduction verification work. You have to pay the fab to make the masks, make the chips, and test them. Then you have to iterate to deal with problems.


Go buy some USB hubs, some webcams, and the cheapest fake backpack... and try out a Ricoh Theta first if you're going to burn that $1000 anyway.


What do the current crop of 360-degree view cameras lack that you really want?

https://www.techradar.com/news/best-360-degree-camera

You could take the raw video from these and run it through a SLAM algorithm (simultaneous localization and mapping) to build a 3-D map of the environment.
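
As a very rough sketch of what the front end of such a pipeline looks like -- just frame-to-frame ORB feature matching with OpenCV, with a placeholder input filename; a real SLAM system adds pose estimation, mapping, and loop closure on top of this:

    # Frame-to-frame feature matching: only the first stage of a SLAM
    # pipeline. "walk_360.mp4" is a placeholder filename.
    import cv2

    cap = cv2.VideoCapture("walk_360.mp4")
    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    prev_des = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        if prev_des is not None and des is not None:
            matches = matcher.match(prev_des, des)
            # A real pipeline feeds these correspondences into pose
            # estimation (e.g. cv2.recoverPose) and a 3-D landmark map.
        prev_des = des
    cap.release()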

Those cameras may not be as small as what you envision, but you'll have trouble building anything smaller yourself.

If you want to carry around a lidar unit, I have some bad news for you regarding cost and weight...


Thank you for this.

Like I said, I wanted one of these years ago and these things didn't exist then...

But I think I'll buy one now.


> If i have a product definition, would i be able to get prototypes for say, $1000 assuming its a fairly (seemingly) simple product?

No.

The simplest microcontroller prototypes start from $6k-$10k if you want anything beyond a Kickstarter project.

I once worked at an LED lighting company; one of the simplest analog ICs went to $3M after cost overruns, up from $1.2M, after a few failed tapeouts.

Making any IC for under $1M requires you to really know what you are doing. It's not something a startup with green, inexperienced engineers can do.


Or you could go with an FPGA for low volumes. Like the Lattice CrossLink-NX. It can get video input and even aggregate multiple video feeds into one as output for further processing.


And you can do an FPGA to ASIC conversion which is probably the cheapest “custom” ASIC option out there.


Don’t you need to have a minimum (substantial) amount to be made before a fab will do anything for you?


There are companies that specialize in FPGA to ASIC conversion, and I think it works by sharing masks with other custom ASIC orders. So I think it’s feasible at thousands of units. It’s something like 50% lower power and cost per unit than FPGA and a little higher speed as well. Not super dramatic, but is a pretty easy path. I think $10,000 to $35,000 non-recurring costs plus the cost of the chips themselves. Potentially in the Kickstarter range.

Full custom ASICs get much higher performance, lower power, lower per unit cost but have like a minimum of $500,000 NRE costs.
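
Back-of-envelope on the conversion numbers above, with an assumed FPGA unit price (mine, not a quote):

    nre_conversion = 25_000  # $10k-$35k conversion NRE, take the middle
    fpga_unit_cost = 20.0    # assumed FPGA price per unit
    asic_unit_cost = 10.0    # "~50% lower cost per unit" after conversion

    break_even = nre_conversion / (fpga_unit_cost - asic_unit_cost)
    print(f"conversion pays off after ~{break_even:,.0f} units")  # ~2,500

which lines up with "feasible at thousands of units."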


I did a business plan for this and tried to raise money almost 20 years ago. :)


We are building such products, exactly.

Http://www.mosaic51.com


You've got no idea what simple or complex means.

If you had asked for 3 or 4 cameras with 5mm lenses on 25x25mm base boards, connected by MIPI ribbon cables under your collar to a few Raspberry Pis and a big battery in a backpack, you could do that today with off-the-shelf hardware for half your budget. They won't be synchronized; you can do that after the fact in your video editor. Getting someone to hold your hand and configure it for you will still cost the rest of your budget, but it's job done.

Or buy an off-the-shelf 360 degree action camera with included mounting pole and just use that.

Miniaturizing it into an ASIC and custom optics? That's man-years of effort and hundreds of thousands of dollars.

And both DARPA and the private sector have put millions to billions into miniaturizing LIDAR and reducing its power requirements, and it's still humanly impossible at the moment to fit it into the button of a baseball cap.

It's not even necessary to ask whether the pandemic makes these tasks a few percent cheaper. If you have an idea, get some basic information about the problem domain first.


Please don't be a jerk in HN comments. If you know more than others, it's great to provide correct information, but please don't put other people down.

https://news.ycombinator.com/newsguidelines.html


I typically agree with your calls on being a jerk vs. not, but LeifCarrotson is being pretty factual here.

Hardware is incredibly expensive and time consuming to create from scratch in ways programmers usually don't appreciate. If the GP is asking those questions after thinking about the problem for years they do clearly need to be told they have no idea what the problem is.

In the hardware business we are constantly approached by people who have little more than a vague idea and a hobby-money budget. I've seen what happens when you encourage them, and it usually involves fucking up their retirement if you let them get too far.

Much more humane to see it for what it is and shoot the idea down as fast as possible, or at least help them understand they probably need VC level funding to get anywhere.

If someone has been simmering on an idea for years and can't even formulate a clear explanation of the problem he wants to solve (so that people can explain what's required past "it's really expensive, don't do it"), it's clearly not sinking in and there's no product there, just a guy who wants to play with technology and waste manufacturers' time while doing it.


I certainly wouldn't argue against any of that. But it's easy to make such a case without putting someone else down. In fairness, though, the bits I was objecting to in LeifCarrotson's comment were pretty borderline.

There's a long long tail of weirdness and variation in any large population (such as an internet forum like HN), so it's best to give people the benefit of the doubt. It can easily seem like someone else is doing one thing when in reality they're doing another. I feel like half of the moderation we have to do in comments boils down to this.


Not disagreeing, just adding more info:

I think a lot of people on the manufacturing side react in a confrontational way because it's disrespectful to a manufacturer when someone wants to use the infrastructure they've sunk years into as a playground with no hope of actually making it to production.

If a manufacturer takes on 10 projects in a year and none of them do anything but endless prototyping they will quickly go out of business. Unlike in software where you make your money in the design phase.

If someone has been trying to make a hardware project happen for years and they still don't understand this dynamic, then there's not just an understanding issue, there's a track record of being a drain on other people's resources, or at least a willingness to be one. Establishing a relationship with everyone needed to make hardware is a much higher drain on partner resources than a consumer calling a help line or a programmer reading some API documents. Just preparing a quotation can take a week or more for a non-trivial hardware project, and prototyping is generally not that profitable either, if at all.

An analogy: If someone came in here and said they wanted to hire CS grad students to work on Blockchain (but no clear idea of a specific problem) during the shutdown and was hoping they could pay them half of what they make on their already lower than market rate research stipend but still take advantage of the research topics they are involved in, I think expressing a certain amount of "hey, those are real people you are hoping to screw over" would be appropriate.


> humanly impossible at the moment to fit it into the button of a baseball cap

It's actually pretty close, considering the size of this LiDAR module: https://www.youtube.com/watch?time_continue=93&v=xz6CExnGw9w


What is the largest node/process that is currently available? Does that large size mean the lowest cost for development? I assume the trade is that fewer chips are produced from a wafer, so lower development costs are related to higher unit costs.


> I assume the trade is that fewer chips are produced from a wafer

Not really true once you scale down to a minimum viable size for your application, since you'll need plenty of area regardless for bonding pads, vias, interconnects etc. Nowadays the partial failure of Dennard scaling means that smaller chips may also be a lot harder to cool; it may be more advantageous to try and spread out logic in a way that might be a bit "wasteful" of area, if this makes the thermals more manageable. The real tradeoff wrt. very coarse nodes is performance.
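
To put rough numbers on that: the standard dies-per-wafer approximation only cares about die area, not node, so once pads and routing set the area floor, a coarser node costs you performance rather than die count. A quick sketch (edge exclusion and scribe lanes ignored; the areas are illustrative):

    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=200):
        d = wafer_diameter_mm
        return int(math.pi * d**2 / (4 * die_area_mm2)
                   - math.pi * d / math.sqrt(2 * die_area_mm2))

    for area in (25, 10, 4):  # mm^2; a pad ring can easily set the floor
        print(f"{area} mm^2 -> ~{dies_per_wafer(area)} dies per 200 mm wafer")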


Naive q: does RISC-V potentially impact any of this?


Yes, but I think it depends on where the company is coming from already.

The IP licensing cost goes to zero when using RISC-V, and if the company already has a design force set up for MCU dev, it could be a big deal (if RISC-V works out its architectural issues).

Comparing this to ASICs still has to take into account what all of the other top-level comments in this thread have discussed. It still doesn't answer the question of MCU vs ASIC (vs. FPGA), but it does add a significant cost reduction to the mix if an MCU is in the running.


In a way; if you're doing a custom ASIC at an older process node as discussed in the article, there's basically no real reason not to use RISC-V (perhaps RV32E, when applicable, for minimized area requirements) for your logic. (16-bit and 8-bit logic might also be viable but is really quite constrained these days.) But RV is only one part of what might become a thriving open-hardware-blocks ecosystem over time, which would be especially compelling for these sorts of designs.


I've been watching RISC-V for the last year, and for me it represents a coming tipping point that I've seen in other tech areas in the past. Having an open standard that people are rallying around is a game changer. I don't know how it affects ASICs specifically, but it seems to be helping open up FPGA development and tooling, making it easier for people outside big established companies to work. I see chip design getting easier (or at least more approachable) in the next 5 years, and I think it will only accelerate from there.

Mind you, I may have rose colored glasses on. I've been dreaming of JITs all the way down to the hardware layer ever since I first heard of an FPGA that could flash itself in a single clock cycle over a decade ago...


tl;dr: 1 million units


$2M component costs, actually
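
Either way, the underlying arithmetic is a break-even of NRE against per-unit BOM savings (the numbers below are illustrative, not the article's):

    nre = 5_000_000      # up-front design + masks + qual, $
    saving_per_unit = 5  # off-the-shelf BOM minus custom SoC cost, $/unit

    print("break-even at", nre // saving_per_unit, "units")  # 1,000,000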



