> It just seems so obvious today that you can create gates, you can create macros, you can create complex designs, and you can define the interface at every level so you can hook them up and they just work. That idea came out of Conway and the early pioneers of VLSI.

And you can see the opposite of this in many early microprocessor designs, like the (original, NMOS) 6502 and Z80. There are a lot of highly idiosyncratic designs for gates, heavily customized for the physical and electrical context they're used in - and I won't deny that they're often very clever and space-efficient, but they were also extraordinarily time-intensive to design and weren't reusable. That approach made some complex designs possible within the limitations of the fabrication technology of the time, but it was never going to scale to larger designs.

One great example of this is this bit of 6502 overflow logic:

http://www.righto.com/2013/01/a-small-part-of-6502-chip-expl...
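
For anyone who'd rather read the function than the silicon: here's a minimal Python sketch of what that overflow circuit computes (the function names are mine, not from the article). Roughly, the V flag falls out of the two operands' top bits plus the carry out of bit 6:

    def overflow_flag(a: int, b: int, carry_in: int = 0) -> bool:
        """Signed-overflow flag (V) for the 8-bit addition a + b + carry_in."""
        a7 = (a >> 7) & 1                                      # sign bit of A
        b7 = (b >> 7) & 1                                      # sign bit of B
        c6 = (((a & 0x7F) + (b & 0x7F) + carry_in) >> 7) & 1   # carry out of bit 6
        # Overflow iff both sign bits are 0 and a carry spills into bit 7,
        # or both sign bits are 1 and no carry reaches bit 7.
        return bool(((1 - a7) & (1 - b7) & c6) | (a7 & b7 & (1 - c6)))

    # Sanity check against the usual full-width identity for signed overflow.
    def overflow_ref(a: int, b: int, carry_in: int = 0) -> bool:
        r = (a + b + carry_in) & 0xFF
        return ((a ^ r) & (b ^ r) & 0x80) != 0

    assert all(overflow_flag(a, b, c) == overflow_ref(a, b, c)
               for a in range(256) for b in range(256) for c in (0, 1))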




> There are a lot of highly idiosyncratic designs for gates, heavily customized for the physical and electrical context they're used in - and I won't deny that they're often very clever and space-efficient, but they were also extraordinarily time-intensive to design and weren't reusable.

Is this optimization now something that hardware design tools do automatically?


They optimize on a different level. Instead of trying to optimize the arrangement of individual transistors, you start with a set of standard cells which contain optimized transistor-level implementations of individual gates, and have your design tools optimize the placement and routing of those cells within a grid system.
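
To make that division of labor concrete, here's a toy Python sketch (the cell names, widths, and delays are invented, not from any real library): the library supplies pre-characterized cells, and the tool only decides which cells to instantiate and where they sit on the placement grid.

    from dataclasses import dataclass

    @dataclass
    class Cell:
        name: str        # e.g. a 2-input NAND at drive strength X1
        width: int       # width in placement sites (height is fixed per library)
        delay_ps: float  # pre-characterized delay from the library

    LIBRARY = {
        "NAND2_X1": Cell("NAND2_X1", width=2, delay_ps=30.0),
        "INV_X1":   Cell("INV_X1",   width=1, delay_ps=15.0),
        "DFF_X1":   Cell("DFF_X1",   width=5, delay_ps=80.0),
    }

    # A netlist instance just references a library cell; "placement" assigns
    # each instance a legal, non-overlapping position on the site grid.
    netlist = ["DFF_X1", "NAND2_X1", "INV_X1", "NAND2_X1", "DFF_X1"]

    def place_in_row(instances):
        """Naive left-to-right legalization onto a single row of sites."""
        x, placement = 0, []
        for inst in instances:
            placement.append((inst, x))
            x += LIBRARY[inst].width
        return placement

    print(place_in_row(netlist))  # [('DFF_X1', 0), ('NAND2_X1', 5), ...]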


Does that mean there's an opportunity for increasing performance by bringing collections of gates into scope for optimization? Or does that not actually let you decrease transistors very much?


There is some, but most of that actually gets pulled into the standard cell libraries (the gate libraries), which are very big collections of primitives. Most of them include a lot more than just the standard gates you'd think of: many 3-input gates, adder cells, multiplexer cells, flip-flops of all kinds, and all sorts of other basic building blocks, each micro-optimized. Cells tend to use a standard height of 7 or 9 "tracks", where a track is set by the pitch of the lowest metal routing layer, and the optimization comes from reducing the width of each cell. Libraries also provide gates in different sizes/drive strengths, so you can use the weak, small version on paths that are not critical, and the bigger, faster versions on critical paths.
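
The drive-strength part of that can be sketched in a few lines of Python (the cell names and the linear delay model are made up, not from any real library): given the load a gate has to drive and the slack available on that path, pick the smallest variant that still meets timing.

    VARIANTS = [
        # (name, relative area, intrinsic delay in ps, ps per fF of load)
        ("NAND2_X1", 1.0, 20.0, 4.0),   # weak and small
        ("NAND2_X2", 1.6, 18.0, 2.0),
        ("NAND2_X4", 2.8, 17.0, 1.0),   # strong and big, for critical paths
    ]

    def pick_variant(load_ff: float, slack_ps: float) -> str:
        """Return the smallest variant whose delay fits the available slack."""
        for name, _area, intrinsic, k in VARIANTS:
            if intrinsic + k * load_ff <= slack_ps:
                return name
        # Nothing fits: take the strongest and let timing be fixed elsewhere.
        return VARIANTS[-1][0]

    print(pick_variant(load_ff=5.0, slack_ps=45.0))  # relaxed path  -> NAND2_X1
    print(pick_variant(load_ff=5.0, slack_ps=25.0))  # critical path -> NAND2_X4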


There is - but given the size of the design space, that's mostly handled through the gate library: synthesis and layout pick cells from that library and place them, often putting connected gates next to each other. You could then merge gates in some smart way to save a few percent in area, but chances are you wouldn't gain much, because you'd have to shuffle all the other gates in that row a bit, and that would mess with timing elsewhere.

Also, routing (the wires between gates) constrains how close many gates can be packed, and for everything but regular arrays of gates there may be little point.
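
A toy illustration of why merging isn't free (cell names and widths invented): combining two adjacent cells shrinks the row a little, but every cell to the right shifts, which changes wire lengths (and therefore timing) on paths you weren't trying to touch.

    row = [("AND2", 3), ("OR2", 3), ("INV", 1), ("DFF", 5), ("NAND2", 2)]

    def positions(cells):
        """Pack cells left to right and report each cell's starting site."""
        out, x = [], 0
        for name, width in cells:
            out.append((name, x))
            x += width
        return out

    before = positions(row)
    # Merge the first two gates into a hypothetical combined cell of width 4.
    after = positions([("MERGED", 4)] + row[2:])

    print(before)  # INV at site 6, DFF at 7, NAND2 at 12
    print(after)   # the same cells now sit two sites further left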



