Goodbye, Motherboard. Hello, Silicon-Interconnect Fabric (ieee.org)
345 points by craigjb on Sept 25, 2019 | 85 comments



It's been fun to see Dr. Subu present this concept and prototypes at several conferences, and the level of integration possible is absolutely insane. I think the industry is definitely moving toward chiplets, such as the latest AMD release.

I definitely think we will see more chiplets and more standardization of the interfaces between chiplets. The focus will be on how to minimize energy per bit transferred (a big topic in Subu's talks) and how to minimize the die area used for inter-chiplet communication. In monolithic silicon, you don't have to think about that die area, since your parallel wires between sections might just need a register or two along the way. With chiplets, you typically can't run wires at that density yet, so you still have some serialization/deserialization hardware. But since it's not crossing multiple high-inductance solder balls and PCB traces, you can get away with less. Hopefully you can also get away without area-intensive resynchronization, PLLs, etc.
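
As a very rough back-of-envelope sketch of why energy per bit dominates the discussion (the pJ/bit figures and the 100 GB/s link below are my own illustrative assumptions, not numbers from the talks):

    # Rough energy-per-bit comparison for moving data between chips.
    # All constants here are illustrative assumptions, not measured values.
    PJ = 1e-12  # joules per picojoule

    links = {
        "PCB SerDes link":                  10.0,  # assumed pJ/bit
        "Short-reach organic package link":  2.0,  # assumed pJ/bit
        "Fine-pitch silicon interconnect":   0.3,  # assumed pJ/bit
    }

    bandwidth_gbytes = 100  # assume a 100 GB/s chip-to-chip link
    bits_per_second = bandwidth_gbytes * 1e9 * 8

    for name, e_pj in links.items():
        watts = bits_per_second * e_pj * PJ
        print(f"{name:35s} {e_pj:5.1f} pJ/bit -> {watts:6.2f} W at {bandwidth_gbytes} GB/s")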

I think it will definitely be a while before this kind of integration is used outside of niche cases though. The costs are just insane. You have to pre-test all manufactured chiplets before integration, and that test engineering is nothing to sneeze at. If you don't, then you have all kinds of commercial issues about who is liable for the $500k prototype that one bad chip broke.

On the bright side, I see the chiplet approach benefitting other integration technologies. For example, wafer level and panel level embedded packaging technologies can be used for 1-2um interconnects now. You won't get a wafer sized system out of it with any kind of yield, but it's probably the direction mobile chips and wearables will go.

Anyway, disorganized info-dump over.


I agree this looks promising, though I'm not an expert in this field.

But the title is a bit, well, overpromising or broad. I don't think we'll replace traditional motherboards anytime soon (except maybe in smartphones?). Rather, it will be incremental progress.

- first, SoCs will be replaced with chiplets

- then we'll start seeing more and more stuff being integrated on this wafer.

- say, instead of a server motherboard with multiple sockets, have all the CPU chiplets on the same wafer and enjoy much better bandwidth than you get with a PCB

- integrate DRAM on the wafer. This will be painful as we're used to being able to simply add DIMMs, but the upside is massively higher bandwidth.

The motherboard PCB per se will live for a long time still, if nothing else then as the place to mount all the external connectors (network, display, PCIe, USB, power, whatnot).


> integrate DRAM on the wafer. This will be painful as we're used to being able to simply add DIMM's, but the upside is massively higher bandwidth.

One way I imagine this working out is that, instead of just replacing the plastic motherboard with a silicon motherboard, you eventually do away with a single monolithic motherboard entirely. Instead, you have "compute blocks" (made up of chiplets bonded to a silicon chip, or conventional chips on a conventional circuit board) that connect with each other via copper or fiber-optic point-to-point communication cables, and you can just wire them together arbitrarily to build a complete computer. Like, you might have a couple blocks that house CPUs, one or two that have memory controllers and DRAM, and maybe one with a PCI bus so you can connect peripherals, and you can connect them all in a ring bus. You could house these blocks in a case and call it a server, or connect a lot more blocks and call it a cluster.

The main advantage of such a setup is that you don't have a single component (the motherboard) that determines how much memory, how many processors, or what sort of peripherals you can have.
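
A toy sketch of that ring-of-blocks idea, just to make the topology concrete (the block names and the hop-count model are mine, purely illustrative):

    # Hypothetical "compute blocks" wired in a ring; a message between two
    # blocks travels whichever way around is shorter. Purely illustrative.
    blocks = ["cpu0", "cpu1", "dram0", "dram1", "pci0", "nic0"]

    def ring_hops(src, dst):
        i, j = blocks.index(src), blocks.index(dst)
        d = abs(i - j)
        return min(d, len(blocks) - d)  # go the shorter way around the ring

    for a, b in [("cpu0", "dram0"), ("cpu1", "nic0"), ("pci0", "dram1")]:
        print(f"{a} -> {b}: {ring_hops(a, b)} hop(s)")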


This becomes especially interesting if you imagine these components becoming smart enough to support high(er)-level atomic operations and some form of access control, so you could have shared resources between two subsystems.

Also, if all these components are reasonably smart and interconnected, it could become more common for the CPU to merely coordinate communication in many cases, so larger chunks of data could easily be handed around between components, with the processor only telling them what range of bytes to send where.
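
A toy sketch of that "CPU only coordinates" idea, in the spirit of DMA descriptors; the descriptor fields and device names are invented, not a real interface:

    # Hypothetical sketch: the CPU builds a small descriptor and hands it off;
    # the smart components then move the bytes between themselves.
    from dataclasses import dataclass

    @dataclass
    class TransferDescriptor:
        src_device: str   # e.g. "nvme0" (hypothetical name)
        src_offset: int
        dst_device: str   # e.g. "dram1" (hypothetical name)
        dst_offset: int
        length: int

    class SmartComponent:
        def __init__(self, size):
            self.mem = bytearray(size)

        def read(self, offset, length):
            return self.mem[offset:offset + length]

        def write(self, offset, data):
            self.mem[offset:offset + len(data)] = data

    def execute(desc, devices):
        # The "CPU" only issues the descriptor; the devices move the data.
        data = devices[desc.src_device].read(desc.src_offset, desc.length)
        devices[desc.dst_device].write(desc.dst_offset, data)

    devices = {"nvme0": SmartComponent(4096), "dram1": SmartComponent(4096)}
    devices["nvme0"].write(0, b"hello, fabric")
    execute(TransferDescriptor("nvme0", 0, "dram1", 128, 13), devices)
    print(devices["dram1"].read(128, 13))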


I've been playing a lot of TIS-100 lately, so this architecture sounds like a nightmare to develop on.


You know, I bet the TIS-100 wouldn't even be all that tricky given a) way more cells and b) a compiler to abstract a bunch of stuff.


I think the embedded wafer level or "panel" level packaging technologies are the mid-ground. These technologies don't use expensive silicon, and instead surround the die with cheaper epoxy. Then the interconnects are built on top of that, and can connect multiple die together. Yield and interconnect pitch are the big issues here though, and that's why I think you're right, that we will see SoCs or mobile systems first, not whole motherboards.

With that said, some of these technologies can have a layer of surface mount pads on top. So you have a substrate of epoxy with all your chips and interconnects embedded in it, and then surface mount parts on top. For example, passives, connectors, etc. It would look almost like a motherboard, but with all the chips inside. Of course, for cost and yield reasons, this will be for mobile devices only at first.


Say what now? The die is the silicon, right?


I didn’t phrase that well. I meant that the wafer-level and panel-level embedded technologies embed the silicon die inside cheaper epoxy, instead of building expensive silicon interconnect to integrate them on. They basically make a plastic wafer with a bunch of die in it. Then interconnect is built up on that.

Edit: the links below show solder balls. Today this technology is used for packaging, and has been used on chips in phones for years now. In the near future, we should be able to embed or surface mount passives and mechanical components, so maybe we don’t need the PCB.

https://www.semanticscholar.org/paper/3D-eWLB-%28embedded-wa...

https://www.semanticscholar.org/paper/Latest-material-techno...


We already pretty much do #1 & #4 today, it's called PoP[1]; take a look at an RPi3 and you can see the gap between the DRAM and SoC.

[1] https://en.wikipedia.org/wiki/Package_on_package


Package on Package has many downsides though:

- The interconnect pitch is huge, 0.3mm-0.4mm, while HBM memories have 1000s of I/Os (see the rough count after this list)

- The inductance of the solder balls and the impedance discontinuities in the path mean the logic below still has to have big energy-hungry I/O drivers

- If you want to stack more than one die, you need something expensive like through silicon vias (TSV)
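
To put those pitch numbers in perspective, here's a quick square-grid estimate of how many connections fit under a 5 mm x 5 mm die footprint; the microbump and copper-pillar pitches are assumptions on my part, only the 0.4 mm PoP figure comes from the list above:

    # How many I/O connections fit under a 5 mm x 5 mm die at various pitches?
    # Simple square-grid estimate; real bump maps and keep-outs reduce this.
    die_mm = 5.0
    pitches_um = {
        "PoP solder balls (0.4 mm)":                   400,
        "Flip-chip microbumps (~40 um, assumed)":       40,
        "Fine-pitch copper pillars (~10 um, assumed)":  10,
    }

    for name, pitch_um in pitches_um.items():
        per_side = int(die_mm * 1000 // pitch_um)
        print(f"{name:45s} ~{per_side * per_side:>9,} connections")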


Putting more dies closer together makes thermal issues worse.

Hot parts next to other hot parts increase thermal power density, more heat to remove from a small area.

Colder parts next to hot parts can overheat because of the hot neighbors.

I suspect water cooling may become a must, air just cannot take away enough heat.


Air is gonna do fine. The bottleneck in CPU cooling right now is pretty much always the transfer between the die, the heat spreader and the cooling plate, not the transfer from the fins to the air. Water cooling can do slightly better because you can keep the water cool, and with that the cooling plate, and through that increase the heat flow from the CPU to the plate, but it's really only marginally better than a big air cooler.

And if you put more dies below a heat spreader, you get more surface area, i.e. better heat flow overall (compared to a single die with the same power consumption) from the dies to the heat spreader and from the heat spreader to the cooling plate.

That's also the reason why bigger air coolers don't really do as much as you'd think they should in terms of cooling performance or overclocking; the difference between an NH-U14S and an NH-D15 is really quite small. If the problem were heat dissipation through the fins, all you'd have to do is make the cooler bigger.
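
A crude series thermal-resistance sketch of that argument; the 150 W figure and the K/W values are made up but plausible, just to show how the die-to-spreader interface can dominate the temperature rise:

    # Junction-temperature estimate from a simple series thermal-resistance
    # stack. All numbers are illustrative assumptions, not measurements.
    power_w = 150.0    # assumed package power
    ambient_c = 25.0

    stack = [          # ordered from the air back up to the die, K/W
        ("heatsink fins -> air",         0.15),
        ("heat spreader -> cold plate",  0.10),
        ("die -> heat spreader (TIM)",   0.20),
    ]

    t = ambient_c
    for name, r_kw in stack:
        t += power_w * r_kw
        print(f"after {name:30s} {t:6.1f} C")
    print(f"estimated junction temperature: {t:.1f} C")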


You can bring water closer to the die, and make it flow faster past or inside the dissipator plate, thus achieving a larger heat flow. Effectively you can turn the dissipation plate into a moving liquid with a high specific heat capacity (5-7x that of the metal plate).


Put the darn thing in mineral oil or other heat-conductive liquid and have a radiator dissipate that heat?

Wouldn't that be easier if everything's on a big wafer? It's already been done for a normal motherboard.


Water has the big advantage that it's plentiful, cheap, and environmentally benign.

Sure, it'll take some more upfront engineering to design a system/rack/datacenter for water cooling than just immersing a server in a tank of inert liquid (Fluorinert or whatever they use these days), but I'm quite sure that at some point water cooling will be the standard solution in data centers.


NUMA will be much more important. This will really push on memory hierarchy aware data structures and programs.


Hmm, I would say the opposite. If all the memory and CPU cores are integrated on a single wafer, the penalty for off-chip access would be much less than if you had to go through a PCB.


It'll be less than a networked cluster, but it still mattered with Threadripper units and I'd expect a racked board of this nature to expose more disparity between accessing memory in other chiplet areas.


What niche cases do you think this applies to first? They will probably be the ones to propel this technology forward if I had to guess


Are you perhaps aware if videos of these presentations are available anywhere online?


He said he prefers dielets over chiplets, but we’ll see what sticks.


Yeah... I'm not entirely convinced about this future.

* PCBs are cheaper to manufacture than silicon wafers.

* PCBs can be arbitrarily created and adjusted with little overhead cost (time and money).

* PCBs can be re-worked if small hardware faults are found.

* PCBs can carry large amounts of power.

* PCBs can help absorb heat away from some components.

* PCBs have a small amount of flexibility, allowing them to absorb shock much more easily.

* PCBs can be cut in such a way as to allow for mounting holes or be in relatively arbitrary shapes.

* PCBs can be designed to protect some components from static damage.

What I can see on the other hand is some packages ending up being dropped into the PCB and soldered at the sides. Sometimes this is done with large through-hole capacitors, where the legs are bent and the capacitor sits in the middle of the PCB (inside a cut hole). Other than ball packages, you could probably drop the majority into the PCB itself.

The other obvious option for manufacturers will be to put more tech on a single die, but then other problems are also raised. For example, some parts are binned based on their tested results.


> What I can see on the other hand is some packages ending up being dropped into the PCB and soldered at the sides.

That's been done for at least 30 years:

https://www.keesvandersanden.nl/calculators/hp32sii_repair.p...


That's amazing and I'm glad that it's possible, even by the standards of older technology. I certainly think dropping packages into the PCB is the lowest hanging fruit for reducing depth.


I’d like to add onto your list:

* PCBs can act as integrated antennas.

* PCBs can easily mount connectors.


"Power turned out to be a major constraint. At a chip’s standard 1-volt supply, the wafer’s narrow wiring would consume a full 2 kilowatts. Instead, we chose to up the supply voltage to 12 V, reducing the amount of current needed and therefore the power consumed. That solution required spreading voltage regulators and signal-conditioning capacitors all around the wafer, taking up space that might have gone to more GPU modules."

Uh, this seems like a pretty serious downside.
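
The arithmetic behind that trade-off is simple enough to sketch; the load power and wiring resistance below are assumptions I picked so the 1 V case lands near the article's 2 kW figure, they are not values from the article:

    # For a fixed load power, supply current scales as 1/V and resistive loss
    # in the distribution wiring as 1/V^2. Load and resistance are assumed.
    load_w = 1000.0   # assumed power actually delivered to the dies
    r_dist = 0.002    # assumed effective distribution resistance, ohms

    for volts in (1.0, 12.0):
        amps = load_w / volts
        loss_w = amps * amps * r_dist
        print(f"{volts:4.0f} V supply: {amps:7.1f} A, ~{loss_w:7.1f} W lost in wiring")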


Also: shifting PCB prototype costs in the direction of silicon is going to be a tough sell for many applications.

> Another drawback of SoCs is their high one-time design and manufacturing costs, such as the US $2 million or more for the photolithography masks

> ... 6 paragraphs later ...

> Patterning wafer-scale Si-IF may require innovations in “maskless” lithography.


Silicon's limit is 0.7V and there is no trick against that.

No matter how efficient your power supply is, you will be losing power very rapidly within single centimetres.

That's why there is no way to work around the need to move the voltage conversion on chip.

In the future we may even increase IC voltages to reduce the copper losses for very-low-power but physically large devices.


Resistive losses are that high? The signal integrity issues are going to cause unacceptably high error rates.


“Silicon-interconnect fabric, or Si-IF, offers an added bonus. It’s an excellent path toward the dissolution of the (relatively) big, complicated, and difficult-to-manufacture systems-on-chips that currently run everything from smartphones to supercomputers. In place of SoCs, system designers could use a conglomeration of smaller, simpler-to-design, and easier-to-manufacture chiplets tightly interconnected on an Si-IF.”

Reading this reminded me of a remark from Bunnie Huang’s teardown of a dirt cheap ‘gongkai’ cellphone (https://www.bunniestudios.com/blog/?p=4297): “To our surprise, this $3 chip didn’t contain a single IC, but rather, it’s a set of at least 4 chips, possibly 5, integrated into a single multi-chip module (MCM) containing hundreds of wire bonds. I remember back when the Pentium Pro’s dual-die package came out. That sparked arguments over yielded costs of MCMs versus using a single bigger die [...] I also remember at the time, Krste Asanović, then a professor at the MIT AI Lab now at Berkeley, told me that the future wouldn’t be system on a chip, but rather “system mostly on a chip”. The root of his claim is that the economics of adding in mask layers to merge DRAM, FLASH, Analog, RF, and Digital into a single process wasn’t favorable, and instead it would be cheaper and easier to bond multiple die together into a single package. It’s a race between the yield and cost impact (both per-unit and NRE) of adding more process steps in the semiconductor fab, vs. the yield impact (and relative reworkability and lower NRE cost) of assembling modules. Single-chip SoCs was the zeitgeist at the time (and still kind of is), so it’s interesting to see a significant datapoint validating Krste’s insight.”

I wonder if there are any advantages to Si-IF from an ewaste (aka reverse logistics, cradle-to-cradle) perspective.


Another drawback the article doesn't mention is that tight component coupling removes the ability to treat the components as separate pieces. This can make repair or upgrades extremely difficult. This might be desirable for companies against the right to repair; if it's illegal to use software to prevent repair, making it sufficiently difficult by hardware design is a possibility.


There is this section, which acknowledges the problem of replacing components:

"We also need to consider system reliability. If a dielet is found to be faulty after bonding or fails during operation, it will be very difficult to replace."

Their proposed solution (which is not repair):

"Therefore, SoIFs, especially large ones, need to have fault tolerance built in. Fault tolerance could be implemented at the network level or at the dielet level. At the network level, interdielet routing will need to be able to bypass faulty dielets. At the dielet level, we can consider physical redundancy tricks like using multiple copper pillars for each I/O port."

Your point is still valid, just wanted to call out their thoughts on the issue.
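
For a concrete picture of the network-level option they describe, here's a toy sketch: route across a grid of dielets with BFS, skipping any dielet marked faulty. Purely illustrative, not their actual routing scheme:

    # Toy network-level fault tolerance: find a path across a dielet grid
    # that routes around dielets marked faulty. Purely illustrative.
    from collections import deque

    def route(cols, rows, faulty, src, dst):
        queue, seen = deque([(src, [src])]), {src}
        while queue:
            (x, y), path = queue.popleft()
            if (x, y) == dst:
                return path
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nx < cols and 0 <= ny < rows
                        and (nx, ny) not in faulty and (nx, ny) not in seen):
                    seen.add((nx, ny))
                    queue.append(((nx, ny), path + [(nx, ny)]))
        return None  # unreachable: not enough redundancy left

    faulty = {(1, 1), (2, 1), (1, 2)}           # dielets found bad after bonding
    print(route(4, 4, faulty, (0, 0), (3, 3)))  # path detours around them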


It’s not just about faults.

But about customizability and upgrades!

I like to choose how much RAM and storage and which ports I want with how many generic, vector, FPGA and neural cores, thank you very much.

And I like to change them later, to upgrade gradually. Even buses.


This is currently targeting non-serviced computers, e.g. data center and mobile/embedded, where you replace the whole unit anyway.


Aka something that should never be that way in the first place.


That doesn't seem to solve the issue of when a chip fails altogether. While difficult, it's not impossible or unheard of for people to replace chips on a PCB, like a charge controller that has fried.


Big CPUs / GPUs already have physical redundancy, and a way to cut / rewire a limited number of faulty parts by laser etching in the die, prior to packaging.


Cell processors are interesting because they do quite a bit of binning via a user-facing SPI slave. You shift a thousand-or-so-bit payload into it from a support/bring-up processor that tells it which pieces are disabled, how the PLLs are configured, etc.

Hacked PS3s could reenable the binned off 8th SPE for instance.
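
A hypothetical sketch of what that kind of bring-up payload might look like, packing an enabled-units mask and a couple of PLL fields into one bit string. Field names and widths are invented for illustration; this is not the real Cell register layout:

    # Hypothetical bring-up/binning payload: pack which units are enabled and
    # a PLL setting into one value a support processor could shift in over SPI.
    def pack(fields):
        # fields: list of (value, width_in_bits); returns one int payload
        payload = 0
        for value, width in fields:
            assert 0 <= value < (1 << width)
            payload = (payload << width) | value
        return payload

    spe_enable_mask = 0b0111_1111  # 7 of 8 SPEs enabled (one binned off)
    pll_multiplier = 32            # invented field
    pll_divider = 1                # invented field

    payload = pack([(spe_enable_mask, 8), (pll_multiplier, 8), (pll_divider, 4)])
    print(f"payload = {payload:#x} ({payload.bit_length()} bits used)")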


That might be fine if they can produce them cheaply enough... I’m sure the same arguments were made when the CPU switched to having non-repairable transistors.


Is anybody actually repairing a faulty motherboard? Let's say one of the chips or some resistor is broken.


Yes, quite often in fact. On older motherboards, often the only problem is something benign like a blown capacitor, which is a few cents at the hardware store and a few minutes labor to desolder the old and attach the new. It's a really common part to wear out due to heat stress, and it manifests as funky logic problems, since the caps are mostly there to clean up the line noise. I've saved a few flatscreen monitors this way, that were flickering and unusable; I'll take a $5 handful of parts over a new $179 monitor any day of the week.


I have a friend who buys malfunctioning TVs for a few euros, and most of the time (8-9 out of 10) there is only one broken capacitor that needs replacement. He makes maybe one hundred euros every month by doing so, with really little work.


I have only a rudimentary understanding of electronics and what the various components do, so this may be a dumb question, but how on earth do you diagnose an issue like this? If my monitor dies or my printer stops working and I open it up, I’m completely bewildered at what steps you’d take to figure out that some random resistor or capacitor out of (what seems like) thousands of components is the one causing the issue. Do you just go through and test each one with a multimeter until you find it?


If a consumer electronic device suddenly refuses to power on, it's a failed electrolytic capacitor most of the time (in my experience). Finding them requires effectively no knowledge of circuits. Sometimes one is visibly deformed/exploded, but I usually just replace them all.

If that doesn't work, yea you pretty much start testing things with a multimeter starting from the power source. But if it isn't a capacitor failure, it's probably ESD or power-surge related damage and not worth trying to fix.


I attempted to repair a coffee machine that had a short that left a black burn on the PCB. Replaced the parts around it and it still wouldn't turn on. Likely the power surge destroyed a lot of things.


Those sorts of power faults can burn out traces on the circuit board too (you can bypass the break sometimes), and often there are fuses that need changing (in my very limited experience).

Louis Rossmann on YouTube has a lot of stuff on circuit repair that I find good, https://m.youtube.com/watch?v=_at9Jy3dfeY.


You can also test electrolytics in-circuit using an ESR meter. That's equivalent series resistance, if I'm remembering correctly.


Imagine an industrial robot which welds gratings, and its controller. Old. Manufacturer either long gone, or bought by another, and that merged construct merged again with another, ad infinitum. So no support anymore, no spare parts. But the thing works O.K., has a comfortable display where you can program it easily in place, and even understands "teach in".

But welding grates is more of a side gig of that company, so they only do batches from time to time. One day a new, rather large batch is due, but robot doesn't start up at all, and programmer stays dark. It has power but is dead. What to do? Disassembling the controller/programmer of course. Something super special, running only one "App", written in something esoteric, running on a CPU which was designed to only run that esoteric stuff and nothing else.

And probably a mouse somehow crawled into the case and shat and pissed onto the PCB, and the CPU. Which is corrosive and dissolves the pins of the CPU and the copper traces of the PCB, turning them into some sort of gel. But not much area at all, so easy to bridge with wires if it weren't for the dissolved pins of the CPU. So I removed that part, cleaned it with compressed air, benzine and alcohol, and then very slowly and carefully drilled open the edge of the CPU until I could see the bonding wires from die to pin. Again very carefully soldered wires onto the missing pins, bridged those over the broken copper traces on the PCB, hot-glued that crazy work, and reassembled it.

Against my expectations it worked! At 10 MHz! For years afterwards. How the mouse made it into the case wasn't obvious, because the largest openings had only the diameter of a pencil, and I can't imagine a mouse fitting through that. But it somehow did. Anyways, what I wanted to say is that sometimes you can see what's wrong without knowing electronics at all. Same with the capacitor problem other commenters mentioned. They have to have a flat top; any bulging is wrong, especially when the top has cracked open and some gooey stuff leaked out. Or from below.


That's the general idea, yeah. I always start with the power supply because that part is under the most electrical strain, and tends to get hot. That means its components are often both the first to fail, and the most likely to cause weird problems when they do fail. Other than that, it's just intuition about how I think the device probably works, and a lot of trial and error. Major bonus points if there's a datasheet available, or a kind forum poster has had a similar problem to give me a lead on where to look.


Faulty capacitors often bulge, so you can find them by visual inspection.


Bulge, or even just burst open and start leaking:

https://en.wikipedia.org/wiki/File:Al-Elko-bad-caps-Wiki-07-...


The "capacitor plague" is such a common problem, and there was a notorious run of bad ones with a defective electrolyte from the late 90s to mid 2000s, that there is even a site dedicated to it: badcaps.net

There is an extremely detailed article about it here: https://en.wikipedia.org/wiki/Capacitor_plague


I know that hobbyists do this on vintage equipment, repairing '80s home computers and such. But is it at all common at scale with recent motherboards? I find it hard to believe that any substantial percentage of 2010s motherboards that fail are being diagnosed, repaired, and put back into service.


What do you think happens to electronics that are sent back during warranty due to being defective?

They are mostly "refurbished" (=repaired and cleaned), and then sold again. If possible, even as new. Or, of course, sent back fixed.


I'd be open to any contrary numbers on this, but my expectation has been that most mobos pulled out of service, especially by high-volume operators like the AWS datacenters, are just junked, not repaired or refurbished.


I have seen videos of people buying batches of broken game consoles and replacing USB connectors and charge controllers, since they often break and are not too hard to replace with the right tools.

I also spoke to someone from Russia who worked as an electronics repair person fixing things we would normally throw away, because the price of new equipment is very high compared to the price of an expert's time to fix it.


You should check out Louis Rossmann's videos on YouTube - he does exactly this.

This one's not on his channel but is a good example: https://www.youtube.com/watch?v=g0S1ku9xvDI


This sounds like making the entire motherboard an equivalent of an epoxy blob.


This step is inevitable in my opinion. This is especially true for mixed-signal (that is, some digital and some analog components) systems. Its early forebear, the multi-chip module (MCM), was instrumental in IBM getting its later mainframes to hit the density and thermal constraints.

Like the authors, when I saw AMD's chiplet pictures for the Zen2 chips I felt that we would see this expanded. Intel has also done some interesting optical chip to chip interconnects that would facilitate assembling these newer multichip modules into chassis that route signals to and from the outside world.

The next (and perhaps last) element to fall into place is a way to efficiently cool these systems. One of the problems that large data centers face is not that they want "smaller" boxes but that they need to pull enough heat out of a rack of servers in order for them to reliably function.


For data centers, while air cooling might be running out of steam, there's plenty of leeway in water cooling. So I don't think that's an insurmountable problem.


Having been inside Google's data centers I agree with you :-). I did a top-level, high-efficiency data center design for multi-tenant hosting with Google-scale economics as an exercise once, and talked with some potential partners who were very enthusiastic. It would cost roughly double what the existing warehouse-type data centers cost to build, but it would repay its costs faster than they did (given colocation cost structures at the time). It also leveraged some of the Open Compute designs to achieve better density.


For reference, there are companies that have developed this to a commercially viable level. First that comes to mind is ZGlue, a company that has the design tools, interconnect fabric, and chiplet ecosystem required to deliver these kinds of devices.

https://www.zglue.com/


This is really impressive work.


The downside is that independent entities are no longer able to design hardware projects; there are only a few companies that can design and make such integrated circuits. CPUs are not only made for computers but for all sorts of appliances. Time will tell if these big companies will sell chiplets and the tech to assemble them onto silicon to 3rd parties. PCB tech is fairly democratic in that sense.


I think it's more fruitful to (initially at least) see this as an alternative to single-chip SoCs rather than an alternative to PCBs.


I kinda like this idea, but I think it only makes sense where the end product sold to the intended consumer is wholly integrated. So, this might make sense for a smartphone. Or perhaps custom systems in Google- or Amazon- scale datacenters, with a number of different custom types tuned to exactly the work each is intended to do, and packed as densely as the intersection between thermodynamics and economics will allow.

But, I don't think we can escape the need for packages and PCBs entirely. At some point, you're going to need to interface with something that either A) you don't or can't control, or B) something at appropriate scale for interfacing with the humans whom the fancy system is ultimately supposed to serve. In either case, here come standard connections that are much bigger than the chiplet dies or the interconnect fabric between them, and thus, the need for PCBs to connect the Systems-on-a-wafer to the outside world.

As such, I think it will be a while before I can pick components out on Mouser's website, and have all those component chiplets fused to an interconnect wafer and delivered to my home or employer's shipping dock (though that would be damned awesome).


> As such, I think it will be a while before I can pick components out on Mouser's website, and have all those component chiplets fused to an interconnect wafer and delivered to my home or employer's shipping dock (though that would be damned awesome).

I agree. People have been talking about that future since at least the 1990s (back then SOC didn't mean a somewhat standard chip with a lot of peripherals, but literally a custom die with your own hardware on it). "Hardware/software co-design" was part of the jargon of the day. I'm not holding my breath.

But I do imagine a lot of consumer products going this way. Not a $3 IoT light switch/malware vector, but anything in the $50-$500 price range with volume over a million units is probably worth it; getting rid of connectors is a win from both a BOM and a reliability standpoint.


Color me skeptical, but I remember being 12 and seeing this on The Screen Savers on TechTV in 2003.


I like the idea of faster and smaller devices but dislike the idea of expensive production methods that are only realizable with heavy capital investment and large volumes of products.

Like I can realize a PCB prototype in my apartment (or rather, my parking lot without telling my landlord), or rent a CNC machine at a makerspace/public library to do it. If I need the thing fabbed with a nice solder mask and silk screen, I can have it made for < $20 domestically with under two weeks lead time. Component sourcing is even easier.

But where do I go to have the chiplets I need for the circuit? Organize the logistics to ship them to the clean-room where they can be packaged on this fabric? How many widgets do I need to ship for this to be viable, or for the contract fab to not laugh at me? How do I prototype? How long is the lead time?

It just seems like there's a lot in the way of this being viable for run of the mill projects.


Increasing capital costs and consequent dependence on huge volumes have been a defining feature of the semiconductor industry since the invention of the integrated circuit.

AFAICT this technology will have no impact on the hobbyist end of the market, but rather (if all works out) it enables those $zillion behemoths to produce ever-faster systems, since they won't be as bottlenecked by off-chip bandwidth as they are with today's PCBs.


You’ll need $2 million just for the mask. And if you want to fab yourself, the machines cost in the range of $10 to $50 million. For one machine. And you will need many.

As the article states, you will not make those at home any time soon, unless “maskless” fabbing becomes a thing.


You can order custom PCBs from China for a couple of dollars. How likely is it that this technology will be affordable at small quantities?


Seems one of the main advantages is higher density of conductors and connectors compared to a PCB. Aligning those dielets with micrometer precision would seem to require equipment out of reach for the hobbyist. And then there's bonding them: while they don't use solder but rather temperature and pressure, with such small connectors I guess the line between bonding and destroying the thing is very, very fine. Again requiring specialized and presumably expensive equipment.


First of all, I'm no expert (at all, I stay firmly on the software side of things) but here's my 2c:

I'm going to guess that when PCBs were introduced, you couldn't get them for a couple of dollars from China either.

These kinds of things tend to become cheaper over time, as manufacturing, competition (newer or competing components) and availability grow.

So I assume it's at least possible that at some point, a few decades from now, they're cheap :)


Does anyone know how this compares with AMD's Infinity Fabric? How is it similar or different from it?


Infinity Fabric is a protocol-level thing (a way to distribute information over a bus, similar to PCIe), whereas this is a physical piece of hardware that electrically passes the signals between two chips.

I see no reason why Infinity Fabric could not be routed through a silicon interconnect, but it would be wasteful. The whole point of silicon interconnects is that hardware protocols like PCIe are no longer necessary and you can use much lower-power/cheaper ways to transfer data.


It seems like promising technology but the cost of making these chiplets must outweigh their benefits.


I think it should be called Silicon Fabricated Interconnect so we can just call it Si-Fi...


Cyberpunk 2077, here we come!


Goodbye customizability and modularity, from a user standpoint.

I guess Apple will hire them pretty soon.


Will this increase the number of bit flips from cosmic rays?


All the cool kids these days are _wearing_ nodeJS , not just writing it!


It sounds like hardware side of computer science is rediscovering the UNIX philosophy - each unit does one thing, well, with standardized interfaces.


>pack dozens of servers’ worth of computing capability onto a dinner-plate-size wafer of silicon.

Congratulations on the insurance money from your building burning to the ground.



