Toshiba and WD NAND Production Hit by Power Outage: 6 Exabytes Lost (anandtech.com)
162 points by deafcalculus on June 29, 2019 | hide | past | favorite | 101 comments


This doesn't make sense:

>Toshiba Memory and Western Digital on Friday disclosed that an unexpected power outage in the Yokkaichi province in Japan on June 15 affected the manufacturing facilities that are jointly operated. //

Surely that's not the reason, it would have to be "and local [backup] power failed, and the failovers for that failed too"??

Toshiba manufactures generators too; it's not like they'd need to go far to get backup power designed for them.

There must be more to this? (Which explains why people are assuming it's suspicious, I guess; and this site makes 35% of global NAND output.)

FWIW, I hadn't realised that it takes ~2 months to process a wafer into a chip.


Semiconductor fabs require MASSIVE amounts of electric power. In fact, to a first approximation, 5-10% of the cost of a silicon wafer is purely the cost of electricity. Source: worked in a finance dept managing fab spend.


There had to be a problem with backup systems.

In the US in certain industries, you have to do quarterly disaster recovery testing.

I am surprised they don’t do something like that here. The losses would certainly warrant it.


Yeah I was surprised. Don't they have a UPS for this kind of thing?


Yes. My Google-fu is not up to finding them, but I happen to be familiar with this company's products, and here is one: https://www.energy-xprt.com/products/purewave-ups-systems-55...

One application of this kind of product is chip fabs because they are so sensitive to power disruptions.

Whether Toshiba/WD had this type of system, and if so why it didn't prevent the loss of product, was not mentioned in the linked article. I have heard that there is a glut in chips for SSDs, so a reason to cut production can't be ruled out. However, it seems like Toshiba/WD would pay the price for this outage while their competitors reap the benefits (unless the competition somehow agreed to share the cost).


I believe the memory industry is also known for price fixing, so it's not out of the question. The oversupply is ridiculous, though; the last time I shopped for an NVMe drive they were $1000 for 500GB, and the other day I bought 1TB for $500.


1TB TLC was $300-$500 3 years ago, where are you shopping, the expensive store?


The other day I bought 2 TB for $189 (Intel 660P). Actually I bought 5 of them, because they're so cheap in Taiwan right now. I was really pleased to discover that they fit my spare MacBook Air using an adaptor, and 10.14 Mojave supports TRIM.


Amazon was selling 2TB NVMe from Intel for under $200 just last week!


Yeah Intel 660p (QLC, but with some SLC cache and a decent controller) is going for under $100 per TB. In my (limited) experience it makes a solid non-24/7 workstation drive (e.g., I use it for local object storage for large builds).


> The oversupply is ridiculous though;

No it's not? Isn't that basically capitalism working as intended (in this rare case)?


> Yeah I was surprised. Don't they have a UPS for this kind of thing?

One would think. Though, the tech industry is much like any other industry, and I imagine a conversation like this:

Engineer: We need to install backup generators in case the grid goes down.

Middle manager: Can you do it without stopping production?

Engineer: No.

Middle manager: Screw it. It's the next guy's problem.


At this sort of level, all power systems can be replaced or redesigned without the load being switched off...


Here's an article from one month ago discussing the over-supply of NAND and DRAM (and the effect it has on pricing): https://www.forbes.com/sites/tomcoughlin/2019/05/25/nand-dra...

I can't help but feel very skeptical about the timing of this event, given the history of price-fixing in the industry.


These kinds of issues do seem to hit with a suspicious degree of regularity; it seems every 1-2 years there is a shortage due to some calamity or other...


Yea like that time a suspicious typhoon knocked out all the Hard Drive fabs in Thailand.


Sorry, it was not a typhoon, but flooding due to policy failures. Each Tambon (sub-district) controls its own dikes and levees, plus 3 or 4 "top-level" water-related ministries were unable to cooperate, and general bureaucracy combined with nobody willing to sacrifice (get flooded to lessen the impact upstream) caused the flooding of these industrial parks.

As a result of this and SSDs, Thailand's HDD industry is mostly gone now (among other losses techies don't hear about).


I was there at the time; it was pretty bad.


"It would be a shame if we build labs in places with unavoidable weather problems, wouldn't it?" "Yes, it would! I believe it's your turn to tee-off".


A power outage no less. Seems like the sort of thing that could be solved.


How ironic is it that Toshiba makes backup generators and industrial uninterruptible power supplies?


An important quote from the comment section:

>Five fabs and an R&D center, outage was after the batteries also ran out.

For perspective, the batteries at GF's leading fab can run about 1/3 of the systems for only a few minutes. That's the scale we're dealing with.

I think before we spin up all sorts of conspiracy theories, we need to look into the reason why there was an outage in Yokkaichi.


Batteries (and giant multi-ton spinning flywheels, which serve the same purpose) are not a long-term power supply. They are intended only to bridge the couple of minutes until generators can come online and provide stable power. So yes, it's expected that they drained; the question is: why didn't the generators come online?
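For a sense of why the batteries drain so fast, here's a back-of-envelope sketch; every figure in it is an illustrative assumption, not an actual Yokkaichi number:

```python
# How long a UPS battery bank can bridge a fab until generators spin up.
# All figures are illustrative assumptions, not actual Yokkaichi numbers.
fab_load_mw = 50.0        # assumed total fab draw, in megawatts
battery_mwh = 5.0         # assumed battery bank capacity, in megawatt-hours
covered_fraction = 1 / 3  # fraction of systems the batteries actually feed

bridge_minutes = battery_mwh / (fab_load_mw * covered_fraction) * 60
print(f"bridge time: {bridge_minutes:.0f} minutes")  # → bridge time: 18 minutes
```

With numbers like these, the batteries only exist to hand off to generators; they were never meant to ride out a grid outage on their own.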


That is a good question, but if I had to guess:

Judging from the scale, the "generator" would have to be a power plant? I.e., it is not feasible to have generators operate at this scale?


> It is not feasible to have generators to operate at this Scale?

Getting an exact figure on how much utility power they use is proving difficult, but let’s shoot on the very high side and say it’s 100MW. It’s fairly easy these days to buy generators that put out 10MW of power and are either diesel or natural gas powered. Price wildly varies based on a number of factors, but even on the very high end that would cost $50M for ten such generators.

The facility itself was in the multiple-billions range to build, so the added cost would be a rounding error. Given the environmental hazards alone from losing containment, let alone how much the outage costs in lost business, it seems pretty logical to me that the generators existed.
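The arithmetic behind that estimate, spelled out (the 100 MW load and $5M-per-unit price are the deliberately high-side assumptions above):

```python
# High-side cost estimate for on-site backup generation.
fab_peak_load_mw = 100              # deliberately high-side load assumption
generator_unit_mw = 10              # typical large diesel/gas genset size
cost_per_generator_usd = 5_000_000  # very high-end per-unit estimate

units_needed = fab_peak_load_mw // generator_unit_mw
total_cost_usd = units_needed * cost_per_generator_usd
print(units_needed, f"${total_cost_usd:,}")  # → 10 $50,000,000
```

Even at these inflated prices, $50M of generators against a multi-billion-dollar facility is a rounding error.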

So the question really is, was it incompetence (unexpected failure of backup systems) or malice (good excuse to justify constraining supply)? We will likely never know.


Funny you should ask, but the parent company of said fab actually builds and sells generators of the appropriate size.

Outside that, have you not seen a multi-building data center complex? The power demands aren’t that different.


Gas turbine power plants aren't that expensive...

Lots of sites use them for backup power and simultaneously use them as regular power plants selling power back to the grid.


I was estimating to the very high side to make a point. One can easily get 10MW in the 500k-1M USD range, but I did see some very elaborate setups peaking out near $5M so went with that. Heck, I recently saw some on Alibaba for $100k, but I highly doubt Toshiba/WD would buy from there.


I would be most grateful if someone could please explain what sort of tools are likely to be used here, and why a power loss to those tools would ruin days/weeks/months worth of output relative to the time they were offline?


Semiconductor manufacturing involves a lot of precisely controlled processes. You put the wafers into furnaces and pass a gas over them for X time and Y flow rate at Z temperature to impregnate the wafer with various chemicals. You put them in low pressure plasma environments to etch them, again for X time at Y flow rate. There are half a dozen more of these as well, like applying metal and implanting ions.

These values are experimentally tuned to achieve the desired effect as accurately as possible and to improve the number of working chips that leave the factory. If the power cuts out, you don't know what conditions the wafer experienced while the system was winding down completely uncontrolled, and your processes haven't been designed for the wafer going through the ramp-up twice.

The reason it's lost so much output is that modern semiconductor processes have hundreds of steps and (I believe) a lead time measured in months, so the amount of material in flight at any one instant has to be huge to get any reasonable throughput.
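That last point is Little's law: work-in-progress = throughput × lead time. A sketch with assumed figures (real fab numbers aren't public):

```python
# Little's law: WIP = throughput * lead time.
# Both inputs are assumptions for illustration only.
wafer_starts_per_day = 3_000  # assumed site-wide throughput
lead_time_days = 60           # ~2 months from bare wafer to finished chip

wip_wafers = wafer_starts_per_day * lead_time_days
print(f"{wip_wafers:,} wafers in flight at any instant")
# → 180,000 wafers in flight at any instant
```

So a single instant of bad power can, in principle, touch two months' worth of output at once.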


So they realized late that an early step of the pipeline was off, rendering everything that had been through it flawed?


No, they experienced a power outage which ruined every chip that was at any point in the pipeline. The realization would be immediate (as soon as the power goes out).


It said that it was half their production output for one quarter. Is that really possible from an instantaneous power outage?


If at that instant they have half their quarter's product in various stages of processing (and there are hundreds of such stages, in hundreds of parallel pipelines), and all of those stages shit the bed, then obviously yes.


That's the point. The only way that's possible is if things sit in a process that takes many months.


or the restart procedure after a power failure is going to take a few weeks...

I can imagine that if the factory is entirely automated and a full restart has never been attempted, every single machine will probably be in some bad state, with unknown chemicals settled into unknown pipes, requiring custom flush processes to be designed; in some cases machines might have to be replaced, which in a human-free clean room isn't easy...


Yes, that's exactly what's going on here. The processes last quite a long time.

Similarly, a drought that doesn't last that long (relative to the life of a big tree) can nevertheless kill that tree even though said tree has been growing for centuries.

In other words, there need be no relationship between how little time it takes to ruin something and how long it takes to make.


If you understand that that's the way the process works, then why are you asking if it's possible?


More like: many steps in parallel would all be affected by the power outage, and many or all of the wafers in progress were ruined or simply not economical to recover.


Interesting. So what do they do with the materials afterwards? Landfill or melting or?


Probably varies depending on stage. Wafers are very pure silicon, so the ones that haven't yet been contaminated with trace elements would probably be melted down for reuse. Wafers later in the process are probably carrying a lot of material (gold/silver/platinum, maybe iridium, some heavy metals?) that would be worth salvaging in bulk.


Acids. The manufacturing process is basically controlled etching: removing the unwanted parts and keeping the designed metal circuits.

When you lose power, you are not sure if the chips sat in acid for too long or too short, or were coated with an unwanted amount of material. The uncertainty kills the yield rate, which can already be low since memory chips require repeated stacking nowadays.

Similar to https://i.stack.imgur.com/yTQqw.jpg


Yesterday this issue was being discussed in /r/DataHoarder and I asked a similar question:

https://www.reddit.com/r/DataHoarder/comments/c6mt9l/a_13_mi...


I imagine that the process is very highly pipelined and optimized, and I would imagine that they had some sort of backup (generator) that failed.

One analogy is to think about a batch script you were working on that touches a lot of files (1000s). Now imagine power was cut and the batch script was interrupted because the computer turned off, but that computer hasn't been turned off in a long time (say it was a server).

First, you have to turn the server back on after the power outage. Were any files corrupted? You now have to get that server into a known working state, and if you have kept it on for years... then you may be in a world of hurt.

Now you've got your server up and running. You have the option of going through each of the 1000s of files your script was working on... but that will take time. Does it make sense to start from scratch? You will have to throw out all the files you were working on, but at least you can start the script again. You could attempt to salvage every file, but that will also take time.


Another analogy is that you have a bakery that produces soufflés on an industrial scale.

These soufflés take two months to bake. The baking has to be done in such a precise fashion that even fractions of a degree of variance result in the entire batch being ruined.

Worse still, it takes a long time to bring the oven up to temperature and stabilise it at the precise temperature. You can't just scrap the ruined batch and immediately start production again.


I like this analogy. Let's expand.

These soufflés take two months to bake but you need soufflés every day for sales. What does this mean?

You always have 2 months of soufflés in various states of production at all times.

Now you lose power, and all of these very fragile in-production soufflés are lost. Furthermore, it will take you two months to get the first soufflés off the restarted production line.
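Continuing the analogy with toy numbers (both inputs are made up): the outage costs you the in-flight soufflés *and* an empty-pipeline gap before output resumes.

```python
# Toy soufflé pipeline: what a single power loss costs over a 90-day quarter.
lead_time_days = 60   # two months per soufflé
daily_output = 100    # assumed finished soufflés per day at steady state

wip_lost = lead_time_days * daily_output        # everything mid-bake is ruined
producing_days = max(0, 90 - lead_time_days)    # pipeline must refill first
quarter_output = producing_days * daily_output
baseline_output = 90 * daily_output
print(wip_lost, quarter_output / baseline_output)  # → 6000 0.3333333333333333
```

An instantaneous outage thus erases roughly two months of product, which is why the headline figure is measured in quarters, not minutes.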


I recall the story of Micron's first Chinese fab: a 1-millisecond out-of-phase brownout and they loose a few megabucks instantly, and like that during every electrical event.

Giant UPSes are not an option in the industry because fabs eat oodles of electricity, and it is cheaper to loose a megabuck once a year than to build a stabilisation/UPS plant.


my friend - it's lose, not loose.


So... does that mean they'll be hiking NAND prices, just like with HDD prices after the Thai floods?


Looking at the numbers, it should not move that much. According to this article, https://www.businesswire.com/news/home/20190307005812/en/TRE..., 912 exabytes of HDD and SSD storage were sold in 2018: 800 exabytes of HDD and 112 exabytes of SSD. The SSD market grew 45% in 2018. If manufacturers project the same growth rate, then 2019 SSD shipments will be around 162 exabytes. This puts the 6-exabyte loss at around 3.5%.

But we all know that markets are driven by emotion: losing 3.5% of your raw materials in a market that is projected to grow 45% will cause big fluctuations. But that is just my opinion.
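The arithmetic above, reproduced (inputs are the figures from the cited article; the exact result lands near the ~3.5% quoted):

```python
# 2018 SSD shipments from the cited article, projected forward one year.
ssd_2018_eb = 112   # exabytes of SSD storage sold in 2018
ssd_growth = 0.45   # 2018 SSD market growth rate
lost_eb = 6         # WD's share of the outage loss

ssd_2019_eb = ssd_2018_eb * (1 + ssd_growth)
loss_share = lost_eb / ssd_2019_eb
print(f"{ssd_2019_eb:.0f} EB projected, loss ≈ {loss_share:.1%}")
# → 162 EB projected, loss ≈ 3.7%
```

If Toshiba's side of the loss is included (see the replies), the fraction grows considerably.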


I believe you forgot Toshiba, which could be 9 exabytes lost according to the article for this quarter, so we are talking about 15 exabytes lost.

According to your data, the total quarterly production is 41 exabytes for SSD, which would mean losing about 37% of total SSD production this quarter.

That being said, it is the first time I have read about the scale of storage production worldwide. It makes you wonder what humanity stores in those hundreds of exabytes per year. Probably a lot of duplicated data or unused bytes.


These days a large portion of it is basically just logs: logs of all the traffic we generate looking at content on the internet, used to try to target ads. That, and videos; YouTube itself probably accounts for a significant amount of that storage use.


Are logs at this scale, and less-popular videos, usually stored on SSDs? I thought HDDs are still cheaper, and RAID gives enough throughput given enough disks?


HDDs' access time (10 milliseconds or more) means a hard disk can't really serve more than 100 concurrent users, assuming each wants to stream a chunk of video every second.

That makes it a poor choice for serving anything but the rarest of YouTube videos.
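A rough sketch of that constraint (assumed figures; it ignores sequential transfer time, caching, and RAID striping, all of which help in practice):

```python
# Rough model of HDD random-read concurrency for video serving.
access_time_s = 0.010        # ~10 ms average seek + rotational latency
reads_per_second = 1 / access_time_s
chunks_per_viewer_per_s = 1  # each viewer fetches one chunk per second

max_viewers = reads_per_second / chunks_per_viewer_per_s
print(round(max_viewers))  # → 100
```

SSDs sidestep this because random-access latency is microseconds, not milliseconds.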


I believe a typical video, stored in all of the Youtube formats, uses on the order of a megabyte per second. So, a 10TB disk probably holds about 100 days of video. Seems fine for videos that are watched less than once a day, that's probably a vast majority of Youtube's storage.
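A quick sanity check on that estimate, using the assumed ~1 MB/s across all stored formats and a hypothetical 10 TB drive:

```python
# How many days of video fit on one large HDD.
# Assumes ~1 MB/s total across all stored formats (the parent's estimate).
bytes_per_video_second = 1_000_000
disk_bytes = 10 * 10**12  # 10 TB drive

days_of_video = disk_bytes / bytes_per_video_second / 86_400
print(f"{days_of_video:.0f} days")  # → 116 days
```

So "about 100 days per 10 TB disk" holds up as an order-of-magnitude figure.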


Logs are probably on HDD, video might be on SSD given how aggressively it gets edge-cached.


> It makes you wonder what does the humanity store in those hundreds of exabytes per year.

Let’s as a hypothetical assume that Apple iPhones average 128GB of storage in 2019 (they go up to 512gb after all now). Let’s also assume Apple sells 50M iPhones in 2019. Doing the math, assuming my wild estimates are right, that gives us about 6.4 exabytes of storage usage in 2019 for just iPhones alone.

Android however ships something like 1.25B devices per year. The storage average is way lower I’d assume, but that’s still easily in the tens of exabytes per year most likely.
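The arithmetic above, sketched out; all inputs are rough assumptions, including a guessed 32 GB Android average that isn't in the parent comment:

```python
# Order-of-magnitude smartphone NAND consumption per year.
# All inputs are rough assumptions (the 32 GB Android average is a guess).
iphone_units = 50_000_000
iphone_avg_gb = 128
android_units = 1_250_000_000
android_avg_gb = 32

def exabytes(units: int, avg_gb: int) -> float:
    return units * avg_gb / 1e9  # GB -> EB

print(f"iPhone ≈ {exabytes(iphone_units, iphone_avg_gb):.1f} EB, "
      f"Android ≈ {exabytes(android_units, android_avg_gb):.0f} EB")
# → iPhone ≈ 6.4 EB, Android ≈ 40 EB
```

Phones alone plausibly consume tens of exabytes of NAND per year before you even count SSDs, data centers, and embedded devices.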


HDD and RAM companies always use bullshit excuses to collude and raise prices.


6 exabytes is the Western Digital side, the article says that Toshiba produces more there, so it is going to be 12+ exabytes.


NAND chips are a commodity, so you're actually undercounting their sales by a tremendous amount. Don't forget the integrated NAND chips in... well, nearly every product produced across every market segment for the past 20 years. If it runs on electricity, it probably has a NAND chip somewhere.


Most of those "if it runs on electricity, it probably has a NAND chip somewhere" are very different chips from those used in SSDs; they are typically low-density, manufactured on older processes (1xx nm), and used for firmware/bitstream storage.


That is true. However, those less performant chips, due to the way NAND storage functions, could easily be used in place of the newer chips. If you need twice as much performance, you just use 2 chips and you're done. Unless dealing with extreme size constraints like in a cell phone (where the high-performance NAND isn't even used for price reasons), there's no real reason to prefer a single chip over two. For the consumer, that is. If you're producing chips and trying to drive per-chip profit margin and trying to make your product look like CPUs and other products that have some complexity behind them, though, it's useful.

I can understand how NAND prices don't look suspect if you're not terribly familiar with the history and low-level factors of the industry, but if you really look into it, it's kind of ridiculous. The same companies price-fixed DRAM chips and got busted. Then they price-fixed LCD panels and got busted. Then they price-fixed DRAM chips again and got busted again. They were being investigated for price-fixing NAND chips, but the South Korean president shut down the investigation, shortly before being ousted for rampant corruption (and then being bailed out of prison by Samsung).

Anything that is present in such a gigantic variety of devices should cost almost nothing. That's just economics; it becomes commoditized, and the materials involved and their rarity become the primary drivers of price.

Comparing price per terabyte between mechanical drives and NAND-based storage is the most telling to me personally. The technology that goes into modern high-density mechanical hard drives is utter madness. They should be, by all accounts, astronomically expensive. They use helium, of which there is a global shortage. They coat the platters with ruthenium and other rare materials. They include neodymium magnets. They include high-precision motors that spin platters fast enough that drag against the air becomes a significant factor (leading to the use of helium), yet still maintain enough precision to seek to a very precise spot in milliseconds. Also, you've got 'hybrid' drives that include both mechanical and NAND storage... which incur almost no premium over the pure mechanical solution. Now they're beginning to produce drives with integrated lasers for heat-assisted magnetic recording. And these are still many times cheaper on a $/TB basis compared to... just a dumb parallel array of NAND gates that don't require anything rare?


> That is true. However, those less performant chips, due to the way NAND storage functions, could easily be used in place of the newer chips.

I was mostly thinking of serial flash chips there, which use a very different interface from plain NAND; they have their own little controller built into them. They are used in relatively big numbers for firmware etc. Even if they had the same interface, there is still a huge gap between a 128 MBit chip and the densities you find in PC storage, where we now have 512 GBit chips.

Samsung & Co. might make them too, but you mostly see other semicons badged on them.


If you're talking about the flash typically used in microcontrollers, then no... that's NOR flash, not NAND.


No, it means prices will go up because there is a shortage, like with everything else. Sugar, gasoline, and so on are good examples. HDD prices are just one more item that follows the supply/demand curve.

Sure there will be some clever parties that will make some money anticipating this. But that's the same reason why the price of the gas at the pump that was already in the tank jumps up because of a shortage somewhere else. The whole stock is instantly valued at a different price.


I think that is entirely up to Toshiba/WD. The profit margin on NAND is so astronomical and the price charged for it is so completely decoupled from the cost of production (which is as close to nothing as anything gets) that they could afford to just absorb the 'loss', but it might mess with their projected schedules of how much they had expected to make, so I could see them jacking up the price to compensate. The market and society in general seems to be content with permitting the NAND manufacturers price-fixing even when it's become absurd (do a tally of the raw materials and processes involved in producing 1TB of modern mechanical hard drive storage compared to a dumb regular parallel array of 1TB of NAND gates... it's ludicrous) so they've got whatever flexibility they feel like using.


Are you just leaving out the cost of building the fab...? The whole reason it was a joint venture in the first place was to soften the up-front investment for both companies. And that's ignoring the R&D dollars on 3D NAND stacking. If you think that's also easy, why were Micron and Intel over a year behind Samsung? It wasn't by choice.


First time I have had to really think about Exa<unit>.

     Giga / Tera / Peta / Exa
6 million terabytes of solid-state memory... quite a mass.


I know that NAND involves no exotic raw materials, so does that enable them to recycle any of the damaged/lost wafers? I don't know very much about the physical processing/preparation of the raw silicon and such that goes into making a wafer, could you simply grind up or perhaps chemically dissolve everything back to base components and re-create a fresh wafer?


At least some scrap is now being bought by solar cell industry, but that material is forever lost for IC making because it's already contaminated with dopants and metals


I wonder what failed in their redundant power supply, because they surely have something.

I hope the postmortem will be public!


> I hope the postmortem will be public!

So do I.

It turns out that backup power fails more often than one would hope.

Generators fail to come online, batteries underperform despite recent maintenance, switchgear fails, or the switchgear's safety mechanisms prevent a successful switch, etc.

Source: I work for a smallish ISP, and have heard lots of stories from the ISP community, and am always eager to read about outages when there's a public postmortem.


It could be worse. At least you don't need that backup power to keep a nuclear reactor from doing anything unseemly


I know, let's use the power from our own turbine as it spins down to power the emergency shutdown of our reactor! (see: Chernobyl)


> I wonder what failed in their redundant power supply because they surely have something

Based on comments on the site, it appears that even a very short power disruption can mess up semiconductor manufacturing.

If that is true, then a backup power system that involved detecting an outage and starting up generators might be too slow.

If based on generators, they'd either need to have the generators always running, or have a second redundant system based on batteries that can immediately take over during the time it takes to start the first redundant system.

Or they could run their stuff off batteries all the time, with the batteries charged from the grid. They will still need something that can very quickly switch to the grid in the case of their own battery powered inverters failing.

All of these are going to add complexity and cost that may drive up the effective cost of electricity enough that it may be cheaper in the long run to simply go with the grid, if they are in a place with a reliable enough grid.

Anyone know how reliable the grid is at their location?


> Based on comments on the site, it appears that even a very short power disruption can mess up semiconductor manufacturing.

> If that is true, then a backup power system that involved detecting an outage and starting up generators might be too slow.

I expect them to have anticipated this. I expect them to have applied a system that would have worked for them, had it worked as designed.

I would suspect the failure happened somewhere behind the redundant power supply (but then again, why aren't the individual production steps independently redundantly powered?).


If my reading comprehension has not let me down, then a 13-minute power disruption can cause them to lose half of their output for a quarter.

Given the massive consequences of quite a short disruption maybe they need to figure out how to weather disruptions more robustly?


Cycle times (the time it takes to process one wafer) can be in the range of a month. Any disruption therefore kills roughly a month (plus or minus) of output, at least for wafers in certain steps. It's brutal.

Fabs are engineered to have redundant power, but what's interesting is that the same thing happened to Samsung last year: https://www.anandtech.com/show/12535/power-outage-at-samsung...


Interesting!

That's my point though. If power outages hurt these fabs so severely why aren't their power supply systems more robust?

I know it's easy for me to say but I'm having a hard time wrapping my head around it.

Say, in another engineering space where an hour of power outage means roughly an hour of downtime, then maybe you'd not care so much.

But if, as you link here, a 30-minute power outage can "destroy 3.5% of the global NAND supply for March", wouldn't they make sure they have 0 minutes of power outage? Heck, that's nearly a national-security level of threat. Wouldn't the South Korean government install two (or three) sets of power lines from different parts of the grid, or a local power source (diesel generators and a small coal power plant)? Expensive? Sure. But so is 3.5% of the global NAND supply for a month.


Power is honestly really hard -- for example, if you read through this list of datacenter power failure post-mortems:

http://up2v.nl/2017/06/02/datacenter-complete-power-failures...

there are a lot of individual failure cases. DC operators learn from each failure, but there are a lot of ways things can go wrong. Fabs can be upwards of 50MW, which puts them in the range of a good-sized datacenter, so the challenges probably end up similar. (I'm saying the last part carefully - I'm much more familiar with datacenter power design than fab power design!)


This assumes all wafers in process at a given time are 100% waste after a power disruption. If true, that’s brutal indeed.


Even if it weren't 100% wastage, it might take time and effort to find the salvageable bits. So when they restart, they might need to restart with an empty pipeline. By the time the salvageable bits are recovered, it's conceivable the spots they could return to in the pipeline are filled, or merely filled enough that the risk to the just-starting production discourages the addition. And even what they salvage might thus end up displacing new production; so while that might save them money, it might not increase throughput.

And of course, it's just an estimate. Maybe the real damage will be different.

I have absolutely no idea what I'm talking about, by the way ;-) I'm just positing a hopefully plausible explanation for why the ratio of wastage might not affect the amount of "lost" production (in the sense of reduced from baseline), even if it helps reduce the amount "lost" in the sense of unrecoverably expended resources.

I doubt non-experts can do much better than believe their own projections, assuming nobody with a real background here comes up with a solid reason why not.


> Given the massive consequences of quite a short disruption maybe they need to figure out how to weather disruptions more robustly?

If you mean "they should not lose so much product when equipment loses power", that's just not possible. Modern semiconductor manufacturing involves hundreds of steps where the wafers need to soak in a chemical bath for a very specific time, and missing deadlines by a few seconds causes the entire wafer to fail.

The question is very much: "why did their UPS fail?".


That is indeed what I was implying.


I can't imagine that something so business-critical wouldn't have local power generation capacity to sustain them through < 1 day outages (with rapid failover mechanisms).

Either that mechanism failed somehow, or there's more to this outage (maybe spikes accompanying the outage or the repair damaged the production equipment).


My point is: if power supply issues hurt the business that severely, then how come they allow it to happen? I'm not saying I'm smarter than them; I know there must be a reason, I just can't figure it out.


Why aren't the individual processes independently powered, if the risk is this huge? Why common power?


Not quite understanding you, sorry.


It seems weird that a 13-minute outage can kill a month and a half of production.

I wonder if this is standard hi-tech factory process reliability.


I guess it makes more sense to destroy everything affected by a power loss (even if some of it could be perfectly fine, or salvageable) than risk shipping products that will fail at a higher rate. That would cost way more in lost trust and lost sales.


They could sell under a special brand.


Oh, I just remembered! There used to be no-name parts: motherboards, PCI cards, RAM. Literally no brand markings, and no warranty either.

I remember the RAM in particular: the chips had nothing etched on them, or a single 5-character line. Even the firmware was unbranded, with strange timings too (probably loosened because the chips would not work at standard specs).

I'm guessing it was Chinese companies buying up "bad" or excess stock and reselling it.

Haven't seen this in a while; either they tightened up regulations or the margins are too low to make a profit nowadays.


Not to mention it's likely that it's insured against.


Conspiracy: this is how the NSA buys their disk space.


That is an interesting idea but it would be so much easier for them to constantly buy, say 5%, of the output.


Well, I'm not typically a conspiracy guy, but if I had to really think about this, I'd say it's easier to have a loss event like this and a consequent write-off vs. some unexplainable long-term 5% buyer.

I also don’t think if I needed tons of storage like this I’d want to acquire it over a 20+ month period.

Obviously I think I’m kidding but the thought is interesting.


The NSA knows they need the storage; they didn't just suddenly figure out they needed to store a lot of data and perform a massive buy. It would be much easier to disguise a constant 5% of purchases through various techniques; the manufacturer could hide it in all sorts of ways. The NSA could also set up shell companies to consistently buy output (as was frequently done during the Cold War, when the USA needed to buy supplies from Soviet-aligned nations, or the USSR itself), or hide behind major buyers like Google and Amazon. And if the plant shut down for false reasons, literally thousands of plant workers would know it was false. That wouldn't accomplish anything.


At least a 12-exabyte upgrade.

Wow.

Maybe they switched to using Electron internally?!


The fragility of the supply chain.


The stock of these companies really took a hit. Sarcasm.


Every time I see Godzilla he’s ensnared in and tearing down power lines. This was bound to happen sooner or later.



