This sounds big enough to require a black start. Unfortunately, those are slow and difficult.
If an entire nation trips offline then every generator station disconnects itself from the grid and the grid itself snaps apart into islands. To bring it back you have to disconnect consumer loads and then re-energize a small set of plants that have dedicated black start capability. Thermal plants require energy to start up and renewables require external sources of inertia for frequency stabilization, so this usually requires turning on a small diesel generator that creates enough power to bootstrap a bigger generator and so on up until there's enough electricity to start the plant itself. With that back online the power from it can be used to re-energize other plants that lack black start capability in a chain until you have a series of isolated islands. Those islands then have to be synchronized and reconnected, whilst simultaneously bringing load online in large blocks.
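The bootstrap chain described here is essentially a greedy, dependency-ordered process: start whatever you can with the power you have, and each plant brought online enlarges the budget for the next. A toy sketch of that idea (plant names and figures are made up for illustration, not real grid data):

```python
# Hypothetical sketch of the black start "bootstrap chain": each plant needs
# some external power to start and produces much more once running.

def black_start(plants, seed_power):
    """Greedily start plants whose startup demand fits the available power."""
    available = seed_power          # e.g. the small diesel generator
    online = []
    pending = sorted(plants, key=lambda p: p["startup_mw"])
    progress = True
    while progress:
        progress = False
        for plant in list(pending):
            if plant["startup_mw"] <= available:
                available += plant["output_mw"]   # plant is now generating
                online.append(plant["name"])
                pending.remove(plant)
                progress = True
    return online, available

plants = [
    {"name": "hydro (black-start capable)", "startup_mw": 0.5, "output_mw": 50},
    {"name": "gas turbine",                 "startup_mw": 20,  "output_mw": 400},
    {"name": "coal unit",                   "startup_mw": 60,  "output_mw": 600},
]
online, mw = black_start(plants, seed_power=1.0)  # 1 MW diesel seed
print(online)  # plants come online in order of startup demand
```

The real procedure is of course dominated by the parts this toy ignores: frequency control, load blocks, and resynchronizing the resulting islands.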
The whole thing is planned for, but you can't really rehearse for it. During a black start the grid is highly unstable. If something goes wrong then it can trip out again during the restart, sending you back to the beginning. It's especially likely if the original blackout caused undetected equipment damage, or if it was caused by such damage.
In the UK contingency planning assumes a black start could take up to 72 hours, although if things go well it would be faster. It's one reason it's a good idea to always have some cash at home.
In another life I worked as an engineer commissioning oil rigs and I’ve seen how tricky even a small-scale black start can be. On a rig, we simulate total power loss and have to hand-crank a tiny air compressor just to start a small emergency generator, which then powers the compressors needed to fire up the big ~7MW main generators. It's a delicate chain reaction — and that's just for one isolated platform.
A full grid black start is orders of magnitude more complex. You’re not just reviving one machine — you’re trying to bring back entire islands of infrastructure, synchronize them perfectly, and pray nothing trips out along the way. Watching a rig wake up is impressive. Restarting a whole country’s grid is heroic.
I remember talking to my ex's dad about his job, which involved planning refuels of a large nuclear-powered generation station in the Lower Midwest.
The words "it's a miracle it works at all" routinely popped up in those conversations, which is... something you don't want to hear about any sort of power generation - especially not nuclear - but it's true. It's a system basically built to produce "common accidents". It's amazing that it doesn't on a regular basis.
> The words "it's a miracle it works at all" routinely popped up in those conversations, which is... something you don't want to hear about any sort of power generation - especially not nuclear - but it's true.
Funny thing is, those are the exact words I use when talking to people about networking. And realistically anytime I dig deep into the underlying details of any big enough system I walk away with that impression. At scale, I think any system is less “controlled and planned precision” and more “harnessed chaos with a lot of resiliency to the unpredictability of that chaos”
This is one of the key insights in my early SRE career that changed how I viewed software engineering at scale.
Components aren’t reliable. The whole thing might be duct tape and popsicle sticks. But the trick for SRE work is to create stability from unreliable components by isolating and routing around failures.
It’s part of what made chaos engineering so effective. From randomly slowing down disk/network speed to unplugging server racks to making entire datacenters go dark - you introduce all sorts of crazy failure modes to deliberately break things and make sure the system remains metastable.
Everything is chaos, seek not to control it or you will lose your mind.
Seek only to understand it well enough to harness the chaos for more subtle useful purpose, for from chaos comes all the beauty and life in the universe.
We would instead have HaaS, with monthly subscriptions for a license to use the house. Which can be randomly revoked at any moment if the company doesn't feel like supporting it is profitable enough, or if an AI thinks your electricity usage is suspicious and permabans you from using a home in the entire town.
A bit of a tangent, but I don't think this is it. There are plenty of species with plenty of shared norms, expectations, and trust - but no civilization. And, vice versa, many of the greatest societies have been riddled with completely incompatible worldviews yet created amazing civilizations. Consider that Sparta and Athens were separated by only 130 miles, yet couldn't possibly have been further apart!
The reason people work together is fundamentally the same reason you go to work - self interest. You're rarely there because you genuinely believe in the mission or product - mostly you just want to get paid and then go do your own thing. And that's basically the gears of society in a nutshell. But you need the intelligence to understand the bigger picture of things.
For instance Chimps have intricate little societies that at their peak have reached upwards of 200 chimps. They even wage war over them and in efforts to expand them or control their territory. This [1] war revolutionized our understanding of primate behavior, which had been excessively idealized beforehand. But they lack the intelligence to understand how to bring their little societies up in scale.
They understand full well how to kill the other tribe and "integrate" their females, but they never think to e.g. enslave the males, let alone higher order forms of expansion with vassalage, negotiated treaties, and so on. All of which over time trend towards where we are today, where it turns out giving somebody a little slice of your pie and letting him otherwise roam free is way more effective than just trying to dominate him.
> There are plenty of species with plenty of shared norms, expectations, and trust
Citation needed on that one.
> Consider that Sparta and Athens were separated by only 130 miles, yet couldn't possibly have been further apart!
They spoke the same language, shared the same literature, practiced the same religion, had a long history of diplomatic ties. When the Persians razed Athens, they took refuge with the Spartans.
> For instance Chimps have intricate little societies that at their peak have reached upwards of 200 chimps.
Again, I don't think this claim stands up to evidence. The so-called chimp war you mention involved a group of about a dozen and a huge fight that broke out among them. That doesn't support the idea that they are capable of 200-strong 'intricate' groupings.
"They spoke the same language" ... not exactly: the Spartans spoke Doric, while the Athenians spoke Attic. (Interestingly, there are a few Doric speakers left [0].) While those languages were related, their mutual intelligibility was limited. Instead of "Greek" as a single language, you need to treat it as a family of languages, like "Slavic".
"shared the same literature" ... famously, the Spartans weren't much into culture and art, and they left barely any written records of their own. Even the contemporaries commented on just how boring Sparta was in all regards.
If we delve deeper into ideas about what a good citizen looked like, or how law worked, the differences between Sparta and Athens are significant, if not outright massive.
While those two cities weren't entirely alien to each other, had some ties, the same gods, and occasionally fought on the same side in a big war, there was indeed a huge political and cultural distance between them. I would compare it to Poland vs. Russia.
Not "entirely alien, had some ties" is not it. They were part of the same cultural cluster, participated in the same games, traveled to the same sanctuaries, had mutual proxenies. The very fact that we know the opinions of several Athenians about Spartans is telling. We don't know what they thought of inhabitants of Celtic population centers, or Assyrian cities, or Egyptian ones. But we know what they thought of individual Spartans that they mention by name, biographical detail and genealogy.
I stand by my comparison to the Slavic nations of today.
Yeah, we have a lot of opinions of one another, yes we understand basic vocabulary of our cousins, though details in fine speech are another matter, yes, we are technically Christian, but still the political and societal difference between, say, Czechs and Russians is quite big.
As was the difference between the Spartans and the Athenians. Constitutionally, the poleis were all over the map, from outright tyrannies, through oligarchies and theocracies, to somewhat democratic states.
So your argument is: Athens and Sparta had things in common but were different. Like Czechia and Russia. Czechia and Russia are quite different. So were Athens and Sparta?
Try to speak holistically. I have no idea what you're trying to argue. I could expand or provide evidence for everything I said, but providing a citation or proving that there are indeed social groups of upwards of 200 chimps, or whatever, isn't going to do much, because you're not really formulating any argument or contrary view yourself.
Put another way, you're arguing against an example and not a fundamental premise. Proving the example is correct doesn't really get us anywhere since presumably you disagree with the fundamental premise.
That sounds very much like "Just believe me." or even more "The rules were that you guys weren’t gonna fact-check"
> I have no idea what you're trying to argue.
Presumably you know what you are trying to argue. That is what the questions were about.
> Proving the example is correct doesn't really get us anywhere
You would have solid foundations to build your premise from. That is what it would get us.
First we check the bricks (the individual facts), then we check if they were correctly built into a wall (do the arguments add up? are the conclusions supported by the reasoning and the facts?). And then we marvel at the beautiful edifice you have built from it (the premise). Going the other way around is ass-backwards.
> you're not really formulating any argument or contrary view yourself.
I don't know what viewpoint namaria has. I know that "Sparta and Athens [..] couldn't possibly have been further apart" is ahistorical. They were very similar in many regards. If you think they were that different you have watched too many modern retellings, instead of reading actual history books. That's my contrary view.
> For instance Chimps have intricate little societies that at their peak have reached upwards of 200 chimps.
Here the question is what we believe a "society" to be. The researchers indeed documented hundreds of chimps visiting the same human-made feeding station. Is that a society now? I don't think so, but maybe you think otherwise. What makes the chimps' behaviour a society, as opposed to just a bunch of chimps in the same place?
The preppers can only buy themselves a small amount of time, though—no more than a year or two. Eventually, their stockpiled supplies will run out, or some piece of equipment will need a replacement part.
I'd much rather focus on "prepping" by building social resiliency, instead. The local community I'm plugged into is much stronger together than anything I could possibly build individually.
I am an ex-scientist and an engineer, and I had a look at the books of my son, who studies finance at the best finance school in the world (I am saying this to highlight that he will be one of the perpetrators, possibly with influence, of this mess).
The things in there are crazy. There are whole blocks that are obvious but made to sound complicated. I spent some time on a graph just to realize that it was ultimately about solving a set of two linear equations (middle school level).
Some pieces were not comprehensible because they did not make sense.
And then bam! A random differential equation, explained as if it were the answer to the universe. With an incorrect interpretation.
And then there are statistics that would make "sociology science" blush. Yes, they are so bad that even the, ahem, experts who do stats in sociology would be ashamed (no hate for sociology, everyone needs to eat; it is just that I was several times a reviewer of theses there and I have trauma from it).
The fact that finance works at all is because we sit in some kind of magical "local minimum of finance energy", from which the Trumps of this world have somehow (fingers crossed) not yet managed to dislodge us by disrupting the world too much.
I did a lot of work for a major airline earlier in my career and came away with the same impression. I just couldn’t see how they kept planes in the air based on my experiences throughout the organization. I think in a big enough org the sheer momentum keeps things moving despite all the fires happening constantly.
"Funny thing is, those are the exact words I use when talking to people about networking"
Computer networking is not the same. Our networks will not explode. I will grant you that they can be shite if not designed properly, but then they run slowly or not at all; they will not combust or explode.
If you get the basics right for ethernet then it works rather well as a massive network. You could describe it as an internetwork.
Basically, keep your layer 2 domains to around 200-odd devices per VLAN maximum - that works fine for IPv4. You might have to tune MAC tables for IPv6 for obvious reasons.
Your fancier switches will have some funky memory for tables of one-address-to-another translation, e.g. MAC to IP per VLAN and the like. That memory will be shared with other databases too, perhaps iSCSI, so you have to decide how to manage that lot.
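The "tune MAC tables for IPv6" point comes down to simple arithmetic: each IPv6 host typically holds several addresses (link-local, global, often a temporary privacy address), so neighbor-table entries grow faster than MAC entries. A rough sizing sketch, with the per-host address count as an illustrative assumption:

```python
# Back-of-the-envelope table sizing for one VLAN. The address count per
# host is an assumption (link-local + global + temporary privacy address);
# real hosts can hold more.
hosts_per_vlan = 200
ipv6_addrs_per_host = 3

mac_entries = hosts_per_vlan                       # one MAC entry per host
nd_entries = hosts_per_vlan * ipv6_addrs_per_host  # one neighbor entry per address

print(mac_entries, nd_entries)  # 200 600
```

So even at the same host count, the IPv6 neighbor table can be several times the size of the MAC table, which is why the shared table memory needs carving up differently.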
You tried to nerdsnipe someone without mentioning L2 is effectively dead within datacenters since VXLAN became hardware accelerated in both Broadcom and "NVIDIA"(Mellanox) gear. And for those that don't need/care about L2 they don't even bother and run L3 all the way.
EVPN uses BGP to advertise MAC addresses in VXLAN networks which solves looping without magic packets, scales better and is easier to introspect.
And we didn't even get into the provider side which has been using MPLS for decades.
A problem with high-bandwidth networking over fiber is that light refracts within the fiber, so some of it takes a longer path than the rest; if the window is too short and there is too much dispersion, you will drop packets.
So hopefully someone doesn't bend your 100G fiber too much; if that isn't finicky, idk what is. DAC cables with twinax solve it short-range for cheaper, however.
I built control computers for nuclear reactors. Those machines are not connected to a network and are guarded by multiple stages of men with machine guns. The system was designed to flawlessly run three boards, each with triple-modular-redundant processors in FPGA fabric, all nine processors instruction-synced with ECC down to the registers (including cycling the three areas of programmable fabric on the FPGAs). They cycle and test each board every month.
Well, the news says that doge randos are potentially exfiltrating the details of systems like that as well as financial details of many Americans, including those who hold machine guns and probably suffer from substandard pay and bad economic prospects/job security as much as anyone else does.
Perhaps the safest assumption is that system reliability ultimately depends on quite a lot of factors that are not purely about careful engineering.
A bit off topic, but my uncle used to be security at a nuclear plant. Each year the Delta Force (his words) would conduct a surprise pentest. He said that although they were always tipped off, they never stopped them.
I guess the biggest security advantage of any of these old critical systems is fact that they are not connected to the internet. At least I hope they are not.
The regulations around parts sourcing, required maintenance, and training have more to do with how well/safely modern aviation works than anything else. If those aren’t done properly, all sorts of weird things start happening. Pretty much the only reason aerospace safety records aren’t worse in third-world countries is how obviously bad the consequences become, and how quickly - and even then….
I love the "analog" handcranked air compressor to 7MW generator escalation, it really captures human ingenuity.
I wonder however how being part of the "continental Europe synchronous grid" affects this, and how it isolates to Portugal and Spain like this.
But yeah, there are a lot of capacitors that want juice on startup, which happily kills any attempt to restore power. My father had "a lot" of PA speakers at home, and when we tripped the 3680 W breaker (16 A, 220 V) we had to switch off some gear to get it back up again. I'm also very sure we had 230 V: I lived close to the company I worked for, where we ran small-scale DC operations, so I could monitor input voltage and frequency over SNMP, giving me "perfect amateur" monitoring of our local grid. Just for fun I set up notifications for when the frequency dropped by more than 0.1 Hz, and it happened, but rarely. Hardly ever above, though, since that's calibrated over time, much like Google handles NTP leap seconds.
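The alert described above is simple threshold logic once you have readings. A minimal sketch (the sample values are made up; a real setup would poll them over SNMP from a UPS or meter, as the commenter did):

```python
# Flag samples where mains frequency deviates from nominal by more than
# 0.1 Hz. Readings are illustrative, not real grid data.
NOMINAL_HZ = 50.0
THRESHOLD_HZ = 0.1

def check(samples):
    """Return the readings that should trigger a notification."""
    return [f for f in samples if abs(f - NOMINAL_HZ) > THRESHOLD_HZ]

readings = [50.02, 49.97, 49.85, 50.01, 50.12]
alerts = check(readings)
print(alerts)  # [49.85, 50.12]
```

In a healthy interconnected grid the deviations stay well inside that band, which is why such alerts fire rarely.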
I saw some ancient footage of an Me-109 fighter engine being started. A tech jumped on the wing and inserted a hand crank into a slot on the side. He threw all his might into turning it, and then after a delay the propeller started turning and coughed into life.
I realized the tech must have been winding up a flywheel, and then the pilot engaged a clutch to dump the flywheel's inertia into the engine.
The engineer in me loves the simplicity and low tech approach - a ground cart isn't needed nor is a battery charger (and batteries don't work in the cold). Perfect for a battlefield airplane.
---
I saw an exhibit of an Me-262 jet fighter engine. Looking closely at the nacelle, which was cut away a bit, I noticed it enclosed a tiny piston engine. I inferred that engine was used to start the jet engine turning. It even had a pull-start handle on it! Again, no ground cart needed.
---
I was reading about the MiG-15. American fighters used a pump to supply pressurized oxygen to the pilot. The MiG-15 just used a pressurized tank of air. It provided only for a limited time at altitude, but since the MiG-15 drank fuel like a drunkard, that was enough time anyway. Of course, if the ground crew forgot to pressurize it, the pilot was in trouble.
You are correct, the official moniker is Bf-109, but the Allies referred to it as the Me-109.
BTW, since we are Birds of a Feather, I bet you'd like the movie "The Blue Max". It's really hard to find on bluray, but worth it! The flying sequences are first rate, and no cgi.
Blackstart assumes *no* power is available, period. Nothing but human muscle power. Thus the first stage is always either a human pulling a starter cord or the like, or a human building up energy in some fashion that is then dumped into the system to produce a bigger surge than is possible by direct muscle power.
And, despite the news reports, this is not a true blackstart. Some power survived.
> have to hand-crank a tiny air compressor just to start a small emergency generator
Similarly, the US Navy maintains banks of pressurized air flasks to air-start emergency diesels, with total capacity being some multiple of the required single-start capacity.
Random fact: Those starters are a plot point in the 1965 film The Flight of the Phoenix, where the protagonists are trying to start a plane that’s stranded in the Sahara, but only have a small supply of starter cartridges left.
I lived for a while on a sailboat equipped with an ancient Saab tractor engine (8 whole horsepower!). Was designed for cartridge starts in cold weather, though someone had fitted an electric starter by the time I saw it
That would be charging up the spring to throw the breaker. High voltage breakers need to switch on (or off) very quickly, to avoid damage from arcing. It's common for them to have some kind of spring or gas piston arrangement that you pump up first to give them enough energy to do that quickly.
No, he's winding up a spring to close the circuit breaker quicker than a human hand could, which reduces/prevents and arc from forming as the electrical contacts close.
If you were the right age when it came out in theaters in '93 (roughly between 11-15), Jurassic Park was a huge deal. Titanic was another of those in that era (although mainly to certain females).
I can appreciate the ability to revert to hand-cranking an air compressor, yet I can't help but feel that in 99.99% of events you'd be better served by keeping a two-stroke gas engine ready to go. Air compressors tend to have parts just as vulnerable to environmental factors, or more so, and you get a lot more power for less elbow grease out of a two-stroke.
In 99.99% of real-world scenarios, the rig would have other options to bootstrap a black start—like fully charged air tanks, backup power from a support vessel, or even emergency battery systems. The hand-cranked air compressor is really a last resort tool. We test it during commissioning to prove it could work, but in most cases, it’s never used again in the rig’s working life. It’s there for the rarest situations—like if a rig was abandoned during a hurricane, drifted off station, and someone somehow ended up back onboard without normal support. It’s a true "everything else failed" kind of backup.
Nice to see that at least in some places people are actually thinking about almost-impossible scenarios and taking them into account. I have the feeling that this is quite unlike most infrastructure development nowadays, unfortunately.
The key is the responsible party's skin in that particular game. A drilling rig is a very large, very expensive, and very lucrative man-made island. The backed-up backups have backups. Not only can it be very far away from any support vessel capable of bringing it back online, but every minute not in production is money thrown overboard.
Very true, although I think that economic arguments can apply to most infrastructure. What are the actual costs of a day-long nationwide blackout? I have no actual idea, but I'd not be surprised if they exceed 1 billion {EUR|USD}.
The part you are missing is ‘paid by whom’. Unlikely the power companies or regulator is going to be paying that amount here. It’s all the poor saps who didn’t have sufficient backup capacity.
There will be costs/losses by the various power companies which weren’t generating during all this of course, but also fixing this is by definition outside of their control (the grid operators are the ones responsible).
I’m sure public backlash will cause some changes of course. But the same situation in Texas didn’t result in the meaningful changes one would expect.
That’s because there is no effective regulation of the state’s power industry. Since they’re (mostly) isolated from the national grid, they aren’t required to listen to FERC, who told them repeatedly that they should winterize their power plants. And at the state level, the regulators are all chosen by the Governor, who receives huge contributions from the energy industry, so he’s in no rush to force them to pay for improvements.
The real irony was the following summer during a heatwave, when they also experienced blackouts. Texas energy: not designed for extreme cold, not designed for extreme heat. Genius!
Same thing happened in south Texas last year. Years of deferred maintenance on transmission lines resulted in almost two weeks of power outages from two major storms that could largely have been avoided. The utility provider is mostly allowed to regulate itself (while donating to the campaigns of the dominant political party), and allowed to keep excess profits/return dividends to shareholders rather than re-invest in infrastructure. There is very little regulatory structure or checks in place to ensure the grid is being maintained. And there have essentially been no consequences, other than an apology and excuses, along with an attempt to raise delivery rates even higher. As a homeowner, it’s on me to bear the additional cost of a backup generator, because I can’t rely on the state to regulate the utility to provide the service I’m forced to pay them for.
Based on how difficult it can be to start my chain-saw, snow-blower, and motorcycle after they've sat without being run for a while, I'd not recommend a gasoline-powered engine to be the only thing on stand-by.
Air compressors in adverse environments don't hold up that well either without basic maintenance. I've had engines run seasonally for decades. It doesn't take much to keep them working well, though doing nothing at all is an easy way to clog up the carburetor.
Compressor pistons/screws that ingest grit/dirt, or aren’t run often enough to boil the water out of them, also tend to not last long. I used to help run a volunteer workshop with an Atlas Copco screw compressor, and it died in a few years because it wasn’t being run hard enough and the screws rusted (doh!).
It shouldn't be that bad. A little fogging oil when put away and drain all the fuel. Then a little starting fluid on the first couple start attempts. Usually they start fairly quickly if they're in decent shape. And that's just for pull starts. My electric start mower starts right up after even 5 months of not running with stabilizer in the fuel.
As an ex small engine mechanic, I'd advise against using a 2 stroke for something like that. A 4 stroke would be a better bet. Better yet would be a natural gas/propane 4 stroke, since gasoline goes stale and plugs carburetors.
Small diesels could be an option but they're harder to pull start for a given size.
> Small diesels could be an option but they're harder to pull start for a given size.
I once needed to jump-start a small marine diesel, many miles from land...
There was a small lever that cut compression. You have to get it spinning really fast before restoring compression! It's definitely a lot of work!
EDIT - Here is a cheap modern small marine diesel [1]. The operation manual suggests that you don't have to do anything to get it spinning quickly, you just have to crank it 10 times, put away the crank handle, and then flip the compression switch. That's progress!
Lister diesel generators are much the same - half a dozen cranks, restore compression and off they go. The hand cranking can easily break your arm if you get it wrong though.
Even gas engine pull starts have a compression release function built in. That's why you need special cylinder pressure tools to check compression on most pull starts.
I did that too, and the crank got stuck on the flywheel. To stop the engine I had to climb over it to where the now-removed stairs had been, since my mate was clueless. Fortunately the crank handle stayed on.
Cranks and decompression levers have been gone for at least 30-40 years now, though.
Not being at all qualified to comment (though I work for a power company), I'd think the hand crank air compressor wouldn't suffer from no spark or bad gas.
If stale gas is a concern, then all of the other steps in-between zero power and your full start are also screwed.
Air compressors have more valves and gaskets that are vulnerable to oxidation, especially in salty environments, so I'd have thought that of the two, the two-stroke would be easier to keep up.
But it's an emergency system, not a general operation system. Thus it's not going to be exposed to the salty environment most of the time. You could certainly put the whole thing in an airtight box.
Look at how the military builds surface-based missiles these days: it's in a factory-sealed box. Molten salt batteries so they last for decades. (You don't see molten salt batteries in most applications because once triggered, their lifespan is minutes or even less. They're used in applications that only need to deliver power once.)
Diesel will run on mostly anything if it’s running… including methane in the air intake, so you need to think quickly when presented with a generator that keeps running after cutting the fuel
Oil leaking around a turbocharger rotor seal also makes for good diesel fuel, if you define "good" as an exciting uncontrolled disassembly of the engine.
Crude oil from various wells has properties varying from ‘thick, stinky, corrosive goo’ to ‘explosive, barely liquid, bubbly mess’. Also, rigs need to be careful about ignition sources, as methane leaks can be a common emergency condition for some wells/crude.
It’s not the kind of thing that’s economically feasible to use directly, even in emergency situations.
Batteries are great when they have charge. What happens if the generator doesn't want to start the first, second, and third time? How many start attempts do you get before the batteries are dead?
The hand-pumped air compressor is the tool of last resort. You can try an engine start if there's someone there who's able to pump it. You don't have to worry about how much charge is left in your batteries or whether or not the gasoline for the 2-stroke pump engine has gone stale. It's the tool that you use as an alternative to "well, the batteries are dead too, guess we're not going to start the engine tonight... let's call the helicopters and abandon ship"
The data center where I work has large diesel generators for power cuts. They are electric (battery) start. There is no capability to start them manually. The batteries are on maintenance chargers that keep them in good condition. The generators are started and tested every two weeks.
Could the batteries be dead and the generators not start? I guess but it's very unlikely. I get that on an oil rig it might be a matter of life and death and you need some kind of manual way to bootstrap but there's not much that's more reliable than a 12V lead-acid battery and a diesel engine in good condition.
Also, the data center is probably in a city, surrounded by infrastructure that could be used if necessary. An oil rig is in the middle of an ocean, and has to rely on itself.
Lead acid batteries are not exactly what I would call reliable. They require a lot of constant maintenance to ensure that they will work when you need them and they can easily degrade in such a way that they maintain voltage and appear to be good but then fail to deliver the needed amps when you demand them. This is made much worse in cold weather. Finally, if allowed to freeze when they are moderately drained, then the accumulated water inside will freeze and drastically shorten their life span.
I think I'd take Lithium Ion batteries over lead acid for almost every conceivable use-case. They are superior in almost every way. Lighter, less likely to leak acid everywhere, better long term storage (due to a low self-discharge) and better cold weather discharge performance. The only drawback would be a slightly increased risk of fire with Lithium.
I worked with a telecom provider's data center that ended up having a quad-redundant diesel generator failure during the first cold snap that took the Texas grid offline a few years back. Three of the fuel supplies gelled and the generators then failed to start. The fourth, as I remember, just didn't try to fire.
It's unrealistic, and if one power station is unable to use their batteries to start their emergency generator (through the absurd incompetence you describe, or more likely through a major fire, flood or assault) the grid can be started from a different one.
Black out on a rig or ship is very different to black start of a national electricity grid.
Most vessels will experience a blackout periodically; the emergency generator starts fine, normally on electric or stored-air start, and then the main generators come up fine. It's really not delicate, complex or tricky - some vessels have blackouts happen very often, and those that don't will test it periodically. There will also be a procedure to do it manually should automation fail.
There are air starters on some emergency generators that need hand pumping. These will also get tested periodically.
The most complex situation during black out restoration would be manual synchronisation of generators but this is nothing compared to a black start.
The point isn't to make a system that is easy. The point is to make a system that is guaranteed to work in any remotely realistic circumstance.
In a real black start, the guys might very well grab a portable generator and just use that instead. But having the option to hand crank something rather than rely on batteries that might run flat is good.
And if the entire thing depends on it, you'll give that generator a hand crank as a backup too, instead of assuming that the batteries dying or getting flooded or whatever is entirely impossible.
Bringing islands together requires synchronizing both frequency and phase. That is super difficult for large generators and transmission lines; transient heat dissipation can be a real bummer.
How hard is getting each island within 0.1 Hz of correct? The full grid doesn't have much trouble, but I don't know how much cutting things down impacts that.
And then the phase will align itself a couple of times a minute, so what's difficult about that part?
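To put rough numbers on this (illustrative only; real synchrocheck relays have their own settings), the frequency error between two islands sets both how often their phases line up and how briefly they stay aligned:

```python
import math

def sync_windows(delta_f_hz: float, max_angle_deg: float = 10.0):
    """For two islands whose frequencies differ by delta_f_hz, return
    how often the phase difference passes through zero (the beat
    period) and how long it stays within +/- max_angle_deg each pass."""
    beat_period = 1.0 / abs(delta_f_hz)  # seconds between alignments
    # The phase difference advances 360 * delta_f degrees per second,
    # so each alignment gives a window of 2 * max_angle / (360 * delta_f).
    window = 2.0 * max_angle_deg / (360.0 * abs(delta_f_hz))
    return beat_period, window

# Islands held within 0.1 Hz of each other, +/- 10 degree closing band:
beat, window = sync_windows(0.1)
print(f"phases align every {beat:.0f} s, closing window ~{window:.2f} s")
# prints: phases align every 10 s, closing window ~0.56 s
```

So at 0.1 Hz error you do get an alignment every ten seconds, as suggested, but the breaker has to be closed within roughly half a second of it, which is why it's done by synchrocheck relays rather than by eye.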
The compressor pressurizes an air tank. When the pressure in the tank is nice and high, use the compressed air to turn a turbine connected to the crankshaft of the engine.
You can also directly feed the compressed air into a cylinder (or even the intake manifold!) to force the engine to turn. No extra turbine required, though the plumbing might get a little odd. [https://en.m.wikipedia.org/wiki/Air-start_system]
That tends to be for very large engines, where the extra plumbing isn’t a problem.
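A back-of-the-envelope check (all figures invented for illustration) of why a modest air receiver can crank a large diesel: the isothermal expansion work stored in a tank, W = P·V·ln(P/P0), is an optimistic upper bound, but it shows ample margin over the tens of kilojoules that a few seconds of cranking typically needs:

```python
import math

def air_tank_energy_kj(volume_l: float, pressure_bar: float,
                       ambient_bar: float = 1.0) -> float:
    """Optimistic isothermal upper bound on the work extractable from
    a compressed-air receiver: W = P1 * V * ln(P1 / P0). Real starters
    waste much of this, so treat the result as a ceiling."""
    p1_pa = pressure_bar * 1e5       # bar -> Pa
    v_m3 = volume_l / 1000.0         # litres -> m^3
    return p1_pa * v_m3 * math.log(pressure_bar / ambient_bar) / 1000.0

# A hypothetical 50 L receiver charged to 30 bar:
print(f"{air_tank_energy_kj(50, 30):.0f} kJ")  # prints: 510 kJ
```

Even with large real-world losses, a tank you can pressurize by hand over a few minutes stores enough energy for a starting attempt, which is the whole point of the scheme.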
This technology of starting a diesel engine with a compressed-air-driven turbine was used in the Soviet T-34 tank during WWII. While the Germans could not start their tanks from frozen batteries in the cold of winter 1941, the Soviets used compressed air (which could be replenished by hand) to start T-34s just fine.
The fewer resources we dedicate to grid resilience and modernization, the harder black starts become. And as grids get more complex and interdependent, recovering from total failure becomes exponentially harder.
A rare but sobering opportunity to reflect on something we usually take for granted: electricity.
We live in societies where everything depends on the grid — from logistics and healthcare to communications and financial systems. And yet, public awareness of the infrastructure behind it is shockingly low. We tend to notice the power grid only when it breaks.
We’ve neglected it for decades. In many regions, burying power lines is dismissed as “too expensive.” But compare that cost to the consequences of grid collapse in extreme weather, cyberattacks, or even solar storms — the stakes are existential. High-impact, low-frequency events are easy to ignore until they’re not.
Just to highlight this: the last significant power outage in Western/middle Europe was 2003. [1]
That's 20 years without any significant problems in the grid, apart from small localized outages.
It's not hard to start taking things for granted if it works perfectly for 20 years.
Many people don't even have cash anymore, either in their wallet or at home. In case of a longer power outage a significant part of the population might not even be able to buy food for days.
> Many people don't even have cash anymore, either in their wallet or at home.
Even if you have cash many shops would not sell anything in case of a mass outage because registers are just clients which depend on a cloud to register a transaction. Not reliable but cheap when it works.
Many supermarket chains (in the West at least) have satellite links at their major locations because they can't afford to close a store just because the local ISP had an issue.
The real question is how long can some of the smaller banks' datacenters stay up.
Firstly and most importantly, a cash register needs a power outlet. It is highly contestable that every single Western supermarket out there has a diesel generator down in the back / storage room that will kick in in an instant if a power outage begins.
Let's also not forget the CrowdStrike drama, where many supermarkets simply went dark, in some instances for nearly 24 hours, despite working communication links. But I digress.
CrowdStrike was an interesting one. Just as it was going down I went out to the supermarket and found half of the self-checkouts had bluescreened. Then a few hours later they were all back and functioning again. The supermarket had remote management at a level below the OS that could restore the whole country's self-checkouts rapidly.
I would not be surprised if they simply booted an image from the network. It would significantly simplify maintenance, as for any change you'd just need to update a single image and push it downstream to an in-store management server. The individual terminals essentially become disposable.
> It is highly contestable that every single Western supermarket out there has a diesel generator down in the back / storage room that will kick in in an instant if a power outage begins.
Literally true. However:
- If it takes them 10 minutes to fire up the generator, then 5 minutes to restart the network and registers, that is no big issue (in a many-hour outage)
- At least in my part of the USA, many supermarkets do have generators - because storm damage causes local outages relatively often, and they'd lose a lot of money if they couldn't keep their freezers and refrigerators powered. Since the power requirements of the lighting and registers are just (compared to the cooling equipment) a rounding error, those are also on generators.
Plus, there are backup-power lorries and refrigerated trailers. If your shop doesn't have enough backup power for the duration, you might see several of these pull into the car park all at once. If not all of the chillers can be powered, the shop's staff will schlep stuff to the refrigerated lorries.
Seen it done in the USA, for a Target next to a Kroger grocer. Kroger lost everything that needed cold after its reserve power either ran out or wouldn't start, but Target had a contingency contract and lost no product.
Well, in my experience it was the case for the two largest supermarket chains. We lost electricity at 12:30pm and only got it back during the night at 3am.
But both major supermarkets nearby worked on diesel generators and payment by card worked flawlessly. I guess they had a satellite connection.
It might have been more complicated in small villages, but people living in rural areas usually still use a lot of cash.
In my local area of Spain/Portugal, 2/3 of supermarkets and 2/3 of gas stations had generators up and running within a couple of hours. We're pretty rural, though; I don't know whether urban areas fared as well.
We're outside Mataro, had to make a trip into Barcelona yesterday. I'd say most gas stations north of Barcelona/Maresme area were 100% offline, some (we found only two, from 6 visited) gas stations still had operational pumps but huge queues and cash only. None of the TPVs seemed to work anywhere in the afternoon yesterday here, even the battery powered/mobile network ones.
That's true. I went shopping ca. 4-5 hours after the blackout started and had no issues; even card transactions worked. Whoever designed the retailer's systems clearly had this scenario in mind. Even the "self-service" computer kiosks all worked.
In a multi-day event like we are talking about here, couldn't a shop owner revert to a paper ledger? I mean, it would suck and transactions would take much longer but if the alternative is people starving or having your inventory looted by a desperate mob, a nineteenth century solution seems preferable.
They can and do. They will also make deep price cuts on refrigerated and frozen goods, or give them away for free, because those will just get tossed otherwise, and neighborhood goodwill is still a thing in some places.
There was a 4-day power outage here (Seattle suburbs) last fall, and one of the auto parts store made an effort to serve customers even though they didn't have any power. I paid cash, and I forget whether or how they did credit card transactions (possibly by writing down CC numbers on paper.) They made a lot of phone calls to a different store to get prices for items.
I'm located in Barcelona, and yesterday a lot of transactions at mini markets / pharmacies were not possible because the item prices were unknown, adding to the fact that there were no phone lines available to reach out on.
Portugal has mandatory electronic receipts. By this I don't mean email receipts, I mean that all receipts have a code on them that is then also available to be looked up on the government's side (e-Fatura is the app for this) for tax reasons. I think it's fairly simple though, just registers the total amount, the seller, and how much VAT was paid.
However, I assume this can work offline with the data being uploaded later, as basically all the small supermarkets and shops were still open here (_incredibly_ chaotic though), and in the big supermarkets card payments were working (TBF, even the free wifi was working there; I guess they probably have some satellite connection).
> Many people don't even have cash anymore, either in their wallet or at home. In case of a longer power outage a significant part of the population might not even be able to buy food for days.
So, what's really interesting is that these sorts of social collapses have happened. In fact, they often happen when natural disasters strike.
When they do happen, mutual aid networks just sort of naturally spring up and capitalism ends up taking a backseat. All of a sudden, worrying about the profits of Walmart is far less important than making sure those around you don't starve.
As it turns out, most people, even managers of stores, aren't so heartless as to let huge portions of the population starve. Everyone expects "mad max" but that scenario simply hasn't played out in any natural disaster. In fact, it mostly only ends up being like that when central authority arrives and starts to try and bring "order" back.
You can read about this behavior in "A Paradise Built in Hell" [1].
Well if the situation happens that people can't buy food, things will easily become nasty quickly. People will break open stores and use violence to get what they need. So cash money won't really help a lot in a serious situation.
The "civism" was well noted in the Spanish case; as far as I know, the whole country got through this incident without a single security issue. During the whole thing, people were extremely kind and polite. It was kind of "funny" to see people sitting in dark bars, clinging to their normal lives even in an emergency.
I often wonder if we should leave energy/telecommunications in a state where they can and do fail with some degree of frequency that reminds us to have a back up plan that works.
I had thought that the (relatively) recent lockdowns had taught us how fragile our systems are, and that people need a local cache of shelf stable foods, currency, and community (who else discovered that they had neighbours during that time!)
For something like this, a local electricity generation system (solar panels, wind/water turbines, or even an ICE generator) would go a long way to ensuring people continued to have electricity for important things (freezers).
There are now ubiquitous wireless POS terminals for card payments that can be recharged from emergency sources of electricity (like cars). As long as the mobile internet works it's possible. Of course this only slightly alleviates the disruption.
Surprisingly mobile networks seemed to stay up in Portugal. I'm not sure to what extent and if they lasted for the whole duration of the blackout.
They definitely limited consumer use though.
Most mobile base stations have a limited backup battery and some have generators on site. I'd expect telephone infrastructure to have 24-48 hours of backup in the USA and I don't know why Europe would be much different.
Population density is pretty high in many European inner cities. Most of the cell sites around here are on top of apartment buildings and I doubt they have a genset. Here in central BCN the mobile network was completely offline within an hour or two of the power going out.
Surely there are still card POS systems that can buffer transactions? Sure, you lose some part of the system, like payment authorizations, but the potential loss of money is lower than closing the shop.
The 1000EUR limit doesn't apply between private individuals; for businesses, you will find many other European countries also don't take large cash payments, for security or convenience reasons. E.g. you can't buy a car from a dealership for cash in "cash is king" Germany either. They expect a wire in almost all cases.
I don't know, but I find it very practical not to carry piles of cash in my pocket or at home, and to know that we're less likely to get robbed just because of the cash we have.
I don't know how true the relationship between the cashless lifestyle and safety actually is, but it works and I feel ok; I'm not sure that the prospect of a few hours of national blackout once in 20 years will make me change my mind significantly.
Today I was able to walk into a grocery store, pay for food, and go home to have a warm lunch (having a gas stove also helped tremendously). The matter was having a 10€ note at home. Not what I'd call "piles of cash".
As an added benefit, no bank knows where I bought and when, which I find is a great advantage over the alternatives. (I also use Gpay; this comes from someone who just found a good middle point without forgetting about the more reliable, physical and privacy friendly option)
I think I have 10€ laying around at all times, possibly in loose change between my home and car. I do not always walk around with that in my pocket and I never have more than 500€ at home.
I didn't mean literally zero cash, but once the bulk of your transactions are by card, you don't need to constantly go to the ATM and replenish your cash reserves
And I myself didn't mean that I only happened to have a measly 10€ note at home; I normally have around 300 minimum, to spend organically on purchases.
Of course I get that carrying coins and notes is cumbersome, but if we managed to live all through the 80's and 90's with it, I think we can manage to keep doing it. 100% digital money means giving up a huge level of self-determination and privacy that I wouldn't feel comfortable with, but I guess as newer generations grow up already pre-indoctrinated, unable to compare the before and after, in the end society will give it up.
I got my first cell phone as a full-bearded adult, so I do remember the times when you carried cash and when you could actually meet people in places without having to constantly update one another on your position.
I don't think it's just a generational divide.
I do understand the privacy and self-determination problems of a cashless society, but I have to admit I'm just too weak-minded to care about them in practice; the practicality of paying for even a coffee with my phone is just too great for me to give up.
> the time you could actually meet people in places without having to constantly update one's position
Not sure I understand how that's different than today? You set a time and place, then you meet there, are people doing more than that today? Seems the youngsters understand this concept as well as older people, at least from the people I tend to meet like that.
> the practicality of paying for even a coffee with my phone is just too great for me to give up.
Interestingly enough, no matter if you had cash or card yesterday you couldn't get a coffee anywhere, as none of the coffee machines had power and even in the fancier places where they could have made the coffee without power, they didn't have electricity for the grinder itself, so no coffee even for them.
> You set a time and place, then you meet there, are people doing more than that today?
No, today people are continuously updating you about their whereabouts and assume you can just change the time and place continuously, and if you don't have a phone, people get lost and panic. OK, I'm exaggerating of course, but there is a grain of truth in this.
It does seem safer not to carry cash. However, I remember around May 2024 there were some reported incidents in Chicago where, in the early morning hours, groups of robbers would force victims to do something I found especially worrying: they would be forced into resetting their phone password and logging into their mobile banking app. I'm not sure what ended up happening in these cases.
I once used an aggregator app to summon a handyman to my place. My request was simple: move two pieces of furniture around my very small apartment.
So I find a reputable service within the app, I schedule it, and they send a guy. He shows up to my door breathless, with some kind of sob story about a vehicle breakdown. I dismiss that out of hand and he gets to work. He did a fine job and it didn't take very long.
Then we get to the point of settling up, so I announce I'm going to pay in the app. He looks really disappointed and says he usually takes cash. I realized at this point that he was ready to shake me down, and also he would incidentally be discovering where I stashed my cash, when I reached for it with him there in the room. So disappointed. So I send the money out in the app and I show the confirmation screen to the guy. And I felt so bad that I followed up with a tip in the same fashion.
But at the end of the day he was just a garden-variety cash-in-hand scammer and I had no reason to feel guilty, because I had unwittingly outwitted him by trusting the app. And the company had no qualms about it.
Another time, I had a very short cab ride to the laundry. And it did not take long for the driver to spin a gigantic tale about how his auntie, addicted to gambling, had used up all their savings and they were really hurting for money. I was shifting uncomfortably, wondering why I was hearing this. So the cabbie parks the car and his POS machine shuts off. He's like "oh, it's out of order," so here he is, shaking me down and expecting me to go fetch cash to put in his grubby hands.
I stared him up and down, started taking photos, and got out of there. I discussed with dispatch. They said if he's not accepting cards and I intended to pay by card, I owe him nothing.
So again a cash-in-hand sob-story scammer was foiled. The cab service was crazy enough to assign him to pick me up additional times. This is why I ride Waymo, folks!!!
This might be one of the most paranoid things I've ever read on HN.
The laborer was simply trying to actually get paid vs. deal with the overhead of the app. Somewhat shady perhaps - since it routes around the company taking their cut for finding him the work, and likely avoids taxes. I've paid these sorts of guys cash every single time I've used such a service and exactly zero of them have "shook me down" or cared where I stored the money. They make so little already I'm happy to help them out with a smile.
Cabbies simply want cash for pretty much the same reason. They get charged an astronomical "service fee" by the cab company, and likely are avoiding taxes as well. I agree that such a situation is more shady in general, but I've actually had (NYC) cops side with cabbies on this topic and force me to go get cash at the ATM or get arrested. I also use car services now over cabs whenever possible due to this reason - mostly for convenience, never out of fear of being robbed though.
The chances of you getting mugged/stolen from for using cash are just the same as the chances of getting mugged for no apparent reason walking home. Perhaps the collective disuse of cash has reduced these odds, but whether you specifically carry cash is utterly irrelevant.
Lol what? How do you know either of these guys were trying to rob you? They prefer cash because they can take it under the table and not report it for taxes. Everybody knows that.
Yes, well, I choose not to participate in shady shit like that. Is that OK that I prefer to make transactions as laid out by their employer and not every random guy?
> happy to help them out with a smile
So you choose to be knowingly complicit in tax-avoidance schemes. That's fine; you do you, but some of us steer clear of shady shit, just on principle, you know? Perhaps the company deserves their cut as well -- they get paid so little already, amirite?
Also if there was nothing unusual about their choice of payment, then why must they regale me with these shitty sob stories? Am I supposed to be moved to tears at their hardship and heroism at making it to my door, that I must promptly cover their expenses? They are not panhandlers, they are service providers.
No, I ordered a service and I pay for the value of the service, according to the Company's rates. The cab company was clear about it: either I pay how I want to or I don't owe them. Nobody's arresting me for refusing to fork over cash. That's a scam.
That's not the only time I was cash-scammed by a cab driver. They will pull every trick in the book, and surely they compare and trade notes on their marks.
It gets even worse: my simple insistence on transacting with the cab company earned me fake receipts. Yes, they faked every receipt that they sent me in email. The totals were all fudged down to be much smaller than what I paid, including a $0 tip. It was very very obvious, especially when the rides booked in the app were generating duplicates showing different calculations. I reported it twice to their backend developers and they said that there were some coding errors in device drivers; please stand by for a fix. LOL!
That's a scam to ensure that taxpayers can't get reimbursed for out-of-pocket medical expenses. Most/all cab companies provide NEMT services as well, and they can't stand when people go outside of insurance companies. So they falsified my receipts.
We will use better technology for electronic transactions. Most banks worldwide still use COBOL for most (all?) of their software infrastructure.
You can do as many electronic transactions as you wish without internet or electricity, provided you have something with a charged battery. The problem is the transaction cannot be verified without internet, but when internet is restored, all transactions can be applied.
That technology has existed for more than a decade, so banks will implement it in 20 or 50 years. Most sane people will not wait patiently for half a century for some software engineer to implement electronic transactions in COBOL, and we will be using some kind of blockchain much sooner than that.
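The shape of such an offline-then-reconcile scheme can be sketched in a few lines. Everything here is hypothetical (the key handling, the field names); the point is only that a terminal can authenticate transactions locally, queue them, and let the bank verify them once connectivity returns:

```python
import hashlib, hmac, json, time

TERMINAL_KEY = b"secret-provisioned-by-the-bank"  # hypothetical shared key

def record_offline(queue: list, payer: str, amount_cents: int) -> dict:
    """Record a transaction locally while offline; the HMAC lets the
    bank later verify it wasn't tampered with before upload."""
    tx = {"payer": payer, "amount": amount_cents, "ts": time.time()}
    body = json.dumps(tx, sort_keys=True).encode()
    tx["mac"] = hmac.new(TERMINAL_KEY, body, hashlib.sha256).hexdigest()
    queue.append(tx)
    return tx

def settle(queue: list) -> list:
    """Once the network is back, verify each queued transaction and
    return the ones whose MACs check out."""
    accepted = []
    for tx in queue:
        mac = tx.pop("mac")
        body = json.dumps(tx, sort_keys=True).encode()
        expected = hmac.new(TERMINAL_KEY, body, hashlib.sha256).hexdigest()
        if hmac.compare_digest(mac, expected):
            accepted.append(tx)
    return accepted

q = []
record_offline(q, "card-1234", 550)  # a coffee, bought during the outage
print(len(settle(q)))                # prints: 1
```

Note what this does not solve: it proves integrity, not solvency. The payer may have no funds by settlement time, and nothing stops the same card spending offline at ten shops at once. That double-spend exposure, not the cryptography, is the part banks actually balk at.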
I see no problem with COBOL. I'd rather have my credit card transaction be processed by 40-year-old, well-tested COBOL than by Java from the newest intern using Copilot.
> Most of banks worldwide still use COBOL for most (all?) their software infrastructure.
Nahhh - some banks have some parts of their infrastructure in COBOL. Specifically, larger retail banks often have their ledgers in COBOL. Most of them want rid of it and are actively getting rid of it. Most places have had programs to root COBOL out since before 2000, but residual implementations remain. The ledgers are the hardest place to deal with because of the business case as well as the awkwardness. Basically there hasn't been much of an advantage in modernising, so keeping the thing going has been the option. Now people want more flexible core systems so that they can offer more products, although I'm not so sure that customers want this or can consume it. Still, it supports the idea of modernisation, so not many people are keen to challenge it.
The most common big implementations I come across are in Java.
They use Go as well, I know someone who writes it for banks, but most of their infrastructure is written in COBOL. There are some sources [1], and some people have told me the same in person, not in exact numbers or percentages, but roughly the same.
Anyway, the point remains: electronic transactions with no internet or electricity are a solved problem, and banks don't want to solve it, or can't, due to incompetence or maliciousness.
Currency transactions are worth their weight in gold; it is of utmost importance for transactions to always be published to a central authority right away. If they don't have to be published, they should not exist at all. Imagine people buying stuff without anyone knowing right away! That should never, ever exist, for any reason.
It's not incompetence, it's regulations and inertia. Getting your core banking system past the regulators in most countries is difficult enough that it overshadows the effort required to replace the core systems.
Yeah, this is the turkey’s dilemma - life on a farm is a lot better than life in the wild for 51 out of the 52 weeks of the year.
Most of our modern economy and systems are built to reduce redundancy and buffers - ever since the era of “just in time” manufacturing, we’ve done our best to strip out any “fat” from our systems to reduce costs. Consequently, any time we face anything but the most idealized conditions, the whole system collapses.
The problem is that, culturally, we're extremely short-termist. Normally I'd take this occasion to dunk on MBAs, and they deserve it, but broadly, as a people, we're bad at recognizing just how far down the road you need to kick a can so you're not the one who has to deal with it next time, and we've gotten pretty lazy about actually doing the work required to build something durable.
"Just in time" is a phrase I hate with vehement passion.
You aren't optimizing the system, you're reducing safety margins - and the consequences are usually similar to Challenger.
This is a solution a teenager put in a management position would think of (along with hiring more people as a solution to inefficient processes), not a paid professional.
Systems like the electric grid and internal water management (flood control) shouldn't be lean, they should be antifragile.
What's even more annoying is that we have solutions for a lot of those problems. In the case of electric grids we have hydroelectric buffers, and we have types of power plants that are easier to shut down and start up than coal, gas or wind/solar (which cannot be used for a cold start at all).
The problem is that building any of this takes longer than one political term.
Things which can’t self improve can’t be antifragile by definition. NNT alludes to this multiple times - systems together with processes and people running them can be antifragile, but just things cannot.
I postulate the grid as a whole is antifragile, but not enough for the renewable era. We still don’t know what was the root cause of the Spanish blackout almost 24h after it happened.
JIT isn't about reducing safety margin. It was pioneered by Japanese companies, namely Toyota. They are known for being risk-averse and safety-first.
> This is a solution a teenager put in a management position would think of (along with hiring more people as a solution to inefficient processes), not a paid professional.
What kind of comment is this? Toyota has been using and refining it for decades. It wasn’t invented yesterday by some “teenagers”. Such a state of HN’s comment section.
JIT is definitely not perfect as exposed during the Covid period, but it isn’t without merits and its goal isn’t “reducing safety margin”.
Sure it is. That's exactly how it achieves the higher profitability. Safety margin costs money. Otherwise known as inefficiency.
Slack in the system is a good thing, not a bad thing. Operating at 95% capacity 24x7 is a horrible idea for society in general. It means you can't "burst mode" for a short period of time during a true emergency.
It's basically ignoring long tail risk to chase near-term profits. It's a whole lot of otherwise smart people optimizing for local maxima while ignoring the big picture. Certainly understandable given our economic and social systems, but still catastrophic in the end one day.
It literally is reducing the safety margins (buffers) of a whole distribution system, by definition, and it is also being applied in places where it does not fit - like systems that should be resilient to disruption and/or antifragile.
I would expect a paid professional in the management discipline to be aware of such nuance, but alas, proven wrong again.
Challenger wasn't really about cutting safety margins, but about kicking the can on a known problem: O-ring blow-by in the solid rocket motor field joints. It was a gut feeling by the engineers that the problem was related to temperature, but there was enough of a random element to it that there was nothing specific to point to.
That should have been enough to scrub anyway, but there was clearly political pressure to launch.
I do agree that they need to specifically design anti-fragile.
For the people who died of normally preventable death during covid while the health services were overwhelmed, the damage is irrecoverable. The chips shortage lasted years. Every year we become more, not less, dependent on the supply chain working. Every year we become less, not more, resilient.
I don't think it's crass to separate the deaths that occurred from a novel disease from the impact it had in society. In the medium term, it's a blip, never mind the long term. There's a huge chunk of society that thinks there was a huge overreaction!
The chips shortage has been difficult, but it's also been little more than an inconvenience when you look at it in terms of goods being available to consumers or whatever.
That chunk is heavily influenced by the propaganda that over a million dead people isn't a big deal. The propaganda is economically incentivized because slowing down the economy is bad for business even if it protects human lives.
Nearly all collapses are of limited temporal duration (except for extinction events....). I think it is fair to call a health system that failed to protect the nation and world a collapse. It failed to perform its function in a dramatic way. Now, the fact that most of us survived at least is being exploited to say it was no big deal and actually, why not trash every public health institution so the economy is never shut down again?
Sad whomp-whomp horn: the economy is going to be negatively affected by Covid disability and death on an ongoing basis, and a new pandemic will still cause so much fear that the economy will shut down.
We have plenty of small scale collapses that weren't of limited duration. It's just that such things are typically only noted by archeologists. We only see the survivors and thus conclude that collapse isn't an existential problem.
I do agree on Covid disability. Early on we saw some pretty dire predictions, but since then it's mostly been an exercise in muddying the waters. Lots of wheel-spinning about what constitutes long Covid when they should have simply been collecting data on the various symptoms. Better to not see the problem than have to deal with it.
Look at how we were handling AIDS before we discovered it was HIV destroying the immune system. Long Covid is still at that stage--we are seeing a slew of highly varied effects rather than the mechanism.
The post you are replying to is not talking about the Covid deaths, but rather about the deaths from other causes triggered by the Covid disruptions. When a trauma case dies of the lack of a ventilator because they're all in use on Covid patients. When the trauma case bleeds out at the scene because the ambulance is running a Covid patient to the hospital.
And a lot of people thinking it was an overreaction proves nothing. People don't get a vote on reality.
Honest question: are we better off in the long run, and is it a better solution, to decentralize energy generation and make more, smaller grids rather than linking them all up? This isn't to say we should completely get rid of the ability to transfer between the smaller grids to assist with power disruptions, but rather decouple them and make catastrophic "global" failures like this less likely.
With a high fraction of renewables, the reverse is probably better in the long run. The larger geographic area you connect, the less you're affected by weather systems, and the wider area you can draw dependable dispatchable power such as hydro from. But that depends on having enough grid capacity to move enough power around, which is currently a problem.
But I wonder, from a reliability (or lack of cascading failures) point of view, whether synchronous islands interconnected with DC links are more robust than a single large synchronous network?
We're slowly reaching this point with the internet too.
I feel like to many technologists, the internet is still "the place you go to to play games and chat with friends", just like it was 20 years ago. Even if our brains know it isn't true, our hearts still feel that way.
I sometimes feel like the countries cutting off internet access during high school final exams have a point. If you know the internet will be off and on a few days a year, your systems will be designed accordingly, and if anything breaks, you'll notice quickly and during a low-stakes situation.
Maybe a good reason (in parts of the world where this is practical) to have some solar + battery storage. Doesn't even need to fully replace grid power, just enough to run the barebones when the grid goes out.
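A back-of-envelope sketch of the sizing idea here: how long a modest home battery carries an essentials-only load. All figures below are my own illustrative assumptions, not product specs.

```python
# Rough runtime estimate for "barebones" backup power (illustrative numbers).

def backup_hours(battery_kwh, essential_load_kw, usable_fraction=0.9):
    """Hours of runtime for the essential circuits on battery alone.
    usable_fraction accounts for the battery not discharging to 0%."""
    return battery_kwh * usable_fraction / essential_load_kw

# Fridge + router + a few lights + phone charging ~ 0.3 kW continuous:
print(backup_hours(10.0, 0.3))  # a 10 kWh battery covers roughly 30 hours
```

The point being that "barebones" loads are small enough that even a battery sized for daily solar shifting rides through a long outage.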
Houses have water tanks that work as a buffer for when the water stops; couldn't residential batteries work the same way? They could detect drops in voltage and stop charging, so even if the grid is unstable during a blackout, the houses with batteries wouldn't add load immediately, only the houses without them would.
Interestingly it seems that the black start drill is considering a smaller zone of impact than what has happened here.
Also I suspect there is far more renewables on the grid now than in 2016.
This is potentially the first real black start of a grid with high renewable (solar/wind) penetration that I am aware of. Black starts with grids like this I imagine are much more technically challenging because you have generation coming on the grid (or not coming on) that you don't expect and you have to hope all the equipment is working correctly on "(semi)-distributed" generation assets which probably don't have the same level of technical oversight that a major gas/coal/nuclear/hydro plant does.
I put in another comment about the 2019 outage, which happened because a trip on a 400kV line caused a giant offshore wind farm to trip: its voltage regulator detected a problem it shouldn't have, and tripped the entire wind output offline.
Eg: if you are doing a black start and then suddenly a bunch of smallish ~10MW solar farms start producing and feeding back in "automatically", you could then cause another trip because there isn't enough load for that. Same with rooftop solar.
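A minimal sketch of the balancing surprise described above, using the illustrative 5 MW / 2 MW figures that come up later in the thread:

```python
# How automatic reconnection of grid-tied solar shifts the net load an
# operator sees during a restart. All numbers are illustrative.

def net_load_mw(gross_load_mw: float, reconnected_solar_mw: float) -> float:
    """Net load the restart crew must balance once grid-tied inverters
    detect a stable grid and start exporting again."""
    return gross_load_mw - reconnected_solar_mw

# Operator energizes a segment expecting 5 MW of load...
before = net_load_mw(5.0, 0.0)  # inverters still in anti-islanding lockout
# ...a couple of minutes later the inverters' reconnect timers expire:
after = net_load_mw(5.0, 2.0)

print(before, after)  # 5.0 MW of load becomes 3.0 MW with no operator action
```

The generation swing happens on the inverters' own timers, not the operator's schedule, which is what makes the balancing harder.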
> This is potentially the first real black start of a grid with high renewable (solar/wind) penetration that I am aware of.
The South Australia System Black in 2016 would count - SA already had high wind and rooftop solar penetration back then. There's a detailed report here if you're interested:
"Grid tied solar won't put power into the grid when the grid is down. It's the one reason I didn't grid tie."
Why would that prevent you from being grid-tie? I have 53 panels (~21kw) grid tied and pushing to the grid, but in the event of grid failure my panels will still operate and push into my 42kwh battery array which will power the entire house. ( The batteries take over as the 'virtual grid source). I can then augment the batteries with generator and run fully off grid for an extended amount of time ( weeks in my case ).
These are mutually exclusive. What you have is a hybrid system, which is something I explicitly did not mention. A grid-tie system is not generating an AC waveform when the grid is down, at all. It cannot, by definition and by design, as the AC waveform requires the grid to synchronize to.
You're missing my point. I know that; I mean if you are restarting the grid and you have a segment you think has 5MW of load that will come online, but you connect it and a minute or two later 2MW of grid-tied solar detects the grid and starts exporting, you now have 3MW of load, which makes it much trickier to balance the restart. I'm not sure how much of a problem this is in reality, but it seems to me restarting a grid is made much trickier when you have millions of generation assets you have no control over.
This is why most of the restart of the grid is being done as the solar input tapers out, I suspect. The grid was pretty much down during peak sun and started coming back online around 5:30 pm or later.
> I mean if you are restarting the grid and say you have a segment you think has 5MW of load that will come online, but you connect it and a minute or two later suddenly 2MW of grid tied solar detects the grid and starts exporting and you now have 3MW of load it is going to make it much more tricky to balance the restart.
I really thought that sentence was going to end with "it makes it a lot easier to handle that segment".
Yeah you have some big problems if it's a complete surprise, but your status quo monitoring would have to be very strangely broken for it to be a complete surprise. Instead it should be a mild complicating factor while also being something that reduces your load a lot and lets you get things running quicker.
Practical Engineering did a really great video a few years ago on why black starts are hard, complete with a tabletop demo about the physics of synchronizing large spinning generators: https://www.youtube.com/watch?v=uOSnQM1Zu4w
Yes because they have to bring it all back up in phases so that they only face the load spike* from one interconnect at a time, which can take some time and can fail if there’s unknown damage like the GP said.
It really depends on the region though, because almost all large hydroelectric dams are designed to be primary black-start sources to restore interconnects and get other power plants back up quickly in phase with the dam. i.e. in the US 40% of the country has them so it’s relatively easy to do. The hardest part is usually the messy human coordination bit because none of this stuff is automated (or possibly even automatable).
* the load spike from everyone’s motors and compressors booting up at the same time
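The phasing logic above can be sketched with a toy cold-load-pickup model. The 3x inrush multiplier and block sizes are my own illustrative assumptions:

```python
# Why restoration happens in phases: motors and compressors draw several
# times their steady-state power at switch-on ("cold load pickup").

INRUSH_MULTIPLIER = 3.0  # assumed: ~3x steady load for the first moments

def pickup_peak_mw(block_loads_mw, simultaneous):
    """Peak demand when blocks are energized together vs one at a time."""
    if simultaneous:
        return INRUSH_MULTIPLIER * sum(block_loads_mw)
    # phased: only the newest block is in inrush; earlier ones have settled
    peak = 0.0
    settled = 0.0
    for load in block_loads_mw:
        peak = max(peak, settled + INRUSH_MULTIPLIER * load)
        settled += load
    return peak

blocks = [100.0, 100.0, 100.0]          # three 100 MW distribution blocks
print(pickup_peak_mw(blocks, True))     # 900 MW if all closed at once
print(pickup_peak_mw(blocks, False))    # 500 MW peak if phased
```

Phasing caps the peak at "everything already settled plus one block in inrush", which is what keeps the restored generation from tripping again.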
Presumably emergency phone comms still work(?) so they could issue instructions to do a phased (heh!) restart to avoid every fridge/air conditioner/whatever restarting at once. Not sure how successful that would be however.
There are usually two parallel processes going on during a black-start: spinning the power plants back up to get them synchronized to the grid, and getting power back out to all the consumers. Power plants have breakers that disconnect them from consumers, which they keep open until independent connections to other power plants allow them to spin up their massive turbines and synchronize them to the grid. There are also tons of other downstream breakers at substations, which will be in various states of functionality.
The power plants with direct connections have hard lines and black-start procedures that get power out to the most important customers like telecom infrastructure, which provides the rest of the comms. In a real world full restart it’s going to mean organizing workers at many substations to babysit old infrastructure so cellular is pretty much mandatory.
And to add more fun, this time they're not dealing with a small number of individual power plants that can be connected with only some phone-call based coordination.
Instead, there are literally hundreds of smaller wind/solar installations. Some of which depend on rapidly fading cellular communication to restart. And some might need an actual site visit to throw on the physical breakers.
Spain is, but Portugal is only connected to Spain, and they are currently doing a full black start.
For Spain the external power and synchronization can come from France rather than generators which will help, but the process and complexities are still mostly the same. Call it a dark start, perhaps.
Sure. Portugal is indeed completely “isolated” if it loses Spain, so it needs black start capability if it wants some degree of autonomy.
As far as containing the issue, this was a disaster. On the flip side, this was as good an opportunity to test a black start as any, it went reasonably well, and the network operator was already in the process of contracting two further dams for the ability.
> A black start is the process of restoring an electric power station, a part of an electric grid or an industrial plant, to operation without relying on the external electric power transmission network to recover from a total or partial shutdown.[1]
The key part of the phrase is actually "electric power station, a part of an electric grid or an industrial plant." Note how the definition doesn't include an entire grid.
Only the first power plant in a black-start (like a hydroelectric dam or gas plant started by a backup generator) is truly "black started." The rest don't fit that definition because they depend on an external power source to spin up and synchronize frequency before burning fuel and supplying any energy to the grid. If they didn't, the second they'd turn on they'd experience catastrophic unscheduled disassembly of the (very big) turbines.
Only the first power plant can come online without the external transmission network.
You’re absolutely right, there are a lot of variations. In this case I think Portugal started from their own hydroelectric dam and restarted everything North to South while Spain started several in parallel from the interconnects to the rest of Europe (can’t tell which interconnect it was though).
The frequency aspect of a black start is presumably a bit easier in Europe because there's an interconnected synchronous grid so they can bootstrap it from France essentially.
It's far more problematic for the UK because all the interconnects are DC.
I was recently told by an electrical engineering lecturer that the black start plan here in Ireland is to use the DC interconnectors with the UK to provide startup power to a synchronous generator.
With the new Wexford-Wales interconnect that went live last month, and another one planned from Cork (?) to France, things might be even easier in the near future, I reckon.
To me it sounds like an energy company attempting to excuse lack of spending on infrastructure whilst paying out millions to C-suite in bonuses and millions more to shareholders whilst arguing prices have to rise because they can't afford to spend on infrastructure...
Electricity markets and electricity networks are designed by the regulator.
Incentives are planned by the regulator so that individual stations or companies have the correct incentives to have capabilities that the network grid needs.
One example is financial incentives to provide black start capabilities. Another example is incentives to provide power during peak loading (peaker plants). There are many more examples of incentives designed so that the needs of the whole network are met.
If an operator is incentivised to act selfishly in such a way that the grid will fail, then that is a failure of the regulator (not the individual operator).
Blaming individual people or companies for systemic faults is generally a bad thinking habit to form. There are too many examples where I see individuals get blamed. Fixing our systems is hard but casting blame in the wrong places is not helpful. It's difficult to find a good balance between an individual's responsibilities and society's responsibilities.
> Electricity markets and electricity networks are designed by the regulator.
Not quite. They are _influenced_ by the regulators.
And Europe has been incentivizing trash-tier low-quality solar and wind power, by making it easy to sell energy (purely on a per-Joule basis) on the pan-European market.
Meanwhile, there is no centralized capacity market or centralized incentives for black start and grid forming functionality.
> Meanwhile, there is no centralized capacity market or centralized incentives for black start and grid forming functionality.
There absolutely is. Look up terms like "Frequency Containment Reserve" and "automatic Frequency Restoration Reserve". The European energy market takes transport capacity in account, and there is separate day-ahead trading to supply inertia and spare generating capacity. Basically, power plants are being paid to standby, just in case another plant or a transmission line unexpectedly goes offline.
Similarly, grid operators offer contracts for local black start capacity. The technical requirements are fixed, and any party capable of meeting those can bid on it.
It's quite a lucrative market, actually. If during the summer a gas plant is priced out of the market by cheap solar, it can still make quite a bit of money simply by being ready to go at a moment's notice - and they'll make a huge profit if that capacity is actually needed.
No, there isn't. The FCR market is not pan-European, and even where it is in place it's in name only: the participants are basically just countries that already use rotational generation, so it's not really a stretch for them to participate.
Spain and Portugal are not members, btw.
And the same applies to capacity markets. I believe there is a plan to come up with a plan for it by 2027.
> Similarly, grid operators offer contracts for local black start capacity. The technical requirements are fixed, and any party capable of meeting those can bid on it.
And I don't believe there are ANY solar/wind plants that have black start capacity in Europe. The current incentives structure makes that a near certainty.
> there is no centralized capacity market or centralized incentives for black start
There certainly is in New Zealand, although the dollar amounts are quite small. If your countries regulator doesn't incentivise the capability, I believe that is a fault of your regulator.
Transpower (NZ) says:
We may enter into black start contracts with parties who can offer the black start service compliant with our technical requirements and the Code. Black start is procured on a firm quantity procurement basis (via a monthly availability fee and/or a single event fee for specified stations). Black start costs are allocated to Transpower as the Grid Owner (see clause 8.56 in the Code for details)
With the DC interconnect, your DC-to-AC conversion equipment would need the capability to provide synchronized power to the generator you are trying to start. With the synchronous grid tie, you are pulling the generator into the running grid.
A synchronous interconnect provides not just a source of truth but also stabilizes your grid frequency. If you have an isolated grid you have to match generation to demand to keep the grid frequency stable. If you have a 1GW interconnect that means you can mismatch generation and demand by up to a gigawatt and still be fine. I imagine that makes for a much faster startup procedure
You can connect two running grids. Earlier this year the Baltic countries disconnected from the Russian grid, and synchronized and then connected to the European grid.
I imagine you can get close enough by syncing to a shared time source like GPS or the DCF77 signal, as long as you communicate how the phase is supposed to match up to the time source. Or at least you could get close enough that you can then quickly sync the islands the traditional way.
The question is if it's worth the effort and risk. Cold starting a power grid is a once in a lifetime event (at least in Europe, I imagine some grids are less stable) and Spain seems to plan to have everything back up again in 10 hours. Maybe if the entire European grid went down we would attempt something like that by having each country start up on their own, then synchronize and reconnect the European grid over the following week.
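The shared-time-source idea sketches out like this. Assume both islands agree that "phase zero falls on every whole second of GPS time"; each can then steer toward the same absolute phase, leaving only a small residual for conventional synchronization. The convention and numbers here are my own illustrative assumptions:

```python
import math

F_NOMINAL = 50.0  # Hz, European grid

def target_phase_rad(gps_time_s: float) -> float:
    """Phase (radians) both islands aim for at a given shared timestamp,
    under the assumed convention: phase zero at every whole second."""
    cycles = gps_time_s * F_NOMINAL
    return (cycles % 1.0) * 2.0 * math.pi

# Both islands evaluate this at the same instant and trim generator speed
# until their measured phase matches it. Note the sensitivity: 1 ms of
# clock error is 0.05 cycle, i.e. 18 degrees of phase error at 50 Hz,
# hence "close enough, then finish the traditional way".
print(target_phase_rad(100.005))  # 0.25 cycle into the second: ~pi/2
```

This only gets the islands into the same neighborhood; the final breaker close would still want a synchrocheck-style comparison of the actual waveforms.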
Nothing that complicated. You just carefully synchronize the state of the grid on both sides of the interconnect, and when they are perfectly matched, you throw the big switch.
It's a bit difficult in another way: obviously 50/60 Hz is not such a high speed that it's difficult to synchronize.
The harder part is this: To pump power into the grid you lead the cycle ever so slightly, as if you were trying to push the cycle to go faster. If instead you lag the phase the grid would be pumping power into you.
That lead is very, very small, and probably difficult to measure and synchronize on. I would imagine that when the two grids connect everything jumps just a little as power levels equalize; it probably generates a lot of torque and some heat, and I would assume it's hard on the generator.
From a physics point of view, by leading the cycle you introduce a tiny voltage difference; that difference squared, divided by the tiny resistance of the entire grid, is how many watts of power you are putting into it.
Yes, but that actually makes it harder to start up.
To synchronize the isolated grids, they all need to operate with an exact match of supply and demand. Any grid with an oversupply will run fast, any grid with an undersupply will run slow. When it comes to connecting, the technical source-of-truth doesn't matter: you just need to ensure that there will be a near-zero flow the moment the two are connected - which means both sides must individually be balanced.
And remember: if you are operating a tiny subgrid you have very little control over the load (even a single factory starting up can have a significant impact), and your control over the supply is extremely sluggish. Matching them up can take days, during which each individual subgrid has very little redundancy.
On the other hand, the interconnect essentially acts as a huge buffer. Compared to the small grid being connected, it essentially has infinite source and sink capacity. For practical purposes, it is operating at a fixed speed - any change is averaged out over the entire grid. This makes it way easier to connect an individual power plant (it just has to operate at near-zero load itself, move to meet a fixed frequency target [which is easy because there is no load to resist this change], and after connection take on load as desired) and to reconnect additional load (compared to the whole grid, a city being connected is a rounding error).
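The "near-zero flow at the moment of connection" condition is what a synchrocheck relay enforces before a tie breaker between two live systems is allowed to close. The tolerances below are typical-order assumptions, not any specific relay's settings:

```python
# Sketch of a synchrocheck: only permit breaker close when frequency,
# phase and voltage differences across the open breaker are all small.

def ok_to_close(df_hz, dphase_deg, dvolt_pct,
                max_df=0.1, max_dphase=10.0, max_dv=5.0):
    """True if all three differences are within the assumed bands."""
    return (abs(df_hz) <= max_df
            and abs(dphase_deg) <= max_dphase
            and abs(dvolt_pct) <= max_dv)

print(ok_to_close(0.05, 4.0, 2.0))   # True: close the tie breaker
print(ok_to_close(0.05, 40.0, 2.0))  # False: phase too far out, keep waiting
```

Small residual differences at close translate into a modest power swing that the bigger grid absorbs; large ones produce the equipment-damaging jolt discussed above.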
If your factory uses too much power, there's not enough energy to run the power plant's own machinery, which decreases your power production, death-spiraling until there's no power.
You have to disconnect the factory and independently power your power plants back up until you have enough production to connect the factory again.
A capacitor circuit-network alarm for "main grid is drawing on this bank - bad things may happen if capacity is not increased."
Another "trick" is those burner inserters are black start capable. They can pick up fuel and feed themselves to keep running without an electrical network.
I also tend to put Schmitt triggers in low priority areas. They've got a battery on the main grid next to them and if the battery drops below 50% power they remain off until it goes back above 75% power.
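The 50%/75% setup above is a hysteresis (Schmitt trigger) controller: off below the low threshold, on above the high one, and holding state in between so it doesn't chatter. A minimal sketch:

```python
# Hysteresis load controller: disable below 50% charge, re-enable above 75%.

class SchmittLoad:
    def __init__(self, off_below=0.50, on_above=0.75):
        self.off_below = off_below
        self.on_above = on_above
        self.enabled = True

    def update(self, charge_fraction: float) -> bool:
        if charge_fraction < self.off_below:
            self.enabled = False
        elif charge_fraction > self.on_above:
            self.enabled = True
        # between the thresholds: keep the previous state (the hysteresis)
        return self.enabled

load = SchmittLoad()
states = [load.update(c) for c in (0.80, 0.60, 0.45, 0.60, 0.70, 0.80)]
print(states)  # [True, True, False, False, False, True]
```

Note the middle readings (0.60, 0.70) don't re-enable the load; it stays off until charge climbs all the way back past 75%.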
That's a mod, not stock. It does two things: feeds the inserter from either end (something I think should be stock--it's a machine that grabs things, why can't it grab fuel it can reach??) and provides a trickle of power to unpowered ones (nice from a standpoint of not having to fuel them when you put them down, but it also adds black start capability).
As for 50%/75% triggers--the game doesn't model start/stop problems, only fancy circuit setups would give a hoot about being fed by flickering power. (But as a human....I was out adding to a big accumulator bank at twilight. Far away the bugs had a base close enough to my laser turrets that they kept attacking. The sun was powering my base but didn't have enough for the turrets. The whole bank would flicker for every bug. Usually the electrical indicator on the accumulators is a good thing.)
> The burner inserter is the most basic and slowest type of inserters. It is powered by burning fuel, compared to the more advanced inserters which are powered by electricity. It will add fuel to its own supply if it picks any up, which makes it useful for filling boilers with coal. This has the advantage that it will continue working even if the power fails, as opposed to electrically-powered inserters which will be unable to function.
In particular, when power demand drops below the supply everything starts running slower, which in turn means that the electrical inserters used to feed coal into the boilers run slower which drops the rate of electrical production. Burner inserters don't have that problem.
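The feedback loop described above can be written down as a toy model: inserter speed scales with power satisfaction, and generation next tick scales with how much coal the inserters delivered. All constants are made up for illustration:

```python
# Toy model of the boiler-feed death spiral: coal feed rate tracks power
# satisfaction, and generation tracks coal feed.

def simulate(demand, capacity, ticks=50):
    """Return power satisfaction over time for a coal-fed grid."""
    satisfaction = 1.0
    history = []
    for _ in range(ticks):
        supply = capacity * satisfaction       # slower inserters, less coal
        satisfaction = min(1.0, supply / demand)
        history.append(satisfaction)
    return history

healthy = simulate(demand=90.0, capacity=100.0)   # headroom: stays at 1.0
spiral = simulate(demand=110.0, capacity=100.0)   # overload: collapses
print(healthy[-1], spiral[-1])
```

With any headroom the loop is stable at full satisfaction; with demand above capacity each tick multiplies satisfaction by capacity/demand, and the grid geometrically spirals toward zero. Burner inserters break the loop because their feed rate doesn't depend on satisfaction.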
---
Schmitt triggers are really easy with stock multi combinators now. It isn't so much a "things have issue with flickering power" but rather "turn off the coal feed to the steel furnaces that would go to the electrical station instead" and "make sure that the coal unloading station doesn't brown out" and "turn off power to the electrical furnaces and labs to make sure that the coal mines don't dip in production rate when power dims".
The Schmitt trigger also makes for more reasonable "where is there excess oil that I should produce solid fuel from?" There's the optimal, but sometimes in the processing you can't crack any more crude to gas because the heavy oil is backed up, so turn on the appropriate cracking station when {conditions} and turn it off when {conditions}.
Burner inserters are stock; I didn't realize the fuel leech was now stock. It didn't use to be. And the ability to slowly grab fuel without any source of power is definitely mod. And the mod can leech fuel it's not actually handling: a burner inserter pulling from a furnace will take fuel from the furnace for its own operation.
I'm just saying what's the need for a gap between on and off? How does the bounce harm anything in the game? Simply put the non-critical stuff behind combinators that will switch off at x% of power. Real world machinery wouldn't like that but the game factories don't care. Nor do the refineries--in your oil case, plonk down a tank of heavy oil, turn on the crackers when it's above a certain level.
> Since v0.10.0, any Fuel items picked up by a burner inserter will also be used to power the inserter. This makes it useful for:
> Automatically loading Gun turrets from a Transport belt, where one side of the belt is filled with magazines and the other with Coal.
> Filling Boilers with Coal. This has the advantage that they will continue working even if the power fails. This is not the case for electrically powered inserters.
> Burner inserter will use item with fuel value for itself when it has empty inventory.
0.10.0 was released in 2014.
---
The flickering isn't a problem (though a high frequency flicker can cause the power chart to be difficult to read).
A consistent drain on power that is causing capacitor levels to drop below some threshold suggests that certain things should be turned off before the problem becomes cascading when capacitor supply drops to 0.
Except that in many cases the "consistent" drain is simply night. (Or, day, on Fulgora.) Other than making the chart easier to read what's the case where a 50%/75% trigger is superior to a straight up enable if > 50%?
And apparently I misunderstood the mod, it leeches from things it could pick up but doesn't. It was probably written before it became stock and thus contained some out of date text.
In my server I hooked up a sound alarm to a set of capacitors. Too low of a charge indicates higher power consumption than production, allowing you to unplug certain low priority loads. I also have some emergency coal generators ready to go at the flick of a switch if needed.
Same with Satisfactory: the larger power plants need a lot of energy for their infrastructure to run, and an overload will trip breakers and shut the whole grid down, so a naively designed grid death-spirals very easily. My factory was needing increasingly complicated black start systems, so I started putting the power plant infrastructure on self-contained islands that a factory overload would not trip; it was something like one coal power plant running the machinery needed for itself and 8 grid power plants.
Thinking about this makes me wonder: why aren't all real-world power plants designed to survive a grid outage? Why would you ever need to black start a real power plant? Like, can't you design it so that it disengages from the grid rather than shutting down in a catastrophic outage like this? Or make it only take on a fraction of the load or something?
The problem is that it's still steam pushing spinning rust. You can't instantly scale up or down, let alone do so in an organized fashion. If you were happily generating 1000MW out of 1500MW max, there's very little you can do if a power line goes offline and you're suddenly connected to 2000MW of load - or only 250MW. At best it's going to take tens of seconds to adjust, which in practice means you're forced to dump the entire load because in the meantime your output has deviated so much that you're causing serious damage to downstream equipment - or your own. And load shedding isn't really an option because you're now operating as an island, which means you have to instantly figure out how the grid is now connected, what the current supply/demand is, which neighborhoods should be turned off to best match the pre-incident demand, and what the impact on the local grid is - there are way too many variables there to be able to respond in a fraction of a second.
Starting back up from zero is significantly easier, as you are completely isolated and have zero load. You turn the power plant on, and start slowly adding local load to ramp up. Synchronize with neighboring plants where possible to build the grid back up. The only issue is that a power plant needs a significant amount of power to operate, so you need something to provide power before you can generate power. In most cases you can just piggyback off the grid, but in an isolated black start it means you need a beefy local independent generator setup. That costs money and it's rarely needed, so only a few designated black start plants have them, paid for by the grid as a whole.
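The "tens of seconds to adjust" problem above can be made concrete with the standard swing-equation approximation, df/dt = f0 * dP / (2 * H * S), where H is the inertia constant and S the unit rating. The constants below (inertia, ramp rate, trip band) are my own illustrative assumptions:

```python
# Why a ramp-limited unit can't ride through a large sudden island mismatch.

F0 = 50.0        # Hz nominal
H = 5.0          # s, assumed inertia constant
S = 1500.0       # MW, assumed unit rating
RAMP = 10.0      # MW/s, assumed turbine ramp capability
TRIP_BAND = 1.0  # Hz, assumed deviation at which protection trips

def seconds_until_trip(mismatch_mw, dt=0.01):
    """Integrate swing-equation frequency drift while the turbine ramps to
    close the gap; return trip time in seconds, or None if it rides through."""
    f, gap, t = F0, mismatch_mw, 0.0
    while abs(f - F0) < TRIP_BAND:
        f -= F0 * gap / (2 * H * S) * dt  # overload drags frequency down
        gap = max(0.0, gap - RAMP * dt)   # turbine slowly ramps up meanwhile
        t += dt
        if gap == 0.0 and abs(f - F0) < TRIP_BAND:
            return None                   # gap closed before tripping
    return t

print(seconds_until_trip(30.0))    # None: small step, the unit rides through
print(seconds_until_trip(1000.0))  # trips in well under a second
```

A few tens of megawatts of mismatch is survivable; hundreds of megawatts pulls frequency out of the band long before the steam side can respond, which is why the plant dumps load and trips instead.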
Disengage from the grid and do what with the vast amount of power that suddenly has no place to go? That kind of power tends to break things. Powerplants have big cooling towers because disposing of heat is problematic.
And you're assuming that there is a throttle setting that lets the plant produce so little power that it only runs itself. Think of the Falcon 9--the landing is hairy because it's impossible for it to produce less thrust than it weighs. The engine will go out if you try to throttle it too much.
One of the administrators of REN, the Portuguese grid operator, is currently giving a press conference. They confirmed they are in a scenario of restart from black start.
- Cause of event not known yet.
- They noticed power oscillations from the Spanish grid that tripped safety mechanisms in the Portuguese grid. At the time, due to the cheaper prices, the Portuguese grid was in a state of importing electricity from Spain.
- They are bringing up multiple power systems and the Portuguese grid is able to supply 100% of needs if required. It was not configured in such a state at the moment of event.
- They had to restart the black start more than once: while starting, they noticed instabilities in some sectors that forced them to restart the process.
- Time for full recovery unknown at this time, but it will take at least 24 hours.
We are beginning to recover power in the north and south of the peninsula, which is key to gradually addressing the electricity supply. This process involves the gradual energization of the transmission grid as the generating units are connected.
I see load dropping to zero on that graph, or rather, load data disappears an hour ago.
If the grid frequency goes too far out of range then power stations trip automatically, it's not an explicit decision anyone takes and it doesn't balance load, quite the opposite. A station tripping makes the problem worse as the frequency drops even further as the load gets shared between the remaining stations, which is why grids experience cascading failure. The disconnection into islands is a defense mechanism designed to stop equipment being too badly damaged and to isolate the outage.
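The cascade mechanism described above can be sketched as a toy model in which each trip shifts load onto the survivors. The frequency model (frequency sagging with fractional overload) and all numbers are deliberately crude illustrations, not real droop or protection settings:

```python
# Toy cascading-failure model: stations trip when frequency leaves a band,
# and each trip worsens the overload on the remaining stations.

F0, BAND = 50.0, 0.8   # Hz nominal; assumed tolerable deviation

def survivors(station_capacities_mw, load_mw):
    """Drop units until frequency is back in band; return survivor count."""
    online = sorted(station_capacities_mw)
    while online:
        capacity = sum(online)
        # crude model: frequency sags in proportion to the supply shortfall
        freq = F0 * min(1.0, capacity / load_mw)
        if F0 - freq <= BAND:
            return len(online)   # frequency recovered, cascade stops
        online.pop()             # largest remaining unit trips next
    return 0

print(survivors([500.0, 500.0, 500.0, 500.0], 1900.0))  # surplus: all 4 hold
print(survivors([500.0, 500.0, 500.0, 500.0], 2200.0))  # deficit cascades to 0
```

The key property the sketch captures is the positive feedback: once the first unit trips on underfrequency, every subsequent state is worse than the one before it, so the cascade runs to total collapse unless load is shed or the grid islands.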
Interesting, but in terms of load I think the data may just be delayed by ~1 hour. Switching to UTC, to avoid timezone confusion, it's currently 13:10:
Everything dropped to zero except wind and solar, which took huge hits but not to zero. I expect those have been disconnected too, as they cannot transmit to the grid without enough thermal plant capacity being online, but if the measurement at some plants of how much they're generating doesn't take into account whether or not they were disconnected upstream they may still be reporting themselves as generating. You can't easily turn off a solar plant after all, just unplug it.
Either that, or they're measuring generation and load that's not on the grid at all.
Probably they are estimates of non-grid-metered generation assets based on wind speed and solar production; at least in the UK, nearly all solar is 'estimated' because it is not measured directly (apart from larger sites), at least not in real time.
Rooftop solar for example just shows as a reduction in demand, not 'generation' per se.
This also true for private wind power. Britain has a measurable amount of hill top farms where it just makes good economic sense to install a wind turbine and get free electricity. But we don't meter it, it shows up in charts as an absence - on a windy afternoon maybe Britain is seemingly consuming 4GW less electricity than it "should be". If the wind drops that load reappears on the grid and must be handled by existing infrastructure.
None of this gear is suited to a black start. If you had total grid loss for a month you could doubtless rewire it to power the farm when it's windy despite no grid, maybe even run some battery storage for must-have services like a few lights so they keep working on still days but you could not start the grid from here.
It's not just about the power. System components cannot be brought to operating temperatures, speeds and pressures faster than mechanical tolerances allow. If a thermal plant is cold & dark, it can take days to ramp it to full production.
That's true of some kinds of thermal generators, but not all. Simple cycle gas turbines can come up very quickly (think jet engines). Or your car's engine.
Combined cycle turbines do have a steam component (that's where the word "combined" comes from). The waste heat from the combustion turbine front end is used to make steam in the back end.
Combined cycle turbines are not primarily powered by steam. It is a secondary consequence of their operation. This contrasts with nuclear and coal plants where steam is the prime mover.
The steam part of CC systems shows that hours and hours aren't inherently needed to get a steam power plant into operation. For that matter, warships with steam propulsion show the same thing, I believe.
A true black start has several factors (which make it difficult and notable):
1. The grid has to fully collapse with no possibility of being rescued by interconnection
2. As a result, a generation asset has to be started without external power or a grid frequency to synch to
3. An asset capable of this is usually a small one connected to a lower voltage network that has to then backfeed the higher voltage one
4. Due to the difficulty of balancing supply/demand during the process, the frequency can fluctuate violently with a high risk of tripping the system offline again
None of this applies in yesterday's case:
The rest of the European synchronous grid is working just fine.
News reports stated Spain restored power by reconnecting to France and Morocco.
By reestablishing the HV network first, they can directly restart the largest generation asset with normal procedures.
As they bring more and more load or generation online, there's little risk of big frequency fluctuations because the wider grid can absorb that.
Just to add, I was at a university campus when the entire building's electrics went out; there was a significant draw due to relatively powerful computers in every room. They initially tried to bring the building back online all at once and failed. Then they tried to bring it back in sections and failed too. In the end they went into each lab, turned every computer off at the wall to bring each lab's power back, and then turned each computer on one by one.
I can only imagine the difficulty of bringing large parts of the grid back online; the inrush current must be immense.
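For a rough feel of the numbers, here's a hypothetical back-of-envelope sketch of why re-energising a lab full of computers at once is so hard. Each switch-mode PSU is modelled as a discharged bulk capacitor behind a small series resistance; all the values are made up for illustration:

```python
# Hypothetical aggregate inrush estimate; all numbers are illustrative.
import math

V_PEAK = 230 * math.sqrt(2)   # mains peak voltage, V
R_SERIES = 2.0                # effective series resistance per PSU, ohms
I_STEADY = 1.0                # steady-state draw per computer, A
N = 40                        # computers on one circuit
BREAKER_A = 32                # circuit breaker rating, A

# Worst case: power returns near the voltage peak while every bulk
# capacitor is fully discharged, so each PSU briefly looks like a short
# limited only by its series resistance.
inrush_per_psu = V_PEAK / R_SERIES
total_inrush = N * inrush_per_psu
total_steady = N * I_STEADY

print(f"steady-state draw: {total_steady:.0f} A")
print(f"worst-case inrush: {total_inrush:.0f} A "
      f"({total_inrush / BREAKER_A:.0f}x the breaker rating)")
```

The surge only lasts a half-cycle or two, which is why it sometimes sneaks past a slow thermal trip curve while still hammering upstream transformers.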
Yup. I used to work with a factory that had a bunch of really big machines. Turn everything on at once and the transformer out on the pole self-destructed. Note that the breakers didn't pop--the startup transient was short enough. The power company wasn't happy.
Or look at Apollo 13. The astronauts had turned off everything possible because they had lost their generator and only had their batteries. And it took a lot of furious planning by the guys on the ground to come up with a sequence of turning things back on that didn't cause the peak draw to go too high. Can't go too fast or it trips. Can't start too early because the power is limited, but can't start too late because the systems have to be up when they hit the atmosphere.
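The Apollo 13 power-up problem described above is essentially scheduling under a peak-power constraint. A hypothetical greedy sketch (device names, surge figures, and the budget are all invented):

```python
# Toy sketch: turn systems on one at a time so the instantaneous draw
# (startup surge plus everything already running) never exceeds the budget.
# Greedy and simplistic; assumes each surge eventually decays enough
# that the next device can fit.

def schedule(devices, budget, surge_time=2):
    """devices: list of (name, surge_w, steady_w). Returns (start_time, name)
    pairs such that total draw stays within budget at every second."""
    t, running, starts = 0, [], []   # running: (surge_end, surge_w, steady_w)
    for name, surge_w, steady_w in devices:
        while True:
            # current draw = steady loads + extra power of active surges
            draw = sum(s for _, _, s in running) + \
                   sum(sw - s for end, sw, s in running if end > t)
            if draw + surge_w <= budget:
                break
            t += 1                   # wait for an earlier surge to decay
        starts.append((t, name))
        running.append((t + surge_time, surge_w, steady_w))
        t += 1
    return starts

devices = [("guidance", 40, 15), ("radio", 30, 10),
           ("heaters", 40, 20), ("telemetry", 25, 10)]
for t, name in schedule(devices, budget=70):
    print(f"t={t:2d}s  start {name}")
```

The real problem was far nastier (limited total energy, hard deadlines, dependencies between systems), but the shape of the constraint is the same.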
I did a project where we predicted transformer failure (they blow up!) from changes in the oil that they have in them (its insulative properties suddenly change). This was 30 years ago so it's all a bit fuzzy, but the one thing that really stuck with me was the story that the SME that we were working with had about the UK grid teetering on the edge of failure during the "great storm" of 1987. His telling was that they really were unsure if they would get it back at all!
> It's one reason it's a good idea to always have some cash at home.
More than cash, it was important yesterday to have the following, in case the outage had lasted longer:
- a battery powered am/fm radio with spare new batteries
- some candles and matches
- food reserves for a few days that don't need refrigeration: bread, canned goods, pasta, rice...
- some kind of gas or alcohol stove, dry wood or BBQ charcoal: you can always make a fire in the middle of the street, where there is no risk of setting fire to things around you.
- a water reserve (I always have around 24L of drinking water); and since I hate waste, I regularly fill jerrycans while waiting for the shower water to run hot, and use that for manual washes (kitchenware or gear).
Does solar power make this process easier or harder? I know that with thermal plants you have a spinning mass that you have to synchronize, and phase shift is used to assess how hard the plant is working (and whether to trip a disconnect as we see here)
But with solar, how is the synchronization provided? In like a giant buck? Or in software somehow? Does the phase shift matter as much as in the electromechanical systems?
My intuition is that solar would make the grid harder to keep stable (smaller mass spinning in sync) but also may offer more knobs to control things (big DC source that you can toggle on/off instantly.. as long as sun is out). But I don’t actually know.
Mike_hearn's comment was grey but was correct: phase following is indeed done through software in the inverter. Phase matching is still required; wherever the phase difference is not zero there is a deadweight loss of power as heat.
Currently the main driver of battery deployments is not so much energy-price time arbitrage as "fast frequency response": you can get paid for providing battery stabilization to the grid.
So if you have a smarter solar panel, or a smart battery, you can stabilize the grid. I’m assuming that all of the traditional software complexity things in distributed systems apply here: you want something a little bit smart, to gain efficiency benefits, but not too smart, to gain robustness benefits.
My intuition is that bringing the market into it at small timescales probably increases efficiency significantly, but at the cost of robustness (California learned this "the hard way" with Enron).
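For the curious, here is a minimal sketch of how "phase following through software" typically works: a single-phase PLL. The measured voltage is multiplied by the PLL's own cosine, low-pass filtered to extract the phase error, and a PI controller steers an internal oscillator. Gains, the filter corner, and the loop rate are illustrative, not from any real inverter firmware:

```python
# Toy single-phase phase-locked loop; all tuning values are illustrative.
import math

DT = 1e-4              # control-loop period, s
F0 = 50.0              # nominal grid frequency, Hz

def run_pll(grid_freq, t_end=1.0, kp=100.0, ki=2000.0):
    theta_grid = 0.0   # actual grid phase (only v below is "measurable")
    theta = 0.0        # PLL phase estimate
    omega = 2 * math.pi * F0
    integ = err_lp = 0.0
    omegas = []
    for _ in range(int(t_end / DT)):
        theta_grid += 2 * math.pi * grid_freq * DT
        v = math.sin(theta_grid)          # measured grid voltage, p.u.
        # phase detector: v*cos(theta) = 0.5*sin(err) + a 2f ripple term
        err = v * math.cos(theta)
        # ~20 Hz low-pass to knock down the double-frequency ripple
        err_lp += (err - err_lp) * DT * 2 * math.pi * 20
        integ += err_lp * DT
        omega = 2 * math.pi * F0 + kp * err_lp + ki * integ
        theta = (theta + omega * DT) % (2 * math.pi)
        omegas.append(omega)
    # report the frequency averaged over the last full cycle
    return sum(omegas[-200:]) / 200 / (2 * math.pi)

locked = run_pll(49.8)   # grid running slightly under nominal
print(f"locked frequency: {locked:.2f} Hz")
```

Real firmware wraps this in ride-through limits and anti-islanding logic, which is exactly where the sudden trip-out behaviour of grid-following inverters mentioned elsewhere in the thread comes from.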
> Phase matching is still required, wherever the phase difference is not zero there is a deadweight loss of power as heat
If the electronic controller is “ahead of” (leading) the grid, then that heat would come from the solar plant; if it is “behind” (following) then that heat comes from the grid. Is that right? And likely, solar plants opted for the simplest thing, which is to always follow, that way they never need to worry about managing the heat or stability or any of it.
I wonder if the simplest thing would be for large solar plants to just have a gigantic flywheel on site that could be brought up via diesel generators at night…
Most solar and wind plants follow the inertial lead of the thermal plants. They can't synchronize without enough thermal generation being online. Supposedly there are efforts to change that, I don't know enough about grid engineering to say how far along that might be in Spain.
> But with solar, how is the synchronization provided? In like a giant buck? Or in software somehow? Does the phase shift matter as much as in the electromechanical systems?
If you mean how does solar act to reinforce the grid: search for terms like "grid forming inverter vs. grid following inverter" though not all generators are the same in terms of how much resilience they add to the grid, esp. w.r.t. the inertia they do or do not add. See e.g. https://www.greentechmedia.com/squared/dispatches-from-the-g...
Harder, mostly. The frequency is set by huge rotating masses in the form of generators. When supply and demand are matched, the frequency and voltage are stable; when demand dramatically increases, it pulls the frequency and voltage down, effectively slowing the generators as load / magnetic drag increases with the current drawn. Having large inertial masses spinning actually helps smooth out frequency changes. While large solar farms can and do synchronise with the grid, they are reactive and do not add the same smoothing effect as humongous spinning masses.
Low grid frequency and voltage can cause increased current and heating in transmission lines and conductors, and can damage the expensive things; this is why these systems trip out automatically at low frequency or low voltage, and why load shedding is necessary.
> Harder mostly, See the frequency is set by huge rotating masses in the form of generators
I'm not saying you're wrong, but this isn't obviously correct to me.
Since solar going to a grid is completely dependent upon electronic DC->AC conversion, I would expect that it could follow much greater frequency deviations for much longer than a mechanical system that will literally rip itself apart on desync.
The real reason that small scale solar PV is grid following (i.e. it depends on an external voltage and frequency reference) is that this ensures power line safety during a power outage. That's it.
An inverter can be programmed to start in the absence of an external reference and it can operate at a wide range of frequencies.
About 10 years ago I had a chance to work in the utilities vertical for a power producer. I asked the same question — how is the frequency set? — and the answer was that the biggest power plant sets the frequency, because it can produce a lot of power and no other, smaller generator would be powerful enough to change the frequency.
Sure, the smaller plant gets pulled into the larger plant's inertia.
However, DC-AC converters don't have an inherent inertia. They can follow almost any frequency and phase within reason. Certainly a DC-AC converter should be able to respond way faster than any frequency/phase changes that a mechanical system can generate.
In theory, they should be able to set themselves to be ever so slightly closer to ideal so that the amount of power they have to sink is limited but are still exerting a very slight force to bring the grid back into compliance rather than continuing to add load which propagates the collapse.
I'd say a little harder to negligible now, but potentially way easier in the future.
The main difficulty is that the software of grid-following inverters tend to make them trip out very suddenly if the grid parameters get too far out of spec (they will only follow the grid so far), but once the grid is good they basically instantly synchronise.
But all large solar farms are likely to be mandated to switch from grid-following to grid-forming inverters eventually, which will make them beneficial for grid security because they will help provide 'virtual inertia' that looks exactly the same to the rest of the grid as spinning mass does.
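The headline behaviour of a grid-forming inverter, P-f droop, is simple enough to sketch. This toy steady-state calculation (ratings and droop values are invented) shows two paralleled grid-forming units settling on a common frequency and sharing load in proportion to rating:

```python
# Hypothetical droop-sharing sketch: each grid-forming unit sets its own
# frequency as f = f0 * (1 - droop * P/S). Paralleled units therefore
# settle at one common frequency. All numbers are illustrative.

F0 = 50.0

def droop_share(units, load_mw):
    """units: list of (name, rating_mw, droop_pu). Returns the common
    steady-state frequency and each unit's share of load_mw."""
    stiffness = sum(s / d for _, s, d in units)   # MW per p.u. frequency drop
    freq = F0 * (1 - load_mw / stiffness)
    shares = [(name, s * (1 - freq / F0) / d) for name, s, d in units]
    return freq, shares

units = [("solar-A", 100, 0.05), ("solar-B", 50, 0.05)]
freq, shares = droop_share(units, load_mw=60)
print(f"common frequency: {freq:.2f} Hz")
for name, p in shares:
    print(f"{name}: {p:.1f} MW")
```

The droop curve is what makes a unit "grid forming": it announces a frequency as a function of its own loading instead of waiting to measure one, which is also what lets paralleled units share load without communicating.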
As the press release you linked points out, the black start plan Spain trains on uses nuclear energy supplied from France to re-energize the power plants.
"Luckily", France is at an historically high level of production capacity at the moment and the connection between the two countries was reestablished fast.
According to RTE (French network manager), the interconnection was maxed yesterday at around 3GW of power.
Sadly, while Spain is part of CESA, it's not very well connected. I wouldn't be surprised if one of the takeaways from the whole incident is that more interconnections are needed.
Ukraine is interesting in this context because there are so many generators. In the richer parts of Odesa I've even found it hard sometimes to tell whether or not the grid power is on as literally every single building had sufficient backup generators to keep the lights on (also, many big businesses seemed to run their generators even when the power was on, I presume as a civic-minded way to add generation capacity overall and avoid interruptions when the power flips on and off).
Generators are not in sync with the grid though. And that's the hardest part.
Here's why generators were running here despite the grid being available: a generator has a fairly short lifetime, and in order to prolong it, some owners learned to run them on an optimal schedule, which sometimes requires a minimum amount of time per running cycle. Thus once you start it, you are committed to running it for X hours.
> Generators are not in sync with the grid though. And that's the hardest part.
I know. What's interesting re: Ukraine is because there are so many generators there are more options to getting power sufficiently restored for normal life than just rushing to restore the entire grid.
> Thus if you started it you are committed to run for X hours.
I'm skeptical this is the main reason. While fast starting a diesel generator is hard on it, there are other, slower, ways to start big diesel generators with minimal impact on lifespan. The blackouts in Ukraine are almost all on a schedule, so big buildings with dedicated staff and expensive generators can and do startup their generators in advance (I've personally heard this happen on occasion - big generators spooling up multiple minutes in advance of the scheduled blackout).
Also, it makes sense to turn on generators in advance anyway: gives you a chance to diagnose any issues.
Perhaps not the main reason, but heavy devices do indeed require a minimum amount of running time. I think small devices do as well, just not as long. Honestly I don't remember what we did back then: so many things were going on at the same time, and the longevity of generators was not very well known at the time )
And if you are a mere mortal in this world, play Factorio's Space Age expansion on the planet Aquilo, to learn the precious importance of reliable multi-stage power bootstrapping.
Power never went out completely across the country. At the lowest point consumption was ~40% of normal for that time of day.
Ukraine went through many black starts in the first winter of Russian strikes against its energy system. I guess they built up the skill of recovering quickly enough that each subsequent restart happened faster and more easily.
I would think renewable infrastructure could be the fix, at least once you start installing larger battery capacity to cover renewable storage and usage shifts: the grid is then essentially installing resources that can also be used to respond to and contain sudden source losses and prevent cascades.
I wonder if someone could build a realistic scenario into a game — let's say some sort of smaller-scale black start for, say, a space station. And throw in an unknown computer architecture for the in-game computers so that players need to RTFM to figure out how it works.
I took down the servers though, so you probably can't easily try it. I don't know if I added a way to configure the lobby server. I should have! It's open source though. And there is a video about that thing on my YouTube: https://www.youtube.com/watch?v=6TPgfa7LbiI
The game is bad and nothing of what we planned on doing actually made it into the game. The video is long and boring too. But maybe someone finds this cool and is inspired by this and makes a game like this.
The first 15 minutes of the game were actually about getting the ship moving, first by reading the manuals of half a dozen different ship systems and then following some procedure outlined in those manuals (parts of which were simply incorrect), maybe having to do some things in sync with the other players, and stuff like that. I think it would have been cool to add multiple reactors and start them up in sync and so on. The different ship systems were actually Lua programs that interacted via a message bus. So, kind of an unknown computer architecture?
For maybe the first 24 hours at a grocery store, and then not so sure. Would your neighbors sell you supplies and food? Maybe not? And so many places now depend on cashless transactions and doubtful they have pen, paper, lockbox, and safe as a contingency plan.
It would be essentially the same thing as a grid black start, except that the first breaker to close has the European grid on its primary side, instead of a freshly started generator under your control.
The complex process of configuring the transmission network to bring grid power to each power plant in succession is the same.
The continental part of the EU runs on one synchronized grid. The Nordics (except half of Denmark and, uh, Iceland, Greenland etc.) run on a separate synchronized grid.
I'm confused. Would the start ever have to be truly black? Wouldn't water always be driving hydroelectric turbines, generating some electricity? Solar panels generate electricity without requiring input. I understand that synchronizing AC is not trivial, I'm only questioning the part about whether the start is truly black.
The tills have keys to manually open them, and you can just record transactions on pen-and-paper if needed and enter them later. I've seen plenty of businesses do exactly that during power outages. Not as fast. But totally doable.
Stores with tills and freezers etc. will have power for the tills, but the backing network for payments probably won't be up. That's the concern with being cashless: they can accept cash, but no one has any.
I was able to buy some groceries and pay with card. The tills had a battery backup and the network infrastructure that supports card payments was apparently working.
That said, lots of people hit the cafés and had to resort to cash payments. There were also lots of people buying bottled water at the shops.
So basically, you could divide people in two groups. Those that took it like an extra Sunday, and those that took it like the beginning of a war or something :-D
Most people around here seemed balanced. Yes, a lot of people were outside at bars/cafés enjoying a spare Sunday, but they were also preparing quietly and slowly in case it took a longer time. All the stores we visited had run out of batteries, radios and such, and I only managed to make one card payment, sometime around 13:30; after that the internet stopped working completely until the night. Almost no supermarkets were open around here, except maybe one or two that seemed to have generators.
>It's one reason it's a good idea to always have some cash at home.
Most places are so dependent upon electricity that they can't even take cash during a blackout. And they don't even have the mechanical machines to take a credit card imprint anymore.
The last of my raised number credit cards went away last month. Those old machines will only get a blank rectangle from me. Sad, because I did actually use one of those about 3 years ago when a rural gas station had power but no network.
A network connection is not required for mag-stripe or EMV operations, but the terminal has to be configured accordingly (and has to have some way to send a batch of transactions for settling later). It's less common now, but fully supported.
I actually have no real clue how and where Spain/Portugal is connected to the rest of Europe, but could they also restart north to south with help from the French grid?
The grid uses AC (not DC), running at 50 Hz (cycles per second). So the voltage is going up and down at that frequency, in a sine-wave pattern.
If you try to connect another generator to the grid, it needs to be at the same point (phase) in the sine-wave cycle, so that its power contribution is added, not subtracted.
If it's not in sync, huge currents can flow, causing damage. Sort of like connecting jumper cables backwards.
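To put rough numbers on the jumper-cables analogy: the current through a tie between two sources is driven by the voltage *difference* across it, which for equal magnitudes is 2·V·sin(δ/2). A hypothetical sketch (the tie reactance and voltage level are illustrative):

```python
# Toy estimate of closing current vs. phase mismatch; numbers illustrative.
import math

V = 400e3 / math.sqrt(3)   # phase voltage of a 400 kV system, V
X = 20.0                   # tie reactance, ohms

def closing_current(delta_deg):
    """Current driven through the tie when closing delta_deg out of phase."""
    delta = math.radians(delta_deg)
    return 2 * V * math.sin(delta / 2) / X

for delta in (1, 10, 60, 180):
    print(f"{delta:3d} deg out of phase -> {closing_current(delta)/1e3:6.1f} kA")
```

A degree or two of mismatch is survivable; closing in anti-phase drives a current comparable to a dead short, which is why synchronizing breakers check voltage, frequency, and phase angle before they will close.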
The bulk of the power grid is alternating current (AC), and the frequency of the resulting sine wave needs to be synchronized with the other parts of the grid it is connected to.
Edit: There's a press release about a 2016 black start drill in Spain/Portugal here: https://www.ree.es/en/press-office/press-release/2016/11/spa...