Yesterday's Google outage was a pool of routing servers crashing (twitter.com/uhoelzle)
124 points by tyingq on Sept 25, 2020 | hide | past | favorite | 32 comments


Dependencies on big providers like Google, Microsoft, and Cloudflare keep increasing, which means that when even one of them fails, the result is failure on a wide scale. Distribution is the key.


Well, for the vast majority of simple apps you're better off failing when everybody else is. People will blame you less. When your alternative solution fails while everything else seems to be up, the blame will fall on you.


I always prefer to have a backup solution that can at least crawl through these situations, even if it can't walk. I see many SaaS products relying only on google/twitter/fb auth, but they need to understand that having their own system too won't hurt them much.
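
Roughly what I have in mind, as a toy sketch (the helper names are made up, not any real provider SDK):

    import hashlib, os

    # Toy sketch only: keep a local credential store next to third-party
    # sign-in so logins can limp along when the OAuth provider is down.
    # verify_oauth_token is a made-up stub standing in for the provider call.

    LOCAL_USERS = {}  # email -> (salt, sha256(salt + password) hex digest)

    def register_local(email, password):
        salt = os.urandom(16)
        LOCAL_USERS[email] = (salt, hashlib.sha256(salt + password.encode()).hexdigest())

    def verify_oauth_token(token):
        raise ConnectionError("provider unreachable")  # simulating the outage

    def authenticate(email, password=None, oauth_token=None):
        if oauth_token is not None:
            try:
                return verify_oauth_token(oauth_token)   # normal path
            except ConnectionError:
                pass                                     # degraded path below
        salt, digest = LOCAL_USERS.get(email, (b"", ""))
        if digest and hashlib.sha256(salt + (password or "").encode()).hexdigest() == digest:
            return email
        raise PermissionError("authentication failed")

    register_local("alice@example.com", "hunter2")
    print(authenticate("alice@example.com", password="hunter2", oauth_token="stale-token"))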


Google could probably do a better job here and not put so many services on the same pool of L7 devices. Separate pools with smaller groupings would reduce the blast radius.
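
Something like this, very roughly (service names and the pool count are placeholders, not how Google actually shards):

    import hashlib

    # Toy numbers: shard services across a few independent L7 pools instead
    # of one shared pool, so losing a pool only takes out the services
    # assigned to it.

    SERVICES = ["gmail", "meet", "docs", "youtube", "gke", "firebase", "stadia", "voice"]
    NUM_POOLS = 4

    def pool_for(service):
        digest = int(hashlib.sha256(service.encode()).hexdigest(), 16)
        return digest % NUM_POOLS

    failed_pool = 2
    affected = [s for s in SERVICES if pool_for(s) == failed_pool]
    print("blast radius:", affected)  # roughly 1/4 of services instead of all of them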


In a follow up tweet, he mentions a post mortem is coming. Where would that get posted?


It tends to be linked from the outage itself, probably here: https://status.cloud.google.com/incident/zall/20010

For example, PM was posted on this previous outage: https://status.cloud.google.com/incident/cloud-networking/19...


As this impacted G Suite much more than GCP, it's possible it will be posted on https://www.google.com/appsstatus instead.


Anyone know what kind of routing servers? BGP?


(Googler, opinion is my own, I know nothing about this specific outage).

Google has LOTS of internal routing systems. BGP is about announcing which IPs a given network can handle, which is not the case here.
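
(Very roughly, that kind of prefix-based routing looks like the sketch below; the prefixes and AS numbers are only illustrative.)

    import ipaddress

    # Toy illustration of the distinction: network routing picks a next hop
    # by the longest matching announced prefix and knows nothing about the
    # request itself. Entries below are illustrative, not real announcements.

    ANNOUNCEMENTS = {
        ipaddress.ip_network("8.8.8.0/24"): "AS15169",  # a Google prefix
        ipaddress.ip_network("8.0.0.0/9"):  "AS3356",   # some transit network
        ipaddress.ip_network("0.0.0.0/0"):  "AS174",    # default route
    }

    def next_hop(dst_ip):
        addr = ipaddress.ip_address(dst_ip)
        candidates = [n for n in ANNOUNCEMENTS if addr in n]
        best = max(candidates, key=lambda n: n.prefixlen)  # longest prefix wins
        return ANNOUNCEMENTS[best]

    print(next_hop("8.8.8.8"))  # AS15169
    print(next_hop("8.1.2.3"))  # AS3356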

Before hitting application-level routing, I believe you hit Maglev[0]. It seems unlikely this was the cause, as it would likely have taken down all services.

One of the first well-known application-layer balancers you hit is the GFE[1][2]. It's similar to an HTTP reverse proxy, but Google-made. I could definitely see this being the cause.

[0] https://static.googleusercontent.com/media/research.google.c...

[1] https://cloud.google.com/security/infrastructure/design#goog...

[2] https://landing.google.com/sre/workbook/chapters/managing-lo...
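
For the curious, here's a rough sketch of the lookup-table construction described in the Maglev paper [0]; the hash choices and names are simplified stand-ins, not the real implementation:

    import hashlib

    # Each backend derives a permutation of table slots from two hashes of
    # its name, and backends take turns claiming their next unclaimed slot
    # until the table is full. Connection 5-tuples then hash to a slot.

    M = 65537  # table size, prime, as in the paper

    def _h(s, salt):
        return int(hashlib.sha256((salt + s).encode()).hexdigest(), 16)

    def build_table(backends):
        offset = {b: _h(b, "offset") % M for b in backends}
        skip = {b: _h(b, "skip") % (M - 1) + 1 for b in backends}
        nxt = {b: 0 for b in backends}
        table = [None] * M
        filled = 0
        while filled < M:
            for b in backends:
                while True:
                    slot = (offset[b] + nxt[b] * skip[b]) % M
                    nxt[b] += 1
                    if table[slot] is None:
                        table[slot] = b
                        filled += 1
                        break
                if filled == M:
                    break
        return table

    def pick_backend(table, five_tuple):
        return table[_h(five_tuple, "flow") % M]

    table = build_table(["backend-1", "backend-2", "backend-3"])
    print(pick_backend(table, "203.0.113.7:51234->8.8.8.8:443/tcp"))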


Does that match the list of reported stuff that was down? It appeared to hit a wide range of services. Gmail, Analytics, GKE, Google Keep, Meet, YouTube, GCE buckets, Sheets, Docs, Calendar, Stadia, Firebase, Voice, Music, Nest. From the thread: https://news.ycombinator.com/item?id=24585478


Neither Maglev nor GFE are usually tied to a specific service nowadays, so it could still be either of them. Way back when, some teams or services such as Checkout had to run their own private pool of GFEs. Given Urs' mention of backends, I am slightly inclined toward GFE.


Sounds more like an application load balancer issue ("routing requests" seems to imply L7) than network routing, but I might be misunderstanding.


I don't know the details of Google's networks, but I assume something like their Maglev load balancers: https://research.google/pubs/pub44824/


Traffic entering Google's network hits a bunch of front ends that route traffic to the relevant back ends. I'd guess it's those application-level front ends that were having trouble, rather than anything network-level like BGP.
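
Roughly like this, as a toy sketch (hostnames and pool names invented):

    # Application-level front-end routing: the front end picks a backend pool
    # from the request itself (the Host header here), a different layer from
    # BGP/prefix routing.

    BACKEND_POOLS = {
        "mail.google.com": ["gmail-be-1", "gmail-be-2"],
        "meet.google.com": ["meet-be-1"],
        "www.youtube.com": ["yt-be-1", "yt-be-2", "yt-be-3"],
    }

    def route(host, flow_id):
        pool = BACKEND_POOLS.get(host)
        if pool is None:
            return None                    # unknown host -> error at the front end
        return pool[flow_id % len(pool)]   # crude spread across the pool

    print(route("mail.google.com", 12345))  # -> one of the gmail backends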


There's a huge """secret""" Google data center in Council Bluffs, Iowa that appears to be in the finishing phases of construction. Yesterday I talked to a union worker who is moving to Des Moines tonight to work on a new Microsoft data center there; it appears that work is drying up at this data center, and a lot of the travelling blue-collar folk are leaving the area.

I wonder if this data center apparently coming partially online is part of the problem?

Also, after this he's likely to work on an Amazon fulfillment center next year. I'm impressed by all the (albeit temporary) blue-collar jobs FAANG is creating at the moment!


LOL you mean this one that has a big sign out front that's been there for years and years?

https://www.google.com/maps/@41.2197694,-95.8658016,3a,89.3y...


OMG just at the end of the fence there's A BACKHOE RIGHT NEXT TO BURIED FIBER INDICATOR POLES!!!

https://www.google.com/maps/@41.2196753,-95.8611598,3a,75.4y...


The predator and the prey in their natural habitat.


NSA?


FWIW he's probably talking about this one, which is newer/still under construction and not as well known (although it's certainly not "secret").

https://goo.gl/maps/sdJYnU4dsSNedrLfA


I should have been less esoteric, but yes this one. From what I can tell a gaggle of Google employees have taken over control of the building and I'd assume this absolute unit is in the process of coming online. Sure there's a Google sign visible from the private road leading up to it but you'd still have to be a nosy local (or apparently an all-knowing HN reader) to know the exact location.


Even that one opened 7 years ago.


One building of it opened in 2013 (and another in 2016 AFAIK), but it is currently still under construction (in the Google Maps view, the entire construction site south of the completed buildings is also slated to be Google DCs).



Aren't there GCP regions in Los Angeles and Salt Lake City? Interesting that those DCs don't seem to be on this public list.


no.


I think by now Google has lots of experience with bringing data centers online, so this seems incredibly far fetched to me.


Not super well known, but Google has quite a few data centers that aren't used for Google Cloud and are reserved for internal use.


Probably the other way around, no?


No, he was correct.


Spoiler alert: it's not.


Oof. This DC has been around since 2007. I can make a VM in it on GCP right now :-)



