The ATC was very much aware of the launch in progress[0]. The ATC made a mistake by instructing DL480 to fly a more northern route than the aircrew originally filed.
"Rich asshole makes demeaning comments about situations they don't know" More at 11.
Seriously though, how out-of-touch do you have to be to think that entire organizations full of smart people are so incompetent as to have 100% bloat?
Could you lay off 50% of Google today and still have a functional, competitive product tomorrow? Yes.
Would you still have a functional, competitive product in 2 years? No.
Engineers are building and maintaining the product, SREs are keeping the jobs, servers, and infrastructure humming, sales is bringing in new clients and keeping existing clients happy, marketing is... OK, maybe Google does need a marketing overhaul...
Deep cuts will also signal to employees and competitors that blood is in the water. Any remaining loyalty would evaporate and MSFT, AAPL, AMZN, & META would all be eager to scoop up that talent to build competing products to Google Search & Ads and bolster their cloud & AI offerings.
It's a problem I want to tackle correctly, so that you'd need to put as little thought into it as possible. It should "just work". Vercel's skew protection [1] stands out as a recent example of doing this well.
> It looks like you're using Flutter's Dart<=>JSON serialization; do you recommend using built_value for immutable data structures?
JSON was chosen as the primary serialization format for the reasons mentioned here [2]. Primarily, familiarity to Flutter developers, availability of JSON-compatible types in the wild, and integration with non-Dart clients.
built_value types can be used in Celest currently by giving the class `fromJson`/`toJson` methods. I haven't implemented auto-serialization for them, yet, but it should be straightforward. I've used built_value heavily in the past and agree there's no better alternative for some use cases.
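The convention described above is just a pair of methods the framework can discover on a class. Sketched here in Python rather than Dart for brevity (the `Point` class and field names are hypothetical, purely illustrative of the fromJson/toJson pattern):

```python
import json

class Point:
    """Hypothetical value type that opts into serialization by convention:
    expose fromJson/toJson-style methods and the framework can round-trip it."""
    def __init__(self, x: int, y: int):
        self.x, self.y = x, y

    @classmethod
    def from_json(cls, data: dict) -> "Point":
        return cls(data["x"], data["y"])

    def to_json(self) -> dict:
        return {"x": self.x, "y": self.y}

# Round-trip: wire JSON -> value type -> wire JSON
p = Point.from_json(json.loads('{"x": 1, "y": 2}'))
assert json.dumps(p.to_json()) == '{"x": 1, "y": 2}'
```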
> Do you support protobuf/cap'n'proto?
In the future, I plan to support more serialization formats, including protobuf and binary protocols. I'll check out Cap'n Proto; it wasn't on my radar yet.
I suspect it's not supposed to be taken at face value. It's a reference to the Harvard admissions process, where Asian applicants seem to get suspiciously low "personality scores" compared to white applicants.
While I'm 100% positive the details of operational concerns like this are classified, there are 2 distinct types of submarines today with 2 different objectives:
1) Attack Submarines (e.g. Los Angeles-class & Virginia-class for USN) which usually roam within a designated operations area, surveilling, tracking, and generally keeping tabs on other nations' surface & sub-surface fleet dispositions. These subs typically have multi-week sorties and may intermittently surface for surveillance & comms.
2) Ballistic Missile Submarines aka "Boomers" (e.g. Ohio-class for USN) which are given a strategic area in which to operate and their objective is to remain silent & undetected, waiting for the hopefully-never-coming order to launch their SLBMs. These subs usually have multi-month sorties and often don't surface until the end of their patrol.
Surely the ballistic missile submarines surface intermittently for comms as well? If not, they won't know when to launch their missiles, making them not very useful as a deterrent.
I have often wondered how close to the surface they need to get. I would presume retractable antennas could be extended from a sub at a non-trivial depth. Or cables attached to buoys. Or something much smarter that I have not thought of yet.
There are a couple of different "wake up" signals that can reach deeper into the water. Their biggest limitation is very low bandwidth, so an attack sub will surface (or send up a buoy on a tether) to get an updated tactical map.
Also highlighting the E-6B Mercury (and the upcoming EC-130J), which among other communication options has a 5-mile (!) VLF antenna it deploys vertically in midair (!!) to establish limited-bandwidth communications with submarines.
UniFi Protect[0] is a decent on-prem solution and has all the main features of Nest/Ring. Certainly expensive though, minimal system for a doorbell cam is $199 for the camera[1] + $199 for the smallest NVR[2].
I use UniFi devices throughout my home but the cameras (G3 specifically) are buggy and frequently disconnect, and don't auto reconnect. Basically useless.
Over the year-end holidays I was traveling and set one up to monitor my front yard. There was actually an incident while I was gone, and I remoted in to find the camera offline; it totally missed it when it should have had a perfect perspective. The police asked me for video, and in this case I would have shared it, but alas I could not. Sucks, as I have the CloudKey box for video storage, but it's very undependable in my experience.
This has not been my experience with mostly G4 pro hardwired PoE cameras. I have their G4 doorbell and did have similar problems until I upgraded its transformer and pointed an access point directly at it. Been smooth sailing ever since.
Appreciate the feedback, glad to hear that I am an anomaly. The camera is on WiFi and I have a mesh network with multiple UniFi APs. Both get good signal where the camera is located. Even in the same room as the AP my max uptime is about 1 day before it disconnects.
I ended up making a dedicated 2.4GHz-only SSID for my wifi cameras, which seems to have helped. I think forcing them to 2.4GHz, at least for my house RF situation, was the thing that helped the most.
I have a smallish Protect system (UNVR, four G4 Pros, one doorbell, two G3 Instants).
The positives are that it pretty much just works. The mobile app is excellent, the web app on the UNVR is fine, and it has full spousal approval factor.
I have had very few issues with the system, primarily just the doorbell was unreliable until I upgraded the transformer and put an access point right next to it. I had an issue with the NVR right before it went out of warranty and I fixed it by replacing the internal USB drive with an SSD.
The negatives are that it's more costly than other options, Ubiquiti has had perennial stock problems over the last few years, and you're locked into their ecosystem. The NVR won't work with generic cameras and you can't run the software on your own hardware.
It's also possible that, if you have their remote access proxy set up (required for mobile app), you could be subject to the same warrant issues as with Ring.
The access points, PoE switch, and firewall work fine. My only dissatisfaction has been the G3 camera. As others have stated, it's a big investment, so I'm not moving off the ecosystem today. However, when it's time to upgrade everything, I will certainly shop around.
Aren't most of those PoE (the doorbell being the exception - it appears powered by a regular doorbell power supply)? So you have to run cables to each camera. Not an easy undertaking for most.
1) This is solved by 2 interlocking concepts: comprehensive tests & pre-submit checks of those tests. Upgrading a version shouldn’t break anything because any breaking changes should be dealt with in the same change as the version bump.
2) Google’s monorepo allows for visibility restrictions; publicly-visible build targets are not common and are reserved for truly public interfaces & packages.
3) “Code churn” is a very uncharitable description of day-to-day maintenance of an active codebase.
Google has invested heavily in infrastructural systems to facilitate the maintenance and execution of tests & code at scale. Monorepos are an organizational design choice which may not work for other teams. It does work at Google.
Effort to update something is high because there's a lot of code, not because it's in a monorepo. Updating the same code scattered across multiple repositories takes as much work in the best case. More realistically, some copy of the same code will stay unupdated because the cost to track down every repository in the company is too much.
Can definitely feel this pain personally. We need to upgrade tooling across a dozen or so services, and we're investigating how to migrate given potentially incompatible upgrades. So do we just suffer an outage while we merge PRs across some 20 repos? The atomic changes of a monorepo are very beneficial in these cases, removing the manual orchestration of GitOps practices segmented across individual services.
When you say it's "as much work" there's an assumption the code is still used. This was years ago, but when I was doing migrations at Google we sometimes had to deal with abandoned or understaffed and barely maintained code. (Sometimes by deleting it, but it can be unclear whether code by some other team is still useful.)
If you're not responsible for fixing downstream dependencies then you don't need to spend any time figuring that out.
Sounds great to me because you are forced to delete code that's not in use anymore. Without the monorepo, that code would still be there with old libraries that are potentially insecure.
Deleting code that is not being used anymore happens way too rarely in my opinion.
The downside is that if a product no longer has maintainers, you are now encouraged to shut it down, even if it still works and doesn't cost much to run.
If a product no longer has maintainers, it's probably because it's not worth it for the company. So it makes sense to delete it, from the company's point of view.
The flip side is that services with an immediate need will get upgraded, and others won't, and six months later you will be saying "Why am I still seeing this bug in production, I already fixed it three times!"
Of course, the problem can be mitigated by a disciplined team that understands the importance of everybody being on the same page on which version of each library one should use. On the other hand, such a team will probably have little problem using monorepo in the first place.
Whether you have a monorepo or multiple repos, a good team will make it work, and a bad team will suck at it. But multiple repos do provide more rope for inexperienced devs to tie themselves up with, in my opinion.
I don't think that's quite true. In my experience multi-repos have the edge here.
If you have one key dependency update with a feature you need, but you need substantial code updates and 80 services depend on it, that may be impossible to pull off no matter what. Comparatively, upgrading one by one may not be easy, but at least it's possible.
The importance of everyone being on the same page with dependencies might just be a limitation of monorepos rather than a generally good thing. Some services might just not need the upgrade right now. Others may be getting deprecated soon, etc.
There are languages/runtimes where you cannot have two different versions of the same thing in one binary (they eagerly fail at build time or immediately crash at run time). That is not the case for JavaScript, Rust, etc. But it is the case for C++, Java, Go, Python, and more.
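Python makes the single-version constraint easy to demonstrate: the interpreter caches modules by name alone, so a "second version" of a dependency can never be loaded alongside the first. A minimal sketch (the `gamma` naming is purely illustrative, standing in for two versions of one internal library):

```python
import sys

# Python caches loaded modules in sys.modules, keyed by name alone,
# so two versions of the same package cannot coexist in one process.
import json as gamma_v1   # pretend this is gamma@1
import json as gamma_v2   # a "second import" just returns the cached module

assert gamma_v1 is gamma_v2          # one module object, not two versions
assert sys.modules["json"] is gamma_v1
```

Contrast with npm, where `node_modules` nesting lets alpha@1 and beta@1 each bundle their own copy of gamma.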
Everyone claims different needs if they can. Nothing could be linked together anymore if you just let everyone use whatever they want.
Or maybe people start to work around this by... reinventing the wheel (effectively forking and vendoring) to reduce their dependency graph.
There is a genuine need for a single instance of every third-party dependency. It is not unique to monorepos. A monorepo (with corresponding batch-change tooling) just makes this feasible, so you don't hear about this concept for manyrepos and mentally bind it to monorepos.
Thanks. I'm not familiar with Java. I thought multiple classloaders were more like dlmopen (which doesn't help much - symbol visibility is hard), because I saw people struggling with classpath conflicts etc.
It is basically how application servers got implemented: every EAR/WAR file gets its own classloader, and there is a hierarchy that allows overriding search paths.
That is how I managed, back in the day, to use JSF 2.0 on WebSphere 6, which officially did not have support for it out of the box.
How many internal libraries do your "separate services" contain? Your service A depends on library alpha@1, your service B depends on library alpha@2. All happy now. Introduce another layer: your service A depends on alpha@1 and beta@1, alpha@1 depends on gamma@1, and beta@1 depends on gamma@2. What to do now? It does not even matter how many services you have.
With JavaScript this does not apply: alpha@1 can have its own gamma@1, and beta@1 can have its own gamma@2. But the same does not hold for most languages.
left-pad is both amazing and sad. It's amazing because JS's "bundle the entire dependency closure" approach, combined with npm's infrastructure, successfully drove the usability of software reuse to the point that people even bother to reuse left-pad. This is beyond what well-regulated corporate codebases can achieve (whether a single instance is strongly encouraged or not, manyrepo or monorepo), and it happens in the open. It is sad because, without regulation, people tend to reuse too aggressively, causing, well, left-pad.
> How many internal libraries do your "separate services" contain? Your service A depends on library alpha@1, your service B depends on library alpha@2. All happy now. Introduce another layer: your service A depends on alpha@1 and beta@1, alpha@1 depends on gamma@1, and beta@1 depends on gamma@2. What to do now? It does not even matter how many services you have.
Got several thoughts on this one. First, let's look at how bad the issue really is:
To start using beta@1 you need to upgrade alpha@1 to alpha@2 that depends on gamma@2. What's the problem with that?
The same situation can arise with third-party dependencies, except there it's much worse: you have zero control over those. Here you do have control.
Now let's look at what this situation looks like in a monorepo: you can't even introduce gamma@2 and build beta@1 at all without
1. upgrading alpha@1 to alpha@2
2. upgrading all services that depend on alpha@2
3. upgrading all libraries that depend on gamma@2
4. upgrading all services that depend on gamma@2, if any
So you might even estimate that the cost of developing beta@1 is not worth it at all. Instead of quasi-dependency-hell ("quasi" because your company still controls all those libraries and has the power to fix the issue, unlike real dependency hell), you have a real stagnation hell due to a thousand papercuts.
My second comment is about building deep "layers" of internal dependencies - I would recommend avoiding it for as long as possible. Not just because of versioning, but because that itself causes stagnation. The more things depend on a piece of code, the harder it is to manage it effectively or to make any changes to it. The deeper the dependency tree is, the harder it is to reason about the effect of changes. So you better be very certain about their design / API surface and abstraction before building such dependencies yourself.
Major version bumps of foundational library dependencies are an indication that you originally had the wrong abstraction. No matter how you organize your code across repos, it's going to be a problem. (Incidentally, this is also why, despite the flexibility of node_modules, we still have JS fatigue. At least with internal dependencies we can work to avoid such churn.) It should still be easier with separate services, however, as you can do it more gradually.
Last note on left-pad and similar libraries. They are a different beast. They have a clear scope, small size and most importantly, zero probability of needing any interface changes (very low probability of any code changes as well). That makes them a less risky proposition (assuming of course they cannot be deleted)
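For a sense of that scale: the entire useful surface of left-pad fits in one line. A Python rendition (illustrative only, not the actual npm package, which is JavaScript):

```python
def left_pad(s: str, width: int, fill: str = " ") -> str:
    # The whole of left-pad: right-justify the string to the target width,
    # padding on the left with the fill character.
    return s.rjust(width, fill)

print(left_pad("5", 3, "0"))   # "005"
print(left_pad("abc", 2))      # "abc" - already wide enough, unchanged
```

A library this small has essentially zero interface-change risk, which is why depending on it is cheap right up until it disappears.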
> To start using beta@1 you need to upgrade alpha@1 to alpha@2 that depends on gamma@2. What's the problem with that?
The problem is the team maintaining alpha does not want to upgrade to gamma@2 because it's an extra burden for them, and they don't have an immediate need.
The debate is not about teams owning separate services, it's about teams owning libraries.
I'm assuming a customer-driven culture where you work for your customers needs. In the case of libraries, teams using the libraries are customers. If you're the maintainer of alpha and your customer needs beta, your customer needs you to upgrade to gamma.
But then another customer still wants gamma@1, and they are allowed to do that! But they also want your new features. So now you have to maintain two branches, which I hope we can agree is an extra burden.
This is unavoidable if we are talking about FOSS, people should be able to do whatever they want, and they do. A company has an advantage here: you can install company-wide rules and culture to make sure people don't do this. Which, in this case, happens to be: let's keep a single version of everything unless you have really good reasons.
> But then another customer still wants gamma@1, they are allowed to do that! But they also want your new features.
In this case, you still have the option of working with them to help them migrate to gamma@2, if the cost of maintaining gamma@1 is indeed too high and would negatively impact you in serving other customers. This was the original premise, wasn't it - upgrading all your dependents when you upgrade your library? That's still an option. The point is, you have more choices. And you can also help customers one by one - you don't have to do it all at once.
I will agree, though, that restricting choices helps when the company has difficulty aligning incentives through communication. But you give up a lot for it - including the ability to move fast and avoid stagnation.
From what I saw, I'd say it's exactly the opposite: allowing multiple versions actually means letting teams choose stagnation. And because we are lazy, we certainly do! There is a non-trivial number of people who believe "if it ain't broke, don't fix it". I can work with them to migrate them over, but they might not want to! In this case, a hard "bump versions or die" rule is a must.
Maybe if you work in a small group of great engineers you don't need to set such rules and you can move even faster, but I unfortunately haven't found such a workplace :(
> you don't have to do it all at once
Yes. Nobody should do it all at once. Making "bump versions or die" compatible with incremental adoption is slightly harder (see sibling threads for how it's done). Still worth it I'd argue.
Not to mention that if you're the first team to import a third_party library, you own it, and other teams can add arbitrary cost to you updating it. You have to be very aggressive with visibility and SLAs to work around this.
In a multi-repo setup you can upgrade gradually though, tackling the services that need the upgrade the most first. Can you do that in a monorepo setup?
This also means services can be left to rot for years because they don't need to be upgraded, while all the infrastructure changes around them, which is a giant pain when you do eventually need to change something.
If you have a multi repo architecture you absolutely need both clear ownership of everything and well planned maintenance.
With multirepo setups, you don't necessarily need to update the package for all code at all.
Instead, a newer package can completely replace an old one, with no relation to the old dependency package (or with a dependency on some future one), and both can run at the same time while you wind the old one down.
Almost. We had a UI library on Android that was stuck on an alpha version of the library for three or so years after the library had shipped.
Upgrading the library broke many tests across the org, and no one wanted to own going in and getting each team to fix it. Eventually, the library had a v2 release, and people started to care about being able to use it.
Ultimately, they just forked the current release and appended a v2 to the package name.
Not the norm, but it happens. The monorepo works for Google, but I wouldn't recommend it for most organizations; we have a ton of custom tooling and headcount to keep things running smoothly.
From the mobile side, it makes it super easy for us to share code across the 50+ apps we have, manage vulnerabilities quicker, and collaborate easily across teams.
Oh geez, that's an entirely different can of worms that isn't related to the monorepo.
Most products at Google are not dropped because the monorepo makes it difficult for them to support - and I'm not sure how it would or how you got to that association. Also, plenty of products that are killed are not in the monorepo.
They are usually dropped due to a mix of things, but a big part is just better product management.
Better project management as in, somebody politicked their way into owning a replacement for a currently running thing?
The implemented product, as well as the vision, for something like Inbox or Google Play Music was still way better than Gmail or YouTube Music for the end user.
Google's software mostly uses dependencies already in the Google monorepo, so these issues don't crop up. The person/team working on library changes has to ensure that nothing breaks, or that downstream users are notified early on. I don't think this would apply to many companies.
It’s not really even a true monorepo. Little-known feature: there is a versions map which pins major components like base or cfs. This breaks the monorepo abstraction and makes full-repo changes difficult, but keeps devs of individual components sane.
This was done away with years ago. Components are no more.
There are still a couple of things that develop on long-lived dev branches instead of directly at head, but my personal opinion is that the need for those things to do that is mostly overstated (and having sent them CLs in the past, it's deeply annoying).
>> 3. It encourages a ton of code churn with very low signal.
> 3) “Code churn” is a very uncharitable description of day-to-day maintenance of an active codebase.
Also implicit in the discussion is the fact that Google and other big tech companies base performance reviews on "impact" rather than arbitrary metrics like "number of PRs/LOCs per month". This provides a check on spending too much engineering time on maintenance PRs, since they have no (or very little) impact on your performance rating.
Umm, from whatever I have seen in big tech, "impact" is also fairly arbitrary. It's all based on how cozy one is with one's manager, skip manager, and so on. More accurate would be "perception of impact".
Especially as it gets more and more nebulous at higher levels.
I believe everything is tracked at the folder/file level and not a project level. I'm not sure there even is a concept of a project. But maybe someone can correct me.
History for folders is visible in code search, it’s basically equivalent to what GitHub or Sourcegraph would give you. You can query dependencies from the build system. Anything beyond a couple levels deep is unlikely to load in any tools you have ;)
[0]: https://youtu.be/4RMhf0YELrA