You don't discuss why they do it or what the downsides are. "Google does this" is not a good enough argument.
For the why: could be that the build would be dog slow if a network request was made for every one of a bajillion dependencies. Or to avoid breaking the build if something is janked externally. Could be because C++ has no standard package/module system. Could be for the lawyers' sake.
Is any of that applicable? There are also other ways of vendoring Rust crates, e.g. mirroring them to an internal registry. That can have fewer downsides.
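For example, the mirroring approach can be a few lines of Cargo config. A minimal sketch, where the registry URL (crates.example.internal) is just a placeholder for whatever you'd host internally:

    # .cargo/config.toml -- send every crates.io lookup to an internal mirror
    [source.crates-io]
    replace-with = "internal-mirror"

    [source.internal-mirror]
    registry = "sparse+https://crates.example.internal/index/"

And if you really do want the sources checked in Google-style, cargo vendor copies every dependency into a vendor/ directory and prints the matching [source] stanza to commit alongside it.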
Security is one big reason. Avoiding link time or runtime conflicts is another. And, yes, having outgoing network calls during builds is a problem: not because of performance, but because it's a security and reliability nightmare. And consistency across the whole codebase is critically important.
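On the network-call point specifically: once crates are mirrored or vendored, Cargo can be told to hard-fail anything that tries to reach out. A minimal sketch (plain Cargo config, nothing project-specific assumed):

    # .cargo/config.toml -- forbid network access during builds
    [net]
    offline = true

The same thing is available per invocation via cargo build --offline, or --frozen if you also want the lockfile enforced.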
There are many, many reasons, but overall sanity and security concerns are paramount.
But also, read the original article referred to here... and the pain points described there. Be careful about what you depend on.
> Security is one big reason. Avoiding link time or runtime conflicts is another.
I mean, most Linux distributions do the same thing - anything that's "vendored" as part of some project is supposed to be patched out of the source and pointed at the single system-wide version of that dependency. And Debian packages lots of Rust crates and Go modules as build-time dependencies within its archive.