I think it's complicated by the fact that the better the company, the better they are at mentoring even "bad" hires. I.e., performance review isn't normalized across all companies. Google might think their hiring process is great because a lot of their hires do well in their performance reviews - but maybe that has more to do with the engineering culture and mentorship in the company, and less to do with their ability to pick future winners.
Perhaps, though, it'd make sense to weight performance reviews toward a review date soon after hiring, and tail off how much weight is given over the following years.
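Roughly what I mean, as a back-of-the-envelope Python sketch - an exponentially decaying weight over tenure, where the half-life, dates, and scores are all made-up numbers rather than anything from a real review system:

    from datetime import date

    def weighted_review_score(reviews, hire_date, half_life_years=1.5):
        """Weight review scores so reviews closer to the hire date count most,
        with weight decaying exponentially over tenure.

        reviews: list of (review_date, score) tuples
        half_life_years: tenure (in years) after which a review counts half as much
        (the half-life and scores here are illustrative assumptions)
        """
        weighted_sum = 0.0
        weight_total = 0.0
        for review_date, score in reviews:
            years_since_hire = (review_date - hire_date).days / 365.25
            weight = 0.5 ** (years_since_hire / half_life_years)
            weighted_sum += weight * score
            weight_total += weight
        return weighted_sum / weight_total if weight_total else 0.0

    # Example: a hire whose early review was weak but who improved later
    # scores below their raw average, because the early signal is what
    # reflects on the *hiring* process rather than the mentoring.
    reviews = [(date(2020, 6, 1), 2.5), (date(2021, 6, 1), 3.5), (date(2022, 6, 1), 4.5)]
    print(weighted_review_score(reviews, hire_date=date(2020, 1, 1)))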
I've just seen a lot of bad hires get better with time. And some good hires lose steam or become jaded and perform worse over time.
Thus, as messed up as it is, one of the keenest signals in the end is tenacity. And ironically, tenacity in interview preparation (that Leetcode grind) is somewhat correlated with long-run performance reviews, and perhaps with utility to the company.
That said, "decent" engineers who check their performance review boxes but don't think outside the box slowly bleed a company's ability to disrupt in the long run. I.e., performance reviews need to be expanded a bit to consider the health of the overall company, not just the employee performing their function.
As a large tech organization, I'd prefer a high-variance distribution of performance over an even distribution any day - because value to tech companies is still driven by black swan types of innovation. But then I have a high "beta" (variance) in terms of what I'm looking for. Companies with more of a utility emphasis are perhaps better served by a hiring process that is "reliable" with a lot of false negatives.
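To put a toy number on the "beta" point (everything here - the baselines, the 1% odds, the 50x payoff - is invented purely for illustration):

    import random

    random.seed(0)

    def pool_value(n_engineers, n_trials, baseline, spread, black_swan_prob, black_swan_value):
        """Toy model of total team value. Each engineer contributes a baseline
        drawn uniformly from [baseline - spread, baseline + spread]; with some
        small probability they also land a rare, outsized ("black swan") win.
        All numbers are assumptions made up for illustration."""
        totals = []
        for _ in range(n_trials):
            total = 0.0
            for _ in range(n_engineers):
                total += random.uniform(baseline - spread, baseline + spread)
                if random.random() < black_swan_prob:
                    total += black_swan_value
            totals.append(total)
        return sum(totals) / len(totals)

    # "Reliable" pool: everyone is a solid, predictable contributor.
    reliable = pool_value(100, 2000, baseline=1.0, spread=0.1,
                          black_swan_prob=0.0, black_swan_value=0.0)
    # High-beta pool: noisier individual output (some hires underperform),
    # but ~1% of engineer-years produce a 50x win.
    high_beta = pool_value(100, 2000, baseline=0.9, spread=0.6,
                           black_swan_prob=0.01, black_swan_value=50.0)

    print(f"reliable pool avg value:  {reliable:.1f}")   # ~100
    print(f"high-beta pool avg value: {high_beta:.1f}")  # ~140, despite the lower baseline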
A company's hiring practice should reflect the type of engineer they want to hire. If they want engineers who can start contributing immediately, but they end up hiring engineers who need mentoring, then the hiring process is broken.
If they widen their expectations, and hire people that they believe they can mentor into net contributors, then their hiring practices should reflect this.
There are many ways a company could make hiring more scientific, including A/B testing (i.e., hiring people they normally wouldn't, gathering more data on how productive new hires are, etc.), but I've never seen anything other than "copy Google and we should be good."
I guess if they can take someone they would not have hired, and then mentor them into becoming a net contributor, then the hiring process is broken. You can't really test this without hiring people you wouldn't normally hire based on your hiring process.
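If someone did run that experiment, the analysis side is not hard - something like a permutation test comparing first-year review scores of standard hires against an "experimental" cohort the process would normally have rejected. The scores below are made up; it's just a sketch of the comparison:

    import random

    random.seed(1)

    def permutation_test(group_a, group_b, n_perm=10000):
        """Two-sample permutation test on the difference in mean productivity.
        Returns the observed gap and a two-sided p-value."""
        observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
        pooled = group_a + group_b
        count = 0
        for _ in range(n_perm):
            random.shuffle(pooled)
            perm_a = pooled[:len(group_a)]
            perm_b = pooled[len(group_a):]
            gap = sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b)
            if abs(gap) >= abs(observed):
                count += 1
        return observed, count / n_perm

    # Made-up first-year review scores for standard-process hires vs. a
    # hypothetical cohort the process would normally have rejected.
    standard_hires = [3.4, 3.8, 3.1, 3.6, 3.9, 3.3, 3.7, 3.5]
    experimental_hires = [3.2, 3.6, 3.0, 3.8, 3.4, 3.1, 3.5, 3.3]

    gap, p_value = permutation_test(standard_hires, experimental_hires)
    print(f"mean gap: {gap:.2f}, p-value: {p_value:.2f}")
    # A small, insignificant gap would suggest the filter is mostly
    # generating false negatives rather than protecting quality.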