> There's nothing wrong with this choice [to work extra hours to get promoted].
But if there are limited slots for promotion, and that's almost always the case, the resulting competition among deserving engineers makes the extra hours more or less mandatory. Say that Amy is a better engineer than Jim and gets a third more done per hour. If Jim puts in 60 hours instead of the expected 40, then Amy isn't going to beat him for a slot unless she also starts working extra hours.
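To make the arithmetic concrete, here's a rough sketch using just the hypothetical numbers above (the unit of "output" is arbitrary):

```python
# Hypothetical numbers from above: Amy gets a third more done per hour.
amy_rate, jim_rate = 4 / 3, 1.0        # relative output per hour
amy_hours, jim_hours = 40, 60

amy_output = amy_rate * amy_hours      # ~53.3 units
jim_output = jim_rate * jim_hours      # 60.0 units

# Jim out-produces Amy despite being less effective per hour;
# Amy would need 60 / (4/3) = 45 hours just to pull even.
print(amy_output, jim_output)
```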
In the end, promotion becomes more about grinding than being effective. That's not great for company culture or retention of top talent.
That doesn't make promotion more about grinding, because the company doesn't care how much work you get done per unit of time compared to other employees. The company cares about how much you get done, period.
If the only differentiating factor between Amy and Jim is quantity of work done (this is never the case in real life), most companies will prefer a Jim that works 60 hours to an Amy that works 40 if Jim is producing 5% more.
In software development, sure (maybe). Most jobs aren't software development.
In the vast majority of jobs, your output slows as hours increase, but there isn't a tipping point where you become less productive overall, even after accounting for errors or rework. There's a reason CPAs don't clock out at 37.5 hours during tax season, and why warehouse staff, service-desk workers, and people in any number of fields other than the specific one most of us work in often put in more than 40 hours a week, especially when actively working toward a promotion.
One reason is that using static binaries greatly simplifies the problem of establishing Binary Provenance, upon which security claims and many other important things rely. In environments like Google’s, it's important to know that what you have deployed to production is exactly what you think it is.
> One reason is that using static binaries greatly simplifies the problem of establishing Binary Provenance, upon which security claims and many other important things rely.
It depends.
If it is a vulnerability stemming from libc, then every single binary has to be re-linked and redeployed, which can lead to a situation where something has been accidentally left out due to an unaccounted-for artefact.
One solution could be bundling the binary, or multiple related binaries, with the operating system image, but that would incur a multidimensional overhead that would be unacceptable for most people, and then we would be talking about «an application binary statically linked into the operating system», so to speak.
> If it is a vulnerability stemming from libc, then every single binary has to be re-linked and redeployed, which can lead to a situation where something has been accidentally left out due to an unaccounted-for artefact.
The whole point of Binary Provenance is that there are no unaccounted-for artifacts: Every build should produce binary provenance describing exactly how a given binary artifact was built: the inputs, the transformation, and the entity that performed the build. So, to use your example, you'll always know which artefacts were linked against that bad version of libc.
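As a purely illustrative sketch (the record layout and field names here are hypothetical, loosely in the spirit of provenance/attestation schemes such as SLSA, not the output of any particular tool), finding everything built against a bad libc becomes a simple query over the provenance records:

```python
# Hypothetical, minimal provenance records; real systems carry far more
# detail (input hashes, builder identity, signatures) than this sketch.
provenance = [
    {"artifact": "frontend", "builder": "ci://build-cluster",
     "inputs": {"glibc": "2.38-r1", "openssl": "3.0.13"}},
    {"artifact": "billing", "builder": "ci://build-cluster",
     "inputs": {"glibc": "2.39-r0", "openssl": "3.0.13"}},
]

BAD_LIBC = "2.38-r1"  # the version with the vulnerability

needs_rebuild = [p["artifact"] for p in provenance
                 if p["inputs"].get("glibc") == BAD_LIBC]
print(needs_rebuild)  # ['frontend'] -> exactly which binaries to re-link and redeploy
```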
> […] which artefacts were linked against that bad version of libc.
There is one libc for the entire system (a physical server, a virtual one, etc.), including the application(s) that have been deployed into an operating environment.
If the entire operating environment (the OS + applications) is statically linked against a libc, the whole environment has to be re-linked and redeployed as a single concerted effort.
In dynamically linked operating environments, only the libc needs to be updated.
The former is a substantially more laborious and inherently riskier effort unless the organisation has achieved a sufficiently large scale where such deployment artefacts are fully disposable and the deployment process is fully automated. Not many organisations practically operate at that level of maturity and scale, with FAANG or similar scale being a notable exception. It is often cited as an aspiration, yet the road to that level of maturity is winding and, in real life, fraught with shortcuts that result in binary provenance being ignored or rendered irrelevant. The expected aftermath is, of course, a security incident.
I claimed that Binary Provenance was important to organizations such as Google where it is important to know exactly what has gone into the artefacts that have been deployed into production. You then replied "it depends" but, when pressed, defended your claim by saying, in effect, that binary provenance doesn't work in organizations with immature engineering practices that don't actually enforce Binary Provenance.
But I feel like we already knew that practices don't work unless organizations actually follow them.
My point is that static linking alone and by itself does not meaningfully improve binary provenance; from a provenance standpoint it is mostly expensive security theatre, because a statically linked binary is more opaque from a component-attribution perspective, unless an inseparable SBOM (cryptographically tied to the binary) and signed build attestations are present.
Static linking actually destroys the boundaries that a provenance consumer would normally want: global code optimisation, inlining (sometimes heavy), LTO, dead-code elimination and the like erase the dependency identities, rendering them irrecoverable in a trustworthy way from the binary. It is harder to reason about and audit a single opaque blob than a set of separately versioned shared libraries.
Static linking, however, is very good at avoiding «shared/dynamic library dependency hell», which is a reliability and operability win. From a binary provenance standpoint, it is largely orthogonal.
Static linking can improve one narrow provenance-adjacent property: fewer moving parts at deploy and run time.
The «it depends» part of the comment concerned the FAANG-scale level of infrastructure and operational maturity at which the organisation can reliably enforce hermetic builds and dependency pinning across teams, produce and retain attestations and SBOMs bound to release artefacts, rebuild the world quickly on demand, and roll out safely with strong observability and rollback. Many organisations choose dynamic linking plus image sealing because it gives them similar provenance and incident response properties with less rebuild pressure at a substantially smaller cost.
So static linking mainly changes operational risk and deployment ergonomics, not the evidentiary quality of where the code came from and how it was produced; dynamic linking, on the other hand, may yield better provenance properties when the shared libraries themselves have strong identity and distribution provenance.
NB: Please do note that the diatribe is not directed at you in any way; it is an off-hand remark about people who ascribe purported benefits to static linking because «Google does» it, without taking into account the overall context, maturity and scale of the operating environment Google et al. operate in.
> like repeating they decreased drugs price by 600%
The NYT and other media outlets like to point out that this claim is mathematically impossible. However, “cut prices by 600%” is understood perfectly well by most people (but not pedants) to mean “we undid price hikes of 600%.”
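To spell out the arithmetic (the $100 base price is purely illustrative):

```python
# Purely illustrative: what undoing a 600% price hike actually amounts to.
base = 100.0
hiked = base * (1 + 6.0)      # a 600% hike: $100 -> $700
cut = (hiked - base) / hiked
print(f"{cut:.1%}")           # ~85.7% -> a literal cut can never exceed 100%,
                              # which is what the pedantic objection is about
```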
I suspect that this phrasing was chosen as a “wedge” to drive home to the MAGA faithful that the news media is biased against them.
Does that logic apply only when the claimed cut is over 100%?
If I advertise that my store "cut prices by 50%" but the prices are actually only 33% lower (which is the same as undoing a 50% price hike), would it be pedantic to call me out on my bullshit?
> Does that logic apply only when the claimed cut is over 100%?
Yes, I’d say.
It’s the same as the informal usage of “X times smaller” to describe scaling by 1/X. The idiom generally isn’t used unless X > 1. (The exception might be when several values of X are reported together. Then one might say “0.74 times smaller” to maintain parallel form with nearby “4 times smaller” and similar claims.)
No, it does not conform. As I wrote earlier, I have not seen that usage for less than 100%. So 600% conforms; 50% does not.
That is, expressions like "twice as slow/thin/short/..." or "2x as slow/thin/short/..." or "200% as slow/thin/short/..." have a well-established usage that is understood to mean "half as fast/thick/tall/..."
But "50% as slow/thin/short/..." or "half as slow/thin/short/..." have no such established usage.
For some evidence to support my claim, please see this 2008 discussion on Language Log:
Since HN has a tendency to trim URLs, which might prevent this link from taking you to the relevant portion of a rather lengthy article, I'll quote the salient bits:
"A further complexity: in addition to the N times more/larger than usage, there is also a N times less/lower than [to mean] '1/Nth as much as' usage"
"[About this usage, the Merriam-Webster Dictionary of English Usage reports that] times has now been used in such constructions for about 300 years, and there is no evidence to suggest that it has ever been misunderstood."
I believe that the history of English language usage is replete with examples such as "X times less than" when X > 1, but similar constructions for X <= 1 do not appear with appreciable frequency.
In any case, I think that continuing our conversation is unlikely to be productive, so this will be my last reply.
I will just say in closing that our conversation is a good example of why the MAGA folks have probably chosen phrasing such as this.
> This seems like an inherently terrible way to look for a story to report.
But it’s probably a great way to create a story that generates clicks. The people who respond to calls like this one are going to come from the extreme end of the distribution. That makes for a sensational story. But it also makes for a story that doesn't represent reality as most people will experience it, but rather the worst case.
I think the author's use of the word "the" is misleading:
> You don’t need every job to choose you. You just need the one that’s the right fit. [emphasis mine]
You don't need the job that's the right fit. There's more than one. You need any job that fits. (Or that you can make fit you.)
So, if you want to size up the search, let p be the probability that a typical job you apply for will (1) result in you being hired and (2) be a job that fits you. Then the expected number of jobs you must apply to before getting hired to a job that fits you is 1/p. [1]
Say you think p = 5%. Then, on average, you'll need to apply to 20 jobs to land one that fits you.
How many jobs do you need to apply to in order to have a 90% chance of landing one that fits you? If q is the wanted probability of overall success, then the number of jobs k you must apply to is given by k = log(1 − q) / log(1 − p). So, in this example, k = log(1 − 0.9) / log(1 − 0.05) ≈ 45 applications.
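A quick sketch of both calculations with the numbers above:

```python
import math

p = 0.05   # chance a given application ends in a hire that fits
q = 0.90   # desired overall probability of success

expected_apps = 1 / p                    # mean of a geometric distribution: 20
k = math.log(1 - q) / math.log(1 - p)    # applications needed for a 90% chance
print(expected_apps, math.ceil(k))       # 20.0 45
```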
That's a lot of rejections and ill-fitting jobs before landing one that sticks. Which is why it's useful to be persistent and flexible. Being able to make a job fit can dramatically reduce your search.
I will offer a second positive but more reserved data point. It took me closer to a day to get my custom Bazzite build working.
Switching over to my images using bootc failed because of what I eventually tracked down to permissions issues that I didn't see mentioned in any of the docs. In short, the packages you publish to GitHub's container registry must be public.
Another wrinkle: the Bazzite container-build process comes pretty close to the limits of the default GitHub runners you can use for free. If you add anything semi-large to your custom image, it may fail to build. For example, adding MS's VSCode was enough to break my image builds because of resource limits.
Fortunately, both of these issues can be fixed by improving the docs.
> If the normal process leaves things like this to "some other time", one should start by fixing the process.
Adding regular fixits is how they fix the normal process.
This addition recognizes that most bug fixes do not move the metrics that are used to prioritize work toward business goals. But maintaining software quality, and in particular preventing technical debt from piling up, is essential to any long-running software development process. If quality goes to crap, the project will eventually die. It will bog down under its own weight, and soon no good developer will want to work on it. And then the software project will never again be able to meet business goals.
So: the normal process now includes fixits, dedicated time to focus on fixing things that have no direct contribution to business goals but do, over the long term, determine whether any business goals get met.
Thanks for sharing this! Quick, easy-to-grasp one-pagers like this are how this kind of thing should be done. We should drum it up loudly whenever we can.