Except that there are tremendous advantages to constant-time execution, not the least of which is protection from timing attacks and the information leakage they enable (which admittedly were less of a concern back then). Sure, you can get the one instruction for the <6 case executed faster, but the transistor budget for that isn't worth it, particularly if you pipeline the execution into stages. It makes optimization far more complex...
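The same timing-leak concern shows up in software: a comparison that exits early on the first mismatch runs longer the more bytes match, which an attacker can measure. A minimal C sketch of the contrast (function names are hypothetical, just for illustration):

```c
#include <stddef.h>
#include <stdint.h>

/* Variable-time compare: returns early on the first mismatch,
   so execution time leaks the length of the matching prefix. */
int leaky_eq(const uint8_t *a, const uint8_t *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;
    return 1;
}

/* Constant-time compare: always touches all n bytes and
   accumulates differences with OR; no data-dependent branch,
   so timing is the same regardless of where inputs differ. */
int ct_eq(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

Both return the same answers; the difference is that `ct_eq`'s running time doesn't depend on the data, which is exactly the property constant-time hardware gives you for free.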
Not to take credit away from Andrew, who did come up with the idea and write about it, but I don't understand how Jepsen-style testing of package managers is a novel idea. Like... what testing would you want to do if you were building a package manager?
I have yet to hear a criticism of academia where it sounds like we're better off disproportionately losing people with PhDs than without them, particularly since most of those people got their PhDs quite a while ago.
PhDs seem to be quite employable by private industry, where competency is still valued.
Vibe-coding as originally defined (by Karpathy?) implied not reading the code at all, just trying it and pasting back any error messages; repeat ad infinitum until it works or you give up.
Now the term has evolved into "using AI in coding" (usually with a hint of non-rigor/casualness), but that's not what it originally meant.
AI-assisted coding/engineering becomes "vibe coding" when you decide to abdicate any understanding of what you are building, instead focusing only on the outcome.