There's not really an exact science to it, but manually-optimised code is usually more structured/systematic to make it easier for the human author to manage the dependencies and state across the board, while automatically-optimised code is free to arrange things however it would like.
As an example of the kinds of optimisations that the best human programmers were doing before compilers took over, see Michael Abrash's Black Book: https://www.phatcode.net/res/224/files/html/index.html - you can intuit how a human might organise their code to make the most of these while still keeping it maintainable.
If you asked a three-year-old a question that they proceeded to completely flub, would you then assume that all humans are incapable of answering questions correctly?
Nobody is arguing for the quality of the search overviews. The models that impress us are several orders of magnitude larger in scale, and are capable of doing things like assisting preeminent computer scientists (the topic of discussion) and mathematicians (https://github.com/teorth/erdosproblems/wiki/AI-contribution...).
I'm a Rust main, but this argument seems... incorrect? You would not need macros for Rust to remain a usable memory-safe language. They certainly make it easier, but they're not necessary. It would be perfectly possible to design a variant of Rust that gets you to 80-90% of Rust's usability, with the same safety, without macros.
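To make that concrete (a minimal sketch, not something the parent spelled out): a derive macro like `#[derive(Clone)]` is pure convenience. The identical safe impl can be written by hand, with no macros and no `unsafe` anywhere.

```rust
// Hand-written equivalent of #[derive(Clone)]. The macro saves
// typing; the memory safety comes from the language, not the macro.
#[derive(Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

impl Clone for Point {
    fn clone(&self) -> Self {
        Point { x: self.x, y: self.y }
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p.clone();
    assert_eq!(p, q); // safe, macro-free, just more verbose
}
```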
no, you just missed my point. expanding the implementation is not a safe abstraction. show me how you'd implement the functionality of the pin macro as a safe abstraction.
I didn't miss that you totally changed the subject, and now you're attacking a strawman. See Steve Klabnik's response to your other comment, where you did the same thing. Of course macros are good for encapsulation and abstraction, but that's a different subject--and note that the discussion was about Zig vs. Rust, and Zig has no macros, so there's unencapsulated unsafe code all over the place.
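For what it's worth, the standard library already offers a macro-free route to pinning (a sketch of the trade-off, not a drop-in replacement for `pin!`): `Box::pin` is a plain safe function that pins on the heap, where the `pin!` macro pins in place on the stack.

```rust
use std::pin::Pin;

fn main() {
    // Box::pin is an ordinary safe function, not a macro. It returns
    // Pin<Box<i32>> at the cost of a heap allocation; the pin! macro
    // avoids that allocation by pinning on the caller's stack.
    let pinned: Pin<Box<i32>> = Box::pin(42);
    assert_eq!(*pinned, 42);
}
```

So a macro-free Rust would lose some ergonomics and some zero-cost stack pinning, but not the safe abstraction itself.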
It is presented as a Wikipedia article from the future describing a subculture of tomorrow. See also https://qntm.org/mmacevedo for another example of this genre.
I've been using delegated Claude agents in VS Code and it crashes so much it's insane... I switched to Copilot's local Claude agents and it works much better.
Idk about this whole vibe coding thing, though... We'll see what happens.
The human operator controls what gets built. If they want to build Redis 2, they can specify it and have it built. If you won't take my word for it, take that of the creator of Redis: https://antirez.com/news/159
That is probably Composer-1, which is their in-house model (insofar as a fine-tune of an open-weights model can be called in-house). It's competent at grunt work, but it doesn't compare to the best of Claude and Codex; give those a shot sometime.