To the CPU it always boils down to sequences of instructions operating on data, so the syntax sugar put on top has to be undone first and the abbreviations have to be unrolled, one way or another.
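To make that "unrolling" concrete, here's a minimal sketch (Python chosen purely for illustration) of what one piece of syntax sugar effectively expands to before any actual work happens:

    # A list comprehension is syntax sugar; it effectively
    # desugars into an explicit loop with repeated appends.
    squares = [n * n for n in range(10)]

    # Roughly the unrolled version the machine actually executes:
    squares_desugared = []
    for n in range(10):
        squares_desugared.append(n * n)

    assert squares == squares_desugared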
But more importantly, it's really not about the lines of code you write once, or very few times by comparison -- it's about what the computer does every time the application starts up on any machine, or every time a user does something. Obviously, only to an extent: you wouldn't write a thousand lines of code instead of 100 to make something that is already very fast 10% faster. But I certainly would write 10 lines instead of 1 if it meant the code is 10 times faster. It's just typing, after all.
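As a hedged sketch of that tradeoff (not from the original comment; the data and sizes are made up), a few extra lines can buy an asymptotic win over the convenient one-liner:

    import random

    words = [str(random.randrange(1_000_000)) for _ in range(50_000)]
    queries = [str(random.randrange(1_000_000)) for _ in range(50_000)]

    # The one-liner: each "in" is an O(n) scan of the list.
    hits_slow = sum(1 for q in queries if q in words)

    # A few more lines: build a set once, then each lookup is O(1).
    # Far faster once lookups are repeated many times.
    word_set = set(words)
    hits_fast = sum(1 for q in queries if q in word_set)

    assert hits_slow == hits_fast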
My point was that CPU cycles mean nothing now. Compared to programmer cycles they are effectively free. So stuff it into RAM and forget about it -- why not? Complexity is bad, and being able to hide it (effectively) is good. I shouldn't be thinking about pointer allocation when building a text input box for a UI.
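A small sketch of the "trade RAM for cycles" idea, using Python's standard-library functools.lru_cache (my example, not the commenter's):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n: int) -> int:
        # Every computed value stays in RAM, so each one is
        # computed exactly once; without the cache this naive
        # recursion takes exponential time in n.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(200))  # instant with the cache; infeasible without it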