The takeaway is odd. Nagle's algorithm was clearly an attempt at batched writes, and it doesn't matter what your hardware, network, application, or use case is: in some cases, batched writes are better.
Lots of computing today uses batched writes, and network applications benefit from them too. Newer, higher-level protocols like QUIC batch writes and effectively move all of TCP's per-connection and error handling into userspace, so the protocol can push data into the application as fast as it can and let the application (rather than a host TCP/IP stack, router, etc.) worry about the connection and error handling of individual streams.
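As a rough illustration of those two layers of batching in Go (the endpoint address is made up): you can either let the kernel coalesce small writes by re-enabling Nagle's algorithm (Go turns it off by default via TCP_NODELAY), or batch in the application with a buffered writer so a thousand tiny writes become a handful of large ones.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	// Hypothetical endpoint, purely for illustration.
	conn, err := net.Dial("tcp", "example.com:9000")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Option 1: let the kernel batch small writes (Nagle's algorithm).
	// Go sets TCP_NODELAY by default, so Nagle must be re-enabled explicitly.
	if tcp, ok := conn.(*net.TCPConn); ok {
		tcp.SetNoDelay(false)
	}

	// Option 2: batch in the application instead, with a buffered writer,
	// so many tiny writes reach the socket as a few larger writes.
	w := bufio.NewWriterSize(conn, 16*1024)
	for i := 0; i < 1000; i++ {
		fmt.Fprintf(w, "event %d\n", i)
	}
	w.Flush() // one (or a few) large segments instead of 1000 tiny ones
}
```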
Once our networks become saturated the way they were in the old days, Nagle's algorithm will return in the form of a QUIC modification, probably deeper in the application code, that waits to send a QUIC packet until some criterion is met. Everything in technology is re-invented once either hardware or software hits a bottleneck (and they always will, since their capabilities don't grow at the same rate).
(the other case besides bandwidth where Nagle's algorithm is useful is when you're saturating packets per second (PPS) with tiny packets)
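A minimal sketch of what such a Nagle-like criterion could look like in application code (none of this is a real QUIC API; the thresholds and the send callback are invented for illustration): buffer small writes and only hand a packet to the transport once it is full enough or old enough.

```go
package main

import (
	"fmt"
	"time"
)

// coalescer batches small application writes into larger packets,
// flushing once a size or age threshold is hit. Purely illustrative;
// a real QUIC stack has its own pacing and packetization logic, and a
// real implementation would arm a timer rather than only checking age
// on each write.
type coalescer struct {
	buf      []byte
	maxBytes int           // flush when this much data is pending
	maxDelay time.Duration // ...or when the oldest buffered byte is this old
	oldest   time.Time
	send     func([]byte) // hypothetical "hand one packet to the transport" hook
}

func (c *coalescer) write(p []byte) {
	if len(c.buf) == 0 {
		c.oldest = time.Now()
	}
	c.buf = append(c.buf, p...)
	if len(c.buf) >= c.maxBytes || time.Since(c.oldest) >= c.maxDelay {
		c.flush()
	}
}

func (c *coalescer) flush() {
	if len(c.buf) == 0 {
		return
	}
	c.send(c.buf)
	c.buf = nil
}

func main() {
	c := &coalescer{
		maxBytes: 1200, // roughly one QUIC packet's worth of payload
		maxDelay: 5 * time.Millisecond,
		send:     func(p []byte) { fmt.Printf("sending %d bytes\n", len(p)) },
	}
	for i := 0; i < 100; i++ {
		c.write([]byte("tiny update ")) // many small writes, few packets
	}
	c.flush()
}
```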
The difference between QUIC and TCP comes down to the original sin of TCP (and its predecessor): emulating an asynchronous serial-port connection, with no visible messaging layer.
That meant you could use a physical teletypewriter to connect to services (simplified description: slap a modem on a serial port, dial into a TIP, type the host address and port number, voila), but it also means TCP has no idea of message boundaries, and while you can push some of that framing knowledge into the software now, the early software didn't.
In comparison, QUIC and many other non-TCP protocols (SCTP, TP4) explicitly provide for message boundaries: your interface to the system isn't based on emulated serial ports but on messages that might, at most, get reassembled.
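To make the "no message boundaries" point concrete: over TCP the application has to invent its own framing, and a length prefix is the classic trick. A rough sketch of that TCP-side framing in Go:

```go
package framing

import (
	"encoding/binary"
	"io"
)

// TCP delivers an undifferentiated byte stream, so the application must
// mark message boundaries itself; here a 4-byte big-endian length prefix
// does the job.

func writeMessage(w io.Writer, msg []byte) error {
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(msg)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err := w.Write(msg)
	return err
}

func readMessage(r io.Reader) ([]byte, error) {
	var hdr [4]byte
	// io.ReadFull is needed because a single Read may return a partial
	// header or message -- the stream has no idea where messages end.
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err
	}
	msg := make([]byte, binary.BigEndian.Uint32(hdr[:]))
	if _, err := io.ReadFull(r, msg); err != nil {
		return nil, err
	}
	return msg, nil
}
```

With a message-oriented transport (SCTP, or a QUIC design that maps one message to one stream), that framing concern moves into the protocol itself, which is exactly the boundary-visibility difference described above.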