Hacker News

stdout is buffered by default in almost every language. Outputting lots of non-critical stuff to stderr, or manually flushing stdout on every write, is definitely a rookie mistake. Raw terminal output shouldn't be a big performance sink either, if you do the sane thing and only write on every 100th or 1000th (or whatever) iteration of your main loop. No threads needed.



It's buffered, but buffers fill up. I wrote a small program [1] that calls os.Stdout.Write to write one byte, increments a counter by the number of bytes written, and records the time of the last write. Another thread then prints the byte count and the time since the last time Write() returned every second. Running "that program | sleep infinity" yields a buffer size of 65536 and writes stop returning after the first millisecond or so of program runtime. And that makes sense; nothing is reading the output of the program, and it is written to produce an infinite stream of bytes. There is no buffer you can allocate that stores an infinite number of bytes, so it has to block or abort.

[1] https://gist.github.com/jrockway/a5d96151e1c69407f491988df70...

Going back to the original context of the comment, we're calling some programmer at Microsoft an amateur because their progress bar blocks progress of the application. And indeed, that design could be improved (sample the progress whenever the terminal can accept however many bytes the progress bar takes to render), but it's a very common mistake. Any program that calls printf will eventually run into this problem. A fixed-length buffer smooths over quirks, but if your terminal can render 1 million characters per second and your program produces 1 million and 1 characters per second of log output, then there is no way your program can ever complete. You don't notice because most programs don't output much, terminals are fast, and 65536 bytes is a good default buffer size. But in theory, your program suffers from the same bug as Microsoft's program.

So it's pretty unfair to call them amateurs, unless the Linux kernel, GNU coreutils, etc. are all also written by amateur programmers. What the grandparent meant to say is "I've never noticed having made this mistake before" and nothing more.

This is something programmers need to think about pretty much every time they do I/O: buffer, drop, or slow down. printf picks "buffer, then slow down (block)"; writing to syslog over UDP picks "drop". But what you really want depends on your application, and it's something you have to think about explicitly.


Things weren't so nice on Windows before Microsoft added VT control sequences in Windows 10. Before that, if you wanted to draw fancy stuff in a console window, you had to call a synchronous out-of-band IPC API exposed by conhost.exe. I don't know how PowerShell did it specifically, but you'd have to make the same sequence of slow IPC calls to draw the progress bar even if you buffered them.



