Just like early PC 4ks used the BIOS/DOS interrupt routines, which weren't counted either. And before that, there were all those custom, purpose-built chips for doing graphics and audio synthesis.
When making 4k intros for Linux (this may apply to other platforms too), you have to write your own startup and runtime linking code, because otherwise the executable goes over the size limit before it does anything. Because of this, you have to minimize the number of libraries and functions you use. The name of every function you use must be included as a string in the (uncompressed) binary, so even though you can use peripheral libraries, there is a strong incentive to keep the list of imported functions short.
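To give an idea of what that looks like, here's a minimal sketch of my own using plain dlopen/dlsym (real intros often hand-roll the equivalent, but the principle is the same: every imported name is a string in the binary):

    /* Every imported name below ("libSDL2-2.0.so.0", "SDL_Init") ends up as a
       literal string in the uncompressed binary, which is exactly why you want
       as few imports as possible. Build with: cc demo.c -ldl */
    #include <dlfcn.h>

    typedef int (*sdl_init_fn)(unsigned int);

    int main(void)
    {
        void *sdl = dlopen("libSDL2-2.0.so.0", RTLD_NOW);
        if (!sdl)
            return 1;
        sdl_init_fn my_SDL_Init = (sdl_init_fn)dlsym(sdl, "SDL_Init");
        return my_SDL_Init ? my_SDL_Init(0x20) : 1;   /* 0x20 = SDL_INIT_VIDEO */
    }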
And what are the libraries that are used? You need to get something on the screen, so many demos use SDL or similar, roughly like this (a minimal SDL2-style sketch, exact calls vary):
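    /* Illustration only: open a window and a GL context with SDL2,
       error handling and the actual drawing omitted. */
    #include <SDL2/SDL.h>

    int main(void)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("intro",
                                           SDL_WINDOWPOS_UNDEFINED,
                                           SDL_WINDOWPOS_UNDEFINED,
                                           1280, 720, SDL_WINDOW_OPENGL);
        SDL_GLContext ctx = SDL_GL_CreateContext(win);
        (void)ctx;
        /* ...render frames, SDL_GL_SwapWindow(win), poll events... */
        SDL_Quit();
        return 0;
    }

instead of: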
    xor eax, eax
    mov al, 13h
    int 10h ; I wonder if I still remember this correctly
The former is not much easier (or harder) than the latter, but the latter is simply impossible on a modern computer.
Then there's OpenGL and Direct3D, which are used to talk to the GPU. These days a GPU is basically a machine for drawing triangles really, really fast. But it draws only triangles, and you have to program all the lighting and whatnot yourself. There are fancy shader tricks you can do, but they're not (a lot) easier than poking pixels directly into video memory was.
It's arguably easier to do 3D graphics using a GPU than writing your own triangle fillers and perspective-correct texture mappers. But this is a double-edged sword. Getting shit done with triangles only is a constraint that did not exist back in the pixel-pushing days.
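In practice, a lot of 4k intros shrink that constraint down to its bare minimum: two triangles covering the whole screen and one fragment shader that computes every pixel, which is about as close as the GPU lets you get to the old per-pixel mindset. Roughly (my own sketch; the GL boilerplate to compile the shader and issue the draw call is omitted):

    /* Two triangles that cover all of clip space: often the only geometry
       a 4k intro ever submits. */
    static const float fullscreen_tris[] = {
        -1.f, -1.f,   1.f, -1.f,   1.f,  1.f,   /* first triangle  */
        -1.f, -1.f,   1.f,  1.f,  -1.f,  1.f,   /* second triangle */
    };

    /* The fragment shader then runs once per pixel, much like the old inner
       loop over video memory, except the GPU runs it massively in parallel. */
    static const char *frag_src =
        "#version 330 core\n"
        "uniform vec2  res;\n"   /* viewport size in pixels */
        "uniform float t;\n"     /* time in seconds         */
        "out vec4 color;\n"
        "void main() {\n"
        "    vec2 uv = gl_FragCoord.xy / res;\n"
        "    color = vec4(uv, 0.5 + 0.5 * sin(t), 1.0);\n"
        "}\n";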
And finally, audio. Writing a software synth consumes quite a big chunk of your byte budget. Back in the day, there were dedicated synth chips in computers, from the SID to the OPL-2/OPL-3 in AdLibs and Sound Blasters. Programming them was not super easy either, but I'd guess it took a few bytes less than writing a PCM-emitting synth from scratch and sending that data to the sound output.
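To give a feel for what "PCM-emitting synth" means at its most stripped down, here's a toy example of my own (not how any actual intro synth works): compute the samples in a loop and hand them to the audio output, in this case raw 16-bit mono on stdout that you can pipe into e.g. aplay -f S16_LE -r 44100 -c 1.

    /* Two seconds of a decaying 440 Hz tone as raw signed 16-bit samples.
       A real intro synth adds oscillators, envelopes, filters and a
       sequencer on top of this, and that's where the bytes go.
       Build with: cc synth.c -lm */
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const int rate = 44100;
        for (int i = 0; i < rate * 2; i++) {
            double t = (double)i / rate;
            double s = sin(6.2831853 * 440.0 * t) * exp(-3.0 * t);
            int16_t sample = (int16_t)(s * 32000.0);
            fwrite(&sample, sizeof sample, 1, stdout);
        }
        return 0;
    }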
So what you say is correct: library implementations don't count toward the byte limit. But if your message is that this makes stuff easier and it was 2000% more hardcore in the 1990s with DOS, I disagree.
Yeah, I think you got the int 10h right (not sure I remember this as well)
But then came VESA, which was a little more complex, and of course screen sizes bigger than a segment (ouch).
About audio: IMHO it's much easier now; dealing with those interrupt handlers and DMA controllers wasn't easy. Of course, once the library worked it was OK.
> Yeah, I think you got the int 10h right (not sure I remember this as well)
10h is correct. The contents of the ax register might not be. Is the graphics mode (13h) supposed to be in the al or the ah part? That I can't remember.
> But then came VESA, which was a little more complex, and of course screen sizes bigger than a segment (ouch).
Yeah, the early VESA BIOS Extensions were hairy. With VBE 2.0 you can get a simple linear framebuffer at a high resolution quite easily.
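And once you have that linear framebuffer mapped, you're basically back to plain pixel poking. A sketch (in real code the base address and the pitch come from the VBE mode info block, here they're just parameters):

    /* Hypothetical helper for a 32bpp VBE mode: lfb is the mapped linear
       framebuffer, pitch is bytes per scanline. */
    #include <stdint.h>

    static void putpixel(uint32_t *lfb, int pitch, int x, int y, uint32_t rgb)
    {
        lfb[y * (pitch / 4) + x] = rgb;
    }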
> About audio: IMHO it's much easier now; dealing with those interrupt handlers and DMA controllers wasn't easy. Of course, once the library worked it was OK.
Interrupt handlers and DMA controllers were used for PCM audio. I agree that writing that stuff by hand was harder than doing it today with libraries. And you'd still have to write the PCM-emitting synth.
But chip synths were generally programmed with simpler memory-mapped (or port) I/O, and they did the actual audio synthesis for you, not only PCM output.
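For comparison, talking to something like the AdLib's OPL-2 was just a pair of port writes per register, and the chip did the synthesis itself. From memory (the register values below are approximate and only meant to show the shape of it):

    /* Old-school AdLib-style programming: write a register index to port
       0x388, then the value to 0x389. Needs real or emulated OPL hardware
       and root for ioperm(); shown only to illustrate how little code the
       chip needed compared to a software synth. */
    #include <sys/io.h>
    #include <unistd.h>

    static void opl_write(unsigned char reg, unsigned char val)
    {
        outb(reg, 0x388);     /* select register */
        usleep(10);           /* the chip wanted small delays between writes */
        outb(val, 0x389);     /* write the value */
        usleep(30);
    }

    int main(void)
    {
        if (ioperm(0x388, 2, 1))       /* ask the kernel for port access */
            return 1;
        opl_write(0x20, 0x01);         /* operator settings (from memory)  */
        opl_write(0x40, 0x10);         /* output level                     */
        opl_write(0x60, 0xF0);         /* attack/decay                     */
        opl_write(0xA0, 0x98);         /* frequency, low byte              */
        opl_write(0xB0, 0x31);         /* key on + block + frequency high  */
        return 0;
    }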
You are correct. And using a mov to ax rather than xor+mov to al is, counter-intuitively, one byte smaller: the immediate is one byte longer in the former, but dropping the separate xor instruction more than makes up for it. Every byte counts!