That is an understatement. That is a full build of my 3D engine - http://i.imgur.com/3ApRyuQ.png - (C, not C++, but the linked article is also about a C codebase) on my current computer (i7 4770K) under Borland C++ 5.0. Partial builds (modify a file and run) are instant, which is basically why i'm using it for a lot of my C code (the code also compiles with other compilers - OpenWatcom, GCC, Clang, Visual Studio, Digital Mars C and Pelles C - but i mainly test with GCC, OW and VS; the rest i only test occasionally).
The compiler is part of the IDE, not some external process that needs to start from a blank state for each file, needing to read the same files over and over (which is what every other "IDE" does these days - similarly, the debugger is usually just gdb running in the background, and some IDEs do not even bother to perform the builds themselves and instead use cmake or whatever; honestly it is as if people forgot what the "I" stands for). It keeps compiled objects and libraries in memory and even compiles the source directly from the open windows' text buffers instead of having to save the file and load it from disk (it does write the object and executable files to disk, it just skips the unnecessary roundtrip for the source).
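As a rough modern illustration of the "compile straight from the editor buffer" idea - not Borland's actual mechanism - libtcc (which comes up further down the thread) can compile a C string that never touches the disk and hand back a callable function pointer. A minimal sketch, assuming libtcc is installed (link with -ltcc):

    /* Compile C source from an in-memory buffer, the way an IDE could
       compile straight from its editor buffers instead of saved files. */
    #include <stdio.h>
    #include <libtcc.h>

    static const char *editor_buffer =
        "int add(int a, int b) { return a + b; }\n";

    int main(void)
    {
        TCCState *s = tcc_new();
        if (!s) return 1;

        tcc_set_output_type(s, TCC_OUTPUT_MEMORY);        /* JIT into memory, no .obj on disk */

        if (tcc_compile_string(s, editor_buffer) < 0)     /* source comes from RAM, not a file */
            return 1;
        if (tcc_relocate(s, TCC_RELOCATE_AUTO) < 0)       /* newer libtcc versions drop the 2nd argument */
            return 1;

        int (*add)(int, int) = (int (*)(int, int))tcc_get_symbol(s, "add");
        if (add)
            printf("add(2, 3) = %d\n", add(2, 3));

        tcc_delete(s);
        return 0;
    }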
> The compiler is part of the IDE, not some external process that needs to start from a blank state for each file, needing to read the same files over and over...
I suspect the difference is negligible in practice. In both cases the files are likely to be cached in memory after they're read the first time, so you're not really reading them from disk over and over.
Probably, but it can be a convenience to make some small modifications to try out things without saving them.
From a performance standpoint the win comes from the compiler not starting from a blank state for each file, but keeping the compiled objects in memory and only updating the changed files. I suppose a modern reimplementation of that idea (with more memory to spare - after all, the official BC++5 requirements were 16MB of RAM) would be able to take an even more fine-grained approach.
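Something like this, as a bare-bones sketch - compile_to_memory() and the file names are placeholders, not a real compiler: keep one object blob per source file and rebuild it only when the file's mtime changes.

    /* Sketch of "keep objects in memory, recompile only what changed". */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <time.h>

    struct obj_cache {
        const char *path;    /* source file */
        time_t      mtime;   /* mtime at the last compile */
        void       *obj;     /* compiled object kept in memory */
        size_t      objlen;
    };

    /* Placeholder: a real implementation would run the compiler here and
       return the object code; this just notes that work happened. */
    static void *compile_to_memory(const char *path, size_t *len)
    {
        printf("recompiling %s\n", path);
        *len = 0;
        return malloc(1);
    }

    static void update(struct obj_cache *c)
    {
        struct stat st;
        if (stat(c->path, &st) != 0)
            return;                        /* file missing: keep whatever we had */
        if (c->obj && st.st_mtime == c->mtime)
            return;                        /* unchanged: reuse the in-memory object */
        free(c->obj);
        c->obj = compile_to_memory(c->path, &c->objlen);
        c->mtime = st.st_mtime;
    }

    int main(void)
    {
        struct obj_cache files[] = {
            { "engine.c", 0, NULL, 0 },
            { "render.c", 0, NULL, 0 },
        };
        for (int pass = 0; pass < 2; pass++)       /* the second pass recompiles nothing */
            for (size_t i = 0; i < sizeof files / sizeof files[0]; i++)
                update(&files[i]);
        return 0;
    }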
I don't know how much effort is poured into these projects.
There are some clang-based servers like ycmd and rtags, but these are used for linting, refactoring and search (so no incremental compilation).
TBH i was thinking more along the lines of having the text editor associate source code lines with C functions and declarations, so that the IDE can recompile only the bits that changed while you work on the code (something like a more advanced edit-and-continue).
But yeah, i do not see much effort going on in improving these areas. I think people are just used to "patchwork IDEs" and find them good enough.
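A toy sketch of the line-to-function idea above (the function names and line ranges are made up): keep a table of which lines each function spans, and when an edit touches a range, mark just that function for recompilation.

    /* Map edited lines to the functions that own them, so only those
       functions get queued for recompilation (edit-and-continue style). */
    #include <stdio.h>

    struct func_span {
        const char *name;    /* function or declaration */
        int first_line;      /* inclusive */
        int last_line;       /* inclusive */
        int dirty;           /* needs recompiling */
    };

    static struct func_span spans[] = {
        { "vec3_cross",   10,  24, 0 },
        { "mesh_load",    26,  80, 0 },
        { "render_frame", 82, 140, 0 },
    };

    static void mark_edit(int line)
    {
        for (size_t i = 0; i < sizeof spans / sizeof spans[0]; i++)
            if (line >= spans[i].first_line && line <= spans[i].last_line)
                spans[i].dirty = 1;
    }

    int main(void)
    {
        mark_edit(31);       /* user edited a line inside mesh_load */
        mark_edit(90);       /* ...and one inside render_frame */

        for (size_t i = 0; i < sizeof spans / sizeof spans[0]; i++)
            if (spans[i].dirty)
                printf("recompile %s\n", spans[i].name);
        return 0;
    }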
It sucked when that happened, which is how i developed a habit of saving my files even when i'm not doing anything, even pressing the save shortcut key several times :-P.
But at the same time it can be a convenience if you don't want to save but instead make a small change to try something out.
> The compiler is part of the IDE, not some external process that needs to start from a blank state for each file
I'm pretty sure Borland used to separate out the IDE executables from the compilers and a couple of other tools. I don't have a copy to hand to prove this, but I'm sure I used to occasionally invoke Turbo Pascal's compiler from the command line outside of the IDE (due to it being a separate .exe / .com) and I vaguely recall Turbo C++ having a similar design.
I also don't recall build times being that much faster back then than they are now. But maybe that's more a symptom of my compiling on budget hardware previously, whereas I can now afford better-specced dev machines (compared to the market average).
>> I'm pretty sure Borland used to separate out the IDE executables from the compilers
Turbo C was like that, but not the first few versions of Turbo Pascal.
The whole goal with Turbo Pascal was to have everything in one small program so you could code/compile/test as fast as possible. It used a one-pass compiler and didn't have a heavy linker. It was fast even on an 8088. Anders Hejlsberg was the original author of Turbo Pascal (yes, the same guy from Delphi, J++, C#, TypeScript...)
The original TURBO.COM file was very small. This was great because you could fit the whole thing on one floppy disk including your own code. No swapping floppies. Plus it was only $49.95 USD!
Pascal compiled way faster than C because there was less to do - no #includes to chew through. But even Turbo C was a fast compiler back then: a hundred thousand lines per minute, according to the ads. Imagine how slow I found DJGPP and other compilers when I finally moved to 32-bit programming.
Remember, Wirth designed Pascal as a teaching language. One-pass compilation, no forward declarations, built-in I/O. Turbo Pascal could compile to a .com file: 64KB max, 16-bit pointers. I'm not sure there was a linker; the first executable instruction was the first byte in the file.
The neat trick was debugging. Instead of tagging the object code with source-code line numbers, to break on line N Turbo Pascal simply recompiled the source up to line N and used the size of the output to match the instruction pointer in the debugged image. Move to the next line? Compile one more line and stop at the last instruction produced.
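A tiny sketch of the idea, with made-up per-line code sizes: "compile up to line N" boils down to summing the bytes emitted for the lines before N, and that running total is the offset to compare against the instruction pointer.

    /* The breakpoint address for line N is just the amount of code produced
       by compiling everything before it; emitted[] stands in for a compiler. */
    #include <stdio.h>

    static const int emitted[] = { 3, 0, 7, 12, 5, 9 };    /* bytes of code per source line */
    #define NLINES ((int)(sizeof emitted / sizeof emitted[0]))

    static int break_offset(int line)        /* 1-based line number */
    {
        int offset = 0;
        for (int i = 0; i < line - 1 && i < NLINES; i++)
            offset += emitted[i];            /* "compile" everything before line N */
        return offset;                       /* compare against the instruction pointer */
    }

    int main(void)
    {
        printf("break on line 4 -> code offset %d\n", break_offset(4));
        printf("step to line 5  -> code offset %d\n", break_offset(5));
        return 0;
    }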
But these were tiny programs, written ab initio. No readline, no X, no network, no database. Hardly any filesystem. To do something akin to readdir(3) meant writing a bespoke function to call the DOS interrupt. Putting a menu on the screen required positioning the cursor in the video buffer and putting each character in successive locations, allowing for the attribute byte.
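For anyone who never had the pleasure, putting text on screen that way looked roughly like this (16-bit real-mode code: it only builds with a DOS compiler like Turbo C or Borland C++, not a modern one).

    /* Write straight to the DOS colour text buffer at B800:0000.
       Each cell is two bytes: the character, then an attribute byte. */
    #include <dos.h>

    static void put_cell(int row, int col, char ch, unsigned char attr)
    {
        unsigned char far *video = (unsigned char far *)MK_FP(0xB800, 0);
        unsigned offset = (row * 80 + col) * 2;    /* 80 columns, 2 bytes per cell */
        video[offset]     = ch;                    /* the character itself */
        video[offset + 1] = attr;                  /* the attribute (colour) byte */
    }

    int main(void)
    {
        const char *msg = "File  Edit  Run";
        int i;
        for (i = 0; msg[i]; i++)
            put_cell(0, i, msg[i], 0x70);          /* black on light grey "menu bar" */
        return 0;
    }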
If Turbo Pascal was simple, it was also primitive. Much bigger C programs compile in the blink of an eye today. Complex programs take a long time to build today, yes. They did then, too.
The command line compiler was a separate compiler for when you wanted to use some external method for building (like a batch file) or your program was too big to compile with the IDE in memory (remember, this was real mode with a 640K RAM limit, and often less than that was available).
But you could take TURBO.EXE (ide+compiler+debugger) and TURBO.TPL (the library), put it on a floppy and work from there. Back when i was a kid, my process to start a new "project" was to take a blank floppy and copy those files (and a couple of units i was sharing) since i didn't have a hard disk. I still have a ton of floppies littered with turbo.exe/tpl pairs.
Turbo C/C++ also needed only a single executable, tc.exe or bc.exe (depending on the version), plus the include and lib directories.
This is the same with the Borland C++ 5.0 i am talking about above, although that one also needs a bunch of DLLs. Since i didn't want to break my installation, i just renamed bcc32.exe and bcc32i.exe to something else, ran the IDE and built my engine. As i expected, it worked - the IDE doesn't call out to the command-line compilers. The fact that you can make modifications and have them compiled without saving the file is also an indicator.
That was a separate compiler. It would be as if someone took libtcc and made an IDE use it directly but also bundled the tcc binary for command-line builds.
GCC 6.2.0 under MSYS2 takes 6.302s for a full debug build and 10.499s for a full optimized build. With make -j8 this is down to 1.997s for a full debug build and 6.942s for a full optimized build.
I haven't tried TCC; i think i tried it in the past but it was missing some libs.
I do use it as an interpreter on embedded boards! Preprocess/trim the headers you need, add #!/bin/tcc -run at the top of your .c file, make it executable (+x) and it'll run just fine!
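For anyone who hasn't seen it, a C "script" with tcc looks like this (adjust the shebang to wherever tcc lives on your system):

    #!/bin/tcc -run
    /* Run directly: chmod +x hello.c && ./hello.c */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        printf("%s: hello from tcc -run (%d args)\n", argv[0], argc);
        return 0;
    }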
I love tcc, in fact I added a firmware instruction translator to 'JIT' AVR code to simavr a few weeks ago. Takes an AVR binary, translates it to C, and compiles it on the fly with libtcc to run it :-)
>I love tcc, in fact I added a firmware instruction translator to 'JIT' AVR code to simavr a few weeks ago. Takes an AVR binary, translates it to C, and compiles it on the fly with libtcc to run it :-)
Ahah, thanks for that -- I thought it was pretty clever, but it's hard to explain why to someone :-)
If you look closer, you can see I've actually repurposed the main interpreter core, and used GNU awk (of all things) to extract each opcode emulation's 'meat' and convert it to a string, and that string is used by the translator to generate the C for tcc...
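Not the simavr code itself, of course, but the general shape of that translate-then-JIT step, with a made-up two-instruction "ISA": decode opcodes, print the matching C into a buffer, and hand that buffer to tcc_compile_string() as in the libtcc sketch further up the thread.

    /* Translate a fake firmware blob into C source text (the string that
       would then be compiled on the fly with libtcc). */
    #include <stdio.h>

    enum { OP_LDI = 0x01, OP_ADD = 0x02 };    /* made-up opcodes, not real AVR encoding */

    static void translate(const unsigned char *code, int len, char *out, size_t outsz)
    {
        size_t used = snprintf(out, outsz, "void run(int *r) {\n");
        for (int i = 0; i + 1 < len; i += 2) {
            switch (code[i]) {
            case OP_LDI:    /* LDI imm   ->  r[0] = imm;   */
                used += snprintf(out + used, outsz - used, "  r[0] = %d;\n", code[i + 1]);
                break;
            case OP_ADD:    /* ADD rN    ->  r[0] += r[N]; */
                used += snprintf(out + used, outsz - used, "  r[0] += r[%d];\n", code[i + 1]);
                break;
            }
        }
        snprintf(out + used, outsz - used, "}\n");
    }

    int main(void)
    {
        unsigned char firmware[] = { OP_LDI, 40, OP_ADD, 1 };
        char csrc[1024];                      /* assumed big enough for the sketch */
        translate(firmware, sizeof firmware, csrc, sizeof csrc);
        fputs(csrc, stdout);                  /* the C that tcc would compile */
        return 0;
    }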
How many passes does it do? I suspect it isn't doing much optimization then? Maybe they patch the differences in at the AST/IR level and work from there?