A great number of familiar concepts are described here in just 2 pages:
- Subroutines
- Modularity
- Unit testing
- Reusable libraries
- Documentation
- `goto`/`return`
- Debugging
The tone seems to place this writing as instructive -- someone who wanted to write a programme at the time could refer to this document for hints at what to do. Very interesting window into the past.
I haven't quite decoded exactly what the modern formulation of an `interpretive routine` is. Is it the same as the layer we would now call the "instruction set architecture" that sits above some "microcode" operations, where some instructions do more work than others behind the scenes? It also seems to have some relation to "just-in-time compilation", just at a lower level than the current systems.
> I haven't quite decoded exactly what the modern formulation of an `interpretive routine` is.
The example given is probably the closest to evaluating a string as a series of operations on parameters passed to a function, say `evaluate('p0 * p1 + p2', p0, p1, p2)`. The proper evaluation of arithmetic expressions is quite hard but a recurring theme no matter what language you work with, so it would make sense to abstract that out and do it once and (hopefully) well.
The 'orders' are then the operators.
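For concreteness, here's a minimal sketch of what such a shared evaluation subroutine might look like today (all names are hypothetical, and it deliberately evaluates strictly left to right, sidestepping the operator-precedence problem mentioned above):

```python
# Hypothetical sketch: one reusable routine that interprets an
# expression string, with the 'orders' acting as the operators.
OPS = {
    '+': lambda a, b: a + b,
    '*': lambda a, b: a * b,
}

def evaluate(expr, *params):
    """Evaluate e.g. 'p0 * p1 + p2' strictly left to right."""
    tokens = expr.split()
    env = {f'p{i}': v for i, v in enumerate(params)}
    acc = env[tokens[0]]
    for op, name in zip(tokens[1::2], tokens[2::2]):
        acc = OPS[op](acc, env[name])
    return acc

print(evaluate('p0 * p1 + p2', 2, 3, 4))  # 10
```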
Keep in mind that this was written well before high level languages were common and that such subroutines were handcrafted pieces of code likely operating on a series of fixed memory locations rather than parameters passed on the stack.
A bit like you would do this in BASIC before structured versions of BASIC came along: set a bunch of global variables, GOSUB, read back the result from some other global variable.
This sort of arrangement made composition a very interesting and sometimes maddening affair.
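For readers who never wrote unstructured BASIC, here's a rough transliteration of that calling convention into Python, with module-level globals standing in for the fixed memory locations (everything here is a made-up illustration, not anyone's actual code):

```python
# Fixed 'memory locations' shared by caller and subroutine.
ARG1 = 0
ARG2 = 0
RESULT = 0

def multiply_subroutine():   # the GOSUB target
    global RESULT
    RESULT = ARG1 * ARG2     # inputs and output travel via globals

# Caller: set the globals, 'GOSUB', read the result back.
ARG1, ARG2 = 6, 7
multiply_subroutine()
print(RESULT)  # 42

# Composition is where this gets maddening: a second call that needs
# RESULT as an input must copy it aside before the globals are reused.
```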
Another way to read that fragment is as the beginnings of a high-level interpreted language; I think that reading is given extra weight by the debug possibilities and the fact that the program 'retains control'.
I did some more research after that post and found this explanation at [1]:
> Turing put this as follows: "an interpretive routine is one which enables the computer to be converted into a machine which uses a different instruction code from that originally designed for the computer".
It goes on to describe how Wheeler simulated a number of capabilities that EDSAC's hardware did not provide. Some more description of that is available at [2].
Finally, I found [3], which includes an actual listing of an interpretive routine for the EDSAC and a more detailed explanation. See pages 47-48.
After seeing [3], it's like writing a bunch of EDSAC assembly and then, partway through, you want to work with floating point numbers, which no EDSAC instructions support, so you use a special code indicating that your next several instructions are written for the "EDSAC+Float" machine instead, and they are interpreted that way.
This final document I located was quite interesting.
In the end, I think "interpretive subroutines" are perhaps closest to "virtual machines" or "emulators". Because there weren't really high-level languages at the time, the concept in their minds was more like writing code for a different machine. It is like a high-level language in that it expands the set of operations that the programmer can use to speak with the computer.
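To make that "different machine" reading concrete, here is a toy dispatch loop in the spirit of the listing in [3]: the simulated machine natively has only integer orders, and the interpretive routine supplies a floating-point order (`FADD`) that the hardware lacks. The opcodes and names are invented for illustration:

```python
def interpret(orders):
    """Toy interpretive routine: each 'order' is decoded and executed
    in software, so the instruction set can include operations the
    base machine's hardware does not provide."""
    acc = 0                             # the simulated accumulator
    for op, operand in orders:
        if op == 'LOAD':                # order the base machine has
            acc = operand
        elif op == 'ADD':               # order the base machine has
            acc = acc + operand
        elif op == 'FADD':              # no such hardware order:
            acc = float(acc) + operand  # simulated by the interpreter
        else:
            raise ValueError(f'unknown order: {op}')
    return acc

print(interpret([('LOAD', 2), ('ADD', 3), ('FADD', 0.5)]))  # 5.5
```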
> I haven't quite decoded exactly what the modern formulation of an `interpretive routine` is.
It sounded like a description of an interpreter used to emulate the behavior of the host machine, while allowing more tracing of the algorithm. One of the arguments to the interpreter would be the program data to be interpreted.
Besides describing programming concepts, this paper reveals what it was like to be a programmer:
As I understand it, a programmer was expected to completely understand the workings of the algorithm, and a tracing subroutine seems not to have been in favor. The systems we work with today are so complex that we can't comprehend all the algorithms and need to use debugging and tracing tools:
> However the interpretive routine retains control and so it is possible to print out extra information about the course of the programme. This extra information makes it possible to follow the meanderings of the programme in detail thus helping to locate the errors of a programme. This is not a good method of finding errors in programmes as it takes a long time and the programmer's knowledge of the programme is not utilized - as it should be - in tracing the fault.
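The "retains control" point is easy to demonstrate: because the interpretive routine sits between every pair of orders, tracing is a one-line addition to the toy loop sketched above (again purely illustrative):

```python
def interpret_traced(orders):
    """Same toy loop as above, but the interpreter prints the
    'meanderings of the programme' after every order it executes."""
    acc = 0
    for step, (op, operand) in enumerate(orders):
        if op == 'LOAD':
            acc = operand
        elif op == 'ADD':
            acc = acc + operand
        else:
            raise ValueError(f'unknown order: {op}')
        print(f'step {step}: {op} {operand} -> acc = {acc}')  # the trace
    return acc

interpret_traced([('LOAD', 2), ('ADD', 3)])
# step 0: LOAD 2 -> acc = 2
# step 1: ADD 3 -> acc = 5
```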
Other bits I enjoyed reading:
Writing good docs was hard even back in 1951!
> there still remains the considerable task of writing a description so that people not acquainted with the interior coding can nevertheless use it easily. This last task may be the most difficult.
A timeless piece of advice:
> All complexities should - if possible - be buried out of sight.
Linus is at liberty to be very selective about who contributes to the kernel, and how they do it. He's blunt, picky and pragmatic, but he has a valid point from the compsci and project-management point of view:
> I happen to believe that not having a kernel debugger forces people to think about their problem on a different level than with a debugger. I think that without a debugger, you don't get into that mindset where you know how it behaves, and then you fix it from there. Without a debugger, you tend to think about problems another way. You want to understand things on a different _level_.
Those of us who have spent multiple weeks on single bugs in systems that were made by people who also did not believe in debuggers politely disagree.
I see a debugger as a means of last resort: if all else fails and you've spent that week reasoning about the problem, where you already know how it behaves and in spite of that you still can't fix it, then a hint can help a lot.
Linus is showing an attitude in this message that simply does not help, though at least he acknowledges it. Essentially he's saying that a debugger being present in the kernel would open the kernel up to lesser gods contributing, and he can do without that: if you can't cross the hurdle then you're not welcome in the elite kernel hackers' group.
It's his playground, so I'm fine with that, but this attitude is probably one of the reasons why the Linux kernel keeps on giving when it comes to really old bugs. As every experienced programmer knows: what the code seems to be doing is not always what the code actually does, and in the presence of optimizing compilers a debugger can come in quite handy, especially if those compilers themselves are of the buggy variety.
Making kernel writing a black art serves nobody; it just reduces the pool of contributors. I often wonder if Linux became as successful as it did because of or in spite of Linus' character; even after a couple of decades I haven't been able to decide, but I'm grateful for its existence anyway.
It's also another reason why I'm a big fan of microkernels (the real variety): they make operating systems much easier to debug because each and every little module lives in a process of its own, isolated from messing up other processes' memory. That's well worth the speed penalty.