
Good luck learning Finnish without understanding the grammar.

Good luck getting a three-year-old Finnish child to lecture you on Finnish grammar - even though the kid can easily ask for an ice cream in the past, present, and future.

I did once write a text-based adventure game in C, though I only did that to work out some of the "plot" and the layout/objects I was going to work with.

My actual aim was to write a simple text-adventure in Z80 assembly, which could run upon a CP/M system. I did achieve that, and later ported the game to the ZX Spectrum.

A few years after that I used one of the Inform compilers to recode a couple of the puzzles for the Z-machine, which would also have allowed the game to run on a CP/M system. But to be honest, by that point I'd lost interest, and I never ported the whole of the game's text, the two different endings, etc.

That said, my toy adventure was popular when it was submitted here, back in the day:

https://news.ycombinator.com/item?id=26946130


The one that gets me the most is English people suddenly saying "fall" instead of "autumn".

It's actually a traditional English term which fell out of fashion.

https://weather.metoffice.gov.uk/learn-about/weather/seasons...

https://twominenglish.com/autumn-vs-fall/

Now if we start saying "diaper" again instead of "nappy", you can start to worry.


The weirdest one to me is the English suddenly referring to police as "feds".

https://english.stackexchange.com/questions/37256/police-in-...

It's not like they didn't already have dozens of slang terms for the police.


Years ago I wrote/maintained a modal console-based email client. It was written with some UI primitives in C++, with the UI itself maintained and controlled by Lua.

Viewing a list of folders? Lua. Viewing a list of messages? Lua. Viewing a single message? Lua. Those were the three main modes and UI options.

All the keybindings, layout, colour setup, and the like were dynamic. It actually worked out really well. For comparison I just ran "wc -l" against the codebase: 60k lines, a combination of C++ and Lua, but mostly Lua.

Having good scope and good tests made such a thing fine to support. Mostly the pain was broken MIME messages and dealing with the external world - e.g. invoking gpg to handle decryption and encryption.

I'd work with a big Lua codebase again if I had the need; it's a fun language and very flexible.


I "recently" wrote a CP/M emulator, and I have a lot of love for the kinda vintage software out there that still runs on it.

https://github.com/skx/cpmulator/

Over the past few days I've seen posts on Hacker News discussing 6502 assembly, people coming to the Infocom games, and similar things. There's a lot of interest out there in this retro stuff, even now.


Surely some of it is just nostalgia for a "simpler" time, but I think there is a legitimate reason to preserve and celebrate these older systems, too.

It's essentially impossible for a single person to build something as complex as a modern PC "from scratch", or indeed to build an operating system that compares to Windows, Linux, or MacOS.

These old microcomputer systems are simple enough for one person or a small team to understand and build, and they are/were capable of doing "useful work", too, without being as overly abstracted as some "teaching systems" are.

I think that for me, part of the point of digging into something like the p-System is to show some of the brilliant (and stupid) ideas that went into building something as ambitious as a "universal operating system" in the mid-1970s.


Having cut my teeth on early Archimedes machines, I have a deep fondness for the ARM2's 16 instructions, and for the assembly book I had (lost during a house move, I suspect) that gave me enough of a description of the chip's internals that I could desk-check my assembly in my head, with reasonable confidence that I was mentally emulating what the chip was actually doing rather than just predicting what outputs I'd get for a given set of inputs.

Having to remember where I'd put the relevant chunk of assembler any time I needed a division routine was, admittedly, less fun, but the memories remain fond nevertheless :)


I sometimes think about that. Consider the early versions of MS-DOS. A modern programmer could crank that out with little difficulty in a short time.

I think Tim Paterson did crank it out with little difficulty in a short time? He even called it "Quick and Dirty Operating System".

Which makes one wonder, why weren't there others (like Gary Kildall)?

I'm not sure I understand what you mean.

You probably remember Gary did start selling an MS-DOS clone (DR-DOS) after a few years, when it became clear CP/M-86 was dead. IIRC that's what inspired Microsoft to start working on MS-DOS again after several years of letting it languish. They also put anti-DR-DOS code into Windows so you couldn't start it up on DR-DOS.

And, as you know, there were a number of other bare-bones "operating systems" like MS-DOS and CP/M in those days: HDOS, TRS-DOS, ProDOS, etc. But once everyone was writing their apps for MS-DOS, there was little point in bringing out a new OS that wasn't compatible with it unless it was dramatically better in some way.

So, why weren't there other members of what set?


> there was little point in bringing out a new OS that wasn't compatible with it unless it was dramatically better in some way.

Make a free open source one.


We do have FreeDOS now! Someone could have written it in 01981, but, as I understand it, the ideological motivation for such activities wouldn't be articulated until Stallman founded GNU years later.

QDOS had the advantage of being able to reimplement the CP/M-86 design rather than starting from scratch.

There were lots of disk operating systems created for 8 and 16-bit machines, as well as a number of BASIC + DOS type systems. But CP/M is the one 8-bit OS to rule them all - even running on an Apple II or C64 with a Z-80 CPU card or cartridge.


> QDOS had the advantage of being able to reimplement the CP/M-86 design rather than starting from scratch.

CP/M was little different from the PDP-11 operating system, which it used as a model.

CP/M was not as innovative as often thought.

Both CP/M-86 and MSDOS were just an interrupt table and some implementation routines. The 8086 chip was designed around that interrupt table, so of course any OS would use it.


I assume you're talking about RT-11? Do you want to elaborate on the similarities? Although I've never used RT-11 (just CP/M, HDOS, MS-DOS, and VMS), I think they may be more superficial than you're suggesting.

Looking at https://bitsavers.org/pdf/dec/pdp11/rt11/v1_Sep73/DEC-11-ORT... (RT-11 System Reference Manual, DEC-11-ORUGA-A-D, Sept. 1973, Chapter 8, Programmed Requests) I see printing of ASCIZ strings, 16 numbered I/O channels for open files, stream (wordwise) rather than purely blockwise access to those files (though the start position is specified as a block number), an open set of device names, RADIX-50 filenames, the ability to "swap" the "user service routines" into memory temporarily so they don't have to be resident the whole time your program is running, and "tentative files" that automatically replace a permanent file if successfully closed, and asynchronous I/O (.READ and .WRITE as opposed to .READW and .WRITW or .READC and .WRITC); all of these would have been improvements over the design CP/M actually used. On the other hand, it says RT-11 only supported contiguous storage of files (like the p-System), a CRLF is automatically appended to any string you print, and the filenames are 6 characters rather than 8, which are points where CP/M wins.

The whole FCB thing, which is about 80% of CP/M BDOS, seems to have been absent in the RT-11 system call interface. I'm not sure whether it's better or worse (it's substantially more painful to use, but permits your program to allocate space for the number of open files it's actually going to use) but it's certainly a very different approach. RT-11 has .SAVESTATUS and .REOPEN to work around the 16-file limitation when necessary.

Because you can only read or write starting at a block boundary in RT-11, it seems like it usually wouldn't make sense to read less than a block. But the inability to read more than a block was a real bottleneck for I/O in CP/M, as Tim Paterson explains in the blog post I linked from https://news.ycombinator.com/item?id=43729165:

> At least part of the reason CP/M was so much slower was because of its poor interface to the low-level “device driver” software. CP/M called this the BIOS (for Basic Input/Output System). Reading a single disk sector required five separate requests, and only one sector could be requested at a time. (The five requests were Select Disk, Set Track, Set Sector, Set Memory Address, and finally Read Sector. I don’t know if all five were needed for every Read if, say, the disk or memory address were the same.)

(Actually, this is the BIOS interface; I think the BDOS interface was more reasonable, but still only able to read one 128-byte record at a time.)

Even the "Keyboard Monitor" described in Chapter 2 sounds very different from the CP/M command processor, for example, using "." as its prompt, supporting user-defined device names and command abbreviation, being able to make octal dumps of RAM and change its contents byte by byte, requiring an explicit "run" command to run programs, no way to pass command-line arguments to programs, and echoing character deletion in a teletype-friendly fashion\noihsaf\format. Most of the control keys are the same, I guess? And the editor sounds pretty similar to CP/M's benighted ED?


Things like the TYPE command, the DEL command, the 8.3 case-insensitive filenames (6.3 for RT-11), the / for switches, the drive:, CRLF, etc. Anyone using RT-11 could pick up MSDOS in about 5 minutes. I know I did (I had an H-11, and bought an IBM PC).

I bought a hard disk drive for my H-11, wire-wrapped an interface board for it, and wrote the device driver for it. It was a fun project, and didn't take much time. It was straightforward. I even got RT-11 to bootstrap off of it.

Sorry, I don't think any of that stuff is a work of genius.

My profile pic on twitter is of the machine:

https://x.com/WalterBright

from before I added the HDD.

It's also been 40 some years since I touched an 11, so my memory of the details needs a refresh :-/


Some of the things you're talking about are features MS-DOS had in common with RT-11 but where CP/M was totally different; specifically, DEL was called ERA on CP/M, and CP/M didn't have switches. (Except PIP, which, bizarrely, wrapped its switches in square brackets: PIP A:=B:*.COM[W]. See https://ia902808.us.archive.org/23/items/osborne-cpm-users-g...) MS-DOS got drive letters from CP/M; on RT-11, as you might remember, instead of A:, B:, C:, etc., you had SY0:, SY1:, and DK:. (HDOS copied that, as well as /switches.) I'm not sure where the 8.3 filenames are from, but CP/M and MS-DOS had them, and, as you say, RT-11 didn't, using 6.3 instead.

So, of the six similarities you listed between CP/M and RT-11, four were actually differences; only two were actually similarities (the TYPE command and the use of CRLF), with a third debatable one (8.3 is like 6.3 in that a three-character file type code forms part of the filename in some contexts).

If CP/M had used RADIX-50 like RT-11 did, it could have had case-insensitive 9.3 filenames in 8 bytes instead of 8.3 filenames in 11 bytes. I think that would have been a big improvement.
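
To make the arithmetic concrete, here's a minimal Go sketch of RADIX-50 packing as I remember it (the exact alphabet, and the placeholder I've used for the unused code, are from memory, so treat them as assumptions): three characters from a 40-symbol alphabet fit in a 16-bit word because 40^3 = 64000 < 65536, so a 12-character 9.3 name packs into four words, i.e. 8 bytes.

    package main

    import (
        "fmt"
        "strings"
    )

    // My recollection of the RADIX-50 alphabet: space, A-Z, "$", ".",
    // one unused code (shown here as "?"), then 0-9, for 40 symbols in total.
    const rad50 = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.?0123456789"

    // pack3 packs three characters into one 16-bit word:
    // index1*40*40 + index2*40 + index3, and 40^3 = 64000 fits in a uint16.
    func pack3(s string) uint16 {
        for len(s) < 3 {
            s += " " // pad short chunks with spaces
        }
        var w uint16
        for _, c := range strings.ToUpper(s)[:3] {
            w = w*40 + uint16(strings.IndexRune(rad50, c))
        }
        return w
    }

    func main() {
        // A 9.3 filename is 12 characters = 4 words = 8 bytes.
        name := "LONGFILENAME"
        var packed []uint16
        for i := 0; i < len(name); i += 3 {
            packed = append(packed, pack3(name[i:i+3]))
        }
        fmt.Printf("%q -> %d words (%d bytes)\n", name, len(packed), 2*len(packed))
    }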

So, I don't think any of CP/M's deviations from RT-11 are a "work of genius", but it wasn't just a copy of RT-11, "little different", as you say. It clearly deviated from RT-11 in a lot of ways, to an extent that suggests drawing from some other source. Maybe RSX-11, dunno.

The page you link to just says "Sign in to Twitter". For the sake of courtesy, I'd rather not go into how I feel about that invitation.


The differences, such as A: vs SY0:, are differences only in detail. The unix command line is fundamentally different, not just different in detail. BTW, RT-11 used PIP.

> The page you link to just says "Sign in to Twitter". For the sake of courtesy, I'd rather not go into how I feel about that invitation.

It goes to my profile page. Of course, I am logged in to twitter. I had no idea that it was necessary to sign in to twitter to see my profile page. There was no nefarious intent. I am not aware of any benefit that may accrue to me from you signing up for a twitter account.


I agree that Unix was fundamentally different in many ways, but CP/M wasn't a copy of Unix either; if anything, RT-11 was slightly more Unix-like than CP/M was. Because CP/M was evidently worse than RT-11 in many apparently unnecessary ways, I suspect that it was drawing from some other source.

I didn't suspect any nefarious intent, but if I didn't tell you it had happened, you would never have known. My apologies if it sounded like I was blaming you for it.


I don't see any heritage of unix in CP/M, but I do see a heritage from DEC. Not an exact copy, of course.

> if I didn't tell you it had happened, you would never have known

That's right, and now I know. Thanks!

> My apologies if it sounded like I was blaming you for it.

Thank you. Apology accepted!


I broadly agree, but I would quibble on the "-86" part; CP/M-86 uses a different interrupt than MS-DOS, so I suspect that the model for QDOS was CP/M-80. I'm not even sure CP/M-86 had been released when Paterson wrote QDOS.

Paterson claims CP/M-86 wasn't released yet in http://dosmandrivel.blogspot.com/2007/09/design-of-dos.html?...:

> We knew Digital Research was working on a 16-bit OS, CP/M-86. At one point we were expecting it to be available at the end of 1979. Had it made its debut at any time before DOS was working, the DOS project would have been dropped. SCP wanted to be a hardware company, not a software company.


Probably what you want to check out is Oberon, which is a modern PC built basically from scratch, along with an operating system that compares to Windows, Linux, or MacOS, built originally not by a single person but by maybe a dozen people. It's capable enough that it was the daily driver for numerous students during the 80s; the earliest versions of it were built in-house by necessity because graphical workstations weren't a product you could buy yet. Wirth's RISC CPU architecture avoids all the braindamage in things like the Z80 and the 80386. I think that, with their example to work from, a single person could build such a thing.

Oscar Toledo G. also wrote a similar graphical operating system in the 01990s and early 02000s, working on the computers his family designed and built (though using off-the-shelf CPUs). You can see a screenshot of the browser at http://www.biyubi.com/art30.html and read some of his reflections on the C compiler he wrote for the Transputer in his recent blog post at https://nanochess.org/transputer_operating_system.html.

There's a lacuna in the recursivity of Wirth's system: although he provides synthesizable source code for the processor (in Verilog, I think) there's no logic synthesis software in Oberon so that you can rebuild the FPGA configuration. Instead you have to use, IIRC, Xilinx's software, which won't even run under Oberon. Since then, though, Claire Wolf has written yosys, so the situation is improving on that front.

CP/M is interesting because it's close to being the smallest system where self-hosted development is bearable; the 8080 is just powerful enough that you can write a usable assembler and WYSIWYG text editor for it. But I don't think that makes it a good example to follow. We saw this weekend that Olof Kindgren's SeRV implementation of RISC-V can be squoze into 5900 transistors (in one-atom-thick molybdenum disulfide, no less) https://arstechnica.com/science/2025/04/researchers-build-a-... https://news.ycombinator.com/item?id=43621378 which is about equivalent to the 8080 and less than the Z80. And Graham Smecher's "Minimax" https://github.com/gsmecher/minimax is only two or three times the size of SeRV and over an order of magnitude faster.

There's no reason to repeat the mistakes Intel made in the 01970s today. We know how to do better!


> There's no reason to repeat the mistakes Intel made in the 01970s today. We know how to do better!

CP/M, WordStar, and Turbo Pascal were/are pretty good though!

As you suggest, someone really should port an open source FPGA toolchain to Oberon to honor Prof. Wirth's great work.


I agree about WordStar and TP. You can kind of justify all of CP/M's problems by reference to the limits of the machines it had to run on (for example, they often had no real-time clocks), but I still think you could do better in many ways. For example:

- The command processor didn't have to be so limited, as amply demonstrated by ZCPR, or so hard to use, as demonstrated by the p-System.

- Record-oriented file access was probably a mistake. The 128-byte record size meant that you still had to use 2-byte record numbers to get files of over 32K, and that writing a single record required wastefully reading a whole 512-byte sector in order to not lose the other three records in the sector. Byte-based file access would have been far better for the usual case, even at the expense of needing 24-bit seek offsets for very large files (over 64K). This would have to be built on sector-based access, but the intermediate layer of record-based access is purely dead weight for most applications. Sector-based access (or access in larger blocks as in Forth) would allow you to use 1-byte sector numbers until the advent of double-density disks. (There's a small sketch of the resulting read-modify-write after this list.)

- Using different BIOS calls to write to the terminal, the printer, and the paper tape punch was obviously a mistake, and one that made it very difficult to extend the set of available devices. I believe HDOS did a better job here. Moreover, if you adopted byte-based file access, you could use a single BDOS call to write to a file, the terminal, the printer, or the punch, and analogously for reading. This also would have made it easier to support multiple terminals.

- The user interface of ED was stuck in the teletype era. But almost nobody ran CP/M on a teletype, because teletypes cost more than CP/M crates. Before long most CP/M machines had a monitor built in, like the Kaypro, Osborne, and H-89. Even BASIC-80's unusably bad line editor gave you a live display of the line you were editing, and WordStar demonstrated that it was possible to do much better on the same hardware.

- The "user" facility in its filesystem was useless.

So, I'm not a huge fan of CP/M. I think virtually all of its major design decisions were mistakes, except for the BIOS/BDOS split, though not such serious mistakes as to make it completely unusable. I'm interested to hear why you disagree so strongly.


> I think virtually all of its major design decisions were mistakes, except for the BIOS/BDOS split, though not such serious mistakes as to make it completely unusable. I'm interested to hear why you disagree so strongly.

"Pretty good" isn't the same as "perfect". All of your comments seem completely reasonable. But that BIOS/BDOS design made CP/M a minimum viable portable OS that enabled binary software portability and created an ecosystem across 8-bit hardware from many vendors (not to mention homebrew), starting in 1974. The closest things to it in that space were Microsoft BASIC (with many incompatible versions) and the UCSD p-System, as you note. So I rate CP/M "great" in terms of its industry innovation – specifically how it applied technology to develop a successful 8-bit PC compatibility standard, providing a model for the 16-bit era of DOS and IBM PC compatibles.

Years later, Linux seems to be stuck with multiple, incompatible formats for binary software packaging (snap, flatpak, appimage, docker, win32...) Though I guess you don't have to worry about different sizes of floppy disks.


Interesting that you picked out the use of different API calls for writing to terminal, printer, and the tape. There was at least an attempt at unifying that with the "IOByte" configuration.

The idea was that depending on the state of the IOByte the actual destination of "stuff" could vary.

Of course in the CP/M emulator I wrote/maintain I ignore that byte, because it turns out everybody else did too. (Kinda like user-numbers/areas, most people ignored them.)
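
For anyone curious, here's a minimal Go sketch of how an emulator might decode it, based on my recollection of the IOByte layout; treat the field order and device names as assumptions rather than gospel.

    package main

    import "fmt"

    // My understanding of the IOBYTE (address 0x0003 in the zero page):
    // four 2-bit fields selecting the physical device behind each logical one.
    //   bits 0-1: CON: (console)    bits 2-3: RDR: (reader)
    //   bits 4-5: PUN: (punch)      bits 6-7: LST: (list/printer)
    var (
        conDevices = [4]string{"TTY:", "CRT:", "BAT:", "UC1:"}
        lstDevices = [4]string{"TTY:", "CRT:", "LPT:", "UL1:"}
    )

    func main() {
        iobyte := byte(0x81) // hypothetical value: console=CRT:, list=LPT:
        fmt.Println("console ->", conDevices[iobyte&0x03])
        fmt.Println("list    ->", lstDevices[(iobyte>>6)&0x03])
    }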


Aha, thanks! I didn't know about that!

I just found Tim Paterson's article at http://dosmandrivel.blogspot.com/2007/09/design-of-dos.html?... which says my comment above got an important thing wrong: the sector size on 8" disks was typically 128 bytes, so CP/M was reading per sector. He mentions that North Star DOS used 256-byte sectors. I'm pretty sure the logical HDOS sectors on my H-89 5¼" single-density floppies were 512 bytes, but I've never interacted with the low-level format of the disks. They were 100K per side, 10 sectors per track (with 11 holes punched in the disk to indicate their positions), which I guess would imply 20 tracks, which sounds too low! Maybe the physical sectors were 256 bytes.

https://heathkit.garlanger.com/diskformats/HDOS_Disk.pdf confirms: 256 bytes, 10 sectors, 40 tracks.

Paterson's article explains why he copied the FAT filesystem from Microsoft BASIC but extended the cluster numbers to 12 bits. He also has some pretty damning criticisms of both the CP/M filesystem design and its BIOS interface.


Great reference. I didn't realize the FAT filesystem was from Microsoft Disk BASIC (though I knew it was different from CP/M's) so I guess it came back full-circle with DOS. (Though perhaps not directly compatible with Disk BASIC's FAT8 filesystems.)

I like how your emulator (like RunCPM) can work with native directories and files. It's much more convenient than messing around with disk images.

Thanks! One of my biggest frustrations with the retro-scene is having to deal with old compression-formats, and disk-archives, so that was very much a design choice.

Many of the recent/modern emulation projects work the same way. In addition to RunCPM there's also the excellent rust-based iz-cpm project which I enjoyed studying at times.


I like the idea of modern machines as universal systems that can seamlessly run whatever software you might like to run. When you switch to a new machine, all of your old software and files are still usable.

Note that old software is likely to contain security flaws and other bugs, so isolation/sandboxing is important.

Thanks for the iz-cpm reference!


There were some short-lived projects/groups trying to run their own processes. DWF is one that I recall, though it is dead again:

https://lwn.net/Articles/851849/


Dangerous indeed. They come in boxes of six, which I always treated as a single serving.

I moved to Finland so I can't get them here, but at least I'm consoled by the availability of Irn-Bru!


They still have Irn Bru in Finland? They stopped making it in Scotland once the sugar tax came into force :(.

Although if I'm honest, I drank far too much of the stuff so it's probably a good thing I can't buy it any more.


They do. Sometimes I see individual cans in K-supermarkets, but otherwise:

https://www.verkkokauppa.com/fi/product/327752/Irn-Bru-virvo...


Ingredients: ... sweeteners (aspartame, acesulfame K)

Unfortunately that's the new "not really Irn Bru" recipe :(.


It's still Scotland's leading soft drink, and you can buy the original high-sugar recipe:

https://irn-bru.co.uk/products/1901


That's the original recipe, but not the one that was known as Irn Bru immediately before the sugar tax -- it's got no caffeine in it. I've not tried it.

Any changes made before I started drinking it are merely historical baggage, but any changes made after I started drinking it have a fundamental effect on whether I consider it the same product. See also: "New Coke".

(Also, I can taste Aspartame -- so while many people might say that the low sugar variant is sufficiently similar in taste, I really don't enjoy it. Although, as I posted above, this is probably a good thing.)


You can definitely still buy it in Scotland.


You can buy something marketed as Irn Bru, but it's not really Irn Bru. Real Irn Bru does not contain artificial sweeteners.


We're gonna go "No true Scotsman", on irn-bru? That takes some doing!

I get what you're saying, and I kinda believe the best Irn-Bru was the stuff in a glass bottle, delivered to the door, when I was a kid.

But the stuff that's out there, to my mind, is still just fine. It cures hangovers. It tastes of sugar and love; whether there's real sugar, real iron, or real love in it is almost immaterial.


"Made in Scotland, from Girders" :).

Un(?)fortunately I taste Aspartame as being bitter so it's not as sweet for me as it used to be :(. Which is probably a good thing, given how much I used to drink :P.


Single serving indeed; they don't really work for sharing. Similar for the 8 Tunnock's caramel bars...


I haven't tested either, but I would of course give [1] a try for comparison purposes. If you already have, how did they compare?

[1]: https://brunberg.fi/en/tuoteryhma/chocolate/kisses/


They're very similar to look at, but the chocolate is wrong and the consistency of the "foam" part differs too.

Close, but sadly not close enough! (I enjoy both, but the Tunnock's teacakes, and their caramel logs too, are a clear winner for me. Probably due to my childhood memories and associations as much as anything else. Bias!)


Thanks, this kind of information is very hard to find elsewhere and ... I just find snack food interesting. :)


Same for me, nothing comes close!


My parents moved home just before Christmas, and we had to spend a lot of money on short notice replacing windows, etc.

As a result of being low on cash, for the first time ever my parents bought my sisters and me a shared present - a Sinclair ZX Spectrum, 48K.

The computer came with 10-12 cassette tapes, a tape recorder, a bundle of manuals, and a joystick. Unfortunately the tape recorder didn't work, so we couldn't load any of the games.

I spent Christmas reading the BASIC manual, and my sisters spent it being disappointed.

I wrote about this in a little more detail here, in the past:

https://blog.steve.fi/how_i_started_programming


I put together this example after reading the article:

https://github.com/skx/simple-vm

Simpler than the full gomacro codebase, but perhaps helpful.


For optimal speed, you should move as much code as possible outside the closures.

In particular, you should do the `switch op` at https://github.com/skx/simple-vm/blob/b3917aef0bd6c4178eed0c... outside the closure, and create a different, specialised closure for each case. Otherwise the "fast interpreter" may be almost as slow as a vanilla AST walker.
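
For what it's worth, here's a minimal Go sketch of the difference; the opcode names and types are hypothetical, not taken from simple-vm or gomacro. The slow variant still pays for the switch on every call, while the fast variant does the switch once up front and returns a specialised closure per case.

    package main

    import "fmt"

    // Hypothetical opcodes and types; not taken from simple-vm or gomacro.
    const (
        OpPush = iota
        OpAdd
    )

    type instr struct {
        op  int
        arg int
    }

    type vm struct{ stack []int }

    // compileSlow returns a generic closure: the switch on the opcode
    // still runs every single time the closure is called.
    func compileSlow(i instr) func(*vm) {
        return func(m *vm) {
            switch i.op {
            case OpPush:
                m.stack = append(m.stack, i.arg)
            case OpAdd:
                n := len(m.stack)
                m.stack[n-2] += m.stack[n-1]
                m.stack = m.stack[:n-1]
            }
        }
    }

    // compileFast does the switch once, up front, and returns a
    // specialised closure for each case; execution is just a call.
    func compileFast(i instr) func(*vm) {
        switch i.op {
        case OpPush:
            arg := i.arg
            return func(m *vm) { m.stack = append(m.stack, arg) }
        case OpAdd:
            return func(m *vm) {
                n := len(m.stack)
                m.stack[n-2] += m.stack[n-1]
                m.stack = m.stack[:n-1]
            }
        }
        return nil
    }

    func main() {
        prog := []instr{{OpPush, 2}, {OpPush, 3}, {OpAdd, 0}}
        code := make([]func(*vm), len(prog))
        for n, ins := range prog {
            code[n] = compileFast(ins)
        }
        m := &vm{}
        for _, f := range code {
            f(m)
        }
        fmt.Println(m.stack) // [5]
    }

With the fast variant, dispatch at run time is just an indirect call per instruction, which is the whole point of compiling to closures.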


That's fair; thanks for taking the time to look and for the feedback.


I used it for scripting routers, more than anything else.

I guess Expect became almost popular enough to be considered separate from Tcl in the end, too.


Expect is very cool and useful. I have used it some.

I read almost the full Expect book by Don Libes, the author of Expect, several years ago. He had a unique and very interesting style of writing, in that book.

Yes, so popular that there are also ports of Expect to other languages, such as Python.

