
Ah, the funny things we read about in 2020.

In 1985... yes, I said 1985, the Amiga did all I/O through sending and receiving messages. You queued a message to the port of the device/disk you wanted; when the I/O was complete, you received a reply on your port.

The same message port system was used to receive UI messages. And filesystems, on top of the drive system, also used ports/messages. So did serial devices. Everything.

Simple, asynchronous by nature.

As a matter of fact, it was even more elegant than this. Devices were just DLLs with a message port.
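
For anyone who never had the pleasure, the pattern looked roughly like this in C. This is a sketch from memory using the later (2.0-era) exec calls, assuming the serial device; error handling is omitted and details are approximate -- the Amiga RKMs have the real thing.

    #include <exec/types.h>
    #include <exec/io.h>
    #include <devices/serial.h>
    #include <proto/exec.h>

    static UBYTE buffer[256];

    void read_async(void)
    {
        struct MsgPort *port = CreateMsgPort();
        struct IOExtSer *io = (struct IOExtSer *)
            CreateIORequest(port, sizeof(struct IOExtSer));

        OpenDevice("serial.device", 0, (struct IORequest *)io, 0);

        io->IOSer.io_Command = CMD_READ;     /* queue an asynchronous read...  */
        io->IOSer.io_Data    = buffer;
        io->IOSer.io_Length  = sizeof(buffer);
        SendIO((struct IORequest *)io);      /* ...and return immediately      */

        /* do other work; the device answers by putting a message on our port */

        WaitPort(port);                      /* sleep until a reply arrives    */
        WaitIO((struct IORequest *)io);      /* collect the completed request  */

        CloseDevice((struct IORequest *)io);
        DeleteIORequest((struct IORequest *)io);
        DeleteMsgPort(port);
    }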




And it worked well, with 512K of memory, in 1985.

The multitasking was co-operative, and there was no paging or memory protection. That part didn't work as well (but it still worked surprisingly well, especially compared to Win 3.1, which came 5-6 years later and needed much more memory to be usable).

I suspect that if Commodore/Amiga had done a cheaper version and had not sucked so badly at planning and management, we would be much farther along on software and hardware by now. The Amiga had 4-channel 8-bit DMA stereo sound in 1985 (which with some effort could become 13-bit 2-channel DMA stereo sound), a working multitasking system, 12-bit color high-resolution graphics, and more. I think the PC had these specs as "standard" only in 1993 or so, and by "standard" I mean "you could assume there was hardware to support them, but your software needed to include specific support for at least two or three different vendors, such as the Creative Labs Sound Blaster and Gravis UltraSound for sound".


Something else that's mentioned less than the hardware side is AmigaDOS and AmigaShell, which were considerably more sophisticated than MS-DOS, and closer to Unix in power (e.g. scripting, pipes, etc.).

The fate of Amiga is so infuriating. It's mind-boggling to think how Microsoft was able to dominate for so long with clearly inferior technology, while vastly superior tech (NeXT, Amiga, BeOS) lost out.

There are many such unhappy stories, and I often think about the millions of hours spent on building tech that should have conquered the world, but didn't. The macOS platform is a rare instance of something (NeXT) eventually winning out, but the Amiga was a different kind of dead end.


If you think about it, the triumph of "good enough in the right place at the right time" describes most of the history of computing. Unix was that, as well, compared to many of its contemporary OSes. C was several steps back from the state of the art in PLs. Java, JavaScript, PHP... the list goes on and on.


There’s also the “tearing your competitors to shreds, regardless of the law or ethics” which is how I think of Microsoft in the pre-iPhone era.

As someone who loves software, there was a very clear feeling at that time that Microsoft was putting a huge chilling effect on the whole industry, and that the entire industry was stagnating under their control.

Thank god for Netscape, Google, Apple, Facebook, and Amazon (in that order) who were able to wrest that control from them. Now at least there are multiple software ecosystems to move between. When one of these massive companies poisons the water around them, there are other ecosystems doing interesting things.


Got bad news for you, my friend. All of them (well, of course Netscape doesn't exist anymore) poisoned the waters around them. So regardless of where you move, you still inhale some poisoned air.


“C was several steps back from the state of the art in PLs”

This very accurately describes Go


Sure, if literally the only metric you judge a language by is how state of the art its expressiveness is.

Sometimes it feels like all the hate on HN toward Go comes from ignorance that there is a whole domain of software outside of scripting and low-level systems programming, and that some enterprises value 20-year maintenance over the constant churn of change, e.g. Rust and JavaScript. And yes, I often hear people saying "you can still do that in x or y", but the point is that Go does it better than most languages because it was purposely designed with those goals in mind - hence exactly why it suffers on expressiveness, state-of-the-art features, et al. And I say this from 30 years of experience writing and managing enterprise software development projects across more than a dozen different languages.

Go might not be cool nor pretty, but it’s extremely effective at accomplishing its goal.


I think this is a very two-dimensional way of looking at the problem.

Go reduces complexity in order to make it easier to build resilient systems.

A language like Perl has bucketloads more features, and more expressive syntax, but I’d still say Go is many steps ahead of Perl.

On another note, I’d actually argue that some of Go’s features, such as “dynamically typed” interfaces and first-class concurrency support are streets ahead of most other languages. Not to mention its tooling, which is better than any language I’ve used, full stop (a language is so much more than simply its syntax).

I believe that functional languages, with proper, fully-fledged type systems, are the best way to model computation. But if I had to write a resilient production system, I’m choosing Go any day.


> This very accurately describes Go

...and was a deliberate design objective of an ex-Bell Labs guy.


C was designed to fill its main purpose of writing a portable OS in a higher-level language, and given that the majority of today's OSes are written in C, that is a testament to its success.

It is interesting to note that while Brian Kernighan and Ken Thompson were involved in the initial Go language design, C was largely Dennis Ritchie's baby, and he had a complete PhD thesis on programming language design, meaning that he was well aware of the state of the art in programming language design at the time.

The main argument for "several steps back" is probably about the lack of functional language features like closures, and that is probably at the very bottom of the list of language features you want when porting an OS, given the CPU and memory limits of computer systems at the time. The other is object orientation, but you can do object-oriented programming in C inside the kernel just fine, just not as gung-ho as things like the multiple inheritance nonsense [1].

The jury is still out on Go. The fact that Kubernetes is very popular for the cloud now does not mean it will be as successful as 50 years of C. Someone somewhere will probably come up with better Kubernetes alternatives soon that use different languages. To be relevant today and in the future, Go needs to adopt generics, and its designers are well aware of the deficiency of not having generics in the current Go implementation.

[1] https://lwn.net/Articles/444910/


Not at all. C was designed to fill its main purpose of writing a portable OS in a higher-level language at Bell Labs; the rest of the world had been doing that since 1961. Quite easy to find out for anyone going through the digital archives from bitsavers, ACM and IEEE.

The majority of today's OSes are written in C as a testament to the success of a free-beer OS given away alongside tapes with source code, while other mainframe platforms required a mortgage just to get started.

Had Bell Labs been allowed to sell UNIX, there wouldn't exist a testament to anything.


The main competitor to UNIX, namely VAX/VMS, is mainly written in C, as is its natural successor, the Windows NT kernel, which is probably the second most popular OS in the world. The more modern BeOS and MacOS kernels are written in C. Even the popular JVM (the equivalent of a Java mini-OS) is written in C. Why have these UNIX alternatives chosen to use C while other programming languages were readily available at the time, for example Pascal, Objective-C, and even the safe Ada?

And were the mainframe OSes written for portability in the first place, like UNIX?


VAX/VMS was written in BLISS; it only adopted C after UNIX started to spread and they needed to cater to competition and their own in-house UNIX implementation. Learn history properly.

https://en.wikipedia.org/wiki/BLISS

Even the popular JVM is written in a mix of Java and C++, with plans to port most of the stuff to Java now that GraalVM has been productised: https://openjdk.java.net/projects/metropolis/

Speaking of which, there are at least two well-known versions of the JVM written in Java, GraalVM and JikesRVM. Better to learn the Java ecosystem.

UNIX was written in assembly for the PDP-7; C only came into play when they ported it to the PDP-11, and UNIX V6 was the first release where most of the code was finally written in C.

IBM i, z/OS and Unisys ClearPath in 2020 have completely different hardware than when they appeared in 1988, 1967 and 1961 respectively, yet PL/S, PL/X and NEWP are still heavily used on them. Looks like portable code to me.

Mac OS, you know, the predecessor of macOS, was written in Object Pascal, even though eventually Apple added support for C and C++, which then made C++ with PowerPlant the way to code on the Mac, not C.

BeOS, Symbian were written in C++, not C.

Outside of the kernel space, Windows and OS/2 always favoured C++ and nowadays Windows 10 is a mix of .NET, .NET Native (on UWP) and C++ (kernel supports C++ code since Windows Vista).

NeXT used Objective-C in the kernel, that's right, NeXT drivers were written in Objective-C. Only the BSD/Mach stuff used C.

macOS replaced the Objective-C driver framework with IO Kit, based on Embedded C++ again not C. Nowadays with userspace drivers the C++ framework is called DriverKit in homage to the original Objective-C NeXT framework.

Arduino and ARM mbed are written in C++, not C.

Android uses C only for the Linux kernel; everything else is a mix of Java and C++, and since Project Treble you can even write drivers in Java, just like Android Things has allowed since version 1.0.

Safe Ada is used alongside C++ on the GenodeOS research project.

Inferno, the last iteration of the hacker-beloved Plan 9, uses C in the kernel, and the complete userspace makes use of Limbo.

F-Secure, you might have heard of them, has their own bare metal Go implementation for writing firmware and is used in production via the Armory products.

IBM used PL.8 to write an LLVM-like compiler toolchain and OS during their RISC research, and only pivoted to AIX because that was what the market wanted RISC for.

Contrary to the cargo cult that Multics was a failure, the OS continued without Bell Labs and was even assessed to be more secure by the DOJ thanks to its use of PL/I instead of C.

There is so much more to the world of operating systems than the tunnel vision of UNIX and C.


Personally, prior to 2010, I'd consider C++ as "C with classes", an object-oriented extension of C. The modern C++ after that is more of a standalone language, after adopting features from some other languages (e.g. D). Objective-C, on the other hand, is totally a separate language.

The original JVM written by Sun was in C not C++ or Java.

The Windows NT kernel is mainly written in C. The chief developer of Windows NT, Dave Cutler, is probably the most anti-UNIX person in the world, but the fact that he chose C to write the Windows NT kernel is probably the biggest testament you can get. Dave Cutler was also one of the original developers of VMS; if BLISS, with its typeless nature, were better for developing an OS than C, he'd probably have chosen it.

For whatever reasons, Multics failed to capture widespread adoption compared to UNIX, and its name exists mainly in operating system books as the precursor OS to UNIX. For most people Multics is like the B language, just a precursor to the C language. I know it's a shame that Multics has become a mere footnote in OS textbooks despite its superior design compared to UNIX.

The PL/I language is interesting in that it was quite advanced at the time, but as I mentioned in my original comment, Dennis Ritchie had to accommodate the fact that some of its language features were over-engineered for the hardware of the day, and he had to compromise accordingly. Go's designers, however, have chosen to compromise not based on the state of the art in hardware but on what the language designers thought was good for Google developers at the time of the original language design proposal.


>" Unix was that, as well, compared to many of its contemporary OSes"

Which other OSes from the era are you referring to here?


Yes, the Worse is Better thing.


On Microsoft dominating - because developers don't matter, users do. None of these superior technologies would be willing to make an exception in the kernel so that SimCity could run (look up the story from Raymond Chen if you are not familiar). Linux found considerably more success on servers, as the users themselves are "developer-like".


Closest thing I can find is from Joel Spolsky:

I first heard about this from one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it.

https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...


Yes, I could not find the original link either. The closest I could find is this (the link mentioned in the paragraph below is broken) -

"Interesting defense. While Chen has a good point about Microsoft taking the blame unfairly over this and perhaps similar issues its not like Microsoft is renown for their code quality. Indeed, check out an item in J.B. Surveyer's Keep an Open Eye blog from September 2004 which details how Chen's team added code to allow Windows to work around a bug in Sim City!"

https://www.networkworld.com/article/2356556/microsoft-code-...


Here you go:

https://web.archive.org/web/20070114082053/http://www.theope...

It actually refers back to the Joel on Software blog post in the comment above.


The Amiga lost the war for completely different reasons. The competition at the time was MS-DOS 4 or 5; Windows 2 (and 3.0) was still a toy, and it had compatibility problems.

BeOS had no chance to be evaluated on its own merits because by that time Microsoft had already applied anticompetitive and illegal leverage on PC vendors - for which they were convicted and paid a hefty fine (which was likely a calculated and very successful investment, all things considered).

The Amiga wasn’t even expensive for what it gave: it was significantly cheaper than a PC or Mac with comparable performance, and even had decently fast PC emulation and ran Mac software faster than the Mac.

It did not have a cheap “entry level” model, though, which was one big problem. The other (not unrelated) problem was incredible incompetence among Commodore management.


Do you not consider the A500 cheap? I believe they were going for around $600 in late-'80s money, at least in the US. This wasn't Atari ST cheap but still not a bad deal.


A starter no-brand beige box was always cheaper. And iirc, for a long time you couldn’t get an el-cheapo monochrome monitor for the Amiga - only color or TV, which was ok for the C64-upgraders but not for the PC competition.


True. PC clones were always cheaper.

You could use a composite monitor on the A500/2000. It was only in monochrome. I did that for the first couple months I had my A500.


Around me, they still cost twice as much as a CGA or Hercules (or even dual) monochrome monitor. The starter Amiga cost twice the starter PC until 1993 or so, and by then the war was lost. It was too expensive for a middle class family where I lived.


There's a similar explanation for Linux's success on servers: Linus is very strict on backward compatibility for the kernel. But for Linux on the desktop, the rest of the stack (GUI environments) is made by a bunch of CADT devs who don't care about backward compatibility, so of course it failed.


What kind of backward compatibility for GUI environments are you talking about?


The kind where you can still use an application 2 years later without having to recompile it.


That generally isn't a problem if you pin/vendor your dependencies... the same thing that most developers seem to do on Windows and mac anyway.


CADT? What do you mean?


Cascade of Attention-Deficit Teenagers (open-source software development model)



google "CADT jwz". For various reasons it can't be linked from HN.


For those curious about the story https://www.joelonsoftware.com/2000/05/24/strategy-letter-ii... is a great read with other useful tidbits as well.


There is something to be said about the two companies mentioned other than MS that did the right backwards-compatible thing, Transmeta and PayMyBills: one is bust and the other doesn't show up on the first page of a search.


> the other doesn't show up on the first page of a search.

It merged with PayTrust, which was acquired by Metavante, who then sold the customers to Intuit. All this happened in the early 2000s. More recently it seems that Intuit sold Paytrust back to Metavante. It's still operating a service at Paytrust.com.


My first Linux box felt pretty comfortable after cutting my teeth on an Amiga's shell, which was largely inspired by Unix and still similar enough in concepts to make the transition easy.


I was an Amiga user for most of the late 80's and early 90's. The hardware didn't change much over the years. Software-wise, OS 2.0 was a huge upgrade, but hardware-wise, it felt like little changed until AGA. AGA machines (1200/4000) were too little, too late. If they had come out in 1990 instead of 1993, it might've been enough of a lead. Maybe in an alternate universe where the A3000 had AGA.


AGA was the only significant hardware revision.

They should have ditched m68k too. I loved it, but with 68040 and 486, the writing was on the wall for everyone to see.

By the time of the Pentium, the writing was on the wall, the floors, the windows, the ceiling.

Yes, the 68060 held a candle to the early Pentium, but it was not intended as a personal computer CPU, more "fast embedded".

The Amiga OS was great. No memory protection, but Win 3.1 had none, DOS had none, and Win 95 had some but somehow crashed relentlessly anyway. It took years for them to discover that it had a max uptime of about 49.7 days because of a timer running out of bits.

AGA could and should have been incrementally upgraded with more modes, always keeping backwards compatibility (like AGA did with the original chipset). They could have sold Amigas on PCI boards, with a cheap 68000 to boot legacy AmigaOS until the transition was complete via emulation or whatever, and used the PC's x86 for game code. So many possibilities, but R&D was on a shoestring budget.

The "game console like" conformity was the strength and ultimately the downfall of the platform, but not because that's bad inherently, but because the revisions stopped coming. The original PS2 was compatible with the PS1, and the original PS3 was compatible with the PS2.

The iPhone also shows the strength of vertical integration. Commodore had a great chessboard opening but traded all its pieces away for nothing, except, in the end, pork for the CEO and board.


My grandpa gave me my first PC, and it had a CLI I learned; then, years later, when I was introduced to Linux, I just had an intuition for the basic commands and usage. I haven't been able to track down what that PC was (it was CLI-only but could load games from floppies). I think it may have been an Amiga (this was in '98, but the PC was a decade+ old at the time).


macOS may have technical advantages, though less and less over time. But it always had a dramatically more restrictive business model from the very beginning.

This is what made it unattractive to business and continues to make it unattractive to many.

The restrictiveness of Apple is likely an advantage for novice mobile users, and other vendors copied it.


You say it's unattractive to business and yet MacBook seems to be the single most popular brand of laptop at most tech companies these days.


Maybe the most visible brand, among web-dev, but HP, Lenovo, and Dell own the business/enterprise laptop market in a big way.


I'm aware, I'm a full-time Windows developer.


Around here it is a mix. Macs aren't extremely popular (3-5 out of 20 or something), and Linux has quickly become more common and seems to be eating into the Windows market share.


Every time there's a huge media article about a newly discovered tracking mechanism in Windows 10, you can immediately see posts with newcomer questions in Linux-specific areas.

A lot of people seem to have switched to Ubuntu or Arch due to Windows 10 tracking. And these are also non-technical people that have no idea what they are doing, which is kinda awesome.

I always love it when Linux gets a bit easier to use as a desktop for the wider audience.


50/50 with Linux among developers in the case of my company. In my team there are like 4 Dells and only 1 Mac.


Software developers are a tiny fraction of business users. In our industry - vaccines - most people use Windows laptops to get work done. Execs do use Apple stuff, but they're not doing the actual ground-level work, so it's probably the same everywhere. Apple hardware is expensive to buy, expensive to maintain, and difficult to service due to its flawed design (everything is soldered, no easy access to components, no third-party repair, no access to parts, generates lots of e-waste, etc). The OS also is not capable enough to be easily administered by IT.


The PC was an open, free specification, so the hardware was cheaper. I think that was the main driver behind the PC winning, not Microsoft.


Only thanks to Compaq; that wasn't part of IBM's plans.


Maybe it's only clear to you. I don't see what is "clearly inferior" about Microsoft tech. It is reliable and rock solid and has worked very well for our industry (vaccines).


I'm of course referring to the periods when Amiga and Apple competed with MS-DOS and Windows, which were vastly inferior technologically.


Amiga multitasking was actually preemptive. It was only cooperative in the sense that all processes / tasks were in the same address space and without memory protection...


Between the preemptive multitasking and purely message-based communication it sounds a lot like Erlang.


Unlike erlang, though, when it crashed, it crashed.

RIP Guru meditation.


I learned C programming on an Amiga. It made me very careful. If you messed up, you were looking at the guru followed by a couple minutes for a reboot. Fun times...


We have to "thank" Compaq for it as well.

Even with all its management flaws, the Amiga might have survived without the mass production of PC clones.


Minor nitpick: The Amiga had preemptive multitasking (and thus didn't depend on user code to willingly give up their time slice, in that regard it was more like UNIX, and unlike early Windows and MacOS versions).


Another amazing thing is that Carl Sassenrath, creator of the Amiga OS kernel, also went on to create the REBOL language, which seemed quite innovative too - I've checked it out some - though it is kind of dormant now, and now there is the Red language, based somewhat on REBOL. https://en.m.wikipedia.org/wiki/Carl_Sassenrath


I think the multitasking was actually preemptive. But yes, it had no memory protection: the message passing infrastructure relied on it and it would have been very hard to retrofit even on cpus with an MMU (although I think recent versions might have actually tried).


True, but as I read from more knowledgeable sources than myself, the problem of the Amiga was that the software was intimately linked to and effectively exposed hardware implementation details.

This made upgrading chips nigh impossible without full software rewrites, which ultimately caused stagnation.

Indeed, as an A500 kid I used to laugh and was horrified by my first PC...


The OS actually had a very advanced abstraction layer. The problem is most games bypassed it (and the OS itself in fact) and talked directly with the hardware.


Thanks for the clarification


A friend of mine was amazed by this capability of the Amiga when I showed him that on one screen I could play mod.DasBoot in NoiseTracker, pull the screen down partly then go on the BBS in the terminal by manually dialing atdt454074 and entering, without my A500 even skipping one beat...

All I had was the 512 kB expander; he had a 386 with a 387 and could only run a single-tasking OS.


Linux was originally built for the 386.


He could've done the same thing with DOS if he bought DesqView. That let you multitask multiple DOS applications on a 386.


Not quite. The multiple screen thing allowed several full screen graphical applications with different resolutions on screen at once, divided by a vertical barrier (a title bar similar to those on windows). This was a hardware feature at its core, if memory serves me right.


Yes, I remember the screen feature on my A500. It was neat.

To say a 386 is limited to single tasking is wrong though. That was my main point.


I remember NetWare's IPX/SPX network stack used a similar async mechanism. The caller submits a buffer for a read and continues to do whatever. When the network card receives the data, it puts it in the caller's buffer. The caller is notified via a callback when the data is ready. All of this fit in a few KB of memory in a DOS TSR.

All the DOS games at the time used IPX for network play for a reason. TCP was too "big" to fit in memory.


"In 1985... yes I said 1985, the Amiga did all I/O through sending and receiving messages"

I do remember that, and it was cool. But, lightweight efficient message passing is pretty easy when all processes share the same unprotected memory space :)


L4 uses a similar model, and the last ~20 years of research around L4 has mostly focused on improving IPC performance and security. The core abstraction is a mechanism to control message passing between apps via routing through lightweight kernel invocations (which is indeed practically the only thing the kernel does, it being a microkernel architecture).

Memory access is enforced, although not technically via the kernel. Rather at boot time the kernel owns all memory, then during init it slices off all the memory it doesn't need for itself and passes it to a user space memory service, and thereafter all memory requests get routed through that process. L4 uses a security model where permissions (including resource access) and their derivatives can be passed from one process to another. Using that system the memory manager process can slice off chunks of its memory and delegate access to those chunks to other processes.


When you want to squeeze every bit of performance out of a system, you want to avoid doing system calls as much as possible. io_uring lets you check whether some I/O is done by just checking a piece of memory, instead of using read, poll, or such.
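
Roughly, with liburing (a minimal sketch; the file name and sizes are made up and error handling is omitted):

    #include <liburing.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        struct io_uring ring;
        io_uring_queue_init(8, &ring, 0);

        int fd = open("data.bin", O_RDONLY);     /* placeholder file           */
        static char buf[4096];

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
        io_uring_submit(&ring);                  /* one syscall to kick it off */

        /* ...do other work... */

        /* Peek at the completion queue: this only reads the shared ring
           memory that the kernel updates, no syscall involved. */
        struct io_uring_cqe *cqe;
        if (io_uring_peek_cqe(&ring, &cqe) == 0) {
            printf("read completed: %d bytes\n", cqe->res);
            io_uring_cqe_seen(&ring, cqe);
        }

        io_uring_queue_exit(&ring);
        return 0;
    }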


One thing that doesn't change is that every decade people will look at the Amiga and admire it just the same, no matter how many advances have been made since.


This over-romanticizes the Amiga (a beautiful system, no doubt), because there have been message-passing OSes since the 1960s (see Brinch Hansen's Nucleus, for example). The key difference with io_uring is that it is an incredibly efficient and general mechanism for async everything. It really is a wonderful piece of technology and an advance over the long line of "message passing" OSes (which were always too slow).


Purely for entertainment, what is the alternate history that might have allowed Amiga to survive and thrive? Here's my stab:

- in the late 80s, Commodore ports AmigaOS to 386

- re-engineers Original Chipset as an ISA card

- OCS combines VGA output and multimedia (no SoundBlaster needed)

- offers AmigaOS to everyone, but it requires their ISA card to run

- runs DOS apps in Virtual 8086 mode, in desktop windows or full-screen


The reverse was actually possible: there were PC-compatible expansion cards for the Amiga [1]. The issue is that they were very expensive and 8088-only.

[1] for example: https://en.wikipedia.org/wiki/Amiga_Sidecar although "card" is stretching it :)


Yes, those PC card addons for the Mac/Amiga/etc are endlessly fascinating to me. But with the benefit of hindsight, the crucial factor wasn't just being able to run DOS applications on your fancy proprietary computer, it was riding the PC Compatible rocketship as it blasted off. Creative Labs and 3Com and Tseng and many others showed that there was more value in manufacturing a popular expansion card for the massive PC world than in owning your own closed platform bow-to-stern.



Nice. I did not know those.


> Devices were just DLL with a message port.

Reminds me of: https://en.wikipedia.org/wiki/Unikernel


Just like Erlang + receive.


All this fuss because Linux wouldn't just implement kQueue ... Sigh.


Please explain to me how kqueue facilitates submitting arbitrarily large numbers of syscalls to the kernel as a single syscall, to be performed asynchronously no less. Even potentially submitted using no syscall at all, in polling mode.
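
For reference, here's roughly what that looks like with liburing: several unrelated operations queued up and handed to the kernel with a single io_uring_enter(). A sketch only; the ring, fds and buffers are assumed to already exist.

    #include <liburing.h>

    /* Queue three unrelated ops and hand them all to the kernel in one go.
       'ring' is assumed to have been set up with io_uring_queue_init(). */
    static void submit_batch(struct io_uring *ring,
                             int file_fd, void *rbuf, unsigned rlen,
                             int sock_fd, const void *wbuf, unsigned wlen,
                             int log_fd)
    {
        struct io_uring_sqe *sqe;

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_read(sqe, file_fd, rbuf, rlen, 0);

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_write(sqe, sock_fd, wbuf, wlen, 0);

        sqe = io_uring_get_sqe(ring);
        io_uring_prep_fsync(sqe, log_fd, 0);

        io_uring_submit(ring);  /* three operations, a single io_uring_enter()
                                   (or none at all with SQPOLL) */
    }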


Linux should have had kqueue instead of epoll. But io_uring is a different thing.


AFAIK it's unnecessary at this point; Linux has most of the equivalent functionality, and there is a shim library for it: https://github.com/mheily/libkqueue


yes, these days you can get a file descriptor for pretty much everything so epoll is sufficient.

I think that epoll timeout granularity is still in milliseconds, so if you want to build high-res timers on top of it for your event loop, you have to either use zero-timeout polling or use an explicit timerfd, which adds overhead. I guess you can use plain ppoll (which has ns-resolution timeouts) on the epoll fd.
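
The timerfd route looks roughly like this -- a minimal sketch with error handling omitted: register a timerfd with the epoll set and you get sub-millisecond wakeups even though epoll_wait's own timeout argument is millisecond-granular.

    #include <sys/epoll.h>
    #include <sys/timerfd.h>
    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int ep  = epoll_create1(0);
        int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);

        struct itimerspec its = {0};
        its.it_value.tv_nsec = 250 * 1000;       /* fire in 250 microseconds  */
        timerfd_settime(tfd, 0, &its, NULL);

        struct epoll_event ev = { .events = EPOLLIN, .data.fd = tfd };
        epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &ev);

        struct epoll_event out[16];
        int n = epoll_wait(ep, out, 16, -1);     /* wakes on timer expiry     */

        uint64_t expirations;
        read(tfd, &expirations, sizeof(expirations));  /* acknowledge expiry  */

        close(tfd);
        close(ep);
        return (n > 0) ? 0 : 1;
    }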


This is corrected in io_uring too, if you use IORING_OP_TIMEOUT, which takes a timespec64.
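
With liburing that's roughly (a sketch; assumes a ring already set up with io_uring_queue_init):

    #include <liburing.h>

    /* Arm a 250-microsecond timeout on an existing ring; it completes as a
       normal CQE when the time is up (count = 0 means "only on expiry"). */
    static void arm_timeout(struct io_uring *ring)
    {
        static struct __kernel_timespec ts = { .tv_sec = 0, .tv_nsec = 250 * 1000 };
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

        io_uring_prep_timeout(sqe, &ts, 0, 0);
        io_uring_submit(ring);
    }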


Solaris didn't port kqueue either. We're doomed to reinvent the wheel.


And Bryan Cantrill has expressed quite a bit of remorse about that.



