The modern MUSH forks do generally support telnet, but yes -- as a 29-year-old who's been pathologically obsessed with "MUD archeology" off and on, I'll confirm -- historically, most MUDs did not do any sort of Telnet negotiation.
Further, most older clients did not anticipate any kind of Telnet negotiation from the server, and will print garbage to the screen when connecting to modern MUSHes that do. (I've tested tinywar, vt, and that one VMS client...)
MUCKs never, to my knowledge, implemented telnet, though. They barely support ANSI escapes, never mind Telnet. :-)
Funny, I've always found it interesting how "on point" it was...
Granted, yeah, we never (or haven't yet) really transitioned to running "full legacy software" inside the browser, or at least it's not commonplace. That said, I've seen people compile Wine to wasm, Linux to wasm, and lots of other things to wasm, and run them in a browser. Many of the "fake" demos could be done for real now.
The one aspect that remains thoroughly farcical is an equivalent of Wine for OS X/Cocoa good enough to run a web browser. :-(
[edit] And asm.js kind of died on the vine. Not sure how to feel about that one. Wasm could be described as an evolution of the same idea, but in a lot of ways it's something entirely different.
Yes! I don't use my O2 a lot (I think the PSU is flaky, and I'm not super interested in IRIX), but I'm aware of at least https://forums.sgi.sh/index.php, among other similar sites, full of people porting/developing software for IRIX. It's a pretty active community for a 90s workstation platform, the most active one I'm aware of!
It's worth pointing out that Nim is going to cache all of the compilation up to the linking step. If you want to include the full compilation time, you'd need to pass --forceBuild to the Nim compiler.
(Since a lot of the stuff you'd use this for doesn't change often, and the caching is what makes "nim r" run very quickly, I don't think this invalidates the "point" -- but still.)
There's also the Nim interpreter built into the compiler, "NimScript", which can be invoked like:
#!/usr/bin/env -S nim e --hints:off
echo "Hello from Nim!"
The cool thing is that, without --forceBuild, Nim + TCC (as a linker) has a faster startup time than NimScript. But if you include compile time, NimScript wins.
Yep, I always forget about '--forceBuild'.
You can see in the script above that the nimcache directory was overridden to tmpfs for the measurement, though. Caching will be helpful in real use cases, of course.
NimScript is cool but very limited, since it can't use the parts of the stdlib that depend on C. Hope this will change with Nimony/Nim 3.
Yes, Chromium has "native" sandboxing on all those platforms, Windows [0] Linux [1] and MacOS [2].
Chromium uses both seccomp filtering as well as user namespaces (the technology that Docker/Podman use).
The Windows and MacOS sandboxing strategies are more "interesting" because I've seen very few (open source) programs that use those APIs as extensively as Chromium. On Windows, it makes use of AppContainer [3] (among other things), while on MacOS it uses the sparsely documented sandbox API [4], which I think was based on code from TrustedBSD?
I haven't done LFS since my tweens (and I'm almost 30 now), but I remember that, past building and installing the init binary, the sysvinit portion amounted to downloading and extracting a bunch of shell scripts into the target directory and following some instructions for creating the right symlinks.
Obviously, you can go and check out the init scripts (or any other individual part of LFS) as closely as you wish, and they're easier to "see" than systemd. But I strongly dispute both that sysvinit is "Linux" (in that it constitutes a critical part of "understanding Linux") and that it's really all that understandable.
But setting all of that aside, and even setting aside the practical reasons given (maintenance burden): when the majority of "Linux" in the wild is based on systemd, anyone wanting to do "Linux From Scratch" and get an idea of how an OS like Debian or Fedora works would want to build and install systemd from source.
For me, Linux From Scratch is not about compiling Linux from scratch, but about building up an entire Linux distro from the ground up, understanding how every piece fits together.
Doing it via systemd is like drawing a big black box, writing LINUX on the side, and calling it a day.
You are necessarily working with very big blocks when you're doing this, anyway. You don't do a deep dive on a whole bunch of other topics in LFS, because otherwise the scope would become too big.
That's what I was trying to get at -- yes, you can say that sysvinit is easier to understand than systemd, and less of a black box. But, even still, a "real Linux distribution" is full of these black boxes, especially the closer you get to being able to run "real applications". I'd argue that once you get into full desktop seat management, you add so much complexity on top of sysvinit that the difference narrows...
Which is why I asked "learn about what stuff". I think if the goal is to learn about "Unix" or OS design/ideas, you're better off with a leaner, "pedagogical" OS, like xv6. If the goal is to piece together an OS and really understand each piece, I don't think you really want sysvinit. You want something closer to an /etc/rc.local that just kicks off a few daemons and hopes for the best.
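A hypothetical sketch of that whole "init", assuming a made-up set of daemon paths -- the entire boot logic fits on a screen:

```shell
#!/bin/sh
# Hypothetical minimal rc.local-style init: no dependency graph, no
# supervision, no socket activation -- just run things and hope.
mount -a                       # mount everything in /etc/fstab
hostname lfs
/usr/sbin/syslogd              # kick off a few daemons...
/usr/sbin/sshd
exec /sbin/getty 38400 tty1    # ...then hand tty1 a login prompt
```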
You can argue that sysvinit makes a better "compromise" between usability and clarity, and I'd entertain that idea, but then I think dinit is far easier to understand than sysvinit. And of course, at that point you can shave yaks till you fill the bike shed with wool.
Realistically, as much as people may hate it, if you have to pick a single init to standardize on for clarity and "building an entire Linux distro from the ground up, understanding how every piece fits together", systemd is the most rational choice. It's the most representative of the ecosystem, and requires the least "extra layers" to make the "desktop layer" work.
Superficially similar, but from a look at the README, it has no polymorphism or generics, which hugely differentiates it from Nim, which leans very, very heavily on templates/generics throughout the entire language/standard library.
Granted, that also means Tomo probably has better incremental compilation, and would likely be more amenable to creating shared libraries. You can do that with Nim, too, but the external interface (generally) has to be "C" semantics (similar to most other "high level" languages).
> would likely be more amenable to creating shared libraries.
Why's that? There's a gc/no-gc barrier to cross, and also being able to use other features in an implementation doesn't make creating a C interface harder.
I was thinking more along the lines of compiling Tomo code, then being able to link against that pre-compiled binary from other Tomo code. Basically being able to substitute a source file module for a binary module.
I don't know if Tomo supports anything like that, but not having generics would make it easier/simpler to implement (e.g. no need to mangle symbol names). Note "easier/simpler": Nim can also "precompile Nim libraries for Nim code", but the resulting libraries are brittle (API-wise), and only really useful for specific use cases like creating binary Nim plugins to import into another Nim program, where you'll want to split out the standard runtime library [0] and share it across multiple images. It's not useful for e.g. precompiling the standard library to save time.
I know Nim has been working on incremental compilation, too, but I don't know what the state of that is. I think it might have been punted to the rewritten compiler/runtime "Nim 3".
If you count 70s and 80s "Unixes" then on its face it is a bit strange, but a lot of 70s and 80s "Unixes" don't exactly resemble what we think of as "Unix" anyway.
If instead you think of SysVR4 as the first "Unix", then Amiga Unix was indeed a very early Unix. I think this is a useful distinction, because de facto most of the software interfaces we associate with "Unix" are just System V (especially R4) in a trench coat. Note that POSIX and SysVR4 were released the same year (1988); they're technically unaffiliated efforts, but they represent a consolidation of a bunch of competing ideas into a ... tacit compromise.
Or, being more practical, SysVR4 is the absolute oldest "Unix" you're going to have a good chance of building modern (1990-2020s) software made "for unix" on. You can get a surprising amount of mileage out of a SysVR4 distribution -- but go any older, and you'll be in for a lot of "fun"!
> but a lot of 70s and 80s "Unixes" don't exactly resemble what we think of as "Unix" anyway
And that's exactly why the term "early Unix" suggests "pre-SVR4". Once a platform has matured, it's not "early" anymore.
The whole thing is weirdly written. For example:
> Like many early Unix variants, Amiga Unix never became wildly popular
Except SVR4 was popular.
So either they're saying Amix was early Unix -- in which case the GP is correct that it wasn't -- or they're saying that SVR4 was unpopular, which is also untrue.
I don't think the blurb is intending to suggest either of these points though. I'm sure people maintaining a fan site for Amix would understand their history. So I just think they've written the blurb very poorly. Poor enough that the default conclusion people are likely to draw is a technically incorrect one.
My point is that if they’re talking specifically about SVR4, then it was popular. And if they’re not talking specifically about SVR4, then it’s not “early Unix”.
As I said, I’m not trying to claim that they’re “wrong”. Just that the whole thing is phrased poorly because it’s really not clear what their context is. And that’s easily demonstrated by the fact that we’re arguing over said context here.
> a lot of 70s and 80s "Unixes" don't exactly resemble what we think of as "Unix" anyway.
As someone who was a UNIX developer (both kernel and userland) working for a UNIX support shop (Interactive Systems Corporation, later bought by Kodak and then Sun) from the mid '70s, starting with UNIX 6, through the late '80s -- and who once gave a Usenix talk called "Everything you wanted to know about System V but were afraid to ask", where I held up the white System III manual and the black System V manual and joked that they had gone to the dark side -- I find this comment utterly nonsensical. I can look through today's BSD man pages, or its code, and it's very familiar.
> If instead you think of SysVR4 as the first "Unix"
Yes, but SunOS 4 was extremely popular (enough that a lot of software had explicit support for running on it) and implemented a decent amount of System V and POSIX compatibility!
Probably most notably, it implemented SysV shared memory (sys/shm.h) plus messages/semaphores, STREAMS support, SysV termio, SysV libcurses, and probably others I'm not aware of.
I'm not sure how much any of these helped run software, but it bears pointing out anyway.
Very true. SunOS 4.x is still my favorite '90s Unix. I had a Sun 3 box for a while, then got a low-end SPARCstation at home! Eventually in the late '90s I gave in and installed Solaris. 2.4 and earlier was kinda rough, but it was pretty decent by 2.5.
I can't speak to this case specifically, but it's worth pointing out that Windows itself applies many patches for specific applications, so it follows that Wine could be obliged to mimic that behavior in cases where the application relies on it.
This isn't really my arena, but I did happen to recently compare the implementation of ReactOS's RTL (Run Time Library) path routines [0] with Wine's implementation [1].
ReactOS covers a lot more of the Windows API than Wine does (3x the line count, and it defines a lot more routines, like 'RtlDoesFileExists_UstrEx'). Now, this is not supposed to be a public API and should only be used by Windows internally, as I understand it.
But it is an example of where ReactOS covers a lot more API than Wine does or probably ever will, by design. To whom (if anyone) this matters, I'm not sure.
That's an interesting data point. I wonder if there is a hard technical reason why that logic could not be added to WINE, or if the WINE maintainers made a decision not to implement similar functionality.
There is not a hard technical reason, just different goals. WINE is a compatibility layer to run Windows apps, and thus most improvements end up fixing an issue with a particular Windows application. It turns out that most Windows applications are somewhat well-behaved and restrict themselves to calling public win32 APIs and public DLL functions, so implementing 100% coverage of internal APIs wouldn't accomplish much beyond exposing the project to accusations of copyright infringement.
IIRC, there is also US court precedent (maybe Sony v. Connectix?) that protects the practice of reverse-engineering external hardware/software systems that programs use in order to facilitate compatibility. WINE risks losing this protection if they stray outside of APIs known to be used (or are otherwise required) by applications.
There's also another partial Win32 reimplementation, retrowin32, with the different goal of being a Windows emulator for the web rather than for Linux or as an alternate OS: https://evmar.github.io/retrowin32/ It thus has an even sparser path/fileapi.h implementation [2] than WINE and ReactOS. Written in Rust.