Solaris is used in life-critical environments, for example medical systems or the energy sector: anything that is absolutely life-critical and must run for thousands of days without rebooting. It is also really nice for running software because of its advanced features, like the runtime linker, zones containerization, and fault management architecture features such as SMF. It's awesome software on awesome hardware.
The same way it has worked for the past 30+ years: the firmware boots from the first designated boot device; if that device isn't bootable, it moves on to the next one. The next one boots because it is part of a mirror and has all the data necessary to do so, and in the rare case where a device is "half bootable", one simply intervenes and selects the next good bootable device manually.
Ansible is like a fifth wheel on a car, since all of the configuration can be done inside of OS packages, and orchestration can be done via SSH (which is exactly how Ansible does it). Put those two together and Ansible is a solution to a non-existent problem.
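To make that concrete, here is a minimal sketch (mine, not anything the parent describes) of "orchestration over SSH" using nothing but the Python standard library; the hostnames and the package command are hypothetical placeholders:

    #!/usr/bin/env python3
    # Minimal "orchestration over SSH" sketch: run one command on a list of hosts.
    # The hostnames and the package command are hypothetical placeholders.
    import subprocess

    HOSTS = ["app01.example.com", "app02.example.com"]
    COMMAND = "pkg update"   # the remote OS package tooling does the real work

    for host in HOSTS:
        result = subprocess.run(["ssh", host, COMMAND], capture_output=True, text=True)
        print(f"{host}: exit {result.returncode}")

ssh(1) handles the transport and authentication; the native package tooling on the far end does the actual configuration.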
Not by hand! With configuration packages in OS-native format!
1. When a system comes in, it is scanned and entered into the asset management database, which then triggers a process that enters the scanned MAC address into DHCP by generating a new DHCP configuration package (a rough sketch of this step follows the list).
2. The previous version of the DHCP configuration package is upgraded to the new one.
3. The system is hooked up to the network and powered on.
4. The firmware is permanently reconfigured to boot in this order:
   1. HD0
   2. HD1
   3. network
5. Since HD0 and HD1 are not bootable, the system boots from the network, whereby the infrastructure automatically provisions it with the standard runtime platform, which consists solely of packages in OS-native format, including configuration packages that configure the things all servers have in common.
6. As part of the automatic installation, the server is also installed with additional configuration packages based on which profile it is in, turning it into a specific application server.
7. The server comes up after automatic installation and reports back to the infrastructure that it is ready to serve.
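To make steps 1 and 2 a bit more concrete, here is a rough sketch (my illustration, not the actual tooling described above) of the piece that turns a scanned MAC address into content for a new DHCP configuration package; the ISC dhcpd syntax, paths, and names are assumptions:

    #!/usr/bin/env python3
    # Sketch: render an ISC dhcpd host entry for a newly scanned system and
    # drop it into the staging tree of a versioned OS configuration package.
    # All names, paths, and the include-directory layout are hypothetical.
    from pathlib import Path

    HOST_ENTRY = """host {name} {{
        hardware ethernet {mac};
        filename "pxeboot/install";
    }}
    """

    def add_host(name: str, mac: str, staging: str = "pkg-staging/etc/dhcpd.d") -> None:
        """Write one host stanza into the package staging area."""
        d = Path(staging)
        d.mkdir(parents=True, exist_ok=True)
        (d / f"{name}.conf").write_text(HOST_ENTRY.format(name=name, mac=mac))

    if __name__ == "__main__":
        # The asset management database would hand over a hostname and the scanned MAC.
        add_host("app042", "00:11:22:33:44:55")

The resulting tree would then be wrapped into the next version of the configuration package and rolled out like any other package upgrade.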
Actually “Hacker” “News” has been a point of ridicule for quite a while now outside of the bubble here, so arguing that it’s an opinion forge plays right into its notoriety.
The 286 was a very shitty system: it had only a beeper, practically no DMA, and no graphics hardware acceleration of any kind (not even hardware scrolling; sprites were science fiction). Almost everything in that shitty PC bucket was driven by the processor, and that processor was dog slow.
There was that time I used my 12 MHz 286 to develop software for a Z-80 based CP/M machine and I could emulate the Z-80 faster than a real Z-80.
Graphics and sound really did suck compared to machines specialized for that. The best you could do for games at the time were tile systems like the Nintendo NES and the TI-99/4A, which were not that different from text mode except that the ‘font’ was customizable and the tiles were square. The Atari 400/800 normalized changing your video mode on every scan line, but the C-64 had the best system balance in terms of quality output on an NTSC television.
The EGA introduced tricks that would let you make the video card copy multiple planes of data at the same time; you could also do some pretty neat tricks with the palette to make layers that blended like sprites. I see Commander Keen as a watershed because it had scrolling as good as Super Mario Brothers and ran on the EGA. Of course they figured out Mode X for the VGA and we got Doom a few years later.
It's interesting that there really wasn't much of a "game video" option for early PCs. CGA was a disaster, and the PC platform never really got sprites or smooth scrolling until it wasn't relevant anymore. The PCjr/Tandy graphics system offered a few more colour options, but that's about it.
Obviously, you couldn't buy a hundred thousand GTIA or VIC-II chips and solder them into ISA cards, but there were some non-exclusive options-- the graphics chips used on MSX hardware, for example.
There were definitely low-end PCs that would have been able to tread into the "home computer" space occupied by the C64 and Atari 800, by price and design (there were a few 8088-class "PC in the keyboard" designs-- some Tandy 1000s, and a Vtech/Laser one come to mind), but they were hardly the ones kids were gonna beg their parents for with those mediocre graphics.
New Mexico Tech had Sun workstations based on SPARC and in 1993 the talk was of Linux because a 386 machine would beat a Sun workstation easily.
Other 32-bit machines like the high-end Amigas and the Atari Falcon 030 could not keep up. Macs were expensive but sold well because of the refined GUI; when Windows 95 came out with an adequate GUI, Apple went into crisis.
I don't understand how anyone could leave the job half-finished and compromise in this way: in these scenarios, almost all the legwork needed to also build a compiler has already been done, for those times when the interpreted code runs just right and is ready for production. Why are such interpreters never finished off into full compilers?
This is a research paper, so I think the idea is to show it can be done and would be useful. Hopefully the teams actually building WebAssembly compilers will find it of interest.
Indeed, I am already building the next tier in Wizard, a baseline JIT, and factoring out the codegen to be shared by both interpreter and JIT. I'm in contact with several browser engine teams and the design might get adopted into them, as their priorities allow.
Thank you for writing this proof-of-concept paper. Do you think CMU is a school that enhanced your ability to do projects like this? Did you already know how to do this before you went to CMU, or did you learn the crucial stuff to accomplish it there?
I studied at Rice University under Ken Kennedy and it was a great place to learn about compilation techniques and processes. Corky Cartwright was also instrumental in my learning about abstract datatypes, especially the Scott-Strachey formal view.
I am just starting out at CMU. Most of the work was done before I joined in January. I am hoping to teach students how to do this, beginning with a course in the fall on Virtual Machines and Managed Runtimes.
Except that a JIT is still not a compiler in the conventional sense. I am specifically referring to a program capable of generating and permanently storing machine code on stable storage.
This was a good article, and more such articles with examples are needed: understanding how pointer aliasing works can yield significant performance gains, since the compiler can be directed to make optimizations it would otherwise not make.
Maybe before starting to teach others, you should teach yourself how to cleanly package your software into OS packages, instead of peddling around MS-DOS style shell scripts with a .sh extension. Didn't anybody teach you that UNIX-like operating systems purposely do not use extensions, in order to abstract the implementation language of the executable from the user?
How the hell are you going to properly teach others, if you have such knowledge gaps yourselves?
It's blind leading the blind again; boy does this make me mad.
I (25 years working with GNU/Linux) usually name my bash scripts `something.sh`.
I think it's debatable whether it's always good or bad, and when specifically it can be a good idea to leave out the `.sh`.
Also, it doesn't look like you looked into the game and came away with some conclusions. It looks more like you had a superficial look, had a thought, and rushed to comment here about what you believe is a deadly sin. I personally never enjoy this kind of comment, as there's not much thought behind it.
I tried it (before recommending to a teammate) and I also noticed some things that could be improved. But it never crossed my mind to post a judgement on the project after trying it out for just five minutes.
> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.
It does not benefit me: you mistake niceness for kindness. There is little I can learn from you and your guidelines when you yourself cannot tell which is which.
I'm glad you're mad, but based on this comment and the rest of your comments in this thread, it seems that's just your normal operating state. Your argument seems to be that unless someone uses a computer exactly as you do, and has your exact selection of knowledge in their brain, they're incapable of sharing any knowledge at all. Furthermore, few Unix/Linux users operate in a vacuum: you'll often be dealing with files that come from other OSes and carry extensions, and extensions are a useful, fast way of knowing what kind of data a file contains. You're a mega idiot who deserves to be constantly seething.
Thank goodness that all the many people I've learned from over my life didn't have to meet some arbitrary gold standard of knowledge before they were allowed to teach me! Hardly anyone could have ever taught me anything by this standard.
Similarly, thank goodness I've had the opportunity to expand and solidify my own knowledge by teaching people things I knew, even if I wasn't sure of all the details. I can't count the number of times I've had to stop mid-explanation and say "wait a second, that thing I just said doesn't really make sense. Let's take a deeper look and figure it out together."
I'm confused as to why you're choosing this hill to die on.
I also use ".sh" at the end of bash script files for usability: at a glance it's obvious how to run "build.sh" and exactly what it does, without the person building it having to read a readme or an email you authoritatively sent months ago regarding the optional build processes.
You are not supposed to "identify" anything on UNIX-like operating systems; that is specifically what the file(1) command is for, especially since the entire concept of UNIX is that everything is a stream of bytes.
Attempting to manually manage files in this way defeats the purpose of the OS abstracting it away for the user; and the users of your executable should not have to care what your executable is written in because you grew up on a PC-bucket whose operating system stems from CP/M -> MS-DOS!
As someone who did grow up on Windows and still uses it regularly alongside Linux, I want to respond to this comment just to play the Devil's advocate (i.e. infuriate you).
> especially so since the entire concept of UNIX is that everything is a stream of bytes.
I consider this an archaic, anachronistic, ancient, outdated, primitive (they all mean the same thing; I just used a thesaurus to really drive home my point) file model. While it made sense for the limited computers of the early 1970s, it is extremely hobbling today and the fact that no one really complains about it is... quite astounding. If every file is merely a bag of bytes, then every native program shall have its own file-parsing/byte-parsing routine. What a waste of effort, writing and rewriting parsers over and over again.
The fact that one has to write a shell script that is interpreted, and itself calls not one, but three other binaries (cat, grep, awk) with arcane, not-easily-remembered flags just to extract out certain words in the last several lines of a file is... ridiculous. That you have to call 'file' instead of directly querying the OS or shell for file attributes is farcical. Consider PowerShell or Python as alternatives to shell scripting. In the former, the entire .NET library is available; in the latter, the default libraries may be imported as one sees fit, and additional libraries are available online.
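For what it's worth, the Python alternative really is short; this sketch (the file name, line count, and pattern are made up for illustration) pulls matching words out of the last few lines of a file without cat, grep, or awk:

    #!/usr/bin/env python3
    # Sketch: extract certain words from the last N lines of a file,
    # the kind of job described above as a cat | grep | awk pipeline.
    # The file name, N, and the pattern are illustrative assumptions.
    import re
    from pathlib import Path

    N = 10
    pattern = re.compile(r"error:\s*(\w+)")

    last_lines = Path("service.log").read_text().splitlines()[-N:]
    words = [m.group(1) for line in last_lines for m in pattern.finditer(line)]
    print(words)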
> defeats the purpose of the OS abstracting it away for the user
The UNIX philosophy does a poor job of 'abstracting it away from the user'. For well-abstracted OSes, see any smartphone today (especially iPhones).
Furthermore, not every UNIX/Linux user interacts with their computer solely over the command line; I use KDE Plasma, for instance. In general, when I see a file on Linux with no extension, I expect it to be a binary; I am surprised when it is, in fact, a shell script.
This is a fantastic response, honestly. I'll add that your last point, about how different users interact with their systems differently, is especially relevant for the longevity of open operating systems. General users coming over to Linux, who do not have extensive knowledge of computers beyond Windows, will primarily be using graphical interfaces on their desktop, and making that process confusing by not accommodating them will quickly make them give up on the whole thing. If we wish to keep the software alive with a user base, then it's actually extremely important to allow and support things like this that are more intuitive.
I'm not sure that I can add much beyond what's already been said in reply, but I do want to say:
If something this (arguably) trivial does, truly, make you mad, I would encourage you to reflect on _why_ and perhaps talk it over with a friend or colleague.
This works on the same principle as the video backup system (VBS) we used in the 1980s and early 1990s on our Commodore Amigas: if I remember correctly, one three-hour PAL/SECAM VHS tape had a capacity of 130 MB. The entire hardware fit into a DB-25 parallel port connector and was easily built yourself with a soldering iron and a few cheap parts.
SGI IRIX also had something conceptually similar to this "YouTubeDrive": HFS, the hierarchical filesystem, whose storage was backed by tape rather than disk but which looked to the OS like just another regular filesystem. Tools like ls(1), cp(1), and rm(1) saw no difference, though the latency was of course high.
That's how digital audio was originally recorded to tape back in the 1970s and 80s: encode the data into a broadcast video signal and record it using a VCR.
In the age of $5000 10 MB hard drives, this was the only sensible way to work with the 600+ MB of data needed to master a compact disc.
That's also where the ubiquitous 44.1 kHz sample rate comes from. It was the fastest data rate that could be reliably encoded into both NTSC and PAL broadcast signals. (For NTSC: 3 samples per scan line, 245 usable scan lines per field, 60 fields per second = 44,100 samples per second.)
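Spelling out that arithmetic (the NTSC figures are the parent's; the PAL line count is the one commonly quoted alongside them):

    # 44.1 kHz falls out of the video timing on both standards:
    print(3 * 245 * 60)   # NTSC: 3 samples/line * 245 usable lines/field * 60 fields/s = 44100
    print(3 * 294 * 50)   # PAL:  3 samples/line * 294 usable lines/field * 50 fields/s = 44100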
130 MB for the whole tape is not a lot; spread over three hours it works out to roughly floppy-disk throughput, which is probably not a coincidence. However, the fact that basic soldering was all it took implies that the rest of the system acts like a big software-defined DAC/ADC.
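A quick sanity check on that comparison (my arithmetic, using the 130 MB and three-hour figures from above and the ~11 KB/s floppy rate mentioned below):

    # 130 MB spread over a three-hour tape versus the Amiga floppy's ~11 KB/s
    tape_rate_kb_s = 130 * 1024 / (3 * 3600)
    print(round(tape_rate_kb_s, 1))   # ~12.3 KB/s, i.e. roughly floppy speed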
Dedicated controllers were absolutely out of the question because nobody could afford them, which is why Amigas were so popular: a fully multitasking, multimedia computer for 450 DM. That's 225 EUR! Somebody that cost-sensitive won't even consider a dedicated controller; back then it wasn't like it is today.
This was at a time when 3.5" floppy disks were expensive (and hard to come by), and hard drives were between 40 and 60 MB, so 130 MB was quite practical. The floppy drive in the Amiga read and wrote at 11 KB/s.
And yes, this was a DAC and an ADC in software, with added Reed-Solomon error correction encoding and CRC32. The goal was to be economical. The end price was everything; it had to be as cheap as possible.