> For a long time the X Window System had a reputation for being difficult to configure. In retrospect, I’m not 100% sure why it earned this reputation, because the configuration file format, which is plain text, has remained essentially the same since I started using Linux in the mid-1990s.
It's because X's config files were asking you questions that there was no good way of knowing the answers to other than trial-and-error. (After all, if there was some OS API already available at the time to fetch an objectively-correct answer, the X server would just use that API, and not ask you the question!)
An example of what I personally remember:
I had a PS/2 mouse with three buttons and a two-axis scroll wheel ("scroll nub"). How do I make this mouse work under X? Well, X has to be told what each signal the mouse can send corresponds to. And there's no way to "just check what happens", because any mouse calibration program relies on the X server to talk to the mouse driver — there wasn't yet any raw input-events API separate from X. So under the default X configuration, which assumes a two-button mouse, none of the other buttons get mapped to an X input event, and the calibration program won't report anything when you try the other parts of the mouse.
So instead, you have to make a random guess; start X; see if the mouse works; figure out by the particular way it's wrong what you should be telling X instead; quit X; edit the config file; restart X; ...etc.
(And now imagine this same workflow, but instead of something "forgiving" like your mouse not working, it's your display; and if you set a resolution + bit-depth + refresh rate that add up to more VRAM than you have, X just locks up the computer so hard that you can't switch back to a text console and have to reboot the whole machine.)
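For concreteness, the mouse stanza you were guessing at looked something like this (a sketch of the XFree86-era syntax; the protocol name, device node, and button count are exactly the things you had to guess, so treat these values as examples):

Section "InputDevice"
    Identifier "WheelMouse"
    Driver     "mouse"
    # "IMPS/2" is the IntelliMouse PS/2 protocol; guessing plain "PS/2" here
    # was exactly the mistake that left the wheel and third button dead.
    Option     "Protocol"     "IMPS/2"
    Option     "Device"       "/dev/psaux"
    # Map the wheel's up/down events to buttons 4 and 5.
    Option     "ZAxisMapping" "4 5"
    Option     "Buttons"      "5"
EndSection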
Yup, things are so much better now that they just work. Except when they don't, because now it's harder to do anything about it.
I've lost count of the number of Linux machines I've seen that won't offer the correct resolution for a particular monitor (typically locked to 1024x768 on a widescreen monitor).
I don't know whether the problem's with Linux, Xorg, crappy BIOSes or crappy monitors - but even now I occasionally resort to an xorg.conf file to solve such issues.
Do you work with a lot of KVMs? Directly plugged monitors usually just work thanks to EDID info, but cheap KVMs frequently block that signal and cause problems. It's rare for a monitor plugged directly into the computer to have problems these days, even on Linux.
No KVMs involved - but three of the machines I have in mind (not identical, but all running the same version of Linux Mint) have two monitors attached, one of which is OK and the other isn't. (Not mine - so I haven't put any time into trying to solve it yet.)
Another machine - which is mine - used to have a 19" VGA monitor attached which worked happily at 1280x1024 for months, then one day something got updated and it wouldn't do anything beyond 1024x768 after that until I resorted to an xorg.conf file.
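For reference, the sort of minimal xorg.conf that un-sticks a setup like that looks roughly like the following (identifiers, sync ranges and the mode are illustrative — take the real limits from the monitor's spec sheet):

Section "Monitor"
    Identifier  "VGA19"
    # Without usable EDID, Xorg has to be told what the panel can do.
    HorizSync   30.0 - 81.0
    VertRefresh 56.0 - 75.0
    # Standard VESA timing for 1280x1024@60 (108 MHz pixel clock).
    Modeline "1280x1024" 108.00 1280 1328 1440 1688 1024 1025 1028 1066 +hsync +vsync
EndSection

Section "Screen"
    Identifier   "Screen0"
    Monitor      "VGA19"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1280x1024"
    EndSubSection
EndSection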
Honestly, it is pretty easy to configure the X Server these days; very little manual intervention has been required since the mid-2000s if you accept the defaults, which are largely correct and good. I am mindful about what families of hardware I buy, though that's not too restrictive. The only piece of hardware of mine that needs manual configuration is the Logitech TrackMan Marble, but that's only because I operate the mouse with a right-handed layout with my left hand. Interestingly, the TrackMan Marble does not work with its full feature set in Wayland (core example: the buttons that enable horizontal/vertical panning of a wide or tall document), and this is not exotic hardware. How configuration is being handled in the X to Wayland conversion is a mystery to me. Some of it is happening in libinput (I think), but other parts aren't. This is one of the reasons I am deferring the Wayland migration for as long as I can.
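For the curious, the usual xorg.conf snippet for the Marble is only a few lines — this is a sketch based on commonly shared configs; the product string and button numbers are assumptions, so verify yours with xinput list and xev:

Section "InputClass"
    Identifier   "Marble Mouse"
    MatchProduct "Logitech USB Trackball"
    Driver       "libinput"
    # Hold the small button (8 here) and roll the ball to pan both axes.
    Option       "ScrollMethod"    "button"
    Option       "ScrollButton"    "8"
    Option       "MiddleEmulation" "on"
    # Swap the primary buttons for left-handed use.
    Option       "ButtonMapping"   "3 2 1 4 5 6 7 8 9"
EndSection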
Configuring the software stack that runs on top of X (think: what the ~/.xsession file manages) is where I've invested most of my effort, and that's purely about aesthetics and behavior: DPI, font rendering settings, window manager, etc. And this is pretty easy to do these days, because most of these things can be prototyped and altered in an existing X session (keeping a tight edit-run loop).
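A minimal ~/.xsession along those lines might read like this (the specific tools — xrdb, xsetroot, setxkbmap, i3 — are just stand-ins for whatever you actually run):

#!/bin/sh
# Load DPI and font-rendering settings (Xft.dpi, Xft.hinting, ...) from ~/.Xresources.
xrdb -merge "$HOME/.Xresources"

# Aesthetic/behavioral knobs that can all be tweaked live in a running session.
xsetroot -solid grey20
setxkbmap -option ctrl:nocaps

# Hand the session over to the window manager of choice.
exec i3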
And both of these situations can be alleviated by storing critical configuration files (e.g., ~/.xsession or the X Server configuration) under version control. There's no reason to keep redoing the same configuration work for the same hardware these days when version control and storage are cheap.
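Even the lowest-effort version of that pays off; something like (paths purely illustrative):

# One-time setup: snapshot the X-related bits of $HOME and /etc/X11 into a plain git repo.
git init ~/dotfiles
cp ~/.xsession ~/.Xresources ~/dotfiles/
sudo cp /etc/X11/xorg.conf.d/*.conf ~/dotfiles/ 2>/dev/null
git -C ~/dotfiles add -A && git -C ~/dotfiles commit -m "snapshot working X configuration"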
Oh, for sure. I've been using X in various capacities for ~3 decades.
I remember how it was. I'm impressed with how it is.
I recently switched back to X as a primary desktop after a rather long hiatus of doing [mostly!] other things. There was some initial driver discourse (standard nVidia vs. OSS necessary nonsense), but it wasn't really so bad once I sorted out what I needed and most "regular" Linux users can skip by a lot of this by default.
So far, I've done zero manual configuration of X itself outside of using XFCE4's GUI tools to arrange the three monitors in front of me in the right order -- and I don't presently see any reason to change anything else.
It's been very pleasant, all said, even though I got here on Medium-Hard Mode with a rather barebones base install of Void on an existing ZFS pool for root.
X really was one of the easier parts of the whole operation.
(I have no interest in Wayland. It offers no clear advantage to me as a user that I can identify; even the games I like to play run splendidly in X. I've also always adored the concept of remotely displaying GUI applications. It's convenient -- I ran remote X apps for years immediately prior to this recent switch, and it worked well. Remote X apps have saved my bacon a few times by allowing me to quickly get a thing done in a familiar way instead of learning how to do it using something else entirely and maybe stuffing it up in some unforeseen fashion.)
Thanks. Now I'll have nightmares of the time I spent trying to help a friend get the 32" TV they won in a contest (back when an LCD of that size was still both unusual and expensive) to work at proper native resolution in Windows.
Windows really wanted it to be 1080p, and the TV supported this input, but it was a blurry mess.
It was advertised as 720p, and the TV supported this input as well, but that was also a blurry mess.
It actually had a physical vertical resolution of something like 760 lines, which was not one of the modes it offered up over DDC for whatever was driving it to use.
Fun times.
(I did eventually get 1:1 pixel mapping, but IIRC I had to give him a different video card for this to happen.)
What Linux solves through configuration, Windows solves by having everything you buy come with its own model-specific drivers that burn the needed configuration into the .INF + .DLL.
Windows "not liking your video card" is presumably because you either aren't using the right driver, or because Windows doesn't like your driver — i.e. the monitor is old enough that there's no version of that driver for current Windows.
PnP will always work for a simple+direct+modern (GPU → DisplayPort or HDMI → display) display path; but there are a lot of people who for whatever reason still need to use VGA.
Despite EDID being invented during the VGA era, it wasn't invented at the beginning of it — so older VGA displays don't support EDID, and therefore don't support reporting their valid modes. (And this is relevant not just for CRTs, but LCDs too — and especially projectors for some reason. Some projectors released as recently as 2010 were VGA-only + non-EDID-reporting!)
Remember Windows saying "you proposed a new monitor resolution; we're gonna try it; if you don't see anything, wait 15 seconds and we'll undo the change"? That's because of expensive mid-VGA-era EDID-less VGA monitors. These were advertised as supporting all sorts of non-base-VGA-spec modes that people wanted to use, but came with nary a driver to tell Windows what those modes were — so in that era Windows was just offering people essentially the same experience as editing xorg.conf to add Modeline entries, just through a UI. And obviously, if even Windows had no proprietary back-channel to figure out what the valid modes were, then Linux didn't have a penguin's chance in hell of deducing them without your manual intervention.
---
But also, people are often trying to "upcycle" old computers into Linux systems (embedded systems like digital-signage appliances being especially popular for this) — and these systems often only come with VGA outputs, and video controllers that don't capture EDID info for the OS even when they do receive it.
Hook an early-2000s "little guy" (https://www.youtube.com/watch?v=AHukN0JsMpo) to any display you like over VGA, no matter how modern — and it still won't know what it's talking to, and will need those modeset lines to be able to send anything other than one of the baseline-VGA-spec modes (usually 800x600@60Hz.)
And this is, of course, still true if you try to use one of these devices with a modern HDMI display using an adapter.
(You might think to get away from this by using a "USB video adapter" and creating an entirely-new PnP-compatible video path through that... but these devices are usually old enough that they only support USB 1.1. But hey, maybe you'll luck out and they have a Firewire port, and you could in theory convince Linux to use a Firewire-to-DVI adapter as an output for a display rather than an input from a camcorder!)
---
Besides the persisting relevance of VGA, there's also:
• https://en.wikipedia.org/wiki/FPD-Link, which you might encounter if you're trying to get Linux running on a laptop from the 1990s, or maybe even a "palmtop" (i.e. the sort of thing that would originally have been running Windows CE);
• and the MIPI https://en.wikipedia.org/wiki/Display_Serial_Interface, which you might see if you're trying to do bring-up for a new Linux on a modern ARM SBC (or hacking on a system that embeds one — a certain popular portable game console, say.)
In both of these cases, no EDID-like info is sent over the wire, because these protocols are for devices where the system integrator ships the display as part of the system; and so said integrator is expected to know exactly what the specs of the display they're flex-cabling to the board are, and write them into a config file for the (proprietary firmware blob) driver themselves.
If you're rolling your own Linux for these systems, though, then you don't get a proprietary-firmware-blob driver to play with; the driver is generic, and that info has to go somewhere else. xorg.conf to the rescue!
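What that ends up looking like in practice is roughly this (a sketch — the output name, the timing numbers and the driver are assumptions; the real values come from the panel datasheet and the Xorg log):

Section "Monitor"
    Identifier "Panel"
    # Timings copied out of the panel datasheet, since nothing on the wire reports them.
    Modeline "800x480" 33.26 800 840 968 1056 480 490 492 525 -hsync -vsync
    Option   "PreferredMode" "800x480"
EndSection

Section "Device"
    Identifier "KMS"
    Driver     "modesetting"
    # Bind the Monitor section above to the panel's connector (name varies: DSI-1, LVDS-1, ...).
    Option     "Monitor-DSI-1" "Panel"
EndSection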
> on modern machines you almost never want to be editing the xorg.conf.
No one ever wanted to be editing xorg.conf! (xkcd 963 anyone?)
I did try the "modern" way when I hit this problem (which would have been in early 2022) - but even if it had worked (which it didn't) I don't think it would have persisted beyond a reboot?
I've never had this technique fail on me. I've done it a lot since I work with a variety of crappy KVMs and run into this problem often enough. You do need to make it a startup script, but that's pretty easy to do.
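The script amounts to a few xrandr calls; a sketch (the output name HDMI-1 is an assumption, and the timing numbers are just what cvt 1920 1080 60 prints — regenerate them for your own mode):

#!/bin/sh
# Define a mode the blocked EDID never advertised, attach it to the output, and switch to it.
xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
xrandr --addmode HDMI-1 "1920x1080_60.00"
xrandr --output HDMI-1 --mode "1920x1080_60.00"

Dropped into something like ~/.xprofile or your desktop's autostart, it survives reboots.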
If it didn't work it's possible you have deeper problems, like X falling back to some crappy software-only VESA VGA mode because the proper drivers for your card got corrupted. I've not seen this in many, many years, but it's possible. The last time it happened it was really obvious because the whole thing was crazy slow, like the mouse cursor was laggy and typing text into the terminal had over a second of delay. It wasn't subtle at all.
I seem to remember at the time I had trouble finding "current" instructions - I think the syntax changed somewhere along the line? - so there may well have been some crucial step missing.
I'm sure it hadn't fallen back to a VESA mode because I was using compositor features like zooming in on particular windows while screencasting.
> Directly plugged monitors usually just work thanks to EDID info
If you are dealing with consumer grade stuff that is sold a million times, sure. I stopped keeping track of how often some special-purpose/overpriced piece of display hardware had bad EDID information that made it glitch out.
> If you are dealing with consumer grade stuff that is sold a million times, sure.
It's not a sure thing. Out of a bunch of mass-produced monitors sharing the same model number and specs, some may still malfunction and not report the correct EDID.
KVMs do tend to cause issues, especially when it comes to power management and waking from sleep. However, just two weeks ago I had issues with Debian when connecting directly to a monitor. Booting from the live image with a Nvidia GPU resulted in 1024x768 garbage. Surely the installer will take care of that and the open drivers will be sufficient. Surely.
Nope. I had to reinstall and the option to add the proprietary repository was not as obvious nor as emphasized as it should have been. It almost seemed like an intentional snub at Nvidia. I bailed for other desktop-related issues and ran back home to another distro.
But maybe Debian doesn't want to focus on desktop users and that's fine - they can continue to rule their kingdom of hypervisor cities filled with docker containers. The world needs that too.
> It almost seemed like an intentional snub at Nvidia.
I don't think anybody can come up with better intentional snubs at Nvidia than the Nvidia itself.
When it comes to their older graphics hardware, their drivers just refuse to work with newer kernels. The GPU was capable of drawing windows and playing videos for a decade, but then, after a kernel update, it doesn't even show the 1024x768 "garbage". Just a black screen.
So effectively, buying Nvidia to use with Linux equals buying hardware with an expiration date.
I'm surprised the reverse-engineering folks that like jailbreaking game consoles and decompiling game ROMs, aren't all over the idea of decompiling old Nvidia drivers to modify + recompile them to speak to modern kernel APIs.
Once a card is old enough you might have to switch to the Nouveau driver instead, which is probably fine since using a card that old on a modern machine suggests you aren't that interested in games or VR.
There is no other choice but Nouveau. But it's not that fine because it means losing hardware video decoding.
> using a card that old on a modern machine suggests
It's an old laptop. Totally adequate for scrolling the web, watching movies and arguing about very important stuff on Hacker News. There is no way to change the GPU there or switch to an integrated Intel one.
> which is probably fine since using a card that old on a modern machine suggests you aren't that interested in games or VR
I think a more correct assumption is that you're likely interested in running games of at most the era the computer was purchased in. It'd be a shame if your 7-year-old GPU going out of support with a distro upgrade meant that you suddenly became unable to run the 7-year-old games you'd been happily playing up until that point.
I'm genuinely pleased to hear that it works for you.
Unfortunately that doesn't make the problem I'm having go away! (On two of the machines I have in mind the issue is with a second monitor - that may well have something to do with it.)
> For a long time the X Window System had a reputation for being difficult to configure.
Honestly I thought it was hard to configure because until I used Linux, my X terminals didn’t need to be configured at all!
I may be misremembering but I think my NCD terminal used bootp and probably a little tftp, then off it went. The hardest part was probably finding an Ethernet drop to plug it into.
You surfaced memories of childhood me installing RedHat 5.2, carefully selecting packages and X config options, getting it wrong, not knowing how to get back to that magical installation UI, and reinstalling the OS just to have another crack at it.
Eventually I figured out how to launch that xconfig utility and found some sane defaults, and was thrilled when I finally saw the stippling pattern or even a window manager.
The manual that came with your laptop of 25 years ago isn't going to tell you whether your touchpad is Alps or Synaptics, or which PS/2 protocol it imitates.
True. Though laptops were in some ways easier than desktops, since laptops tended to have the same set of hardware in each unit, so hopefully you only had to find an `XF86Config` or `xorg.conf` that someone had shared for that model.
To the people down-voting you: X is from a time when devices actually came with manuals. When the people using it were engineers and scientists and reading a datasheet or a manual was a normal thing to them.
I think this started around the '90s, when devices turned into magic black box consumables that are expected to "just work" while being undiagnosable when they don't.
> To the people down-voting you: X is from a time when devices actually came with manuals.
To a degree. At least from my experience, something like a monitor and video card manual would provide you with enough information to filter through a list of example modelines to figure out which ones may work. Yet they did not provide enough information to create your own modelines.
> devices turned into magic black box consumables that are expected to "just work" while being undiagnosable when they don't.
"Just work" and being diagnosable are not mutually exclusive concepts. For the most part, the Linux ecosystem reflected that and still reflects that. I suspect the shift in behavior actually came from end users. They were less willing to look through the diagnostic messages and far less willing to jump through hurdles for things that they thought should just work.
> I think this started around the '90s, when devices turned into magic black box consumables that are expected to "just work" while being undiagnosable when they don't.
I would say it's more that the architectures where a manual created by the integrator could tell you anything useful became irrelevant, obviated by architectures where it couldn't.
Including a manual with a printed wiring block diagram of the hardware, made sense in the 1970s, when you (or the repair guy you called) needed something to guide your multimeter-probe-points for repair of a board consisting of a bunch of analogue parts.
And such a manual still made sense in the 1980s, now for guiding your oscilloscope signal-probing of jellybean digital-logic parts ("three NOT gates in a DIP package" kind of things) to figure out which ones have blown their magic smoke.
But once you get to the 90s, you get complex ICs that merge (integrate!) 90% of the stuff that was previously sitting out as separate components on the board; and what's remaining on the board at that point, besides those few ICs, just becomes about supporting those complex ICs.
At that point, all of the breakage modes that matter start to happen inside the ICs. And if it's the ICs that are broken, then none of the information from a wiring block diagram is going to be helpful; no problem you encounter is likely to be solved by probing across the board. Rather, you'll only ever be probing the pins of an individual IC.
Which means that what really helps, in the 90s and still today, are pin-out diagrams for each individual IC.
Providing that information isn't really the responsibility of the board manufacturer, though; they didn't make the ICs they're using. Rather, it's the responsibility of the IC company — who you don't have any direct relationship with, and who therefore don't have cause to be sending you data-sheets.
Thankfully, these IC companies do sell these parts; and so they mostly have their IC data-sheets online. (No idea how you would have figured any of this out in the 90s, though. Maybe the 90s equivalent of Digikey kept phonebook-thick binders containing all the datasheets they receive along with the parts they order, and maybe repair people could order [photo]copies of that binder from them?)
I think, for at least the first 30 years of my life, every Linux system I've ever built was with "hand-me-down" hardware. First hardware from my parents, then from various friends, then finally from my own expired projects.
When I was young (eleven!), this meant that we'd get a new computer, and now the very old computer it replaced could be repurposed as a "playground" for me to try various things — like installing Linux on — rather than throwing it out. (My first Linux install was Slackware 3.4 on a Pentium 166 machine. Not the best hardware for 1998!) Nary a manual in sight; of course my parents didn't keep those, especially for something like a monitor.
When I was a teenager, this meant getting hand-me-down hardware from friends who had taken the parts out of their own machines as they upgraded them. Never thought to ask for manuals, of course. (Also, sometimes I just found things like monitors laying on the side of the road — and my existing stuff was so old that this "junk" was an upgrade!)
And during my early adulthood, my "main rig" was almost always a Windows or (Hackintoshed) macOS machine. So it was still the "residue" of parts that left that rig as it got upgraded, that came together to form a weird little secondary Linux system. (So I could have kept the manuals at this point; but by then, the manuals weren't needed any more, as everything did become more PnP.)
It's only very recently that I bought a machine just to throw Linux on it. (Mostly because I wanted to replace my loud, power-sucking Frankenstein box, with one of those silent little NUC-like boxes you can find on Amazon that have an AMD APU in them, so I could just throw it into my entertainment center.) And funny enough... this thing didn't come with a manual, or even a (good) data-sheet! (Which is okay for HDMI these days, but meant that it was pretty hard to determine e.g. how many PCIe lanes the two M.2 slots on the board are collectively allocated.)
> You didn't have to guess, you just had to read the specs in the manual that didn't come with your equipment.
Hey you missed a word so I added it in for you. Most consumer PC equipment definitely did not come with any documentation covering the sort of stuff X's config file was asking about.
When that documentation was available it was something you could only get by contacting the manufacturer about. But you couldn't mention the word "Linux" because the CS rep would give a blanket "we don't support Linux" and you'd get nothing.
Sure it did. There was a page in the pamphlet that came with my ViewSonic 15" that listed the supported timings. You just threw it away, but that's not X's fault.
Given that the example mentioned above was about making the scroll wheel work? When Microsoft released the IntelliMouse, it came with a driver disk; just plugging in the mouse without reading the manual left you with a non-functional scroll wheel. Support for Microsoft-style mice in later versions of Windows also did not stop companies from requiring their own drivers to enable non-standard functionality.
In the very early 90s, my dad started using some sort of Unix again (I don't know if it was an early Linux or a BSD of some sort). Up until that point, I'd only ever seen him use Windows 3.1 or some raw terminal/TTY emulator.
It was winter and suddenly his screen was a fuzzy grey, with funny looking windows, instead of the comforting (to me) windows teal.
At the time, it represented to me a change into the unknown. As it was (I assume) the start of a new contract (my dad worked at home a lot), it was also a time of financial pressure.
So I hated X, and how it looked. It was, to me, the equivalent of a brutalist housing block. Well built, sure, but foreboding to look at.
Later, when I was using Linux myself (around RedHat 5/6), if you suddenly saw that you were dropping into a "natural" X, it was a sign that you'd fucked up the window manager, or that the switch between GNOME and E (or whichever one you were trying) had gone wrong.
The stipple and X cursor are forever ingrained into my memories. I remember it so vividly how back in 1998 when I installed my first Linux distro (suse 6-ish) and after some configuring i typed "startx" and then BOOM! Grey "unix-y" weirdness for a minute or two and then KDE 1. It will never not hit me with immense levels of nostalgia whenever I come across it, which admittedly is not very often these days.
Wow, I could have written that exact comment myself. Those were the happy startx days, after you got the monitor sync rates right. The myth went that if you got them wrong, you could fry the monitor. I remember that the SuSE package came with a couple of pins: one Tux and one SuSE chameleon. I preserved them for a long time. But I moved way too many times. Fun times. Thanks for the nostalgia :)
That part about "...you wouldn’t want to wing it with the configuration, because allegedly you could break your monitor with a bad Monitor setting" -- strike the "allegedly"! Or at least, let me allege it from personal experience: I did that to one monitor, in the early 1990s. You could smell the fried electronics from across the room.
For the interested: CRT monitors have a high-voltage power supply which uses an oscillator. Cheap(er) monitors allegedly reused the horizontal sync frequency for the power supply oscillation, to save an oscillator, so if the horizontal sync frequency was very different from expected, or worse, completely stopped, it could burn out the HV power supply.
Has anyone tested this hypothesis? It could also be that the horizontal sync itself burns out, although that seems less likely.
(In even more detail: Like any other switching power supply, the HV supply in a CRT runs on a two-phase cycle: first, a coil, which creates electrical inertia, is connected to the power source, allowing current to build up. Then the current is suddenly shut off, and the force of the coil attempting to keep it flowing creates a very high voltage, which is harvested. If the circuit gets stuck in phase one, the current never stops increasing, until it's limited by the circuit's resistance, much higher than it's supposed to be. The excessively high current overheats and burns out the switching component. Anyone working on switching power supplies will have encountered this failure mode many times.)
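To put one number on that intuition (textbook inductor behavior, not a model of any particular monitor): the voltage across the coil is

    v_L = L \, \frac{di}{dt}

so the faster the built-up current is cut off, the higher the harvested voltage; and while the switch stays closed the current just ramps, roughly $i(t) = \frac{V_{in}}{L}\, t$, until only resistance limits it — which is the over-current condition that cooks the switching transistor if the cycle never ends.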
It is not really about saving one oscillator, but about two things:
- saving the drive circuitry for the flyback, which is usually combined with the horizontal deflection amplifier. Such a design also probably simplifies the output filter for horizontal deflection, as the flyback primary is part of that filter.
- synchronizing the main PSU of the display to the horizontal sync, so that various interference artifacts in the image stay in place instead of slowly wandering around, which would make them more distracting and noticeable.
It is not that hard to see the whole CRT monitor as essentially one giant SMPS that produces a bunch of really weird voltages with weird waveforms. In fact, if you take apart a mid-90s CRT display (without OSD), the actual video circuitry is one IC, a few passives and a lot of overvoltage protection; the rest of the thing is the power supply and the deflection drivers (which are also kind of a power supply, as the required currents are significant).
Your (parenthesized) explanation of switching power supplies made a lot of "secondhand knowledge" click in my head -- like, for instance, why there's lots of high-frequency noise in the DC output. Thank you!
I was briefly pleased with the ability to run an 8" monitor that looked like the kind on 90s cash registers at the impressively high resolution of 1024x768. Then after about 10 seconds it blinked out, smelled like burning electronics, and never worked again.
Neal Stephenson's Cryptonomicon made reference to a hacker dubbed The Digi-Bomber, who could make his victims' CRT monitors implode in front of them by remotely forcing a dangerously bad configuration.
In the early ChromeOS days when they were thinking about which graphics stack to use, the quiet but definitive top manager said, if they picked X11, that he'd better not see any stipple on boot. It's such a funny comment that stayed with me because it really captures how seeing that stipple is such a symbol of "I guess you're booting X11 now", and his insight on how it's not what he wanted the first impression of the product to be.
My understanding is the root weave is a pattern designed to be hard on your monitor (a CRT when it was designed). It is ugly as sin, but that tight flip from black to white was intended to expose any weakness in the driving beam, either from misconfiguration or components failing, where another pattern might obscure the problem. I think it is also rough on LCDs, where a misbehaving one really sparkles on the weave.
I am not sure why it was the default; I suspect it was to give you a chance to see how your monitor was behaving on a fresh install, and you were expected to set the background to something else. I still run the root weave on my desktop — it is OpenBSD with their Xenocara, where it is still the default — but I also run a tiling window manager, so I only actually see the root window once in a blue moon.
The stipple pattern always reminded me of the pattern on Sun workstation mousepads. For those of you who don't remember: Sun workstations had optical mice, but they're not like the Intellimouse-derived ones we enjoy today that can track on any suitably textured surface, even your pant leg. They had to go on a special mousepad with a clear, slick glass or plastic surface and a special dot pattern underneath that the optosensor would use to reckon movement. I think even getting the upper surface dirty or fingerprinted could negatively mess with the tracking (like smudging a CD could affect playback).
The Mouse Systems mouse used on Sun workstations had two LEDs (and matching sensors) of two different colors, and the solid mouse pad had vertical bars of one color ink, and horizontal of the other. You can take it from there.
Inventor / founder Steve Kirsch used some of the proceeds to fund Frame Technology, which then went on to be sold to Adobe. And then Infoseek (a search-engine also-ran), sold to Disney; and then Abaca (anti-spam), sold to Proofpoint.
> In the old days, it used to be that mouse, keyboard, video card, monitor, fonts, plugin+module data, etc. needed to be spelled out in detail in /etc/X11/XF86Config.
Man does it make me feel old that the /etc/X11/XF86Config days don't feel like the 'old days' to me. That stipple takes me back to using TWM on Sun3 workstations because OpenWindows was too slow.
Yes, it takes me back to configuring my X session for the first time on an NCD X terminal in the computer lab at uni, connected to the school's Sun and DEC servers. It was so much better than all the vt220 serial terminals, and they were "scary" enough that it was surprisingly easy to get one.
Saw the stipple just last week on a (presumably) failed startup of an airplane's seat back entertainment system. Not the X cursor but the normal X11 arrow. Recognized it immediately and was, in my own way, entertained.
That stipple background with the X cursor triggered many positive emotions. Like getting a remote X display to work. Another memory: Hummingbird X Server "eXceed" on Windows (NT I guess).
I remember eXceed being used in a bank I was working with. NT was used for the office stuff and the “big boy” trading applications were still mostly running on UNIX (mainly Solaris). With Windows 2000 a bit later, quite a few applications got ported to Windows IIRC.
I remember using a 486 Linux box and eXceed to allow people to use Netscape over the LAN with X because it was a better experience that way than using any other method of using Netscape on our OS/2 Warp desktop machines at the shop.
(That little Linux box was the star of many shows. It also delivered our mail, routed our WAN, was our primary place to run IRC clients and news readers, and it served files.)
As a youngster, the first time I managed to get Slackware installed via floppies, I was having a great time chatting with ircII and browsing with lynx. Someone on IRC told me I needed X Windows and I was like, that sounds cool, so I learned as much as I could to try to get a working config with my video card. Many hours later I got startx to take over the screen and now I'm staring at the stipple and X cursor.
It looked broken, and I assumed it was broken, so I gave up. It took me a long time to get the concept of window managers, but eventually I understood and realized that I had actually gotten X working that time years ago. Gosh.
I fondly remember programming my own higher-resolution graphics modes via XF86Config.
I used to scrounge around at work to find the highest bandwidth monitors, and then I'd program my own modes with oddball non-VESA resolutions beyond the 1024x768 'standard' of the day.
All this could be figured out by reading the specifications section of the monitor's operating manual.
I did break a beautiful Compaq 21" CRT by setting an unconventional modeline to play Gradius in MAME at its original resolution. It was glorious. But it dropped a big brown screen from time to time. Until I understood why/when it turned brown. But it was too late.
A modeline is not like reprogramming firmware or anything; it's just settings for how to move the electron beam. I don't know what I did, but it probably moved too far or something. It wouldn't show anything else but brown.
Yes; trying to drive a refresh rate higher than what's rated can do it — I think it had something to do with the flyback transformer? Some (later) CRTs had guards against this, and DDC more or less prevented it.
Well, you'd want some pattern, because on a 1-bit display your only choices otherwise are pure white or pure black, both of which kind of suck. The Blit terminal also used a stippled pattern, I think the Perq did the same, etc.
I think the interesting thing about the X background is that it's not a simple stipple, it's actually a pattern that looks very much like woven fabric.
I think if you clicked there, like arrow buttons, it cycled through the defaults. I don't remember them having any pattern that wasn't in MacPaint, though.
I think they were all custom patterns specific to MacPaint and the Finder. The QuickDraw manual page 3-7 [1] says there are only five built-in patterns: "white", "black", "gray", "ltGray", and "dkGray".
Tangential, but it's wild to me to see a vector PDF (as opposed to a scan) for a book from 1994!
Adobe Acrobat came out in 1993 but it's not like 800-page books were being distributed for it by the following year, at least not that I remember. It's really cool that whatever program that manual was typeset in, someone eventually went back and managed to export the manuscript to PDF.
Or, of course, maybe it was output in PostScript originally and they saved that and so a later conversion to PDF was trivial.
A lot of documents like that were typeset in DTP applications like FrameMaker. FrameMaker et al. were sort of like WYSIWYG takes on TeX: you'd lay out template-like design elements, and the input text would have markup that put the text into the elements.
Most such applications would directly spit out PostScript, so enabling PDF export was straightforward. And, as you mentioned, converting PostScript to PDF was straightforward as well.
A single color or tiled pattern wastes no system memory on background buffers. The color value or bitmap (an actual bit map) was written directly to VRAM each time some region needed to be filled with the background, and VRAM was the only component that needed to be big enough to hold the screen buffer. A single byte value, or a couple of bytes, was enough to define the whole background in the program.
Later, small tiled bitmaps (pix maps) were used for the same reasons to fill the screen. They were compact enough to fit somewhere in discardable free memory, and also to read from the disk with a single operation, which meant there was no benefit in keeping them in memory backed by swap file.
Some modern distributions use programmatic gradient generation for desktop background, though I am not sure whether it still works directly, or simply creates a temporary full screen image in memory, which is no different from using a regular photo.
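That mechanism is still right there in Xlib, for the curious — a minimal sketch that tiles the root window with an 8x8 bitmap (the pattern bytes are made up, not the historical root weave, and any running desktop environment will promptly paint over it):

/* Build with: cc tile.c -o tile -lX11 */
#include <X11/Xlib.h>

int main(void) {
    /* Eight bytes of 1-bit pattern data; arbitrary, not the real root weave. */
    static char bits[8] = { 0x11, 0x22, 0x44, 0x88, 0x11, 0x22, 0x44, 0x88 };

    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    int scr = DefaultScreen(dpy);
    Window root = DefaultRootWindow(dpy);

    /* The tiny bitmap becomes a server-side pixmap; the server does the tiling. */
    Pixmap tile = XCreatePixmapFromBitmapData(dpy, root, bits, 8, 8,
                                              BlackPixel(dpy, scr),
                                              WhitePixel(dpy, scr),
                                              DefaultDepth(dpy, scr));
    XSetWindowBackgroundPixmap(dpy, root, tile);
    XClearWindow(dpy, root);   /* repaint the root window with the new background */
    XFreePixmap(dpy, tile);    /* the server retains the background after this */
    XFlush(dpy);
    XCloseDisplay(dpy);
    return 0;
}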
This reminds me of the first time I ran the X Window System on my computer. After one hour trying to start X unsuccessfully, I finally got the stipple background, which left me wondering what was the next step. There was no next step. I right-clicked on the background and a menu popped up. That was it. That was X, before my very eyes, in all its splendour.
"So knowing now that root weave and all of that is from 1986, should I send X.Org a pull request to rename the party_like_its_1989 global variable to party_like_its_1986 or party_like_the_1980s"
Well, that would kind of spoil the Prince reference
Holding my first child for the first time decades later approached the sense of otherworldly bliss and joy that I experienced when, as a young teen in the mid nineties, I got X to work on my 486.
My blood pressure rose, my hands started shaking, and my feet went cold. After someone let the happy smoke out of a monitor, I would always triple-check everything... everything... and then adjust... then change the monitor with the fiberglass screwdriver... you are SCARING ME! But... the GDM-1907 really did work at 1280x1025, with a front porch in phase.
I remember hacking away at the X Config files for a long time installing slackware on my 486 laptop and some external displays in 1995-1996 and being super worried about breaking stuff.
That was kind of before you could look stuff up easily on the internet, plus you might not have had the modem or ethernet card working in linux yet either.
Commercial X servers were really something, especially those without an academic/FOSS heritage. DESQview/X [1] was one of those "DOS plus a UI and multitasking" OSes that competed with Windows 2/3.x and GEOS/PC, and it windowed with... you guessed it, the power of Motif and X. I think twm was the original window manager that came with it, and you could run emulated DOS instances and Windows sessions in X terminal emulators.
I remember setting multiple modelines and cycling through them with ctrl + alt + plus/minus
Then the monitor would freak out and start buzzing on the high resolution modeline that didn't quite work so you would switch away and go back to tweaking it.
It took me a while to get my first monitor to run at 1280x1024 @72Hz
I later had some Mitsubishi 21" monitors and got them to run at 2048x1536 @75Hz
My old desk still has a permanent bend to it from those two because they were so heavy.
A commercial X server was often worth the price of admission for a boxed copy of RedHat or SuSE. That and a copy of StarOffice. IIRC SuSE 6.x boxes came with a personal license for AcceleratedX, and RedHat 5.x and 6.x came with MetroX licenses.
They still had XF86 available but I believe defaulted to the proprietary servers. Seeing the XF86Setup screenshot evoked bad memories of inscrutable setup sessions that I was always worried would burn out my monitor.
> If you are of a certain vintage, this image is burned indelibly somewhere in your posterior parietal complex:
> Oh, my old friend. How it’s been a long time.
Heh, basically the opposite for me.
I switched to linux in 2008, Ubuntu on an HP laptop. For the most part it "just worked" and I never really needed to edit the X configs, but I do remember fiddling with them occasionally for some reason. I think it was for some peripheral or other (like a mouse, when I usually used the touchpad).
Generally at the time I'd only see this background if I was experimenting with my window manager and it crashed. Ubuntu was using Metacity at the time, and I'd switched to Beryl and was going wild with customizations. And when the window manager crashed and all I had was that background and windows I couldn't move, I had no idea how to recover and had to hard boot.
I'm fairly sure Ubuntu was hiding this on startup already at that time, if not very shortly afterwards.
Sun 4c crowd represent! Popping up X windows over from jarthur, a 32-way SMP machine using 386s, in order to cover up some of that sweet stipple action. Retro, indeed!
saw those images and could smell the stale cigarette smoke on warm bakelite and hear the whine of a vga x terminal monitor with its refresh rate set too high.
thinking about sending an xroach -display to the punk who portscanned me and cluttered my logs while leaving his display open. dumb enough to forget to set an xhosts file means dumb enough to panic when he saw the bugs...
> It also turns out that Windows 3.1 (maybe even Windows 3.0 if memory serves) had a bitmap-style background pattern feature that included a pattern composition tool.
The very first version of MacOS had a pattern editor in 1984. Mac System 3.0 (1986) improved this with a number of pre-made patterns. One of the patterns is pixel for pixel identical to "wide-weave".
That stippled background, and particularly with the X mouse cursor, brings back memories of long waits for X to start. New software updates were always exciting because they promised newer, more interesting graphics that showcased the capabilities of the hardware. Cooler icons! Motif! New ideas! And new hardware updates were even more exciting because they were expensive to get a hold of and highly anticipated. Faster drawing speed! Better resolution! Higher color depths! You could tell on a human scale how much faster the new hardware was and where your money went.
On the topic of X nostalgia, the monospace font used in TFA appears to be Go Mono, developed for golang. To me, it is heavily evocative of the console font on Sun Microsystems SPARC gear. Brings back another flood of memories.
I started using Linux around the same time with RedHat 5.0. I do remember that even with Metro, getting the X server running was not super easy and took me a few weeks and a few trips to the library to finally have a working GUI.
> So why did the stipple go away? I’m not terribly certain to be honest [...]
I always assumed that the reason was that this was the time when cheap (and low-quality) LCD displays with VGA-only inputs started to appear en masse. And on most such monitors the stipple pattern either was displayed incorrectly (smeared) or even prevented the monitor from reliably synchronizing to the signal.
i was surprised recently to find that tightvncserver didn't display this stipple and cursor when started up without any x-windows clients; now i know why. to me they bring back memories not of configuring xfree86 (which was easy since i didn't get my own computer until 01996 and didn't equip it with a leading-edge graphics card) but of using x-terminals at the university starting in 01992. the xdm login screen had the default stipple and x cursor
but i guess matt t proud is a youngster, or maybe had enough money to have his own linux-capable computer when xfree86 was hard to configure
what's the best 1-bit-deep stipple pattern for this kind of thing? the zorzpad display (same as the playdate one) is 175 dpi and has a lovely deep black but no grays. the x-windows weave pattern cited here seems like a pretty nice option if you're constrained to 4×4:
it's slightly more ornamental than pure black or white, or just vertical lines or something, but it doesn't really seem aesthetically preferable to the old x boot stipple
cool!! what is this amazing little yeso.h library?! My googling is coming up empty...
btw, a cool platform to experiment with things like this is the Commodore 64... You can fill the 40x25 screen with the same 8x8 character, then modify that character and it fills the whole screen. This can also be expanded to make cool animations and whatnot.
i never had a working c64, but i did what you're describing on the c64 on the ti-99/4a, and it's pretty similar to how vga text modes and the nintendo work as well. it was an important technique when 64k was a lot of ram and cpus were slow enough that you couldn't rewrite the whole framebuffer every frame (much less generate the video waveform on the fly in software). nowadays it just saves you five lines of code
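/* assumes the yeso.h framebuffer API asked about above: fb is an in-memory
   picture, yp_line() returns a pointer to pixel row y, and stipple is a
   4-row pattern string, 8 chars per row, where '.' means background */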
for (int y = 0; y < fb.size.y; y++) {
ypix *p = yp_line(fb, y);
for (int x = 0; x < fb.size.x; x++) {
char sc = stipple[(y % 4) * 8 + (x % 4) * 2];
p[x] = (sc == '.' ? 0 : -1);
}
}
a modern equivalent might be fragment shaders like https://www.shadertoy.com/view/XtcBWH, which i made six years ago; like the stipple pattern we started out this thread with, it simulates the appearance of woven cloth or basketry
I wonder why they removed the stipple, just to add an opt-in to re-enable it. Was it causing problems with some newer hardware? I don't get why one would make such a deliberate change to a foundational piece of software if it was purely cosmetic.
Display managers were starting the X server and then immediately drawing a background on top of it, which meant you saw the stipple for a fraction of a second and it was just kind of jarring. These days the X server commonly won't even draw its own background, it'll just inherit the boot framebuffer contents and then draw over them, so you go straight from boot splash to login prompt without flicker.
Because modern operating systems like macOS don't have it and it looks old. When it comes to GUIs -- modern good, old bad. This is actually borne out by psychological research in which users found more modern-looking UIs to be subjectively easier and more pleasant to use. It can be jarring to the end user if some component fails and that visually noisy 1bpp stipple shows up. A more modern, seamless experience is worth more than the frisson of nostalgia you get seeing your X server look exactly as it did in the 80s upon initialization.
> This is actually borne out by psychological research in which users found more modern-looking UIs to be subjectively easier and more pleasant to use.
[[Citation needed]]
I find older-fashioned ones easier to use. I actively favour Xfce over GNOME, KDE, Budgie, etc.
I am wondering what this research was, who paid for it, the age group of the people they tested, and so on.
Because it looks horrendously moire if you twitch your head ever so slightly. I am, for one, very glad Ubuntu got rid of it so I don't have to damage my poor eyesight even further by ever having to look at it.
probably because cosmetics aren't mere; they directly create pleasure or suffering
if tightvncserver is to be trusted, the new default is a solid gray, which i have to admit looks nicer. that doesn't seem optimal from the point of view of testability (for example, it'll look the same if your gamma is wrong, or if you've confused bgr with rgb, or used a tiled memory layout instead of the correct raster order or vice versa, though maybe not if you have a bit-plane layout or the wrong number of bits per pixel) which suggests that there was a different objective