
Nice read. I wouldn’t have minded more technical details about the implementation and challenges, but that’s probably because I’ve had to write generic SVGA drivers to support generic graphics cards before. (I’m not clear, though, on what the more convenient alternative to VMs was that the author ended up using.)

Side-bar but still on-topic: it really irks me to no end that Windows 3.x and Windows 95 could get a fairly hardware-agnostic (fallback) software-rendered GUI fully up and running and exposed to user space in the early 90s, yet even today Linux/BSD can’t manage that (even just in SVGA mode) without vendor-specific drivers. XFree86 and then Xorg with the fb driver were attempts at the same thing, but I can attest they never achieved that universality. I had hoped the EFI fb could finally give us the same for modern PCs, but whether open source efifb drivers in userland will work on a chipset/implementation they haven’t been tested against is a real crapshoot.

I had “success” (compared to the status quo, not compared to the situation on Windows) writing X drivers that wrote to the kernel framebuffer, but that broke when everything was rewritten or deprecated in order to support EFI. Even then, support for listing the available modes and switching to them was very poor (which makes sense given how little serious use the kernel framebuffer sees), never mind figuring out which modes intersected with those the display supported. Laptops with integrated plus discrete graphics cards (or desktop motherboards with the same) were also problematic for various reasons.



Historically you wanted the -vesa Xorg driver; today -modesetting should work fine on the firmware-provided framebuffer using simpledrm. But the EFI spec provides no way to change the screen mode at runtime, so you're going to need a native driver for that under all circumstances.
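(For anyone following along: if Xorg doesn't select it automatically, forcing the modesetting ddx is a one-stanza config in standard xorg.conf syntax; the Identifier string is arbitrary.)

```text
Section "Device"
    Identifier "Card0"
    Driver     "modesetting"
EndSection
```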


IIRC in real-world testing on actual consumer hardware in the field (sample size in the hundreds of thousands), we found fbdev was better supported than vesa (with more restrictions, of course).

Inability to change the EFI resolution is OK except with high-res (4k) discrete GPUs running without scaling. But that’s OK because you can do fake DPI scaling in software - the actual issue is that performance at 4k sucks if not hardware accelerated.

The bigger problem today is that EFI fb support is still in its nascence both manufacturer- and software-side. Manufacturers ship crap that’s not up to spec, while some EFI FB handlers are too strict in what they expect or haven’t added quirks for some of the very common hardware you might run across.


fbdev would only work on BIOS systems if you were using vesafb or had a hardware-specific framebuffer driver loaded. Vesafb would tend to cause problems with suspend/resume and needed to be configured with bootloader arguments (the kernel had no support for transitioning to real mode and the protected mode entry point for VBE basically never worked, so either the bootloader or the 16-bit kernel entry code needed to be used to do the mode setting). The -vesa ddx either ran in vm86 mode or used x86 emulation code so could do more setting at runtime. I don't know of any cases where vesafb would work and -vesa wouldn't. I did a bunch of the hardware enablement work for Ubuntu back in 2004/2005, so real world deployment experience here is in the millions.
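(For context on "configured with bootloader arguments": the classic setup looked something like the stanza below - GRUB legacy syntax, with mode numbers that had to match what the card's VBE BIOS actually offered; 791 is the commonly documented 1024x768 16-bit mode, and the vesafb options are from the kernel's own vesafb documentation.)

```text
# GRUB legacy stanza as commonly documented at the time; vga= selects
# the VBE mode at real-mode boot, since the kernel could not switch later.
kernel /boot/vmlinuz root=/dev/sda1 vga=791 video=vesafb:mtrr:3,ywrap
```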

I've no idea what you're talking about as far as EFI fb support goes. The spec literally does not provide a mechanism for a running OS to get anything other than a pointer to a linear framebuffer and the metadata you need to use it. There's nothing more to support. What quirks are you talking about?
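(To illustrate how little there is: the GOP hands the OS a framebuffer base address, its size, and the mode geometry, and that's it. A minimal sketch, with plain Python standing in for the C structures and made-up mode values, of addressing a pixel in that linear framebuffer:)

```python
from dataclasses import dataclass

@dataclass
class GopModeInfo:
    # The fields EFI's Graphics Output Protocol mode info exposes
    # (names paraphrased; values below are illustrative).
    horizontal_resolution: int   # visible width in pixels
    vertical_resolution: int     # visible height in pixels
    pixels_per_scan_line: int    # stride in pixels; may exceed the width

def pixel_offset(info: GopModeInfo, x: int, y: int, bytes_per_pixel: int = 4) -> int:
    """Byte offset of (x, y) from the framebuffer base, assuming a
    packed 32-bit pixel format (the common BGRX case)."""
    return (y * info.pixels_per_scan_line + x) * bytes_per_pixel

info = GopModeInfo(1920, 1080, 1920)
print(pixel_offset(info, 10, 2))  # (2 * 1920 + 10) * 4 = 15400
```

Everything else - mode enumeration, mode switching, acceleration - is outside what the spec gives a running OS.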


Yeah, we specifically disabled suspend and resume and supplied the needed parameters at boot time when “safe graphics mode” was selected. There were definitely cases where neither vesafb nor vesa worked for us and we resorted to our own basic framebuffer driver with out-of-band management of modes via exposed IOCTLs to bypass X’s logic here altogether.

I know you’re posting from the perspective of a Linux dev, but some of this was on FreeBSD (and some of it was not).

The EFI stuff was all on BSD, which took a lot longer to make that transition than Linux did. I didn’t dig into those issues as much as I did the BIOS/VESA stuff, as my role in the product was slowly being phased out by the time we added native EFI support, but we had plenty of devices where customers would resort to using CSM to work around EFI framebuffer graphics driver issues.


I remember using Xvesa in the 00's on 90's hardware with great success. Damn Small Linux, etc.

iirc, -vesa got kinda bad after the advent of the "GPU". Cards didn't natively support VBE and emulated a subset of it just for compatibility purposes.

It's gotten worse these days. I don't know if I'd call it a bad thing though - with the push for hardware accelerated rendering to help with battery life on portable devices, many of the desktop environments have lost support for "software" graphics. They instead depend on software OpenGL support via llvmpipe and chug hard even on modern devices (if you don't have a driver installed) and VMs.


While it’s true that having a better proliferation of hardware accelerated displays in use is a net win, you can’t discount the need to be able to bring up a GUI on generic hardware without knowing the underlying stack in advance.

While under X it was possible to install a dozen drivers and - mostly - be able to cycle through them (auto-detection sucked and continues to suck at matching drivers to hardware via manufacturer/device IDs), DRM/KMS drivers are unfortunately another story. You often cannot bring up a KMS driver for one piece of hardware on another and expect to be able to gracefully unload it if it’s not supported. There are KMS drivers that can’t be installed in parallel (you have to choose which set of cards to support over the other a priori), and there is a ton of legacy hardware that will never get working KMS drivers, ever.


What KMS driver will attempt to bind to unsupported hardware, and which other KMS driver would you be attempting to replace it with in that scenario?


We had dependency conflicts between the latest separate releases for both older and newer nVidia hardware (obviously all complaints should be directed at nVidia there), and issues with the latest AMD drivers for their older devices panicking on unload in the presence of certain newer hardware, and/or issues loading KMS for the newer products at the same time as the separate driver for the older. Can’t remember the specifics, but we had to load and try them in a certain order (not having a perfect map of hardware IDs).


Parallel shipping of the various versions of the proprietary stack is definitely more tedious than it should be, but the PCI IDs that they support are declared in the module metadata and only the appropriate version will be bound to the hardware by the kernel. The scenarios where you'll have trouble are where an in-kernel driver has bound before the proprietary one (eg, nouveau) and potentially left the card in a state the proprietary driver isn't expecting. But the answer there is not to simultaneously ship both the free and the proprietary stacks.
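(The binding mechanism being described can be simulated: each module declares glob-like alias patterns over PCI vendor/device/class IDs - visible via `modinfo -F alias <module>` - and the kernel binds the module whose pattern matches the device's modalias string. A rough sketch; the IDs and module names below are illustrative, not a real driver table:)

```python
from fnmatch import fnmatch

# Alias patterns each module declares in its metadata. The format is
# pci:v<vendor>d<device>sv<subvendor>sd<subdevice>bc<class>sc<subclass>i<iface>
module_aliases = {
    "driver_new": ["pci:v000010DEd00002204sv*sd*bc03sc*i*"],
    "driver_old": ["pci:v000010DEd00001380sv*sd*bc03sc*i*"],
}

def match_driver(device_modalias: str):
    """Return the first module whose alias pattern matches the device,
    mimicking the kernel's modalias-based driver binding."""
    for module, patterns in module_aliases.items():
        if any(fnmatch(device_modalias, p) for p in patterns):
            return module
    return None

# A device's modalias as exposed under /sys/bus/pci/devices/*/modalias:
dev = "pci:v000010DEd00002204sv00001458sd0000403Bbc03sc00i00"
print(match_driver(dev))  # -> driver_new
```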


> in-kernel driver has bound before the proprietary one (eg, nouveau) and potentially left the card in a state the proprietary driver isn't expecting

Thanks for reminding me of that - we definitely ran into that some. It wasn’t always with proprietary drivers, though. I think there were cases where legacy intel or amd cards would be incorrectly brought up by the old non-KMS drivers and then the KMS driver wouldn’t load, or maybe vice-versa as there were definitely specific cards within generations of chipsets that were documented as working under KMS/DRM failing and we would have to resort to the legacy drivers.

> But the answer there is not to simultaneously ship both the free and the proprietary stacks.

We didn’t really have a choice given that there wasn’t a one-size-fits-all driver (we didn’t try simpledrm - iirc it went through phases of breakage). But truth be told, while nvidia was the worst and most painful to deploy and upgrade, and we had to choose which version of the nvidia driver we wanted to ship (until we got the conflicting shared dependencies worked out by hot-swapping the components before loading the driver), it was the most likely to work with all the cards it advertised support for. The only issue there was older cards not supported on current kernel versions, or lacking drm drivers altogether.


> Cards didn't natively support VBE and emulated a subset of it just for compatibility purposes.

What do you mean by "natively support VBE" or "emulated"?

VBE is just a set of functions in the VBIOS to do modesetting and get information about the modes.
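(Concretely, VBE is a handful of int 10h services selected via AX. A sketch of the well-known function numbers from the VBE spec and of assembling the register values for a mode set, requesting a linear framebuffer via bit 14 of BX; mode 0x117 is shown only as a common example, real code must query the mode list:)

```python
# Well-known VBE function numbers (int 10h, selected via AX):
VBE_GET_CONTROLLER_INFO = 0x4F00   # fills a VbeInfoBlock
VBE_GET_MODE_INFO       = 0x4F01   # fills a ModeInfoBlock for the mode in CX
VBE_SET_MODE            = 0x4F02   # BX = mode number plus flag bits

VBE_MODE_LINEAR_FB = 1 << 14       # bit 14: request a linear framebuffer

def set_mode_registers(mode: int, linear: bool = True) -> dict:
    """Register values a caller would load before the int 10h mode-set."""
    bx = mode | (VBE_MODE_LINEAR_FB if linear else 0)
    return {"ax": VBE_SET_MODE, "bx": bx}

print(hex(set_mode_registers(0x117)["bx"]))  # 0x117 | 0x4000 = 0x4117
```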


It also has some calls to have the card do accelerated blitting on later versions.


That's VBE/AF but I'm not aware of any VBIOS that actually contained the VBE/AF extension functionality, despite that being the original goal; instead, VBE/AF mainly exists as a TSR "soft VBIOS" and only for relatively few early GPUs.


Some degree of VBE is required for Windows to boot in BIOS mode (and even in EFI mode before Windows 8 for really tedious reasons). I'd expect it to be fine for basic mode setting, but you're still stuck with the modes the card firmware provides which means it's probably 4:3 ratios.


What are those tedious reasons? Like when booting on very old hardware, you got a Vista-like splash screen on Win7?


Some part of the boot stack expected to be able to make VBE calls to set the initial screen mode before loading actual drivers, which means hardware that wants to support Windows 7 needs to provide a CSM layer to support that even if it's otherwise booting via EFI (the CSM layer just rewrites VBE calls into EFI GOP calls, so the graphics card doesn't need to provide VBE itself).


On the other hand, Xorg pulled the original -vga driver quite a long time ago because it was quite a lot of code.


I recently installed FreeDOS on a Ryzen system to get a parallel port device working. It boots fine but anything that uses graphics modes is flaky and will often hang the machine hard where Ctrl+Alt+Del doesn't work. SB16 compatibility is out of the question.

Curiously I got it into a weird mode where artifacts from a corrupted framebuffer in DOS were persistent on screen as a faint ghost image after booting into Linux even when cycling the power for a short duration. I have no idea how that could happen.


> Curiously I got it into a weird mode where artifacts from a corrupted framebuffer in DOS were persistent on screen as a faint ghost image after booting into Linux even when cycling the power for a short duration. I have no idea how that could happen.

This is most definitely a hardware issue with your display, not the frame buffer. What you describe would require alpha blending in software and couldn’t just “happen” with a linear array of corrupted bytes; it would require intentionality.


I cycled (soft) power on the screen too. It was persistent during the BIOS boot phase, kernel text framebuffer, and into Wayland.


Wasn’t Windows back then dependent on vendor supplied device specific display drivers for anything above basic VGA? There wasn’t a real standard for “SVGA”, and the VBE stuff wasn’t that well supported early on?

The Windows fallback until circa Windows XP was good old 640x480x16.


> Wasn’t Windows back then dependent on vendor supplied device specific display drivers for anything above basic VGA?

In theory, yes, but the reality was starkly different as I recall (I worked there in the late 90s). In my experience, Microsoft was doing a ton of heavy lifting helping vendors with device drivers.


I remember using an 800x600 16-color generic SVGA driver with early Windows (3.11 or 95?). At the registry level, I guess the mode was made mostly standard as a derivative of the 640x480 16-color standard VGA mode. Indeed both modes fit within the 64 kiB segment using 4 planes, matching the 256 kiB of VGA memory.
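(The arithmetic above checks out, under the usual planar-VGA assumption of 1 bit per pixel per plane and 4 planes for 16 colors:)

```python
# Planar VGA modes store 1 bit per pixel per plane; 4 planes give 16 colors.
def plane_bytes(width: int, height: int) -> int:
    return width * height // 8

SEGMENT = 64 * 1024  # the 64 KiB host-visible window at segment A000h

for w, h in [(640, 480), (800, 600)]:
    size = plane_bytes(w, h)
    print(f"{w}x{h}: {size} bytes/plane, 4 planes = {4 * size} bytes,",
          "fits" if size <= SEGMENT else "does not fit")
# 640x480 -> 38400 bytes/plane; 800x600 -> 60000 bytes/plane; both under 64 KiB,
# and 4 planes stay within the 256 KiB of VGA memory.
```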


Regarding the more convenient alternative - as I understand it, it was using an assembler and compiler that can run on a modern system, and only passing the floppy image between the host and the VM, instead of running the development tools inside the VM.


Yes, my understanding of the article was that he was cross-compiling and cross-assembling on his modern system.





