i386 in Ubuntu won't die (popey.com)
145 points by jandeboevrie on Aug 27, 2023 | 120 comments


I just want to point out something a bit pedantic, but I feel is an important distinction: Linux community and devs need to stop saying i386 and say i586 or even i686 as this is closer to what is meant. Linux doesn’t support 386 anymore, you can make it work on 486 but most distributions don’t enable support for this. Pentiums (586) are usually the minimum and some even push toward Pentium MMX/Pentium II which would be 686.


It’s the name of the compilation target architecture in the compiler. Unless the architecture changes, the name doesn’t change. i386 refers to Intel 32-bit, which was superseded by x86_64


In gcc the target triplet is i686-pc-linux-gnu
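
For reference, you can ask the toolchain what it calls itself (a quick sketch; the triplet varies by distro and by whether it's a 32-bit or 64-bit toolchain, e.g. i686-pc-linux-gnu vs x86_64-pc-linux-gnu):

  $ gcc -dumpmachine
  x86_64-pc-linux-gnu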


Huh, I’ve been mentally translating i386 to mean “32-bit intel” for at least a decade, but I use clang.


i686 is very easy to accidentally misread and confuse for x64.


The x64 term never really sat well with me. I prefer the bsd method and call it amd64.

  i386 intel's 386 architecture
  ia64 intel's 64 bit architecture(aka the itanium)
  amd64 amd's 64 bit architecture(aka the 64 bit extensions to i386 aka x86-64 aka x64)


How? There isn't even a 4 or an x in the name.


It definitely reminds me more of "x86-64", the name used on early publications before they decided to brand it (AMD64, Intel calling it EM64T or something, and eventually x64 appearing as a shorthand). You've got the "86" and a 6, and unlike i386 or i486, "686" never had significant currency as a "real" chip name.

I feel like "i586" and "i686" are sort of a hairball logistically. I can recall a friend in the late 1990s whose forum signature boasted having a "686" overclocked to 262MHz-- an AMD K6. While we sort of understand i586 to mean Pentium, and i686 to be PPro/PII/PIII and beyond, you can make cases for the non-Intel Pentium-class CPUs to be anywhere from i386 (NexGen Nx586-- where's the FPU),to i686 (something like a K6-IiI+ is clearly more advanced than, say, the original i586 Pentium 60, although not necessarily matching PPro additions one-for-one)

Going completely on a further tangent, it feels like we stopped awarding architecture tiers like that. There were definitely new instructions in some of the later-generation parts, why didn't the Pentium III with SSE, for example, become an 'i786' tier? Why is there only one 'amd64'? Guessing the overall strategy is that modern code feature-detects and compilers build a binary with multiple code paths-- something that will run grudgingly on a Pentium Pro or original Socket 940 Opteron, but will pick more efficient code paths on a more recent CPU. Although, wasn't that sort of the promise of things like Gentoo-- by building things yourself, you could make a hyper-lean system knowing it only needed to run on the exact hardware you had? I'd expect a marginal performance uptick, just because you'd be avoiding checks and branches related to feature selection and smaller binary sizes, but it doesn't seem like there's a clear consensus it carries its weight.
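
As a rough sketch of the two approaches (file names here are placeholders): a distro-style lowest-common-denominator build versus a Gentoo-style build tuned to the local CPU, plus a quick look at what the local CPU actually advertises:

  $ grep -m1 flags /proc/cpuinfo | tr ' ' '\n' | grep -E '^(sse2|sse4_2|avx2|avx512f)$'
  $ gcc -O2 -march=x86-64 -o app-baseline app.c   # runs anywhere, feature-detects at runtime
  $ gcc -O2 -march=native -o app-native  app.c    # tuned to this exact machine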


Not in the compiler but in the package manager.

https://github.com/guillemj/dpkg/blob/6a03732ab0e917272a9fe1...


It's actually just the Debian name for the architecture.


> Pentiums (586) are usually the minimum and some even push toward Pentium MMX/Pentium II which would be 686.

At least in gcc, i686 is Pentium Pro, not Pentium II or Pentium MMX. It's an important distinction, as the Pentium Pro lacks the MMX extensions.


Ah, the days when I ran a Pentium Pro and thought I was hot stuff....


The i386 lacks both MMX and SSE2.

SSE2 is the endorsed floating point architecture for x86_64, although the 8087 extensions remain present.

It is reasonable to assert that i386 is an insufficient designation to describe these capabilities.


I fixed a bunch of kernel bugs for 486 and Cyrix support a while back. It ought to work.

But userspace is not the kernel, and a lot of userspace code requires newer CPUs, for good reason.


Tell that to the big vendors. Only mingw, AFAIK, ships i686 packages (with SIMD support); everyone else still stays in the safe i386 camp, without SIMD support.


Linux dropped support for the real 80386 a decade ago, in 3.8 [1], so `Linux > 3.8` and `CPU model == Intel 80386` are mutually exclusive. Whatever i386 is supposed to be, it's something more modern.

1: https://www.phoronix.com/news/MTI0OTg



> Only mingw AFAIK ships i686 packages (with simd support), all others stay in the safe i386 camp still, without simd support.

Conflating i686 vs i386 with SIMD vs no SIMD is wrong. The Pentium Pro was an i686 processor with no SIMD, and the Pentium 2 was its successor that added MMX — a very narrowly-useful integer-only SIMD extension (which was first introduced on the pre-i686 Pentium MMX). Floating point SIMD (SSE) didn't show up until the Pentium 3 (on the Intel side; AMD's 3DNow was earlier but was eventually dropped).

It's certainly reasonable to enable SSE and SSE2 when building for 32-bit x86 these days, but calling that target "i686" doesn't accurately describe the hardware requirements.
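
To make the distinction concrete with gcc (a sketch; app.c is a placeholder): the baseline architecture and the SIMD features are separate switches, so "i686" by itself doesn't imply any of them:

  $ gcc -m32 -march=i686 -o app32 app.c                      # i686 baseline, no SSE assumed
  $ gcc -m32 -march=i686 -msse2 -mfpmath=sse -o app32 app.c  # same baseline, plus SSE2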


The Linux kernel hasn't supported the 386 for over 10 years; I don't think there's any reason for them to do that.


Gentoo ebuild USE and CFLAGS flags allow the user to specify custom compilation settings as necessary for optimization on a given processor architecture like i386, i486, i586, or i686.

For example, with ebuild (or dpkg-repack or rpmrebuild) a person could build the compiler packages with all of the appropriate optimization flags for the chipset on the box where the package will be installed: https://packages.gentoo.org/useflags/custom-cflags
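
A minimal /etc/portage/make.conf along those lines might look like this (values purely illustrative):

  # /etc/portage/make.conf -- illustrative values only
  CFLAGS="-O2 -march=i686 -pipe"
  CXXFLAGS="${CFLAGS}"
  USE="custom-cflags"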

Cross-compilation is easier with clean build containers.

With distrobox (and qemu, qemu-user-static, and binfmt-support) https://github.com/89luca89/distrobox/blob/main/docs/useful_... :

  $ uname -m
  x86_64
  $ distrobox create -i aarch64/fedora -n fedora-arm64
  $ distrobox enter fedora-arm64
  user@fedora-arm64:$ uname -m
  aarch64


Fedora ships i686 packages. At least that's what they're called.
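
Easy to verify from a Fedora box (a sketch; package and version details will vary):

  $ dnf list glibc.i686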


> Plenty of other pieces of software such as WINE only work when 32-bit libraries are installed on a 64-bit system.

Funny to mention Wine specifically; they're implementing a more sophisticated Wine-on-Wine64 setup that, among other things, will make having 32-bit system libs unnecessary, through the power of Heaven's Gate, not unlike how Windows handles syscalls in real WoW64.

(As far as I understand it, Linux, having a stable syscall interface, just supports the old software interrupts for 32-bit syscalls, making this unnecessary. In fact, they seem to even work from a 64-bit process to some degree.)


I fail to see how Linux support for 32-bit syscalls here is relevant. The reason they need 32-bit libraries is that the original 32-bit windows app needs to run with the CPU in 32-bit mode. That means all the original DLLs and support libraries need to run in 32-bit as well. None of that has to do with Linux syscalls. So all the 32-bit components would still need to thunk to an out of process wine64 instance and that seems like an architecture that will perform poorly vs just having a 32bit build continue.

From my research the situation is a bit murkier in that even 64-bit wine seems to require 32-bit components for some reason but that may be a basic packaging issue and the real blocker.

Do you have any supporting links I can read for your claims? I haven’t been able to find anything.


The last note there is more of a point of interest regarding differences between Linux and Windows. In particular, WOW64 works differently than multilib on Linux because on Linux, you can still call int 80h to make syscalls.

On Windows the syscall interface was, I believe, at int 2Eh. It should still work for backwards compatibility reasons, but WOW64 does not rely on it (and presumably, it is one of those things that Microsoft could take away at any time, just like how syscall numbers change and the PEB/TEB structures move around). Exactly what happens for WOW64 syscalls, I'm not sure, because I don't really want to look at disassembly listings for Windows DLLs, so unfortunately my understanding is limited to what things I know from reading books and Raymond Chen. It is quite likely I have some of the details a bit off, no question about that.

64-bit Wine does not require 32-bit components, but traditionally to make a WOW64 Wine build, you need to do a special build process that involves building Wine twice. This is likely where things get murky in terms of packaging, because you can still separate out the two builds of Wine. As far as I know, WOW64 in future versions of Wine will not work this way and shouldn't need 32-bit packages at all.

Unfortunately though I really don't know of a good source regarding the situation and history of it. I am pretty darn sure I saw this discussed on the Wine mailing list before, though.


> WOW64 in future versions of Wine will not work this way and shouldn't need 32-bit packages at all.

Correct, with the “new”/32-bit-code-in-64-bit-process Wow64, there’s an option you pass to ./configure to specify what archs to build the PE DLLs for (i386 and x86_64 for this case)
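
A sketch of what that looks like (the exact option name is from recent Wine releases; check `./configure --help` for the version being built):

  $ ./configure --enable-archs=i386,x86_64
  $ make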


wow64 itself would have to be a 32 bit library that interacts with 32-bit wine. I don’t believe there’s a way around that. The talk of syscalls seems irrelevant since Wine has to intercept all the syscalls and redirect them to libc, reimplement, or make its own Linux syscall. Being able to do syscalls from 32-bit mode and the back compat of that seems completely irrelevant here.

Clarification: you could actually have Wow64 do IPC to 64-bit wine to get rid of the need for a 32-bit wine, but I suspect there's too much overhead for that, especially since WoW64 would still need to be in 32-bit.


IPC is not necessary: you can in fact jump from 32-bit code to 64-bit code using a special kind of FAR call across code segments! This is what the new Wine-on-Wine64 is doing. I'm mentioning Windows syscalls because Wine implements the Windows syscalls by redirecting them to UNIX libraries, and it happens to be using the same exact idea as how Windows syscalls go from Windows-on-Windows64 for calling 64-bit libraries in Wine-on-Wine64. What is a syscall in Windows is a library call in Wine, and they happen to use a similar mechanism. (I mentioned elsewhere in the thread that I realize this is not a case where things are exactly the same, but it's closer related than I think you're giving it credit.)

As I understand it, the way that AMD64/EM64T processors actually evolved, rather than having completely separate "modes" for real mode/protected mode/long mode, instead, internally, the processors are more-or-less just masking features on and off. So jumping from 32-bit to 64-bit code isn't actually impossible, in fact it's obviously necessary for 32-bit code to be able to make syscalls in the first place, but what might not be clear is that at least on AMD64, you can do this jump purely in usermode, too.

The funny abilities of x86 processors even allow you to mix and match a bit with the different modes; for example, the old Linux x32 ABI [1], or Unreal Mode [2], not to mention the fact that older protected mode OSes (like Win9x) would often thunk to real mode for legacy drivers.

[1]: https://en.wikipedia.org/wiki/X32_ABI

[2]: https://en.wikipedia.org/wiki/Unreal_mode


With the “new”/32-bit-code-in-64-bit-process Wow64, the Unix process and all Unix code/shared libraries are 64-bit. But (most of) the Windows code is 32-bit, and calls through thunks to Unix code (either Nt* syscalls or into Unix libs).

I don’t think there’s much of a performance effect, if anything it should be faster since all the Unix code running is 64-bit (and x86_64 has more registers, newer ISA extensions). You also don’t have to worry about Unix libraries eating up precious 32-bit address space.

The only current downside is OpenGL/Vulkan calls that return (64-bit) pointers to memory, those buffers need to be copied to 32-bit before being returned to the application. A Vulkan extension is in the works for that.


That doesn’t make sense. You can’t intermingle 32 bit and 64bit code in the same process at all afaik. This is a CPU decision.

An obvious problem is the fact that 32bit applications will use 32bit addressing modes which are illegal when the processor is running in 64-bit mode.

Wow64 is just a support library to make 32bit apps run in a 64-bit OS, but that library itself is 32-bit.

If you have any supporting evidence to the contrary I’d love to read it because it’ll blow up my conception of how multi arch works on 64-bit.


You can mix 32- and 64-bit code in a 64-bit process on both Linux and macOS (since 10.15), and this is what Wine uses.

It works by setting a LDT with a 32-bit code segment and then doing a far/long cross-segment jump to the 32-bit segment. 32-bit code runs, and it jumps back to the 64-bit segment for Nt* “syscalls”, Unix lib calls, and signals.


The term to search for is "heaven's gate". TL;DR: Intel processors can, in usermode, switch between 32-bit and 64-bit mode.

Note that the concept of a process is irrelevant: processes don't exist to Intel processors. There was a concept called a "task" early on in I believe the 286 line, but nowadays all OSes just set up a single task segment on the CPU and do all of the context switching via other means, because it simply wound up being faster anyway. The processor just has tons of registers that you can flip around, and being able to switch between protected mode and long mode is a property of the code segment currently being executed (IIRC) which is something you can jump between in usermode using a far call. (And this is, as far as I know, just about the only way in which x86 segments remain relevant today.)


When I said processes, my impression was that the kernel was responsible for switching the CPU into/out of 32-bit mode through a privileged operation (i.e. that's the process boundary). Turns out that's not the case: it's in the unprivileged CS register, which can be changed by a far jmp/call with the target segment set to 0x33.

Thanks for correcting me!

[1] https://stackoverflow.com/questions/24113729/switch-from-32b...


I assume this one for heaven's gate: https://www.alex-ionescu.com/closing-heavens-gate/

But that's very windows specific, I think the general term is jumping between long mode and protected mode?


I'm not sure exactly where the term comes from, but I've seen people use it to refer to doing the same thing on Linux, too. (IIRC, the code segments for protected mode and long mode also happen to be the same on Windows and Linux. Not sure why exactly.)


Intel processors can, as opposed to AMD processors? or both can?


Sorry, both.


The CPU runs in either 32bit or 64bit mode. That determines how it interprets the code. A CPU in 64bit mode would interpret 32bit code as 64bit garbage and vice versa.


I had some weird issues with Ubuntu 22.04, the OEM Steam installer, and some missing 32-bit library. Steam required it, and every time I tried to install it, it broke something, mostly WiFi. No idea why, and I am no expert, not even close.


>will make having 32-bit system libs unnecessary, ... , not unlike how Windows handles syscalls in real WoW64.

WoW64 literally has an entire 32-bit equivalent to Windows's native 64-bit libraries and binaries.


I should've known better than to be vague here.

I'm talking about just syscalls. Windows differs from Linux in that Windows doesn't provide backwards compatibility at the kernel ABI level, only at the userspace ABI level; all syscalls HAVE to be dispatched by NTDLL, because the syscall numbers are not stable between kernel versions anyway. (Not that this entirely stops crapware like anti-cheat from doing so, but nonetheless.)

From my point of view, the closest Wine equivalent of a "syscall" would be calling out to UNIX system libraries, so it does bear some similarities even if it's not quite the same.

I don't personally want to open WOW64 NTDLL in IDA because doing so would probably limit me from contributing to Wine, but my understanding is that WOW64 NTDLL is just thunks to Win64 NTDLL, using an intermediate library (one of the wow64*.dlls presumably?) to perform a Heaven's Gate call into the 64-bit NTDLL.


WOW64 syscalls are indeed implemented using heaven's gate, the 32-bit ntdll calls into a "wow64cpu.dll" module, which does the long-mode transition and ends up calling into the 64-bit ntdll. Unfortunately manual syscalls are still possible (and widely used) on windows, either by hardcoding syscall IDs for common versions or performing very rudimentary "disassembling" of the ntdll syscall stubs.


Not even that is true anymore since some Windows games are now hardcoding system call numbers.


Why in god's name!?


It's mainly going to be anti-cheat and DRM trying to be tricky and mitigate hooks and obfuscate their behavior.


Syscalls aren't just dispatched by NTDLL; they're also dispatched by "Win32u.dll". In earlier versions of Windows, "User32.dll" made system calls directly as well.


Okay, I genuinely did not know that User32.dll made syscalls directly. Interesting, and kind of bizarre.


Glad to see the Ubuntu devs decided not to make the same mistake as Apple; a lot of recent desktop Linux adoption is being driven by Steam and its huge library of Linux-compatible games.


There's work ongoing in Wine that will allow 32-bit applications and libraries to be able to call 64-bit host libraries. Hopefully this means that we won't have to mess around with multilib for much longer :)

https://www.winehq.org/announce/8.0


I wouldn't call Apple's decision a mistake; they knew exactly what they were doing and their long-term plan required it. The relatively insignificant size of the user base that still needs 32-bit support is dwarfed by all the devices that will never need it. Apple has always been quick to drop backward compatibility to support innovation in both hardware and software. They dropped floppy support and CD/DVD support eons before the rest of the desktop market. Since they own the complete stack at this point, and with everything SoC, at some point they will start saving die space by not wasting it on 32-bit support. To get there, however, requires that they start pushing the software first.


I recently took the list of the commercial software that supported MacOS that I’ve paid for over the years and checked current platform compatibility.

Twice as many packages run under Linux than under MacOS, specifically because of the lack of 32 bit support.


While that may be tragic, it still is the intentional effect and hardly a mistake. Apple focusing on its $250B iPhone market over its < $50B Mac market very much makes sense. When they removed 32-bit support we might have guessed at Apple Silicon on the desktop, and lo and behold that did come to pass. The Intel to ARM transition was much smoother than the PowerPC to Intel move, in part because of 2 vs 4 versions in the Universal Binaries. I am going to go out on a limb here and say that Apple was also aware of ARM's forthcoming complete removal of AArch32.


Are you talking about games? Or old versions of apps?

I know of very few actively-maintained 32-bit Mac applications that didn’t make it to 64-bit: MathType and AccountEdge are the ones I remember right now.


I mean, not having 32-bit support by default makes sense. It wastes disk space for nothing. On Arch Linux, for example, to have 32-bit support you have to enable the multilib repo, which makes sense because in most installations you don't need it.

Plus, 32-bit software can still run if it's statically linked or run inside a container. The only thing that doesn't ship is the dynamic libraries for 32-bit executables to run.
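
For reference, on Arch that's just uncommenting the multilib section in pacman.conf and syncing (a sketch):

  # /etc/pacman.conf -- uncomment these two lines
  [multilib]
  Include = /etc/pacman.d/mirrorlist

  $ sudo pacman -Syu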


It's not installed by default so it's not wasting space if you don't use it.

It does get installed if you want to play games on Linux though, because computer games will require 32-bit libraries for many years to come.


I think they have something a bit like a container built into Steam: https://github.com/ValveSoftware/steam-runtime


Steam definitely can’t run 32 bit games under current MacOS. (Most of those games run under Linux Steam though; I’m hoping Asahi gets an installer working for my 16” M2 soon).


As I understand it, in Apple’s case 32-bit support was also negatively impacting development of Cocoa/AppKit due to some Objective-C technicalities.

I think probably the right way to handle 32-bit compat is well-integrated virtualization ala Classic Mode from OS X’s early days. When the user tries to run a 32-bit binary, boot up a minimal old copy of the host OS and run it there. It’s not as nice as running it directly, but I think that’s fine; it gently pushes devs to bring their antiquated software into the modern era while allowing users to continue to run it and keeps OS development unshackled from the past.


I thought exactly the opposite. Why keep all the old clutter around just for some edge cases that can easily be resolved by virtualization?


Desktop linux has finally found a niche where it can actually compete with Windows - gaming - and the first thing distro developers and some users demand is changes that will make it much worse for that purpose. Require more hoops to jump through. Reduce compatibility. Reduce performance. All for the sake of "muh purity".

There is some serious aversion to providing users what they want in the Linux world.


If Steam doesn't work on Ubuntu, Ubuntu may as well kill off what remains of their consumer desktop product. Barely anyone uses Linux at all, and of the few people that do, a significant number like to play a video game on their computers every once in a while. Based on the Steam hardware survey and the number of Steam users, I'd estimate about 8 million people would suddenly lose the ability to play games.

Many of them would go back to Windows. Others would move to another Linux distro. Either way, Ubuntu would make a lot of people mad.


Steam works out-of-the-box in Fedora Silverblue, via Flathub.

Ubuntu avoids Flatpak for strategic reasons, which is a valid choice that might make sense for developers, but it also makes Ubuntu less easy-to-use for people who just want to use their computer.


What solution does Flatpak provide, other than the fact the x86 files are now in a special hidden directory? Someone still needs to build all the 32 bit dependencies for org.freedesktop.Platform.Compat.i386.

I suppose Ubuntu could stop packaging these files if they distributed all 32 bit software over snap or Flatpak, but I don't think this solves as many problems as removing x86 multiarch would create.
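
For what it's worth, the Flathub route already pulls all of that in automatically; a sketch (the application ID is from Flathub, and the 32-bit compat runtime should come in as a dependency):

  $ flatpak install flathub com.valvesoftware.Steam
  $ flatpak list --runtime | grep -i compat.i386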


>Ubuntu avoids Flatpak for strategic reasons,

Is Flatpak still desktop-focused, i.e. dependent on a desktop session, or has that changed recently?

If this still holds true then your claim is bullshit; snap is a more powerful tool because I could set up CLI programs on a server without a desktop session.

From the Flatpak FAQ I still see

>Flatpak is designed to run inside a desktop session and relies on certain session services, such as a D-Bus session bus and, optionally, a systemd --user instance. This makes Flatpak not a good match for a server.


Meh, why not use Docker for that?


Between Docker and snap I prefer snap; for something simple, it's simple to package a snap.


Whether it is easier to package for snap is debatable. Building docker images is a much more widespread and well established workflow.


I thought that most games run under Proton? Wouldn't Proton eventually adopt WoW64, eliminating the need for 32-bit executables on the host? https://www.winehq.org/announce/8.0

(Threads on x86-64 Linux can freely switch between 32-bit and 64-bit mode.)


Wine is making excellent progress, but you still need specific versions of Proton/Wine for specific applications and games. I don't think we'll be free of 32 bit libraries just yet.

It'll happen eventually, but I think we've still got a few years of multiarch ahead of us.


> barely anyone uses Linux at all

The steam console runs Linux. Android runs a Linux.


Fair enough, Android has actual market share. It's rarely considered "Linux" in this context though.

The Steam Deck is a device serving a few million in a market of billions. The Switch outsold the Deck ten to one, with the Switch being an old console and the Deck in its release year.

Even with Steam Deck included in the survey, about 2% of Steam users (which is a subset of the gaming market) use Linux.


It was clear from the context that he was talking about traditional Linux desktops with Gnome, KDE etc; not systems that happen to use the Linux kernel as a base but are otherwise completely different.

(That's the case for Android at least; I'm not sure about the Steam Deck.)

Android could swap out Linux for some other kernel (e.g. Fuchsia) relatively easily and users wouldn't notice. Google already did that with the Nest Hub.


> (That's the case for Android at least; I'm not sure about the Steam Deck.)

AFAIK, the Steam Deck is a traditional Linux desktop (KDE), which automatically runs the Steam launcher (in full-screen Big Picture mode) by default (but you can easily exit it and go back to the normal desktop if you want).


I would love to see another competitor, but fuchsia is not there yet.

It is simpler for when you have minimal hardware, but phones are no longer minimal.


Because a lot of recent desktop Linux adoption is being driven by Steam and its huge library of Linux-compatible games.

I wonder, though, whether with GeForce (etc), streaming games from the cloud, the local OS becomes even less important after all. Even a MacBook could be a decent gaming laptop (the hardware is already superb for gaming, Apple just keeps the software crippled).


Eventually all Steam games will need to run in containers or something similar to manage the dependency hell; expecting the host system to have some collection of libraries in compatible versions is just a recipe for disaster.


They already do even for native Linux games, that’s what Steam runtime is. Unfortunately some libraries need to be from the host (e.g. the Vulkan/OpenGL libraries).


> Unfortunately some libraries need to be from the host (e.g. the Vulkan/OpenGL libraries).

AFAIK, they don't need to be from the host, they only need to be compatible with the host hardware and the host kernel (and the kernel ABI stability rules makes this easier). For instance, when running the Steam flatpak, these libraries come from the freedesktop runtime, not from the host.


Yeah big -1 from me on Steam using its own graphics/sound/hmi drivers.


This is what flatpak solves


Static linking / bundling libraries helps absolutely nothing. My SimCity 3k from Loki is dead because of OSS vs. the lack of hardware mixing, because it tries to go fullscreen with nowadays-unsupported resolutions and refresh rates, etc. The statically linked copy may actually "load", but it's actually the dynamic executable which saves the day, since you can at least replace/hook some functions in order to provide better compatibility with a recent desktop.


You can still access those files in the flatpak, replace them with your new ones, and package it back as a flatpak and let users use it.


No, you can't. That would most certainly involve modifying and then redistributing a modified flatpak, which is certainly not going to fly in the face of binary-only proprietary software like the one most likely to benefit from a stable ABI over the years.

Not to mention, flatpak would still add nothing. Just additional annoyances.


Eventually == over a decade ago, at least on windows.

From a technical perspective, steam was originally basically just a chroot-style environment that let you have an independent set of DLLs installed for each windows game on your machine.

Before that, I averaged four hours of fucking around with directx diagnostic bullshit whenever I bought a new AAA title for windows (and getting the new game to work usually broke some old games)

These days, Steam’s Linux support for Windows games is better than native Windows support ever was.

Currently, they require a fairly small base set of 32 bit Linux libraries with a relatively stable ABI. That lets them abstract away all the other crap on your Linux desktop.


Agreed, I went through the same thing when I was on Windows before switching back to Linux about three years ago. Windows games that I run, leading and AAA titles in some cases, run better on Steam Proton.


Except Windows by and large satisfies that expectation.

Linux throwing backwards compatibility and stability out the window is one of the biggest reasons it doesn't appeal to the common user. Note that Android provides this (granted less than Windows does), and we see common people use Android.

And before you mention it: Yes, I know Linus Torvalds is (sort of) adamant about backwards compatibility; the problem is the rest of Linux does not and will not care.


Windows backwards compatibility for games is abysmal, up to the point users are forced to use Wine libraries to run old Windows games on Windows.

It's still not as bad as desktop Linux (e.g. some Loki games cannot even be loaded due to breaking changes in _glibc_ out of all libraries), but it is still bad enough to qualify as abysmal.


It’s particularly bad for Vista-era “Games for Windows” titles. Getting Fable III to run on a modern Windows requires so much hoop jumping you’d be forgiven for thinking you’re playing Portal.


That was a beautiful metaphor, and I’m stealing it.


Exactly, this is uniquely a linux problem that these consumer facing operating systems were designed to avoid. In theory all Steam needs is a proton version "good enough" to emulate a handful of major eras of Windows programs.


That "rest of Linux" that you talk about is not Linux at all.


What's another term you think they should use?


Maybe "commonly encountered Free Software desktop and server userspace components"? After all, much of that is what you will also find on BSDs et al., and will give you the same trouble (if any) there.

The Linux kernel itself is in fact very, VERY extensively backwards-compatible, which is why I find it particularly unfair (on top of being wrong) to use the label Dalewyn did. It's the installed userspace libraries that aren't - at least not in all cases, but the situation sure has improved a lot over the last decade or so.


>The Linux kernel itself is in fact very, VERY extensively backwards-compatible, which is why I find it particularly unfair (on top of being wrong) to use the label Dalewyn did.

Did I ever dispute that?


Ubuntu, Gnome, etc


This is already the case for GOG which - for the good old games - ships a bundled DOSBox container wrapping the game.


Sometimes I sympathize with the snap developers to be honest


Really? Yesterday I tried to open GIMP over a remote SSH connection to quickly edit something. Turned out that it couldn't open the display because snap doesn't work over remote connections ...

Imho, the least they can do is get the basics right. Of course, it's Canonical's fault for pushing something that is not ready for the real world.


I wonder how long it will take for people to claim X11 doesn’t support remote displays (in the same way they claim it doesn’t support high DPI displays).


It's probably because the env from the host system is sandboxed. You should explicitly set the DISPLAY somehow when you run the snap


That's strange because xeyes (not a snap app) worked just fine, so $DISPLAY was certainly set.


That's because xeyes is not sandboxed, snaps are. They don't automatically inherit the environment.


Would it be as simple as:?

  DISPLAY="$DISPLAY" gimp


I don't run Ubuntu so I wouldn't know, but the other sandbox system I know, Docker, has its own syntax to set environment variables. It's worth having a look at the man pages.

      DISPLAY="$DISPLAY" gimp
This is a no-op. It sets DISPLAY in the current env with the value of DISPLAY in the current env. Processes spawned from the shell already inherit the current environment (unless they're running in a separate namespace, of course)
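
If anyone wants to dig further, a few things worth checking from the SSH session (just guesses at the likely culprits, not a verified fix): whether the forwarded display and the X authority file are actually visible to the confined snap:

  $ echo "$DISPLAY" "$XAUTHORITY"
  $ ls -l "${XAUTHORITY:-$HOME/.Xauthority}"
  $ snap run gimp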


Happy to be proven wrong, but I really don't think Gimp would be usable over X/SSH in anything slower than local LAN.

VNC or something similar might be a better choice


Um the L in LAN stands for “Local”. Sorry I’m from the department of redundancy department just conducting an inspection here.


True, but what I actually meant was that once you're beyond the first router it might already be too slow (especially for something like GIMP).


Fwiw, I was trying to do this over LAN. In fact to a headless machine in the same office. This kind of basic thing should just work.

I use VNC all the time, but sometimes it is just more convenient to use a remote X connection.


It was 20 years ago that I tried running GIMP over remote X on a 100base-T LAN. It was pretty much unusable. Granted, modern-day Gigabit networks might have made it more palatable, but I suspect with all the modern UI toolkit assumptions about the X server being on the same machine, the end result is that it will still be worse than an optimized VNC.

As a comparison, windows remote desktop on the same network was pretty snappy at the time.


Very true, but there is no excuse for prioritizing performance over correctness.


> because snap doesn't work over remote connections

Nor does Wayland, which is the default in Ubuntu now.


We can sympathize with the problem they want to solve without agreeing with their proposed solution.


Yes if they'd do this inside a research lab far away from regular users.


Now that Ubuntu is treating 32-bit like a second class citizen (by having an explicit allow-list of 32 bit packages that are required by Steam), can the BSD’s just take the same narrow target and provide good Steam support, or have they already?

There are only a few things keeping me on Linux: Steam, Slack and Zoom. I haven’t checked for BSD support for any of that recently, but Slack used to(?) run better in the browser than in their app.

Edit: Also native docker, does that still have to live in a vm, windows/mac style?


This is yet another thing I find particularly disappointing of the "open source era". Linux probably has the best hardware support of all OSes out there, having long surpassed Windows itself. However, no other open OS even comes close. I am now as stuck with Linux as I was stuck with Windows.


This is more “steam remains a crappy app we’re still forced to use for the vast majority of games”.

After multiple years it’s still an intel binary on Mac, despite being a chrome wrapper and the Mac games in the catalogue all being native binaries at this point.

I would assume if not for the 64bit transition it would still be i386 on Mac as well.


i386 will not die in Ubuntu because it is not dead in Debian. Once it is dead in Debian, it'll die in Ubuntu too.


Why do you think that? They decided to drop it before, as pointed out in the article: https://lists.ubuntu.com/archives/ubuntu-announce/2019-June/...


I mean, my point was that they are free to drop it earlier than Debian will, but if Debian drops it, they will not maintain it by themselves.


That makes sense, but it's the opposite of what you did say.


i386 will die, and so will x86-64.

RISC-V is inevitable.



