2038 is going to be an interesting year, that's for sure. I'm big into Final Fantasy VII modding/speedrunning, and while reverse engineering the game I stumbled upon some code that actually checks whether the current date is before 2038. If it's not, it refuses to run. I have no idea why they put this kind of check in the game's code. I tried to remove it, but the game still didn't start, so there is probably some other issue that prevents it from working when the date overflows.
I can only imagine that a lot of other legacy software will have similar issues when we reach year 2038.
> I stumbled upon some code that actually checks if the current date is before 2038. If it’s not it refuses to run. I have no idea why they put this kind of check in the game's code,
That's actually very clever. Instead of crashing in unexpected ways or doing odd things, just cleanly exit. If you really want it to run after 2038, you have to emulate the clock, which would then avoid these potential Y2038 bugs.
Clever would be fixing the Y2038 bugs. This sounds more like a lazy/overworked programmer getting a QA bug report and marking it as WONTFIX because 2038 is going to be someone else's problem.
To clarify: Clever would be to spend several days hunting down a bug that will affect 0.0001% of your users and has an easy, well-known workaround?
It's going to affect 100% of your users in 2038. So yeah, if you see the game as just a product with a limited lifecycle then sure you can make excuses for cutting corners. If you see the game as a piece of art then you should have some interest in it working in the future. It's not like time_t overflows are rocket science.
Although the most clever solution would be to not have any bugs in the first place ofc ;)
It will break earlier, because Unix time isn't only a display format. Tons of calculations schedule things into the future. But what if the future ends up being in the past? Then you have problems now, starting with things on a 10-year cycle, then 5-year, yearly, monthly, daily. The nearer we get to 2038, the more things explode.
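To make that concrete, here's a minimal sketch (the dates and the 32-bit storage are assumptions for illustration, not from any particular system) of how scheduling ten years ahead wraps a 32-bit timestamp into the past:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Assumed scenario: a legacy system storing times in a signed 32-bit integer. */
        int64_t now       = 1900000000LL;               /* a day in 2030 */
        int64_t ten_years = 10LL * 365 * 24 * 3600;
        int32_t renewal   = (int32_t)(now + ten_years); /* truncates past 2^31 - 1 */
        printf("scheduled: %lld  stored: %d\n", (long long)(now + ten_years), renewal);
        /* "stored" prints a negative value, i.e. a date decades in the past. */
        return 0;
    }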
Definitely. I was charged with fixing Y2K problems at a hospital in the late '90s. The software scheduling checkups for pregnant women broke first, because it had the estimated birth date, i.e. up to 9 months ahead.
Keep in mind that FF7 originally had a PC version around launch (that's the version I first played) so it's quite plausible that all versions could have this code if it dates to the original. Don't think it would have made sense to have it on the PS1, though.
Every port of Final Fantasy VII (PS4, Switch, Android, Xbox One, etc) is just the original 1998 PC port EXE running in an emulation layer similar to Wine. The general consensus in the community is that Square has either lost the game's source code or feels that doing things this way is safer than rebuilding the code. In a fairly forward-thinking move, FF7 PC has the ability to use an external DLL as its graphics driver, so the ports use this functionality for new graphics drivers that allow for higher resolutions and more modern APIs than the DirectX 5 support the game shipped with.
Each version has the correct button prompts for its controller. I don't know enough about FF7 to know how this was accomplished, probably by editing the script files. One thing you might be interested in: there's evidence that the team responsible for the new FF7 renderer code either hired a modder to write it or used his code as a base. https://blog.julianxhokaxhiu.com/2020-02-19-final-fantasy-vi...
I owned that one. It was a tricky thing to install back then on my Pentium II at 255 MHz. I could get the graphics to work without glitches, or the sound, but not both at the same time. :D
I think a lot of systems, a lot of them embedded, will fail in odd ways. Y2K was mostly a data format issue. This is much more interesting (and widespread).
BCD could be one of many problems, but ASCII strings would be more common in the systems I dealt with. There were a lot of assumptions about two-digit years that maybe made sense sometime in the 70s for code that you didn't think would survive more than a few years.
Also take a look at struct tm. Its tm_year looked like just a 2-digit year, and as such people may format it with printf("19%02d", ...). It is actually the number of years since 1900. In early 2000 I had to fix a broken FTP server that was sending 19100 as the year.
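The bug is easy to reproduce; a minimal sketch contrasting the broken formatting with the correct one:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        struct tm *t = localtime(&now);
        /* Buggy: tm_year is years since 1900, not a two-digit year.
           In the year 2000, tm_year == 100, so this prints "19100". */
        printf("buggy:   19%02d\n", t->tm_year);
        /* Correct: add 1900 to get the full year. */
        printf("correct: %d\n", t->tm_year + 1900);
        return 0;
    }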
I think it's likely to be better handled, but at the same time people keep citing the non-disaster of Y2K as a reason not to do disaster preparation, so I don't know.
My sense is that it'll be a lot worse. Y2K was only a data format issue, whereas the 2038 issue has more to do with the underlying hardware. It really all depends, and we'll see. Certainly, a lot of old software will stop working in 2038.
I'd say both are data format issues; Y2K was usually at a higher level and occurred in the custom data formats of individual programs, while 2038 is in the OS and basic libraries, or even in hardware.
I do think there were some BIOSes that messed it up too, though, so that's rather low-level as well.
For systems running in finance, the problem should have already shown up when calculating dates for 30 year bonds and mortgages. But as another poster said, there's a ton of embedded systems out there running Linux that likely aren't handling it correctly.
Knowing this issue is coming in 2038, I'm registering a lot of domains to that effect and hoping I can drum up a lot of contracting work that year, make bank, and retire.
I’m assuming by EPOC you’re talking about Unix timestamps? There’s nothing wrong with them if they’re 64-bit.
As I understand it, it seems like it’s mostly software using 32-bit integers that will struggle.
So if you’re writing modern code on a modern runtime running on 64-bit platforms you should be fine (easy to verify by changing your dev environment’s clock).
I kid; fixing this will require even more effort than the Y2K bug, considering how many more Linux devices have been deployed since then.
Consider how many of those Linux devices are difficult or impossible to update, and you start to realize the mess we'll be in. At least Y2K affected systems that by and large could be updated easily.
It all depends. We have more devices and software now, but a lot more critical stuff is centrally hosted by cloud providers that'll be ready long before the deadline.
It’s good this came up here. I just spent the last week or so going through the MaraDNS code base and fixing all of the little Y2038 issues.
I know, time_t is 64-bit with pretty much any new Linux distro out there, so why are people seeing Y2038 issues? It’s because the Windows 32-bit POSIX compatibility layer handles Y2038 very poorly. Once Y2038 is reached, the POSIX time() call in a 32-bit app fails with a -1. It doesn’t use a rolling timestamp somewhere in 1901 the way 32-bit Linux applications with 32-bit time_t do. It fails hard, returning -1 for every call to time().
Now, it's true that Microsoft does have proprietary calls for time and date which are Y2038-compliant, and, yes, native Win32 apps should use those calls instead of the POSIX ones, but in the real world it's sometimes a lot easier to, say, just use stat() to get a file's timestamp instead of having to use CreateFile() followed by GetFileTime().
This is why a lot of Windows apps are still seeing Y2038 issues.
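If you're stuck behind that compatibility layer, the failure is at least detectable. A minimal sketch, assuming the -1-on-overflow behavior described above:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        /* Per the behavior described above, a 32-bit build can get
           (time_t)-1 once the clock passes 2038 instead of a wrapped value. */
        if (now == (time_t)-1) {
            fprintf(stderr, "time() failed: clock outside 32-bit range?\n");
            return 1;
        }
        printf("%lld\n", (long long)now);
        return 0;
    }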
In terms of Linux apps, the Y2038 stuff is mainly seen in old 32-bit binary only apps. Since that stuff is mainly games, where an inaccurate datestamp isn’t a serious issue, I think we will see emulation libraries which give old games a synthetic time and date so they aren’t outside of the Y2038 window. New apps will use a 64-bit time_t even if compiled as a 32-bit binary.
For those of us who chuckle at it now, this is likely to be far, far bigger than Y2K. The reason is that software running critical aspects of human life will have proliferated to a much greater degree by 2038 than it had by 2000.
Before 2000, we had software running our systems, yes. But it was not as distributed, and not as ubiquitous, and not as deeply ingrained into human culture as it is today. This proliferation will obviously continue past today, and while hardware and low-level OS/software mitigations (as well as a herculean effort to clean up the mess) will make up the gap, it's not hard to see that this is likely to be much more impactful upon failure because of the "embeddedness" of these systems.
A box that has just been doing its thing for 40, 50, or 60 years and all of a sudden fails is likely to be more impactful than one that's only 20 or 30 years old.
Y2K38 is a "standard Unix functions" problem. I don't think the surprises this time are going to be mission critical software doing date math, I think the surprises are going to be "non-mission critical apps" that "don't do date math" where it is not that "no one wants to risk touching it" but more "no one has thought to touch it in years because it isn't mission critical and it's just some random thing in the stack".
Case in point, this article's pointing a finger at fontconfig. Who considers fontconfig mission critical software? fontconfig has been open source forever, is it just "too boring" that no one has bothered to do a Y2K38 audit on it? It probably doesn't even really care about dates for the most part, so maybe no one even realizes it needs a date math audit? Multiply that across the very long tail of Unix apps and libraries since 1970. That's the weirder risk of Y2K38 than Y2K: the huge amount of "non-mission critical"/"non-date math" code that potentially exists in every Unix-derived tool. (With all non-Windows OSes in common usage today themselves being Unix-derived, that's a lot of surface area.)
Y2K was looking for everything that did date math with varchar(2) or 2-digit BCD. Those were needles in haystacks, certainly, but the needles were sharp enough to know when you found one. Y2K38 is looking for subtle differences in (mostly) C macros and C library function calls, and making sure that time_t values are appropriately sized for modern platforms. That almost sounds to me more like looking for particularly colored straws in a haystack.
On an older Windows version, one day I lost internet connectivity on my machine and I didn't understand what had happened because the network interface was reporting that it was ok and the external connection appeared to be operating correctly.
Virus? Possible, but unlikely. A virus wants to spread, not limit its opportunities to do so.
After investigating, I was able to determine that the system clock somehow had gotten set past 2038 and this was sufficient to destroy network connectivity. As soon as it was corrected, everything was fine again.
Not the first time I have run into a clock issue breaking software.
We are lucky 2038 is still 16 years away and not next month.
There's parts of me that still think that 2020 will be the great shiny future, even though we all know that it turned out to be an epic bust. 2050? Can't see it from here.
SSL certificates are only valid for a slim date range, setting your clock too far ahead or too far behind will result in invalid certificates throwing errors.
I had a similar issue. I've also had issues with dual boot to Linux. When logging back into Windows the clock is always wrong. I have to turn on and off the automatic date setting function to get it to refresh.
You can fix that by forcing one or the other OS to use the time method the other one does. The issue is that by default one uses UTC and the other uses local system time.
Windows expects the system clock (the one you can set from the BIOS that keeps time when you don't have NTP) to be set to the local time zone by default. Linux and most other operating systems expect the system clock to be set to UTC.
Usually it's easier in a dual boot environment to set your non-Windows operating system to treat the system clock as local time, most Linux distros literally have a checkbox for this, but sometimes this isn't an option (IIRC Mac OS on a Hackintosh is one of these cases) and sometimes you just want to stand on principle that UTC is "correct" and make Windows adapt to what the rest of the computing world agreed on.
In that case, you can open up the registry, navigate to "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation", and create a QWORD "RealTimeIsUniversal" which is set to 1. Reboot and now Windows will treat the system clock as UTC.
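For reference, the same setting as an importable .reg file (equivalent to creating the QWORD by hand; hex(b) is the .reg notation for a QWORD):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
    "RealTimeIsUniversal"=hex(b):01,00,00,00,00,00,00,00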
When I first learned of the 2038 problem, I changed the date on my computer and watched it tick up to 2^31. All sorts of things crashed, most notably the Norton antivirus software. This was on Windows XP, if I recall correctly.
Interesting that Norton seems to use Unix timestamps. I’ve never developed for Windows, is it common for Windows devs to use them too? Or just some niche feature causing a more widespread problem?
Unix timestamps are common in windows software, though the standard timestamp used by the operating system is the number of ticks (100ns) since 1601-01-01 UTC.
I vaguely remember a few different timestamp formats in use in different places, but the 100ns-tick is very common in Microsoft APIs.
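For anyone converting between the two formats, the offset between the 1601 and 1970 epochs is the well-known constant 116444736000000000 (in 100 ns ticks); a rough sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* Offset between 1601-01-01 and 1970-01-01 in 100 ns ticks. */
    #define EPOCH_DIFF_100NS 116444736000000000LL

    /* Sketch: convert a Windows FILETIME tick count to a Unix timestamp. */
    int64_t filetime_to_unix(int64_t filetime_ticks) {
        return (filetime_ticks - EPOCH_DIFF_100NS) / 10000000LL;
    }

    int main(void) {
        int64_t ft = 132854688000000000LL; /* 2022-01-01 00:00:00 UTC as FILETIME */
        printf("%lld\n", (long long)filetime_to_unix(ft)); /* prints 1640995200 */
        return 0;
    }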
Chuckling at the thought of people around the year 60040 getting nervous... nobody knows how to deal with Windows or C any more, but much essential software has run for 58000+ years, and it's about to crash.
A long time ago, I was playing Far Cry with the console open. Then all of a sudden, IRC chat messages from the IRC client I was running in the background would appear in-game, with fonts and effects (shadows) from the game. *shrugs*
Oh man, this just unlocked a cool memory - I was in my computer graphics course in college and was trying to write a shader that would generate a wood-grain texture.
I messed something up about it, and must have been pulling from the wrong graphics memory, and noticed that the brown wood didn't have a swirl pattern but had what looked like text in it?
After looking a bit closer, I noticed it was the text that was written to the console in Visual Studio; I had somehow brought that graphics buffer in to use as my "swirl pattern". I had to sit back and think a bit about how data on your computer isn't always as safe as you think after that...
I think that's just what happens if you read memory that hasn't been zeroed on the GPU. There are no out-of-bounds checks or anything on GPU memory. (Probably/hopefully there are on the web.)
I've seen many bits of cached browser viewport textures when writing shaders and making mistakes. I've been wanting to create something procedurally from it somehow ever since I encountered it.
As recently as 2012 I remember my MacBook Pro having rendering glitches where a webpage I had been looking at hours before would be displayed in the frozen window of another app, with scan lines or sometimes inverted.
If I recall correctly, this is exactly how the 3DS was hacked... Nintendo locked down the OS userspace to not have access to certain parts of memory... the GPU, however, had DMA and access to all RAM... there is a Chaos Communication Congress talk about it. Interesting stuff.
It seems odd that the problem is in the access time of the files - why does a font library (or, almost any program) care about the last read time of a file? Sure, the modification time is important, but it's pretty rare that code should care about when a file has been read before. The only program I have heard of that broke when access times are unreliable was mutt, the email client.
> why does a font library (or, almost any program) care about the last read time of a file? Sure, the modification time is important, but it's pretty rare that code should care about when a file has been read before.
There's no separate system call for the modification time; a single system call (https://man7.org/linux/man-pages/man2/stat.2.html) returns the three times (atime, mtime, ctime) together. The font library probably wanted just the modification time (to check whether the font cache is stale), but it cannot get the mtime without also getting the atime (and ctime).
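A minimal illustration: one stat() call fills in all three timestamps, so a caller that only wants the mtime still receives (and can trip over) the atime. The path here is just an example:

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void) {
        struct stat st;
        /* One call returns atime, mtime and ctime together;
           there is no syscall that returns only the mtime. */
        if (stat("/etc/fonts/fonts.conf", &st) != 0) {
            perror("stat"); /* a 32-bit build can fail here with EOVERFLOW */
            return 1;
        }
        printf("mtime: %lld\n", (long long)st.st_mtime);
        printf("atime: %lld\n", (long long)st.st_atime);
        return 0;
    }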
Sure, but EOVERFLOW isn't coming from stat() in this case, is it? The man page states that this is returned for problems with file sizes too big for 32 bits, not for struct timespecs. Something else must be doing things with the atime, I would guess?
I think stat() in the 32bit version of libc is making the 64 bit system call to get the value from the kernel and noticing that it would overflow. The man page for stat() [1] says that EOVERFLOW is a possible error value so that lines up.
Yes, st_ino too, which means it could also break on a filesystem with large enough inode numbers. Using a 32-bit userspace nowadays seems more problematic the more I look.
The nice thing about OpenBSD having no ABI guarantees is that they can fix this problem the correct way: they made time_t 64-bit on all architectures.
The downside to having no ABI guarantee is that you will not have old binaries to run in the first place; hope you remembered the source. sigh
All things considered, if you have the source to everything, ABI is overrated; if you don't, it is vital.
Extra thoughts: OpenBSD is cool because they don't have or need the *64 file access functions (fopen64, fseek64, ...). However, this sucks when porting... because they don't have the *64 functions.
> The nice thing about OpenBSD having no ABI guarantees is that they can fix this problem the correct way: they made time_t 64-bit on all architectures.
The correct way is to create APIs that take a 64-bit time_t and migrate applications over to them. No ABI guarantee means the old APIs can be removed if they are a burden to implement, but obviously for the case of time_t they aren't, so sticking a warning message in there is sufficient for the next 15 years or so.
> All things considered, if you have the source to everything, ABI is overrated; if you don't, it is vital.
ABI might be, but API isn't. Even within a single application, the correct way to do internal interface changes that affect a lot of code is generally to create the new one, move callers, then remove the old one. Certainly in a case like this where keeping the old APIs around is trivial.
And OpenBSD does not have the source code to everything, and even in ports, there tends to be an upstream and issues with porting.
However, the binary interface changes every couple of weeks, and they have a flag day (a breaking, incompatible change) every year or so.
As such, they routinely take actions that are unthinkable on Linux, like an ABI flag day. The OpenBSD project has gotten really good at handling them; after all, if you break stuff all the time, you get good at picking up the pieces. To misquote Raul Julia: "For you, Linux, the day your ABI changed was the most important day of your life. But for me, it was Tuesday."
This means that the OpenBSD project is exceptionally unfriendly to binary-only programs (commercial software). As much as I like OpenBSD, I would not even try.
> changing
>
> typedef int32_t time_t;
>
> to
>
> typedef int64_t time_t;
>
> does not change your API
Not sure I agree, because time_t itself is part of the API, and programs can use it for more than just calling your syscall, like in their own structures.
Linux has found it doesn't need these flag days. They're an ugly old sledgehammer that used to be quite common in systems programming, but Linux (and presumably Windows, though I haven't seen the source code to make a judgement) really pioneered a much more disciplined, thoughtful, and structured way to manage APIs and ABIs, such that new versions can be brought in with little disruption and old versions can be maintained, usually with little burden on the code base. It's a better system all around, IMO. Even if you did decide to remove the old stuff right afterwards, that change process is just the right way to go. And keeping the old stuff around, and not having to change the world or break your users, is actually a good thing too; making changes less painful than these big-hammer flag days keeps things very flexible and adaptable.
> EOVERFLOW: pathname or fd refers to a file whose size, inode number, or number of blocks cannot be represented in, respectively, the types off_t, ino_t, or blkcnt_t. This error can occur when, for example, an application compiled on a 32-bit platform without -D_FILE_OFFSET_BITS=64 calls stat() on a file whose size exceeds (1<<31)-1 bytes.
None of off_t, ino_t or blkcnt_t are to do with times, they are related to file size. The man page has nothing to say about EOVERFLOW and times. Perhaps the man page is out of date, or perhaps it is another syscall that is returning EOVERFLOW?
I'd be surprised if it was actual userspace code in the font library that was generating that errno. After all, if you care enough to spot an overflow in your calculations, you probably care enough to handle that error case better (and know enough about the situation to handle it properly). Something must be making a specific syscall, getting EOVERFLOW, then throwing it back up to the user. But is it really the ubiquitous stat() ?
Note: The linux manpages cover the syscall interface of the linux kernel, not the glibc implementation. You can check the glibc source code yourself, but glibc will set errno for multiple reasons outside of the raw syscall, including for time overflows.
> Note: The linux manpages cover the syscall interface of the linux kernel, not the glibc implementation.
They cover both, but they focus more on the glibc wrappers. For instance, the manpage for stat(2) we're talking about says "On success, zero is returned. On error, -1 is returned, and errno is set to indicate the error.", which is not the syscall interface return value (the syscall does not know about errno; it returns the negative of what will end up in errno instead of -1). Another example is the manpage for exit(2), about the _exit() function (the exit() function, without the underscore, is at exit(3) since it's not a system call), which says "In glibc up to version 2.3, the _exit() wrapper function invoked the kernel system call of the same name. Since glibc 2.3, the wrapper function invokes exit_group(2), in order to terminate all of the threads in a process."
Failing before returning the wrong data is good when someone is around to fix the failures. Not so good if it bricks the application without a fix even though the bad data would not have been important, such as e.g. file access times.
There is no way to be sure the date in question is or is not important. In that case, instead of giving someone a lifetime exposure to ionizing radiation, or pausing the respirator because the last time the person took a breath is in the distant future, it's better to crash right away so a person can figure out what to do.
The original post is about a game distribution client. Don't you think that health/safety software and entertainment software can have different approaches to error handling?
Also remember that the latter class of software is usually abandoned after a short time, so there will be no one around to fix the bug once a user runs into your fail-early check. So yes, pretty please don't fail early in release builds of single-player offline games and other software with similar characteristics; corrupting some state that *might* result in problems is much preferable to intentionally making the game unplayable.
It's also a matter of how you fail and how much. Surely you don't want your computer to BSOD on any application error, so maybe there are other cases where you want to limit the scope of the error handling too. E.g. zeroing out the atime if it overflows could make sense. Failing the whole stat call likely causes more problems than it prevents.
If you are going to cheat those achievements then you might as well use Steam Achievement Manager to get all of them without messing with your system time. Or just don't bother, but can't fault anyone for succumbing to Steam's psychological manipulation.
Even without the Y2038 issue, lots of things use timestamps for cache invalidation and might mess up when things are updated while you are in the future.
Steam client fonts are a mess even without y2038 bugs. There's no good way to change size globally - despite the entire client being based on HTML/CSS tech! The current 'solution'* is to edit stylesheets manually as if it were a new skin, and find/modify every invocation of font-size.
* Or use Big Picture mode, which has other issues.
I continuously spent over 25 years of my life not playing the Stanley Parable. If I have to twist the Narrator's arm to force him to realize it, so be it.
Windows' native API for dates consists of the types FILETIME and SYSTEMTIME which will handle 5-digit years, and even the old MS-DOS FAT timestamp goes up to 2107. The 2038 problem came from Unix and applications which use 32-bit Unix epoch time.
It's entirely possible somewhere in the stack of a 32-bit Windows application sits a DLL that uses POSIX time, stat, etc. functions and will fail in a similar way.
The 2038 bug is definitely expected, and unlike on Linux, with its highly distributed and composable architecture, management of fonts on Windows is the responsibility of the monolithic operating system maintained by one company.
It's not out of the realm of possibility, but I'd be a little startled if Windows hasn't already been gone over with a fine tooth comb for Y-2038 issues.
Microsoft stopped shipping 32-bit Windows (i.e., Windows 11 is 64-bit only). They still fully support 32-bit apps, of which a lot still exist. Visual Studio 2022 is the first version of Microsoft's flagship IDE that is 64-bit, for example!
You could probably guess I don't claim to be authoritative here. :D I just recall there was a story not long ago about them doing some hard decisions going into 64 bit. https://learn.microsoft.com/en-us/troubleshoot/windows-serve... seems to indicate that you should mostly expect it to work.
Odds are high I just saw the headlines about how they are stopping 32 bit sales of their OS. Though, I couldn't tell you for sure what I was misremembering.
Apps will still overflow if they try to parse a 64-bit Unix timestamp using a 32-bit integer. If that's the case, I imagine they'll break in interesting ways.
Similarly, I had a bug where KDE's zip unpacker would extract empty(?) folders but interpret uninitialized memory as the folder date, creating folders with a modification time after 2038 that Wine couldn't open.
We had a computer dedicated to encoding video that had a misconfigured date that was in the future so all of the encodes from it had the incorrect date. It played havoc with another program on a different system with the correct date. It took a few days for that program's support team to recognize the issue. There was nothing wrong with the file as in the video/audio data was not corrupt or anything. Doing a stream copy to a new file with a sane date made the program happy again.
Never did understand why the software even looked at dates, and support couldn't explain it either.
For the curious: https://doc.ntp.org/reflib/y2k/. 2^32 seconds since 1900-01-01 00:00 ends on 7 Feb 2036. The 2038 problem is 2^31 seconds since 1970-01-01 00:00, the Unix epoch.
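You can verify the signed 32-bit rollover moment yourself; a small sketch that formats the last representable second:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* The last second representable in a signed 32-bit time_t. */
        time_t last = 2147483647; /* 2^31 - 1 */
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&last));
        printf("32-bit time_t ends at %s\n", buf); /* 2038-01-19 03:14:07 UTC */
        return 0;
    }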
Y2038 programming is my retirement plan. I’ll be around 60 and probably kicked out of Silicon Valley employability due to ageism. Like the cobol programmers in the 90s, in 2038 they will be scrambling to find old farts who still know C and UNIX.
C++ and Java are gonna be the obscure legacy languages. Plain bare-metal C is still a valid option for "baby's first programming language" today due to the popularity of Arduino.
fontconfig is the library responsible for indexing and looking up fonts (e.g. by name - note that most software doesn't specify fonts by file name but rather by font name)
If fontconfig fails to stat() a font file, it presumably aborts trying to record any information about that font file. Notably, it doesn't know what the font file's actual font name is. If all fonts fail due to stat(), no information will be available about any font. fontconfig has multiple levels of fallback (e.g. "similar" fonts first), but in this case since nothing is known about any font you just get whatever font happens to be first in the list of all fonts.
Or possibly you just get whatever font _didn’t_ have an atime too far in the future. Of course, now that you have actually opened and read the file, you will have changed the atime and thus might have made it unavailable the next time you run Steam.
Honestly, everyone should run their filesystems with the `noatime` option set, so that they never record access times at all. We very rarely care about when files are accessed. Usually we care most about when a file was last _modified_, the mtime, so this loses us almost nothing and saves a huge amount of writes to the filesystem at the same time.
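If you want to try that, it's a one-word change per line in /etc/fstab (illustrative entry; your device and filesystem will differ). Note that modern kernels already default to relatime, which batches most atime writes:

    # /etc/fstab -- mount with noatime so reads no longer update access times
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime  0  1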
It's funny because I'm actually actively using the atime: when I need to know what's the last video file I watched in a series, I use `ls -ult [--time-style=full-iso]` or `stat -c '%x %n' *.mkv | sort -r` to show files by order of access time: last accessed (played) will appear first.
Is Linux really not commonly using 64-bit file times yet? I know there are some years left, but it seems like a relatively straightforward problem that needs to be solved for everyone.
> Is Linux really not commonly using 64-bit file times yet?
It is, and that was the problem. The legacy 32-bit API the library loaded within Steam was using, or more precisely the legacy 32-bit system call used by that library, could not represent the 64-bit time, so the kernel returned the EOVERFLOW ("Value too large for defined data type") error. The root cause of the problem seems to be that Steam is still a 32-bit application (and hasn't been recompiled to use the Latest and Greatest version of glibc, which allows for 64-bit timestamps even in 32-bit processes).
If his Linux installation did not use 64-bit file times, the timestamp stored for the file would have fit in 32 bits, and the error wouldn't have happened (though other things would probably have broken first).
(Well, actually, Linux is using 34-bit file times, at least on ext4 and xfs, but that's a bit of a nitpicking: what matters is that it doesn't fit in 32 bits.)
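For what it's worth, here's a quick way to check what a build actually got. With glibc 2.34 or newer, a 32-bit binary can (toolchain permitting) get a 64-bit time_t by defining _FILE_OFFSET_BITS=64 and _TIME_BITS=64:

    /* check_time.c -- print the width of time_t.
       Build sketch: gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 check_time.c */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
        return 0;
    }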
The Steam client is super old and crusty. Ubuntu tried to remove 32 bit libraries like Apple a while ago but Valve told them if they did that, Steam would no longer be supported so the plan was scrapped.
Even if Valve finally get around to updating the client to 64-bit (it already is on macOS AFAIK), most old games are not going to be updated so insisting on keeping the 32-bit libraries around is the right thing to do. Keep in mind that the distro just needs to supply the base system (libc, OpenGL drivers, etc.) in 32-bit and Steam already comes with copies of everything else via the steam runtime.
Sometimes games are bugged and won't properly unlock achievements. For example achievements will work fine on Windows, but be broken in the native Linux port.
The software can also re-lock achievements you've already gained, like if a family member plays on your account and unlocks some that you were planning on earning during your playthrough. Or if you just want to reset your achievement progress for some reason.
The other day I was playing around with modding a game, and inadvertently unlocked some achievements I didn't truly earn, so I re-locked them with the achievement manager. I'll wind up unlocking them in the future during normal play.
Of course you could also use it to cheat, and instantly gain 100% completion in a title.
Semi-related question, but on Linux how does one get to sane font handling like on MacOS where you can basically completely forget that fonts exist? I've had so many problems getting fonts looking good on Linux that I practically gave up on the problem as it seems completely intractable.
I stopped using MacOS many years ago, so I do not know if the font handling has improved in the meantime.
When I was still using MacOS, I did not notice anything special about its font handling. The difference in comparison with Linux or any other free OS was that MacOS included a very good set of high-quality fonts, much better than those provided by default in any Linux distribution, not that it had some special font handling. Because of that, even after I ditched MacOS, I kept a few of its typefaces. Even today I am still using a couple of them.
On Linux, it should be possible to obtain any complex or weird font-matching behavior that might be desired by editing the fontconfig rules, which should be located in a directory like "/etc/fonts/conf.d/", although the exact place might vary between Linux distributions. However, I have never attempted to do that beyond establishing nice defaults or replacements for the fonts that might be specified in web pages, e.g. "serif", "sans-serif", "Arial", "Times New Roman" and so on (i.e. by editing 60-latin.conf, 60-generic.conf, 45-latin.conf, 45-generic.conf and the like).
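As an illustration, a minimal rule of that kind (the preferred font is an assumed example; it could go in a conf.d file or in ~/.config/fontconfig/fonts.conf):

    <?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <!-- Sketch: prefer a chosen font whenever "sans-serif" is requested. -->
    <fontconfig>
      <alias>
        <family>sans-serif</family>
        <prefer>
          <family>Source Sans Pro</family> <!-- assumed example font -->
        </prefer>
      </alias>
    </fontconfig>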
The main method by which I have ensured that everything is displayed with beautiful typefaces on my Linux computers has been simply uninstalling all the default fonts and installing other, nicer fonts in "/usr/share/fonts/". If you fail to uninstall some ugly font, there will always be some application or web page that insists on using it, despite your attempts to suggest better fonts.
Many years ago, I bought a number of beautiful typefaces from online stores like Linotype, Adobe and others, and I mostly use those on Linux. Nowadays it is much easier to replace the default fonts with better ones because, unlike a decade ago, there is now a relatively large number of good fonts that are open source or at least free of charge.
When I first heard about the issue, I changed the system clock to one day before 2038. One day later, all sorts of things crashed. It's amazing how dependent we are on these things. Hopefully we'll get it fixed; we've got less than 26 years...
I have a similar issue with Caprine, the desktop frontend for Facebook Messenger. I have not, however, messed with my date/time settings, nor is it such a big issue that I have taken the time to try to fix it (I don't use Messenger often, and on desktop even less).
I just couldn't fake the game out like that with the Stanley game. Why mess with the clock? Just turn it off for 5 years; that seems to be the point of the entire game. You're just playing yourself... thinking too hard about it.
I agree that cheating this achievement is silly, and if you do that you might as well be honest with yourself and use Steam Achievement Manager instead of messing with the system time. But remember that Steam does push you to "complete" games by showing achievement progress as a percentage and listing the ones you don't have yet right on the game page in the client, and that (like the immersion-breaking achievement notifications) cannot be turned off.
Quick question, do most OSes need admin/root/whatever perms to change the system time? If not, I wonder if there are any potential serious exploits that could take advantage of this ability.
Replay attacks come to mind (a credential valid for five minutes oh hey it's valid again now) but if an attacker can change the system time you probably have more important things to worry about, such as them having root or worse.