
Do you have any specific examples of what would break because of this?


I hate to suggest a web search, but there are a lot of examples online, e.g., https://www.viva64.com/en/a/0004/

One of the big ones is that integer constants default to 32-bit int on LP64 / LLP64 systems unless their value is large enough to force a wider type, so e.g.

    #include <limits.h>  // CHAR_BIT
    #include <stddef.h>  // size_t

    // Equal to 0x80000000 on ILP32
    // Undefined behavior on LP64 / LLP64: 1u is only 32 bits wide,
    // and shifting it by 63 is undefined
    size_t max = 1u << (sizeof(size_t) * CHAR_BIT - 1);

    // Correct version: widen the constant before shifting
    size_t max = (size_t)1 << (sizeof(size_t) * CHAR_BIT - 1);
This can also happen with, e.g., multiplication:

    #define K 1024
    #define M (1024 * K)
    #define G (1024 * M)
    #define T (1024 * G)
    // Undefined behavior on both 32-bit and 64-bit:
    // 1024 * G overflows 32-bit int when T is expanded.
    size_t max_object_size = 10 * T;
    // Correct version (#1)
    size_t max_object_size = 10995116277760;
    // Correct version (#2)
    static const size_t K = 1024;
    static const size_t M = 1024 * K;
    ... etc ...
Or you need to align your pointers for some reason, so you do the cast correctly (with uintptr_t) and still get

    #include <stdint.h>  // uintptr_t

    void *align_ptr(void *p) {
        // Mask is 0xfffffff0 on 32-bit (correct)
        // Also 0xfffffff0 on 64-bit, zero-extended to
        // 0x00000000fffffff0, which clears the high half (mistake)
        // return (void *)(((uintptr_t)p + 0xf) & ~0xfu);

        // Correct version: compute the mask at pointer width
        return (void *)(((uintptr_t)p + 0xf) & ~(uintptr_t)0xf);
    }
Consider that you might do some pointer alignment, e.g., to work with SIMD. This stuff isn't so crazy, and if you haven't been targeting 64-bit, a lot of it can creep into your code base over the years. Legacy projects may not compile even remotely cleanly with warnings enabled, that's just a fact, so even though warnings / static analysis will catch some of the errors above, you're not safe.

This is why so many languages have stricter rules about converting integers to narrower / wider types.


I was hoping for some real-life examples from games, and the reasons they'd use pointer arithmetic in such a way, or so often, that it couldn't easily be migrated, or why they'd even do it in the first place on a desktop machine. I get it for old software.

Of course, basic 32- vs 64-bit examples are easy to find. I guess I should have clarified, as the OP sounded like he knew the topic well.

I’ve never dug hard into C++ gaming architectures and the patterns they use.


In my experience a far more common and much harder issue is dealing with dependencies.

A large project probably has lots of them. If you're lucky, there is a 64-bit version of each one. Very often there isn't, so you have to find something equivalent and rewrite all interactions with it.

And there might have been very good reasons for choosing those specific dependencies.

That can take ages and can be quite demotivating. You might not even be able to find out whether you have any problems in your own application until after you've spent months replacing dependencies.


A Collection of Examples of 64-bit Errors in Real Programs: https://www.viva64.com/en/a/0065/


Any kind of pointer math that assumes a pointer is 4 bytes is an easy example.
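For instance, a minimal sketch of the pattern (function names are hypothetical, and "broken" here means broken on LP64 / LLP64):

    #include <stdint.h>

    // Walks a table of pointers with a hard-coded 4-byte stride.
    // Fine on ILP32; on 64-bit it lands in the middle of every entry.
    void **next_slot_broken(void **slot) {
        return (void **)((char *)slot + 4);
    }

    // Correct version: let the compiler supply the pointer size.
    void **next_slot(void **slot) {
        return slot + 1;  // advances by sizeof(void *)
    }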


Why would you write code like that, except in a well-contained module?


It doesn't matter why they decided to. They did. We all know better, we can all point fingers until we are blue in the face, but that doesn't change the fact that people will lose access to software they purchased.

> except in a well-contained module?

This is probably one of the best reasons. Game developers aren't entirely to blame here; middleware developers seem to be sticking to 32-bit like balsamic glazing. Even if you could pry their cold dead hands off their archaic architecture, they'd probably make you pay full price for the 64-bit upgrade.

As for DAWs? Think about all the VST plugins out there, most of which probably aren't maintained and only exist as zip files in the artist's Dropbox.

Dropping 32-bit is aspirationally sound. It's also grossly inconsiderate of the most obvious aspects of reality.


I think that would be reasonable if we were talking about a tighter timescale, but we're talking about at least 13 years since it became 100% obvious that macOS would go 64-bit.


Yeah, but 64-bit usually doesn't give your users anything, so it's a tax that weighs hardest on smaller software shops. Consider too that lots of programs, games in particular, have a spike of sales when new, after which sales decline to nothing. There may never be new versions. So going back and updating them is pure loss for the developers.

This is one reason ecosystems like Java are so valuable! The 64-bit transition was so easy for it because of the common insistence on "pure Java" for portability. Combined with pointer compression, 64-bit was hardly noticed.


You're right, but here we are anyway. Reality is always absurd.


To make the counterpoint: if it's not forced, apparently no amount of time is enough to convince people to change the code.

Forcing it is perhaps a net good?


Because, among many other reasons, game developers work 60- and 80-hour weeks and have to deliver a project that takes 3 years in 6 months.


A lot of apps over the years have exploited the fact that the kernel used to own everything above the 2 GB boundary of the address space, so the high bit of any valid user-space pointer was always zero, and encoded data into it.
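A minimal sketch of that pattern, assuming the classic 32-bit Windows split where user addresses stay below 0x80000000 (macro and function names hypothetical):

    #include <stdint.h>

    #define TAG_BIT ((uintptr_t)1 << 31)  // always free in a 2 GB user space

    void *tag(void *p)      { return (void *)((uintptr_t)p | TAG_BIT); }
    int  is_tagged(void *p) { return ((uintptr_t)p & TAG_BIT) != 0; }
    void *untag(void *p)    { return (void *)((uintptr_t)p & ~TAG_BIT); }

Once real allocations can land above 2 GB (a 64-bit process, or a large-address-aware 32-bit one), is_tagged() fires on perfectly ordinary pointers and untag() corrupts them.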



