
I don't think it's a good comparison. The architectures mentioned in the pyca/cryptography repo are:

- Alpha: introduced 1992, discontinued in the early 2000s

- HP/PA: introduced 1986, discontinued in 2008

- Itanium: introduced 2001, end of life 2021

- s390: introduced 1990, discontinued in ~1998

- m68k: introduced 1979, still in use in some embedded systems but not developed at Motorola since 1994.

ARM was once not as popular as it is nowadays, but it was never moribund and in my experience has always had decent tooling and compiler support. Furthermore, I'm sure that if HP/PA somehow makes a comeback tomorrow, LLVM will add support for it. Out of the list I'd argue that the only two that may be worth supporting are Motorola 68k and maybe Itanium, but even then it's ultra niche.

I personally maintain software that runs on old/niche DSPs and I like emulation, so I can definitely feel the pain of people who find new releases of software breaking on the niche architectures they use (I tried running Rust on MIPS-I, for instance, but couldn't get it to work properly because of the lack of support in LLVM). These architectures are dead or dying, not up-and-coming like, say, RISC-V, which has gained some momentum lately.

But while I sympathize with people who are concerned about this sort of breakage, it's simply not reasonable to expect these open source projects to maintain backward compatibility with CPUs that haven't been manufactured in decades. As TFA points out, it's a huge maintenance burden: you need to regression-test against architectures you may know nothing about, you may not have an easy way to fix the bugs that arise, etc.
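To make the maintenance-burden point concrete, here is a minimal Rust sketch (the architecture names are purely illustrative) of how per-target code paths typically look; every extra cfg branch is one more configuration somebody has to be able to build, run and regression-test:

    // Each cfg branch is a separate code path that is only exercised when
    // somebody actually builds and tests on that target.
    #[cfg(target_arch = "x86_64")]
    fn arch_name() -> &'static str {
        "x86_64"
    }

    #[cfg(target_arch = "riscv64")]
    fn arch_name() -> &'static str {
        "riscv64"
    }

    // Fallback for every other architecture -- the branch most likely to rot
    // unnoticed, since few maintainers ever compile it.
    #[cfg(not(any(target_arch = "x86_64", target_arch = "riscv64")))]
    fn arch_name() -> &'static str {
        "other"
    }

    fn main() {
        println!("built for {}", arch_name());
    }

Multiply that by intrinsics, atomics, endianness quirks and CI coverage, and each additional target gets expensive fast.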

> open source groups should not be unconditionally supporting the ecosystem for a large corporation’s hardware and/or platforms.

Preach. Intel is dropping Itanium, HP dropped HP/PA a long time ago. Why should volunteers be expected to provide support for free instead?

It's like users who complain that software drops support for Windows 7 when MS themselves don't support the OS anymore.



SystemZ, which is what s390/s390x became, seems relatively well supported by IBM and Red Hat in my experience.


See footnote 6 in the original article:

"That’s the original S/390, mind you, not the 64-bit “s390x” (also known as z/Architecture). Think about your own C projects for a minute: are you willing to bet that they perform correctly on a 31-bit architecture that even Linux doesn’t support anymore?"


Is it not reasonable to throw this back at the CPU makers? If you want to bring out a new cpu architecture, port all the compilers to it before you start selling it.


The problem is that chipmakers have historically made their development environments closed-source and, often, not very pleasant to work with. Maybe this is more of a problem with demonstration boards meant primarily for embedded systems people, but if you rely on TI, for example, to provide a compiler, they'll give you a closed-source IDE for their own C compiler which may or may not be especially standards-compliant.

I hesitate to imagine what it would take to get a hardware maker to contribute patches to LLVM.


And as we have seen, that is to the detriment of the chip maker. It’s said that Atmel chips became way more popular than PIC because of avrdude and cheap knock-off programmer boards on eBay. A modern-day architecture would be competing with these already established open source toolchains, so its maker would either remain obscure (like FPGAs are now), open source their stuff, or have to be on the scale of Apple or Microsoft, able to outcompete the open source stuff (though for what purpose?).


If new CPU makers are expected to update all existing compilers, wouldn't the counterpoint be that new compiler writers are expected to support all existing CPUs?


IMO it depends who stands to gain from it. If you make a new compiler, you need to make sure x86 and ARM work because that’s what most of your users will be using. There is almost no gain in adding support to some ancient cpu that no one uses anymore.

On the other side, if you make a new cpu architecture, all of your users (people buying the chip) will gain from porting compilers.

No one is expected to do anything (unless they are being paid). It’s just logical for people to work this way.


Sure, but that wouldn't have helped in this case, since 68K, Alpha, PA-RISC, and S/390 were no longer "existing" CPUs by the time Rust was invented.


FWIW, s390 wasn't really discontinued in 1998. There are still new s390 chips being designed and used.


s390 is the 31-bit-only variant, which has been discontinued for some time. Modern variants are 64-bit based and still supported.

All that being said, it's quite worthwhile to include these "dead" architectures in LLVM and Rust, if only for educational reasons. That need not imply the high level of support one would expect for, e.g. RISC-V.


Two architectures currently being added to LLVM are m68k and csky. I don't think either is that new (I thought csky was, but Linux kernel architecture folks explained to me that it has old roots from Motorola, with folks from Alibaba using it for 32-bit but moving to RISC-V for 64-bit).


Yes, csky is an MCore derivative. It's not entirely compatible, much like m68k and ColdFire.


Lots of 32-bit code still gets run on these machines.


Could you expand on that? Are you saying that s390x can run binaries compiled for s390 and that today binaries are being compiled to s390 for the purpose of being run on s390x?


Yes to both (at least for user-mode code, or "problem state" in IBM parlance; kernel and hypervisor code is 64-bit only on newer chips). There's something like a 30% average memory saving for 32-bit code, so if your program fits in 2GB it's a win on these massive machines that'll be running 1000s of VMs at close to 100% load. Nice for your caches too.
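Most of that saving is just pointer width, so it isn't really s390-specific; here's a minimal Rust sketch (the Node type is hypothetical, and any 32-bit vs 64-bit pair of targets shows the same effect):

    use std::mem::size_of;

    // A hypothetical pointer-heavy node, typical of the data structures that
    // dominate memory in long-running services.
    #[allow(dead_code)] // the fields exist only to be measured
    struct Node {
        next: Option<Box<Node>>, // pointer-sized, thanks to the null-pointer niche
        value: u32,
    }

    fn main() {
        // 8 bytes on a 64-bit target, 4 bytes on a 32-bit one.
        println!("pointer: {} bytes", size_of::<Option<Box<Node>>>());
        // The whole node roughly halves when pointers shrink.
        println!("node:    {} bytes", size_of::<Node>());
    }

On a typical 64-bit target that node is 16 bytes; built for a 32-bit target it drops to 8, and once you mix in non-pointer data that kind of shrinkage averages out to figures like the 30% quoted above.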



