Well, Windows has existed on other platforms, especially in the server world. And for end users, you have things like Windows RT that are still supported.

https://en.wikipedia.org/wiki/Windows_NT#Supported_platforms




> things like Windows RT

But that didn't go too well, and the same reason it failed (binary compatibility) is why it's not going to be viable this time either.


> The same reason it didn't go too well (binary compatibility)

That's the legacy problem.


What makes you think this time it will work?


Windows RT failed rather because it was restricted to Modern UI (Metro) style applications that could only be installed from the Windows Store.


Tell that to Mac users, who went through two changes of CPU family and handled the binary compatibility just fine.


The serious problem with emulating x86/x86-64 is that it has a strong memory model, while most other platforms (ARM, PowerPC, Itanium, Alpha) have a weak memory model. Only SPARC (in TSO mode) and zSeries have a memory model as strong as x86/x86-64's.

See https://en.wikipedia.org/w/index.php?title=Memory_ordering&o... for details.
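
To make the difference concrete, here's a minimal sketch using C11 atomics (the variable names and values are invented for illustration, not taken from any real emulator). On x86's TSO model the producer's two plain stores are already store-store ordered and the consumer's loads are load-load ordered, so this classic message-passing pattern works with no explicit fences. An emulator translating that code one-for-one onto ARM's weaker model has to add the ordering back in, or the consumer can observe the flag set while still seeing a stale value of data:

    /* Sketch: the ordering an x86-to-ARM translator must restore. */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static atomic_int data;
    static atomic_int flag;

    static void *producer(void *arg) {
        (void)arg;
        atomic_store_explicit(&data, 42, memory_order_relaxed);
        /* On x86 a plain store is already store-store ordered; on ARM
         * the translator must emit a release store (or a barrier) like
         * this, or the two stores can become visible out of order. */
        atomic_store_explicit(&flag, 1, memory_order_release);
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        /* Likewise, x86 loads are implicitly load-load ordered; on ARM
         * an acquire load is needed so the read of data below can't be
         * hoisted above the flag check. */
        while (atomic_load_explicit(&flag, memory_order_acquire) == 0)
            ; /* spin */
        printf("data = %d\n", atomic_load_explicit(&data, memory_order_relaxed));
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

Emitting a barrier around every guest memory access is correct but slow, which is a big part of why emulating a strong-model guest on a weak-model host hurts.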


Exactly; binary compatibility is a hard issue, and it's especially non-viable on systems without awesome unified package managers à la Linux.

Case in point: despite its ARM processor, running conventional desktop programs on a Raspberry Pi is mostly fine (barring performance issues). Yes, every operating system's build stack can emit ARM binaries, but that's useless unless the developer supports them well (not gonna happen) or there is really nice automation behind it (like Debian's).

edit: unless you're trying to say that binary compatibility wasn't a pain in the butt. They solved it (IIRC; not a long-time OS X user) by bundling binaries for two architectures together for a while. Not viable if you have 10 different architectures.


Around OS X 10.4, there was a transition from the PowerPC to the x86 architecture. It was handled by bundling a JIT binary translator called Rosetta [0]. Translated programs ran at pretty much the same speed as on the original architecture.

They had already done something similar in their first arch transition, 68k => PowerPC [1].

Point is, binary compatibility is possible. Far from easy, sure, but it's been done before. The question is: have CPUs evolved so much that it has become impossible to translate from one arch to another?

[0] https://en.wikipedia.org/wiki/Rosetta_(software) [1] https://en.wikipedia.org/wiki/Mac_68k_emulator
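
For a feel of what "translating" means here, a toy sketch (nothing like Rosetta's actual implementation; the three-opcode guest ISA is entirely made up): walk the guest instruction stream and map each guest opcode onto equivalent host behavior. Real translators like Rosetta compile whole blocks to native host code and cache them rather than dispatching one instruction at a time like this:

    /* Toy dispatcher for a made-up guest ISA: opcode, dest, src/imm. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_LOADI = 0, OP_ADD = 1, OP_HALT = 2 };  /* hypothetical opcodes */

    int main(void) {
        /* Guest program: r0 = 2; r1 = 3; r0 = r0 + r1; halt. */
        uint8_t guest[] = {
            OP_LOADI, 0, 2,
            OP_LOADI, 1, 3,
            OP_ADD,   0, 1,
            OP_HALT,  0, 0,
        };
        int32_t regs[4] = {0};

        for (size_t pc = 0; ; pc += 3) {
            switch (guest[pc]) {
            case OP_LOADI: regs[guest[pc + 1]] = guest[pc + 2];     break;
            case OP_ADD:   regs[guest[pc + 1]] += regs[guest[pc + 2]]; break;
            case OP_HALT:  printf("r0 = %d\n", regs[0]);            return 0;
            }
        }
    }

The hard parts in practice aren't this loop; they're the guest's memory model, self-modifying code, and OS-level ABI details.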


> The question is: have CPUs evolved so much that it has become impossible to translate from one arch to another?

I'd say software has gotten a bit more complex since then.

Furthermore, that works if you have controlled hardware (which means easier testing and fewer edge cases to worry about) and a single transition to worry about (from A to B, not {A,B,C,D,E} -> {A,B,C,D,E}).

Can you imagine how insane it would be if Windows shipped a compatibility layer that translates x86 software to ARM, to RISC-V, to MIPS, to whatever? You'd need to test compatibility for not one but three architectures. No way people are gonna do that; the ROI is almost nonexistent.

So the only solution is to recompile, which is annoying if you don't have great software infrastructure for it.


You mean like it did with FX!32 back in 1996 to run x86 stuff on Alpha processors?

https://en.wikipedia.org/wiki/FX!32

It's not nearly as hard as you think to translate between ISAs. Some things won't translate directly, like, say, the matrix-multiply register in some MIPS supercomputers, but you can easily swap that out for a more mundane approach.
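
As a sketch of that "swap it out" point (the MIPS matrix-multiply hardware above is paraphrased; this 2x2 helper is hypothetical): where the guest ISA has one exotic fused instruction, the translator can simply emit a call to a mundane scalar routine instead of a single host instruction:

    /* Hypothetical stand-in for a one-instruction 2x2 matrix multiply. */
    #include <stdio.h>

    static void emulate_matmul2x2(const double a[2][2], const double b[2][2],
                                  double out[2][2]) {
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++) {
                out[i][j] = 0.0;
                for (int k = 0; k < 2; k++)
                    out[i][j] += a[i][k] * b[k][j];
            }
    }

    int main(void) {
        double a[2][2] = {{1, 2}, {3, 4}};
        double b[2][2] = {{5, 6}, {7, 8}};
        double c[2][2];
        emulate_matmul2x2(a, b, c);
        printf("%g %g / %g %g\n", c[0][0], c[0][1], c[1][0], c[1][1]);
        return 0;
    }

Slower than the real instruction, sure, but semantically equivalent, and that's all a compatibility layer needs.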



