I’ve read the other replies, but I remember well why it failed: single-precision floats only.
You can’t run physics simulations in single precision, so the people who would have bought it, tinkered, and pushed it forward were all turned off.
The military got double precision (the PowerXCell 8i variant that went into Roadrunner); without it there was no point in turning PS3s into a compute cluster.
I don’t know whether it was a marketing or a budget decision, but basically all the early adopters who would have championed doing impressive things with it (I was so hyped for the Cell architecture at the time) couldn’t do anything useful with it.
There's a lot of physics you can do in single precision. In fact, current-gen climate models mostly run in single precision, and some of the next-gen models are trying to do a lot of the work in half precision (float16).
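To make that concrete, here's a toy sketch (plain NumPy, with made-up sizes and values, not lifted from any real climate code) of how much the answer depends on the algorithm: naively accumulating a million float32 increments drifts away from a float64 reference, while Kahan compensated summation, still entirely in float32, stays close.

    import numpy as np

    rng = np.random.default_rng(0)
    # A million small time-step increments: the kind of long accumulation
    # where float32 rounding error piles up.
    increments = rng.random(1_000_000, dtype=np.float32) * np.float32(1e-4)

    ref = increments.astype(np.float64).sum()  # float64 reference answer

    naive = np.float32(0.0)
    for x in increments:            # naive float32 running sum
        naive += x

    total = np.float32(0.0)         # Kahan compensated summation, still float32
    comp = np.float32(0.0)
    for x in increments:
        y = x - comp                # re-inject the low-order bits lost last step
        t = total + y
        comp = (t - total) - y      # capture what the add just rounded away
        total = t

    print(f"float64 reference: {ref:.8f}")
    print(f"naive float32:     {naive:.8f} (error {abs(float(naive) - ref):.2e})")
    print(f"Kahan float32:     {total:.8f} (error {abs(float(total) - ref):.2e})")

The point being that whether single precision is "enough" depends as much on how the arithmetic is structured as on the physics.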
I’ll give you that it’s algorithm- and scale-dependent. I work with climate modellers, and I think they’d get more bang for their buck optimising for GPU architectures.
I recall being very excited at the prospect of getting my hands on a PS3 for modelling, but all my work (1) required at least double precision, and it left a lasting bad taste in my mouth when it wasn’t to be.