The software on mainframes shines mainly in reliability and in the fact that the machines have been built for money transactions from the start. For example, doing "decimal" math (in the Python sense) is as inexpensive as doing float math, thanks to hardware decimal support.
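A quick illustration of why that matters, in plain Python (nothing mainframe-specific here; on a commodity CPU, Decimal runs in software, which is exactly the cost the hardware decimal units avoid):

    from decimal import Decimal

    # Binary floats cannot represent most decimal fractions exactly,
    # which is a real problem when the values are money.
    print(0.10 + 0.20)            # 0.30000000000000004
    print(0.10 + 0.20 == 0.30)    # False

    # Decimal arithmetic is exact for these values...
    print(Decimal("0.10") + Decimal("0.20"))                     # 0.30
    print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True

    # ...but on a typical x86 box it is emulated in software; on the
    # mainframe the same math is done in hardware, so it costs about
    # the same as float.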
The machines themselves are impressive, both hardware-wise and reliability-wise; for example, you can swap mainboards one by one in a full frame without ever taking the machine down (think RAID at the mainboard level, RAIMB?).
But the high start-up cost sends most startups down the other road. I am not convinced that scaling vertically is cheaper than scaling horizontally if you need the ACID guarantees... but it is hard to say.
The reason we old dogs say it is hard (not impossible) is the single-image and ACID requirements. There is no good way to get those guarantees in a distributed system (look up the CAP theorem).
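A toy sketch of the classic failure mode (two-phase commit in plain Python, names made up for illustration, not real database code):

    class Participant:
        def __init__(self, name):
            self.name = name
            self.state = "idle"

        def prepare(self):
            # Take locks, write the undo/redo log, promise to commit.
            self.state = "prepared"
            return True            # vote yes

        def commit(self):
            self.state = "committed"

    participants = [Participant("db-a"), Participant("db-b")]

    # Phase 1: collect votes. Everyone is now holding locks.
    all_yes = all(p.prepare() for p in participants)

    coordinator_alive = False      # simulate the coordinator dying here

    # Phase 2: only the coordinator knows the outcome. If it is gone,
    # a prepared participant can neither commit (a peer may have voted
    # no) nor abort (the coordinator may already have committed) -- it
    # just blocks, locks held. On one big box there is no phase 2 to
    # get stuck in.
    if all_yes and coordinator_alive:
        for p in participants:
            p.commit()

    print([(p.name, p.state) for p in participants])
    # -> [('db-a', 'prepared'), ('db-b', 'prepared')]  ... stuck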
So having one massive computer (with double-digit terabytes of memory AND cache, and truly massive I/O pipes) just makes building the need-to-work stuff simpler.
As an example, a few years ago I attended a mainframe conference (on my own money; I don't do mainframe work in my day job). At that time, the machine had more bandwidth to its roughly 200 PCIe adapters than a top-of-the-line Intel CPU had between the L1 cache and the compute cores. That meant that, given enough SSDs, you could move more data into the system from disk than you could move into an Intel CPU from cache...
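For the curious, the back-of-envelope comparison goes something like this. Every number below is a placeholder I picked for illustration, not a spec of any real machine; plug in the actual figures if you have them:

    adapters = 200              # PCIe adapters on the mainframe (from the talk)
    gb_s_per_adapter = 16.0     # assumed: PCIe 3.0 x16 per adapter
    mainframe_io = adapters * gb_s_per_adapter

    cores = 28                  # assumed: high-end Xeon of that era
    ghz = 2.5                   # assumed clock
    l1_bytes_per_cycle = 64     # assumed: two 32-byte loads per cycle
    cpu_l1 = cores * ghz * l1_bytes_per_cycle

    print(f"aggregate I/O into the mainframe: ~{mainframe_io:.0f} GB/s")
    print(f"aggregate L1->core bandwidth:     ~{cpu_l1:.0f} GB/s")
    # With these guessed numbers both land in the low terabytes per
    # second: disk fan-in on the big box competes with cache bandwidth
    # on the commodity part, which was the point being made.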
Also, two mainframes can run in lockstep (as long as they are less than 50 km apart). That means that if one of them dies during a transaction (which in itself is extremely rare), the other can complete it without the application being any the wiser... Try that in the cloud :)