You know, if I were flame-baiting, I would go ahead and say "there goes the standard 'performance is more important than actually shipping' comment". I won't, and I will address your notes even though they are unsubstantiated.
> Effective use of IO at such scale implies high-quality DB driver accompanied by performant concurrent runtime that can multiplex many outstanding IO requests over few threads in parallel. This is significantly influenced by the language of choice and particular patterns it encourages with its libraries.
In my experience, the bottleneck is mostly on the 'far side' of the IO from the app's PoV, i.e. in the database and the network round-trip, not in the driver or runtime doing the multiplexing.
> I can assure you - databases like MySQL are plenty fast and e.g. single-row queries are more than likely to be bottlenecked on Ruby's end.
I can assure you, Ruby apps have no issues whatsoever with single-row queries. Even if they did, the speed-up from rewriting in a faster language would be at most a constant factor.
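To make the "constant factor" point concrete, here is a minimal sketch. Everything in it is assumed for illustration: a local MySQL instance, the mysql2 gem, and an indexed `users` table; none of this comes from the thread.

```ruby
# Times 1_000 single-row primary-key lookups. Nearly all of the elapsed
# time is the network round-trip plus the server's work; the Ruby-side
# overhead per call is a small constant.
require "benchmark"
require "mysql2" # assumed driver; any DB client shows the same shape

client = Mysql2::Client.new(host: "localhost", username: "app", database: "app_db")

elapsed = Benchmark.realtime do
  1_000.times do |i|
    client.query("SELECT * FROM users WHERE id = #{i + 1} LIMIT 1")
  end
end

puts format("%.2f ms per query", elapsed * 1000 / 1_000)
# Rewriting this loop in a faster language shaves only the constant
# per-call overhead; the round-trip cost stays exactly where it is.
```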
> Inefficient data transformations with high amount of transient allocations will run at least 10 times faster in many of the Ruby's alternatives. Good ORM implementations will also be able to optimize the queries or their API is likely to encourage more performance-friendly choices.
Or it could be O(n) times faster (say, by replacing an O(n^2) algorithm with an O(n) one) if you actually stop writing shit code in the first place.
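For illustration (hypothetical data, not from the thread), here is what that asymptotic difference looks like in plain Ruby:

```ruby
# Cross-referencing two datasets with nested scans is O(n * m);
# indexing one side in a Hash first makes it O(n + m).
# No language change required for an n-fold speedup.
orders = Array.new(5_000) { |i| { id: i, user_id: i % 1_000 } }
users  = Array.new(1_000) { |i| { id: i, name: "user#{i}" } }

# Quadratic: scans all users for every order.
slow = orders.map { |o| [o, users.find { |u| u[:id] == o[:user_id] }] }

# Linear: one pass to build the index, one pass to join.
by_id = users.each_with_object({}) { |u, h| h[u[:id]] = u }
fast  = orders.map { |o| [o, by_id[o[:user_id]]] }
```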
Good ORMs do not magically fix shit algorithms or DB schema design. Rails' ORM does in fact point out common mistakes like trivial N+1 queries. What it does not do is ask you: "Are you sure you want me to execute this query that seq-scans the ever-growing-but-currently-20-million-row table to return 5,000 records as part of your artisanal, hand-crafted N+1 masterpiece (of shit), which you will then manually cross-reference, transform, and finally serialise as JSON, just so you can go ahead and blame the JSON lib (which is in C, by the way) for the slowness?"
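A hedged sketch of that exact failure mode and the one-line fix, assuming hypothetical `Post`/`Comment` ActiveRecord models:

```ruby
# Classic N+1: the first version issues 1 + N queries, the second
# eager-loads in 2. On Rails 6.1+ strict_loading will raise
# ActiveRecord::StrictLoadingViolationError on the lazy access,
# surfacing the mistake instead of silently executing it.
posts = Post.limit(50)
posts.each { |p| p.comments.size }          # N+1: one COUNT per post

posts = Post.includes(:comments).limit(50)  # eager load: 2 queries total
posts.each { |p| p.comments.size }

Post.strict_loading.limit(50).each { |p| p.comments.to_a } # raises
```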
> Many testimonies on Rust do just that. A lot of it comes down to particular choices Rust forces you to make. There is no free lunch or magic bullet, but this also replicates to languages which offer more productivity by means of less decision fatigue heavy defaults that might not be as performant in that particular scenario, but at the same time don't sacrifice it drastically either.
I am by no means going to dunk on Rust the way you do on Ruby, as I have only toyed with it; however, I doubt that I could currently make the performance/productivity trade-off come out in Rust's favour for any new, non-trivial web application.
To summarise, my points were: whatever language you write in, if you do IO you will sooner or later be bottlenecked by IO, and that is the best case. The realistic case is that you will never scale enough for any of this to matter. And even if you do, you will be bottlenecked by your own shit code and/or shit architectural decisions long before IO; both of those are language-agnostic as well.