How does this stack up against the Titan X for ML applications? Is the extra 1 GB of RAM critical, or will this be a comparable (or even better) offering?
The difference between 11 GB and 12 GB isn't meaningful for most ML users.
Yes, there are problems where squeezing in a slightly larger model than 11 GB allows would give slightly better results; but problems of that size also generally require a lot of computing power, and at that point you're not comparing specs of a single card but benchmarks and scaling for clusters of multi-GPU machines.
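To put a rough number on how narrow that band is, here's a back-of-the-envelope sketch (the function name and the accounting are my own illustration, and it ignores activations and framework overhead, which vary a lot with architecture and batch size). It uses the standard fp32 training footprint: weights + gradients + two Adam optimizer moments, i.e. about 4 copies of the parameters:

```python
def training_memory_gb(num_params, bytes_per_param=4, state_copies=4):
    """Rough GB (decimal) for fp32 training state with Adam.

    state_copies=4 counts weights + gradients + Adam's two moment
    buffers. Activations and framework overhead are NOT included,
    so real usage will be higher.
    """
    return num_params * bytes_per_param * state_copies / 1e9

# A hypothetical ~700M-parameter model needs ~11.2 GB for parameter
# state alone -- just over an 11 GB card, just under a 12 GB one.
# Only models landing in that ~1 GB sliver see any benefit from the
# extra memory; everything smaller fits on both, everything larger
# fits on neither.
print(training_memory_gb(700e6))  # ~11.2
```

Models that actually sit in that sliver are rare, and in practice you'd reach for a smaller batch size or gradient checkpointing long before buying a different card over 1 GB.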