Financial fraud detection bottlenecks are typically between RAM and the processor. Thousands of concurrently arriving transactions have to be examined together, because they correlate heavily.
It is not like "here is one transaction, is it fraud?" but "here are 2^20 transactions, which of them are fraudulent?".
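Just to make the batch-vs-single point concrete, here is a toy sketch (my own illustration, not any bank's actual system) of scoring 2^20 transactions in one vectorized pass; the feature count, random model weights, and alert threshold are all made up, and the correlation modeling is omitted. The point is that one pass streams the whole feature matrix through RAM, which is exactly where the bottleneck lands:

    # Toy batch-scoring sketch: 2^20 transactions scored at once.
    # The dominant cost is streaming ~128 MiB of features through RAM.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 2**20, 32                       # hypothetical: ~1M transactions, 32 features
    X = rng.standard_normal((n, d), dtype=np.float32)
    w = rng.standard_normal(d).astype(np.float32)   # hypothetical model weights

    scores = X @ w                          # one vectorized pass over the batch
    flagged = np.flatnonzero(scores > 3.0)  # hypothetical alert threshold
    print(f"{flagged.size} of {n} transactions flagged")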
You could do this by pipelining, but I suspect banks want a zero-downtime system, and I personally would not trust an external API in terms of reliability.
Another point is that banks will not give you the original data. They will have to pseudonymize several fields, such as credit card numbers, names, ...
This would force them to preprocess the data, which adds a small constant cost to every transaction (O(n) in total) and might slow things down even more.
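For a sense of what that preprocessing might look like, here is a minimal pseudonymization sketch; this is my guess at the kind of step meant above, not a bank's actual scheme, and the key handling, record layout, and field names are all invented. Each record costs a constant amount of work, so n records cost O(n):

    # Minimal pseudonymization sketch (hypothetical, not a real bank scheme).
    import hmac, hashlib

    SECRET_KEY = b"hypothetical-rotating-key"  # a bank would manage this properly

    def pseudonymize(value: str) -> str:
        # Keyed hash: stable within a dataset (joins still work),
        # but not reversible without the key.
        return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    record = {"card": "4111111111111111", "name": "Jane Doe", "amount": 42.17}
    safe = {k: pseudonymize(v) if k in ("card", "name") else v
            for k, v in record.items()}
    print(safe)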
(I'm not saying it's technically impossible, but I'd say there are better options, such as releasing it closed source or just using it to predict financial data, which, as we all know, is possible and is already being done by hedge funds, so that should be the best route IF you have that algorithm ;)
I've taken classes from people who worked on fraud detection for banks (Fair Isaac), and they were working on legacy hardware (ancient mainframes) with absurdly limited floating-point precision.
Performance is of the essence in these situations; any clever trick you can think of to speed things up should be used (but keep it fairly simple: lookup tables and so forth, for example).
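In the spirit of that advice, here is a lookup-table sketch: precompute an expensive function once, then score with cheap array indexing. The function choice (a sigmoid), grid size, and clipping range are arbitrary assumptions of mine, just there to show the technique:

    # Lookup-table sketch: trade a little precision for a lot of speed.
    import numpy as np

    LO, HI, STEPS = -8.0, 8.0, 4096
    _grid = np.linspace(LO, HI, STEPS, dtype=np.float32)
    _sigmoid_lut = 1.0 / (1.0 + np.exp(-_grid))   # built once, reused forever

    def fast_sigmoid(x: np.ndarray) -> np.ndarray:
        # Clip to the table's range, map each value to an index, look it up.
        idx = np.clip((x - LO) * (STEPS - 1) / (HI - LO), 0, STEPS - 1)
        return _sigmoid_lut[idx.astype(np.intp)]

    x = np.random.default_rng(1).standard_normal(1_000_000).astype(np.float32)
    approx = fast_sigmoid(x)
    exact = 1.0 / (1.0 + np.exp(-x))
    print("max abs error:", np.abs(approx - exact).max())

With 4096 steps the approximation error stays around 1e-3, which is plenty for a flag/no-flag threshold, and the indexing is far cheaper than evaluating exp per element on constrained hardware.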
Oh yes, they do. That I do know.