
It depends entirely on what you've got on the FPGA and what's slow-path. My firm had order entry done exclusively by the FPGA, and orders would sit armed in the FPGA for sometimes tens of seconds before they fired. The non-FPGA code was still C++, but the kind of everyday C++ where nobody has been wringing all the water out of it for speed gains. If we rewrote it all, it'd probably be Python. But that's just us. Every firm has its own architecture and spin on how this all works. I'm sure somebody out there has C++ code that still has to be fast to talk to an FPGA. We didn't - you were either FPGA-fast or we didn't care.

edit: also, fwiw, re "say about 100us, and absolutely never more than 1ms": things measured in that many microseconds were effectively not what my firm considered "low-latency". But like I said, there are lots of different speed games out there, so there's no one-size-fits-all here.



Your hardware is configured by your software. You pre-arm transactions to be released when a trigger occurs.

Releasing the transaction when the trigger occurs is the fast path, and what the hardware does. The slow path is what actually decides what should be armed and with what trigger.

If your software is not keeping up with the information coming in, either you accept that and risk triggering on stale data, or you automatically invalidate stale triggers and therefore never trigger.
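
A minimal C++ sketch of that slow-path / fast-path split. The Fpga struct and its arm/disarm calls are hypothetical stand-ins for a real vendor register interface, and the staleness bound is illustrative only:

    // Slow path in ordinary software: decide what to arm, with what trigger,
    // and invalidate it if the data behind it goes stale.
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <optional>

    using Clock = std::chrono::steady_clock;

    struct ArmedOrder {
        std::uint64_t     order_id;
        double            trigger_price;  // fast path fires when the market crosses this
        Clock::time_point armed_at;       // slow path uses this to detect staleness
    };

    struct Fpga {
        // Pretend these write to memory-mapped registers on the card (hypothetical API).
        void arm(const ArmedOrder& o) { armed_ = o; }
        void disarm()                 { armed_.reset(); }
        std::optional<ArmedOrder> armed_;
    };

    int main() {
        Fpga fpga;
        const auto max_age = std::chrono::milliseconds(50);  // assumed staleness bound

        // Slow path decides what to pre-arm; the hardware does the actual release.
        ArmedOrder order{1, 100.25, Clock::now()};
        fpga.arm(order);

        // Later: either refresh the trigger with fresh data, or invalidate it so the
        // fast path can never fire on stale information.
        if (Clock::now() - order.armed_at > max_age) {
            fpga.disarm();
            std::cout << "stale trigger invalidated\n";
        } else {
            std::cout << "trigger still live at " << order.trigger_price << "\n";
        }
    }

The point of the split is that nothing on this path needs to be fast; only the release itself, which the hardware handles, sits on the latency-critical path.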

100us is short enough to keep up with most markets, though obviously not fast enough to be competitive in the trigger-to-release loop.

But it's mostly an interesting magnitude to consider when comparing technologies: Python can't realistically operate at that scale, while in practice a well-designed C++ system has no trouble staying well below that figure.



