A possible reason not mentioned in the post is that writing efficient incremental algorithms is just fundamentally hard, despite the primitives and tooling the differential dataflow library affords. For example, even with the many machine learning libraries targeting Python, only a handful genuinely implement online algorithms rather than batch recomputation.
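To make "online" concrete (a toy sketch of my own, in Rust since that's the DD ecosystem's language, not anything from the post): Welford's algorithm maintains mean and variance with O(1) state and O(1) work per sample, and even this tiny kernel has a subtle step that a naive incremental rewrite of the batch formula gets wrong.

    // Welford's online mean/variance: O(1) state, O(1) update per sample.
    struct OnlineStats {
        n: u64,
        mean: f64,
        m2: f64, // running sum of squared deviations from the current mean
    }

    impl OnlineStats {
        fn new() -> Self {
            OnlineStats { n: 0, mean: 0.0, m2: 0.0 }
        }
        fn push(&mut self, x: f64) {
            self.n += 1;
            let delta = x - self.mean;
            self.mean += delta / self.n as f64;
            // Uses the *updated* mean here; this is the easy-to-botch step.
            self.m2 += delta * (x - self.mean);
        }
        fn sample_variance(&self) -> f64 {
            if self.n < 2 { 0.0 } else { self.m2 / (self.n - 1) as f64 }
        }
    }

    fn main() {
        let mut s = OnlineStats::new();
        for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0] {
            s.push(x); // single pass, no buffer of past samples
        }
        println!("mean = {}, variance = {}", s.mean, s.sample_variance());
    }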
Can confirm. I'm still unsure whether DDlog [0] can be switched to worst-case optimal joins (WCOJ) [1] using the recent (unreleased, but almost a year old) calculus operators of DDflow [2][3], because at least the original dogs^3 approach supposedly doesn't work in iterative contexts (which are necessary for recursive operations, like graph computations). The calculus blog post ends on a promising note, however.
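For anyone unfamiliar with what WCOJ buys you, here's a hand-rolled toy version of the generic-join idea on the triangle query (my own illustration; the triangles function is hypothetical, and this is plain static Rust, not the dogs^3 / DDflow operators; making it incremental and iterative is precisely the hard part above):

    use std::collections::{BTreeSet, HashMap};

    // Triangle query Q(a,b,c) <- E(a,b), E(b,c), E(a,c), evaluated
    // variable by variable: bind a, then b from E(a,_), then c from
    // the intersection of E(a,_) and E(b,_).
    fn triangles(edges: &[(u32, u32)]) -> Vec<(u32, u32, u32)> {
        // Index edges as sorted adjacency sets so intersection is cheap.
        let mut adj: HashMap<u32, BTreeSet<u32>> = HashMap::new();
        for &(u, v) in edges {
            adj.entry(u).or_default().insert(v);
        }
        let mut out = Vec::new();
        for (&a, a_nbrs) in &adj {
            for &b in a_nbrs {
                if let Some(b_nbrs) = adj.get(&b) {
                    // The WCOJ step: extend c by intersecting candidate
                    // sets instead of materializing the E(a,b) x E(b,c)
                    // intermediate, which is what a binary join plan does.
                    for &c in a_nbrs.intersection(b_nbrs) {
                        out.push((a, b, c));
                    }
                }
            }
        }
        out
    }

    fn main() {
        let edges = [(1, 2), (2, 3), (1, 3), (3, 4)];
        println!("{:?}", triangles(&edges)); // prints [(1, 2, 3)]
    }

The worst-case-optimality claim is about bounding work by the query's AGM bound rather than by intermediate result sizes; as I understand it, dogs^3 then turns each update into a batch of such index lookups against the other relations.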
I'm trying to help a couple of people (friends?) get the analysis of rslint [4] running well on DDlog, or at least on DDflow, with the end goal of a perceptually zero-latency linter that typically responds faster than a human types.
We're currently seeing initial (cold-start) delays in the single-digit-second range, and that's not even on large projects. The incremental performance is far better, but we'd like to out-compete the official TypeScript typechecker even in CI settings that don't keep the linter's state across runs.
The good news: we're making nice progress on profiling tools, and I might get to try some WCOJ code later today.