> In VALORANT’s case, .5ms is a meaningful chunk of our 2.34ms budget. You could process nearly a 1/4th of a frame in that time! There’s 0% chance that any of the game server’s memory is still going to be hot in cache.
If that's the case, it feels like a less-than-ideal architectural choice!?
Sounds like each game server is independent. I wonder if anyone does more shared-state multi-hosting? Warm up a service process, then fork it as needed, so there's some shared i-cache? Keep things like levels and hitboxes in an immutable memfd, shared with each service instance, so that the d-cache can maybe be shared across instances?
With Spectre/Meltdown et al, a context switch probably has to totally burn down the caches nowadays? So maybe this wouldn't be enough to keep data hot; you might need a multi-threaded rather than multi-process architecture to see shared-caching wins. Obviously I don't know, but it feels like caches are shorter-lived than they used to be!
I remember being super hopeful that something like Google Stadia could open up interesting game-architecture wins by rendering multiple clients cooperatively rather than as individual client processes. AFAIK nothing like that ever emerged, but it feels like there are some cool architectural wins out there and possible.
It does sound like each server is its own process. I think you're correct that it would be a little faster if all games shared a single process. That said, if one crashed it would bring the rest down.
This is one of those things that might take weeks just to _test_. Personally I suspect the speedup from merging them would be pretty minor, so I think they've made the right call just keeping them separate.
I've found context switching to be surprisingly cheap when you only have a few hundred threads. But ultimately, there's no way to know for sure without testing it. A lot of optimization is just vibes and hypotheses.
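Since the thread keeps coming back to "no way to know without testing," here's a rough sketch of how you might measure it: two processes ping-pong one byte over a pair of pipes, forcing at least two context switches per round trip. This is an assumption-laden micro-benchmark, not a verdict; results swing with kernel version, CPU, and mitigation settings.

```python
import os
import time

ROUNDS = 10_000  # arbitrary; enough round trips to average out noise

r1, w1 = os.pipe()  # parent -> child
r2, w2 = os.pipe()  # child -> parent

pid = os.fork()
if pid == 0:
    # Child: echo each byte back, blocking on the pipe each time.
    for _ in range(ROUNDS):
        os.read(r1, 1)
        os.write(w2, b"x")
    os._exit(0)

start = time.perf_counter()
for _ in range(ROUNDS):
    os.write(w1, b"x")
    os.read(r2, 1)  # blocks until the child has run, forcing a switch
elapsed = time.perf_counter() - start
os.waitpid(pid, 0)

# Each round trip costs at least two context switches plus pipe overhead.
per_round_us = elapsed / ROUNDS * 1e6
print(f"~{per_round_us:.1f} us per ping-pong round trip")
```

Note this measures switch-plus-syscall cost, not the indirect cost of a cold cache afterwards, which is the harder thing to pin down and the part that actually matters for the "keep data hot" question above.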