
I can't explain it completely, but check out http://blog.tsunanet.net/2010/11/how-long-does-it-take-to-ma....

Relatively speaking, I don't think context switching is _that_ expensive compared to other areas you can focus on, like smarter memory management.
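
If you want a feel for the numbers yourself, here's a minimal sketch (not the linked post's exact methodology; the thread setup, pipe pair, and iteration count are just my choices): two threads ping-pong a byte over a pair of pipes, and total wall time divided by the number of hops gives a rough per-switch figure. It bundles syscall overhead in with the switch cost, so treat it as an upper bound.

    /* gcc -O2 -pthread switchcost.c && taskset -c 0 ./a.out */
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define ITERS 100000

    static int ping[2], pong[2];

    static void *worker(void *arg)
    {
        (void)arg;
        char c;
        for (int i = 0; i < ITERS; i++) {
            read(ping[0], &c, 1);   /* wait for the main thread */
            write(pong[1], &c, 1);  /* wake it back up          */
        }
        return NULL;
    }

    int main(void)
    {
        pipe(ping);
        pipe(pong);
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);

        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        char c = 'x';
        for (int i = 0; i < ITERS; i++) {
            write(ping[1], &c, 1);
            read(pong[0], &c, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);
        pthread_join(t, NULL);

        double ns = (end.tv_sec - start.tv_sec) * 1e9
                  + (end.tv_nsec - start.tv_nsec);
        /* Each round trip forces roughly two switches (there and back). */
        printf("~%.0f ns per switch (incl. syscall overhead)\n",
               ns / (2.0 * ITERS));
        return 0;
    }

Pin everything to one core (the taskset above) or the two threads may just sit on separate cores and you end up measuring pipe latency rather than actual switches.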

I don't know exactly what they mean by threads having heavy memory overhead, but possibly they mean the cache interference mentioned in that link? I'd be curious whether there's an actual large chunk of memory beyond the stack and scheduling bookkeeping in play here too.
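
The stack reservation is usually the big line item. A quick sketch of how to check the default on your box (this assumes glibc behavior, where a freshly initialized attr reports the default size; other libcs may differ):

    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_attr_t attr;
        size_t stack_size = 0;

        pthread_attr_init(&attr);
        /* On glibc this is the default reservation for new threads,
           commonly 8 MiB. It's virtual address space, only faulted in
           as the thread actually touches it, which is why per-thread
           memory numbers can look scarier than they are. */
        pthread_attr_getstacksize(&attr, &stack_size);
        printf("default thread stack size: %zu bytes\n", stack_size);

        pthread_attr_destroy(&attr);
        return 0;
    }

If you're spawning thousands of threads you can shrink that with pthread_attr_setstacksize, which is one reason the "threads are heavy" claim depends a lot on configuration.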

On modern cores there's also a decent chunk of register state, including floating-point registers. I've looked into timings before for embedded applications, and the performance hit isn't trivial when you're dealing with interrupts and the like, but I'd be surprised if it were that overwhelming on servers.
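
For a ballpark of how much architectural state that is, you can look at the size of a userspace register snapshot (a rough proxy only; the kernel's own save area differs, and XSAVE state for wide vector registers can be considerably larger):

    #include <stdio.h>
    #include <ucontext.h>

    int main(void)
    {
        /* ucontext_t holds general-purpose registers, FP/SSE state,
           and the signal mask, i.e. roughly what a switch has to
           shuffle around. */
        printf("sizeof(ucontext_t) = %zu bytes\n", sizeof(ucontext_t));
        return 0;
    }

On x86-64 that lands in the high hundreds of bytes, which is real work per switch but tiny next to the cache and TLB effects the linked post measures.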


