Hacker News

Copying virtual stacks on a context switch sounds kind of expensive. Are any performance numbers available? Maybe for very deep stacks there are optimizations whereby deeper frames are only copied in lazily, on the assumption they won't be needed yet? Also, what is the story with preemption: if a virtual thread spins in an infinite loop, will it effectively hog the carrier thread, or can it be descheduled? Finally, I would be really interested to see the impact on debuggability. I did some related work where we were trying to get the JVM to run on top of a library operating system and a libc that contained a user-level threading library. Debugging anything concurrency-related became a complete nightmare, since all the gdb tooling only really understood the underlying carrier threads.

Having said all that, this sounds super cool and I think it is 100% the way to go for Java. It would be interesting to revisit the implementation of something like Akka in light of this.



Yes, lazy copying tricks are employed, and some work around stack frames is delayed until the stack is moved by the GC, on the assumption that most frames will not live that long.

There was a lot of work done on debugging so standard Java debuggers work well.


That's pretty cool! What about the blocking issue? Presumably also if you are using JNI all bets are off?


If you are using JNI then that puts a native frame on the stack and pins the virtual thread. Loom’s strategy works because we know what can be on the Java stack, and that nothing external points into it, but we don’t know that for native frames.
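JNI pinning is hard to show in a self-contained snippet, but here is a hedged sketch (assuming JDK 21) of observing pinning from the Java side: on JDK 21 a `synchronized` block also pins a virtual thread to its carrier, and the diagnostic flag `-Djdk.tracePinnedThreads=full` prints a stack trace whenever a pinned virtual thread tries to park. The class name and lock are made up for illustration.

```java
// Run with: java -Djdk.tracePinnedThreads=full PinningDemo
// (JDK 21 diagnostic flag; later JDKs report pinning via a JFR event instead.)
public class PinningDemo {
    static final Object LOCK = new Object();

    public static void main(String[] args) throws Exception {
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (LOCK) {      // holding a monitor pins the carrier on JDK 21,
                try {                  // much like a native frame would
                    Thread.sleep(100); // parking while pinned is what the flag reports
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        vt.join();
        System.out.println(vt.isVirtual()); // prints true
    }
}
```

Without the flag the program just runs to completion; the pinning is invisible except that the carrier thread is blocked for the duration of the sleep.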


> Also, what is the story with preemption - if a virtual thread spins in an infinite loop, will it effectively hog the carrier thread or can it be descheduled?

As I understood it, there's no preemption. You're not supposed to busy-wait on virtual threads (and better not to do it at all; use wait/notify, a barrier, or whatever). Virtual threads are for I/O-bound tasks. For CPU-bound tasks you'd want an OS thread per CPU core anyway.
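The I/O-bound use case can be sketched with the JDK 21 API. This is a minimal illustration (class name and task body are made up): each task gets its own virtual thread, and the blocking sleep stands in for blocking I/O. Because a blocked virtual thread parks and frees its carrier, thousands of tasks need only a handful of OS threads.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    static int runTasks(int n) {
        AtomicInteger done = new AtomicInteger();
        // One virtual thread per submitted task.
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                exec.submit(() -> {
                    try {
                        // Blocking call: parks the virtual thread, not the carrier.
                        Thread.sleep(Duration.ofMillis(10));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000)); // prints 10000
    }
}
```

Note there is no pool sizing here at all; a virtual thread per task is the intended model, whereas the same code with a platform-thread pool would need careful tuning.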


They did implement preemption at some point, and parts of that are still in the code. It's intended to let you manage load more easily, e.g. by descheduling complicated, low-priority batch jobs and re-scheduling them when the service is under lighter load. But it won't ship as public API as part of Loom, and maybe it never will (generally, once OpenJDK finishes a project it gets de-staffed, and they don't go back to it).


Stacks in idiomatic Java aren't usually that deep (pointers instead of in-place values), so this isn't that big of a deal, unlike, say, in C++.



