The honest answer is: we don't know yet, but if it doesn't happen in 1.0, it will be the priority for 1.1.
I'm going to do some experiments in the next few days and see how it goes.
Roughly, the way we're thinking of adding Windows support to Bun is:
1) Get all the Zig code that uses platform-specific system APIs to use the closest Windows equivalent API (see the sketch after this list). Fortunately, we already have lots of code for handling UTF-16 strings (since that's what JS uses in some cases)
2) Get uSockets/uWebSockets (the C/C++ libraries we use for TCP & HTTP serving) to compile for Windows, or fall back to using libuv if it takes too long to make them work
3) Get the rest of the dependencies to compile on Windows
4) Fix bugs and perf issues
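For a concrete feel of what step 1 means in practice, here's a minimal C sketch (the helper name is hypothetical, not Bun's code) of mapping a POSIX-style open() onto the closest Windows equivalent, CreateFileW, which means converting the UTF-8 path to UTF-16 first:

    #include <windows.h>

    /* Hypothetical helper: open a file for reading given a UTF-8 path. */
    HANDLE open_for_reading_utf8(const char *utf8_path) {
        /* Ask how many UTF-16 code units are needed, including the terminator. */
        int len = MultiByteToWideChar(CP_UTF8, 0, utf8_path, -1, NULL, 0);
        if (len <= 0) return INVALID_HANDLE_VALUE;

        WCHAR *wide = (WCHAR *)HeapAlloc(GetProcessHeap(), 0, len * sizeof(WCHAR));
        if (!wide) return INVALID_HANDLE_VALUE;
        MultiByteToWideChar(CP_UTF8, 0, utf8_path, -1, wide, len);

        /* CreateFileW is the closest equivalent of open(path, O_RDONLY). */
        HANDLE h = CreateFileW(wide, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        HeapFree(GetProcessHeap(), 0, wide);
        return h;
    }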
There are a lot of open questions though. None of us are super familiar with I/O on Windows. JavaScriptCore doesn't have WebAssembly enabled on Windows yet.
The biggest thing I'm worried about (other than time) re: Windows is async I/O. In Bun, we _mostly_ use synchronous I/O. Synchronous I/O is simpler and, when using SSDs, often has meaningfully lower overhead than the work necessary to make it async. I've heard that anti-virus software will often block I/O for potentially seconds, which would make using sync I/O at all a super bad idea for performance on Windows. If that is true, then making it fast will be difficult in cases where we need to do lots of filesystem lookups (like module resolution)
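To make the "lots of filesystem lookups" point concrete, here's a rough C sketch (not Bun's resolver; the candidate paths are illustrative) of why module resolution multiplies whatever per-lookup stall AV introduces:

    #include <windows.h>
    #include <wchar.h>

    /* One synchronous metadata lookup; if AV or a filter driver stalls it,
       the resolver thread stalls with it. */
    static BOOL exists(const WCHAR *path) {
        return GetFileAttributesW(path) != INVALID_FILE_ATTRIBUTES;
    }

    int main(void) {
        /* Candidate order loosely follows Node-style resolution; a single
           import can easily cost several probes like this. */
        const WCHAR *candidates[] = {
            L"node_modules\\lodash\\package.json",
            L"node_modules\\lodash\\index.js",
            L"node_modules\\lodash\\index.json",
        };
        for (int i = 0; i < 3; i++) {
            if (exists(candidates[i])) {
                wprintf(L"resolved: %ls\n", candidates[i]);
                break;
            }
        }
        return 0;
    }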
On Windows you may consider using higher-level I/O routines. For example, for HTTP requests you can use WinHTTP, which is super fast and scalable. For other I/O you can use the Windows Thread Pool API (https://learn.microsoft.com/en-us/windows/win32/procthread/t...) so that you do not need to manually manage threads or register/unregister I/O handlers/callbacks. gRPC uses that.
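For reference, the Thread Pool API pattern is roughly this (a minimal sketch; the work callback is a placeholder and error handling is trimmed):

    #include <windows.h>
    #include <stdio.h>

    /* Runs on a pool thread, so blocking here does not stall the submitter. */
    static VOID CALLBACK do_blocking_io(PTP_CALLBACK_INSTANCE instance,
                                        PVOID context, PTP_WORK work) {
        (void)instance; (void)work;
        printf("reading %s on a pool thread\n", (const char *)context);
    }

    int main(void) {
        PTP_WORK work = CreateThreadpoolWork(do_blocking_io,
                                             (PVOID)"package.json", NULL);
        if (!work) return 1;
        SubmitThreadpoolWork(work);                  /* queue the work item */
        WaitForThreadpoolWorkCallbacks(work, FALSE); /* wait, don't cancel  */
        CloseThreadpoolWork(work);
        return 0;
    }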
Though Windows I/O is internally all async, that actually makes using sync I/O easier, and you don't need to treat it as a super bad idea. Windows has IOCP. If the machine has n logical CPUs, you can create a thread pool with 2*n threads, and by default the operating system will not make more than n threads active at the same time. When one of the threads is doing blocking I/O and enters an I/O wait state, the OS will wake up another thread and let it go. This is why the number of threads in the thread pool needs to be larger than the number of CPUs. This design doesn't lead to an optimal solution, but in practice it works very well. In this setting you still have the flexibility to use async I/O, but it is not a sin to use sync I/O in a blocking manner in a thread pool.
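Roughly, the pattern described above looks like this (a sketch under the 2*n assumption stated here, not production code):

    #include <windows.h>

    static DWORD WINAPI worker(LPVOID param) {
        HANDLE port = (HANDLE)param;
        DWORD bytes; ULONG_PTR key; LPOVERLAPPED ov;
        for (;;) {
            /* Blocks until a completion packet arrives; the kernel keeps at
               most `concurrency` of these threads runnable at once. */
            if (!GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE))
                continue;
            if (key == 0) break;  /* illustrative shutdown sentinel */
            /* ... handle the completed I/O (or do blocking work) for `key` ... */
        }
        return 0;
    }

    int main(void) {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        DWORD n = si.dwNumberOfProcessors;

        /* Concurrency limit = n, but start 2*n threads so the OS can wake a
           spare when one of them blocks on synchronous I/O. */
        HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, n);
        for (DWORD i = 0; i < 2 * n; i++)
            CreateThread(NULL, 0, worker, port, 0, NULL);

        /* ... associate file/socket handles with `port` and post work ... */
        return 0;
    }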
Disclaimer: I work at Microsoft and ship code to Windows, but the above are just my personal opinions.
> JavaScriptCore doesn't have WebAssembly enabled on Windows yet.
I got JavaScriptCore compiling with WebAssembly enabled yesterday, but I don't know how long it'll take to get it to actually work.
The bigger problem for Bun is that JavaScriptCore doesn't have the FTL JIT enabled on Windows [1]. Without that final tier of JIT it's going to be much slower than on other platforms, which shows up pretty dramatically on benchmarks.
Sync I/O is probably fine on Windows with the exception of CloseHandle, where Windows Defender or other AV will invoke a file filter in the kernel's file I/O filter stack to scan data recently written to the file. A common approach used in Rust, version control software, and other runtimes is to defer file closing to a different thread to keep other I/O and user-facing threads responsive. All that said, I think IOCP on Win32 is a far superior asynchronous programming model to the equivalent APIs on Linux, which feel far less usable (with more footguns).
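The "defer the close" trick can be as small as this sketch (using the simple thread-pool submit; the synchronous fallback on submission failure is my own assumption, not something any particular runtime necessarily does):

    #include <windows.h>

    static VOID CALLBACK close_it(PTP_CALLBACK_INSTANCE instance, PVOID ctx) {
        (void)instance;
        CloseHandle((HANDLE)ctx);  /* may stall for an AV scan, but only here */
    }

    /* Call this instead of CloseHandle on user-facing / I/O threads. */
    static void close_handle_deferred(HANDLE h) {
        /* If submission fails, fall back to a plain synchronous close
           rather than leaking the handle. */
        if (!TrySubmitThreadpoolCallback(close_it, h, NULL))
            CloseHandle(h);
    }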
This definitely also used to be true on macOS. Bun previously would just request the max ulimit for file descriptors and then not close them. Most tools don't realize there are hard and soft limits to file descriptors, and the hard limit is usually much higher.
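For reference, "request the max ulimit" is just the getrlimit/setrlimit dance over the soft and hard limits (a sketch, not Bun's actual code; on macOS the kernel may cap the effective value further):

    #include <sys/resource.h>
    #include <stdio.h>

    int main(void) {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) return 1;

        /* The soft limit is what the process is held to; the hard limit is
           the ceiling an unprivileged process may raise the soft limit to. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0) return 1;

        printf("soft fd limit raised to %llu\n",
               (unsigned long long)rl.rlim_cur);
        return 0;
    }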
On Linux, not closing file descriptors makes opening new ones on multiple threads occasionally lock for 30ms or more. Early versions of `bun build` were something like 5x slower on Linux compared to macOS until we narrowed down that the bug was caused by not closing file descriptors.