Using Moonscript (Lua's CoffeeScript), the performance benefit of LuaJIT compared to all the JavaScript implementations makes it really interesting.
Well, actually Lua itself doesn't look bad either. It does need syntactic sugar for OO, but there's enough of that available: not just Moonscript, but also Loose (Perl's Moose ported to Lua).
This is quite similar to the boost.asio library, which provides sockets and timers. I know for a fact that boost.asio is thread safe: you can run several threads against the same I/O service, multiplexing n sockets onto m threads. Does anyone know whether libuv (and node.native) are thread safe?
This is a little off topic, but I thought I'd ask the C++11 aficionados viewing this thread. For those who've taken the plunge and really explored C++11:
1) What are the best resources to learn how to code C++11 in a Pythonic style? Looking for a book or production source code example that uses the new language features idiomatically: lambdas, vector and dictionary literals, and the like.
2) What does your toolchain look like for C++11? Just the latest gcc and latest emacs, plus maybe a few tools like makeheaders[1] and gtags[2], or something more?
> 1) What are the best resources to learn how to code C++11 in a Pythonic style? Looking for a book or production source code example that uses the new language features idiomatically: lambdas, vector and dictionary literals, and the like.
I don't think you can find good books or comprehensive resources at this time. There are various articles and blog posts detailing the C++11 additions to the language. Of course there's the spec, but it makes for very boring reading.
Not a whole lot has changed so you can pretty much jump right in and use the new features as you need them.
std::thread and std::chrono are pretty sweet.
> 2) What does your toolchain look like for C++11? Just the latest gcc and latest emacs, plus maybe a few tools like makeheaders[1] and gtags[2], or something more?
I used to build gcc from git sources, but these days my operating system's package manager ships a gcc version that has most of the C++11 stuff I use. Just add -std=gnu++0x to your CXXFLAGS.
A cool hack, but I can't help feeling this is pointless in practice.
It solves the problem of using Node userland code to do heavy computation. But
a) serious developers know better than to do heavy computation inside of Node anyway, and
b) it comes at an enormous, almost unforgivable cost: working with C++ is a nightmare compared to working with Node when it comes to real-time web projects (I've been in those trenches).
What would be exponentially more useful in practice is a concise, well-documented way to hook (multithreaded) C++ code into Node's engine.
But that wouldn't be nearly as sexy as the promise of Node with native performance.
Without commenting on Node.native, I'd just like to note that C++11 isn't your grandparents' C++ anymore. An experienced programmer can probably write code in C++11 that is almost as elegant as, say, the equivalent C# code. And for larger projects many people would prefer something like C# to JavaScript, if only for the static typing.
I've actually been using C++11 for two years now (experimental stuff in GCC), and yes I agree: C++ is loads better than it used to be, and it's a pleasure to work with.
That still doesn't mean it's a good idea to write a full web app in a C++ event loop while doing your own memory management. I've tried this (in C++11 no less), and it took nearly 30K LOC for me to admit defeat. I wish I were joking.
The problem isn't writing the code; the problem is the build environment and deployment: correct versions of compilers, headers, libraries, makefiles, CMakeLists, wafs, SConstructs, bjams, whatever...
No, it's not. I've been programming primarily in C++ for a few years (as a hobbyist since about 2002, and on and off professionally over the last 4 years) and this has never been an issue for me.
I used to feel that way about Make until I tried Rake. I'll never go back now; I think build scripts need a general-purpose language often enough that it really pays to have them as an embedded DSL in a scripting language.
sudo apt-get'ing a few packages is great when it works, but if you want or need a newer Boost version, you're on your own. Makefiles are tedious to write and maintain in anything but a toy application. That's not to say everything else isn't, but most of the time you can save yourself some hassle.
> I would like to understand the implications. Does this improve performance? Or does it just allow coding in C++.
This is practically a C++ wrapper for libuv, the backend of node.js. It allows you to write code similar to node.js in C++ without using JavaScript and the V8 JavaScript environment. Does it improve performance? In theory, yes. In practice, maybe.
The biggest potential win here is that you could run many threads doing I/O in parallel. In Node.js/V8 you only have one native OS thread, while in C++ you can put many native threads to work serving the I/O sockets. It's quite a lot cheaper to switch from one thread to another than from one process to another (if you have many node.js processes running), so there is potential for performance gains if you have CPU-heavy stuff in your code, or more sockets/requests to serve than a single CPU can handle.
This is assuming that libuv and node.native don't have anything that would inhibit running in multiple threads. The system calls running underneath (read, write, epoll/kqueue) are thread safe, but libuv/node.native may have something that isn't. I didn't double-check.
> To clarify, node already has parallelized I/O threads. The only thing that isn't parallelized is the JS-land code.
But Node.js runs only in a single native thread, right? So it's single threaded I/O multiplexing using epoll/kqueue and the thread calls JS callbacks when some I/O takes place. In C or C++ you could run n native threads serving m sockets using a single "reactor".
Node.js only has a single thread for the event loop (where the user's JavaScript runs). Internally it uses a thread pool.
You could certainly do what you describe and it would be very similar to node's Cluster module, which uses multiple processes. In fact, I believe in the next release they're giving you the option to replace processes with threads under the hood.
That actually isn't the case anymore. It has been removed for various reasons, a major one being stability. While the promise of isolates was nice, I think removing them was a good choice considering the tradeoffs.
Doesn't Go have its own asynchronous I/O system built into the language runtime? I was under the impression that Go does I/O multiplexing with green threads, similar to Haskell. So when you write socket.read(), the descriptor is put into an epoll/kqueue set behind the scenes and a cooperative green-thread switch occurs. This way you don't have to write messy I/O callbacks like in Node.js or Boost.Asio, but you get the same effect with neater code.
So if Go has this feature, using libuv would not only be useless, it would be harmful because it doesn't play nice with the language runtime w.r.t. parallelism.
I think that writing asynchronous parallel code should be done by the compiler and the runtime, not by the coder. Node.js code is a mess of callbacks that have to be written in continuation-passing style by the user. Haskell (or Go?) code that has the same effect is written like regular imperative code, which is transformed into Node-style callback code by the compiler and multiplexed by the runtime. Compilers are excellent at this kind of transformation.
So the Go runtime doesn't automagically transform blocking system calls into non-blocking calls, but if one goroutine is waiting for I/O, other goroutines can run in the meantime.
But you're right in that with Go you can write simple, easy-to-debug imperative code, yet confidently spawn tens of thousands of goroutines because they're so lightweight compared to conventional OS threads. The Go runtime handles the bookkeeping of multiplexing goroutines over real OS threads.
> So the Go runtime doesn't automagically transform blocking system calls into non-blocking calls, but if one goroutine is waiting for I/O, other goroutines can run in the meantime.
But when using Go's standard file.read or socket.write, what you get is I/O multiplexing of goroutines (with epoll or kqueue) in the runtime.
Naturally, if you call a C library from Go that uses unix read or write and executes a blocking system call, that cannot be magically intercepted by the Go runtime.
Half-true. os.File is not multiplexed, so if there are multiple concurrent blocking reads, each of them will consume an OS thread until it completes. net.Conn, on the other hand, uses polling so only one OS thread is necessary. Technically speaking, this is a feature of the standard library and not the runtime system.
It seems like goroutines in Go don't do as much asynchronous stuff as Haskell's I/O manager and green threads do. Looking at the Go sources, there is some async epoll work in the net library for sockets, but File.Read is indeed just a plain syscall.
It makes me wonder what happens to a goroutine when a system call blocks. Go is supposed to multiplex many goroutines onto a smaller number of OS threads, but what happens when one of those threads blocks in a system call?
There are plenty of vague descriptions on how goroutines work but none of the ones I found quickly explained what happens on a blocking system call that allows another goroutine to run, apart from hand-waving about "another goroutine running".
The Go runtime keeps track of the number of goroutines that are currently executing and ensures that this doesn't go above GOMAXPROCS. As soon as a goroutine enters a system call it is excluded from this count. If there aren't enough threads to run GOMAXPROCS goroutines the runtime may launch a new one.
See pkg/runtime/proc.c (entersyscall, ready, and matchmg).
1. Mozilla's Rust http://www.rust-lang.org/
2. Ignacio Burgueño's LuaNode https://github.com/ignacio/LuaNode
3. Ben Noordhuis and Bert Belder's Phode async PHP project https://github.com/bnoordhuis/phode
4. Kerry Snyder's libuv-csharp https://github.com/kersny/libuv-csharp
5. Andrea Lattuada's web server https://gist.github.com/1195428
6. Steve Yen's Lua wrapper https://github.com/steveyen/uv
Source: http://blog.nodejs.org/2011/09/23/libuv-status-report/