Hacker News
Node.native - Node.js in C++0x (github.com/d5)
138 points by johnx123-up on Feb 8, 2012 | 52 comments



Other projects around libuv:

1. Mozilla's Rust http://www.rust-lang.org/

2. Tim Caswell's LuaNode https://github.com/ignacio/LuaNode

3. Ben Noordhuis and Bert Belder’s Phode async PHP project https://github.com/bnoordhuis/phode

4. Kerry Snyder’s libuv-csharp https://github.com/kersny/libuv-csharp

5. Andrea Lattuada's web server https://gist.github.com/1195428

6. Steve Yen's Lua wrapper https://github.com/steveyen/uv

Source http://blog.nodejs.org/2011/09/23/libuv-status-report/


Shameless plug ;-) Python bindings for libuv: https://github.com/saghul/pyuv


Nice list, though that's not Tim Caswell's project; this is: https://github.com/luvit/luvit


Using Moonscript (Lua's CoffeeScript), the performance benefit of LuaJIT compared to all the JavaScript implementations makes it really interesting.

Well, actually Lua doesn't look bad, even though it really needs syntactic sugar for OO; fortunately there is enough of that around: not just Moonscript, but also Loose (Perl's Moose ported to Lua).


Nice list. I hope libuv gets a wiki that lists all of them.

Oh and does anyone know a Perl one?


This is quite similar to the Boost.Asio library, which provides sockets and timers. I know for a fact that Boost.Asio is thread safe: you can run several threads on the same I/O service, multiplexing n sockets onto m threads. Does anyone know whether libuv (and node.native) are thread safe or not?


No kidding, asio is fast and battle-tested. I only see risk in being an early adopter of this. The Node hype is a deafening roar.


This is a little off topic, but thought I'd ask the C++11 aficionados who're viewing this thread. For those who've taken the hit and really explored C++11:

1) What are the best resources to learn how to code C++11 in a Pythonic style? Looking for a book or production source code example that uses the new language features idiomatically: lambdas, vector and dictionary literals, and the like.

2) What does your toolchain look like for C++11? Just the latest gcc and latest emacs, plus maybe a few tools like makeheaders[1] and gtags[2], or something more?

[1] http://www.hwaci.com/sw/mkhdr/

[2] http://www.gnu.org/software/global/


1) What are the best resources to learn how to code C++11 in a Pythonic style? Looking for a book or production source code example that uses the new language features idiomatically: lambdas, vector and dictionary literals, and the like.

I don't think you can find good books or comprehensive resources at this time. There are various articles and blog posts detailing the C++11 additions to the language. Of course there's the spec but it makes very boring reading.

Not a whole lot has changed so you can pretty much jump right in and use the new features as you need them.

std::thread and std::chrono are pretty sweet.

2) What does your toolchain look like for C++11? Just the latest gcc and latest emacs, plus maybe a few tools like makeheaders[1] and gtags[2], or something more?

I used to build gcc from git sources, but these days my operating system package manager ships with a gcc version that has most of the C++11 stuff I use. Just add -std=gnu++0x to your CFLAGS.

FWIW: I use vim + ctags + cscope.


What is the dramatic advantage of Node.native over using libev/libevent directly?


Familiar interface, good pre-packaged core libraries.

That's about it.


familiar interface for whom?


Node uses libuv, which supports Windows and abstracts over a few other things too.


none. I'm using libevent for my C++0x webapps and it rocks.


A cool hack, but I can't help feeling this is pointless in practice.

It solves the problem of using Node userland code to do heavy computation. But

a) serious developers know better than to do heavy computation inside of Node anyway, and

b) it comes at an enormous, almost unforgivable cost: working with C++ is a nightmare compared to working with Node when it comes to realtime web projects (I've been in those trenches)

What would be exponentially more useful in practice is a concise, well-documented way to hook (multithreaded) C++ code into Node's engine.

But that wouldn't be nearly as sexy as the promise of Node with native performance.


Without commenting on Node.native, I'd just like to note that C++11 isn't your grandparents' C++ anymore. An experienced programmer probably can write code in C++11 that is almost as elegant as, say, the equivalent C# code. And for larger projects many people would prefer something like C# to JavaScript, if only for the static typing.


I've actually been using C++11 for two years now (experimental stuff in GCC), and yes I agree: C++ is loads better than it used to be, and it's a pleasure to work with.

That still doesn't mean it's a good idea to write a full web app in a C++ event loop while doing your own memory management. I've tried this (in C++11 no less), and it took nearly 30K LOC for me to admit defeat. I wish I were joking.


The problem isn't writing code; the problem is the build environment and deployment: correct versions of compilers, headers, libraries, makefiles, CMakeLists, wafs, SCons', bjams, whatevers...


Is it really an issue? Are you proud of the fact that you can't sudo apt-get a few packages?

I'm not a fan of moving away from Makefiles, however. Makefiles are sooo simple, why complicate it?


Is it really an issue?

No. It's not. I've been programming primarily in C++ for a few years (been using it as a hobbyist since about 2002, and on and off professionally over the last 4 years) and this has never been an issue for me.


I used to feel that way about Make until I tried Rake. I'll never go back now, I think build scripts feel the need for a general purpose language often enough that it really pays to have them as an embedded DSL in a scripting language.


Makefiles are only simple if you are only supporting a single operating system, and a few build configurations.


I've worked on a makefile that built shaders, libs, and assorted executables for multiple projects across 6 platforms, and I can confirm this.


sudo apt-get a few packages is great when it works, but if you want/need a newer boost version, you're on your own. Makefiles are tedious to write and maintain in anything but a toy application. That's not to say everything else isn't, but most of the time you can save yourself some hassle.


If you're developing company-internal stuff, you can just standardize on one compiler and/or statically link everything.


it comes at an enormous, almost unforgivable cost: working with C++

If you are already writing an application in C++ which needs a web component, this may be an interesting choice.


Great to see libuv being used in the wild (outside of Node.js). It's a nice little library that does one thing and does it well.


FWIW it's also used in the Rust runtime (https://github.com/mozilla/rust/tree/master/src)


I would like to understand the implications. Does this improve performance? Or does it just allow coding in C++.

Please help. (Sorry, noob here.)


> I would like to understand the implications. Does this improve performance? Or does it just allow coding in C++.

This is practically a C++ wrapper for libuv, the backend of node.js. It allows you to write code similar to node.js in C++ without using JavaScript and the V8 JavaScript environment. Does it improve performance? In theory, yes. In practice, maybe.

The biggest potential win here is that you could run many threads doing I/O in parallel. In Node.js/V8 you only have one native OS thread, while in C++ you can put many native threads to work serving the I/O sockets. It's quite a lot cheaper to switch from one thread to another than from one process to another (if you have many node.js processes running), so there is potential for performance increases if you have some CPU-heavy stuff in your code, or if you have more sockets/requests to serve than a single CPU can handle.

This is assuming that libuv and node.native don't have anything that would inhibit running in multiple threads. The system calls running underneath (read, write, epoll/kqueue) are thread safe, but libuv/node.native may have something that isn't. I didn't double-check.


To clarify, node already has parallelized I/O threads. The only thing that isn't parallelized is the JS-land code.


> To clarify, node already has parallelized I/O threads. The only thing that isn't parallelized is the JS-land code.

But Node.js runs only in a single native thread, right? So it's single threaded I/O multiplexing using epoll/kqueue and the thread calls JS callbacks when some I/O takes place. In C or C++ you could run n native threads serving m sockets using a single "reactor".


Node.js only has a single thread for the event loop (where the user's JavaScript is run). Internally it uses a thread pool.

You could certainly do what you describe and it would be very similar to node's Cluster module, which uses multiple processes. In fact, I believe in the next release they're giving you the option to replace processes with threads under the hood.


That actually isn't the case anymore. It has been removed for various reasons, a major one is stability. While the promise of isolates was nice, I think the choice to remove them was a good one considering the tradeoffs.

More info: https://groups.google.com/forum/?fromgroups#!topic/nodejs/zL...


(I submitted the link, but...) Do you think a C port would improve performance over C++? Google only returns wishlists, like this one: http://rajeshanbiah.blogspot.in/2012/01/nodec-la-nodejs.html


In all honesty, probably not. Node.js is too thin to really make a C port all that worthwhile.

Unless the dispatcher is extraordinarily slow, the major performance issues are going to be in the userland code more than in the actual Node code.


A properly coded C++ app should run as fast as C; it can be faster...


Not while C++ doesn't have restrict pointers


Yeah, all performance benefits of C come from a single underused feature. </sarcasm>


While the standard does not specify a restrict keyword, in practice it is available in quite a lot of compilers.


It allows coding in C++ in a fashion similar to Node by taking advantage of the lambda syntax added in C++11.

It also makes use of libuv and http-parser, which are used in Node, to offer some of the same out-of-the-box functionality.


It might be useful to use node-like patterns in platforms where node is not available (iOS comes to mind).


Wow, great work. Really good for both CPU- and I/O-critical missions (a.k.a. people who are cheap on servers).


I'd kill to have this in Go instead. Looks like it's a wrapper around libuv, shouldn't be hard with cgo. Hm, maybe another thing to add to the list.

edit: it appears that I'm confused as to what this does exactly, my bad. Egg, all over my face.


Doesn't Go have its own asynchronous I/O system built into the language runtime? I was under the impression that Go has I/O multiplexing w/green threads similar to Haskell. So when you write socket.read(), it will be put into an epoll/kqueue behind the scenes and a co-operative green thread switch occurs. This way you don't have to write dirty I/O callbacks like in Node.js or Boost.Asio, but you get the same effect with neater code.

So if Go has this feature, using libuv would not only be useless, it would be harmful because it doesn't play nice with the language runtime w.r.t. parallelism.

I think that writing asynchronous parallel code should be done by the compiler and the runtime, not by the coder. Node.js code is a mess of callbacks that have to be written in continuation passing style by the user. Haskell (or Go?) code that has the same effect is written like regular imperative code, which is transformed into Node-style callback code by the compiler and multiplexed by the runtime. Compilers are excellent in doing this kind of transformations.


Almost, but not quite.

From http://golang.org/doc/effective_go.html#goroutines: "Goroutines are multiplexed onto multiple OS threads so if one should block, such as while waiting for I/O, others continue to run."

So the Go runtime doesn't automagically transform blocking system calls into non-blocking calls, but if one goroutine is waiting for I/O, other goroutines can run in the meantime.

But you're right in that with Go you can write simple, easy to debug imperative code, but at the same time confidently spawn tens of thousands of goroutines because they're so lightweight compared to conventional OS threads. The Go runtime handles the bookkeeping of multiplexing goroutines over real OS threads.


> So the Go runtime doesn't automagically transform blocking system calls into non-blocking calls, but if one goroutine is waiting for I/O, other goroutines can run in the meantime.

But when using Go's standard file.read or socket.write, what you get is I/O multiplexing of goroutines (with epoll or kqueue) in the runtime.

Naturally, if you call a C library from Go that uses unix read or write and executes a blocking system call, that cannot be magically intercepted by the Go runtime.


Half-true. os.File is not multiplexed, so if there are multiple concurrent blocking reads, each of them will consume an OS thread until it completes. net.Conn, on the other hand, uses polling so only one OS thread is necessary. Technically speaking, this is a feature of the standard library and not the runtime system.


...what? All file.Read does is wrap the read syscall. There's no concurrency in file.Read.


It seems like goroutines in Go don't do as much asynchronous stuff as Haskell's I/O manager and green threads do. Looking at Go sources, there is some async stuff w/epoll in the net library with sockets, but file.Read is indeed just a plain syscall.

It makes me wonder, what happens to a goroutine when a system call blocks. Go is supposed to mux many goroutines to a smaller number of OS threads, but what happens when one of those threads has been blocked in a system call?

There are plenty of vague descriptions on how goroutines work but none of the ones I found quickly explained what happens on a blocking system call that allows another goroutine to run, apart from hand-waving about "another goroutine running".


The Go runtime keeps track of the number of goroutines that are currently executing and ensures that this doesn't go above GOMAXPROCS. As soon as a goroutine enters a system call it is excluded from this count. If there aren't enough threads to run GOMAXPROCS goroutines the runtime may launch a new one.

See pkg/runtime/proc.c (entersyscall, ready, and matchmg).


The Go runtime already does that; it abstracts away the whole asynchronous I/O story for you.



