As someone who's been a vigorous proponent of (P)NaCl for a long time, I have to say that I'm excited to see this. If it can truly deliver on its promise (near-native-code speeds with no imposed GC overhead), it will be a truly welcome advance indeed.
I hope that if it does succeed that it will open up even more possibilities, like optional SSE/AVX intrinsics (for added speed in the most demanding software) and threading support.
It's great and educational to see an alternative approach to the problem that is obviously quite different than (P)NaCl. May the best technology win.
For much code, NaCl has a 2x perf penalty (vs. native C++), and PNaCl a higher one (compilation overhead, and compilation has a further cost: it delays the program from starting). asm.js, despite being brand-new, already has a 2x perf penalty for much code: equal with NaCl, yet more portable. (See zlib for an example of this.)
There are still cases where asm.js is slower (box2d, for example, though even there it has brought JS past the point needed for 60fps), but I'd expect the difference only to decrease. Unlike PNaCl, it's not mere research (PNaCl, after several years, still hasn't shipped); it works cross-browser, and, further unlike NaCl, works cross-platform.
I expect someone will write some binary serialization for asm.js: you have all the primitives for iadd, isub, etc.
Why would we go this route? Why not just define a bytecode spec and be done with all of this nonsense? Mozilla is perfectly free to implement the bytecode by compiling it to a restricted subset of JS (in fact, it might make a lot of sense to do so). Defining a simple register or stack based machine and producing bytecode for it would greatly simplify the job of implementers, it would make defining the specification and determining compliance with the standard very straight forward, and it would let us eventually do web programming without being tied to a specific language.
"Why not just define a bytecode spec and be done with all of this nonsense?"
Because such bytecode would not be backwards compatible.
"Defining a simple register or stack based machine and producing bytecode for it would greatly simplify the job of implementers"
Let's take as a given that every browser that wants to implement asm.js needs a JavaScript parser (for JS content on the Web) and an asm.js compiler IR. The question, then, is whether it's simpler to write a verifier that simply walks over parse trees or whether it's simpler to write a reader for whatever custom bytecode is defined. I don't think that the verifier is much harder: the JS version of the verifier clocks in at just over 1,000 lines of code [1], which is nothing compared to the size of the VM as a whole. Certainly that 1,000 lines of code doesn't justify breaking backwards compatibility, in my mind.
"it would make defining the specification and determining compliance with the standard very straight forward"
The specification already more or less exists, and determining compliance is straightforward: 1,000 lines of code for the reference implementation.
"it would let us eventually do web programming without being tied to a specific language"
asm.js doesn't tie you to a language. On the contrary, it allows you to use alternative languages in order to get better performance than you would if you used normal JavaScript. JS is used as the transport format for backwards compatibility, nothing more. In fact, in SpiderMonkey, asm.js has its own set of IR opcodes (Asm*) that are separate from the JavaScript opcodes.
I don't know you, but I'm guessing you are somehow in love with javascript and invested in seeing it succeed.
The world is largely divided into two camps, the people who are in love with javascript and the people who think it is a sick joke. Those of us in the 'sick joke' camp would like to see javascript made optional to web programming, and not a part of our toolchain at all. If javascript is part of the toolchain at all, I will have to eventually debug javascript code, and javascript was written under the 'principle of maximum surprise' making it a very awful language to those of us who haven't based our careers on learning every single edge case (and every case is an edge case in javascript!)
The other half of the world is the 'javascript is great' people, and they usually learned to program with javascript or have spent a large portion of their coding career inside of it. They don't see what the big deal is, it's a perfectly good language, right? Well, it's actually a pretty gross language that has one amazing feature, it is available on every computer without having to install anything extra.
Please, let us have the OPTION of a completely javascript free web. The javascript fans can keep it, just give the rest us a choice.
> I don't know you, but I'm guessing you are somehow in love with javascript and invested in seeing it succeed.
I'm not. I just asked you two technical questions I was curious about, given what you said. If you had answered them, I might understand your position better.
> I'm guessing you are somehow in love with javascript and invested in seeing it succeed.
In the discussions regarding asm.js I often see this tack taken by the folks who are opposed to it as a solution. Many people have an obvious hate-on for Eich and often Mozilla by extension. I will not accuse you of that.
As far as 'the principle of maximum surprise' goes, I would recommend trying to understand how the spec is designed to accomplish its goals.
As someone who is more in the second camp than the first (JS is something of a joke disguised as a language), actually, I see asm.js as a great way to eventually kill off Javascript. Whereas I see trying to specify -- and get all browsers to implement -- a whole new VM, as quixotic.
Although JS, warts and all, is no joke at this stage. It will die hard, but my hope is that JS VMs become multi-lingual in as good a way, or better, than the JVM and the CLR. Certainly better in terms of diversity of implementation, reach of the Web, and consequent interop testing in the large.
I get the point of Math.imul, but it seems at odds with the idea of asm.js being backward compatible. I guess it's easy enough to provide a polyfill (MDN even provides one), but that seems rather inelegant.
It's a pretty minor issue - asm.js is viable without it. For multiplications of small numbers, like 5 times x where x is 32-bit, you don't need Math.imul, normal multiply is fine. The only case where Math.imul is useful is x times y where neither x nor y is known at compile-time, so in theory they could be big enough to cause double-rounding in JS.
But even in that case, emscripten can emit code without Math.imul (there is a compiler flag). The code will work, but is a little slower than with Math.imul, that's all. In fact, in practice you don't even need the polyfill on MDN (which is precise); you can do imprecise multiplication with double-rounding, which works in 99% of cases in my experience, making Math.imul even less crucial.
But it's nice to have Math.imul, just to say that even in the worst case (an odd codebase with tons of integer multiplies where both operands are very often very large), performance will be predictable and fast.
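To make the double-rounding point concrete, here is a small sketch. The particular operand values are arbitrary (chosen only so the exact product exceeds 2^53); the names `viaImul`/`viaStar` are illustrative, not from any spec:

```javascript
// Plain * computes the product as a double, so once the exact result
// exceeds 2^53 the low bits are rounded away before |0 can recover them.
// Math.imul instead returns the exact low 32 bits of the integer product.
var a = 0x12345679 | 0;  // arbitrary odd 32-bit values, chosen so the
var b = 0x9abcdef1 | 0;  // exact product is too big for a double

var viaImul = Math.imul(a, b);  // exact low 32 bits
var viaStar = (a * b) | 0;      // low 32 bits after double rounding

console.log(Math.imul(3, 4) === ((3 * 4) | 0));  // true: small operands agree
console.log(viaImul === viaStar);                // false: these operands diverge
```

Small operands agreeing is exactly why the compiler flag that avoids Math.imul still works for most code.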
I don't care that asm.js is backward compatible with JavaScript. If my program uses an appreciable fraction of the available CPU and has to act in real time in some fashion (anywhere from 60fps for a game to simply acting responsive in a GUI app), execution several times slower in a standard JS VM is no better than no execution at all. If I want to be cross-platform in the (hopefully near) future where asm.js is not yet widely supported, I would pick Chrome and Firefox to support for now and use a NaCl plugin for the former. From this perspective, a "true" bytecode VM would be no worse - and it would hardly be boiling the ocean, since the amount of code required to parse a simple bytecode is negligible. (If you want to compile it efficiently, either you bring in all of LLVM or modify your JS engine, which is hardly negligible, but asm.js is the same in that regard. The only difference is parsing.)
But I think it's nice that the "bytecode" is human-readable and pretty much writable. It will be better to use a (possibly lightweight) tool to compile to asm.js, but it's nice that small kernels can just be written without a compiler.
> execution several times slower by a standard JS VM is no better than no execution
Having worked in games for a while, I disagree. From a user perspective, I've found that when games don't work at all, players often feel the developer is to blame, while if a game runs but is unplayably slow, they are much more open to blaming the system they play it on.
And let's be honest, if this is important for any sort of app, games are indeed one of them.
Additionally, I'd much rather all my features work, but have to scale back on visual effects, rather than have to write both a high-performance, and a low-performance version, and still be required to scale back effects.
From my perspective in game development the asm.js approach seems to be a significant win.
> Having worked in games for a while, I disagree. From a user perspective, I've found that when games don't work at all, players often feel the developer is to blame, while if a game runs but is unplayably slow, they are much more open to blaming the system they play it on.
Steam comments on their forums seem to blame developers regardless, from just my history of reading them. There may be more argument about who is to blame (when it runs fine for some), but plenty still blame the developer when it runs slow and their system should be able to run it (such as claims that similar game X runs fine, so Y should too - even if it's an apples-and-oranges comparison, users see it that way anyway).
CoffeeScript and JavaScript are semantically almost the same. CoffeeScript is not going to magically become faster because of asm.js. As I understand it, asm.js is really intended as a compiler target for languages with different semantics from JS that can achieve better performance by hinting more directly about what the machine should do.
Ah, I see. That's why GWT would really benefit from asm.js but CoffeeScript would not: the Java source already contains the extra type information (that is currently being mostly ignored), whereas CoffeeScript is just as dynamic as native JavaScript.
Not sure why I got downvoted for asking though. Oh well! Cheers.
It would need types, basically. That would be a big change for CoffeeScript, and doesn't feel like what the language is aiming for, so I suspect it's probably not relevant.
x | 0 as declaring int. +x as declaring float. Math.imul to multiply integers. seriously???!!! It took me roughly half an hour to decide whether it's an elaborate joke or an actual idea.
Also javascript (without introducing new concepts) is not low level enough to write down everything you might need (have fun implementing 64bit integer operations with overflow for example).
> x | 0 as declaring int. +x as declaring float. Math.imul to multiply integers. seriously???!!!
|0, + etc. are how compilers to JavaScript - emscripten, mandreel, others - have been implementing integers and so forth for years. The bitwise operators and + have the right semantics for that, and are concise. So these are not ways to "declare" integers - they are functional syntax, things that actually do something.
In the asm.js type system, it was therefore natural to use them to also "declare" types. But only in the sense that asm.js figures out the types exactly as JS engines do, from code that has actual effects. (Compare to closure compiler type annotations, which are arbitrary and have no effects.)
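As an illustration, here is a minimal sketch in the asm.js style. This is a hypothetical module written for clarity, not claimed to pass the actual asm.js validator (a real module also takes stdlib/foreign/heap parameters); the point is that the coercions run as ordinary JS in any engine, while an asm.js-aware engine can also read them as type declarations:

```javascript
function MyModule() {
  "use asm";  // engines that don't know asm.js ignore this directive;
              // the code below is still plain JavaScript with the same results
  function intAdd(x, y) {
    x = x | 0;            // coerces x to int32 - and "declares" x : int
    y = y | 0;
    return (x + y) | 0;   // coerce the result back to int32
  }
  function half(x) {
    x = +x;               // coerces x to number - and "declares" x : double
    return +(x / 2.0);
  }
  return { intAdd: intAdd, half: half };
}
```

Calling `MyModule().intAdd(2, 3)` in a non-optimizing engine just runs the coercions and returns 5; the idea is that the very same text can instead be compiled ahead of time into typed IR.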
Math.imul is not strictly necessary, but it fixes a specific pain point in JS (that multiplying large integers can be rounded due to JS numbers being doubles). You can use asm.js without it, though; there might be some slowdown with large ints, but not a large one. asm.js is perfectly viable without Math.imul.
Besides all this though, the important bit to remember is that this is a compiler output. It's not a human-readable language. It doesn't matter if multiplication is * or DO_MUL or anything else, just like it doesn't matter how multiplication commands look (in terms of binary data) in x86 or ARM machine code.
> Also javascript (without introducing new concepts) is not low level enough to write down everything you might need (have fun implementing 64bit integer operations with overflow for example).
Depends on what you need. Current asm.js can run Sauerbraten, a 100 KLOC complete game engine with physics, AI, rendering, world geometry system, scripting language - not JS! :) - , level file formats etc etc. It is also enough to run large projects like Python, Bullet, etc. So it already supports quite a lot. In fact almost the entire emscripten test suite runs as asm.js, and that's a lot of C/C++ code.
But yes, there are some areas that are trickier, like true 64-bit ints.
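For example, one common way to emulate a 64-bit add is to carry the value as two unsigned 32-bit halves. This is a sketch of the general split-into-halves approach, not emscripten's exact scheme, and `add64` is a hypothetical helper name:

```javascript
// 64-bit unsigned add emulated with two 32-bit halves. The >>> 0 coercions
// keep everything in unsigned-32 range, which a JS double holds exactly
// (every integer below 2^53 is representable).
function add64(aLo, aHi, bLo, bHi) {
  var lo = (aLo + bLo) >>> 0;            // low word, wrapped mod 2^32
  var carry = lo < (aLo >>> 0) ? 1 : 0;  // unsigned compare detects wraparound
  var hi = (aHi + bHi + carry) >>> 0;    // high word, wrapped mod 2^32
  return { lo: lo, hi: hi };
}
```

Every 64-bit operation needs this kind of expansion (multiply and divide are considerably messier), which is why true 64-bit ints are among the trickier areas.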
| Depends on what you need. Current asm.js can run Sauerbraten, a 100 KLOC complete game engine with physics, AI, rendering, world geometry system, scripting language - not JS! :) - , level file formats etc etc. It is also enough to run large projects like Python, Bullet, etc. So it already supports quite a lot. In fact almost the entire emscripten test suite runs as asm.js, and that's a lot of C/C++ code.
well, yes, computers got a lot faster. I just wish software stopped getting slower at a similar or faster pace. The fact that I have to buy a new computer to play 8-bit games that were available on DOS and ran fine on a 386 is kind of ridiculous (but hey, they're over the internet).
>> Depends on what you need. Current asm.js can run Sauerbraten, a 100 KLOC complete game engine with physics, AI, rendering, world geometry system, scripting language - not JS! :) - , level file formats etc etc. It is also enough to run large projects like Python, Bullet, etc. So it already supports quite a lot. In fact almost the entire emscripten test suite runs as asm.js, and that's a lot of C/C++ code.
> well, yes, computers got a lot faster. I just wish software stopped getting slower at a similar or faster pace.
First, asm.js is the opposite of that. As shown in the data in my slides here, with asm.js we already (after just a few months of speccing and engineering) get to 2x slower than native - the same range as Java and C# - which is up to 5x faster than normal JS engines managed before. asm.js is making the web faster, not slower.
Second, the context I was replying to is your saying asm.js is missing stuff as a compiler target. That a full-featured game engine can run in asm.js - as well as Python, Bullet, etc. and many other real-world projects - shows it is not missing crucial features as a compiler target IMO.
>The fact that I have to buy a new computer to play 8-bit games that were available on DOS and run fine on 386 is kind of ridiculous (but hey, they're over the internet).
320x200 with 16 or 256 colors and maybe 15 fps?
Your magical nostalgia glasses are perhaps a little bit too thick.
x|0 is a standard idiom, as explained in the ECMAScript Spec:
The production A : A @ B, where @ is one of the bitwise operators in the productions above, is evaluated as follows:
- Let lref be the result of evaluating A.
- Let lval be GetValue(lref).
- Let rref be the result of evaluating B.
- Let rval be GetValue(rref).
- Let lnum be ToInt32(lval). <-- convert to a 32-bit integer
- Let rnum be ToInt32(rval). <-- likewise
- Return the result of applying the bitwise operator @ to lnum and rnum. The result is a signed 32 bit integer.
x|0 is therefore the result of ToInt32(x), which is effectively a coercion to integer.
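A few concrete cases of that ToInt32 coercion:

```javascript
// x | 0 applies ToInt32: drop the fractional part (truncating toward zero),
// then wrap the value into the signed 32-bit range.
console.log(3.7 | 0);           // 3
console.log(-3.7 | 0);          // -3
console.log(2147483648 | 0);    // -2147483648 (2^31 wraps negative)
console.log(4294967296 | 0);    // 0 (2^32 wraps to 0)
console.log(NaN | 0);           // 0 (NaN and Infinity map to 0)
```

This is why the asm.js type system can treat any expression of the form `expr | 0` as a genuine int32: the spec guarantees the value is already in that range.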
It's actually very likely I'll have to write it. I write compilers. Hence the hatred. I would seriously choose x86 ASM, with all the backwards-compatibility quirks going back to the 80s, any day.
Well, if you're interested in compiling to something that runs in all browsers, you have to compile to JavaScript anyway. Given that, asm.js is strictly an improvement over the current state of affairs: the rails to stay on for high performance are well defined and (hopefully eventually) cross-browser.
The alternative is to try to make a clean break from the past and forego backwards compatibility in the name of a cleaner encoding, so that compiler authors can generate object files with nicer syntax. That sort of thing has been tried many times in the history of the Web and usually hasn't ended up succeeding. For example, take XHTML 2: despite the fact that XHTML 2 was hugely cleaner than HTML 4/XHTML 1, it never got any traction and was abandoned by the W3C. At the scale of the Web, practical considerations end up dominating engineering considerations.
It's a very bad target for compilers too. Guys should just get their stuff together and write a reasonable bytecode format for browsers that is fast to read and verify. Would also save quite a bit of transfer.
PS. If Intel told me to do such crap in their architecture manuals, I would be the last user of PPC out there. I suggest reading them, or the JVM bytecode spec, to see what a good compiler target looks like.
Emscripten is nowhere near the complexity of a SelectionDAG-based LLVM backend. In fact Emscripten is mostly written in JavaScript. The only comparatively tricky part is the Relooper, which is just a few pages of code.
"Guys should just get their stuff together and write a reasonable bytecode format for browsers that is fast to read and verify."
A nice-looking bytecode doesn't seem worth the loss of backwards compatibility. The encoding is ugly, but compatibility with existing browsers seems like such a huge win for something that really just amounts to a different encoding.
What's your metric for "bad"? It seems like complaining about the ELF format or the different assembly formats used by assemblers. OK, there is no goto/jmp, but apart from that it's just a different assembly syntax that happens to also support expressions and not just dyadic/triadic instructions.
This is not a green field solution - which would require alignment from all browser vendors, but instead an attempt to explore within the confines of what already runs today.
Yeah, that's the position of Google's NativeClient project. But it's not backwards compatible with older browsers, which is the reason for doing this how they are. asm.js will run in any browser supporting JS already, and will be faster in browsers supporting the optimizations. It's not perfect, but it works now.
You are overstating your case. It is not ideal, but it is serviceable. It is most importantly better than what we have now. To call an indisputable improvement "very bad" pretty much strips the word "bad" of all meaning.
You're mistaken. It is not that people "just did not want to let JS go," but that "let JS go" is so immensely hard at this point in time that it is sitting near "boil the ocean" on the practicality scale. We don't know how to do it, so it's not a meaningful suggestion. In order to convince people that it's even worth trying, you need to enunciate a) a practical plan for "letting JS go," and b) a case for why the advantages of that plan in the context of the modern browser landscape outweigh the advantages of the plan the Mozilla devs are going with. You didn't do that.
To put it another way: If we were implementing Web browser scripting completely from scratch with no history to lean on, this is not how I would do it and probably not how Dave Herman would do it either. But we aren't in that situation. As it stands, there are lots of existing JavaScript implementations that browser makers are heavily invested in — and they are not going to throw that all out and implement a new, incompatible and much more restrictive standard just because I wave my magic wand. The idea behind asm.js is that it is the smallest change you can make to still get the desired effect, because change is hard.
Even if you don't think asm.js is perfect, it's a huge step in the direction you want to go. Complaining that it doesn't teleport us all the way there seems like missing the point.
I mean, I get it, but not acknowledging that sometime in the future the band-aid will have to be torn off, and not talking about what would be the best way to do so, is not helping either.
Sure, that would be an interesting conversation to have. But that's not what anyone's doing. Complaining that asm.js piggybacks off JavaScript instead of being a bytecode for which no interpreters currently exist — which is what you and fijal seem to be doing — is not "talking about what would be the best way to do that."
In case you've forgotten, your suggestion in the other thread was literally just "a standardized VM in the browser." When I tried to get you to elaborate on what that meant in practical terms, you couldn't explain how it would be any different from what we have with asm.js except that you want a bytecode syntax for the input instead of the syntax asm.js uses.
If you want to talk about practical ways we might move beyond JavaScript someday, feel free. I think that's an interesting topic. That's why I think it's unfortunate that so far, people just seem to be complaining that asm.js doesn't involve bytecode.
Personally, I think asm.js is an important step in the direction of a post-JavaScript world. It moves us away from dependence on JavaScript by helping to better enable alternative languages in the browser.
I'm not complaining about asm.js, it's more that I feel like the hole we (and I don't mean I personally) are digging is getting deeper and deeper and switching is only getting less and less likely.
When you say that I could not explain how it would be different, maybe I just did not notice your response, which is why I did not respond (the last response I saw is where you asked how having JS as a bytecode is different from any other bytecode).
How would it be different? I mean it's not that I insist on the bytecode for the sake of having bytecode (I kind of hint at that several times in the last thread, but I guess I could have been more explicit). It's more that I feel that some sort of fundamental change in the way that web applications are built is necessary. Why? I don't think that JS & DOM scripting can be pushed that much further, like I'm having a hard time imagining that say 5-10 years down the line JS applications will be able to compete with native, somewhat computationally heavy applications (which is what I assumed was the direction in which this whole web thing was going). Today, when I open some site which is written as a SPA (e.g. HootSuite) in Chrome, within a couple of minutes, the CPU goes to 100% (on i7). And that is an application that is not really doing anything that computationally expensive. And it's not just HootSuite.
So yeah, I don't really care that much about bytecode, and I realize that some change would not happen overnight. Do I have concrete steps how to achieve this? Can't say that I do.
> I don't think that JS & DOM scripting can be pushed that much further, like I'm having a hard time imagining that say 5-10 years down the line JS applications will be able to compete with native, somewhat computationally heavy applications (which is what I assumed was the direction in which this whole web thing was going).
You're still not getting it. Native (i.e. compiled from C/C++) computationally-heavy applications is exactly what asm.js allows!
The fact that it's a subset of JS is a clever hack that gets over the backwards-compatibility problems, but don't let that fool you -- asm.js really is a low-level compilation target.
Someone has to break it at some point. Making it live longer is not serving us any good. We would still be using primarily ALGOL, COBOL and Fortran 68 with this attitude.
I agree that someone has to break it someday, but that's similar to how someday I'll buy a house — that doesn't mean I'm ready today! They aren't making JavaScript live longer — that will happen regardless. There simply isn't the will among browser-makers to break it all at once, so all you're doing by breaking it is making something that no one can use because it's completely incompatible with everything on earth.
I feel like people are misunderstanding the challenge here. The challenge is absolutely not creating a proposed standard or creating a virtual machine. Those are relatively easy. The challenge is herding browser-makers to whatever solution you propose. The nice thing about asm.js is that it automatically works everywhere and it's easy for browser-makers to transition their existing products to it. Because it takes the path of least resistance, it has a better chance of living up to the real challenge — gaining acceptance — than any other plan I've heard.
There are already lots of virtual machines out there. Throwing another one at the wall isn't going to suddenly bring us into a post-JavaScript world. It reminds me of the xkcd about standards:
[Situation: There are 14 competing standards.]
Guy: "14?! Ridiculous! We need to develop one universal standard that covers everyone's use cases!"
Feel free to do so yourself. Start a bytecode project, get all of the major browser vendors to buy in, rewrite all of the APIs to work with your bytecode, and post it to HN. I will upvote you even if you fail.
If there were a comment about corruption in politics, would your response be 'well, if you don't like it, start your own party, win the election and stop bitching'? Probably not. Do you see how the two are similar?
In all seriousness I think headway could be made with starting a byte code project for those that want to go down this path.
Your VM is going to need to run in and generate js (or asm.js subset) - or you'll need to patch IE, Firefox, Chrome etc.
Which is why there are no very high performance multi-language VMs.
High-level languages such as C still do not compete with hand-rolled assembly for diamond patterns or any situation where the minimum unit of scheduled work is very fine-grained.
> In all seriousness I think headway could be made with starting a byte code project for those that want to go down this path. Your VM is going to need to run in and generate js (or asm.js subset) - or you'll need to patch IE, Firefox, Chrome etc.
Well, when I was thinking about this (very theoretically) in the past, the road that seemed to me the easiest to take (though not exactly elegant) to get to some prototype was to have a proxy on the client that all browser communication goes through, which executes all code returned from the server, does all the heavy lifting, and returns already-rendered HTML to the browser.
And yeah, I realize that doing a high performance VM is probably the hardest part.
Alon Zakai is just a dude who made Quake and you're bitching at him because it's not Crysis.
Maybe Alon Zakai just isn't as ambitious as you. Maybe he just wants to make the web a little bit better today and you want to make the web a lot better 10 years from now. Good for you, get to work, let others work on the projects that are of interest to them. If you succeed I think you'll be pretty darn famous, and like I said, I'll upvote your efforts even if you fail, it's worth a shot. Get coding.
Well, I can't help you with intentional obtuseness.
> Alon Zakai is just a dude who made Quake and you're bitching at him because it's not Crysis.
Super cool argument from authority (or I'm not quite sure what argument you were making so maybe not), bro.
> Maybe Alon Zakai just isn't as ambitious as you. Maybe he just wants to make the web a little bit better today and you want to make the web a lot better 10 years from now. Good for you, get to work, let others work on the projects that are of interest to them. If you succeed I think you'll be pretty darn famous, and like I said, I'll upvote your efforts even if you fail, it's worth a shot. Get coding.
No shizzle that if someone builds this they would be famous. Also, I was not convinced it would be worthwhile for me to sink any time into this until you said that you'd upvote me, but now that that division of labor is established, there is nothing in the way.