From a quick glance this looks like further subsetting, a la strict mode, which isn't really what I meant.
I clarified in another thread, but what I meant to convey is a way of giving very specific hints about specific pieces of one's codebase:
"Function X will only ever take two arguments, they will be of types Y and Z"
"This object will only ever have these properties; it will never have properties created or deleted"
"This prototype will never be modified at runtime"
This could guarantee ahead-of-time that certain optimizations - packing an object as a struct instead of a hashmap, for example - can be done, so that V8 doesn't have to spend time speculating about whether or not to JIT something
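To make it concrete, here's a rough sketch of the kind of annotation I'm imagining (the syntax is entirely hypothetical; nothing like it exists today):

```js
// HYPOTHETICAL syntax, purely illustrative; no engine supports this.
// The directive promises that Point instances always have exactly these
// two properties, so the engine could lay them out as a fixed two-slot
// struct instead of a dictionary, with no speculation needed.

/* @shape(sealed) */
function Point(x, y) {
  this.x = x; // always slot 0
  this.y = y; // always slot 1
}

/* @signature(Point, Point) */
function midpoint(a, b) {
  return new Point((a.x + b.x) / 2, (a.y + b.y) / 2);
}
```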
The problem is less about the annotations (you can already infer most of this from a single pass over the code) than it is about the calls. If you have a reference to a function, you need that data handy. Unless you're copying it around with your function reference, you don't have that data (and it's prohibitively expensive to try).
JS is heavy on function references (closures are one of the most popular JS idioms), so it's not easy to know at call time how you can optimize your code.
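A minimal example of how the information gets lost:

```js
// Locally, an engine could prove add is only ever called with numbers.
function add(a, b) { return a + b; }

// But once the reference escapes into a higher-order function...
function apply(fn, x, y) {
  // ...this call site can receive any function from anywhere. Unless the
  // type metadata travels with the function value itself, the engine is
  // back to generic dispatch or speculation here.
  return fn(x, y);
}

apply(add, 1, 2);     // numbers
apply(add, "a", "b"); // strings; nothing in the language prevents this
```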
>"This object will only ever have these properties; it will never have properties created or deleted"
>"This prototype will never be modified at runtime"
You can already give these hints with Object.freeze/Object.seal
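For example:

```js
"use strict"; // in sloppy mode the violations below fail silently

const config = Object.freeze({ host: "localhost", port: 8080 });
// config.port = 9090;  // TypeError: assignment to read-only property
// delete config.host;  // TypeError: property is non-configurable

const user = Object.seal({ name: "ada" });
user.name = "grace";    // OK: seal still allows writes to existing props
// user.email = "a@b";  // TypeError: can't add properties to sealed object
```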
Not really hints, though; they enforce a certain behavior. And that's the point, because hints aren't enough: a JS engine may already assume you'll never change the objects, but it still has to support the cases where you do so anyway.
I don't think these alone would enable any substantial speedups, since the performance issues arise where unknown objects are used.
> You can already give these hints with Object.freeze/Object.seal
You can, but my understanding is the performance tradeoffs aren't great unless you have a workload that especially benefits from it. Those calls are expensive.
OK, let's say you have f, and it only takes arguments a:Y and b:Z.
Now I call f with the argument types swapped: f(z, y), where z is a Z and y is a Y. What happens? Does the machine segfault? Probably not... so you're in "check the types at the callsite" territory. And honestly, those kinds of boundary checks are the problem, so to speak.
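Spelled out, the guard looks something like this (fFast/fGeneric are hypothetical stand-ins for the compiled fast path and the deoptimized fallback the engine would keep around):

```js
class Y {}
class Z {}

const fFast = (a, b) => [a, b];    // hypothetical struct-layout fast path
const fGeneric = (a, b) => [a, b]; // hypothetical full-dynamic fallback

function f(a, b) {
  // The boundary check: without it, entering the "guaranteed" fast path
  // with the wrong types would be unsound.
  if (a instanceof Y && b instanceof Z) return fFast(a, b);
  return fGeneric(a, b); // bail out instead of segfaulting
}

f(new Y(), new Z()); // fast path
f(new Z(), new Y()); // swapped: still well-defined, just slower
```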
Are we doing static analysis of the entire codebase (a sort of closed-world assumption)? You could maybe do something like this for non-exported functions, though at the V8 level that's tough.
I think systems could be set up to do more cleverness along these lines, but so long as you're taking in stuff from the outside world, you end up needing to set up relatively costly checks; costly relative to the "default" JS semantics that you're trying to avoid paying for.
Maybe from a certain perspective. But it worked by conforming the bundle to a particular subset of JS that the authors knew to be optimized, instead of expressing information to the interpreter outright. That seems like a comparatively limited channel for communication.
From what I understand, in Safari at least they tried to make many of the optimizations general. So if you use asm.js-style type indications in your code even without following the full spec, you might see some performance benefit.
I have sometimes found a speedup when adding an apparently superfluous |0.
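Something along these lines (purely illustrative; whether it actually helps will depend on the engine and workload):

```js
function sumTo(n) {
  n = n | 0;              // asm.js-style assertion: n is an int32
  var sum = 0;
  for (var i = 0; i < n; i = (i + 1) | 0) {
    sum = (sum + i) | 0;  // keeps sum representable as an int32
  }
  return sum | 0;
}
```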
If you want to hand the engine highly optimizable, explicit, and precise code, that already exists: it's WASM.
JS VMs don't need more hints from the code. They need guarantees. They already analyze and profile JS, but as long as JS is allowed to be dynamic (and it has to be; it wouldn't be JS without it), they have to keep the complexity and cost of all the extra runtime checks and possible deoptimizations.
AFAICT the V8 JIT collects exactly this kind of information from the code's behavior at runtime, and uses it to generate efficient machine code.
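The classic example is property access: as I understand it (simplified), V8 tracks the hidden classes it has seen at each access site and specializes the machine code accordingly:

```js
function getX(p) { return p.x; }

// Monomorphic: every argument so far has the same hidden class, so the
// optimized code can load .x from a fixed offset.
for (let i = 0; i < 100000; i++) getX({ x: i });

// An object with a different shape makes the call site polymorphic,
// forcing a more generic (and slower) lookup path.
getX({ x: 1, y: 2 });
```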
If V8 could accept TypeScript directly, it probably could pass this kind of information to the JIT directly, too. But the input language is ES6 or something like it, and V8 has to follow its calling conventions. I'm afraid that adorning ES6 with this information would be either brittle or backwards-incompatible.
It can't trust that all the metadata is correct; otherwise security issues can happen. And if you have to verify it anyway, why not just gather/regenerate the same information at compile time?
Also, what do you do about code loading? e.g. scripts loaded from other files at runtime, or eval? Does it throw an error if a third-party script uses a function incorrectly? Or do we assume that metadata is local-use only?
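For example, even a local "this object never gains properties" guarantee can be invalidated by code that wasn't visible ahead of time:

```js
const obj = { x: 1 }; // suppose metadata "guarantees" this shape is fixed

// A third-party script, a late-loaded module, or this eval can still
// violate that guarantee at runtime...
eval('obj.y = 2;');

// ...so either the metadata is enforced (and this throws), or the
// engine keeps the generic slow path around anyway.
console.log(obj); // { x: 1, y: 2 }
```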
There are a lot of things in JavaScript and also the "browser environment" (e.g. ads, third-party scripts) that can limit the utility of traditional compiler techniques.
There are already stepped levels of optimization going on in the JS engine; it goes to great lengths to reverse-engineer whether something can probably be optimized or not, and to handle all the edge cases where it needs to bail out into a less-efficient mode when some assumption is violated. All I'm talking about is a way to give it extra hints about what to eagerly optimize and how. All of that other robustness can stay in place.
Pretty much all that buys you is a small reduction in an already-small warm-up phase before the JIT chooses a late-stage optimization (possibly with an increased cost for loading that phase, so even that small gain may be reduced). Only for code that uses this. And it performs worse if the hints prove incorrect, as bail-out logic is generally a bit costly.
Browsers have experimented with hints quite a lot. Nearly all of them have been rolled back, since adding the general strategies to the JIT is vastly more useful for everyone and performs roughly as well or better: the general strategies identify the optimization everywhere, rather than only in annotated code.
---
The only ones I'd call "successful" so far have been 1) `"use strict";` which is not really an optimization hint, as it opts you into additional errors, and 2) asm.js, which has been rolled back as browsers now just recognize the patterns.
asm.js did at least have a very impressive run for a while... but it's potentially even more at risk of disappearing completely than many, since it's rather strictly a compiler target and not something humans write. wasm may end up devouring it entirely, and it could happen rather quickly since asm.js degrades to "it's just javascript" and it continues working if you drop all the special optimization logic (which is to its credit - it has a sane exit strategy!)