> const performs a lot better in optimized code than var or let
This puzzles me; if only one value is ever assigned, I would have expected at least let to perform identically to const in optimised code, because I expect the optimiser to look at the let and say “never reassigned, turn it into a const”. By the sound of it, I’m wrong, and I’d be interested to know why I’m wrong.
JavaScript is very dynamic. Here I define a variable with `let` and then change it:
let foo = 10;
eval("fo"+"o = 10");
I could "obfuscate" that eval assignment as much as I like. So you can't completely statically analyse `let` variables.
That said, it's likely you'd end up with perf very close to `const` in a very "hot" part of your code, since a good JIT compiler like V8 will eventually make "assumptions" about your code, and optimise around them, while having (ideally cheap) checks in place to ensure the assumptions continue to hold.
An interesting point; however, direct "eval" is a special case.
It's already known that functions containing direct "eval" are not subject to the same level of performance optimisations as other functions. There is no way to obscure the call to direct "eval" itself; the compiler knows clearly whether it occurs.
Without "eval" appearing syntactically inside a function's scope, there is no dynamic access to "let" variables, and there's no need for the JIT code to check the assumption at run time.
Despite no other assignments, the "let" variable does change state: it has the assigned value after the "let" statement, and sits in the "temporal dead zone" (where any access throws) before it in the same scope. However "const" also has this property, so it's not obvious why there would be a speed difference.
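A quick sketch of what I mean, with the would-be errors commented out:

{
  // console.log(a); // ReferenceError: `a` is in the temporal dead zone here
  // console.log(b); // same for the const
  let a = 1;
  const b = 2;
  console.log(a, b); // both simply have their assigned values from here on
}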
Nice try! The Jsfuck method can't encode a direct "eval", only an indirect one :-)
Thus the emphasis on direct. Well, Jsfuck can encode a direct eval inside an indirect eval, but that doesn't give it any advantage; it still can't access the surrounding lexical environment in the form of "let" and "const" variables.
(I didn't know about Jsfuck though - it looks fun, thanks!)
Not sure if V8 does this, but a general strategy used in HotSpot is to compile assuming the good case and install a "trap" to deoptimize the code if the condition is violated. So you could compile the let like a const with no checks on the use of the variable, but have a trap such that if any code ever tries to assign to that variable, all code compiled under the assumption that it was constant is thrown out.
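Roughly, in JavaScript terms (the trap itself isn't visible from the language, this is only to illustrate the idea):

let x = 42;

function hot() {
  // Compiled under the assumption "x is never reassigned": the engine can
  // treat this as `return 43` and register a dependency ("trap") on x
  // staying unchanged, instead of loading x on every call.
  return x + 1;
}

// If this line ever runs, the trap fires: everything compiled under the
// "x is constant" assumption gets thrown out and recompiled.
x = 100;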
The const keyword also guarantees that once a value is assigned to its slot it won't change in the future.
As a result, TurboFan skips loading and checking const slot values each time they are accessed (Function Context Specialization).
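A sketch of the kind of pattern this helps with, as I understand it (the slot and the specialisation are engine internals, not visible in the source):

const factor = 3;

function scale(x) {
  // Because `factor` sits in a const slot, an optimising compiler that
  // specialises on this function's context can embed the value 3 here
  // directly, instead of loading and checking the slot on every call.
  return x * factor;
}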
Right... but the question was 'why can't the same thing be done with let'.
The answer is probably a combination of 'they haven't gotten around to it yet', 'they don't see the need', 'they don't want the complexity', and 'they don't want to spend compile time doing that.'
In strict mode you can, because you can statically determine whether eval is present. If it isn't, it is trivial to determine whether the variable is written to after initialization.
Otherwise it has the same issues as const (is there a temporal dead zone violation?).
You're not doing static analysis - you're doing dynamic analysis and speculation. But this has the downsides I said - it's more complicated, it takes more time, etc.
What speculation? If you have a "let" in some lexical scope, AND this is strict mode, AND there are no calls to eval in that scope, AND there are no assignments (beyond the initialisation) within that scope... how is that different from const?
I can understand why in a JavaScript engine there may be plenty of other concerns and it may be completely reasonable to not do the analysis, but in the common case it should (at least) be something you could do just by walking the AST, if you so desired, without even knowing the surrounding code.
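Concretely, something like this is the case I have in mind:

'use strict';
{
  let n = Math.random(); // the only assignment to n in this scope
  // No eval appears in this scope and n is never written again, so a simple
  // walk over the AST is enough to prove that n behaves exactly like a const
  // from this point on.
  console.log(n * 2);
}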
> Speculating that a debugger is not attached and modifying local variables, is one example.
A debugger can do anything. You can't outthink a debugger and shouldn't try.
> Yes that’s why it’s speculation - handle the common cases and speculate away the uncommon cases.
Speculation is when you guess and need to have a guard in case the guess is wrong. They're describing a situation where it can't be wrong and you wouldn't need to speculate.
> You'd get extremely slow code with this approach!
You must be interpreting that differently than I meant it. The approach I'm suggesting is "pretend debuggers don't exist when optimizing". It gives you the fast code.
A debugger can break any assumption you make. Even unoptimized code could crash if a debugger messes with it. The fear of a debugger should never make you decide not to do an optimization.
How would you use a guard if you want to deoptimize because a debugger has attached? You'd have to have a guard between each instruction, and even then it might not be enough.
> Yes, you're guessing that nobody will attach a debugger and you're guarding that no debugger has been attached. That guard is usually implicit.
> So you deoptimise when someone starts debugging.
If an "implicit" guard means "we'll have a function the debugger calls, telling us to redo the compilation", then that's not something you need to do dynamic analysis for, and it doesn't make your compilation more complicated. You don't "speculate away" that case unless you're using the word "speculate" to include "completely ignore for now, without even a guard", which I didn't think fell under that umbrella. Does it?
> It can be wrong... if someone's using a debugger.
It's not wrong. A debugger can make 2+2 be 5. Debuggers don't follow any rules, but that doesn't mean your compiler should try (and inevitably fail) to make code that works in a world where no rules exist.
I think it's just a case of me using more generalised terminology.
'pretend X doesn't exist' is in my mind 'speculate that X isn't enabled'. It really means the same thing doesn't it?
You don't need a guard between every instruction, as attaching the debugger is an asynchronous operation - it's already non-deterministic when the application will receive the instruction to move to debug mode, so checking frequently enough (usually once per loop iteration and once per function call) is sufficient.
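The generated code ends up with roughly this shape (conceptual only, not actual engine output):

function hotLoop(items) {
  let total = 0;
  for (const item of items) {
    // [an engine-inserted check would sit here: "debugger attached? deopt
    //  requested?" - cheap enough to run once per iteration]
    total += item;
  }
  return total;
}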
I think "we'll have a function the debugger calls, telling us to redo the compilation" does describe speculation and deoptimisation. Remember the function may be currently executing and may never return, so it's not as simple as replacing it with a different version. You may need to replace it while it's running.
It does make compilation more complicated because you need to be able to restore the full debug state of the application, which means storing some results you might not otherwise keep, and storing extra metadata.
> Debuggers don't follow any rules
Debuggers can be a formally or informally specified part of the language, and their behaviour may have to follow rules about which intermediate results are visible which may constrain your compilation.
My argument is: if you do treat debugging as speculation then your model is simpler and easier to work with and you don't need two kinds of deoptimisation. Real languages are implemented this way.
Here are two papers I've written about these topics, taking the idea even further.
> 'pretend X doesn't exist' is in my mind 'speculate that X isn't enabled'. It really means the same thing doesn't it?
Not when you're talking about needing "dynamic analysis", which is what made me not understand the way you were using that word.
> It does make compilation more complicated because you need to be able to restore the full debug state of the application, which means storing some results you might not otherwise keep, and storing extra metadata.
You don't need to, in the general case.
> Debuggers can be a formally or informally specified part of the language, and their behaviour may have to follow rules about which intermediate results are visible which may constrain your compilation.
> My argument is: if you do treat debugging as speculation then your model is simpler and easier to work with and you don't need two kinds of deoptimisation. Real languages are implemented this way.
I suppose, but that's only one option. You could make the deoptimization for debugging much weaker or nonexistent, and that would be a valid option too, without having to give up simplicity.
And separately, wanting to change the value of a const while debugging is a valid use case too. But once you support that, there's no reason a never-written let needs to be optimized differently from a const.
Because such analysis isn't free, especially when it's done just in time. In theory it could be done, and maybe will be in the future, but it's hard to do every possible analysis and optimization.
The doc is incorrect: `let` (and even `var`) does perform as well as `const` in optimized code (but it comes with performance cliffs if there is a re-assignment).
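For example, it's the re-assignment itself that costs you, not the keyword (a sketch, not a benchmark):

function f(flag) {
  let x = 1;   // never reassigned: can be optimised just like a const
  let y = 1;
  if (flag) {
    y = 2;     // this single write is the potential performance cliff:
  }            // the engine can no longer treat y as a constant
  return x + y;
}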
"Looking at the let" refers to code analysis during JIT compile time, which happens once for each bit of code, not run time which happens many times as the same bits of code are run.
When comparing the speed of "const" versus "let", the JIT compile time is irrelevant; the speed differences being looked at are entirely run time, inside loops.
Also the JIT compile time difference from "looking at the let" will be so low as to be virtually unmeasurable anyway. (It is such a trivial check, much simpler than almost everything else the compiler does.)
(However, see rewq4321's sibling comment about analysability and JavaScript being a very dynamic language when "eval" is used.)
FYI - you're probably getting downvoted because JIT time is generally considered runtime, and the cost of JIT time is usually considered with end user performance in mind.
> the cost of JIT time is usually considered with end user performance in mind.
Indeed it is. I know that (very well), I guess I didn't phrase my comment well though.
The article points out that a "const" variable access is faster than a "let" variable access due to removal of certain run time checks in the JIT-generated code.
The per-statement cost of lexing, parsing, analysing, and code-generating for each statement of the JavaScript source is far, far greater than the cost of actually executing the generated code for a "const" or "let" variable access.
So much so, that if the generated variable access was run as a one-off, the speed difference would be undetectable.
So the only way for there to be an end-user visible difference in speed between "const" and "let" is in code where the JIT-code-generated variable accesses are run many times repeatedly.
The time to perform the "is this 'let' variable ever assigned to?" check is part of the one-off work, not the repeated executions. Since we're also talking about "optimised code", not "unoptimised code", from the JIT, it's also a tiny amount of time compared with most other things done by the optimiser. It is literally a boolean flag, false by default, set to true if any assignment to the variable is seen while parsing or optimising. And "optimised code" is only produced for code that is run many times.
So yes, technically the check would take end user-visible time. But only in situations where there is another end user-visible cost which dominates it (the slowdown of repeated "let" versus "const" variable accesses), and which the check is intended to remove. Thus you could say the check's time is cancelled out.
(It's a little bit like comparing the O(1) parts of an algorithm with the O(N) parts, when you have been told that N is significant. If you can do something to reduce the O(N) constant factors and it costs O(1) to do, it's a net speedup for sufficiently large N.)
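To make the "boolean flag" point concrete, the analysis is on the order of this (a rough sketch using a made-up, ESTree-like AST shape, not V8's actual data structures):

// scope: Map from variable name -> { isReassigned: boolean }
// assignments: the AssignmentExpression nodes seen while parsing the scope
function markReassignments(assignments, scope) {
  for (const node of assignments) {
    if (node.left.type === 'Identifier' && scope.has(node.left.name)) {
      scope.get(node.left.name).isReassigned = true; // the flag flips once, that's all
    }
  }
}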
I know you jest, but I don't think that's a good example of "contrast" between the engines. That was just a bug/oversight, and it was fixed within a couple of weeks of being reported [1].
YMMV, but I find that Safari uses dramatically fewer local resources on my computer than Chrome does, for the same workload. Chrome can’t remotely handle the number of tabs I typically keep open (it chews up all available memory, and hangs or crashes). Even for lighter browsing usage, Chrome chews battery faster and slows the rest of my computer down. Overall Safari is the most usable, followed by Firefox.
In various numerical computing in Javascript projects/experiments I’ve done, Safari is typically fastest, but not always. At any rate, it is clear that all of these teams have done very extensive optimization work, and all of the modern browsers are engineering marvels.
But there are a lot of weird nooks and corners in all of the browsers when it comes to Javascript performance.
As someone who uses JavaScriptCore in most of my projects (there is one for which I am currently intending to use v8, which is seriously "off brand" for me), and only finds Chrome usable with The Great Suspender (at which point I do use it instead of Safari for various reasons, particularly since I now use a Windows desktop ;P), I agree with you; but, everything you said seems unrelated to the topic?...
For years people have been ragging on Safari for being slightly slower to adopt new features, usually with the implication that they are holding back the web. But I think their current priorities are just fine.
I assume all of the ES2015 features will eventually see the same level of optimization as ES5-era code. I’m not too worried if it takes a few more years.
It's a specific term used in the ECMAScript specification [1]. It refers to any object for which at least one internal method does not behave the way it does for an ordinary object, such as one created with `{}`.
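For example, a plain Array is already an exotic object, because its custom [[DefineOwnProperty]] keeps `length` in sync with the indexed properties:

const ordinary = { length: 0 };
ordinary[2] = 'x';
console.log(ordinary.length); // 0 - nothing special happens on an ordinary object

const exotic = [];            // Arrays are exotic objects
exotic[2] = 'x';
console.log(exotic.length);   // 3 - the custom [[DefineOwnProperty]] updated length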
I mean yeah... You're telling the engine to do 2 different things there that just may have the same end-result.
The results diverge specifically when your array contains empty items (holes): spreading goes through the iterator and turns them into real entries holding undefined, while slice preserves them as holes.
The JS engine would need a specific optimization for cases where:
- The expression is equivalent to arr.slice(0)
- The iterator being de-structured is a vanilla array
- The array doesn't contain any empty items
Of course your code will be faster when all that's required is just a shallow copy.
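To make the divergence concrete:

const sparse = [1, , 3];            // index 1 is a hole (an "empty item")

const viaSpread = [...sparse];      // spread goes through the iterator: the hole becomes undefined
const viaSlice  = sparse.slice(0);  // slice copies the array, preserving the hole

console.log(1 in viaSpread);        // true  - a real entry holding undefined
console.log(1 in viaSlice);         // false - still a hole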