What speculation? If you have a "let" in some lexical scope, AND this is strict mode, AND there are no calls to eval in that scope, AND there are no assignments to it within that scope... how is that different from const?
I can understand why in a JavaScript engine there may be plenty of other concerns and it may be completely reasonable to not do the analysis, but in the common case it should (at least) be something you could do just by walking the AST, if you so desired, without even knowing the surrounding code.
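For example (a minimal sketch; the names are just for illustration):

```js
"use strict";
// `pi` is declared with `let`, initialized once, never reassigned,
// and there's no eval in this scope: a single walk of this
// function's AST is enough to prove it behaves exactly like a const.
function area(r) {
  let pi = 3.14159;
  return pi * r * r;
}
```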
> Speculating that a debugger is not attached and modifying local variables, is one example.
A debugger can do anything. You can't outthink a debugger and shouldn't try.
> Yes that’s why it’s speculation - handle the common cases and speculate away the uncommon cases.
Speculation is when you guess and need a guard in case the guess is wrong. They're describing a situation where it can't be wrong, so you wouldn't need to speculate.
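For contrast, here's a hand-written analogue of what speculation looks like (names are purely illustrative): guess the common case, guard the guess, fall back when the guard fails:

```js
// Generic slow path: handles strings, objects, anything `+` accepts.
function addGeneric(a, b) {
  return a + b;
}

// Speculative fast path: guesses both operands are numbers.
function addSpeculative(a, b) {
  // The guard: verify the guess before trusting it.
  if (typeof a === "number" && typeof b === "number") {
    return a + b; // fast path, valid only under the guess
  }
  return addGeneric(a, b); // guard failed: "deoptimize" to the slow path
}
```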
> You'd get extremely slow code with this approach!
You must be interpreting that differently than I meant it. The approach I'm suggesting is "pretend debuggers don't exist when optimizing". It gives you the fast code.
A debugger can break any assumption you make. Even unoptimized code could crash if a debugger messes with it. The fear of a debugger should never make you decide not to do an optimization.
How would you use a guard if you want to deoptimize because a debugger has attached? You'd need a guard between every instruction, and even then it might not be enough.
> Yes, you're guessing that nobody will attach a debugger and you're guarding that no debugger has been attached. That guard is usually implicit.
> So you deoptimise when someone starts debugging.
If an "implicit" guard means "we'll have a function the debugger calls, telling us to redo the compilation", then that's not something you need to do dynamic analysis for, and it doesn't make your compilation more complicated. You don't "speculate away" that case unless you're using the word "speculate" to include "completely ignore for now, without even a guard", which I didn't think fell under that umbrella. Does it?
> It can be wrong... if someone's using a debugger.
It's not wrong. A debugger can make 2+2 be 5. Debuggers don't follow any rules, but that doesn't mean your compiler should try (and inevitably fail) to make code that works in a world where no rules exist.
I think it's just a case of me using more generalised terminology.
'pretend X doesn't exist' is in my mind 'speculate that X isn't enabled'. It really means the same thing, doesn't it?
You don't need a guard between every instruction, because attaching the debugger is an asynchronous operation - it's already non-deterministic when the application will receive the instruction to move to debug mode, so as long as it checks frequently enough (usually once per loop iteration and once per function call) that's sufficient.
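In code terms, something like this polling shape (illustrative names only; real engines do this at the machine-code level, not in user JavaScript):

```js
let debuggerRequested = false; // set asynchronously when a debugger attaches

function bailOutToInterpreter() {
  // Hypothetical: hand control back to a slow, fully debuggable tier.
  throw new Error("deoptimize: re-enter the interpreter");
}

function hotLoop(items) {
  if (debuggerRequested) bailOutToInterpreter(); // once per function call
  let sum = 0;
  for (const item of items) {
    if (debuggerRequested) bailOutToInterpreter(); // once per iteration
    sum += item;
  }
  return sum;
}
```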
I think "we'll have a function the debugger calls, telling us to redo the compilation" does describe speculation and deoptimisation. Remember the function may be currently executing and may never return, so it's not as simple as replacing it with a different version. You may need to replace it while it's running.
It does make compilation more complicated, because you need to be able to restore the full debug state of the application, which means keeping intermediate results you wouldn't otherwise store, as well as extra metadata.
> Debuggers don't follow any rules
Debuggers can be a formally or informally specified part of the language, and their behaviour may have to follow rules about which intermediate results are visible; those rules can constrain your compilation.
My argument is: if you treat debugging as speculation, your model is simpler and easier to work with, and you don't need two kinds of deoptimisation. Real languages are implemented this way.
Here are two papers I've written about these topics, taking the idea even further.
> 'pretend X doesn't exist' is in my mind 'speculate that X isn't enabled'. It really means the same thing doesn't it?
Not when you're talking about needing "dynamic analysis", which is what made me not understand the way you were using that word.
> It does make compilation more complicated because you need to be able to restore the full debug state of the application, which means storing some results you may not choose to do otherwise, and storing extra meta-data.
You don't need to, in the general case.
> Debuggers can be a formally or informally specified part of the language, and their behaviour may have to follow rules about which intermediate results are visible which may constrain your compilation.
> My argument is: if you do treat debugging as speculation then your model is simpler and easier to work with and you don't need two kinds of deoptimisation. Real languages are implemented this way.
I suppose, but that's only one option. You could also make the deoptimization for debugging much weaker, or nonexistent, and that would be valid too, without having to give up simplicity.
And separately, wanting to change the value of a const while debugging is a valid use case too. But once you support that, there's no reason a never-written let needs to be optimized differently from a const.