I was there live! Good times. A friend convinced me to dip out of another session to go watch Gary's talk. I feel lucky.
I've seen Gary give a few talks. All fantastic.
The Birth & Death of JavaScript is my second favorite. The fact that he stayed in character as someone from 2035 for the entire talk and used a slightly evolved English.. brilliant. I saw that at Strangeloop and have fonder memories than the PyCon rendition on his site.
Yeah, I re-watched that video tonight. We're coming up on the 10th anniversary of "The Birth and Death of JavaScript".
Much of it still holds water. What's disappointing is how little progress we've made in the nine years since that video came out. WASM has been a disappointment -- at least to me.
And the problem WASM solved really would be almost better instead if we just had a ring 4 for web browsers. It seems clear that X86 has won, unless you think that RISC V still has a chance (which is dubious unless you're in the embedded space)
> And the problem WASM solved really would be almost
> better instead if we just had a ring 4 for web
> browsers. It seems clear that X86 has won, unless
> you think that RISC V still has a chance
Mobile has a large and growing lead over desktop[0] for web traffic, and every popular mobile device released for the past ~decade has run on ARM.
I'm struggling to phrase a response that would be polite enough for HN, so instead I'll encourage you to re-read both my post and your link more carefully.
I thought we tried this with Google's NaCl and nobody liked it; the WASM problems don't seem to be particularly related to the instruction set, but rather to its lack of OS services; you'd need to reinvent those for a NaCl-like approach.
Yeah the NaCl model is a dead-end for web apps. It's a massive struggle to get all the vendors to align on things and trying to get them to align on a huge from-scratch redesign and reimplementation effort is way way harder. We'll see if it works out for WASI, which is basically an attempt at that but in the server application space.
Apple has arguably proven that it is possible to transition to ARM, and considering how much of a foothold that architecture already has in mobile and game consoles, it seems unwise to declare that x86 has won.
I'm not defending javascript, but I just want to answer the "wat" by saying these are all just jabs at its type coercion rules, the fact that an array is an object, and that the toString method for each type usually isn't that helpful by default.
None of this matters most of the time, but yeah it's still funny.
When I first saw this video, besides laughing constantly I also generally anticipated what was coming next, because I was intimately familiar with JS type coercion esoterica. A decade later, with a near-decade-long strict personal rule to never rely on implicit type coercion (with one exception, undefined/null, because it's almost always what you'd want/intend), quite a lot of that memorization has fallen out of my head, and I'm prepared to let it stay that way.
Because yep, it pretty much reduces to the category Type Coercion Rules. But that category in JS is so absurd that normal absurdities feel like minor tomfoolery. (And lest I paint myself as the typical HN anti-JS crusader, I do almost all of my work and personal dev in JS/TS, mostly without complaint.)
I once read the spec to see why the hell type coercion was the way it was... And it kinda made sense? But this was 5-6 years ago and I forgot all the details. Because without going into deep dark corners of the spec it doesn't really make sense. At all :)
And even that last one you can now finally explicitly account for with the unfortunately spelled, but critically useful, double question mark (`??`).
Yeah, and I also make exceptions for == null and != null where it aids readability of the code. Linter blocks any other type coercive comparison, but allows that one (and honestly I think it should be mandated, but I haven’t had a need to push that lever for a long time either).
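For anyone following along, a minimal sketch of both idioms (the object and field names here are made up for illustration):

const config = { retries: 0 };

// ?? falls back only on null/undefined; || would also reject 0, '' and false
const retries = config.retries ?? 3;    // 0
const timeout = config.timeout ?? 1000; // 1000

// == null is the one loose comparison worth allowing: it is true
// exactly when the value is null or undefined, and for nothing else
console.log(config.timeout == null); // true
console.log(config.retries == null); // false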
The sentiment is that JS coercion is so bad that explicit-coercion programming habits, linters, TypeScript, and parsing safeguard libraries (io-ts, Zod, Joi, JSON Schema, etc.) have become so commonplace that people don't really let implicit coercion occur very often, and thus it doesn't matter most of the time.
Whenever I kick off a new npm init, it's always followed by installing typescript, eslint, and usually Zod before I even write the first line of code. If double equals were removed from ECMAScript, I would likely never notice.
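As a sketch of what that boundary looks like (using Zod's documented z.object/parse API; the schema itself is made up):

import { z } from "zod";

// Validate untrusted input explicitly instead of letting coercion guess
const User = z.object({ id: z.number(), name: z.string() });

User.parse(JSON.parse('{"id": 1, "name": "Ada"}'));   // ok
User.parse(JSON.parse('{"id": "1", "name": "Ada"}')); // throws ZodError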
It doesn't matter in the sense that it's a case of "garbage in, garbage out". I have no idea what `{} + 1` gives you (it's `'[object Object]1'` BTW), but whatever it is doesn't matter, because it's a rubbish computation to make.
It would have made more sense to throw an exception, but JavaScript at its origin was meant to be a simple scripting language. Nowadays with TypeScript it's much less of an issue.
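For what it's worth, TypeScript refuses the expression from the parent comment at compile time (a sketch; the exact diagnostic text varies by compiler version):

const x = ({}) + 1;
// error TS2365: Operator '+' cannot be applied to types '{}' and 'number'.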
Obviously this isn't everything and not perfect, but a lot of this tedium can be automated away if you have a few good examples of the happy path and some basic tests in place to prevent quick and dirty changes from poking holes in these layers.
Implicit conversions are very pernicious. In Virgil, the only implicit conversions that happen are promotions, i.e. representation changes that don't change values, like float -> double, int -> larger int, int -> double (where no rounding occurs), etc. AFAICT this is a pretty sane model.
It's not just implicit conversions. It's misuse of operators, too. Using the '+' operator for string append is the kind of thing that language designers get initially seduced by ("it just makes sense", "so convenient", "neat"), but the reality is that, combined with implicit type coercions, it's just a recipe for exotic and unexpected behaviours.
Precedence rules and expectations for 'string append' vs 'add numbers' are different. And then if you throw in 'multiplication = repeat'... now you have this Wat-talk video (which is a classic that I always share with people when the topic comes up).
Maybe I'm just getting stuffy as I get old, but the older I get the more .. explicit.. I want my PL semantics to be.
Browbeating JS with this derision has become almost an article of faith in online circles. It's clearly not great behavior, but generally it doesn't have much impact on day-to-day work.
Yet people love making themselves feel better by latching on to criticism. Signaling your greatness & expertness by beating up on others.
That's completely fair, but I FEEL (don't know for sure) that many people who criticize JavaScript don't know it very well and don't pick meaningful things to criticize, going instead for things like its rubbish type coercions (which don't have a big impact in real life).
You should a) pick big-impact items to argue your case, and b) still acknowledge strengths & successes amid your critique.
Also, if you are facing the world's most popular language, and a very successful one that people build all sorts of stuff with without complaint, that's a big barrier.
I think most language bigots lack a good narrative. They have a language they love, or a language they hate, and they harp on specific issues. But for most coders, choice of language is actually pretty far down the list, especially if you ask them to ignore the ecosystem of packages/tooling and focus on the language itself. Going beyond scoring points to tell a story about life being authentically different on one path or another keeps getting harder and harder to see, as we keep marching through the decades and see few really strong, distinct, clear advantages anywhere -- no one really excelling at their claimed bag in an unreplicable way that shows it really is the language, not just the ecosystem. (Rust being one genuine exception.)
Agreed that this is a wildly broad social phenomenon.
As for what to do, I think socializing a defense has merit: talking about these dangers, trying to make people smart, critical of criticism, wary of over-zeal or conviction. Posting about, sharing, and discussing malignant maligning forces, drawing light to radicalization, showing some of the bad impacts this talk has had -- not as a talk itself, but in terms of what a magnet it has become for deeply problematic and zealous negativity.
I see the challenge here as not merely personal but also societal: how we deal with especially highly critical, highly skeptical behaviors, those that write things off, and the excuses, however shallow, that they can stand on to reject things.
In case some JavaScript developers are looking for Wats in other languages when people come to beat up on JS again :), here's a "delightful" one from Unity in C# that I've had in my notes for years but never published anywhere. (Note that C# by itself doesn't have this behavior of course - Unity is the Wat factory.)
using System;
using UnityEngine;

[Serializable]
public class Foo {
    public string bar = "Wat";
}

public class Test : MonoBehaviour {
    public Foo foo = null;

    void Update() {
        Debug.Log(foo.bar); // logs Wat
    }
}
As someone who writes (not Unity) C# for a living, what is going on here? Does Unity maintain its own, non-standard version of C# that changed the behavior of null?
"Inline serialization can’t represent null, instead, it replaces null with an inline object that has unassigned fields."[0]
The key thing here is the [Serializable] attribute on the class. [Serializable] is a standard .NET attribute that Unity's serialization system picks up.
It's used to convert in-memory data structures / objects into a format that can be stored and loaded easily. You could use it to build a save system for your game, but more generally it's used for working on your project in the Unity Editor and then saving those objects to .scene, .prefab, .asset files and so on. Serializable objects and their fields (public fields are serialized by default) are editable in the Unity Editor's Inspector tab, which is great for quickly iterating and for working with designers who don't want to open a code editor or Excel in order to modify game object values and scenes.
But Serializable has some "wat" moments and one of them is that it does not support null objects.
MonoBehaviour is a Serializable type provided by UnityEngine. You generally make Game Objects by composing together different MonoBehaviours. Because the "Test" MonoBehaviour has a public field "Foo foo", it is serialized by default. But it cannot be null. If it is assigned null, Unity replaces it with an instance that has default field values.
The correct way to handle this (though not very obvious unless you're deep into Unity) is to use the [SerializeReference] attribute instead. Or even better, follow extrememacaroni's recommendation to use Odin's serializer replacement. It's Apache-licensed and basically a drop-in replacement with some additional features.
No non-standard C# involved here, just the runtime reflection allowed by attributes.
Unity does in fact have a special approach to nullability, though they may have recently changed it. They (iirc, it's been a while) overloaded the == and != operators so that non-null instances could compare equal to null if the native object represented by the C# object had been destroyed.
Not with that syntax. That syntax declares 'bar' as an instance variable, and 'foo' there is a null reference, so trying to dereference it like that should produce a NullReferenceException.
Sure, you could change the declaration of 'bar' to include 'static', and change the access of 'bar' from 'foo.bar' to 'Foo.bar', but... that's not what the code says.
I think you can do the same in C++, or at least something very similar. If a method isn't virtual and doesn't touch the object's fields, you can invoke it on null quite happily.
"Mutating the immutable" is my personal favourite - the second line of code fails with an exception despite having already succeeded:
>>> t = ([1, 2], 5, 6)
>>> t[0] += [3, 4]
TypeError: 'tuple' object does not support item assignment
>>> print(t)
([1, 2, 3, 4], 5, 6)
I've never been a fan of the decision to have `a += b` sometimes do `a = a + b` and sometimes update `a` in place. My objection was initially just due to theoretical inelegance - but this code shows that decision directly causing a concrete problem.
Bryce Adelstein Lelbach, the ISO C++ Library evolution chair, says that he would rather be consistently wrong than inconsistently right. Maybe that's what the Python core team thought, "we'd better make this consistent with +=, whose quirks people already know".
Python itself warns you nowadays too; it will print out
<stdin>:1: SyntaxWarning: "is" with a literal. Did you mean "=="?
Some more weird/fun stuff about that: If you throw that code into one file instead of a repl, `x is 257` will be True. If you instead instantiate `x = 257` in a different file and import it, `x is 257` will be False again. In a repl, if you run them on the same line, it ends up as True!
>>> x = 257
>>> print(x is 257)
False
>>> y = 257; print(y is 257)
True
If they're in the same file (or on the same line in a repl), the integer literals are loaded into the `co_consts` tuple during the initial compilation to bytecode, and so they refer to the same object. This only works for integer literals and simple expressions that can be constant-folded by Python (e.g. it also works for `x = 257+1; print(x is 258)`).
And here I was thinking it’s hilarious that JS `undefined` is assignable! I’m sure there’s good historical reason for this, but being able to change booleans to mean their opposite is something extra special to me
Apparently it's False in Python 2. Python 2 integer division rounds towards negative infinity, which makes the order of operations important in -1/3: (-1)/3 (and hence -1/3) evaluates to -1, while -(1/3) evaluates to 0.
I gave it a try as well, and GPT-4 taught me something new about Python (I don't normally write Python, so it's not surprising):
def add_to_list(value, target_list=[]):
    target_list.append(value)
    return target_list

list1 = add_to_list(1)
list2 = add_to_list(2)

print(list1)  # [1, 2]
print(list2)  # [1, 2]
In this example, one might expect list1 to contain only 1 and list2 to contain only 2. However, because the default value for the target_list parameter is mutable, it gets shared between function calls, leading to unexpected behavior.
The other examples it gave me were not that unexpected. Prompt was:
> Show me examples of where Python type inference works in unexpected ways.
Some people might not know this yet: the common idiom is to have the default value be None, and then start your function by checking for it, e.g. `if target_list is None: target_list = []`.
Ah, yeah, compared to the examples in the video that makes so much sense in a normal day to day application. I commonly do the following operations without much thought:
[] + []
{} + {}
[] + {}
{} + []
Great examples :)
The point is not what the data or naming of the functions/variables is, but what's happening when you run it...
`[] + []` is more likely to happen than defining a function that does list.append(value) and then returns whatever list.append(value) returns.
I'm talking about the following, where we don't intend to do `[] + []`, but we feed some lists to functions that do something to them, and we end up with empty lists in some cases because we never inspect the contents of a and b after feeding them to the functions.
Why would you define a default argument of an empty list if you're not gonna check if it's a list and if it's empty?
a = [1,2]
someFunction(a)
b = [3,4]
someOtherFunction(b)
a + b
The reason it's weird is that Python evaluates default arguments at function definition time rather than at call time. This means that it's unsafe to use mutable values as default values in params. See the equivalent in JavaScript:
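(A sketch of the equivalent; note that, unlike Python, JavaScript re-evaluates the default initializer on every call, so the trap doesn't reproduce:)

function addToList(value, targetList = []) {
  targetList.push(value);
  return targetList;
}

console.log(addToList(1)); // [1]
console.log(addToList(2)); // [2] -- a fresh [] is created for each call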
I remember encountering number 4 in the wild, so I think it deserves to be a genuine "wat". That said, I'm not sure why I (or anyone else) thought I would need to use the identity operator on numbers, but it's certainly a head-scratcher until you dig into the underlying reason.
Maybe such code is written by people scarred by JavaScript and trying to avoid ==. But not sure: I saw this in the wild as well — actually, the victim brought the broken function to chat scratching his head — and he was a quite experienced backend guy (and the experience wasn't in nodejs, AFAIK).
A pretty funny kind of bug, actually: it was some relatively simple function, passing the unit tests but doing some nonsense on real data. Thing is, it was comparing some sequential ids and working perfectly as long as there were no more than 256 objects.
Furthermore, the Ruby one fails for me too, in irb 1.4.2, ruby 3.0.5p211:
irb(main):002:0> def method_missing(*args); args.join(" "); end
(irb):2: warning: redefining Object#method_missing may cause infinite loop
/usr/bin/irb: machine stack overflow in critical region (fatal)
Your context is implicitly parenthesizing the expressions you are evaluating, treating them as expressions rather than statements. Statements are also expressions of a sort in JavaScript, so there's also that. Here's the full wat matrix:
> do {[] + []} while (false);
''
> do {[] + {}} while (false);
'[object Object]'
> do {{} + []} while (false);
0
> do {{} + {}} while (false);
NaN
> do {([] + [])} while (false);
''
> do {([] + {})} while (false);
'[object Object]'
> do {({} + [])} while (false);
'[object Object]'
> do {({} + {})} while (false);
'[object Object][object Object]'
(as a hint, `{} + []` is actually `{}; +[]`, that is not an addition operator in that context).
Yeah, Gary was actually wrong about that one; you can hear objections from the audience:
1. [] + {} is actually the string '[object Object]', not an object per se; {} is the object, and the [] + coerces it to a string.
2. {} + [] depends on the runtime. In Firefox's console the first {} is actually an empty block statement, not an object. Object.fromEntries([]) + [] is always the string '[object Object]', just like [] + {}. In node you can do this: {} {} + [], or even {1 + 1} + [], for the desired 0. Effectively it is the same as +[].
3. {} + {} is the same: the first {} is an empty block statement.
The format would have inevitably had "wa" somewhere in it for "web assembly", and indeed I assume that the authors were well aware of this famous talk and chose "wat" as a cheeky reference.
I did some digging on that in the past and didn't come to any conclusion. I guess it's possible that someone somewhere came up with it as a reference to the video. But also, maybe it's just a coincidence and they came up with WebAssembly Text Format first, then the acronym. But then it should have been WATF instead, so maybe not, after all...
Would love any sort of reference to the discussion(s) where the naming and file format name comes up, but didn't find anything relevant last time I looked.
For a while they were .wast files, I don't remember there ever being an official discussion or debate over whether to make it .wat. Your guess at least seems plausible.
JavaScript is some of the best evidence we have that design consistency is not the dominant factor in determining language popularity.
I haven't figured out what is yet. I have a hunch that having a short path to working small and medium complexity projects is a big part of it. But I can't rule out the possibility that the most important thing is just application context... Would anybody still care about JavaScript if the browser hadn't happened?
The wat video is one of my favorite videos to show newer programmers. It really convinces them not to use Javascript, and consider more serious, mature languages... like Rust.
(If this was a TED talk, this is where the audience would confusedly applaud)
While I like Rust a lot in many ways, and have been around for quite some time :), JavaScript, like Rust, is just a tool, and experienced programmers choose the right tool for the job.
Just going by syntax (or in this case type coercion) alone is too simplistic for me.
Erlang, Eiffel, C, ASM, JS, Rust, C#, F#, Java, or almost any other language can be a good choice depending on circumstances.
As for the specific claim of maturity: Rust is quite solid and attractive to me, but based on the number of changes over the last few years, it is not necessarily more mature than any other big language out there, and arguably quite young, albeit with a nice and growing ecosystem. Rust is also in danger of going down the C++ route of over-engineering, with too many levels of abstraction. (And I write this as someone who learned C++ from a typewriter-style book written by Stroustrup himself and loved every minute of it.)
Really, you're just closing a door on them based on your opinion. Newer programmers, and you let them consider Rust vs JS? That's a `wat` in my opinion.
This was probably more humorous in person, but I do wonder if it's human instinct to ridicule things one doesn't understand. Intuition isn't universal, and what might appear to be straightforward rules in isolation can lead to complex and surprising emergent behaviour when combined.
I am in no sense a person who has the chops, inclination, or talent to create a programming language. However, I do sometimes think about "What would a programming language with the following precepts be like?" and pose to myself various hypothetical axioms, or results, or constructs.
It would be ... interesting ... to design a language without WATs. Where the truth tables and the NaNs and whatnot all made a kind of sense. I am left wondering if perhaps a minimum number of WATs would inevitably be present, no matter your design.
I think the language spec is where you would define things so you don't get "WATs". Getting a language that is intuitive and consistent is the tricky part.
I figured out that "|_| Some(2) =>" in a match arm is just a disguise for the alternation "_ | Some(2)" with a prefix separator, but what does the `let` do there?
Not sure what JS shell he uses, since it is an old video, but in a browser or node you will get predictable results for his examples.
What you need to know is that in JS the '+' sign will add numbers only if both operands are numbers; otherwise it coerces both operands to strings, which is what happens for {} or []. To convert to a string, JS uses the .toString() method, as in ({}).toString() and [].toString(). Now ({}).toString() is the string '[object Object]', and [].toString() is the empty string '' (for a non-empty array it's the elements separated by commas). So when we add things in JS and the two sides are not both numbers, we are actually adding strings! And adding strings means concatenation, which works as you would expect: it just puts the two strings together.
Having that, let's take a look at those examples again:
1. [] + {} equal to '[object Object]'
2. {} + [] equal to '[object Object]'
3. {} + {} equal to '[object Object][object Object]'
So no 'magic' involved there. Note that some results differ from the video; you can check them in the console in your browser.
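A quick way to convince yourself, pasteable into any console (parenthesized so the leading {} can't be read as a block statement):

({}).toString();   // '[object Object]'
[].toString();     // ''
[1, 2].toString(); // '1,2'
([] + {});         // '[object Object]'
({} + []);         // '[object Object]'
({} + {});         // '[object Object][object Object]'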
None of the behaviors mentioned in the video have changed; they are all pinned down by ECMAScript standards and to change them would be a breaking change to the internet.
The behavior of your console might be slightly different from the behavior of his console, but there are still many reasonable ways to replicate his results. See my comment below if you're still unsure about this point.
> but there are still many reasonable ways to replicate his results
But can you replicate his results for js in any modern browser or node environment? I agree that standards are fixed, but there is definitely a difference between implementation of those standards in different environments
Yes. It is very easy to replicate his results for JS in every modern browser and Node.js environment, because these results are part of the ECMAScript standard and can never ever ever ever be changed. Again, if you are unsure about this point, please read the other comments I have left in these conversational threads.
(Just to be clear, I'm not saying any of this to jump on a bashing-JS train. Think of e.g. the abiding love of having been married for 20+ years, it's not like your spouse has no sandpapery parts or faults, we can healthily admit them to each other at this point -- I just have learned not to rub against them.)
It’s been more than a decade of this drivel, that largely is not weird or bizarre if you actually think about what is being shown. The only legitimate complaints come from the type coercion rules, and even those are completely consistent albeit suffering from the era of “scripting languages should do conversions implicitly”.
Things like
{} + {}
Behave weirdly only if you don't ask what happens in every other brace-based language when given that syntax: the first {} is parsed as a block statement in C, C++, Java, and C#, leaving you with a +{} expression.
Similarly conflating repl output with the value of expressions as though the language is doing something odd.
Or the absurd belief that typeof NaN should be something other than number. While I do love the idea of statically typed languages being forced to crash on NaN results at runtime, having that happen because the value's type suddenly changed would be stupid.
One can fully understand the type coercion rules and the effects that result from them, and still acknowledge that said type coercion rules are not optimal, at least because they lead to very non-optimal outcomes.
I for one am tired of people claiming that this is not "weird or bizarre", because not only is it obvious that several parts of JavaScript including the type coercion rules are (at best) strange, it's also absolutely not surprising from its history: The person who originally invented JavaScript wanted to implement Scheme, but management wanted to go for "something similar to Java". JavaScript was prototyped in ten days[1], and I doubt anyone back then really anticipated the role it would later come to have.
There is an entire standards body that aims to hammer JavaScript into something that makes more sense, because, well, we're stuck with it. (Or rather, you are stuck with it; I don't do web development.)
Brilliant and actually humorous presentation. At first, I thought it was going to be some nerd deep-dive, but then it really was just a presentation of wats.
And honest to god, how do people program in JavaScript?
Javascript has a lot of weird type coercion behavior, but it's not actually hard to avoid. "Never use ==" and "don't use operators on things if it's not obvious what they would do" gets you pretty much everything you need there.
This is not to say that there's nothing else weird going on in Javascript, but the things people show off most as "wow, isn't this weird?!" aren't that consequential.
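If you want a linter to enforce that first rule, eslint's built-in eqeqeq rule can do it while still permitting the == null idiom mentioned elsewhere in the thread; a sketch of the relevant .eslintrc.json fragment:

{
  "rules": {
    "eqeqeq": ["error", "always", { "null": "ignore" }]
  }
}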
That’s the problem. This is an unexpected situation and the JS philosophy is to silently do something that tries to make the best of it, rather than e.g. throw an error or warning.
I think it’s perfectly reasonable to bash JS for this. This is a direct consequence of the JS design philosophy. This is similar to the memory-safe vs memory-unsafe debate (except IMO, slightly dumber because I can’t personally think of any reason this is the behaviour you would want, whereas there might be performance reasons for memory-unsafe). It is increasing the bar of producing good code, without any actual benefit.
These issues pretty much all come down to implicit conversions, which you quickly learn to avoid as it’s usually unhelpful (likely a reason for typescript’s popularity).
Since JavaScript has only a few data types, and no operator overloading, I'm just going to memorize how the operators work on each type. This will save my sanity from the underlying logic.
TypeScript has its own WATs too, and to me they are almost worse, because you have this false sense of security based on it being TypeScript and not JavaScript.
I don't understand why this is a "wat". This is just a) not understanding how `map` works, and b) not understanding the args to `parseInt`. The callback function for `map` is passed three args: the current item, the index of that item, and the whole array. `parseInt` takes two args[1]: the string and the radix. So you're calling (unrolled):
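Assuming the example under discussion was the classic ['1', '7', '11'].map(parseInt), that unrolls to:

parseInt('1', 0);  // 1   (radix 0 falls back to base 10)
parseInt('7', 1);  // NaN (radix 1 is invalid)
parseInt('11', 2); // 3   ('11' parsed as binary)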
So TypeScript intentionally suppresses this kind of complaint when matching up function types even though it issues them when calling functions directly. The reason for that has a justification, but in contexts like this, I think it is a WAT-worthy interaction.
Map passes three arguments to parseInt but parseInt only takes two. I think a language with a type system ought to have a way to get a warning for that kind of thing. Especially when it does so in other contexts.
I understand why it is this way, but I think it is very surprising and unintuitive behavior.
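A sketch of the inconsistency (the diagnostic code is what current TypeScript versions emit for extra arguments):

// Direct call: TS rejects the extra argument
parseInt('11', 2, 'extra'); // error TS2554: Expected 1-2 arguments, but got 3.

// Function-type matching: a callback taking fewer parameters is assignable,
// so parseInt slots into map's (value, index, array) callback without complaint
['1', '7', '11'].map(parseInt); // compiles fine, evaluates to [1, NaN, 3]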
I agree with my sibling comments, but want to add that nowhere does TS imply it has its own stdlib or anything. If you type map, you know full well it's the JS map that's going to be executed. I don't think TS ever wants to, or could, replace the JS stdlib.
That’s a JS WAT, and it’s fair to say TS should error, but TS is explicitly not a linter and the code is “correct” so it’s unclear what else it should do.
I disagree that the code is "correct". TypeScript warns for passing too many arguments to functions in other contexts even though that could also be called "correct" because even though it works, it was probably unintentional.
It’s the wat I’ve seen have the most security impact.
Deep merging two JSON parsed objects is innocuous enough everywhere else that most don’t think twice about doing it. Lots of widely used libraries that provide deep merging utilities have had security vulnerabilities because of this.
I guess you could argue that the wat is that objects coming out of JSON.parse don’t have null as its prototype.
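A minimal sketch of that class of vulnerability; naiveMerge is a made-up stand-in for the vulnerable utilities:

function naiveMerge(target, source) {
  for (const key in source) {
    if (typeof source[key] === 'object' && source[key] !== null) {
      if (typeof target[key] !== 'object' || target[key] === null) target[key] = {};
      naiveMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates a plain own property literally named __proto__ ...
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
// ... and the merge then walks into target.__proto__, i.e. Object.prototype
naiveMerge({}, payload);

console.log(({}).isAdmin); // true -- Object.prototype has been polluted

The common fixes are to skip __proto__/constructor/prototype keys during the merge or, as suggested above, to give parsed objects a null prototype.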
Isn’t that just that new behavior of Twitter where people can forbid what circles can see their posts? I also found that I can no longer view several “global” posts unless I follow the given person, which has to be approved or something like that?
https://www.destroyallsoftware.com/talks/the-birth-and-death...