Most of the examples you point out make sense if you know that the relational operators (<, >, <=, >=) convert their operands to numbers (unless both sides are strings, in which case they compare lexicographically), while == only coerces to a number when one side is already a number.
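A quick sketch of those coercion rules in node (nothing here is from the thread, just standard JS behavior):

```javascript
// Relational operators coerce to numbers (unless both sides are strings):
console.log(null >= 0);   // true  — null becomes 0
console.log(null > 0);    // false
console.log(null == 0);   // false — == does not coerce null to a number
console.log("10" < "9");  // true  — both strings, so lexicographic compare
console.log("10" < 9);    // false — mixed types, so numeric compare
```

The `null >= 0` / `null == 0` pair is the classic gotcha: relational and equality comparison use different coercion rules.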
[] and {} each create a new object. === compares objects by identity, which isn't a unique feature. It's equivalent to Java's == on references, and two distinct objects are never equal to each other. For objects, === is equivalent to Python's `is`, and you'll get False from `[] is []` in Python, for example.
This notion of equality would make pattern matching on empty lists or objects impossible in languages such as Erlang or Haskell. Perhaps it makes sense with the knowledge that each array is a different object, but it is highly counterintuitive.
Do any of those return true if you use double equals? The distinction only really comes up when you have two different types on either side, and then it will coerce one of the types to match the other.
You're comparing two expressions of the same type so it doesn't matter whether you use == or ===, but == can return true even if the expressions are of different types.
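To make the distinction concrete, a few standard examples (same-type comparisons agree; cross-type ones can diverge):

```javascript
// Same type: == and === always agree
console.log(1 == 1, 1 === 1);  // true true
// Different types: == coerces, === never does
console.log(1 == "1");         // true  — string coerced to number
console.log(1 === "1");        // false
console.log(0 == false);       // true  — boolean coerced to number
console.log(0 === false);      // false
```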
Anyways, that video is awful and totally wrong. Node code can be clean, threaded code can be ugly. Both can be fast, but the perf characteristics are different (for OS threads, anyways).
I'll use a link shortener to prevent spoilers. This link elaborates exactly which rules are abused:
https://goo.gl/mu5xtr
"These requirements are a willful violation of the JavaScript specification current at the time of writing. The JavaScript specification requires that ToBoolean return true for all objects, and does not have provisions for objects acting as if they were undefined for the purposes of certain operators. This violation is motivated by a desire for compatibility..." with old Internet Explorer.
Does anybody really know why JS's type system was designed the way it was? It seems so out of whack with what people want out of a language, dynamically typed or otherwise.
It's because Eich was told to make JS C-like. In C, an empty array, an empty string, 0, false and null are all the same value: a word with all bits 0. So in JS those are all falsy and == to each other (roughly; [] is actually truthy even though [] == false, and null is == only to undefined).
In C, empty strings are truthy, being a non-null pointer to a NUL character. Depending on what you mean by "an empty array" that might also be a non-null pointer to a zero-length region of memory, and thus also truthy in C.
"" is actually true in C, but that's also because it represents something in memory. Consider how C encodes strings: an empty string is a _pointer_ to a 0 (or a '\0', if you want to think of it as a char). However, the pointer (being an address) is non-zero (since the address 0 is NULL).
That one isn't specific to Javascript, though. If you can find a mainstream language that doesn't have any values that aren't equal to themselves, I'll give you a cookie.
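JS, via IEEE 754, has exactly such a value: NaN. A short sketch of it, plus the reliable ways to detect it:

```javascript
// NaN is the canonical self-unequal value, mandated by IEEE 754:
console.log(NaN === NaN);         // false
console.log(NaN == NaN);          // false
// Reliable ways to test for NaN:
console.log(Number.isNaN(NaN));   // true
console.log(Object.is(NaN, NaN)); // true — Object.is uses SameValue, not ===
```

`x !== x` being true is itself a common (if cryptic) NaN test, for exactly this reason.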
(Going from memory of the last time I tried this challenge, since I can't get it to load at the moment.)
The important property of evaluation is determinism defined by the equality of the output. Classically, id(x) != id(x) is logically inconsistent.
Boolean Logic and natural languages are pretty mainstream in my opinion. Do you mean programming languages? If IEEE whatsthenumber is implemented in the FPUs to provide fcmp (Floating-point Compare Instruction), the languages don't have much of a choice.
When there are different types of equality, i.e. compare instructions, you have no equality. That's maybe a bit binary. I'm sorry, I thought this was Computer Science.
You seem to be splitting hairs and I'm not entirely sure why. Following the flow of conversation it should be pretty easy to infer the parent was referring to programming languages. Originally you said you don't think there should be a solution to the reflexive problem, and that was in reply to a comment mentioning that Javascript was ugly (which is where the context of _programming_ language came from). I, and I believe the parent as well, took that to mean that you consider Javascript ugly because you don't believe it should operate the way it does.

However, the point the parent was trying to make is that this is something common to all (programming) languages which follow the IEEE 754 standard, and is therefore not some special case which makes Javascript any more ugly than another (programming) language.
It may not seem obvious and it may seem logically inconsistent but, like you said, this is computer science, and things don't always work exactly the way we think they should, usually because the obvious logical solution has problems when implemented, so we change things around. In this case, NaN != NaN because of some of the limitations at the time IEEE 754 was proposed.
No, of course that was news to me. But I am not advocating any programming language. This looks like a wart, but a compelling one to commit to, because it's a simple solution.
> It may not seem obvious and it may seem logically inconsistent but, ...
And if an argument is made in defense of inconsistent logic, how could the argument be expected to be logically consistent?
Your argument from authority is not very good. There are involved explanations on stackoverflow.com, but I haven't bothered to read them yet. Another solution would be to set the carry flag.
> Classically, id(x) != id(x) is logically inconsistent.
Equality is domain dependent. For example, ∞ != ∞ could be justified, as can (0/0) != (0/0). In the domain of real numbers, equality can be an undecidable problem[1]. I guess the designers of these languages had to choose between tolerable defaults or throwing exceptions. I'm happy with JS doing this:
% node
> 1.0/-0
-Infinity
> 1.0/0
Infinity
> 0/0
NaN
> 0/NaN
NaN
> 1+NaN
NaN
The counter one (level 4) can be done with 11 chars. Solution below, found on Reddit[1].
Also, I could not manage fewer than 9 characters for level 3. (Edit: thanks @ensard)
The underscore is just another variable name, and a single parameter doesn't require parentheses. So instead of two chars for the two parens, you use a single char for a single-letter variable. JS FTW ;)
Oh, undef is as easy as knowing HTML trivia. Getting a falsey value to return an arbitrary string when invoked as a function is at least a little harder...
What is with the User/Score/Browser table? I only did like 4 of the tests and that thing popped up. I would finish it but I have some other things to do.
Looks like once you successfully score, it shows you how many characters it took others to solve. For example, the first one you can trivially type in true to solve it, with four characters. But it's solvable with only two characters.
ReturnTrue:1 Fetch API cannot load https://bigger.alf.nu/db/true/H5VmsyVCBD62113H5Slu. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://alf.nu' is therefore not allowed access. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
The link points to a page that only contains the text "return true to win" in large type on a white background. As with most modern art, I don't get it.