
It's great to see JS getting some of the features of better planned languages.

But I'm still very nervous about some of the stuff mentioned here with regard to mutation. Taking Rust and Clojure as references, you always know for sure whether or not a call to e.g. `flat` will result in a mutation.

In JS, because of past experience, I'd never be completely confident that I wasn't mutating something by mistake. I don't know if you could retrofit features like const or mut. But, speaking personally, it might create enough of a safety net to consider JS again.

(Maybe I'm missing an obvious feature?)



Mutation is a real weakness of JavaScript. I think the general idea is "methods don't mutate unless they are ancient". For example Array.map (an IE9-era feature) doesn't mutate, while Array.sort (an IE5.5-era feature) does. Similarly a "for(let i=0; i<arr.length; i++) arr[i] = b" loop will obviously mutate the array's elements, while a "for(let e of arr) e = b" loop won't; the trend is towards less mutability in new features.

Proper immutable support (or a stronger concept of const) would also help with this.


> for(let e of arr) e = b

Is that just

  arr.map(e => b)

?


It doesn't do anything; it was my attempt to give a simple example that is somewhat obvious while glossing over the complexity:

    let arr = [{a: 1, b: ["a", "b"]}, {a: 9, b: ["a","c"]}];
    let b = "!";

    for (let e of arr) e = b;
    console.log(arr);
    // [{a: 1, b: ["a", "b"]}, {a: 9, b: ["a","c"]}] (unmodified)

    for (let e of arr) e.a = 2;
    console.log(arr);
    // [{a: 2, b: ["a", "b"]}, {a: 2, b: ["a","c"]}] (modified)

    for (let e of arr) { let copy = {...e}; copy.a = 4; }
    console.log(arr);
    // [{a: 2, b: ["a", "b"]}, {a: 2, b: ["a","c"]}] (unmodified)

    for (let e of arr) { let copy = {...e}; copy.b[0] = "!"; }
    console.log(arr);
    // [{a: 2, b: ["!", "b"]}, {a: 2, b: ["!","c"]}] (modified)

The most frustrating thing about all of this is that the best way to make a deep copy to avoid all unwanted modification is JSON.parse(JSON.stringify(arr))


Thanks for sharing more context. I see what you were trying to illustrate with your earlier examples now.

Why not make a little library so you can do these kinds of things safely without reimplementing them every project? Maybe call it safeCopy or something.


...honestly, I don't see much complexity here. Understanding reference types and the difference between deep and shallow copies makes it pretty straightforward; the result is the same as it would be in Python or Java.


I don't think Python or Java are examples of good or simple behavior here. Java at least has a strong culture of "copy everything or make it immutable at class boundaries", while JavaScript libraries often leave you guessing.

Examples where this is easy are C, where every copy and allocation is very explicit; C++, which has a genuinely useful `const` concept; or Rust, with its ownership concept that makes mutability very obvious.


Provided it was syntactically correct, the first call would assign b to the loop variable e on each iteration (pretty pointless).

The arr.map, provided you had a variable on the left side, would collect the result of each iteration into a new array (so an array containing all b values); however, I guess you meant arr.map(e => e).


> Provided it was syntactically correct

Provided arr is iterable (e.g. an array), it's perfectly valid JS. Your linter might scream at you to add braces and a semicolon, but neither is needed for correctness here.


Nope. I meant arr.map(e => b).

I try to reimplement pointless code, I get more pointless code.


Nope. The first does not do anything. The second makes a new array of length 'arr.length' with 'b' in every cell.


No, it's broken code and doesn't effectively do anything.


TypeScript readonly properties and types are amazing for this: ReadonlyMap<K, V>, ReadonlyArray<T>, Readonly<T>.


Yes, this is a real problem and I've been bitten by it more times than I can count. Now I always keep this handy website ready: https://doesitmutate.xyz/


TypeScript does a pretty good job here if you're willing to add a bit of extra syntax:

  const a = [1,2,3]
  a.push(4) // compiles fine: const only prevents reassigning a

  const b: readonly number[] = [1,2,3]
  b.push(4) // Property 'push' does not exist on type 'readonly number[]'.


Well, Object.freeze in plain JS can help too.

    > Object.freeze([1,2,3]).push(4)
    TypeError: can't define array index property past the end of an array with non-writable length (firefox)
    Uncaught TypeError: Cannot add property 3, object is not extensible (chrome)
Of course, it will only blow up at runtime. But better than not blowing up at all, creating heisenbugs and such.

I often find myself writing classes where the last step of a constructor is to Object.freeze (or at least Object.seal) itself.


For what it's worth, there are only two, maybe three methods in that entire list that mutate where it's not obvious: sort, reverse, and (maybe) splice. All the other methods (like push, pop, fill, etc) are methods whose entire purpose is to mutate the array.


That was my first impression. But then the same logic applies to concat ("I want to add another array").


Sometimes I don’t. Actually, usually I don’t; I do a lot of [].concat(a, b).


Thanks for sharing.

I think this is the kind of thing you just have to learn when you use any language. But when you're switching between half a dozen, being able to rely on consistent founding design principles really makes things easier. And when there aren't any, this kind of guide helps.


If we're not talking about full-on language features to enforce immutability, I really like Python's design here: functions that mutate never return a value and are named with verbs (e.g. 'sort()'), while functions that don't mutate return their value and are named with adjectives (e.g. 'sorted()'). This feels natural: mutations are actions, while pure functions are descriptions.

The only real downside is that the lack of return values means you can't chain mutations, but personally that never bothered me.


I used to like that distinction as well, but verbs are too useful to let mutating stuff use them all up! And pure functions are actions as well; they just result in a new thing. Also, some verbs sound awkward adjectified: take, drop, show, go, get...


That sounds pretty reasonable. I can see the case for mutation support, but the unpredictable nature of it is what is frustrating and dangerous.


Coming from PHP, we’re used to it. Half the methods have $haystack, $needle, and the other half use them in the other order.


I feel a better form for this site would be:

Mutates: push, pop, shift, unshift, splice, reverse, sort, copyWithin

Does Not Mutate: everything else


At the same time, all this copying leads to an immense amount of garbage, which can really slow apps down with GC pauses. I really wish JavaScript had support for true immutable structures (a la Immutable.js) since these things do add up.

In my side project, which is a high performance web app, I was able to get an extra ~20fps by virtually removing all garbage created each frame. And there's a lot of ways to accidentally create garbage.

Prime example is the Iterator protocol, which creates a new object with two keys for every step of the iteration. Changing one for loop from for...of back to old-style made GC pauses happen about half as much. But you can't iterate over a Map or Set without the Iterator protocol, so now all my data structures are hand-built, or simply Arrays.

I would like to see new language features be designed with a GC cost model that isn't "GC is free!" But I doubt that JavaScript is designed for me and my sensibilities....


Does shallow copying have the same issues? For example, `let foo = { x: 1, ...bar }` just makes a new object with references to bar's members.


Shallow copying will create a new object, and thus, some (small) amount of GC garbage. Less than a deep copy, for sure, which means less frequent pauses, but still garbage to clean up nonetheless.


You could always look at ClojureScript. I know I am.

  Array.flat() => flatten
  Array.flatMap() => mapcat
  String.trimLeft() / trimRight() => triml / trimr

Symbols are great but they’re much more useful when you can write them as (optionally namespaced) literals, which are much faster to work with:

  (= :my-key :your-key) ;; false
  (= :my-key :my-key) ;; true

Object.entries() and Object.fromEntries() are both covered by (into). You can use (map) and other collection-oriented functions directly with a hashmap, it will be converted to a vector of [k v] pairs for you. (into {} your-vector) will turn it back into a new hashmap.

And...all of these things were already in ClojureScript when it was launched back in 2013! Plus efficient immutability by default, it’ll run on IE6, and the syntax is now way more uniform than JS. I’m itching to use it professionally.


I really like the Lisp convention of using !s to indicate mutation, like set-car!.

In JavaScript you kind of have to reason backwards and declare your variables as immutable (const). Though there are still some bugaboos; object fields can still be overwritten even if the object was declared with const.


const only means the variable itself can't be reassigned, though, and really the main complaint about mutation comes from Array methods. Array.pop will mutate the array, and you have to use Array.slice for the last item instead if you want to keep your array.


JS already has immutable objects with `Object.freeze()`.

Personally I just use TypeScript which can enforce not mutating at compile time (for the most part).


Thanks. But can I then add and remove items from an immutable object to create new objects?

Part of the immutable value proposition is being able to work with the objects. Based on [0], freezing feels more like constant than immutable. And the 'frozenness' isn't communicated through the language; I could be passed a frozen or unfrozen object and I wouldn't know without inspecting it.

And freeze isn't recursive against the entire object graph, meaning the nature of freezing is entirely dependent on the implementation of that object.

I really like the language-level expression and type checking of Rust. But it does require intentional language design.

I'm not criticising JS (though I think there are plenty of far better languages). Just saying that calling `freeze` 'immutable' isn't the full story.

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


> But can I then add and remove items from an immutable object to create new objects?

Yes, although the new object is not frozen by default. Adding is quite straightforward, especially with the spread syntax:

    let x = { a: 1, b: 2 };
    Object.freeze(x)
    let y = {...x, b: 3}
    // y == { a: 1, b: 3 }
Removing is less intuitive:

    let x = { a: 1, b: 2 };
    Object.freeze(x)
    let { b, ...y} = x;
    // y == { a: 1 }

> And the 'frozenness' isn't communicated through the language

Yes, but given that JS is a dynamic language I wouldn't expect anything different (everything must be inspected at runtime).

> And freeze isn't recursive against the entire object graph

You're right, although one could quickly implement a recursive version.

In any case I find Object.freeze not very useful, since trying to mutate a frozen object will simply ignore the operation (outside strict mode); I think that most of the time trying to do that should be considered an error, and I would prefer to have an exception raised.


Object.freeze is kind of constant, but you can still easily copy and work with the objects if you need to, for example, the following is valid for most objects:

    const foo = Object.freeze({ a: 1, b: 2 })
    const fooCopy = { ...foo }
And you are right that Object.freeze doesn't work recursively (although making it work recursively is fairly easy to implement yourself if you use it a lot).

But like it or not, JS isn't a language with a powerful type system, and it doesn't pretend to have one, so knocking it for that is like knocking Python for using whitespace, or knocking Rust for needing a compiler.

Luckily, TypeScript and Flow have most of what you are asking for, and they work pretty damn well across the entire ecosystem.

Off the top of my head, I know TypeScript has the ability to mark things as read-only even at the individual property level. [1] And it has tons of the type-checking niceness that you can expect from other "well typed" languages like Rust.

[1] https://basarat.gitbooks.io/typescript/docs/types/readonly.h...


Have you looked at Immer.js? It allows you to express modifications to immutable objects as a series of imperative operations.

In my experience most "immutability" in JS is enforced by convention or, at best, static type systems. It's not ideal, but it works.


Lenses in Ramda work well, too, if you don't mind being functional in your JS code.

I suppose that still only fits in the "immutable by convention" category, though.


In other words you’re concerned that some Array methods mutate the array (push, pop) and some don’t (map, concat)?

If so then yeah, that can be annoying and/or confusing.


For me the worst is slice/splice.


Kind of stupid, but I imagine the 'p' in 'splice' being an axe that chops the array :D Works for me...


I think OP was saying slice returns a copy (leaving the original alone) while splice mutates in place.


Yes, it stems from arrays. But it extends from there to any collection, built in or custom. It kind of cuts across the built-in type system, but it's really about the expressivity of the language.

It becomes especially important in React where you share objects up and down an immutable structure of objects.


Yep. If you've bitten the apple of JavaScript tooling you can kinda sorta rig up compiler-enforced immutable data structures with TypeScript. But IMO if you're going that far it's much easier/well-documented to just use Elm or something.


If this is a major concern for you, you might want to use something like Immutable.js[0]. More often than not I’ve found it unnecessary except in very contained parts of a large app, but in case it’s helpful I wanted to point it out.

[0] - https://github.com/immutable-js/immutable-js


Another one that I personally prefer is Immer.js[0]

[0] - https://github.com/immerjs/immer


I really love the Ruby naming conventions around this: `!` indicates mutation.

  2.4.1 :001 > a = [4,3,5,1,2]
   => [4, 3, 5, 1, 2]
  2.4.1 :002 > a.sort
   => [1, 2, 3, 4, 5]
  2.4.1 :003 > a
   => [4, 3, 5, 1, 2]
  2.4.1 :004 > a.sort!
   => [1, 2, 3, 4, 5]
  2.4.1 :005 > a
   => [1, 2, 3, 4, 5]


A method named "method!" means that it's a somehow "unsafe" version of the method "method". A lot of the time it means "destructive version," but if there's no non-destructive version, the destructive one won't have a ! (eg, Array#shift), and sometimes ! means something else (eg, Kernel#exit! is like Kernel#exit, but doesn't run any at_exit code).


Always try to limit the scope of variables. Use local scope. And also functions within functions. The nice part when limiting scope is that you never have to scroll or look elsewhere in order to understand what a piece of code does.


I still don't understand why JS's var and let don't allow you to redeclare a variable with the same name.

It makes chaining things while debugging so much harder:

  let a = a.project();
  let a = debug(a);
  let a = a.eject();
vs

  let a1 = a.project();
  let a1d = debug(a1);
  let a2 = a1d.eject();


I always assumed it was to protect against accidental naming errors, confusion over what a declaration is, and copy/paste issues. When I first started writing Rust and saw it was a thing I thought it was a terrible idea. I'm more open towards it now; the strong static analysis Rust does helps, and it can improve code quality if used in small amounts. However, it still can be quite confusing.

Given that JS doesn't restrict the type of a declaration you can just assign a new value to it, place it in a small scope or use a chain.


>> However, it still can be quite confusing.

I don't know - it's never confusing to me. I just use the IDE that allows me to view the types of the variables whenever I need to see them.

IDE also highlights the definitions and then the usages of the variable, including the syntax scope where it's used.

You're definitely using the wrong tools for the job if you get confused with that little detail.

>> Given that JS doesn't restrict the type of a declaration you can just assign a new value to it, place it in a small scope or use a chain.

Yeah, but I don't want to semantically assign a new value to the variable. I want this to be a new variable, because it is a new variable.

So the point of var is slightly exaggerated, because they could have gone the Python way and simply allowed any assignment to double as a declaration.


var and let tell the compiler the scope in which it should declare the variable (oversimplifying). If var or let is not present, the variable is declared in the global scope; unless you're in strict mode, in which case you get a scolding.

So var and let are only tangential to the whole declaration process and only indicate the scope in which the variable is bound.

I feel confused. Why do you want your assignment statements to be prefixed with var's and let's?


If you put var in front you don't have to worry about reassigning a variable from a parent scope. You also make declaration and plain assignment semantically different, e.g. `var foo = 1;` vs `foo = 1` inside `if (foo = 1)`, and it's thus easier to spot bugs and understand the code.


What IDE (+plugins?) are you using ?


    let a = obj.project();
    a = debug(a);
    a = a.eject();
This is perfectly legal (the first line just can't reference a in its own initializer, so assume some source object obj).


So what's the point of having let, then?


let declares a block-scoped variable, while var declares a function-scoped variable.

Limiting a variable's scope can help avoid subtle and potentially annoying errors. For example, if you use a let variable inside an if block, it'll only be accessible by code inside the block. If you use var inside an if block, your variable will be visible to the entire function.


The truth is that if you need to declare a variable outside your current scope you should probably declare it in the outside scope in the first place.

In the scenario with var/let I need to grok the code in order to tell which variables are visible in my current scope.


I tend to mainly use const, and let only when necessary. I never use var, and I set my linter to scream about it at me. IE11 supports let/const, so unless you develop an application for IE compatibility mode (my condolences) I don't see a reason to use var. On the other hand, var mainly bites you when declaring closures inside loops; also, hoisting is just plain weird.


I find hoisting a convenient feature, as I can declare the variable in context with where it's used. It means I do not have to break the flow of how the code reads, making the code easier to understand and less bug prone. Example:

    if(something) var foo = 1;


So how do you maintain the invariant that this variable is used only if <something> is true _down the line_?


It's only logical that it will be undefined if it's never assigned. With var you can just declare anywhere, while with let it feels like a chore when you have to declare in a parent block, e.g. outside the if-block for it to be accessible within another if-block. Lexical function scope works very well in an async language with first-class functions: you deal mostly with functions, which can access their closures at any time, so it's logical that the function should define the scope, not if-statements or for-loops.


let also does some magic in for-loops, creating a new binding for each iteration, basically creating a closure.

It also throws an error if it's used before it's declared.

let basically fixes some minor issues that hard-bitten JavaScript developers have learned to avoid.


block scope


Without getting into the merits of allowing redefinitions in a loosely typed language, the simple reason why JS can't support this is hoisting.

Any var/let statement of the form var a = 1; is interpreted as 2 statements: (1) the declaration of the variable, which is hoisted to the beginning of the variable's scope, and (2) the setting of the value, which happens at the location of the var statement.

Having multiple let statements would mean the same variable is declared and hoisted to the same location multiple times. So it's basically unnecessary and breaks hoisting semantics.

In addition, the downside risk of accidentally redefining a variable is probably far greater than the semantic benefit of making the redefinition clear to a reader (especially since I think that benefit is extremely limited in a loosely typed language like JS anyways).


That's not quite accurate. It's actually the reverse.

Think of the closure as an object. It contains variables like `this`, `arguments`, a pointer to the parent closure, all your variables, etc.

The interpreter needs to create this closure object BEFORE it runs the function. Before the function can run, it has to be parsed. It looks for any parameters, `var` statements, and function statements. These are all added to the list of properties in the object with a value of `undefined`. If you have `var foo` twice, it only creates one property with that name.

Now when it runs, it just ignores any `var` statements and instead, it looks up the value in the object. If it's not there, then it looks in the parent closure and throws an error if it reaches the top closure and doesn't find a property with that name. Since all the variables were assigned `undefined` beforehand, a lookup always returns the correct value.

`let` wrecks this simple strategy. When you're creating the closure, you have to specify whether a variable belongs in the `var` group or in the `let` group. If it is in the `let` group, it isn't given a default value of `undefined`. Because of the TDZ (temporal dead zone), it is instead given a pseudo "really undefined" placeholder value.

When your function runs and comes across a variable in the let group, it must do a couple checks.

Case 1: we have a `let` statement. Re-assign all the given variables to their assigned value or to `undefined` if no value is given.

Case 2: we have an assignment statement. Check if we are "really undefined" or if we have an actual value. If "really undefined", then we must throw an error that the variable was used before assignment. Otherwise, assign the variable the given value.

Case 3: We are accessing a variable. Check if we are "really undefined" and throw if we are. Otherwise, return the value.

To my knowledge, there's no technical reason for implementing the rule of only one declaration aside from forcing some idea of purity. The biggest general downside of `let` IMO is that you must do extra checks and branches every time you access a variable else have the JIT generate another code path (both of which are less efficient).


We are on the same page.

My point is having 2 let or var statements doesn't actually do anything on the interpreter side.

If JS allowed 2 var/lets without complaining, it would be entirely a social convention as to what that meant, since it would have no effect on the actual code that was run.

And the social-convention benefit (which could more easily be achieved by just putting a comment at the end) is probably far outweighed by the many real examples I've seen where someone has accidentally created a new variable without realizing that variable already exists in scope.

Disallowing multiple vars helps linters identify these situations (which are far more common with var's function-level scoping than let's block-level scoping).


This human gets it. I'd like to add that if you would like to be clear about what your assignment is doing, put some comments in there.


Another disadvantage of reusing the same variable name is that the order of the statements is usually important, and it isn't always obvious how, especially when you do this in longer functions.

When you're refactoring, you then have to be much more careful when moving lines of code around. With unique names, you get more of a safety net (including compile-time errors if you're using something like TypeScript).


If you use let it means you intend to use that binding "later", up to the end of the scope, and within that scope the value shall not change: that's the whole point.

If you want a variable you can assign successive different values to, it's an entirely different thing, and there have always been var and the assignment operator for that.


>> within that scope the value shall not change: that's the whole point.

That's pure BS. This is only true for primitives; the value can change (under let, var and const alike), as we can easily see with Array.push, for example.


I think you mean `const`. `let` can be reassigned.


What’s wrong with

  var a = ...;
  a = a.project();
  a = debug(a);
  a = a.eject();


That can’t show the programmer’s intent: whether it is mutation (e.g. a = a + someNum) versus defining a new variable with a different type (but a similar meaning, so the same name) (e.g. someKindOfData = [...someKindOfData]).

Rust allows this, and it really clears code up. I don’t have to make up different identifiers for the same data in different representations. (e.g. I would do the above code in JS as... someKindOfDataAsArray = [...someKindOfObjectAsNodeList])


    let projected = a.project();
    let debugged = debug(projected);
    let ejected = debugged.eject();


And you need to change 3 lines in total (with mid-line changes, too) in order to simply view the data in between, vs only 1 line.

And by the way, if you'd paid attention in the first place, my post already has exactly what you've just written.


Meaningful names? I don't think so.


And what is that meaning for, if the functions you are calling already have proper names?



