Hacker News | ekipan's comments

Yeah, this. Plus a mistake from the article:

  $ echo '*\n!.gitignore' > build/.gitignore
The \n won't be interpreted specially by echo unless it gets the -e option.

Personally if I need a build directory I just have it mkdir itself in my Makefile and rm -rf it in `make clean`. With the article's scheme this would cause `git status` noise that a `/build/` line in a root .gitignore wouldn't. I'm not really sure there's a good tradeoff there.
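For reference, a minimal sketch of that pattern (target and file names hypothetical), using an order-only prerequisite so the directory recreates itself on demand:

```make
# Hypothetical Makefile sketch: build/ makes itself and is removed
# wholesale by `make clean`, so nothing inside it needs gitignoring.
build/app: main.c | build
	$(CC) -o $@ $<

build:
	mkdir -p $@

clean:
	rm -rf build

.PHONY: clean
```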


> The \n won't be interpreted specially by echo unless it gets the -e option.

Author's probably using Zsh, which interprets them by default.


If you want any kind of non-trivial formatting, use the printf command, not echo.
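For instance, a portable version of the article's command: printf is required by POSIX to expand backslash escapes in its format string, so this behaves the same in bash, dash, and zsh:

```shell
mkdir -p build
# Two lines, regardless of which shell or echo implementation you have:
printf '*\n!.gitignore\n' > build/.gitignore
cat build/.gitignore
```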

I forget the context but the other day I also learned about Snowflake IDs [1] that are apparently used by Twitter, Discord, Instagram, and Mastodon.

Timestamp + random seems like it could be a good tradeoff to reduce the ID sizes and still get reasonable characteristics, I'm surprised the article didn't explore there (but then again "timestamps" are a lot more nebulous at universal scale I suppose). Just spitballing here but I wonder if it would be worthwhile to reclaim ten bits of the Snowflake timestamp and use the low 32 bits for a random number. Four billion IDs for each second.
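Just to make the spitball concrete, here's a bash sketch of that layout (the epoch constant is hypothetical, and this is not Twitter's actual Snowflake format):

```shell
# High 32 bits: seconds since a custom epoch. Low 32 bits: random.
epoch=1288834974                          # hypothetical custom epoch
secs=$(( $(date +%s) - epoch ))
rand=$(od -An -N4 -tu4 /dev/urandom | tr -d ' ')
id=$(( (secs << 32) | rand ))
echo "$id"
```

IDs generated this way still sort roughly by creation time, which is the main appeal of the Snowflake scheme.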

There's a Tom Scott video [2] that describes Youtube video IDs as 11-digit base-64 random numbers, but I don't see any official documentation about that. At the end he says how many IDs are available but I don't think he considers collisions via the birthday paradox.

[1]: https://en.wikipedia.org/wiki/Snowflake_ID

[2]: https://youtu.be/gocwRvLhDf8


Getting the entire universe to agree on a single clock for creating timestamps sounds absurdly difficult. Probably impossible?

"Agreement" of time is probably nonsense, yeah. I realized after posting so I edited in the parenthetical, but as [3] notes, locality probably makes this less of a real issue.

Apparently, with the birthday paradox, 32-bit random IDs only allow some tens of thousands of IDs per second before the collision chance passes 50%. Maybe that's acceptable?
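Back-of-the-envelope with the usual birthday approximation, p ≈ 1 - exp(-n²/2m):

```shell
# With m = 2^32 possible IDs, the 50% collision point is at
# n ≈ sqrt(2 ln 2 * 2^32), i.e. roughly 77 thousand IDs.
n50=$(awk 'BEGIN { printf "%.0f", sqrt(2 * log(2) * 2^32) }')
echo "$n50"
```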

[3]: https://news.ycombinator.com/item?id=47065241


You don't need the universe to agree. You need your ID system to agree within a reasonable margin of error.

The temperature of the cosmic microwave background can be used as a universal clock.

So can neutron star spin rates.

Neutron star spins collectively can be used as a pretty accurate clock.

How? You are here on Earth, looking at a neutron star. It pulses about N times per second. I am 5 light-years away, looking at the same neutron star. Yes, I see pulses happening at about N times per second too, but the pulse you see at time T(a) is not the same pulse I see at time T(a); it would be another 5 years before I see the pulse you saw.

> [1]: https://en.wikipedia.org/wiki/Snowflake_ID

Isn't this just the same scheme as version 1 UUID, except with half the bits? I guess they didn't want to dedicate 128 bits to their IDs.



For anyone else interested: that also looks a lot like the widely used BSON ObjectIds.

I think everyone agrees with the sentiment but practically speaking that ship sailed decades ago. Browsing the web without uBlock or similar is ill-advised.

Morbidly curious, I turned off my blockers and it's not as bad as I expected honestly. I can still read the article copy.


> And they keep forgetting the purpose of software is to serve users, not developers.

I don't have any horse in the game, but I do think Zig is interesting. This remark is funny to me because it's literally one of the tenets the Zig devs make decisions by!

https://ziglang.org/documentation/master/#Zen

> * Together we serve the users.


Oh, it's a list you can add to your uBO. I thought it was a descriptive headline: "(The) uBlock filter list (is going) to hide ..."

I only watch Youtube via browser now for both adblocking and my own custom userscripts, both on my laptop and my phone. A couple creators I watch mostly do shorts so I tolerate them but I wrote a userscript that changes all /shorts/<videoid> URLs into normal /watch?v=<videoid> to lessen the temptation to doomscroll.
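A minimal sketch of that kind of userscript (the selector and helper name are my assumptions, not the actual script):

```javascript
// Rewrite /shorts/<videoid> hrefs into regular /watch?v=<videoid> URLs.
function unshort(url) {
  const m = url.match(/\/shorts\/([\w-]+)/);
  return m ? `https://www.youtube.com/watch?v=${m[1]}` : url;
}

// Guarded so the helper also works outside a browser.
if (typeof document !== "undefined") {
  for (const a of document.querySelectorAll('a[href*="/shorts/"]')) {
    a.href = unshort(a.href);
  }
}
```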


It seems to me that AI is mostly optimized for tricking suits into thinking they don't need people to do actual work. If I hear "you're absolutely right!" one more time my eyes might roll all the way back into my head.

Still, even though they suck at specific artifacts or copy, I've had success asking an LLM to poke for holes in my documentation. Things that need concrete examples, knowledge assumptions I didn't realize I was making, that sort of thing.

Sweet Gameboy shader!


You're absolutely right! (sorry, I couldn't resist)


I actually have the below clashes function in my bashrc to detect overloaded names, and a few compgen aliases for sorted formatted command lists. Clashes takes a few seconds to chew through the 3000ish commands on my Steam Deck.

I also discovered the comma thing myself, and use it for `xdg-open <web-search-url>` aliases, i.e. `,aw xdg` searches archwiki for "xdg", since the SteamOS on this thing doesn't have manpages.

  alias words='iftty xargs'   # join by spaces
  alias c='compgen -c | sort -u | words' # eg: c | grep ^k | chop
  # ... a for aliases, f for functions etc ...
  alias man='g !archman'      # https://man.archlinux.org
  alias ,aw='g !aw'           # https://wiki.archlinux.org
  alias ,mdn='g !mdn'         # https://developer.mozilla.org
  alias ,nix='g !nixpkgs'     # https://search.nixos.org
  # ... a dozen others ...

  g() { local IFS=+; xdg-open "https://ddg.gg/?q=$*"; }
  iftty() { if [ -t 1 ]; then "$@"; else cat; fi; }

  chop() { # [TABS] [STARTCOL] [CUTOPTS..] # eg. ls ~/.co* | chop
    set -- "${1:-2}" "${2:-1}" "${@:3}"
    cut -c "$2"-$(($2+8*$1-2)) "${@:3}" | column #| less
  }

  clashes() { # [CMDS..] # print overloads eg: clashes $(c)
    (($#)) || set -- $(compgen -A alias -A function)
    local ty cmd; for cmd; do ty=($(type -at "$cmd"))
    ((${#ty[@]}>1)) && echo -n "$cmd "; done; echo
  }


(Edit: in the old post title:) "everything is a value" is not very informative. That's true of most languages nowadays. Maybe "exclusively call-by-value" or "without reference types."

I've only read the first couple paragraphs so far, but the idea reminds me of a shareware language I tinkered with years ago in my youth, although I never wrote anything of substance: Euphoria (nowadays it looks like there's an OpenEuphoria). It had only two fundamental types: (1) the atom, a possibly floating-point number, and (2) the sequence, a list of zero or more atoms and sequences. Strings in particular are just sequences of codepoint atoms.

It had a notion of "type"s which were functions that returned a boolean 1 only if given a valid value for the type being defined. I presume it used byte packing and copy-on-write or whatever for its speed boasts.

https://openeuphoria.org/ - https://rapideuphoria.com/


> It had a notion of "type"s which were functions that returned a boolean 1 only if given a valid value for the type being defined.

I've got a hobby language that combines this with compile-time code execution to get static typing. Or I should say that's the plan; it's really just a tokenizer and half of a parser at the moment. I should get back to it.

The cool side effect of this is that properly validating dynamic values at runtime is just as ergonomic as casting - you just call the type function on the value at runtime.


Thanks, I updated the post title based on this and another comment. Thanks for the pointer to Euphoria too, looks like an interesting language with a lot of similar ideas.


Strange. Middle-clicking the link opens the Hacker News front page, but copy-pasting into a new tab shows the article. Presumably the server shoos away requests with an HN referer? To reduce load, maybe? Or maybe something's weird in my own configuration, idk.


They explicitly check if the referrer is hackernews and do that.


I had the same issue as you. Had to copy and paste the link.


Mutability is distinct from variability. In Javascript only because it's a pretty widely known syntax:

    const f = (x) => {
      const y = x + 1;
      return y + 1;
    }
y is an immutable variable. In f(3), y is 4, and in f(7), y is 8.

I've only glanced at this Zen-C thing but I presume it's the same story.


"immutable variable" is an oxymoron. Just because Javascript did it does not mean every new language has to do it the same way.


There are two distinct constructs that are referred to using the name variable in computer science:

1) A ‘variable’ is an identifier which is bound to a fixed value by a definition;

2) a ‘variable’ is a memory location, or a higher level approximation abstracting over memory locations, which is set to and may be changed to a value by an assignment;

Both of the above are acceptable uses of the word. I am of the mindset that the conflation of these two meanings, in both languages and in discourse, is a large and fundamental problem.

I take the position, inspired by mathematics, that a variable should mean #1, thereby making variables immutably bound to a fixed value. Meaning #2 should have some other name and require explicit use thereof.

From a PLT and maths perspective, a mutable variable is somewhat oxymoronic. So I agree, let's not copy JavaScript, but let's also not be dismissive of usage of terminology that has long-standing meanings (even when the varied meanings of a single term are quite opposite).


“Immutable” and “variable” generally refers to two different aspects of a variable’s lifetime, and they’re compatible with each other.

In a function f(x), x is a variable because each time f is invoked, a different value can be provided for x. But that variable can be immutable within the body of the function. That’s what’s usually being referred to by “immutable variable”.

This terminology is used across many different languages, and has nothing to do with Javascript specifically. For example, it’s common to describe pure functional languages by saying something like “all variables are immutable” (https://wiki.haskell.org/A_brief_introduction_to_Haskell).


Probably variable is initially coming out of an ellipsis for something like "(possibly) variable* value stored in some dedicated memory location". Holder, keeper, or warden would probably make more accurate terms in ordinary parlance. Or, to be very on point while dropping the ordinariness, there is mneme[1] or mnemon[2].

Good luck propagating ideas, as sound as they might be, to a general audience once something is established in the jargon.

[1] https://en.wiktionary.org/wiki/mneme [2] https://www.merriam-webster.com/medical/mnemon


> Probably variable is initially coming out of an ellipsis for something like "(possibly) variable* value stored in some dedicated memory location".

No, the term came directly from mathematics, where it had been firmly established by 1700 by people like Fermat, Newton, and Leibniz.

The confusion was introduced when programming languages decided to allow a variable's value to vary not just when a function was called, but during the evaluation of a function. This then creates the need to distinguish between a variable whose value doesn't change during any single evaluation of a function, and one that does change.

As I mentioned, the terms apply to two different aspects of the variable lifecycle, and that's implicitly understood. Saying it's an "oxymoron" is a version of the etymological fallacy that's ignoring the defined meanings of terms.


I think you are confused by the terminology here, not the behavior. "Immutable variable" is normal terminology in many languages, and could be said to be distinct from constants.

In Rust, if you define "let x = 1;" it's an immutable variable, and the same goes for Kotlin's "val x = 1".


Lore and custom have made "immutable variable" a frequent idiomatic parlance, but it's still an oxymoron given the generally accepted meanings of the words in isolation.

Neither "let" nor "val[ue]" implies constancy or vacillation in themselves without further context.


Words only have the meaning we give them, and "variable" already has this meaning from mathematics in the sense of x+1=2, x is a variable.

Euler used this terminology; it's not some newfangled corruption or anything. I'm not sure it makes much sense to argue that new languages should use different terminology than this based on a colloquial/non-technical interpretation of the word.


I get your point on how words' meanings evolve.

Also, it's fine for anyone to name things as they come to mind, as long as the other side gets what is meant, I guess.

On the other hand, it doesn't hurt anyone much to call an oxymoron an oxymoron, or to exchange in a vacuous manner about terminology and its evolution.

On the specific example you give, I'm not an expert, but it seems dubious to me. In x+1=2, terms like x are called unknowns. Prove me wrong, but I would rather bet that Euler used "unknown" (quantitas incognita) unless he was specifically discussing variable quantities (quantitas variabilis) to describe, well, quantities that change. He probably also used French and German equivalents, but if Euler spoke any English, that's not reflected in his publications.


"Damit wird insbesondere zu der interessanten Aufgabe, eine quadratische Gleichung beliebig vieler Variabeln mit algebraischen Zahlencoeffizienten in solchen ganzen oder gebrochenen Zahlen zu lösen, die in dem durch die Coefficienten bestimmten algebraischen Rationalitätsbereiche gelegen sind." - Hilbert, 1900 (roughly: "This leads in particular to the interesting problem of solving a quadratic equation in arbitrarily many variables, with algebraic number coefficients, in integers or fractions lying in the algebraic realm of rationality determined by the coefficients.")

The use of "variable" to denote an "unknown" is a very old practice that predates computers and programming languages.


Yes, sure, I didn't mean otherwise; I just wanted to express doubts about Euler already doing so. Hilbert is already a century later.


Haskell, then.

    f x = let y = x + 1 in y + 1
Same deal. In (f 3), y = 4; in (f 7), y = 8. y varies but cannot mutate. Should be a true enough Scotsman.

