
> But let’s dissect that last suggestion; suppose I do modify the type to encode that. Suddenly pretty much every field more or less just becomes Maybe/Optional. Once everything is Optional, you don’t really have a “type” anymore, you have a runtime check of the type everywhere. This isn’t radically different from regular dynamic typing.

Of course it’s different. You have a type that accurately reflects your domain/data model. Doing that helps ensure you know to implement the necessary runtime checks correctly. It can also help you avoid implementing a lot of superfluous runtime checks for conditions you don’t expect to handle (and to treat those conditions as invariant violations instead).
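A minimal TypeScript sketch of that idea (the `WireUser`/`User` shapes and `parseUser` are hypothetical, purely for illustration): only fields that can genuinely be absent are optional, and anything else missing is an invariant violation that throws at the boundary.

    // Wire format: only `nickname` may legitimately be absent.
    type WireUser = { id: string; email: string; nickname?: string };

    // Domain type after the boundary check: no optionality left
    // except where the domain genuinely allows it.
    type User = { id: string; email: string; nickname: string | null };

    function parseUser(data: unknown): User {
      const u = data as Partial<WireUser>;
      // Conditions we don't expect to handle are invariant violations.
      if (typeof u?.id !== "string" || typeof u?.email !== "string") {
        throw new Error("invalid user payload");
      }
      return { id: u.id, email: u.email, nickname: u.nickname ?? null };
    }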


No, it really isn’t that different. If I had a dynamic type system I would have to null check everything. If I declared everything as a Maybe, I would have to null check everything.

For things that are invariants, that’s also trivial to check against with `if(!isValid(obj)) throw Error`.


Sure. The difference is that with a strong typing system, the compiler makes sure you write those checks. I know you know this, but that’s the confusion in this thread. For me too, I find static type systems give a lot more assurance in this way. Of course it breaks down if you assume the wrong type for the data coming in, but that’s unavoidable. At least you can contain the problem and ensure good error reports.


The point of a type system isn’t ever that you don’t have to check the things that make a value represent the type you intend to assign it. The point is to encode precisely the things that you need to be true for that assignment to succeed correctly. If everything is in fact modeled as an Option, then yes you have to check each thing for Some before accessing its value.

The type is a way to communicate (to the compiler, to other devs, to future you) that those are the expected invariants.

The check for invariants is trivial as you say. The value of types is in expressing what those invariants are in the first place.


You missed the entire point of strong static typing.


I don’t think I did. I am one of the very few people who have had paying jobs doing Scala, Haskell, and F#. I have also had paying jobs doing Clojure and Erlang: dynamic languages commonly used for distributed apps.

I like HM type systems a lot. I’ve given talks on type systems, and I was working on trying to extend type systems to deal with these particular problems in grad school. This isn’t meant to be a statement on types in general. I am arguing that most type systems don’t encode a lot of the uncertainty that you find when going over the network.


You're conflating types with the encoding/decoding problem. Maybe your paying jobs didn't provide you with enough room to distinguish between these two problems.

Types can be encoded optimally with a minimal-bits representation (for instance: https://hackage.haskell.org/package/flat), or they can be encoded redundantly with all default/recovery/omission information. What you actually do with that encoding on the wire in a distributed system, with or without versioning, is up to you, and it doesn't depend on the specific type system of your language. But a strong type system offers you unmatched precision both at program boundaries where encoding happens and in business logic.

Once you've got that `Maybe a` you can (<$>) in exactly one place at the program's boundary, and then proceed as if your data had always been provided without omission. And then you can combine (<$>) with `Alternative f` to deal with your distributed system's silly payloads in a versioned manner. What's your dynamic language's null-checking equivalent for that?
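A rough TypeScript analogue of that boundary-only handling (the `Config` type and decoder names are invented for illustration): omissions and defaults are resolved exactly once where the payload enters the program, and versioned decoders are tried in order, much like `(<|>)` from `Alternative`.

    type Config = { retries: number; timeoutMs: number };

    // Try the current wire format first, then fall back to the old
    // one -- a crude stand-in for Haskell's Alternative.
    function decodeConfig(raw: unknown): Config | null {
      return decodeV2(raw) ?? decodeV1(raw);
    }

    function decodeV2(raw: unknown): Config | null {
      const r = raw as { retries?: number; timeoutMs?: number };
      if (typeof r?.retries !== "number") return null;
      // Omitted fields get their defaults here, once, at the boundary.
      return { retries: r.retries, timeoutMs: r.timeoutMs ?? 30_000 };
    }

    function decodeV1(raw: unknown): Config | null {
      const r = raw as { retry_count?: number };
      if (typeof r?.retry_count !== "number") return null;
      return { retries: r.retry_count, timeoutMs: 30_000 };
    }

    // Everything downstream of decodeConfig works with a plain Config
    // and never null-checks individual fields again.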


With all due respect, you can use all of those languages and their type systems without recognizing their value.

For ensuring bits don't get lost, you use protocols like TCP. For ensuring they don't silently flip on you, you use ECC.

Complaining that static types don't guard you against lost packets and bit flips is missing the point.


With all due respect, you really do not understand these protocols if you think “just use TCP and ECC” addresses my complaints.

Again, it’s not that I have an issue with static types “not protecting you”, I am saying that you have to encode for this uncertainty regardless of the language you use. The way you typically encode for that uncertainty is to use an algebraic data type like Maybe or Optional. Checking against a Maybe for every field ends up being the same checks you would be doing with a dynamic language.

I don’t really feel the need to list out my full resume, but I do think it is very likely that I understand type systems better than you do.


Fair enough, though I feel so entirely differently that your position baffles me.

Gleam is still new to me, but my experience writing parsers in Haskell and handling error cases succinctly through functors was such a pleasant departure from my experiences in languages that lack typeclasses, higher-kinded types, and the abstractions they allow.

The program flowed happily through my Eithers until it encountered an error, at which point that was raised with a nice summary.

Part of that relied on GHC extensions, though those could easily be translated into boilerplate, and that only had to be done once per class.

Gleam will likely never live up to that level of programmer joy; what excites me is that it’s trying to bring some of it to BEAM.

It’s more than likely your knowledge of type systems far exceeds mine—I’m frankly not the theory type. My love for them comes from having written code both ways, in C, Python, Lisp, and Haskell. Haskell’s types were such a boon, and it’s not the HM inference at all.
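For readers without Haskell, here's a loose TypeScript rendering of the "flows happily through my Eithers" style from a few paragraphs up (the `Either` shape and `map` helper are invented for illustration; real Haskell gets this from Functor/Monad instances with no boilerplate):

    type Either<E, A> =
      | { tag: "left"; error: E }
      | { tag: "right"; value: A };

    // Successes flow through; the first error short-circuits the chain.
    function map<E, A, B>(e: Either<E, A>, f: (a: A) => B): Either<E, B> {
      return e.tag === "right" ? { tag: "right", value: f(e.value) } : e;
    }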


> ends up being the same checks you would be doing with a dynamic language

Sure thing. Unless a dev forgets to do (some of) these checks, or some code downstream changes and the upstream checks become gibberish or insufficient.


I know everyone says that this is a huge issue, and I am sure you can point to an example, but I haven’t found that types prevent issues like this any better than something like Erlang’s assertion-based system.


When you say "any better than", are you referring to the runtime vs compile-time difference?


It’s not a hack, but you may find more documentation for the equivalent preload values expressed as a <link> tag. There is (near) parity between that and the HTTP Link header. The values used in the article should work in HTML as well.
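For example (illustrative file name and media query, not the article's exact values), the same preload hint can be sent as an HTTP header or declared in the document head:

    Link: </hero-wide.jpg>; rel=preload; as=image; media="(min-width: 601px)"

    <link rel="preload" as="image" href="/hero-wide.jpg"
          media="(min-width: 601px)">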


> It’s not a hack

Yeah, this isn't a hack; this is what media queries were made for.

Now, this is a hack!

You had to do this to make :hover work correctly for IE6–IE8 [1]:

    body {
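      /* IE-only `behavior` property loads an HTC (HTML Component) script that emulates :hover */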
      behavior: url("csshover3.htc");
    }
[1]: https://pawelgrzybek.com/internet-explorer-just-hit-the-end-...


I agree, this was not a hack. It is combined behavior from documented features (preload with media and lazy loading).


Disclaimer: I’m a strong advocate for static typing.

I absolutely see the connection. One of the advantages of static typing is that it makes a lot of refactoring trivial (or at least much more so than it would be otherwise). One of the side effects of making anything more trivial is that people will be more inclined to do it, without thinking as much about the consequences. It shouldn’t be a surprise that, absent other safeguards to discourage it, people will translate trivial refactoring into unexpected breaking changes.

Moreover, they may do this consciously, on the basis that “it was trivial for me to refactor, it should be trivial to adapt downstream.” I’ll even admit to making exactly that judgment call, in exactly those terms. Granted I’m much less cavalier about it when the breaking changes affect people I don’t interface with on a regular basis. But I’m much less cavalier about that sort of impact across the board than I’ve observed in many of my peers.


It’s quite common, although I probably see it used more frequently to invoke other (non-shell) scripting languages.


You might want that, I might too. But it’s outside the constraints set by the post/author. They want to establish immutable semantics with unmodified TypeScript, which doesn’t have any effect on the semantics of assignment or built in prototypes.


Well said. (I too want that.) I found my first reaction to `MutableArray` was "why not make it a persistent array‽"

Then I took a moment to tame my disappointment and realized that the author only wants immutability checking by the TypeScript compiler (delineated mutation), not to change the design of their programs. A fine choice in itself.
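That compiler-only flavor of immutability looks roughly like this (a minimal sketch using TypeScript's built-in `readonly` array types, not the post's actual `MutableArray` code):

    // `readonly` exists only in the type system; at runtime this is
    // an ordinary, mutable JS array.
    const xs: readonly number[] = [1, 2, 3];

    // Compile error: Property 'push' does not exist on type
    // 'readonly number[]'.
    // xs.push(4);

    // Mutation is delineated: you opt back in explicitly with a copy.
    const ys: number[] = [...xs];
    ys.push(4);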


Marko’s compiler is designed for partial hydration (by default, without any special developer effort), which performs quite well. IIRC they were also looking at implementing “resumability” (term coined by Qwik, for an approach that sidesteps hydration as a concept entirely). I’m not sure where they’re at on that now, but I think it’s generally safe to say that Marko prioritizes load time performance more than nearly all other frameworks.


Marko 5, which is the stable version, does partial hydration by default (like Astro, but with automatic boundaries).

Marko 6, which is currently in public beta, is resumable by default, and does some things similar to the also-in-public-beta Qwik 2.


> Sometimes they even fail to even realise that it's what they are doing.

Because that’s not what they’re doing. They’re isolating state in a systemic, predictable way.


Lenses are mutation by another name. You are basically recreating state on top of an immutable system. Sure, it's all immutable underneath, but conceptually it doesn't really change anything. That's what makes it hilarious.

In the end, the world is stateful and even the purest abstractions have to hit the road at some point. But the authors of Haskell were fully aware of that. The monadic type system was conceived as a way to easily track side effects after all, not banish them.
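A bare-bones lens, sketched in TypeScript for concreteness (the `Lens` shape and `Person` type are invented), makes the point visible: `set` rebuilds rather than mutates, which is exactly the statefulness-on-top-of-immutability being described.

    type Lens<S, A> = {
      get: (s: S) => A;
      // Returns a new S; the original is untouched.
      set: (s: S, a: A) => S;
    };

    type Person = { name: string; age: number };

    const nameLens: Lens<Person, string> = {
      get: (s) => s.name,
      set: (s, name) => ({ ...s, name }),
    };

    const p1: Person = { name: "Ada", age: 36 };
    const p2 = nameLens.set(p1, "Grace"); // new object; p1 unchanged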


But there isn’t anything hilarious about that.

It’s a clear-minded and deliberate approach to reconciling principle with pragmatic utility. We can debate whether it’s the best approach, but it isn’t like… logically inconsistent, surprising, or lacking in self awareness.


You might also think of it a bit like poetry: creativity emerging from the process of working within formal constraints. By asking how you can represent something familiar in a specially structured way, you can learn both about that structure and the thing you're trying to unite with it. Occasionally, you'll even create something beautiful or powerful, as well.

Maybe in that sense there's an "artificial" challenge involved, but it's artificial in the sense of being deliberate rather than merely arbitrary or absurd.


This is a fantastic way to put it, thank you for adding it!


You don’t see what’s hilarious about recreating, just one abstraction level up, the very thing you’re pretending to remove?

Anyway, I have great hopes for effect systems as a way to approach this in a principled way. I really like what OCaml is currently doing with concurrency. It’s clear to me that there is great value to unlock here.


I don’t agree with your characterization that anyone is “pretending”. The whole point of abstraction is convenience of reasoning. No one is fooling themselves or anyone else, nor trying to. It’s a conscious choice, for clear purposes. That’s precisely as hilarious as using another abstraction you might favor more, such as an effect system.


This is a matter of choice, not something with an objectively correct answer. Every possible answer has tradeoffs. I think consistency with the underlying standard defining NaN (IEEE 754) probably has better tradeoffs in general, and more specific answers can always be built on top of that.

That said, I don’t think undefined in JS has the colloquial meaning you’re using here. The tradeoffs would be potentially much more confusing and error prone for that reason alone.

It might be more “correct” (logically; standard aside) to throw, as others suggest. But that would have considerable ergonomic tradeoffs that might make code implementing simple math incredibly hard to understand in practice.

A language with better error handling ergonomics overall might fare better though.
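Concretely, the IEEE 754 semantics being defended look like this in plain JavaScript (runnable as-is):

    0 / 0;               // NaN -- the standard's answer, not an error
    Math.sqrt(-1);       // NaN
    NaN === NaN;         // false -- NaN is not equal to itself
    Number.isNaN(0 / 0); // true -- the reliable check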


>A language with better error handling ergonomics overall might fare better though.

So what always trips me up about JavaScript is that if you make a mistake, it silently propagates nonsense through the program. There's no way to configure it to even warn you about it. (There's "use strict", and there should be "use stricter!")

And this aspect of the language is somehow considered sacred, load-bearing infrastructure that may never be altered. (Even though, with "use strict", we already demonstrated that we have a mechanism for fixing things without breaking them!)

I think the existence of TS might unfortunately be an unhelpful influence on JS's soundness, because now there's even less pressure to fix it than there was before.
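The silent propagation in question, in plain JavaScript (`user` is a hypothetical object):

    const user = { name: "Ada", age: 36 };
    const total = 10 + user.agee; // typo -- no error, no warning
    // `user.agee` is undefined, so `total` is NaN, and the NaN now
    // flows silently through everything that touches it.
    console.log(total * 2);       // NaN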


> And this aspect of the language is somehow considered sacred, load-bearing infrastructure that may never be altered. (Even though, with "use strict", we already demonstrated that we have a mechanism for fixing things without breaking them!)

There are many things we could do which wouldn't break the web but which we choose not to do because they would be costly to implement/maintain and would expand the attack surface of JS engines.


To some extent you’ve answered this yourself: TypeScript (and/or linting) is the way to be warned about this. Aside from the points in sibling comment (also correct), adding these kinds of runtime checks would have performance implications that I don’t think could be taken lightly. But it’s not really necessary: static analysis tools designed for this are already great, you just have to use them!


It’s not even safe if you’re 100% sure the types are compatible, unless you’re also 100% sure nothing will ever change that fact. The reason it’s unsafe is that it suppresses the type error permanently, even if the factors that led to your certainty later change somewhere upstream.

There are certainly ways to guard against that, but most of them involve some amount of accepting that the type checker produces errors for a reason.
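A small sketch of that failure mode (hypothetical `User` type):

    type User = { id: number }; // imagine it later gains `owner: string`

    const raw: unknown = JSON.parse('{"id": 1}');

    // Compiles today -- and will keep compiling after `User` grows,
    // because a cast from `unknown` never re-checks anything.
    const u = raw as User;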


Yes, of course the types could change in the future, and the forced cast might cause issues. I wish there were a better way, but this is an acceptable tradeoff.

Bear in mind, most changes that could cause issues will still be caught by the type checker in whatever object you're casting to. Obviously it should not be overused where not needed, but it's almost always used in fluent APIs because there's no better way (that I know of, at least).


> it's almost always used in fluent APIs because there's no better way (that I know of, at least)

Got an example?


Yep, I sent one in another comment https://github.com/elysiajs/elysia/blob/94abb3c95e53e2a77078...

This is not the easiest code to follow, but it's very similar to what you'd find in any fluent web router. The idea is that you have, say, an App class with a Routes generic; then, on every route you add, you compose the return types by returning `this as App<Routes & NewRoute>`. In the simplest cases you can probably do this cast directly and it will be fine, but as you add more features (things like extensibility with plugins, the ability to merge other apps' routes, etc.) you might eventually run into limitations of the type system that require an escape hatch like `as unknown` or `as any`.

It's not the only case in which you might use it, but I think Elysia is a great example as it does some really interesting things with the type system to provide great DX
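The pattern being described, stripped down to a sketch (a hypothetical `App` class, not Elysia's actual code):

    class App<Routes = {}> {
      // Phantom field: carries `Routes` at the type level only.
      private _routes!: Routes;

      get<Path extends string>(path: Path): App<Routes & Record<Path, true>> {
        // ...register the route at runtime here...
        // The escape hatch: `this` is typed App<Routes>, so widening
        // the generic requires going through `unknown`.
        return this as unknown as App<Routes & Record<Path, true>>;
      }
    }

    const app = new App().get("/a").get("/b");
    // typeof app: App<Record<"/a", true> & Record<"/b", true>>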


Thanks. I'm not going to be very specific here because I'm too lazy to dig into that giant type, but if they want that method implementation to work without type assertions then the `add` method would need to be typed as an assertion function[1] so the type system can understand that it narrows its argument[2].

Here's an example: https://tsplay.dev/w8y9PN

[1]: https://www.typescriptlang.org/docs/handbook/release-notes/t...

[2]: Doing this isn't safe anyway, because it mutates an object in a type-relevant way while there may be other variables referring to it (the safe thing to do is return a new `Elysia` instance from `get`), but that's beside the point.


That's really interesting with the assertion function, I've not seen that done much, thanks!


> everything gets converted to an array at the drop of a hat

Can you name an example? IME the opposite is a more common complaint: needing to explicitly convert values to arrays from many common APIs which return, e.g., iterables/iterators.


`map` returns an array and can only be called on an array.


Right, but I’m not clear on what gets converted to an array. Do you mean more or less what I said in my previous comment? That it requires you (your code, or calling code in general) to perform that conversion excessively?


People write a lot of stuff like [...iterable].map(fn). They do it so much it's as if they do it each time a hat drops.


Thank you for clarifying. (I think?)

I think what confused me is the passive language: "everything gets converted" sounds (to me) like the runtime or some aspect of language semantics is converting everything, rather than developers. Whereas this is the same complaint I mentioned.


One gripe I have is that the result of map/filter is always an array. As a result, doing `foo.map(...).filter(...).slice(0, 3)` will run the map and the filter on the entire array, even if it has hundreds of entries and I might only need the first 10 of them to find the 3 that match the filter.
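For what it's worth, the ES iterator helpers (recently standardized; they need a runtime that ships them, e.g. Node 22+) make the lazy version expressible; `f` and `g` here are hypothetical stand-ins:

    const foo = Array.from({ length: 1000 }, (_, i) => i);
    const f = (n: number) => n * 2;
    const g = (n: number) => n % 3 === 0;

    // Eager: maps and filters all 1000 entries, then slices.
    foo.map(f).filter(g).slice(0, 3);

    // Lazy: stops pulling from `foo` after the third match.
    foo.values().map(f).filter(g).take(3).toArray();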

