
It's also a communication problem, because labels like "100-year/1000-year event" are easily misunderstood.

* they're derived from an estimated probability of the event (independently) happening each year. It doesn't mean that it won't happen for n years. The probability is the same every year.

* the probabilities are estimates, trying to predict extreme outliers. Usually from less than 100s of years of data, using sparse records that may have never recorded a single outlier.

* years = 1/annual_probability ends up giving large time spans for small probabilities. It means that the uncertainty between a 0.1% and a 0.2% annual probability comes out looking "off by 500 years" (a 1000-year vs a 500-year event).
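
To make the first point concrete: a "100-year event" (1% annual probability) is not guaranteed to show up within 100 years, nor to stay away for 100 years after it happens. A toy calculation (my own illustration, not from the linked article):

  fn main() {
      let p: f64 = 0.01; // annual probability of a "100-year event"
      for years in [10, 100, 500] {
          // P(at least one occurrence in n independent years) = 1 - (1 - p)^n
          let at_least_once = 1.0 - (1.0 - p).powi(years);
          println!("within {years} years: {:.0}% chance", at_least_once * 100.0);
      }
  }

(Within 100 years it's only ~63%, far from a sure thing.)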

https://practical.engineering/blog/2025/9/16/an-engineers-pe...


I find a useful exercise is to have a cheat sheet of historic flood heights in some area: tell someone the first record high, ask them how high they would make the levee, and how long they think it would last. People's sense for extremal events is bad.


That's a great exercise. Where I live a lot of people died because in the past we were not able to make that guess correctly. A lot was learned, at great expense.


reserve() reallocates by at least doubling the capacity.

reserve_exact() reallocates by exactly what you ask for.

If you reserve() space for 1 more element 1000 times, you will get ~30 reallocations, not 1000.

This inexact nature is useful when the total size is unknown, but you append in batches. You could implement your own amortised growth strategy, but having one built-in makes it simple for different functions to cooperate.
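
You can watch this happen (a small demo; the exact growth sequence is an implementation detail of the standard library, not a guarantee):

  fn main() {
      let mut v: Vec<u32> = Vec::new();
      let mut reallocations = 0;
      let mut last_cap = v.capacity();
      for i in 0..1000 {
          v.reserve(1); // room for at least 1 more; allowed to over-allocate
          if v.capacity() != last_cap {
              reallocations += 1;
              last_cap = v.capacity();
          }
          v.push(i);
      }
      println!("{reallocations} reallocations, final capacity: {}", v.capacity());
  }

Swap `reserve` for `reserve_exact` and the count jumps to roughly one reallocation per element.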


> If you reserve() space for 1 more element 1000 times, you will get ~30 reallocations, not 1000.

Surely you mean ~10 reallocations, because 2^10=1024, right?


See "A plea for lean software" by Wirth.

It's from 1995 and laments that computers need megabytes of memory for what used to work in kilobytes.


The transition from kilobytes to megabytes is not comparable to the transition from megabytes to gigabytes at all. Back in the kilobyte days, when engineers still had to manage individual bits and resort to all kinds of tricks to get anything working, a lot of software (and software engineering) left much to be desired. Far more effort went into overcoming the shortcomings of limited memory (and other computing resources) than into putting together the business logic itself. Legitimate requirements had to be butchered like Procrustes' victims just so the software could exist at all. The megabytes era accommodated everything but high-end media software, without forcing compromises in the software's internal structure. It was the time when things could be done properly, no excuses.

Nowadays' disregard for computing resource consumption is simply the result of those resources becoming too cheap to be properly valued, plus a trend of taking their continued increase for granted. There's very little in today's software functionality that actually requires gigabytes of memory.


Wirth's law is eating Moore's law for lunch.


In prehistoric Rust, variance used to be named more explicitly. However, the terminology of covariant and contravariant subtyping of lifetimes is language-theory jargon. This is the right perspective for language design, but programmers using the language don't necessarily use these terms.

It's been replaced with a "by example" approach. It's much easier to teach: just add a fake field that acts as if you had this type in your struct. Rust then figures out all of the details it needs.
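
For instance, to make a generic struct carry the variance of a `&'a T` without actually storing one (this uses `PhantomData` from the standard library; the struct itself is just an illustration):

  use std::marker::PhantomData;

  struct Parser<'a, T> {
      ptr: *const T, // raw pointers carry no lifetime and no variance intent
      // The "fake field": for variance (and drop-check) purposes, the
      // compiler treats Parser as if it contained a &'a T, making it
      // covariant in both 'a and T.
      _marker: PhantomData<&'a T>,
  }

  fn main() {}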


Years ago, I introduced Flow gradual typing (JS) to a team. It has explicit annotations for type variance which came up when building bindings to JS libraries, especially in the early days.

I had a loose grasp on variance then, didn't teach it well, and the team didn't understand it either. Among other things, it made even very early and unsound TypeScript pretty attractive just because we didn't have to annotate type variance!

I'm happy with Rust's solution here! Lifetimes and Fn types (especially together) seem to be the main place where variance comes up as a concept that you have to explicitly think about.


Note that this works because Rust doesn't have inheritance, so variance only comes up with respect to lifetimes, which don't directly affect behavior/codegen. In an object-oriented language with inheritance, the only type-safe way to do generics is with variance annotations.


Variance is an absolute disaster when it comes to language pedagogy. One of the smartest things Rust ever did was avoiding mentioning it in the surface-level syntax.


Why? It's just a property of type transformation.

Assuming Parent <- Child ("<-" denotes inheritance):

- If Generic<Parent> <- Generic<Child>: it's covariant.

- If Generic<Parent> -> Generic<Child>: it's contravariant.

- Otherwise: it's invariant.

Or at least it's that straightforward in C#. Are there complications in Rust?


The difficulty is that even trivial generic types aren't cleanly one or the other. A mutable reference type is covariant on read, and contra on write.

Scala was the first language in my exposure to try to simplify that away by lifting the variance annotations into the type parameter directly. It reduced some of the power but it made things easier to understand for developers. A full variance model would annotate specific features (methods) of a type as co/contra/invariant.

I'm not sure what approach C# takes - I haven't looked into it.

Rust doesn't expose variances for data structures at all. It exposes them for traits (type classes) and lifetimes, but neither of those are accessible in a higher order form. Trait usage is highly constrained and so is lifetime usage.

Traits mainly can be used as bounds on types. Some subset of traits, characterized as object traits, can be used as types themselves, but only in an indirect context. These are highly restricted. For example, if you have traits T and U where U extends T, and you have a reference `&dyn U`, you can't convert that to a `&dyn T` without knowledge of the underlying concrete type. You _can_ convert `&A where A: U` to `&B where B: T`, but that just falls out of the fact that the compiler has access to all the concrete type and trait definitions there and can validate the transform.

Rust's inheritance/variance story is a bit weird. They've kept it very minimal and constrained.


> A mutable reference type is covariant on read, and contra on write.

No it isn't. The type is covariant because a reference is a subtype of all parent types. The function read must have its argument invariant because it both takes and returns an instance of the type. I think you're confusing the variance of types with the variance of instances of those types. Read is effectively a Function(t: type, Function(ref t, t)). If it were covariant as you suggest, we would have a serious problem. Consider that (read Child) works by making a memory read of the first (sizeof Child) bytes of its argument. If read were covariant, that would imply you could call (read Child) on a Parent type and get a Parent back, but that won't work because (sizeof Child) can be greater than (sizeof Parent). Read simply appears covariant because it's generic. (read Child) ≠ (read Parent), but you can get (read Parent). It also appears contravariant because you can get (read Grandchild).

Scala doesn't simplify anything, that's just how variance works.


Even in your own description, it is clear that with regards to _correctness_, the variance model bifurcates between the read and the write method.

The discussion about the type sizes is a red herring. If the type system in question makes two types of differing sizes not able to be subtypes of each other, then calling these things "Child" and "Parent" is just a labeling confusion on types unrelated by a subtyping relationship. The discussion doesn't apply at all to that case.

The variance is a property of the algorithm with respect to the type. A piece of code that accepts a reference to some type A and only ever reads from it can correctly accept a reference to a subtype of A.

A piece of code that accepts a reference to a type A and only ever writes to it can correctly accept a reference to a supertype of A.

In an OO language, an instance method is covariant whenever the subject type occurs in return position (read analogue), and it's contravariant whenever the subject type occurs in a parameter position (write analogue). On instance methods where both are present, you naturally reduce to the intersection of those two sets, which causes them to be annotated as invariant.
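
Rust's function pointers exhibit exactly this split, with lifetimes standing in for the subtyping relation (a sketch; the function names are just for illustration, and `&'static str` is a subtype of `&'a str`):

  fn print_it(s: &str) { println!("{s}"); }       // works for any lifetime
  fn greeting() -> &'static str { "hello" }

  fn wants_static_arg(f: fn(&'static str)) { f("hi"); }
  fn wants_short_ret<'a>(f: fn() -> &'a str) -> &'a str { f() }

  fn main() {
      // Parameter position is contravariant: a fn accepting any &str
      // may be used where a fn(&'static str) is expected.
      wants_static_arg(print_it);
      // Return position is covariant: a fn returning &'static str
      // may be used where a fn() -> &'a str is expected.
      println!("{}", wants_short_ret(greeting));
  }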


> A piece of code that accepts a reference to some type A and only ever reads from it can correctly accept a reference to a subtype of A.

The same is true of a piece of code that writes through the reference or returns it. That's how sub-typing works.

> A piece of code that accepts a reference to a type A and only ever writes to it can correctly accept a reference to a supertype of A.

Have you ever programmed in a language with subtyping? Let me show you an example from Java (a popular object oriented programming language).

  class Parent {}

  class Child extends Parent {
      int x;
  }

  class Example {
      static void writeToChild(Child c) {
          c.x = 20;
      }

      public static void main(String[] args) {
          // Type error: a Parent is not a Child, and has no field `x` to write to.
          writeToChild(new Parent());
      }
  }
This code snippet doesn't compile, but suppose the compiler allowed it: do you think it could work? No. The function writeToChild cannot accept a reference to the supertype, even though it only writes through the reference.

I've seen a lot of people in this comment section talking about read and write which I find really odd. They have nothing to do with variance. The contravariant property is a property of function values and their parameters. It is entirely unrelated to the body of the function. A language without higher order functions will actually never have a contravariant type within it. This is why many popular OOP languages do not have them.


> The same is true of a piece of code that writes through the reference or returns it. That's how sub-typing works.

But it is not true that it is correctly typed with respect to a supertype of A (it is not valid to call the code with a reference to a supertype of A).

Code that only writes through the reference is correctly typed with respect to a super-type of A (it is valid to call the code with a reference to a supertype of A).

> Have you ever programmed in a language with subtyping?

Sigh, keep that snark for the twitter battles. I don't care enough about this to get snippy about it or to deal with folks who do.


> Code that only writes through the reference is correctly typed with respect to a super-type of A (it is valid to call the code with a reference to a supertype of A).

I'm not trying to be snarky, I genuinely want to know what you think of that code snippet I showed you. You claim it should work, but anyone with an understanding of programming would tell you it shouldn't. You can't write to a member variable that doesn't exist. Have you encountered inheritance before? Do you know what a “super-type” is? The mistake you're making here is very basic and I should like to know your level of experience.


Note that the Rust folks are working on a "safe-transmute" facility that may end up introducing a kind of variance to the language.


It's not particularly easy to teach what that actually means and why it's a thing. It's quite easy to show why in general G<A> cannot be a subtype of G<B> even if A is a subtype of B, it's rather more involved pedagogically to explain when it can, and even more confusing when it's actually the other way around (contravariance).

Anyway, Rust has no subtyping except for lifetimes ('a <: 'b iff 'a lasts at least as long as 'b), so variance only arises in very advanced use cases.


I’m getting old. I can understand the words, but not the content. At this point, show me disassembly so that I can understand what actually happens at the fundamental byte/CPU-instruction level; then I can figure out what you’re trying to explain.


Nothing really happens on the instruction level because this is all type system logic.


I think this might be the issue. If something has zero effect in the end, why should I care about in the first place?


Variance doesn't affect generated code because it acts earlier than that: it determines whether code is valid, and in doing so prevents invalid code (UB) from being compiled in the first place.

The simplest example of incorrect variance → UB is that `&'a mut T` must be invariant in T. If it were covariant, you could take a `&'a mut &'static T`, write a `&'b T` into it for some non-static lifetime `'b` (since `'static: 'b` for all `'b`), and then... kaboom. 'b ends but the compiler thought this was a `&'a mut &'static T`, and you've got a dangling reference.
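
Here's that scenario in code (a sketch; the commented-out call is exactly what invariance rejects):

  fn overwrite<'a>(slot: &mut &'a str, new: &'a str) {
      *slot = new;
  }

  fn main() {
      let mut long_lived: &'static str = "static";
      {
          let short = String::from("short-lived");
          // Invariance forces 'a = 'static here, so &short is rejected.
          // If &mut T were covariant in T, this would compile, and
          // long_lived would dangle as soon as `short` is dropped:
          // overwrite(&mut long_lived, &short); // ERROR: `short` does not live long enough
          let _ = &short;
      }
      println!("{long_lived}");
  }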

`&'a mut T` can't be contravariant in T for a similar reason: if you start with a `&'a mut &'b T`, contravariance would let you cast it to a `&'a mut &'static T`, and then you'd have a `&'static T` derived from a `&'b T`, which is again kaboom territory.

So, variance’s effect is to guide the compiler and prevent dangling references from occurring at runtime by making code that produces them invalid. Neither of the above issues is observable at runtime (barring compiler bugs) precisely because the compiler enforces variance correctly.


It doesn't have zero effect. Like everything about type systems, it helps prevent incorrect, and possibly unsound, code from compiling. So I guess the giant runtime effect is that you either have a program to run or not.


Sure, you can keep telling me that, and it doesn't stick. I'm completely happy writing Rust, and I am aware it needs variance to work in principle; when I do need that information, I know the magic words to type into doc search.

It's like how I can hold in my head how classical DH KEX works and I can write a toy version with numbers that are too small - but for the actual KEX we use today, which is Elliptic Curve DH I'm like "Well, basically it's the same idea but the curves hurt my head so I just paste in somebody else's implementation" even in a toy.

Sorry?


One day the rote example finally made sense to me, and I go back to it every time I hear about variance.

Got a Container<Animal> and want to treat it as a Container<Cat>? Then you can only write to it: it's ok to put a Cat in a Container<Cat> that's really a Container<Animal>.

Reading from it is wrong. If you treat a Container<Animal> like a Container<Cat> and read from it, you might get a Dog instead.

The same works in reverse, treating a Container<Cat> like a Container<Animal> is ok for read, but not for write.
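
Rust can't express the Animal/Cat version directly (no inheritance), but the same intuition holds for its lifetime subtyping, where `&'static str` plays Cat and a shorter-lived `&str` plays Animal (a sketch, my own translation of the example):

  // Reading covariantly is fine: every &'static str can serve as a shorter-lived &str.
  fn first<'a>(cats: &'a [&'static str]) -> &'a str {
      cats[0]
  }

  fn main() {
      let cats: Vec<&'static str> = vec!["cat"];
      let _animal: &str = first(&cats); // reading a Cat out as an Animal: ok

      // Writing is where it breaks: pushing a short-lived &str (an "Animal")
      // into a Vec<&'static str> (a "container of Cats") is rejected:
      // let dog = String::from("dog");
      // cats.push(&dog); // ERROR: `dog` does not live long enough
  }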


The usual way of phrasing this is, famously, https://en.wikipedia.org/wiki/A_white_horse_is_not_a_horse


It was easy to teach to Java devs just by using one mnemonic: PECS = Producer Extends, Consumer Super


Luckily variance only affects lifetimes, and these are already barely understood even without variance. If you ignore them, the need for variance disappears.


> but programmers using the language don't necessarily use these terms.

this always annoyed me about the Python type annotations: you are supposed to already know what contravariant / covariant / invariant mean, as in `typing.TypeVar(name, covariant=False, contravariant=False, infer_variance=False)`

It's used in documentation and error messages too:

> the SendType of Generator behaves contravariantly, not covariantly or invariantly.

https://docs.python.org/3/library/typing.html#annotating-gen...



That's horrible design. I was utterly perplexed whenever the compiler asked me to add one of those strange fields to a struct. If it had just asked me to include the variance in generic parameters, I would have had no such confusion. Asking programmers to learn the meaning of an important concept in programming is entirely reasonable, especially in Rust which is a language for advanced programmers that expects knowledge of many other more complicated things.

What's more, the implicit variance approach might create dangerous code. It is possible to break the interface of a module without making any change to its type signature. The entire point of types is to provide a basic contract for the behaviour of an object. A type system that has variance but doesn't let you write it down in the signature is deficient in this respect.


It's possible to cast the lifetime away. It's also easy to wrap it in a safe API: https://docs.rs/recycle_vec/latest/src/recycle_vec/lib.rs.ht... (libstd will likely add something like this).
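
The core of that safe API fits in a few lines (a simplified sketch of the recycle_vec approach, not its exact code):

  /// Reuse a Vec's allocation for a different element type.
  /// Sound only because the vector is emptied first, and the two
  /// element types are checked to have identical size and alignment.
  fn recycle<T, U>(mut v: Vec<T>) -> Vec<U> {
      assert_eq!(std::mem::size_of::<T>(), std::mem::size_of::<U>());
      assert_eq!(std::mem::align_of::<T>(), std::mem::align_of::<U>());
      v.clear(); // drop all elements, keep the allocation
      let ptr = v.as_mut_ptr();
      let cap = v.capacity();
      std::mem::forget(v); // don't free the buffer we're about to reuse
      // SAFETY: length is 0, and the layouts match, so the capacity
      // (counted in elements) carries over unchanged.
      unsafe { Vec::from_raw_parts(ptr.cast::<U>(), 0, cap) }
  }

The typical use is turning a `Vec<Item<'a>>` into a `Vec<Item<'b>>` between loop iterations, so the allocation is reused instead of freed and re-made.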

However, the other issues they mention hit some well-known limitations of Rust, which don't always have a simple satisfactory solution. It's a valid criticism.

With a bit of `unsafe`, it's easy to force through self-referential structs incorrectly and violate exclusive ownership/aliasing rules (that's UB, in Rust!). Sometimes such code can be reorganized to avoid self-referential structs, and with enough `unsafe` dark arts it's also possible to handle some self-references properly. In general, though, they're typically a false positive in the borrow checker's model, and an annoying limitation.


W3C was way too optimistic about XML namespaces leading to the creation of infinitely extensible vocabularies (XHTML2 was DOA, and even XHTML1 couldn't break past the tagsoup-compatible minimum).

This was the alternative – simpler, focused, fully IE-compatible.

W3C tried proper Semantic Web again with RDF, RDFa, and JSON-LD. HTML5 tried Microdata, a compromise between the extensibility of RDF and the simplicity of Microformats, but nothing really took off.

Eventually HTML5 gave up on it and took the position that invisible metadata should be avoided. Page authors (outside the bunch who have Valid XHTML buttons on their pages) tend to implement and maintain only the minimum needed for human visitors, so invisible markup is at a systemic disadvantage on the Web. It rarely exists at all, and when it does, it can be invalid, out of date, or, most often, SEO spam.


Schema.org metadata (using Microdata, RDFa, or JSON-LD) is quite common, actually; search engines rely on it for "rich" SERP features. With LLMs being able to sanity-check the metadata for basic consistency with the page contents, SEO spam will ultimately be at a disadvantage: it becomes easier and cheaper to penalize/ignore spam while still rewarding sites that include accurate data.

The schema.org vocabulary is actively maintained; the latest major version came out last March, with the latest minor release in September.


The git protocol is more complex and harder to scale. It's especially wasteful if people are going to redownload all packages every time their amnesiac CI runs.

Single-file archives are much easier to distribute.

Digests and signatures have standard algorithms, not unique to git. Key/identity management is the hard part, but git doesn't solve it for you (if you don't confuse git with GitHub).


git bundles exist to solve the single-file caching and distribution problems


Going crazy: we could also adopt the container registry API for distributing gems, similar to how Helm charts are distributed nowadays.


In the RPG analogy: why waste your skill points on maxing out "checking code for UB" when you can get a +10 bonus on UB checks from having Rust, and put the skill points into other things?


Unless lack of social media presence will be taken as a signal that you have something to hide, you terrorist/bot.


Or if you find yourself targeted for non-media criteria, like being Hispanic and needing to buy something from Home Depot.


That's a rare exception, and just a design choice of this particular library function. It had to intentionally implement a workaround, integrated with the async runtime, to survive normal cancellation. (BTW, the anti-cancellation workaround isn't compatible with Rust's temporary references, which can be painfully restrictive. When people say Rust's async sucks, they often actually mean `tokio::spawn()` made their life miserable.)

Regular futures don't behave like this. They're passive, and can't force their owner to keep polling them, and can't prevent their owner from dropping them.

When a Future is dropped, it has only one chance to immediately do something before all of its memory is obliterated and all of its inputs are invalidated. In practice, this requires immediately aborting all the work, as doing anything else would either be impossible (risking use-after-free bugs) or require special workarounds (e.g. io_uring can't work with the bare Future API, and requires an external drop-surviving buffer pool).
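
A sketch of that passivity (assuming the tokio crate with its `macros` and `time` features; the future names are just for illustration):

  use std::time::Duration;

  #[tokio::main]
  async fn main() {
      let work = async {
          tokio::time::sleep(Duration::from_secs(3600)).await;
          println!("never printed");
      };

      // `work` only makes progress while select! polls it. When the other
      // branch finishes first, `work` is dropped: nothing is sent to it,
      // its state is simply freed, and the rest of its body never runs.
      tokio::select! {
          _ = work => {}
          _ = tokio::time::sleep(Duration::from_millis(10)) => {
              println!("gave up waiting");
          }
      }
  }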

