jonsterling's comments | Hacker News

To be clear, this name was chosen by the creator of Coq, Gerard Huet, with the intention of trolling. It wasn't an innocent French word.


This is wrong. The name Coq is a reference to CoC (the Calculus of Constructions) and to Coquand (the inventor of the Calculus of Constructions). Incidentally, the rooster is one of the symbols of France. Its meaning in English gave Gérard Huet an opportunity to troll with the name, but if the intent had been to troll, there were many other names suitable for trolling; there was no need to choose Coq. And conversely, many other names would have made trolling possible, had trolling been the intent.


My understanding was that it was the icing on the cake to Huet. I'm still not sure that changes much.


Is there a citation? The literature is upfront about the JMeq joke; by contrast, I've never read anything about this topic.

I'm neither English nor French. I've never seen anything else than a rooster in the name or the logo.


In terms of multiplicity of meanings, certainly. Anyway, if you're interested in my personal opinion on Huet's "humour" and how this degenerated, we can talk privately.


In English at least, we call this a “double entendre” ;-)

Obviously the very purpose of double entendres is to troll those who will get the second meaning. duh!


I wish to clarify this comment; what I said above is strictly correct, but several people have drawn an undesirable conclusion from it which leads me to find a clarification necessary.

It is not the case that trolling English speakers was the primary motivation of Huet. My understanding is that the trolling was a side-benefit to a name he would have chosen regardless of whether it evinced the double entendre.


TypeScript, the language which only just this month added a flag to turn off their completely incorrect subtyping rules for functions! A flag!

I remember reporting this bug years ago, and they said it was "by design, since JS programmers prefer to think of functions as covariant in their input". Well, I prefer to think of 2+2 as equalling 5.


Design decisions like this are part of what makes TypeScript much more enjoyable to work in than other JS type systems (e.g. Closure, Flow).

TS tends to make pragmatic tradeoffs between sound typing, and the way JS developers tend to architect code. I'm glad they do.


I don't know nearly enough about type systems to understand this. Can you explain what subtyping rules are, and how TypeScript's are wrong? And can you explain what "covariant in their input" means?


Covariant: A variable needs to have an Animal. You can put a Cat in the variable.

Contravariant: A variable needs to have a function that accepts Animals. You can't put a function that accepts only Cats, or it will crash on other kinds of animal. But you can put a function that accepts all LivingThings.

So when something is covariant you can use a more specific type, and when it's contravariant you can use a more generic type.

When it comes to function parameters, TypeScript lets you use either. You can replace any type with any related type, even if you're going in the wrong direction. They did this to make certain use cases simpler, at the cost of weaker typing.
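Here is a minimal sketch of why accepting the "wrong direction" is unsound. All the names (`Animal`, `Cat`, `greetCat`, `handle`) are invented for illustration; the cast is written explicitly so the sketch also compiles under modern strict settings, whereas under TypeScript's historical bivariant checking the assignment was accepted with no cast at all.

```typescript
interface Animal { name: string }
interface Cat extends Animal { meow(): string }

// A handler that assumes its argument is a Cat:
const greetCat = (c: Cat): string => c.meow();

// Going the "wrong direction": treating a Cat-handler as an Animal-handler.
// Bivariant parameter checking allowed this silently; here the assertion
// makes the unsound step visible.
const handle: (a: Animal) => string = greetCat as (a: Animal) => string;

// Passing a plain Animal now crashes at runtime:
let crashed = false;
try {
  handle({ name: "Rex" }); // TypeError: c.meow is not a function
} catch {
  crashed = true;
}
console.log(crashed); // true
```

The `strictFunctionTypes` compiler flag makes function-type parameters contravariant, rejecting the bare assignment at compile time (method-syntax signatures are still checked bivariantly).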


Which use cases are those? Are any of them actually real-life, practical use cases?


If a function takes an array of Animal as an argument, the "intuitive" assumption is that you can pass in a Cat[] to it since it's probably processing the array items and a Cat is an Animal.

In usual JavaScript patterns, this intuition works, but it's not sound, since arrays are mutable; the function could write to the array, appending a Dog or a Potoo or a Jellyfish, which is valid for an Animal[] argument but not sound for a Cat[].

In practice, in most JS code I've seen, arrays are constructed once and then passed around as immutable collections, so TypeScript's unsoundness saves a lot more casting than it introduces errors.
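The hole is easy to demonstrate. A sketch, with all names (`Animal`, `Cat`, `Dog`, `addStray`) invented for illustration:

```typescript
interface Animal { name: string }
interface Cat extends Animal { purrs: true }
interface Dog extends Animal { barks: true }

// TypeScript accepts a Cat[] where an Animal[] is expected (covariance),
// even though this function mutates its argument:
function addStray(animals: Animal[]): void {
  const rex: Dog = { name: "Rex", barks: true };
  animals.push(rex); // perfectly legal for an Animal[]
}

const cats: Cat[] = [{ name: "Felix", purrs: true }];
addStray(cats); // compiles without complaint...

// ...but now cats[1] is a Dog masquerading as a Cat:
console.log(cats.length); // 2
console.log((cats[1] as Partial<Cat>).purrs); // undefined
```

Any later code that trusts `cats: Cat[]` and reads `cats[1].purrs` gets `undefined` where the type says `true`.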


The real problem here is that mutable data structures aren't entirely type sound. The solution is to split collections into read-only and mutable types, but people balk at this.

At one point long ago I did some experiments with this, and found that most of the unique method signatures necessary to represent the data structures we are familiar with are concentrated in the mutators.

That is, the interface differences between read-only data structures are pretty small. So you don't double the surface area by splitting read from write. It's more like 25%, and this might be partially offset by simplifications in code that has to scan multiple types of collections.
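TypeScript does ship one version of this split for arrays: `ReadonlyArray<T>` is the read-only half of `Array<T>`, with the mutators (push, pop, splice, sort, ...) removed. A small sketch (`totalLegs` and `pets` are invented names):

```typescript
// Accepting ReadonlyArray documents that the function will not mutate:
function totalLegs(animals: ReadonlyArray<{ legs: number }>): number {
  // All the reads are still available: length, indexing, map, filter, reduce...
  return animals.reduce((sum, a) => sum + a.legs, 0);
  // animals.push({ legs: 8 }) here would be a compile error:
  // "Property 'push' does not exist on type 'readonly { legs: number }[]'."
}

const pets = [{ legs: 4 }, { legs: 2 }];
console.log(totalLegs(pets)); // 6 (a mutable array is accepted where a read-only one is expected)
```

Note the asymmetry: a mutable array can be passed where a read-only one is expected, but not vice versa, so adopting read-only parameter types is mostly non-invasive for callers.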


There's nothing wrong with mutable data structures from a type soundness perspective. We know how to do it properly. But you need to get the type system right in order to include mutable data structures; often people get this wrong, and it leads to all sorts of frustration...


Do you have any examples lying around of what the interface differences would look like?


https://github.com/Microsoft/TypeScript-Handbook/blob/master...

The example seems like a valid case. The type of event is expressed in a separate parameter, so being strict here requires pointless casting and doesn't actually make things safer.

Whether you think this type weakness is worth the downsides is up to you.


That helps, thanks.

Can you explain the naming? What sort of variance is the second case contra?


The first one is the more natural one, and so the opposite gets to be "contra", I think.


Suppose you have a type hierarchy where a Tabby is a type of Cat that is a type of Animal.

Now, say you have a function that takes a Cat as an argument and returns a Cat. What types can you use instead of the Cats and still remain type safe? Well, you could pass in a Tabby or return a Tabby and everything would be fine. This is a subtype rule that the compiler can check.

What about something a little trickier? Say you have a map function that maps from a list of Cats to another list of Cats. The map function takes two arguments: a list of Cats and an arbitrary function that takes a Cat and returns a Cat. For each item in the input list, the function is called on it and the returned Cat is added to the output list.

Now, can we replace the Cat to Cat input function with a function that has different types and still have the types make sense?

Let's try a function that takes a Tabby as an argument and returns a Cat (usually denoted Tabby->Cat). This won't work as we could have something that isn't a Tabby in the input list. What about a function Animal->Cat? This would work OK as every Cat in the input list is also an Animal.

Similarly, what if we try a Cat->Tabby function? This would work too as a Tabby is always a Cat. What about Cat->Animal? No, as it might produce a Dog and we only want Cats.

Notice the difference between the arguments and return types? For arguments we can replace a Cat with an Animal but, for returns, we can only go the other way. We say that we are contravariant in the argument type but covariant in the return type.
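The Cat/Tabby example above can be written out directly. A sketch, with all type and function names invented for illustration; under `strictFunctionTypes` this compiles precisely because the substitution is contravariant in the argument and covariant in the result:

```typescript
type Animal = { name: string };
type Cat = Animal & { whiskers: number };
type Tabby = Cat & { stripes: number };

// mapCats wants an arbitrary Cat -> Cat function:
function mapCats(cats: Cat[], f: (c: Cat) => Cat): Cat[] {
  return cats.map(f);
}

// Animal -> Tabby is a safe stand-in: it accepts *more* than Cat
// (contravariant argument) and returns *less* than Cat (covariant return).
const groom = (a: Animal): Tabby => ({ name: a.name, whiskers: 12, stripes: 3 });

const out = mapCats([{ name: "Felix", whiskers: 10 }], groom);
console.log(out[0].name); // "Felix"
```

Swapping the directions (say, a `Tabby -> Animal` function) would be rejected by a sound checker, since the list may contain non-Tabby Cats and the result may not be a Cat at all.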

There are other forms. We say we are invariant if we can't change the type from the original stated type. We say we are bivariant if we can replace Cat with either Animal or Tabby.

So, what did TypeScript do wrong? As I understand it, TypeScript was (is, for method signatures?) bivariant in argument types. This means that the type system would, say, accept a Tabby->Cat function for a Cat->Cat function argument and then fail at runtime.

Is this a heinous crime? A matter of opinion I guess. It's definitely wrong from a type system perspective but plenty of type systems let this sort of thing pass and are successful.

BTW this subtyping complexity isn't just for function arguments. It's also an issue with collection types, e.g. what can you replace a list of Cats with? But this comment is way too long already.


Here is a good and down-to-earth explanation: https://www.stephanboyer.com/post/132/what-are-covariance-an...


Not only this, but strangely it doesn't use standard methods or names in the categorical understanding of computer science. All this business about "CompFunc" as a category of sets and computable functions, but I see nothing about partial combinatory algebras or realizability.


For some of my documents which use opentype fonts, the difference is by a factor of 20-30 between xelatex and lualatex.


I think it is not just a minority opinion, but just plain wrong.


> I have to admit I have no idea what racket is, nor did I do much more than scan the article.

Why have you commented then? Mao Zedong had a dope saying about this: "No investigation, no right to speak!"

-----------------------------

One answer to your questions, btw, is that macros are useful and interesting regardless of what kind of type system you have. Haskell has something called Template Haskell, which is really quite bad for a number of reasons, but it is certainly possible to imagine a version of Haskell with a well-designed macro system. It just so happens that Racket has an exceptionally well-designed macro system, so the partnership of the two ideas is very plausible and scientifically interesting.

Macros and types are not in opposition. However, macros induce a notion of PHASE which is not usually accounted for in type systems, though it may be possible to account for this in a principled/type-theoretic way using ideas from modal logic and Kripke semantics.


Could you explain what phase is? I understand Haskell types, and I suspect Template Haskell mostly works as an AST translator. A Google search on macros and phase just turns up diet and bodybuilding preparation advice!


In the simplest case, a macro system gives rise to two-phase evaluation - first, there is code that runs at macro expansion time, that produces, as its result, new syntax, and second, that resulting code runs. But having just two phases is (needlessly) restrictive, because you might want macros that generate other macros (and so the macro-generating macros would need to run at an earlier phase than the generated ones).


Wouldn't the macro-expansion phase be run until it reaches a fixed point? That way all macros could be fully expanded before proceeding to the next phase.


I believe that's how macros in Racket work, but each iteration is considered a separate phase. At least, that's how I understood it.

This page has some useful information about phases: https://docs.racket-lang.org/guide/phases.html


> why have you commented then?

To learn, not about Racket, but about the point of macros. It seems to have worked. I was going for something along the lines of Cunningham's law.

Anyway, thanks for the insight on macros.


Do you have any good links about Template Haskell? I've heard of it before, but I'm curious what the main drawbacks are :)


So far as I'm aware, the drawbacks to Template Haskell are:

1) Debugging generated code can be a pain. To some degree this is true of most macro systems, but it certainly applies here.

2) It adds to build times, sometimes significantly.

3) There is an issue preventing use of Template Haskell when cross compiling.

Some other things of note:

The Haskell syntax is really complicated, compared to s-exprs. This is not entirely negative - varied syntax can help you know where you are in an expression and give more guidance when you mix things up - but it's not entirely positive either.

The stage restriction prevents using a piece of Template Haskell in the same module where you define it. For a lot of use cases, this doesn't matter... but it makes module-specific one-offs more awkward, moving their definition away from where they're relevant.

Comparatively recently, "Typed Template Haskell" was added. As originally conceived, Template Haskell generating an expression didn't "have to" produce an expression of the right type. Compilation would still fail once the type mismatch was detected, but further from the cause of the error.


You went straight from saying, “Don't assume I'm a running dog!” to literally proving that you are a running dog! So cool.


In that case... Woof bark!


please chill out dude


This is not true. I have benefited from reading some of the best papers many times, more than three.


You (and I) are not in the target audience for those books. She does very serious mathematics, which should be defended; she gave a very nice talk on Trimble n-categories to my research group on Friday, for instance.


Standard ML. I develop proof assistants.

