I do amateur improv comedy, and when I do it with people who have trained in clowning, they just feel like they're on another level. "Baboon in makeup" is the first-order obvious thing to say, but there's so much skill in a good clown-actor. I've been toying with doing a clowning course for a while just because I'm in awe at what these people can do during improv.
From the article, "The body tells the story.", "Have an emotional reaction and invite the audience to join in your experience.", "A clown is costume and makeup. Clowning is a verb." - these all capture some of what it is. The ability to wholeheartedly commit to a bit with absolutely none of your personal baggage coming along for the ride, to use your body expressively to cause the audience to feel something, to be a perfectly smooth lump of clay that's immediately formed into exactly what the scene requires; it's an astonishing superpower that right now I can only dream of having.
* "programmers who do not wish the ordering guarantees are entitled not to pay and not to receive", or "We could plough on with proof and, coughing, push this definition through, but tough work ought to make us ponder if we might have thought askew." (https://strathprints.strath.ac.uk/51678/7/McBride_ICFP_2014_...)
Their papers are also generally presented very well - easy to follow, as these things go (by which I mean it's actually feasible to work through one with a pencil and paper and understand the contents within a few hours). I really recommend PolyTest, which is particularly edifying; before I learned the "avoid the green slime" principle, dependent types were a battle, but when I read it I was enlightened, and it's super interesting to watch the types evolve as the problem is understood.
Yeah, they're terrifying. It's often not that hard to generate the code (e.g. https://github.com/Smaug123/WoofWare.Myriad/ is where I pump out these things) for a bunch of what you would want to do with a type provider, and that's much less existentially terrifying if it's possible.
It's actually sort of the other way round. C# has hardcoded syntax for async/await. F#'s syntax for async/await is a fully-general user-accessible mechanism.
They're not so different in that regard. C# `await` can be adapted by making an awaitable and awaiter type[1], which isn't too dissimilar to how a computation expression in F# needs to implement methods `Bind`, `Return`, `Yield`, etc.
In both languages these patterns make up for the absence of typeclasses to express things like a functor, applicative, monad, comonad, etc.
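For a feel of how this kind of duck-typed awaiting works, Python makes a compact analogy: its `await` accepts any object implementing `__await__`, much as C# accepts anything exposing a suitable `GetAwaiter`. A toy sketch (the `Doubler` type is invented purely for illustration):

```python
import asyncio

class Doubler:
    """A custom awaitable: awaiting it yields twice the wrapped value."""
    def __init__(self, value):
        self.value = value

    def __await__(self):
        # Python, like C#, only requires the right-shaped method to exist;
        # here we delegate to a real coroutine's own awaitable machinery.
        async def compute():
            return self.value * 2
        return compute().__await__()

async def main():
    return await Doubler(21)

print(asyncio.run(main()))  # prints 42
```

The point of the analogy is just that `await` in all three languages is resolved structurally against a method pattern, not against a single hardcoded `Task`-like type.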
The article does in fact cite the reproducible-builds project, in the section on "Leveraging bitwise reproducibility". From your comment I am not convinced you understood the point of the article, which is:
* that the NixOS build process currently cannot perform a full-source build of xz, because xz is required too early in the bootstrap; and
* a proposed adjustment to nixpkgs that would automatically detect compromises of nixpkgs dependencies which are required early in the bootstrap.
Other ecosystems can of course also attempt full-source builds and discover the discrepancy; the entire point of the article is that nixpkgs currently cannot.
It's still weirdly hard compared to Android, though. It somehow makes decisions about whether to snap to different words or lines when you release the cursor, and I still don't understand the algorithm it uses for that, and I wish it just wouldn't.
It is explicitly against the rules (https://czechgames.com/files/rules/codenames-rules-en.pdf), so they were correct. "Your clue must be about the meaning of the words. You can't use your clue to talk about the letters in a word or its position on the table."
This is from the Antithesis docs. It contains an incredibly cute idea.
With a deterministic execution simulation system, you can sometimes automatically locate bugs as follows.
* Identify and reproduce a symptom of a bug; say the symptom occurs at time `S`.
* For various times `t`, rewind the simulation to time `S - t` and replay it, varying whatever sources of simulated randomness are present.
* If there is a threshold `T` such that the simulation rarely reproduces the bug for rewinds longer than `T`, but frequently reproduces the bug for rewinds shorter than `T`, then the bug likely occurs at time `S - T`.
Crucially, it doesn't matter how much time passes between the bug and the symptom! This method, if it works, identifies the moment when "the symptom became inevitable" (or at least less evitable); that is, the moment when the bug occurred.
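The rewind-and-replay search above can be sketched in a few lines. Everything here is a toy model of my own invention (the step numbers, the 0.3 "background" reproduction rate, and the coin-flip in place of a real replayed simulation), but it shows the shape of the threshold search:

```python
import random

BUG_TIME = 40       # hypothetical: the bug fires at step 40 (unknown to the search)
SYMPTOM_TIME = 100  # hypothetical: the symptom is observed at step 100

def replay_reproduces(rewind_to, seed):
    """Replay from step `rewind_to` with fresh randomness; return True if
    the symptom still appears.  Toy model: once past BUG_TIME the symptom
    is inevitable; rewinding to before it, a coin decides whether the
    bug happens to re-occur under the new randomness."""
    rng = random.Random(seed)
    if rewind_to >= BUG_TIME:
        return True             # bug already in the past: symptom certain
    return rng.random() < 0.3   # bug only sometimes re-occurs

def estimate_bug_time(symptom_time, trials=200):
    # Walk rewind start points backwards from the symptom; the bug is at
    # the earliest start from which the symptom is still near-certain.
    earliest_certain = symptom_time
    for start in range(symptom_time, -1, -1):
        hits = sum(replay_reproduces(start, seed) for seed in range(trials))
        if hits / trials > 0.9:
            earliest_certain = start
        else:
            break
    return earliest_certain

print(estimate_bug_time(SYMPTOM_TIME))  # prints 40 in this toy model
```

In a real deterministic-simulation system the expensive part is each replay, so you would binary-search over rewind lengths rather than scan them all; the linear scan here is just the easiest way to see the threshold.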
By the way, the computation involving nonstandard reals is correct (to the best of my ten-year-old memory of studying this stuff). As usual, I will recommend Goldblatt's _Lectures on the Hyperreals_ for an intro to how it all works, and Pétry's "Analyse Infinitésimale: une présentation non standard" for an undergraduate first course in analysis expressed through nonstandard analysis.
Thanks. I actually now think it was a bit incomplete or even wrong 3 hours ago when you wrote that :), but then I thought a bit harder, read a bit more (other Wikipedia and https://www.math.uchicago.edu/~may/VIGRE/VIGRE2009/REUPapers...) and then fixed it. I think it should be good now.