Absolutely the wrong take. You can teach CS with just pencil and paper, but that doesn't advance the technology; it might only benefit academia in a narrow sense. CS students should be actively engineering software in addition to doing science.
Anecdotally, I'm a fan and generally pick the rarer ones. E.g. I had a pair of socks and a few stickers made for my kid which featured rarer Pokémon.
OP got a tattoo of Pikachu, and it was interesting to me that he picked the most mainstream Pokémon (I assumed he was a huge fan).
But from the response, my sample of 1 assumption checks out:
"When I was planning it was obvious there'd be a Pokémon due to his (continuing) interest in those and to me as a non-fan that's the most immediately-recognizable one."
One of my favorite bits of my PhD dissertation was factoring an intractable 3-dimensional integral whose integrand separated as f(x, y, z) = g(x, y) h(y, z):
\iiint f(x, y, z) dx dy dz = \int [\int g(x, y) dx] [\int h(y, z) dz] dy
which greatly accelerated numerical integration (O(n^2) rather than O(n^3)).
My advisor was not particularly impressed and objectively I could have skipped it and let the simulations take a bit longer (quite a bit longer--this integration was done millions of times for different function parameters in an inner loop). But it was clever and all mine and I was proud of it.
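The speedup is easy to check numerically. A minimal sketch, assuming the integrand factors as f(x, y, z) = g(x, y) * h(y, z); the particular g, h, and grid below are illustrative stand-ins, not the dissertation's actual functions:

```rust
// Separable-integrand trick: if f(x,y,z) = g(x,y) * h(y,z), the triple
// integral factors into inner 1-D integrals over x and z, then one pass
// over y -- O(n^2) work instead of O(n^3). g and h here are made up.

fn g(x: f64, y: f64) -> f64 {
    (-(x + y) * (x + y)).exp()
}

fn h(y: f64, z: f64) -> f64 {
    (y + z).cos()
}

// Naive O(n^3): sum the full 3-D integrand on a uniform grid.
fn naive(grid: &[f64], d: f64) -> f64 {
    let mut s = 0.0;
    for &x in grid {
        for &y in grid {
            for &z in grid {
                s += g(x, y) * h(y, z);
            }
        }
    }
    s * d * d * d
}

// Factored O(n^2): integrate over x and over z for each y, then over y.
fn factored(grid: &[f64], d: f64) -> f64 {
    let mut s = 0.0;
    for &y in grid {
        let gx: f64 = grid.iter().map(|&x| g(x, y)).sum::<f64>() * d;
        let hz: f64 = grid.iter().map(|&z| h(y, z)).sum::<f64>() * d;
        s += gx * hz;
    }
    s * d
}

fn main() {
    let n = 50;
    let d = 1.0 / (n as f64 - 1.0); // uniform spacing on [0, 1]
    let grid: Vec<f64> = (0..n).map(|i| i as f64 * d).collect();
    // The two evaluations agree up to float rounding.
    assert!((naive(&grid, d) - factored(&grid, d)).abs() < 1e-9);
}
```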
That's like saying sorting can be done in O(n) because radix sort exists. If you assume some structure, you lose generality, i.e. there will be some problems it's no longer able to solve. It can no longer approximate an arbitrary function that needs perfect memory over the sequence.
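To make the analogy concrete, here's a minimal LSD radix sort sketch. It is O(n) precisely because it assumes the structure "keys are fixed-width unsigned integers"; unlike a comparison sort, it can't sort under an arbitrary comparator at all:

```rust
// LSD radix sort on u32 keys, one byte per pass. Linear time, but only
// because the key structure (fixed 32-bit unsigned ints) is baked in.
fn radix_sort(mut v: Vec<u32>) -> Vec<u32> {
    for shift in (0..32).step_by(8) {
        let mut buckets: Vec<Vec<u32>> = (0..256).map(|_| Vec::new()).collect();
        for x in v {
            // Stable bucketing on the current byte preserves earlier passes.
            buckets[((x >> shift) & 0xff) as usize].push(x);
        }
        v = buckets.into_iter().flatten().collect();
    }
    v
}

fn main() {
    let sorted = radix_sort(vec![170, 45, 75, 90, 802, 24, 2, 66]);
    assert_eq!(sorted, vec![2, 24, 45, 66, 75, 90, 170, 802]);
}
```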
BLC can output any literal 60 bit string x as the 64-bit (delimited) program 0010 x, so in that sense it would be some 61 bit number.
But if we ask about just lambda calculus terms without the binary input, then I think it would be some small number of at most 10 bits. BBλ looks at the normal form size, so it cannot even reach the numbers 0, 1, 2, 3, and 5.
As it fills up, the false positive rate goes up. Once the false positive rate reaches the threshold of unacceptability, the bloom filter is full, and you can no longer insert into it.
That most interfaces still let you do something that looks like an insert is an interface failure, not a bloom filter feature.
If you find this controversial and want to reply "I don't have a threshold of unacceptability", I'll counter that a false positive rate of 100% will be reached eventually. And if you still find that acceptable, you can trivially modify any probabilistic filter to "never become full" by replacing the "is full" error condition with setting a flag that makes all future queries return a false positive.
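A toy sketch of the idea, using the standard (1 - e^(-kn/m))^k approximation for the false positive rate; the struct, parameters, and threshold are all illustrative, not any real library's API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy Bloom filter that refuses inserts once the estimated false
// positive rate crosses a chosen "threshold of unacceptability".
struct BloomFilter {
    bits: Vec<bool>,
    k: u64,       // number of hash functions
    n: usize,     // items inserted so far
    max_fpr: f64, // threshold of unacceptability
}

impl BloomFilter {
    fn new(m: usize, k: u64, max_fpr: f64) -> Self {
        BloomFilter { bits: vec![false; m], k, n: 0, max_fpr }
    }

    fn positions(&self, item: u64) -> impl Iterator<Item = usize> + '_ {
        (0..self.k).map(move |i| {
            let mut h = DefaultHasher::new();
            (i, item).hash(&mut h);
            (h.finish() % self.bits.len() as u64) as usize
        })
    }

    // Standard approximation of the false positive rate: (1 - e^{-kn/m})^k.
    fn estimated_fpr(&self) -> f64 {
        let m = self.bits.len() as f64;
        (1.0 - (-(self.k as f64) * self.n as f64 / m).exp()).powi(self.k as i32)
    }

    fn is_full(&self) -> bool {
        self.estimated_fpr() >= self.max_fpr
    }

    fn insert(&mut self, item: u64) -> Result<(), &'static str> {
        if self.is_full() {
            return Err("bloom filter is full for this threshold");
        }
        let ps: Vec<usize> = self.positions(item).collect();
        for p in ps {
            self.bits[p] = true;
        }
        self.n += 1;
        Ok(())
    }

    fn might_contain(&self, item: u64) -> bool {
        self.positions(item).all(|p| self.bits[p])
    }
}

fn main() {
    let mut bf = BloomFilter::new(256, 4, 0.05);
    let mut inserted = 0u64;
    while bf.insert(inserted).is_ok() {
        inserted += 1;
    }
    // The filter fills up well before the bit array is saturated.
    assert!(inserted > 0);
    assert!(bf.is_full());
    assert!(bf.might_contain(0));
}
```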
I work in a 400k+ LOC codebase in Rust for my day job. Besides compile times being suboptimal, Rust makes working in a large codebase a breeze with good tooling and strong typechecking.
I almost never even think about the borrow checker. If you have a long-lived shared reference, you just Arc it. If it's a circular ownership structure like a graph, you use a SlotMap. It's by no means harder for this codebase than for small ones.
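A sketch of both patterns. The Arc part is standard library; the graph part is a hand-rolled, index-based stand-in for what the slotmap crate provides (keys into an arena instead of references), not the actual SlotMap API:

```rust
use std::sync::Arc;

// Pattern 1: a long-lived shared value goes behind an Arc; clones of the
// Arc can move freely across threads without borrow-checker friction.
fn shared_config() {
    let config = Arc::new(String::from("shared, immutable config"));
    let for_worker = Arc::clone(&config);
    let handle = std::thread::spawn(move || for_worker.len());
    assert_eq!(handle.join().unwrap(), config.len());
}

// Pattern 2: cyclic structures store keys/indices into an arena instead of
// references, so cycles are trivial. A minimal stand-in for slotmap.
#[derive(Default)]
struct Graph {
    nodes: Vec<&'static str>,
    edges: Vec<(usize, usize)>, // indices, not borrows
}

impl Graph {
    fn add_node(&mut self, name: &'static str) -> usize {
        self.nodes.push(name);
        self.nodes.len() - 1
    }
    fn add_edge(&mut self, from: usize, to: usize) {
        self.edges.push((from, to));
    }
}

fn main() {
    shared_config();

    let mut g = Graph::default();
    let a = g.add_node("a");
    let b = g.add_node("b");
    g.add_edge(a, b);
    g.add_edge(b, a); // a cycle, no fight with the borrow checker
    assert_eq!(g.edges.len(), 2);
}
```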
> how productive power users in different [fields] can be with their tools
There are a lot more tools in programming than your text editor. Linters, debuggers, AI assistants, version control, continuous integration, etc.
I personally know I'm terrible at using debuggers. Is this a shortcoming of mine? Probably. But I also feel debuggers could be a lot, lot better than they are right now.
I think for a lot of us, reflecting on our workflows and spotting things that could be done more efficiently with better (usage of) tooling could pay off.
The point of a CS degree is to know the fundamentals of computing, not the latest best practices in programming that abstract the fundamentals.