Good to see [0] Polya's book mentioned (and linked) in the article. For anyone interested, it provides a pretty good general outline for problem solving techniques. [1] There's also a YouTube video of him explaining some of those methods.
Yes, there are strong parallels between Polya's video and the "Cryptodiagnosis" chapter. The chapter is based on a monograph called "Ars Conjectandi" ("the art of conjecturing") and Polya's video theme is "first guess, then prove".
"In all of one's endeavors, the successful cryptanalyst's precept should be kept foremost in mind: "Try the simplest thing first." It is surprising how often this admonishment is disregarded, to the later chagrin of the cryptosinner." -- Callimahos
It might be worth mentioning that the title "Ars Conjectandi" is taken (clearly deliberately) from a much earlier (1713) work of the same title by Jacob Bernoulli, about combinatorics and probability. That title may well be a nod to an even earlier (1662) work called "Logica, sive Ars Cogitandi" -- Logic, or the Art of Thinking -- which was published anonymously and also includes some discussion of probability; probably substantial parts were written by none other than Blaise Pascal.
I've done plenty of research on this topic and I'm immediately reminded of [1] James Webb Young's short but brilliant book called 'A Technique for Producing Ideas'. I first came to know about it on [2] Kirby Ferguson's YouTube channel. Other resources that are equally good are [3] John Cleese's talk on creativity and his [4] little booklet on the same topic.
Also, ideas can seem fascinating in themselves, but chasing ideas alone (without validation or trials) is an obsession you don't want to have. Please remember to strike a healthy balance between generating ideas and executing on them. That famous ceramics-class anecdote on quantity vs. quality from Art & Fear by David Bayles comes to mind.
This person is outlining a good approach that educators must take towards students. But sadly, I don't see this change happening any time soon.
Academic circles often contain some of the most closed-minded people, with extremely rigid and archaic notions of things like how students ought to behave or think, what kinds of questions they may ask, etc.
If anything, I'd say HN and reddit (or at least circa 2014 reddit, I haven't gone there since then) should be reminded of the opposite.
Skepticism is fine, and critical thinking should never be switched off, but HN is more prone to armchair-quarterbacking bleeding-edge science without any qualifications to do so (with predictably comical results) than to uncritically accepting results.
I myself am more likely to do the former than the latter.
I see this a lot on here especially, where people regard themselves as being well-informed, at the bleeding edge, etc due to reading and quoting scientific papers, especially ones with interesting or counter-intuitive results. But one paper that may or may not have used valid statistical analysis doesn't really prove anything.
That's exactly what the article is about. It's fine for scientists to propose weird and outlandish theories with just enough analytical duct tape to ensure they aren't completely absurd. But those theories are meant to be analyzed more deeply and critically before anything is done with them, not just blindly assumed to be true because somebody published a paper on it and nobody refuted it yet.
There are a lot of statistical tricks that are tough to detect. Is your sample truly random and representative? How could you even prove that? Take the Rat Park study purporting to show that drugs are only dangerously addictive if you're also socially isolated and miserable. It's cool, and I kind of want it to be true, but it turns out to be highly dependent on exactly what strain of rats you have and where you got them. Similarly, most of the time when you analyze a bunch of statistics, you're looking for a conclusion like "B is associated with A at 95% confidence." That still leaves a 1 in 20 chance of a false positive, so if you run the same test for each of B through U (20 candidate causes) and none of them actually cause A, you should expect roughly one of those letters to falsely show up as causing A. But nobody publishes the null results for the other nineteen letters, so you can't tell.
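The multiple-comparisons arithmetic above can be sketched with a quick simulation. The 5% threshold and the 20 candidate causes (B through U) are from the comment; the trial count and seed are arbitrary choices for illustration:

```python
import random

random.seed(42)

ALPHA = 0.05      # conventional 5% significance threshold
CANDIDATES = 20   # hypothetical candidate causes B through U
TRIALS = 10_000   # number of simulated "research programs"

# Under the null (no candidate actually causes A), each test still
# comes up "significant" with probability ALPHA. Count how often at
# least one of the 20 candidates clears the bar by pure chance.
hits = 0
for _ in range(TRIALS):
    if any(random.random() < ALPHA for _ in range(CANDIDATES)):
        hits += 1

print(f"Simulated P(at least one false positive): {hits / TRIALS:.2f}")
print(f"Analytic value: {1 - (1 - ALPHA) ** CANDIDATES:.2f}")  # ≈ 0.64
```

So even when nothing causes A, a literature that only reports the significant test will "find" a cause roughly two times out of three, which is exactly why the unpublished checks for the other letters matter.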
Yeah, "skepticism" doesn't mean cherry-picking papers and data that agree with one's intuitive understanding of things or some preconceived narrative. At the very least it includes being as skeptical of one's own beliefs as of everyone else's.
There's a delicate balance between keeping authoritarian-esque institutional scientific momentum in check and sliding into borderline conspiracy-level framing of things.
You should remain skeptical, you should question results, and you should be capable of understanding the data and the responses to it. You should understand that uncertainty often goes unquantified yet still exists, and you should apply critical thinking as well. You should also recognize when you're far outside your own domain, potentially misunderstanding the situation or missing critical information in your analysis.