Every time I do the AoC puzzles I wish for two things:
1) The site should ask for the language used to solve the problem and then use this as semi-scientific data for comparing "time-to-solution" for various languages. The sheer volume of data and the relatively high difficulty of cheating on novel problems would make this data set interesting and maybe even useful.
2) I wish all the puzzles had a "hardcore" mode where a naive approach would take a million years of compute, or exabytes of memory. Have multi-gigabyte inputs[1] so that time-to-solution isn't just about how fast you can bang out some simple parsers and loops; runtime performance would materially affect your time score.
[1] A method for this would be to use an input-file-generator that is small to download but can generate huge puzzle inputs.
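A minimal sketch of what such a generator could look like, assuming a seeded PRNG so a few kilobytes of script deterministically expands into a multi-gigabyte file (the two-integers-per-line puzzle format here is made up for illustration):

```python
import random
import sys

def generate(seed: int, n_lines: int, path: str) -> None:
    # Seeded, so the same small script reproduces the same huge
    # input for a given player.
    rng = random.Random(seed)
    with open(path, "w") as f:
        for _ in range(n_lines):
            # Hypothetical puzzle format: two large integers per line.
            f.write(f"{rng.randrange(10**9)} {rng.randrange(10**9)}\n")

if __name__ == "__main__":
    # ~100M lines at ~20 bytes each is a couple of gigabytes.
    generate(seed=int(sys.argv[1]), n_lines=100_000_000, path="input.txt")
```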
Part 2 is quite often what you wish for in 2), isn't it? At least for the later puzzles, part 1 is often something that can be solved naively, while part 2 needs some cleverness to not blow up.
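A concrete example of that split is 2021 day 6 ("Lanternfish"): part 1 (80 days) survives simulating every fish in a list, but part 2 (256 days) grows the population exponentially, and the fix is to count fish per timer value instead. A sketch of both styles, not anyone's actual solution:

```python
from collections import Counter

def count_fish_naive(timers, days):
    # Part-1 style: one list entry per fish. Exponential memory.
    fish = list(timers)
    for _ in range(days):
        spawned = sum(1 for t in fish if t == 0)
        fish = [6 if t == 0 else t - 1 for t in fish] + [8] * spawned
    return len(fish)

def count_fish_clever(timers, days):
    # Part-2 style: only timer values 0..8 exist, so track counts.
    counts = Counter(timers)
    for _ in range(days):
        spawning = counts.pop(0, 0)
        counts = Counter({t - 1: n for t, n in counts.items()})
        counts[6] += spawning  # parents reset to 6
        counts[8] += spawning  # each spawns a fish with timer 8
    return sum(counts.values())

# Published sample input/answer from that puzzle:
assert count_fish_naive([3, 4, 3, 1, 2], 80) == 5934
assert count_fish_clever([3, 4, 3, 1, 2], 80) == 5934
```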
Yeah, usually part 2 is. Bear in mind that tonight's puzzle is a warmup; it's day 1, after all!
I do find there's usually one or two spots where you can eke by with threads and throwing compute power at the problem. But those are rare; usually you have to actually solve it.
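In Python that means processes rather than threads (the GIL blocks CPU-bound threading), but the shape is the same either way. A sketch of the "throw compute at it" pattern, with a placeholder predicate standing in for whatever brute-force check a given puzzle needs:

```python
from multiprocessing import Pool

CHUNK = 1_000_000

def first_hit_in_chunk(start: int):
    # Placeholder predicate; a real puzzle would test hashes,
    # simulate a machine, etc.
    for n in range(start, start + CHUNK):
        if n % 1_000_003 == 424_242:
            return n
    return None

def brute_force(workers: int = 8):
    with Pool(workers) as pool:
        # imap preserves chunk order, so the first non-None result
        # is the smallest hit.
        for hit in pool.imap(first_hit_in_chunk, range(0, 10 * CHUNK, CHUNK)):
            if hit is not None:
                return hit

if __name__ == "__main__":
    print(brute_force())  # 424242
```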
> site should ask for the language used to solve the problem and then use this as semi-scientific data for comparing "time-to-solution" for various languages.
I think this would just show you the average time zone of the language's users.
Today, 4 minutes gets you into the top 1000. There are more than enough users starting the moment the problem is released that an analysis could be done by checking only solutions submitted within the first hour.
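A sketch of that analysis, assuming a hypothetical submissions.csv export with a self-reported "language" column and a "seconds"-since-release column; filtering to the first hour restricts the sample to people who started at release, which also blunts the time-zone confound mentioned above:

```python
import csv
from collections import defaultdict
from statistics import median

def median_time_by_language(path="submissions.csv", window=3600):
    # Columns are hypothetical: "language", "seconds" (since release).
    times = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = int(row["seconds"])
            if t <= window:  # only solvers racing from release time
                times[row["language"]].append(t)
    return {lang: median(ts) for lang, ts in sorted(times.items())}
```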
I generally find time-to-solution uninteresting; how closely the solution models/mimics the problem statement is the far more interesting thing.
I imagine there's a wide breadth of languages showing up in the top results, but I would guess it skews towards imperative languages. Yet the functional languages will often read almost like the problem statement, which is what I find wonderful.
> The site should ask for the language used to solve the problem and then use this as semi-scientific data for comparing "time-to-solution" for various languages.
I don't believe any valid causal inference about languages could be drawn from this, because the skill of the individual programmer at solving these kinds of puzzles (recognizing the patterns, knowing the number theory, etc.) would surely dominate any effect of the language itself.
I suppose you could conclude something along the lines of "newer programmers use Python more often than Haskell", but I doubt there would be many surprises there.
It would only work if the language-to-programmer mapping were largely random with respect to skill level, or if languages were randomly assigned rather than chosen by the programmers. But then you'd also need to give six months' notice or so for people to learn their assigned language if they didn't already know it.