During my undergrad in Germany, the CS department was in the process of switching from optional homework to various forms of mandatory homework (either directly counting towards the final grade, or requiring a minimum score on the homework before allowing registration for the exam). AFAIK this was because under the old system, there had been too many students registering for exams despite being woefully unprepared, and then predictably failing as a result.
I think optional homework works for classes that are obscure enough that only intrinsically motivated students would consider taking them, but in mandatory classes or trendy majors, there are going to be many people who need a bit more external motivation to study.
I studied math, and all our exams were oral exams. The professor had to actively accept you for the exam, which was usually a given if you did your homework. (But you could probably get into the exam without doing the homework, too, if you convinced them.)
I have been teaching CS at German universities for close to two decades now.
> AFAIK this was because under the old system, there had been too many students registering for exams despite being woefully unprepared, and then predictably failing as a result.
True. That's the real Dunning-Kruger problem: incompetent people do not know how much help they need to get competent. It is our job to show them their weaknesses as early as possible so that they can effectively work on them.
(I believe that state-funded universities (as in Germany) have some obligation to not only educate the self-motivated top 1% but also offer a solid education even for less perfect students - at least if there is a societal need for their competences.)
Another, more important, reason is that written exams are not good tests of programming competence - especially as tasks and frameworks get more complex. We want to assign good grades to students who are competent at developing software in realistic settings, not in highly artificial exam settings.
Huh - what do you mean? I just checked again, and this is IMHO exactly what Kruger and Dunning reported:
From the abstract:
"Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities."
> Relation between average self-perceived performance and average actual performance on a college exam.[1] The red area shows the tendency of low performers to overestimate their abilities. Nevertheless, low performers' self-assessment is lower than that of high performers.
So regression towards the mean explains the entire effect.
Thanks a lot for the explanation and link. I had read the original papers a long time ago, and was not aware of the more recent discussions. That said, I just read a few of the critical papers, and it seems that even Gignac and others do not dispute that the effect is observable.
They just don't believe that unskilled people are inherently worse than skilled people at estimating their own skill; rather, all people overestimate their skill (the better-than-average effect).
This is still very much compatible with my claim that unskilled people profit from being reminded (repeatedly, not just in the exam at the end of the semester) that they know less than they think.
I will avoid conflating this with the Dunning-Kruger effect in the future (Thanks!).
A recent study found that medical students' estimates of their own intelligence get lower right after taking an IQ test (confirming the better-than-average effect). But one week later, their self-estimated intelligence returns to its pre-test level. To me this suggests that students (and everyone else) will overestimate their abilities - and invest less time in learning - if they are not given constant feedback.
So this isn't all that crazy.