
Threads like these are always frustrating, because as usual people (programmers in this case) freely air their opinions on how schools are broken with phrases like “we need to fix the system”.

As someone who studied pedagogy for years and quit out of immense frustration with exactly this — how broken the system is — I would encourage you to entertain the thought that maybe you, as someone who in almost all cases is not a teacher and has no experience beyond once having been a student, do not have a good understanding of how exactly this system should be fixed, and that it is not broken for fun but because there are some very difficult unresolved issues.

People love to rant about how bad tests are. “We just study for the tests” and so on. And yet this complaint seems to be international. Curious, isn’t it, how all these systems seem to fail in the same way?

In the case of testing it’s because you choose to focus on the obviously bad thing (current state of testing) rather than the very complex and difficult question behind it: HOW do you measure knowledge? And when you decide how, how do you scale it?

These are very hard questions, and it’s frustrating to read the phrase “we need to fix the system” because yes, obviously we do. But agreeing that things are bad isn’t the hard part; input from people who have never worked in the field is of pretty limited value for resolving the hard part, and will not do much more than annoy teachers even further.

So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.

Cynically, this will never happen, because reforms to battle educational issues in any democratic society usually take more than 5 election cycles to show obvious results (and when the bad results start stacking up, current leaders will take the flak regardless).



> HOW do you measure knowledge? And when you decide how, how do you scale it?

I have experienced good tests and bad tests. I studied in France, where tests were open book with no multiple-choice questions, only problems to solve. This approach scales badly and is a lot of work for the professor grading, but it measures knowledge.

The problems were long, with little beyond a description of the problem and maybe a few questions to guide the student along the path to solving it. We had either 3 or 4 hours to solve them.

Those tests worked very well. I'd come out from one of those tests having often learned something new.

I was an exchange student in the US, where tests involved multiple-choice questions and were closed book, with questions built around rote memory. While I did feel that some of the education in the US was valuable and interesting, I hated those tests; they correlated less with comprehension of the subject matter and more with learning facts that are more or less tangentially related to it. I still remember being shocked, in a computer graphics test, at being asked when OpenGL was first released, which companies were involved, and other completely useless knowledge.

What's interesting to me is that there are far fewer opportunities to cheat with the former tests, while the latter tests are pretty much made for cheating. So, imho, cheating is a symptom of bad tests.


I don't know if it's popular in France, but another very simple idea that eliminates cheating entirely is oral exams. They're still done a lot in Italy. I once literally inverted a binary tree on a whiteboard :)
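For anyone who hasn't run into the meme: "inverting" just means mirroring the tree, swapping left and right children all the way down. A rough sketch (throwaway names, nothing from any particular course):

    class Node:
        def __init__(self, value, left=None, right=None):
            self.value, self.left, self.right = value, left, right

    def invert(node):
        # Mirror the tree: swap the children, then recurse into each subtree.
        if node is None:        # base case: empty subtree
            return None
        node.left, node.right = invert(node.right), invert(node.left)
        return node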


IMO oral exams and open-ended answers are the kinds of things that really work better for their intended purpose, and everyone knows it. But people still prefer multiple choice because "it scales". The goal isn't simply measuring knowledge, it's doing so in an acceptable/shitty way, with (edit) limited resources.


Next semester, we plan to start doing "interviews" of students in which they explain their own code to us.

It obviously doesn't scale, so we'll use random sampling.


When I was a TA we did something similar: we asked students to self-grade their own homework using a provided rubric, and then we spot checked 1/4 of the students (without replacement) to punish lying about what grade you deserve. We didn’t punish for a few disagreements over the rubric, but if it was blatant we checked their assignments every time in the future (and told them). I think if it was bad enough we could have reported them.

This saved a bunch of time on actually grading assignments and made us write a very clear and unambiguous rubric (which required a very clear homework) and also demonstrated to the students that grading was not arbitrary.
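For what it's worth, the spot check itself was nothing fancy: a random draw without replacement plus anyone we had already flagged. A minimal sketch of that idea (hypothetical names, not our actual tooling):

    import random

    def pick_spot_checks(students, fraction=0.25, flagged=()):
        # Students previously caught misreporting are always re-checked;
        # the rest are a random sample drawn without replacement.
        always = [s for s in students if s in flagged]
        rest = [s for s in students if s not in flagged]
        k = max(0, round(len(students) * fraction) - len(always))
        return always + random.sample(rest, min(k, len(rest)))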


> It obviously doesn't scale

Peer teaching/grading potentially scale out.

Several universities [1] scale out personalized instruction and interactive grading by hiring students from previous cohorts and paying them either in course credit (taking a "course" that involves teaching students in the current cohort) or at a low rate (possibly subsidized by financial aid) comparable to other on-campus student jobs.

[1] "Scaling Introductory Courses Using Undergraduate Teaching Assistants", SIGCSE '17, https://doi.org/10.1145/3017680.3017694

(I want a browser extension linking doi to google scholar btw)


How do you justify the fact that only some of the students get the pleasure of an in-person grilling? Or, am I completely misunderstanding the process you're going to be using?


In my plan, each student is interviewed at least once. Ideally more than once by the same teacher, so the teacher can get to know them a little better, spot areas where the student needs more help, etc.

There's still a scaling problem, but I think it makes the ~200 student classes we have now more feasible than 100% autograding. I also like the other commenter's suggestion of coming back to interview certain students each time, if they need it.


Is this about pleasure or about measuring knowledge?

A lot of stuff you learn and the way you learn it isn't necessarily pleasant, but frequently you still have to do it, and you only really discover 20 years later why it was needed.


No, it's about why only a subset of students get singled out for extra scrutiny, literally arbitrarily, as the selection procedure itself is defined as "random sampling."


Random sampling is an effective method for inferring the same information about the larger population that is being measured in the smaller sample, to a certain degree of confidence based on the sample size and known distribution of what is being measured. These concepts are fundamental to statistics.
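As a rough illustration of how quickly sampling tightens up (standard normal-approximation formula, numbers made up):

    import math

    def margin_of_error(sample_size, observed_rate, z=1.96):
        # 95% normal-approximation margin of error for a sampled proportion.
        return z * math.sqrt(observed_rate * (1 - observed_rate) / sample_size)

    # e.g. interview 50 students out of a 200-person class, 10% show a gap:
    print(margin_of_error(50, 0.10))   # ~0.083, i.e. 10% +/- 8 points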


In college, viva voce is a significant part of non-theory exams. It’s another matter that it was not run well by many colleges, but I always loved those chit-chat sessions with some of the good professors. Some professors treat it like a boring Q&A, which reduces its effectiveness.


I think you might be who the top response is responding to. You seem to have inside knowledge that saving money is the top priority without considering any real-world resource constraints.


The top response is the one that brought the constraint of "scale" into this discussion, and that's what I'm addressing. Maybe you should bring your objections to them rather than to me. "Real-world resource constraints" is just a euphemism for "wanting to save money" in this case. I'll edit it to clarify that I mean the same.

And I'm not passing judgement on the choice made, nor saying the constraints aren't there, nor saying anyone should do anything different. I'm just pointing out that the scalability constraint will affect the test possibilities, which will affect the quality of the measurement. Feel free to disagree with this all you want.

EDIT: Also, I do happen to have some inside knowledge, having worked in higher education for about a decade, starting in the mid 00s. Coincidentally, most of my work was on cost-saving measures, designing a few algorithms that allowed universities to reduce their teacher headcount (first at a university, then at a software vendor), so yes, the #1 goal there was saving money. But I don't think having done this affects my answer, nor do I think that I deserve special treatment. I'm merely replying to a chain of comments.


Wanting to save money also falls under the availability of staff trained to do this, and under considering whether the massive increase in expense and the diversion of people from other economic endeavors are worthwhile.


Good hunch, but in my experience, availability of trained staff was never really an issue in practice. Hiring well-trained university faculty was always purely an economic problem. Universities often already have a surplus of trained faculty working in a highly reduced capacity. Especially in the last 10-15 years, as distance learning became commonplace, a lot of faculty were replaced by low-paid part-time quasi-teachers who would be more than happy to be offered a permanent position. To further demonstrate that this is an economic problem: those quasi-teachers often have job titles other than "teacher", depending on the jurisdiction, in order to evade laws and the reach of (often very powerful) faculty unions.


Oral exams have a whole other bunch of issues. Just looking at the professor's side, besides time, I imagine it would be very difficult to grade with the same yardstick an arrogant student, a dismissive one, a smelly one, an eloquent one, or even the first and the last one in the same session.


... a male student, a female student, an attractive student ...

And yes, this is actually a well-known problem in Italy - with (typically male) professors being routinely accused (and occasionally convicted) of favouring attractive (and typically female) students.


If you think that a written test is a great equalizer, it's not.


But it is much more of one than an oral exam is.


I don't agree with this. They have different failure modes, but I believe that in aggregate an oral exam affords the candidate the fairer shot, given the minimal assumption that the professor is in good faith.

If I say something imprecisely or if I make a non-fundamental mistake, an oral setting gives me the chance to correct myself and prove to the examiner that I have a strong grasp of the material regardless.

Written exams, especially multiple-choice and closed-answer quizzes, reward people who regurgitate the notes; oral exams and long-form written open questions reward actual knowledge.

Of course the "better" methods require a greater time investment, and I can't really blame professors who choose not to employ them. But it's quite clearly a tradeoff.


> If I say something imprecisely or if I make a non-fundamental mistake, an oral setting gives me the chance to correct myself and prove to the examiner that I have a strong grasp of the material regardless

This just proves the point even further: in an oral context the examiner's disposition toward you matters much more than in a written one, which by definition implies that the oral exam cannot be fairer than the written one.

You yourself are saying that you "have the chance to correct yourself". This is either because you will correct yourself on recognizing a specific (perhaps subconscious) face or gesture from the examiner, or because the examiner will directly tell you that you are wrong. Both cases present ample opportunity for unfair discrimination. In the first case, perhaps a person is less skilled at reading people, or perhaps the examiner just has a better poker face. In the second case, you are now at the whim of the examiner, who decides based on your body language whether "you are making a non-fundamental mistake and deserve a second chance" or "have no idea of the material and don't deserve a second chance". And, compared to the written exam, there is absolutely no record of the context that led the examiner to such a conclusion -- which is also kind of important, since evidently the written exam is also subject to some discrimination.


This isn't how oral exams work, though.

Nobody expects you to be 100% on point, it's just impossible; it's not like the spoken variant of a written exam. The kind of "correction" I mean is more along the lines of what would happen during a normal conversation. Imagine I was asked to write a recursive algorithm and I forgot the base case. It's not a fundamental mistake, but the professor might interject to make sure I actually know about termination, inductive sets, etc., which is actually great if you understand the material deeply, because it gives you a chance to prove that you actually just forgot.
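Concretely, the slip I have in mind is something like the toy sketch below, minus the marked lines; the follow-up question then probes whether you understand why they are needed, not whether you memorised them:

    def factorial(n):
        if n == 0:              # the base case: without these two lines
            return 1            # the recursion never terminates
        return n * factorial(n - 1)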

Obviously this is assuming good faith by the examiner, but if you aren't willing to assume that, there aren't very many examination formats that are going to work very well.


It's not a question of good faith or not. He may be showing completely unintentional bias. But the point is that the oral exam gives that bias a shitton more opportunities to play out. If you even try to say that the oral exam is just "a normal informal conversation" rather than something following a very strict protocol, you might as well give up any appearance of fairness. How much of a role bias would play in such a conversation is just off the scale.


It's not the examiner deciding whether "you deserve a second chance or not". In a normal oral exam everyone gets an "I don't think that's correct" or "please explain that to me" kind of response to a wrong answer. They don't silently scribble a note to deduct a point from your score or something like that.

How you deal with that is really where your score comes from. Because if you know what you're talking about you'll correct it and while doing so show that you know a lot of related things. While if you have no idea you can't guess yourself out of that type of question.


I don’t know. For example, in music examination, the outcomes change drastically if you blind the examiner from seeing the student or knowing their name. Unless you see something different in the world of music, I’d say the examination is happening at the same level of “good faith”ness.

How would you blind oral examination so that the examiner is unable to distinguish the student’s gender/race/identity?


> For example, in music examination, the outcomes change drastically if you blind the examiner from seeing the student or knowing their name.

FWIW, the study that "proved" that appears to have been a pretty bad study. So, in reality, no: people are not terribly prejudiced, and things don't change significantly when you blind the examination.


Should be technically possible these days, if we wanted to.

I still think we are conflating the objectives of

1. Teaching students skills and knowledge they need and want, and

2. Rationing access to jobs, status, etc.


All students in a class cannot take an oral exam simultaneously. This means that either:

* everyone gets the same questions meaning later students can cheat by asking earlier students what was on the test, or

* everyone gets different questions meaning much more effort to design the exam and big risks that some students will get easier questions and others will get harder questions


Most of the oral tests I have taken had the questions posted by the lecturer before the exam. I don't understand why it would be a problem for students to say what the question was.


> I don't know if it's popular in France, but another very simple idea that eliminates cheating entirely is oral exams.

As an introvert, I am very happy not to have had too many oral exams during my studies (in France ;) ). I think I agree with you in principle, but to me that would have been torture.


You get used to it. I had the typical weekly oral exam for 2 years in the French "classe prépa", and it was torture at first. I can definitely say that it changed me and made me less stressed about these kinds of situations, even years later at work.


I was a student in France, and during the two years of high school I had a bunch of blackboard exams, and yeah, you kinda have to learn the material. Of course it also helps to be comfortable in such situations, but we had enough of them to get trained in that.


You had enough of them to get trained in that. And it might have taken you few enough of them to get comfortable that it didn't affect your grades in a way that made you drop out. I had a friend in university who would just completely fall apart in any kind of such situation, even when it wasn't for an exam, and even when it was a group presentation and it wasn't just him up there. Written exams were completely fine, though. Did he not deserve to get a CS degree and just work at some company where he doesn't have to become a team lead or architect who needs to speak and present, and can instead steadily and happily work in his corner, talk to his immediate peers, and crank out solutions?


I'm going to go out on a limb here and say that presenting to other humans is (a) a skill that most people can learn (to at least "acceptable" levels of proficiency) & (b) a skill that most people should learn, because it's a huge part of working in the field.

I understand it's incredibly uncomfortable.

I'm a pretty serious introvert and got the shakes and sweat dripping off my hands the first few times I did it. But with exposure and effort to self-improve, it's doable. I didn't like it, but I'm incredibly thankful I was forced to work on it.


Ah yes, the good old fallacy: "I could do it, so it's doable". It's doable by you. That doesn't mean it's doable by someone who is not you, even though they might be otherwise deserving.

It's like LeBron James saying "I learned to dunk, so anybody can dunk!" - but basketball is not just about dunking, and not everyone is LeBron.


Talking to other people is not dunking a basketball 3m into the air.

Frankly, I've been in oral exams; in Romania they're (were?) part of a national exam at the end of high school. You just have to practice.

If hundreds of thousands of high schoolers in a rather poor country could figure out how to do it (and generally not flunk due to the oral part), for sure university students can do it.

Anyone not able to do it will not really be able to pass any interview, persuade peers that their idea is good, etc.


I've been in plenty of oral exams too, in Italy. That doesn't mean I ever enjoyed them or felt they did me justice.

> Anyone not able to do it will not really be able to pass any interview, persuade peers that their idea is good, etc.

I strongly disagree there. Orals are a situation of complete knowledge and power imbalance between two parties. That is not the case when it comes to persuasion.

As for interviews - yeah, they are similar, and that's why interviews also are seen as very problematic. A lot of people who can be perfectly productive in day-to-day situations, simply don't do well in interviews. We should be striving to correct that, not accept it as inevitable.


I think the way those examinations were set up helped a lot in getting comfortable (or at least good enough): it was a weekly event, just three students and the teacher in one room, each student working on their own question(s); the teachers were more or less helpful, but most would guide us along, not leaving us stuck at our blackboard for the whole duration.

But if someone, even in those conditions, really can't do it, they'd have to switch to a course/class without any oral examination to get their degree. Still, I think it's way better to learn this as a student than as a professional (and yes, like the sibling comment, I think most people _can_ learn to an acceptable degree)

> work in his corner, talk to his immediate peers and crank out solutions

Do such posts really exist?


I think you should quote more of that sentence and then I can say that yes, definitely these do exist:

    where he doesn't have to become a team lead or architect where he'd need to speak and present and instead steadily and happily work in his corner, talk to his immediate peers and crank out solutions?
Yes, companies exist which do not push you out just because you have found the sweet spot of what you can do and are OK with. Of course we're not talking FAANG here, and in general I would assume that the HN clientele is skewed towards working in companies where this is not possible. However, I can tell you that way back in the past I personally worked at companies where I met many such employees who had been there for quite some time.

The big thing here being "talk to his immediate peers". The guy I was describing was completely fine working with us, his friends. Put him in front of an audience and he's got a problem. Of course it'd be hard to get a job in the first place, but a lot of places also existed, at least back then, where no coding (neither take-home nor whiteboard) was part of the hiring process. Of course you won't make that guy a consultant at Accenture; he's gonna fall apart.


> I think you should quote more of that sentence

well, that would have made the requisites for the posts harder to meet :)


There are only so many issues someone can have before people in general decide to give them a "fuck off, I don't care" treatment.

You don't have that for verbal communication with other people, but I'm sure if we dug far enough you would have the same reaction to something else that other people think is acceptable.

Just how accommodating should the standard test be?

If the answer is "infinitely", I think you won't find any test that satisfies it


I studied engineering in Italy and all my exams were both written (with exercises, multiple choice didn't exist at all) and oral. No way you could cheat or not engage with the materials.


HN is very much opposed to whiteboard interviews. I can't see this being a popular alternative.


After a class on data structures and algorithms, a white board interview asking you to invert a binary tree is very different from the same interview when you apply for a job.


I might be missing the irony in your words but... Job interviews and education tests have different objectives, I hope.


Isn't assessing skill/ knowledge/ aptitude the objective of both?


The only thing they have in common is "assessing". An exam for a course seeks to assess mastery of the subject matter of the course. An interview for a job seeks to assess skills / aptitude for a particular job.


This. Moreover, an exam for a course is, to some extent, an assessment on how the course was delivered. And an interview for a job has a much larger scope.


I had an electrical engineering final as an in person oral exam. One question. One hour to solve on whiteboard. It was a hard class to begin with and I got a hard question. I did well, but definitely expected to fail.


Totally agree. I might be a bit partial to that, because I tend to underperform on multiple-choice tests due to overthinking, but I really have the impression that open-ended questions test knowledge much better and make it more difficult to cheat. Besides that, and having almost nothing to do with cheating, another good thing in the French system is the continuous grading: labs were graded, projects were graded, small intermediate tests were graded, so you really do not study just for the exam (actually, often you do not study at all for the exam). (Beware: my experience is limited to a single grande école I attended.)


I went to INSA in the early aughts and we didn't really have continuous grading; labs (TPs) were graded, but the biggest part of the grades (les partiels - exam week) happened twice a year (or 4 times a year during the first two years of prépa intégrée).

I do know that since then they've moved to a continuous grading system. I'm not sure if that's the same at other grandes écoles, but I do know that my friends at other grandes écoles had a similar system of 2-4 partiels a year.


I'm currently grading an open book test as you describe. It turns out that someone put their attempt at answers on chegg.com shortly after I posted the test. The temptation to use chegg is too great for students to resist. When chegg has the wrong solution (which is often the case), students will doubt themselves and will go with the wrong chegg answer.

To be clear, the only goal of chegg.com is to help students cheat. The world would be a better place if chegg and its copies did not exist.


My solution to this is to use version control and have them record an explanation of their work. If they copy from chegg they also have to forge a commit history, as well as explain the code line by line. I’d like to see them do that without learning anything.


This is a common thing in computer art, e.g. demoscene competitions, too.


What about seeding the system and putting your own (bad) answers up there ahead of time?


Why is it a foregone assumption that the important attribute of education is "measuring knowledge"?


Because we have a limited number of everything and we have to compare people.


I suspect that a "certificate of course completion" (or, if you prefer, "a course grade") does not actually require comparing individuals A and B.

It does, however, require gauging that any individual X who has taken the course has acquired enough knowledge to be considered as "having passed".

Anything beyond "pass/fail" is merely trying to stack-rank students and impose unneeded competition. But it is good for the gamification of knowledge acquisition, so perhaps not entirely bad.


The more we can split up "Teaching people useful things" and "Rationing access to status objects" the better off we are.


Many (most?) people learn your A to get to your B.


That's a good reason to split it up, that people don't understand there could be anything else to pursue education for.


Yeah, I came from a UK undergrad to US grad school and was shocked to see that even some advanced undergrad classes, with grad students in them, were tested by infantile multiple-choice questions (this was at Harvard). It almost makes one wonder whether the US dominates academia to the extent it does because of the foreign influx.


> I was an exchange student in the US, tests involved multiple choice questions, they were closed books with questions around rote memory.

As a US citizen, many tests were open book essay style, especially once we got to college.

In public school, however, there were lots of "standardized" multiple-choice tests that were used to grade the school. Some of those tests also included an essay portion.

Teachers in the US aren't paid to do grading; they typically do it at home in their own time, hence very few essay-style tests.


I never gave a fuck about grades. For some courses I had below average marks, for others I was the best.

I studied because I wanted to genuinely understand how things work, how I can solve problems, and how, beginning with one idea, I can extend it or come up with an entirely different idea.

And I know I will get downvoted for saying this, but for me, programming without a solid understanding of the CS background and of how computers work would make me just a code monkey, able only to do what I saw in tutorials and copy-paste SO answers without understanding them. Which can be fine; lower-level work is ok and highly needed. But it wouldn't make me as good as someone who knows his stuff from a to z.

I hate it when I hear someone considering himself a programmer after he modified a WordPress theme or did a 3-week boot camp.

Why should this field be held to much lower standards than medicine, physics, math, chemistry?

I never heard someone bragging that he is a doctor after watching YouTube videos, which happens often with writing and architecting software.


> I studied because I wanted to genuinely understand how things work and how I can solve problems and how, beginning with one idea I can extend it or come up with an entirely different idea.

> programming without a solid understanding of the CS background and of how computers work would make me just a code monkey, able only to do what I saw in tutorials and copy-paste SO answers without understanding them

Other people genuinely want to understand how things work, and getting a CS degree is not the only way to get there.


> Other people genuinely want to understand how things work, and getting a CS degree is not the only way to get there.

Yes, except ...

University courses are always going to have a mixture of people with different motives. One of those motives is going to be a desire to go into research, while another is going to be earning credentials to prove they have an understanding of their field before they embark upon their career. Then there are the people who take the courses out of a pure desire to learn how things work, either in a structured environment or alongside like-minded people, without pursuing it as a career path. That runs the gamut from needing university credentials, to the credentials being nice to have, to not needing the credentials at all. Yet, in each case, the desire to learn is genuine.

Then there are the people who are cheating the system by treating the university as a credential mill, the means to an end, where the end has little to do with furthering their understanding. Some of them are upfront about this. I remember one of my high school peers saying they chose their discipline based upon how much money they would earn and how well they would perform. Some choose to deceive themselves about this, largely by griping about how poor instruction is or how irrelevant the course material is while putting little effort into the courses. Whichever way you look at it, these people are problematic when they step over the line by cheating. They go from being passive leeches on the system to actively destructive forces.

There are genuine advantages to learning in a university environment. For some it is structure. For some it is being able to work with their peers. For some, it is having access to professionals in their chosen field. All of these can contribute to understanding, if the learner chooses to do so. I have known very few professors who would turn away someone who was genuinely interested in learning. For the most part, they were more inclined to side with the students who would benefit from learning in a university environment, but were struggling to keep up with the demands of those who would not!


> Then there are the people who are cheating the system by treating the university as a credential mill, the means to an end, where the end has little to do with furthering their understanding.

Is this really an issue? I eventually fell into that bucket. I went to university for EE since I figured I could teach myself CS easily, whereas learning EE without a lab would be harder. I quickly realized I hate a very large portion of EE (anything outside semiconductors) and for most of my courses I did the bare minimum to get an A with minimal understanding.

> They go from being passive leeches on the system to actively destructive forces.

I really don't see how that makes me a passive leech. I agree that cheating is actively harmful to the peers that don't cheat, but someone that doesn't engage is a net gain to the others in my book. By not attending any optional tutorials/office hours it gives other peers more focus time with the professor/TA.


The most trivial example: the requirement for assessment puts a load onto the course staff. While a correct answer presented in the conventional way is typically easy to assess, an incorrect answer or an answer presented in a non-conventional way is much more difficult to assess.

The people I worked with typically wanted to see students succeed. It went well beyond the mechanics of delivering instruction and assessing work. They considered what they were doing and put time into modifying their practices when students did not appear to be engaged. For a handful of students, it would be successful. For the vast majority, it would flop, since the disinterested ones were rarely receptive. Some cases may have been similar to yours, where it sounds like you were focusing on other subjects. Some students appeared to be, ahem, more interested in the non-academic merits of university life. It is difficult to tell what the breakdown is since those students were always the most distant. But either way, those passive students were a drain. They simply weren't as much of a drain as the cheaters.


Most of what people learn is from the day-to-day, not what they are formally studying. What does that tell you? Instead of the ad hominem about the 'non-academic merits of university life', perhaps they are in fact not being instructed properly. This makes sense because graduate school is such a grind of personal research instead of actually educating other people.

Teachers need to design their programs around actual learning principles like: Deep Processing, Chunking, Building Associations, Dual Coding, Deliberate Practice.


IMO even CS degrees aren't worth much these days. I know too many CS graduates who can't grasp the basics, and their work suffers from it. For a recent example (which I hope doesn't get picked apart): even with help they can't guesstimate the complexity of simple algorithms they write.
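To be clear about what I mean by "guesstimate" (a toy example of my own, not one of theirs): being able to glance at something like this and say roughly how it scales.

    def has_duplicate(items):
        # nested loops over the same list: roughly O(n^2) comparisons
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicate_fast(items):
        # one pass building a set: roughly O(n) time, O(n) extra memory
        return len(set(items)) != len(items)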

Genuine interest is a requirement, a degree isn't. Anyone wanting to measure knowledge is even more fucked than we assume.


> Other people genuinely want to understand how things work, and getting a CS degree is not the only way to get there.

It's not the only way, but it is certainly a good way (at least for some).

In my case, I didn't really know what I didn't know. Before studying CS, I had this vague idea of programming as an activity I liked, but it never really "clicked" in the larger sense, as it was woven with magic that I didn't understand at the time. We make programs using Code::Blocks? Well, who made Code::Blocks, and how did they make Code::Blocks without using Code::Blocks?

After studying CS, it just felt like all those mysteries disappeared. Everything made sense, connected into a coherent whole by various mathematical links.

Of course, there are still a lot of things I don't know, and even more that I don't know I don't know. Every now and then, I run into a new concept that teaches me things I've kinda wondered about but couldn't put properly into words. Reading about such topics, and seeing the journey unfold that another person took in order to discover what I'm getting on a silver platter, is one of the most rewarding things in the world.


There are markets for a wide range of abilities.

Even in medicine, the first triage will probably be done by a nurse, then a doctor, finally a specialist.

You don't need a CS PhD for most of the work done with computers and it would be unrealistic and uneconomical to require such a high standard everywhere.

People make a good living customizing WordPress sites and the buyers get good value from it.


>There are markets for a wide range of abilities. Even in medicine, the first triage will probably be done by a nurse, then a doctor, finally a specialist.

Then why require long time in school and residency for doctors? A boot camp should be enough.

>You don't need a CS PhD for most of the work done with computers and it would be unrealistic and uneconomical to require such a high standard everywhere.

No, you don't need any degree to use Excel or MS Word or to modify WordPress themes. But you should need a bachelor's degree if you want to be a high-level specialist, programmer or architect.

>People make a good living customizing WordPress sites and the buyers get good value from it.

Likewise, you need a proper education to be a civil engineer or an architect. You do not need a degree to lay bricks or pour concrete.

I am asking for degrees for the higher levels of the IT field, not for the people who modify WordPress themes, who I am sure do a great job and are highly needed but are not exactly exponents of high-level work in this field.


> But you should need a bachelor's degree if you want to be a high-level specialist, programmer or architect

I'd like to ask, how are any of these specialties technically related to a bachelor's degree?

One does not learn how to be a successful software architect in a semester or two of university. Neither does a bachelor's degree make you anything more than a beginner in any specialized topic. While the definition of what's a "good" programmer is up for discussion, universities definitely do not produce them in any dependable capacity.

From my personal experience, at this point in time, a major part of the students who complete a bachelor's degree are in it just because programming is viewed as a well-paying job, and are about as worthless as a person who completed a 6-month bootcamp. The only difference being that the person with bootcamp experience might actually end up with an exact match of the knowledge an employer wants, which hasn't had time to slowly evaporate over 4 years of "study".

Lastly to bring up another one of your points, I'd argue that learning from YouTube videos is not the same as going to a bootcamp. I do not view either highly, but I'd say that learning on your own deserves some level of commendation as it's the most crucial skill in a field that's constantly shifting and changing.


There was an expression at my school, in response to exactly what you're questioning.

"We train you for the last job you'll ever have, not the first."

The intent of a well-rounded bachelor's education isn't to be able to walk through the JavaScript library du jour from memory, but to have at least a base level of understanding how everything adjacent to the thing you're doing works.


> "We train you for the last job you'll ever have, not the first."

That's a bit funny for Computer Science since, even though I don't have numbers, I expect the average graduate now to work until they're 65+, and the vast majority will probably be out of the field of direct software development by the time they're 45 (burnout, people management, program management, project management, executive suite, etc).


>One does not learn how to be a successful software architect in a semester or two of university. Neither does a bachelor's degree make you anything more than a beginner in any specialized topic.

That's entirely true. But if you didn't waste your time in school, you know the fundamentals, and you have the necessary background to proceed further and become good or excellent.

> that learning on your own deserves some level of commendation as it's the most crucial skill in a field that's constantly shifting and changing

I agree. You have to continuously learn. But having a solid grasp of the fundamentals and understanding how things work will just help you on your path of future learning.

In University you still learn by yourself, but under supervision.


> Then why require long time in school and residency for doctors? A boot camp should be enough.

Nurses don't go through the same system; what they go through is closer to a bootcamp.

> No, you don't need any degree to use Excel or MS Word

Yes you do, because if all I want is someone who can write Word documents, I don't need that person to understand computer architectures, see:

https://www.reed.co.uk/career-advice/what-is-ecdl/

> I am asking for degrees for the higher levels of IT field, not for the people who modify WordPress themes, whom I am sure do a great job and are highly needed but are not exactly exponents of high level work in this field.

The IT field is multi-dimensional; a person who is building a 3D engine is not the same person who is going to set up the backend system for your bank or write your kernel drivers. If you put everything in one hat you are either going to teach too much or not enough in the field that the student will end up pursuing.

Why do you believe the education sector would be a worse place if we would have 3D engineering, backend engineering, mobile engineering and Wordpress theme development as separate fields? I would enjoy it when hiring people if there would be more specific credentials to the role I want to hire for.


I agree with the general premise, but the job market is insane enough that I see this more as trying to cure the symptoms than treating the cause.

3D engineering and backend engineering are very different, but backend engineering is generally easier than 3D engineering. Meanwhile, mobile engineering and backend engineering are far more closely related. If one argues that mobile engineering and backend engineering are different enough to warrant separate majors, you might as well argue that "console/PC game development" and "mobile game development" are too. You end up with so many ways to split hairs that you're really just appeasing hiring managers being lazy.

To put the absolute insanity into perspective: if it were up to hiring managers, we'd have university trajectories like "Bachelor of Java backend development", "Bachelor of .NET backend development", "Master of Microservices", etc., which would be obsolete within 2 years, with catching up left entirely to the individual. The field changes way too rapidly for that, and such splitting also denies the similarities between different aspects and the ability to learn most things, as long as someone can function as a specialist (and most places have a specialist already). Current courses may be too generalist, but at the very least they acknowledge that almost every field in CS is effectively still data creation, modification and storage, while strategizing around physical limitations.


I believe that the cause is the following: IT has evolved far too quickly for education to keep up, and education itself is resistant to change for both good and bad reasons; we cannot change the system every year and expect grades to be comparable.

I do believe that we could create specializations that are not too specific yet still useful; the main goal is to get rid of subjects that the student will likely never encounter. I mean, we could start teaching history in Computer Science because maybe you'll program the next Age of Empires, yet we agree that the likelihood of that is so small we can round it down to 0. My issue is that we don't apply this check to all the current subjects, so we waste people's time teaching them stuff they will never use.

> To put into perspective the absolute insanity, if it were up to hiring managers, we'd have a university trajectory "Bachelor Java backend developer", "Bachelor .NET backend developer", "Master of Microservices", etc.

You don't need "Bachelor .NET backend developer", you just need "Backend Developer", someone who has a good knowledge of one stack can easily migrate to another in the same field.

> which are obsolete within 2 years and catching up is left entirely to the individual

- MVC was invented in the late 70s and is still useful; that's over 40 years and counting

- SQL also appeared around the 70s

- OOP appeared in the 60s; that's over 50 years and counting

If you know MVC, you can do MVVM.

If you can handle MySQL it's doubtful that you will have trouble with MongoDB.

I'm not implying that there were no changes and you don't need to keep yourself up to date, I'm saying that there are technologies and concepts that have longevity.


>The IT field is multi-dimensional; a person who is building a 3D engine is not the same person who is going to set up the backend system for your bank or write your kernel drivers. If you put everything in one hat you are either going to teach too much or not enough in the field that the student will end up pursuing.

I think that for a bachelor's degree, things are good as they are. Students are better off learning CS fundamentals.

Learning the framework or language du jour is easy to do by yourself. Frameworks, libraries, tools, languages come and go. Fundamental concepts will stay.

I did a master's in Web Development, so there is some specialization. Others did master's degrees in Data Mining, Machine Learning, Database Technology, Bioinformatics.

I plan to do a PhD related to using ML in Web applications, so there can be even more specialization.


>I would enjoy it when hiring people if there would be more specific credentials to the role I want to hire for.

There are plenty of credentials out there. You just mostly don't get them from universities because universities are not in general in the business of granting trade credentials.


> But you should need a bachelor's degree if you want to be a high-level specialist, programmer or architect

Alternatively, I have seen people with long CS degrees who can't code to save their life. Degrees are just papers.


Maybe we should require anyone graduating to take a FizzBuzz test before receiving the diploma.
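For reference, FizzBuzz is about as low as the bar goes; one common version (any language would do) is simply:

    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)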


> Then why require long time in school and residency for doctors? A boot camp should be enough.

Someone else answered the question: nurses do not require a long time. In some (many?) states you can become a nurse with a 2-year associate's degree, and career outcome/pay is correlated with experience, not the degree.

And nurses aren't even at the end of the spectrum. You have LPNs, etc.

But the reason I commented: This notion that it takes so many years of school + residency is mostly a US/Canada thing. In many/most countries, you go to medical school right after high school - it is typically a 5 year program.


> Even in medicine, the first triage will probably be done by a nurse, then a doctor, finally a specialist.

For very basic rote queries most pharmacy assistants (who are really just retail staff) will be able to guide you in the right direction too.


Sorry but I’m gonna be very blunt. I think sound pretentious as fuck.

> I studied because I wanted to genuinely understand how things work

I don’t think you do.

Genuine curiosity is forged in one's own mind. It is not something that can be bounded, repackaged as a curriculum, and sold at a university. It's like ether: it's everywhere and can be captured by anyone, through multiple means.

University degrees for any professions are useless. Even in medicine! There are shit doctors and good doctors. Most people here would’ve run through a couple of them before picking one. I’ve been with my current doctor for 10 years now, because they are really good, empathic, and teach me rather than just pushing pills.

Software in my humble opinion works the same way. I care more about what someone does with the tools they have, rather than them being made of wood or metal or gold


>University degrees for any professions are useless. Even in medicine!

I think there is a fundamental distinction between the professions of medicine and law, in that they have licensing boards that are a means of ensuring a standardized minimum amount of competency. Computer science does not.

Despite the fact that many call themselves "software engineers", they are not engineers in the legal sense of the word in the US unless they also have an engineering license. The point of these licenses is, in part, to protect society in professions where practitioners are expected to ethically serve the public good. One of the problems with CS degrees in the past is that there is no standardized curriculum, so one CS student may have had zero semesters of calculus and another may have been required to take 4. The standardization is what helps pull structure from the ether. That structure is compromised when people cheat.


> I think there is a fundamental distinction between the professions of medicine and law, in that they have licensing boards that are a means of ensuring a standardized minimum amount of competency.

You might want to read the transcript of this This American Life episode:

https://www.thisamericanlife.org/719/trust-me-im-a-doctor

The summary: Licensing boards do a really poor job in maintaining quality. You can do extremely bad things as a doctor and still keep your license.


I'm certainly not claiming regulatory boards are perfect. Far from it. But I do maintain that some quality control and accountability is preferable to the alternative of no quality control and accountability.


We have slightly different opinions, but I thank you for taking the time to chat, and trusting that we can do so nicely.

The boards are there to provide society comfort, but they enforce, as you said, the minimum. Some things just cannot be measured. This is very true in medicine, as it contains a human aspect as well as an ethics aspect (I make more money if I see more people and give each less time).

When I was a teenager, I was losing hair due to alopecia. My doctor at the time, who barely made sense (both of us were ESL), decided to put me on a course of prescription iron pills. I was pooping black, haha. Only later was I told that I shouldn't have been taking them, and that they were prescribed to me by accident because she had pulled the file of another patient. Their last name was my first name, and they were a pregnant female; I am a male.

The same doctor injected my mom with some drug and, as she did, she said "oh shit" and "OMG" and decided not to tell my mom what it was. She tossed the bottle in the hazardous waste box so my mom could not find out what it was. My dad was furious and made a scene, as he and my mom naturally got worried. We went to this board and they said she did nothing wrong, that my parents were making a scene, and that we should find another doctor. So much for protecting my best interests and holding a bar.

These boards are a mafia; another high-profile thread about this is on HN right now. The boards are there to provide a facade of credibility.

> The point of these licenses is, in part, to protect society in professions where they are expected to ethically serve in the public good.

I don’t want to be called an engineer and opted out of license because I think most of engineers are doing the exact opposite. Working at companies that knowingly continue operating when we know it’s causing depression? Collecting data for users without them knowing?

Regarding equality of curriculum and standardization, I hear you. None of this is stuff we can ONLY get in school. I think the interviews we all conduct at our jobs, or take when applying, are doing just that: checking minimum competency. Tangentially, I much prefer take-home tests or something of that nature.

After this, I tend to think learning should be like gardening. Not all gardens are the same and they have different needs. You may need to learn more calculus if you are in robotics, but not if you are working on something really far from that. Another example, you might really need to learn about algorithms and databases if your job/interests require it.


Yeah, you're right. I think in the context of your story, the board seems like they did not do you justice. I can say from my experience with engineering boards, they seem to be more transparent and will publish their decisions and the underlying opinions on how they reached their conclusions. I think that added transparency goes a long way to mitigate the scenario when a regulating body ends up serving as a mechanism to avoid accountability for the group they are intended to regulate.

I think I agree with your gardening analogy. It seems to me that the issue is often rooted in the hiring process. If a company were able to adequately assess skills, it wouldn't need to rely on credentials, period. I think credentials become a lazy shortcut in many ways. Sometimes I think this is born of the fact that many hiring decisions are made by people who are too far removed from the work being hired for, and thus need some pragmatic shortcut. It's easier for HR to say "you don't have the right degree" than for them to read and understand your resume and conclude "you don't have the right skills". The first is binary, the latter requires a lot of nuance.


Many engineers in the US do not have PEs even when it's an option. If you're not having to sign off on regulatory agency-related documents, potentially doing expert witness-related work, etc. there's no need for it.

I started the process at one point in mechanical engineering (engineer in training exam) but moved on to a different type of job so there was never a reason to get the certification.


I understand most engineers work under industry exemptions. The ones that do not will have to work under a PE or be a PE themselves. In those cases, the work has been determined important enough to require the additional accountability of a PE stamp.

I'm not a big fan of credentials, but I understand when credentials become a proxy for something valuable. In some cases, the value of a PE is accountability and the legal authority for a PE to push back when they are being asked to do something unethical/unprofessional. I think one of the central issues of this thread is that the credential of a college degree has become so watered down that it has lost a lot of value.


> I know I will get downvoted for saying this

I doubt it, but I downvoted you for saying that. No comment anywhere has ever been improved by adding "I know I will get downvoted for saying this, but". Please don't do it.


These kinds of passive-aggressive replies always bother me; they don't contribute much to the discussion either. Please don't do it.

There are very _few_ reasons why you should downvote a comment. I found the parent's comment great, so I upvoted it to counter your unfair downvote.


(It wasn't passive-aggressive, it was just aggressive.)

The "I know I'm going to get downvoted for this, but ..." thing is annoyingly common, and unfortunately it's common for a reason: a lot of the time it works: it lets you frame yourself as a victim without ever needing to be victimized.

And, strictly as a matter of fact, it's almost always false. I just did a search for HN comments saying "I know I|this will get|be downvoted" and checked out the first ten I found. Only one of them was net downvoted (which that particular one richly deserved), even though several of them claimed not just to know they would be downvoted but to know that they would be downvoted "into oblivion" or some such phrasing.

So, why do I care? Because (1) these things just add noise and (2) I think that on net they get unfairly upvoted; I want to discourage #1 and compensate for #2. (Also, most of the time comments that say "I know I'll get downvoted..." are in fact bad ones, but that isn't the point here. In this case, the comment itself was pretty reasonable. It just would have been better without the look-how-brave-I-am posturing.)


I've got to say I completely agree. "I know I'm going to get downvoted for this" is for me a cue to downvote it. Make it a self-fulfilling prophecy. I don't downvote a lot, but this gets a consistent downvote from me.

Every comment that has that line can be improved by leaving it out.


The cue is insecurity. You get triggered by his insecurity which you unconsciously see in yourself and don't like. This "not liking it" feeling is the "cue" you refer to.

People say "you'll probably disagree" as a defense mechanism. They preemptively expect a rejection, and make it known, in order to make it hurt less. Works in a similar way as self-deprecating humor. "You can't hurt me, if i hurt myself first." "You can't reject me, if I reject myself/you first."

I myself got triggered by this thread and its display of emotional immaturity, because I have some of it myself, and I dislike it with a passion.

I've noted that HN is a forum full of emotionally immature people who are usually polite in the way they show it. This little thread is a perfect example of it. Very off-putting; still, the threads are sometimes interesting, if we can accept this fact and try to look at the discussion itself.


> You get triggered by his insecurity which you unconsciously see in yourself and don't like.

I think you're projecting something here. I'm not triggered by his insecurity and don't see it in myself. I just think begging for votes doesn't belong here, and begging for votes through reverse-psychology doesn't either. Let the content of your comment stand on its own merits.


Thank you for your comment. It is highly valuable to me.


In retrospect, I now know that I unconsciously sabotaged myself. I knew the best way to pass subjects was to have a basic understanding of the theory and then practice a lot of exam question examples, but I just couldn't get myself to do that until I had in fact understood the theory in depth.

That led to my grades ending up exactly average, but at one point I challenged myself and got the highest marks in the most difficult course in my degree. Everyone, especially those who were normally top of the class, was like "WTF did you do??" lol


Ha, this led me to fail school the first time I tried it. Now 10 years later I’m back in school with a 4.0 because I’ve learned to navigate the system and I know what the school wants. I still balance diving deep into what I’m interested in, but I don’t let it get in the way of the grind.


But you don't need a degree to understand the fundamentals. Just because you don't have a degree from an institution doesn't mean you don't count as a programmer. You can learn all of these things at your own pace, even if you started by learning how to modify Wordpress themes or got into the field after taking a boot camp. My point (from the original comment) is that grading knowledge and the ability to produce quality work is a very hard thing to achieve. I would even go further and question whether it's even necessary. For example, you're likely not getting a job straight after college without facing the company's interview process. And every company has its own way. So even if you were to solve the issue in academics, it's likely not to reflect on the student's ability to get a job and perform properly.


>But you don’t need a degree to understand the fundamentals. Jus because you don’t have a degree from an institution it doesn’t mean you don’t count as a programmer. You can learn all of this things on your own pace even if you started by learning how to modify Wordpress themes or got into the field after taking a boot camp.

We can argue this about any other field.

>For example you’re likely not getting a job straight after college without facing the company’s interview process.

We should have a bare minimum standard, not a maximum.

But colleges and universities should be good enough that graduating from one means you are in a proper position and have the proper knowledge and abilities to start a career. Since that is not always the case, companies do still organize their own processes.

By not graduating from some recognized form of higher education in the field, you don't prove to your future employers that you might be good at what they need. You just prove that you weren't willing to do the work for a few years and that you might not have the knowledge. Some won't care as their work is simple enough and they might train you on the job, some will test you harder and some will not get you past screenings.


> Some won't care as their work is simple enough and they might train you on the job, some will test you harder and some will not get you past screenings.

Another option is that the candidate has relevant work experience instead of a university degree. This is the case for a lot of candidates I've seen. There are a lot of factors that make software easier to get into. For example, in physics you need to have foundational knowledge that was created 100+ years ago, whereas software frameworks go out of fashion every 10 years or so. Of course there are many important CS foundations and design patterns, but I believe those can be absorbed by working alongside other experienced engineers.


Well it's probably down to the harm done if the standards are lowered vs. the gain.

Incompetent doctor? People die. Incompetent chemist? People die, or at least there's substantial material damage.

When it comes to mathematicians and physicists, they only ever have any real impact when they roll up their sleeves, open MATLAB or R, and turn their theoretical work into something practical. Does that make them programmers? Probably, I guess.

Anyway, as for us programmers, there are very few jobs where a poorly written program will cause anyone any harm, especially since it can be reviewed, tested, and corrected before being used for real, unlike a doctor who must use their skills on the fly and get it right the first time, every time. So the bar for entry is obviously much lower, and lowering standards doesn't do much to increase harm.


> Incompetent doctor? People die.

Many, many doctors are completely incompetent, as in, they don't know anything. Yet not all of their patients die.

I have been a "standardized patient" at medical schools; students at the end of their education (after 6-8 years of learning) still don't know shit. And most pass. Maybe they learn on the job... but I doubt it.


Conversely, my general practitioners are part of an organization where they are doing their residency. They are all competent, very caring, and effective.

I talked to one of their IT people, who told me what a good place it was to work, and I had multiple nurses say the same, one going on a 5-minute rant about what a good place it was to work. So that could be a significant factor.


You are completely wrong about that. If they passed their exams, they know a lot. They're no good in practice because they had little practice. Yes, we do learn most of our practical skills on the job. Medicine is very much a 'know-how' profession.


> If they passed their exams, they know a lot.

Yeah, it's an exaggeration. They certainly know some things, and some of them know a lot of things. But, they all had big holes in their knowledge (huge, gaping holes) and

1/ They weren't aware of it

2/ They were trained to hide them and appear to know everything about everything, because they're the experts. That's the scary part IMHO.


> They were trained to hide them and appear to know everything about everything, because they're the experts. That's the scary part IMHO.

You've got to realize that we can't really train healthcare workers to admit failure. Culturally, admitting failure isn't accepted in any society I've ever lived in. People get really angry really fast if you don't hide the gaps, as they feel you're subpar and they're being swindled.

> They weren't aware of it

I recently taught an undergrad course, and I must admit I was baffled by the lack of knowledge of the students, and also how little effort they put into their studies. Doctors who don't read books. That's _much_ more worrying, IMO. Grade inflation, and all that...


>Anyway as for us programmers there are very few jobs where a poorly written program will cause anyone any harm especially since it can be reviewed, tested, and corrected before being used for real, unlike a doctor who must use their skills on the fly and get it right first time, every time.

That's as true for any scientific or engineering field, for architects, pianists, lawyers, painters and economists, as it is for people designing and writing software.

And yet, all those occupations are generally practiced by people with a degree, who did a lot of study and practice. No watching YouTube videos, no 3-week boot camp will land you a job as a physicist, concert piano player, economist or architect.

It's not that we don't have a high bar in this field, it's that we don't have any bar at all. A programmer is a person who calls himself a programmer. Even car mechanics and construction workers are held to much higher standards than this.


"Even car mechanics and construction workers are held at much higher standards than this."

This is not universally true. Many of the best blue collar workers I've worked with had no formal training or certification; some have had it. A few trained and certified blue collar workers I've known have been mediocre at best.

The alumni of certifications, official training, and schools are only as good as the integrity of the institution and of the alum.


More broadly, people who hold computing to the standard of construction/architecture would likely be severely disappointed if they had a glimpse of how the latter is really done.


Until very recently the most skilled people in our field had no degree because such degrees didn’t exist when they started.

Car mechanics went through the same shift where learning how to fix cars was an on the job thing and many still don’t have a relevant degree. Construction work is an old profession, but still mostly an on the job thing outside of heavy equipment.


It has more to do with credentialing bodies holding legal power over who can practice. If you masquerade as a pediatrician, tell every parent their kid needs an hour of exercise and fruits and vegetables, and just let the nurse give the injections, you'll probably do fine in 99999/100000 cases. But that one time you'll miss childhood leukemia, because you don't know what you don't know. Likewise, you write SQL-injectable code and for maybe 99999/100000 visitors you'll be fine. Until the first malicious bot destroys your company's primary DB, you lose hundreds of thousands of dollars in data, and you trash your reputation for getting future contracts due to data security.
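For anyone unfamiliar with what that failure mode looks like in code, here is a minimal sketch (Python with the standard sqlite3 module; the table, column, and function names are invented for illustration) of an injectable query next to the parameterized version that avoids it:

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
  conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

  def find_user_unsafe(name):
      # String interpolation lets the visitor rewrite the query:
      # passing "' OR '1'='1" returns every row in the table.
      return conn.execute(
          f"SELECT * FROM users WHERE name = '{name}'"
      ).fetchall()

  def find_user_safe(name):
      # Parameterized query: the driver treats the input as data, not SQL.
      return conn.execute(
          "SELECT * FROM users WHERE name = ?", (name,)
      ).fetchall()

  print(find_user_unsafe("' OR '1'='1"))  # dumps the whole table
  print(find_user_safe("' OR '1'='1"))    # returns nothing

The point stands either way: the unsafe version works fine for almost every visitor, right up until it doesn't.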


I wrote a program that helps find patients for donor organs. Make a mistake and people die; luckily, in the first real-life test 6 people successfully received an organ.


Cause an outage (or write a bug that causes an outage) for, say, hospital software, or software that distributes medical supplies during a hurricane, or distributes vaccines, and ... people die. Maybe not directly, because it's not your hand with a scalpel slipping, but critical things rely on software.


The economic waste that comes from bad code is death by a thousand cuts. Poor reuse and composability resulting in duplicated work, corner cases resulting in cascading errors, seconds of lag adding up to days or weeks of wasted time - years if at Google scale.

Put this way, the more software there is, the better programmers we want. But the cost itself is typically externalized over the consumer base and amortized over the lifecycle of these products. Further, the perception of software developers as a cost centre first and foremost is sustained. You surpass these problems by being skilled.


Programming isn't a profession. It's just a new form of literacy. A lot of people claim to be able to write, but that doesn't make them writers.


> I never heard someone bragging that he is a doctor after watching YouTube videos

Yet real doctors actually believe the bullshit they get from medical reps when it comes to prescribing actual drugs that go into patients' bodies. I will let you ponder how successful that strategy is.


Because most of what is needed out there doesn't require an equivalent of a doctor's degree.

The truth is that programming languages themselves have evolved far enough that knowing exactly what's running under the hood isn't needed anymore, outside of niche specialist cases. Most people don't even need to worry about seeing a single 'index out of range' issue, or worry about CPU cycles. And it's only going to become easier and easier.

I'd compare it to bricklaying. Yes, you need to use the correct formula for the cement you use, but figuring out that formula has already happened. For niche cases that require special cement, you go to the cement specialists that know the ins and outs of it.


This is true for any field, including medicine.

Specialists in, for example, psychiatry don't need to understand how mitosis works, etc...

The same is also true in finance. People who model equity index volatility don't remember at all how to derive the equation for put-call parity.

In each of these fields there are people who study each of the fundamentals, and then there are people who do more routine "code monkey" work in a narrow area - think chiropractor or vanilla stock trade execution.

Programming is not unique.


> Specialists in, for example, psychiatry don't need to understand how mitosis works, etc...

A psychiatrist has to obtain an MD degree before they can start to study their chosen specialty. There's a reason for that: before you can treat a psychiatric illness, you have to be able to eliminate all other possible causes for the condition. I for one would not want to be treated by a psychiatrist that couldn't distinguish bipolar disorder from brain cancer.


> chiropractor

I'm not sure that this example is making the point you wanted to make. There's a reason we have Doctors/Pharmacists/Physios and don't rely on Chiropractic / Homeopathy. It's because we want to get better.


I'm a homeopathic programmer. I fix huge terrible consequential bugs by adding lots of similar small diluted bugs.


Haha. Fixing an off-by-one error by adding an off-by-10^-10.


> Yes, you need to use the correct formula for the cement you use, but figuring out that formula has already happened.

I have serious doubts about this. It seems we are constantly changing the cement formula.


We do; it doesn't change anything in OP's statement though. What's the point?


GP

> but *figuring out that formula* has already happened.

Me

> It seems we are *constantly changing* the cement formula.

You

> doesn't change anything

Really?

OP makes the classic error of assuming that A) physical engineering isn't changing and B) the computer world is analogous to the physical one.


By the time the bricklayers are there to start on the project, most of the time the choice of cement mixture is already made. For most projects, a standard cement mixture is used and a custom one isn't even needed.

When issues do arise during the project, an expert is brought in/consulted. Standard cement formulas might change over time, for varied reasons, but it's not the bricklayers who keep themselves busy with that.

The same holds true for most IT projects.


> The same hold true for most IT projects.

Definitely not my experience. Even if you, say, choose Spring Boot, the architecture and the somewhat lower-level details constantly change.


I think you’re over-estimating the difficulty of the average programming job. The simple fact is we have great frameworks to work off of and building things from scratch is a waste of time and money for most business applications. Wordpress is a great jumping off point for like 75% of businesses. If you know how to write some custom theme code I’d call you a programmer. Doesn’t mean you’ll get a job in system-level design, but you’ll be able to pull a paycheck and sustain your life (and potentially support others). What exactly is wrong with that?


I have seen plenty of bad code monkeys who had high grades, the idea that the current 'high standards' give us better programmers is unfounded.

The issue is that modifying a Wordpress theme is just as much of a job as optimizing a low-level 3D rendering pipeline or writing facial recognition software. One of these is not like the others, and the issue is that universities fail to realise this and just try to teach everything.

In my mind we would need to abolish the idea of a general programmer and move towards specialization.


> Why should this field be hold to much lower standards than medicine, physics, math, chemistry?

It isn't; a good programmer is self-evident to good peers.

And schooling isn't the only way to get there. I know plenty of academically educated CS grads who aren't great programmers, not because they didn't do well in school (I have no way of knowing, but I assume they did), but because they lacked curiosity and interest in programming.


The reason why doctors have to jump through so many hoops is that the stakes are higher for failure.

While there are times when a code failure or poorly worded instructions can cause injury to others, those are exceptions to the rule. Generally speaking, the cost of failure for writing and software is lower than it is in medicine, and it makes sense not to gatekeep these industries behind theory, and rather just let results speak for themselves.


Programming is just a specialized form of writing.

A person who writes for a regional trade magazine, a failed novelist, and Shakespeare all considered themselves writers.


My position on this has been pretty controversial when I've shared it before, but I still think it's correct:

Measuring knowledge at scale is futile, harmful, and pointless. The fact that a lot of society has been arranged around the fiction that this is a feasible endeavor does not mean it has borne out in practice, and prioritizing assessment in this way has been gradually hollowing out most forms of pedagogy of their value while building an ever-expanding series of increasingly meaningless hoops for people to jump through to get what they actually need. We have deemed it necessary to create assessments to prop up the idea that education can be easily measured and should gate meaningful life outcomes for most people. Most if not all "cheating" behavior is either just a rational, strategic response to this situation, or a disconnect between how people actually solve problems (e.g. often collaborative and laser-focused on the part of the problem that drives the outcome, in this case the assessment) and some weird cultist notion of what it means for an individual to do it "correctly".

Effective pedagogy will never scale unless we get some really AGI-like technologies (I loved The Diamond Age as a kid, but A Young Lady's Illustrated Primer is, from the perspective of extant tech, a total pie-in-the-sky fantasy, illustrative of how meaningful teaching requires individualized approaches), and we see time and time again that teacher-to-student ratios, as well as particularly good specific teachers, are overwhelmingly the drivers of even the stupid metrics we are optimizing for.

In short, this whole system is broken because its fundamental premise is flawed


What you are saying is not at all controversial, but it is incomplete, which is probably why you have received pushback in the past. Criticising the existing system is easy. Giving an alternative is harder. Implementing that alternative and showing that it's actually better on some metric is MUCH harder than that. But you have not even given an alternative!


The alternative is to treat higher learning like any other experience in life or on your CV/resume: you do it, you tell people you did it, then you either convince them that doing it imparted something useful on you or you don’t.

As someone who’s hired plenty of people, exams and grades do not help one bit with the process and you shouldn’t pay any attention to them.

The only good use of exams I see is as a potential entry gate, administered by the place you're trying to impress, to get onto a course, be considered for a job, or be given a license to do something. As exit gates they're just noise.


Let us consider the proposal of keeping higher learning the same except that we don't do exams. What would happen?

For better or worse, whether somebody completed their degree does often factor into hiring decisions, so exam grades do indirectly factor into hiring decisions. Having completed a degree signals some level of domain knowledge, conscientiousness, and intelligence. Without exams you would have a 100% success rate (unless you introduced some other assessment mechanism), so the signal would be gone; having completed a degree would only signal whether or not you were able to afford it financially.

Secondly, a large fraction of the students would lose motivation and not do anything by the second year. Many would stop doing homework, stop doing any real studying except maybe superficial reading, and many would hardly come to class. In fact, many students currently already do this until the first midterm, even though they know the midterm is coming. A lot of students need the existence of exams to motivate themselves. Not all students, but a lot. These students don't want to lose motivation and waste time; many would probably regret not learning anything for several years. Our monkey brains are not suited to motivating ourselves to do things with a >3 year time horizon. Exams are a mechanism to motivate our monkey brains to put effort into studying.

I think if we consider professions that are important to our own lives, we do recognise the necessity of assessment. Would you prefer your doctor or nurse, accountant, electrician, or for that matter, teacher of your kids, to have come from a school that doesn't do exams or from a school that does? If they are experienced, maybe it doesn't matter, but how many people are going to take a chance on a fresh graduate if they are from a program without any assessment, where the philosophy is "you do it, you tell people you did it, then you either convince them that doing it imparted something useful on you or you don’t"?

There are other ways to solve that problem, and some of them may even be better than the current system, but I'm not convinced that taking the current system and simply removing exams would work.


The evaluation isn't necessarily the problem, but I think assigning grades may be. It gamifies education and I think generally makes things worse.

I think it might be worth spreading some of the standards from medical schools to other programs. Don't assign a grade to a student; make everything pass/fail. Either you know the material or you do not. There's no honor roll or dean's list and no class rankings.


> Either you know the material or you do not.

This is simply not the case. How well the person knows the material is more than yes/no. You'd be rounding the exam result to 1 bit and throwing away the extra bits of information. Maybe it's a good thing not to show that information to the student, but I'd like to see an argument for why the advantages of hiding that information from the students outweigh the disadvantages.

As for competitiveness in education...there are advantages and disadvantages to it. I'm not convinced that class rankings are a good idea, but I'm also not convinced that the optimal target for competitiveness is zero, to the point of not showing students their grades beyond pass/fail. Anecdotally I've seen the aim for zero competitiveness have perverse effects, where students instead start to compete on how lazy they can (appear to) be while still getting a pass, to show how smart they are.


Some of the best medical schools in the world have adopted pass / fail. Either you are good enough to be a doctor or you are not. That makes a lot of sense to me.

Harvard is one example in the US and McMaster is an example in Canada. Neither place has students competing to be most lazy.


What part of "its fundamental premise is flawed" is unclear? I don't propose an alternative because I don't believe the stated goals of the system are achievable or desirable. Also, if one believes something does more harm than good, an argument to stop doing it does not require an alternative.


You don't need more of an alternative to stop digging the hole deeper (subsidizing the problem, hard-requiring degrees awarded by the problem, etc.).


Are you aware that there are grade-free and exam-free schools out there and that they have been operating for decades?

Measuring the effectiveness of school systems is difficult because of selection bias, but I'm sure you could find some attempts (e.g. PISA) if you went looking.


Exams are a bad way of learning… but a good way for a university to filter out, or slow down, people who would devalue their degrees.


> HOW do you measure knowledge? And when you decide how, how do you scale it?

When I was in school, many moons ago (in France) there were no quizzes. Zero. "Tests" were either dissertations (for topics such as literature, history, etc.) or problems. Everything was done in class, in longhand.

There were no good or bad answers, even in math class, because what was evaluated was the ability to describe the problem, the approach, and the solution, and you got points for that even if the ultimate result was wrong.

"Cheating" was very difficult; copying what another student was writing was hard and not very effective, because unless you could reproduce their whole argument, just taking a sentence or two would not make sense.

This system didn't "scale" very well; in fact it didn't scale at all.

If you build a system that lets one person "teach" classes of hundreds of students and generate quizzes that can be instantly rated by a machine, then some (most?) students are going to try to game that system.

This is inevitable, and I'm not even sure it's a bad thing.


> "Cheating" was very difficult; copying what another student was writing was hard and not very effective, because unless you could reproduce their whole argument, just taking a sentence or two would not make sense.

In high school we were often given two (or more) sets of problems so we couldn't copy off each other, because people sitting one seat away from each other had different sets.

I remember at least one test where I wrote down problems from both sets (they were verbally dictated by the teacher at the beginning of the test). Then I just solved both and passed the solution to his problems to the classmate sitting behind me (he had asked me for this ahead of time).

In Poland cheating was frowned upon by teachers, and they tried to catch the cheaters, but there were no formal systems in place to report or excessively punish cheating (as in the USA).


Yes. Although many of the students in the story weren't interested enough in learning, had "low morals" and "no honor", and some apparently were scumbags, as a group they were somewhat efficiently solving the problem of passing the class… that's not nothing!

In the real world the solutions to your problems can’t be found online, or if they can it’s valid to search them there (and lawyers will charge you a lot to do that). Collectively searching and distributing a solution is something young people are quite adept at (e.g. gaming wikis).


>> HOW do you measure knowledge? And when you decide how, how do you scale it?

This is what makes the problem intractable. Measuring knowledge takes time, lots of time, by a skilled person. That does not scale.

Since we need (want) scale we necessarily have to use (ever weaker) proxies for measurement. And if there's one thing we do know, you get exactly what you measure for.

Hence, the system is not broken - it's working exactly as intended. It's not "fixable" because there's nothing to fix (at this scale.)

Real learning happens either because a) the student is soaking up everything they possibly can using every resource offered, or b) they've left college and are fortunate enough to be in a workplace where there are more knows than know-nots, and they take every opportunity to soak it in like a sponge.

College does not prepare people for the working world (and never will). It is operating exactly as it is designed to do.


So, the Leibniz argument? Our current system for educating citizens of all ages is already the best it can be, and any change or even reflection upon it is a waste of time.


You can't educate someone who is not ready to be educated. Those that get the most out of college are those that put the most in. This was true 1000 years ago, and is true now.

Yes, this system is the best [1] because access is open to all (which it wasn't). So those that want to go, can, and those who want to learn, can.

What probably needs to change is the understanding of what college is for. It's not to give you an education, it is to give you the opportunity for you to take an education for yourself.

[1] for some definition of best. Not all schools are created equal, nor all subjects, scale is in play here as well.


I somewhat agree about it being a chance for students to take an education for themselves, but there is also the issue of an institution offering a limited view of a subject like computer science. For example, some time ago I'd estimate that mainstream OOP was taught everywhere, while there was almost no place teaching FP (this is changing slowly now). Even if you took every opportunity you had, you might not even have a teacher or lecturer who is familiar with it. You could only learn on your own, which you would not need that institution for. Teaching quality is not the same in all places. Teachers and lecturers are not the same everywhere.


Indeed not all schools, and not all subjects, are created equally. And your education is not limited to the specific subjects, or competencies of the school you happen to be at.


> any change or even reflection upon it is a waste of time.

That’s a bit extreme; I interpreted their view as, it’s hard to fix because of intractable issues, but it doesn’t mean we can’t have marginal improvements. Radical upheavals and revamps are sketchy.


In the case of testing, it very much can scale. Tests need to be based on long form questions that test comprehensive knowledge. Open book, Open notes, and hell even open-collaboration up to some limit.

If a test is already graded on partial credit, which in the field of engineering at least most are, then it's no harder to grade than an equivalent test that has fewer but longer questions.

This obviously doesn't translate for multiple choice tests where there is no partial credit but at least in engineering those don't really exist outside of first year and maybe one or two second year classes. And honestly, every intuition tells me that those classes that I remembered doing no-partial-credit multiple choice should not be doing so in the first place.

Maths classes like algebra, precalc, calculus, statistics, and linear algebra should by no means be using no-partial-credit exams. That defeats the entire purpose of the classes as those classes are to teach techniques rather than any particular raw knowledge.

Same for the introductory hard sciences like chemistry and physics.

And for the ability to handle those more "bespoke" exams, we really need to be asking the question of why certain students are taking certain classes. Many programs have you take a class knowing that only maybe 30% of that will be relevant to your degree.

Instead of funneling all the students through a standard "maths education" class, maybe courses would be better served by offering an "X degree's maths 1-3", or even simply breaking up maths classes into smaller semesters where you are scheduled to go to teacher X for this specific field up to week A, then teacher Y for this other unrelated maths field up to week B, and teacher Z until the end of the semester. In-major classes need not do this, but general pre-req classes could benefit by being shortened and split up through the semester into succinct fields of knowledge, so that maths or physics departments aren't being unnecessarily burdened by students who will never once apply the knowledge possibly learned in that class.

-------------

The solution to testing students in a way that they can't cheat is to simply design tests that require students to apply their knowledge as if in the real world. No artificial handicaps and at most checks should be made for obviously plagiarized solutions. If that's not a viable testing mechanism, it's probably worth asking why and considering reworking the course or program.

The solution to students not wanting to absorb knowledge is to stop forcing students to learn topics & techniques they'll never use because maybe some X<25% of them will. Instead split up courses into smaller chunks that can be picked and chosen when building degree tracks.

---------------

Edit: I forgot to include it but this is largely based on my experiences not necessarily just on my own as a student but as a tutor for countless peers and juniors during my time at university, and as a student academics officer directly responsible for monitoring and supporting the academic success of ~300 students for an organisation I was part of. This largely mirrors discussions I've had with teaching staff and it always seems to boil down to "the administration isn't willing to support this" or some other reason based on misplaced incentives at an administrative and operational level (such as researchers being forced to teach courses and refusing to do anything above or often even just at the bare minimum for the courses they are teaching).


> Tests need to be based on long form questions that test comprehensive knowledge. Open book, Open notes, and hell even open-collaboration up to some limit.

Coursework is already along these lines, no?

> The solution to testing students in a way that they can't cheat is to simply design tests that require students to apply their knowledge as if in the real world.

How would this apply to a course in real analysis, say?

University education generally isn't intended to be vocational.


> Coursework is already along these lines, no?

It is, but exams are not, and if the intent of exams is to test knowledge, they should be in a format that is applicable to the real world and one that can't easily be cheated on. Also, for what it is worth, for essentially all of the courses I took in university, unless they were explicitly project-based classes, exams were the overwhelming majority of the grade in the course (often ~75-90%).

What this meant in practice was that exams that were closed-book, closed-notes often had averages in the 30s or 40s, where everyone got curved upwards at the end of the day, while open-book exams had averages in the 60s-80s, and students who could apply their knowledge passed the exam while students who couldn't didn't. I can't recall a single course with the latter style of exams where I passed without knowing the material or failed while knowing it. With the former, however, I personally experienced both and witnessed numerous other students go through the same.

> How would this apply to a course in real analysis, say?

Sorry if I wasn't clear but when I said "as if in the real world" I was referring specifically to students having access to the same resources they would have in the real world (aka reasonably flexible time constraints and with access to texts, resources, and tools) not necessarily that the questions needed to be structured as "in your field you'd use this like this" kind of questions.


Tests are a complete waste of time. Nothing in the world of work truly resembles what a test is so why bother testing for it.

Students should be graded on projects and ability to do real things not memorization of a pre-determined set of facts.


Unit testing is also frequently very artificial and disconnected from production use of a codebase. Nevertheless, there is a great deal of value in checking whether things you wrote actually do have the effects you intended.
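To make that concrete, a toy example (Python; the function and the inputs are made up for illustration): the inputs below are as artificial as any exam question, yet they still pin down whether the code does what was intended.

  def median(values):
      # Return the median of a non-empty list of numbers.
      ordered = sorted(values)
      mid = len(ordered) // 2
      if len(ordered) % 2:
          return ordered[mid]
      return (ordered[mid - 1] + ordered[mid]) / 2

  # Nobody calls median([3]) in production, but the checks still catch
  # regressions in the behaviour we actually care about.
  assert median([3]) == 3
  assert median([7, 1, 5]) == 5
  assert median([1, 2, 3, 4]) == 2.5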


Can you expand on why splitting up the class would help? I didn't quite get that part.


> Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount

I have some thoughts about the education system, and despite not being a teacher or academic I like to believe that my opinions have some value because I'm an expert programmer that has worked in the field for over 50 years. I attended three different major universities and have degrees in Math, EE, and CS. I still code almost every day (my Emacs configuration is never finished!), and I have in the past taught or been a teaching assistant for both undergrad and graduate courses for four semesters. Cheating has always been a concern, but now things are different.

The original article highlights the scale of exam cheating during the pandemic, but for us, the readers of HN, there is another problem with university learning that happens because of the internet. I've seen this affect virtually all of my younger friends pursuing degrees in CS. Programming assignments in school are unrealistically difficult, and it causes everyone to cheat.

Here's a typical real-life example: after covering doubly linked lists in the undergrad data-structures class, the programming assignment is to write a GUI-based text editor in Java using doubly linked lists. This isn't especially hard for a professional programmer, but this is the first programming assignment of the course. Students had to wrestle with Eclipse, learn the AWT/Swing interfaces and event-loop programming, and figure out how to translate low-level pointer-based data structures into non-idiomatic Java imitations of linked lists that kind of simulated using pointers. Most of the students really couldn't do this on their own, but they didn't have to, because they could find the solutions to this very problem right on the internet.

Why would professors give such a program to beginning programmers to write? Because every student turns in a solution, and this causes the professors to lose touch with how difficult their assignments are. Over and over again difficult assignments are given, but the students are seemingly keeping up. The bomb lab assignment is a great assignment for CS students[1], but I've seen it given out with far too few attempts allowed to solve it. Again, professors feel like a small number of attempts is all the students should need, because they keep turning in the answers. The reason they can is that the complete solution is available on dozens of public Github repos and web sites.

The consequence of such hard and challenging programming assignments is a kind of inflation of the difficulty. The high difficulty causes students to cheat more, since their fellow students are cheating by downloading, cutting and pasting, or simply sharing their programs. There are commercial web-sites like chegg.com that sell the solutions to virtually every homework problem found in CS textbooks. Why should an undergrad spend so much time working on their own homework solutions while other students work openly in big teams at tables in the university library?

This kind of cheating is pervasive at the undergrad level. How do we prevent our students from being pushed into cheating to keep up? Graduate school is different; the classes are smaller and more interactive. In my grad school classes I've often had to go to the board to demonstrate my code or proof to the class. Professor Dijkstra used to give individual oral exams to his students. So small interactive classes would help.

I've also seen assembly language programming classes given that require all work to be done on lab computers. The lab computers weren't on the internet and students had to sign in with the lab proctor to use the machines for their assignments. This at least helped some with the problem.

If I was teaching a programming class now, I would require everyone to maintain a git repo that could be checked for realistic commits of the programs as they are written. This might discourage the simple copying of a solution from GitHub the day before the assignment was due.
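As a rough sketch of what such a check could look like (Python shelling out to the git CLI; the repo path, thresholds, and function names here are assumptions for illustration, not a real grading policy, and commit timestamps can of course be forged):

  import subprocess
  from datetime import datetime, timedelta

  def commit_times(repo_path):
      # Author timestamps for every commit, oldest first.
      out = subprocess.run(
          ["git", "-C", repo_path, "log", "--reverse", "--format=%at"],
          capture_output=True, text=True, check=True,
      ).stdout.split()
      return [datetime.fromtimestamp(int(t)) for t in out]

  def looks_like_last_minute_dump(repo_path, min_commits=5,
                                  min_span=timedelta(days=3)):
      # Flag repos whose whole history was created in one sitting
      # shortly before the deadline.
      times = commit_times(repo_path)
      if len(times) < min_commits:
          return True
      return (times[-1] - times[0]) < min_span

  print(looks_like_last_minute_dump("./student-assignment"))

It wouldn't stop a determined cheater, but it raises the effort needed to pass off a copied solution as several weeks of work.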

[1] The text for the bomb-lab assignment (highly recommended, by the way): Computer Systems: A Programmer's Perspective, 3rd edition, Bryant and O'Hallaron, Prentice-Hall, 2016 (ISBN: 0-13-409266-X); a Google search will return many bomb-lab assignments and solutions from colleges all over the world.


> Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.

Finland topped the PISA rankings for many years, because we 1) listen to teachers and have good academic pedagogical research, and 2) teachers are highly educated and reasonably well paid, meaning that the job is attractive to competent people.

Then politicians started to think big and read all the hype papers from think tanks about digitization and how the young are digital natives. Let's give them computers and they learn by themselves and ... we started slipping. Still OK, but slipping. It turns out that computers are not magic. Having all the information accessible is not a pedagogical solution.

ps. Chinese studied Finnish school system and imported some of the best policies in Shanghai and it worked. Some lessons work across widely different cultures.


As a Norwegian who lived in Finland for a year, it struck me that the parents I met actually CARED what the kids learn in school, instead of just treating school as daycare for older kids.

This, combined with the possibility for good teachers to gain respect in their communities, is what makes Finnish schools a more effective learning environment than Norwegian schools, I think. Not salaries, some specific methodology, etc.


> do not have a good understanding of how exactly this system should be fixed, and that it’s not broken for fun but because there are some very difficult unresolved issues.

Because it conflates two things with conflicting incentives. This could and should resolve nicely.

1. Spreading knowledge

2. Certifying competence

To get, e.g., an RHCE, you may or may not attend the course. You may get the materials elsewhere and study from them, you might get tutoring from someone else who attended the course, or you may have enough experience from your day job. This is knowledge acquisition.

Then you attend the certification exam and either succeed or fail.

If you fail, you go back to knowledge acquisition. Decide to pay for the course this time. Get tutoring. Read the materials again. Maybe retry right away because you were just stressed and disoriented. Then you succeed.

Compare this with college. Fail a couple of examinations? Too bad, you are booted. Want to try again? Repeat up to two years! This is absolutely insane! No surprise people are cheating their way through!

Decouple knowledge acquisition from competence certification. Managed to reach the end of the math track but failed physics? No problem! Certify the math competence and let them study physics some more! Got enough certifications to warrant a title? Cool, give them the title!

Make it possible for people to step away for a couple of years and then come back to earn some more certifications and even the title when they actually need and want to learn those skills.

Make it possible to study 1/3 of your time for 15 years. Maybe people would stay in the learning mode longer. Unlike many doctors who are hopelessly behind the times. Make it possible to study with kids or sick parents to take care of. Make it a part of the adult culture.

Not something people had to suffer through in their youth to earn their place in the world.


This is it. To expand on further on why it's so crazy to couple education and credentialing - already know all the material in a class? Too bad, you have to pay for it and spend time taking it anyway. Is it a class that's completely unconnected to your field? Too bad, the university is making you take it, so you take it. Is the class taught poorly, so that you need to teach yourself outside of it? Too bad, you still have to pay for it and put time into it, in addition to actually teaching yourself the subject.

The education is the major chunk of time and cost, but the credentialing is what most people are trying to get. By forcing people to buy them together, you can make people pay a lot (in terms of both time and money) for an education they find little worth in just because it's the only way to get the credentials.


So true. And another benefit would be that domain experts giving a course could focus on teaching and sharing their knowledge instead of being forced to deal with all the organisational fluff around final grading and "catching cheaters", which is a giant waste of their time. (I only see usefulness in grading as a feedback mechanism for students, not as "certification" of a student's knowledge for the outside world. I also believe it would be healthier for both students and teachers if the grades were just a guidance tool, not something that will affect your future prospects in life.)

At the end of the day, final grades from school / college depend on so many factors that the signal is close to noise anyway, but in college they often feel somehow more important than the actual learning, and so much time and stress is spent on them.

In a better world I imagine it would be the organisations that need specific knowledge co-sponsoring "exam centers", separate from colleges, where you could go and get a certificate saying how well you know a given subject. Private companies that want to hire the best people actually have a good incentive to make these exams as fair and useful as possible.

To make an analogy with GANs in deep learning: the college would act as the generative part and the "exam center" would be the discriminative part. It seems to work pretty well in ML, maybe it would work in education too? :D


I've thought engineering licensure found a reasonable balance.

Everyone has to pass the two certifying exams for their discipline, but there are multiple paths for assuming somebody has acquired the knowledge for the exam, ranging from years of industry work to passing standardized tests to having a college engineering degree.


Well formulated.

It seems to me that a lot of problems in the real world can be traced back to unnecessary dependencies (in this case, having to attend college in order to get certified).


>These are very hard questions, and it’s frustrating to read the phrase “we need to fix the system” because yes, obviously we do, but agreeing that things are bad isn’t the hard part, and probably input from people who have never worked in the field is of pretty limited value in how to resolve the hard part, and will not do much more than annoy teachers even more.

I don't agree that the system is broken (broken to me is something that is completely unusable, and we must stop using immediately). The progress we've made as a global civilization is to be credited to the way human knowledge is captured, distributed and taught by us as a species. And certainly formal schooling is a big part of that. And so, I'd rather view the situation as us being on a path of continuous improvement - where everything, including education can be improved.


My opinion is that the educational system in most industrialized countries today rewards the wrong things and that the quality of education suffers to allow an easier time of mass-grading and classifying.

Whether this means it’s “broken” or not is of course completely subjective because it depends on what you think the educational system should be doing in the first place.


>completely subjective because it depends on what you think the educational system should be doing in the first place.

Yup, you nailed it. That is the crux of the argument. I think it also leads into the meta discussion of what it means to be a "productive member of society" and how education fits into that philosophy. Why should one be forced by society at-large to be educated, or productive, or anything at all? :)


> Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.

No, we should listen to the people when deciding what the purpose of school should be, THEN refer to the experts on those purposes. Is it teaching random factoids? Making people "cultured"? Separating out people who follow instructions and learn well from those who don't? Introducing habits useful in a workplace? Good habits of thought? Teaching the knowledge required to vote sensibly? To provide some foundational knowledge for later vocational training? To navigate and function in the modern world? Is it just day care for kids?

First decide on the purpose(s) (and their weightings if many) and only then can we have a plan. I think there hasn't been anywhere near enough thought given by most people about what the purpose(s) of schooling is(are).


> So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.

Well, yes, but at some point we look at the system, see human beings spending over a decade of very precious years doing just that, and not really getting over a decade's worth of benefits in return.

If we just want to incrementally improve things then definitely we should let specialists have the most weight. But listening to educators will absolutely never lead to major reforms or (god forbid!) reducing the years spent in the system.


Students are just as much a part of the system as teachers, so I don't think this elitism about who can have an opinion is helpful.

I think there are a few constructive things that can be done. One is allowing curious students to design their own academic career (with guidance and supervision). I think students usually cheat because they think the course work is irrelevant to their future lives. Sure people need to be exposed to new things, but a semester on something you know you will never care about is torture. I have a computer science degree, but I remember being forced to take geology. To this day I can't think of a bigger waste of time, I remember nothing from it, and even if I did I would never use it.

Vocational schools and apprenticeship should also really come back. I know parents want their kids to be part of the affluent elite, but in a good society being a car mechanic should be a good life. There's no point saddling people with student debt if their degree gets them a job at starbucks.

I also think that things like essays are a lot better than quizzes. Sans plagiarism, it's hard to fake knowledge if you have to write it out.


> Students are just as much a part of the system as teachers, so I don't think this elitism about who can have an opinion is helpful.

Of course everyone can have an opinion! But are these truly likely to contribute to solving the problem? Of course not. Some people are more likely to have the experience and skills to comprehend and advocate for better solutions.

Yes, I tend to support democratic forms of government over others. However, I'm under no illusion that democracy's broad, sweeping claims about which government is "best" are really defensible when applied to the general problem of collective problem solving under real-world constraints.

Having one person, one vote seems intuitive and valuable for certain decisions. In particular, it seems useful and practical for selecting certain representatives. But I (and many others) don't think it is a great way of making policy decisions in general. Just as one example: committees of experts can make sense in some contexts.

But in general, we can do better than what most of us have seen so far. We have to do better than that. Look at how well government(s) at all levels are serving their constituents. I think it is self-evident that all can stand tremendous improvement.

So, for any particular context, think about how to design mechanisms that are likely to work well. In so doing, one must account for many factors, including: human biases, cognitive limitations, cultural differences, imperfect communication, economic costs, time constraints, factions, self-interest, lack of experience, and so on.

Keeping these in mind, how exactly would you select, organize, and structure an ongoing set of interactions between, say 1,000 people such that one can maximize the quality of their resulting collective recommendations?

One option is to choose 1,000 people at random and weight their opinions evenly. But this is underspecified. How do you compress those recommendations into a form that others are likely to read? How do you discover collective preferences? There are dozens of key questions even if you generally adhere to the idea of "equally weighting each person's opinion".

But there are manifold other options where each individual's starting opinion is not the driving factor.

I encourage everyone here to study political economy, history, philosophy, and anthropology. Disregard your preconceived framings of how people make decisions. Look at how others have done it. Look at what theorists suggest might be alternatives. It is an amazing journey. I've been thinking about it for almost twenty years, and it is just as fascinating, if not more, as when I first got exposed to these ideas in policy school.


Well, in the private world it's customer feedback. Sure, you don't necessarily use their ideas, but if, as in this case, 75% of your customers find your product bad, then expecting people already part of the system to make radical changes sounds foolish. They'll just list why it can't change. I admire this teacher's cleverness in getting his students to pass, but the reality is that it was so big a problem that the system can only be vastly broken, and I don't expect people too involved to fix it.


I can't see how this addresses the main problem mentioned in the article. My take is that if a student cheats, they should be expelled from university. If you're very lenient, introduce three strikes. The first two strikes will nullify your course as if it hadn't been taken, the third will get you expelled from university. I personally think that would be too lenient, though, and believe that nobody who cheats in any way should have a place in academia. This question has nothing to do with the quality of teaching or problems with tests, etc. It's a matter of intellectual integrity.

When I first heard, not too long ago, that students often cheat in higher education, I was shocked. When I studied during the 90s at two good German universities, I had never heard of anyone who cheated in any course. A cheating student would have been a huge scandal. To be fair, I studied philosophy and general linguistics. I guess people in more practical disciplines cheated even then, e.g. economics -- more specifically, "BWL" in Germany -- always had a bad reputation. However, even in these disciplines cheating was rare. It's incomprehensible to me why lecturers and universities nowadays appear to be so lenient about it.


The AP classes in American high school, which include a test that can provide college credits if passed, were great in my opinion. Mostly because I felt the tests were really good. I took 11 of these tests and I learned a ton that has been relevant and stuck with me ever since. In particular statistics, comp sci, and Spanish seemed really good.

Spanish was a hard test. It involved listening to pre recorded conversations and giving responses.

For comp sci I didn't take the class; I just self-studied for the test. It was my first exposure to comp sci and only an intro to object-oriented code. The test made you utilize an API for a little toy problem. That was very good in retrospect. I didn't really grok APIs until that exact moment on the test. 12 years later, fiddling around with game engines, object-oriented concepts still seem familiar.

I think the two things that made these exams good is they were very broad so you needed to have mastered the whole course, and they were not designed by a teacher incentivized to give good grades, so they were pretty hard and didn’t advertise exactly what would be tested.

Not needing 90%+ to do very well on the test was good too. So much of school is avoiding tiny mistakes on otherwise easy content to get a perfect score, not broadly mastering the concepts.

Some neighbor schools offered AP classes but it was culturally accepted that students would not get high scores on the exams. Struck me as pretty pathetic. That was a rich kid private school doing worse than my (admittedly fairly wealthy) public school experience.


> do not have a good understanding of how exactly this system should be fixed, and that it’s not broken for fun but because there are some very difficult unresolved issues

I think the reason for the feeling of competence that prompts so many people to share their opinions on the matter is that nearly all of us went through this broken system at some point in our lives, and our future lives literally depended not so much on what was taught as on what was written down as the result of the teaching.


> So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.

Yes, because they've proved they know things by passing through the system and getting good grades. Oh wait...

Sorry for the joke, but seriously, you can't expect us civilians to shut up. Leaving education to educators is as pernicious as leaving law just to lawyers, or journalism just to journalists. In all cases, the outcomes are everyone's business, and because there are real conflicts of interest here (and not just disagreement on facts) it can't just be delegated to experts. Even calling for it will make people rightly suspicious of your agenda.

Unlike with law or journalism though, pretty much everyone has A LOT of experience with the educational system in practice, by being on the receiving end of it for 12+ years. There's a challenge with sharing our experiences in a fruitful way and not just shouting over each other, sure, but suck it up: we have opinions about what education should be and can be, and we won't shut up and leave it to you.


FWIW the teachers lack perspective. I've dated several teachers and listened to their side about why they teach the way they do. I would propose simple solutions, like a continuous improvement cycle, and educational experiments conducted at random by regular teachers, then reproduced and cross referenced to build new models. They had never considered these ideas before.

When you live your life in a rigorously controlled institution, you only consider what the institution echoes. Outside-the-box thinking is possible, but it's the exception. You need outsider ideas and collaboration.

Politics will never solve these problems. It has to be grassroots and volunteer driven.


> I would propose simple solutions, like a continuous improvement cycle, and educational experiments conducted at random by regular teachers, then reproduced and cross referenced to build new models. They had never considered these ideas before.

I've never met a teacher who didn't do those things; I have met many who wouldn't phrase it like that. Just because they're not using the same terminology as you doesn't mean it isn't happening.

It's very easy to look at a system from the outside and think that they're missing the obvious [1]; things become more complex the more you understand them.

[1] https://xkcd.com/793/


And then us engineers come in and fix education just like we did taxis (regulatory arbitrage, offloading costs to the ordinary workers in surprising ways they aren’t aware of, increasing traffic congestion throughout cities, but hooking people onto the rides with unsustainable loss making introductory prices long enough that alternatives such as regular cabs and public transport become worse).

Or the way we fixed productivity in ways that have led to no measurable increase in productivity despite nearly everyone having the most powerful device ever invented in the palm of their hands.

Or the way we fixed housing through regulatory arbitrage, once again, converting housing for residents into short term rentals for vacationers, making housing for residents more expensive globally and making their communities worse.

Or the way we fixed cable by going from bundled cable packages where we have to pay $70-$100 to get all our channels, to unbundled walled gardens where we have to pay $70-$100 to get a fraction of content plus we also have to pay internet fees in addition.

Or the way we fixed messaging and phone calls by taking something like a $1/yr WhatsApp membership that offered safe encrypted chat and converting it into a data harvesting machine.

Or the way we fixed stock investing by gamifying it, bringing a lot of people into active trading who have no business being in active trading and should just park their money in Fidelity, and then promising “free” trades by allowing big banks to trade against them, leading to a massive wealth transfer from naive individuals to sophisticated banks at the best of times.

Or how we destroyed inflation through BTC.

Or saved art through NFTs.

The list just goes on.


>>> HOW do you measure knowledge?

The teachers I've known don't really care about measuring knowledge. They're looking for a reasonable way to motivate engagement with the class that's not too disruptive of the overall flow of the course. One professor told me, "A student who has made an effort to work through the homework problems a couple times should be able to easily get a B on the exam."

Testing also acknowledges that you're competing for your students' attention, and if you give no assessments, your students will rationally focus all of their effort on the courses that do. Preparation for the test becomes a reasonable measure, not of your knowledge, but of how much effort you need to apply to a course. Since students have been taking exams for years, each student knows how to calibrate their own level of effort.

As a student, after some trial and error, I developed a pretty good routine for getting A's in the two kinds of classes I was taking: Those that were dominated by solving problems and proofs, such as math and physics, and those that were based primarily on written assignments, such as art history.


I can tell you that almost every teacher who is not burnt out does care about how we measure knowledge, mainly because they have to. The big difficulty is that there are two roles for teachers. On one hand you are a mentor, supposed to impart knowledge onto your students (the teaching part). On the other hand you are a gatekeeper, supposed to check that the thresholds for some qualification are met. Now if we had an ideal way to measure knowledge those two roles would not really be in conflict with each other, but because we don't, teachers have the difficult job of trying to teach a subject and at the same time find a good way to see if the students actually learnt what they were supposed to. All that within the limited amount of time available.


What do they do with the information? The threshold in most courses is to be able to pass the next course. The students who won't do that tend to drop out, or switch to an easier major, of which there are many.

Teachers do tend to change their content and methods if a large number of students are failing exams, but I think it's based more on a hunch than on hoping that test scores will yield analytical-quality data. This is the sense I get from talking to a lot of teachers. My only teaching experience was one semester at a Big Ten university, a long time ago.


You make it more important to be eventually right than initially right.

Allow tests to be continuously regraded as the things students get wrong are corrected.

Automation would go a long way towards making that more feasible (i.e. easier for a multiple choice test than a written one).
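As a minimal sketch of that automation, assuming a multiple-choice test whose responses stay on file and a made-up partial-credit rule, regrading could just mean recomputing scores whenever a student files a correction:

    # Hypothetical sketch: keep responses on file and regrade as students correct answers.
    ANSWER_KEY = {"q1": "B", "q2": "D", "q3": "A"}

    responses = {
        "alice": {"q1": "B", "q2": "C", "q3": "A"},
        "bob":   {"q1": "A", "q2": "D", "q3": "A"},
    }

    # Later corrections: (student, question, new answer, short justification).
    corrections = [
        ("alice", "q2", "D", "misread the axis labels; reworked the problem"),
    ]

    def initial_score(answers):
        """Fraction of questions matching the key on the first attempt."""
        return sum(answers.get(q) == a for q, a in ANSWER_KEY.items()) / len(ANSWER_KEY)

    def regrade(responses, corrections, late_credit=0.5):
        """Give partial credit for answers a student later gets right."""
        final = {}
        for student, answers in responses.items():
            score = initial_score(answers)
            for who, q, new, _why in corrections:
                if who == student and ANSWER_KEY.get(q) == new and answers.get(q) != new:
                    score += late_credit / len(ANSWER_KEY)
            final[student] = round(score, 2)
        return final

    print(regrade(responses, corrections))  # {'alice': 0.83, 'bob': 0.67}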

But the emphasis on being right initially as the only thing that matters is unhealthy, and certainly in part what leads to the majority of people doubling down on confirmation bias rather than admitting being wrong and learning/incorporating the knowledge for the future.

Yes, there are practical issues with improving the system. But I've had a few select teachers who had that policy in some form years ago, and it was often the best teachers that did. We'd benefit from widespread adoption of something similar, and it might lower the incentive to cheat in order to be right the first time, since to the kids being brought up in these systems, and reflecting these systems, that's the only thing that matters.


> HOW do you measure knowledge?

This is not the issue, this is the root cause of the issue.

You DON'T measure knowledge.

You should measure the satisfaction of the students.

Because the most valuable asset a developed country needs to protect is the will of the members of their society to keep improving and learning.

> maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.

pity that academics and teachers often disagree and, most of all, that schools are public and paid for by people's taxes in many developed countries in the world, so people have a right to a say.

Teachers are not doctors. Doctors practice medicine; teachers do not operate in such a stressful environment. They "educate" young people, and it is often the case that this means they impose or suggest their opinions (because they can, nothing prevents them) and families see that kind of "education" as unfit for their kids.

And they have every right in the world to be listened to, even if they are technically wrong or I disagree with them (I completely disagree on catholic schools for example).

The experts are there to find a solution to their problems, not to build hypothetical perfect solutions in a void.

Also: teachers are there because students are forced to go to school, so they serve, they do not lead. In my country (and practically all other countries in Europe) they are like bus drivers: they are fulfilling an obligation required by state law, under the state government, but also offering a service that the people have paid the state for.

Maybe instead of listening to "our" teachers and academics, we should look at places where the system is proven to work and copy it: see Finland.

CONTROVERSIAL

On a last note, there's a topic I believe is the most important, one that will quite certainly cause uproar.

If your youngest students die in school, shot by someone just a bit older than them, the society you live in has failed in every possible way.

The fact that the system is broken is a joke compared to that.


> Maybe instead of listening to "our" teachers and academics, we should look at places where the system is proven to work and copy it: see Finland.

I did an education degree, and come from a family of educators. Every educator and academic I've talked to (I can't remember an exception) wanted our system to be more like Finland's. The people pushing back against changes in that direction were not teachers, but politicians, parents and high-up administrators.

>Teachers are not doctors

Indeed. And you wouldn't tell a doctor how to do their job, even if you had spent years as a patient. People in the education system have opinions that are informed by years of experience in the field and decades of research. With respect, I'm thinking you are an example of the type of person described by the comment you're replying to: not much experience inside the system but confident in your opinion of how to fix it.

> schools are public and payed by people's taxes in many developed countries in the World, so people have a right to say.

My country, and yours too I think, pays for health care with taxes along with education. Again, does that mean you and I get to tell a doctor how to do their job?

> teachers are there because students are forced to go to school, so they serve, they do not lead

Teachers existed long before mandatory attendance laws. Also, what point are you trying to make with this statement? That because they are necessary by law, their professional opinion is negligible?


> And you wouldn't tell a doctor how to do their job, even if you had spent years as a patient

That's not a fair comparison: doctors ask patients how they feel all the time.

If they don't feel well, they try to adjust the prescription and/or the therapy.

Don't like the word satisfaction?

Replace it with "listen to feedback from your customers".

Maybe it's gonna ring a bell.

disclaimer: most of my family members work or have worked in health care.

> That because they are necessary by law, their professional opinion is negligible?

The point is that they do not run the education system, they are employees of the education system

And should act like that.

Meaning that they already have proper channels in place to file their complaints, including protected rights like the right to go on strike.

If they want to govern it, they had better step up their careers and try politics or managerial roles.

So that teachers can complain about them, their choices and how much of a better job they would do if they were in their position.


> doctors ask patient how do they feel all the time.

And teachers _constantly_ monitor how their students are doing - "feedback from their customers" if you want to put it that way. Talking with them during or after class to see how things are going, assessments on homework, projects and tests, parent-teacher interviews, individual learning plans, collaborations between teachers... I would venture to say "making sure a student is doing well" takes up most of the time of the job.

A patient complaining to their doctor about some treatment not working is not telling the doctor how to do their job. Your first comment made a claim about how student assessments should be done. This is something at the heart of pedagogy and has been studied and experimented on. The analogy to health care would be like if you declared the ways in which doctors should screen for cancer. Nobody without medical training would ever think to make such a claim, but many people seem quite confident in making similar claims about how the education system should work, as you did in your first comment.

I'm not even making an argument about whether you are right or wrong. There are many ways in which assessments can change for the better (educators would be the first to agree with you there). But to then go on and say "you shouldn't measure knowledge, only student satisfaction" without really showing an understanding of how knowledge or satisfaction are currently - or potentially could be - assessed... are you up to date with recent literature on these concepts? Do you have experience performing these kinds of assessments?

I'm still not sure what point you are making with your second section in this comment so I won't try to respond.


> And teachers _constantly_ monitor how their students are doing

They actually don't do it, not constantly, nor as a way of improving teaching.

They monitor their output, but rarely listen to what the students have to say.

In the end it's not their job; their job is to teach what they are told to teach, and they rarely go out on a limb for their students.

Because their salary does not depend on it.

But as a personal story: I always had a conflict with my Italian teacher in high school. I've always been an A student, even after high school, but she hated my temper, so I was always graded C (I believe that means "sufficient" in some parts of the world; for us it's a 6 on a scale from 3 to 10). When, on our latest test, I submitted the assignment of her favorite student and she submitted mine, she was graded 9 and I was graded 6, again!

I won't tell you what color her face was when we told her the truth.

You go through all of this no matter what; there's always going to be some bad teacher and you can't do anything about it.

And since the mentality is "don't tell a professional how to do their job" it's always the student's fault.

That's why I think students should be asked if they are satisfied with the teaching: not with their grades or with how much fun they are having, but with the people teaching them.

> I would venture to say "making sure a student is doing well" takes up most of the time of the job.

I'm glad it was like that for you.

It isn't so common in places I know.

In my country high school teachers spend 12 hours a week in school by contract, 15 at most, and that is when students need them the most.

Let's start by making them work 30 hours a week; it's one of the few jobs left where presence is fundamental. Yet we still keep treating teachers like those poor souls who have to grade a bunch of four-page written tests, as if computers had not been invented yet. It usually takes them weeks.

> A patient complaining to their doctor about some treatment not working is not telling the doctor how to do their job

I think I have not been clear: they are not complaining, they are being asked questions, and depending on their answers the doctor can (and should) understand whether the treatment is working as intended and not causing too many adverse effects.

So the analogy in education should be something like "What's your favorite Renaissance author, and why?", not "What's the date that changed the life of Machiavelli forever?" (a real question from a real questionnaire).

There is no talking to them; nobody grades them for profoundly liking horror movies and writing beautiful essays about them, because it's not "part of the teaching program".

> The analogy to health care would be like if you declared the ways in which doctors should screen for cancer. Nobody without medical training would ever think to make such a claim, but many people seem quite confident in making similar claims about how the education system should work, as you did in your first comment.

I haven't said anything of the sort.

I am simply saying that if half of the class is getting bad grades in maths you should blame the teacher, not the students.

But bad teachers are allowed to teach anyway, because they are not held responsible for their bad teaching.

At least in my country they can't be fired even if they are literally doing nothing.

> without really showing an understanding of how knowledge or satisfaction are currently - or potentially could be - assessed... are you up to date with recent literature on these concepts? Do you have experience performing these kinds of assessments?

I have a few ideas.

For example, monitor which subjects show the worst grades or the highest rates of absence from school on the day of a test.

These are all basic symptoms of fear and anxiety.

It doesn't take a Nobel prize to understand basic human emotions.

Let's try to understand why: the subject could be really hard, or the students really stupid, or it could be the teacher. Anyway, being stressed by school is not something that motivates students.

You could simply ask them to grade their teachers anonymously a couple of times a year.

Internet forums are full of cries for help from students not understanding why they are asked such silly questions and what the point is.

We could monitor those forums, for example...

Unsurprisingly when these kinds of discussions come up, unions complain and go on strike.

And I am all in favor of unions, I have been a union delegate in companies I've worked for, but school unions are a corporation unto themselves, at least here.

I've talked to some of them; of course what I'll say is anecdotal, I don't pretend to know every one of them. But when asked why they don't want fewer teachers paid better, instead of a lot of teachers badly paid who don't do anything important for __education__, they told me blatantly that they prefer two jobs at current wages to one job paid double. They can spin it as a victory. They also told me that if newer teachers are paid better, old-timers are going to complain and start asking for the same pay (it's kind of impossible here to pay two people different wages for the same job, especially in the public sector), and that better salaries would encourage more prepared teachers to start teaching, which would look bad for the rest of them.

That's the state of our education system. I hope it is different in other countries, but according to my friends living all over Europe it's kind of the same everywhere, especially during the COVID crisis, when families were left to solve problems schools would not solve, because they couldn't get teachers to get vaccinated or to go to school.

Except, of course, for a few exceptions, that I already mentioned.

But, back on topic, if people studying the subject have no idea, well, that's a problem, don't you agree?

If we wanna keep grading people and "judge a fish by its ability to climb a tree", I think pedagogy is not doing a great service to future generations.

Let's not forget that teachers quote pedagogists when it favors them, but when it goes against their interest they criticize them, saying that "they are talking from their ivory towers, they don't know what it's like being a 'street' teacher".

Not all is lost or grim; teachers still fight against school commoditization, they still fight against schools as furnaces that generate young workers/consumers, but there's still a lot of conservatism disguised as idealism.

> Do you have experience performing these kinds of assessments?

As a matter of fact I do.

I wanted to be a teacher, I was discouraged by how limited the space for new ideas was.

In my family, which is very big as I've said, there are teachers.

All of them keep doing it because it's a safe job and the salary is guaranteed; none of them is satisfied with the work they are doing, and they would gladly do something else if they had the opportunity.

They all feel like they are doing nothing substantial to help the students, and that the students know it, but going against the status quo would cost them too much. They tried, they've been burnt, they gave up.

So to get rid of the guilt they give everyone good grades; at least they are not unpopular.


That is a lot to sort through and I'll try to pick out the points that are relevant to the discussion we started.

> I haven't said anything of the sorts. I am simply saying that if half of the class is getting bad grades in maths you should blame the teacher, not the students.

Yes, this is exactly what you said:

> "This is not the issue, this is the root cause of the issue. You DON'T measure knowledge. You should measure the satisfaction of the students."

You quite explicitly made a claim about how teachers should assess students. Then I suggested that maybe you should take a step back and question whether you are qualified to make such claims. Now, it seems like you've doubled down, and written a diatribe which superficially touches on a half dozen issues in education. I'm simply pointing out this irony: that the commenter you first replied to was lamenting how so many people outside the field of education feel qualified to make claims about pedagogy. Even if their expertise is limited to, for example:

> I wanted to be a teacher... In my family... there are teachers

Are you aware of the Dunning-Kruger effect?


This speaks to the heart of the issue to me.

Personally I always loved STEM topics, and would go out of my way to learn about them. This ended poorly for me in school, as I ended up being incredibly bored in the STEM classes, which were filled with content I already knew. The other topics I didn't love, and largely did not enjoy experiencing them. So in the end my satisfaction was miserable, and I dropped out of 7th grade.

Eventually I got a GED and went to college for CS, but it was that time in-between those two that even allowed that to happen. I needed time to explore the world, find what I wanted to know, and figure out how school can help me get there.

As someone on the other end of the hiring table now, I don't even care about knowledge. Knowledge tells me how far you've got. I don't care how far you've got, I want to know how quickly you pick up the material relevant to the job I'm hiring for. I care about acceleration. While the two can be correlated, it's not precise. There's not a single hiring test that I can do to figure out someone's acceleration. What I do know is that testing how far you've gotten on some topic, as leetcode does, is going to fail every single jack-of-all-trades programmer.


> Personally I always loved STEM topics, and would go out of my way to learn about them. This ended poorly for me in school, as I ended up being incredibly bored in the STEM classes, as they were filled with content I already knew.

Thanks for posting this.

This was my experience as well, with the added malus that when I went to school, people were still saying things like "what do I need maths for?" or "a computer will never write the next Dante" and things like that, so not only was it frustrating, it was borderline painful and lonely.

Then I discovered kids. I don't have kids of my own, but I have a very big family and I am grateful for being surrounded by people younger than me of every age from 3 to 20.

I saw them being entertained by the most boring stuff just because it was new to them, building up from there at an incredible pace, and becoming young experts, with all the limits of being inexpert and also being kids, in a very short time.

I realized that what kept them motivated was a feedback loop that needed no external validation: knowing more about that thing made them happily satisfied, so they kept doing it. They don't care about understanding things the wrong way, eventually they'll get it right; they don't care about making mistakes, eventually they'll learn to make new mistakes; they just want to learn more and experience more.

What you call "acceleration".

I saw most of them struggle in school because they were bored. They were getting good grades, most of them at least, and they were keen to put in the work necessary to get them, but their motivation started lacking, until they arrived at university and chose something that could (potentially) assure a good job or would make their parents happy.

It's a sad state of things, if I think about it, but it's also a "great filter", and we should strive to make education something that adapts to the people receiving it (I'm not talking about schools for the gifted or something like that) and not the other way around.

When I was in my 30s a friend of mine married a woman from Finland who was living in Sweden, and then they moved back there when they had kids. I've visited them on many occasions, and when I saw how they approach school there I was astonished.

Kids there are not tracked, they are not tested, there is no standardized grade scale, there is virtually no homework, they do not compete; they learn by playing and are simply taught that you have to get the basics right to go on, and then they are helped to follow their own paths.

I think that, in general, it makes happier adults.

Which is a good goal by itself.


> You DON'T measure knowledge.

> You should measure the satisfaction of the students.

> Because the most valuable asset a developed country needs to protect is the will of the members of their society to keep improving and learning.

But if they aren't actually improving and learning, their satisfaction and desire to continue with what they were getting isn't desire to keep improving and learning.

Self-improvement theater is as much a thing as security theater, and it's something we probably want to be able to distinguish from actual education.


> But if they aren't actually improving and learning, their satisfaction and desire to continue with what they were getting isn't desire to keep improving and learning.

good for them.

In which way is this an obstacle for those who want to?


>This is not the issue, this is the root cause of the issue.

>You DON'T measure knowledge.

>You should measure the satisfaction of the students.

>Because the most valuable asset a developed country needs to protect is the will of the members of their society to keep improving and learning.

What is satisfaction going to get you? As a student, I would have been very satisfied to have great marks while enjoying each night of the week, unfortunately I had to work and skip parties.


> As a student, I would have been very satisfied to have great marks

that's the wrong way to look at it.

As a student you would have been very satisfied to have recognition for your hard work (something that rarely happens in most school systems I know)

You switched cause and effect.

> while enjoying each night of the week, unfortunately I had to work and skip parties

So you were getting satisfaction from doing it, or you would have skipped studying to go to parties.

Which is what I did, I had great marks and went to parties to reward myself.


"You should measure the satisfaction of the students"

OK. Then how do you measure competency? Right now, a medical diploma indicates that the person took all the requisites and passed all the tests to be a practicing physician. If you only measure student satisfaction, how do you know which medical student is ready to treat real patients and which isn't?


> OK. Then how do you measure competency?

You don't.

You certify it, when it is required.

> Right now, a medical diploma indicates that the person took all the requisites and passed all the tests to be a practicing physician.

exactly! because it is required by regulations.

> If you only measure student satisfaction, how do you which medical student is ready to treat real patients and which isn't?

there is a high chance that an unsatisfied medical student is going to be an equally unsatisfied doctor, even if they check all the boxes.

let's be clear: satisfaction is not a measure of how much they are having fun.

just like at the gym: you're not more satisfied if they give you free candy and hot dogs and couches with Netflix, but then you end up fatter and less fit than before.


> You should measure the satisfaction of the students.

It's very easy to make satisfying and engaging teaching. It's a lot harder to make that valuable.

If we focus on student satisfaction rather than understanding, we're failing them.


> If we focus on student satisfaction rather than understanding, we're failing them.

while making them unsatisfied gives them a lot of motivation to understand.

Anyway, satisfaction is about the product: the product is education.

The product is not entertainment.


> ...input from people who have never worked in the field is of pretty limited value in how to resolve the hard part, and will not do much more than annoy teachers even more.

If people within the education system are getting upset that the people who are supposed to benefit from the education and who are paying an enormous sum of money in order to obtain the education dare to have an opinion about the education, I'd say that's a pretty good indication of the problems with the system. I can't think of any other area where there's anger at customers voicing their opinion. Institutions with that kind of attitude probably wouldn't last long if the education system was opened up and students were actually given some choice (say, by separating education and credentialing).


That last paragraph shows the real issue: that schooling is government controlled and provided.

The only people that ought to be involved are the teachers (and other school employees) and the students (and their parents) of that school.

The fact that it might take ‘5 election cycles’ to see a reform through is a disservice to the students, and often frustrating for the teachers as well.

If government does it, that literally opens the door for everyone else to be involved, muddying otherwise clear waters.


How do you offer free universal education without government control?


There’s vouchers, as mentioned.

And why is it necessary for education to be ‘free’ or universal for that matter?

People are different, doesn’t mean we all need to learn the same exact things in the same exact way to be productive members of society. Not that such a goal is ever realistically achievable.


???

Free universal education was the key to take poor people out of poverty.

How is this even worth discussing?

Do you realize that if that goes away, we won't be in a libertarian paradise, we'll be back to feudalism?

Countless people from bad environments broke through due to free universal education.

I guess you're not aware of the history on this domain or are just privileged and you don't realize it.


Government vouchers for every child that can be used at any education institution whose curriculum includes the government mandated curriculum.


But then people with resources to do so will shop for the best schools meeting this criteria, which hurts people who don't shop.


I would argue that the presence of a market structure would encourage schools to compete and thus drive educational advances that would eventually be used in all schools. In this way even parents who just choose the closest school without looking at the schools testing history or teaching approach are more likely to have better outcomes when competition is stronger.


1. How does the "government mandated curriculum" get enforced?

2. What are the barriers to entry and the fungibility of the educational market? If the educational market isn't truly a free market, then what's the point? More private monopolies and oligopolies without proper oversight?


1. Through standardized testing, to ensure the students are actually learning the mandated curriculum.

2. Barriers to entry should be low. Maybe teachers must simply be able to pass the standardized tests themselves? I'm not sure what you mean by fungibility here. I think this system would reduce monopolies in education, since schools could choose whatever methods parents preferred, which encourages different approaches. Also, the monopolies of the current system, government education departments and religious institutions, would be financially penalized if they underperformed and parents chose alternatives.


The real question is even higher. Why should we measure knowledge?

If it's learning for the joy of learning you don't need a test. If it's to get a piece of paper you need for a job, then schools are just shitty interviews mostly uncorrelated to real world tasks.

I think we should move to learning for the sake of learning (free, open-door lessons, or pick your own on the internet in your own time; no frontal lectures, but still provide a space for students to socialise) and give students the chance to work on projects that can prove they know something. Workplaces can look at these projects and find someone who fits with them.

You built a robot? I can reasonably expect you to know something about electrical engineering and math.


Why do we need to "measure knowledge"? School should be there to teach knowledge. Measuring is not our problem; it is the problem of the employers. Test the teachers, not the students. We only need to make sure the teachers are of good quality, not the students.


> why do we need to "measure knowledge"?

Because knowledge is currency. It opens doors to privilege and status in society. It also ensures incompetent people are not put into positions where they can do harm.


You mean measured knowledge acquired in a very specific way is currency. Someone who acquired the same knowledge on his own will be cast aside until he gets his certificate.


That's because the certificate is the value, not the knowledge itself. The knowledge is assumed. Without a certificate, the onus of verifying the required knowledge is now on the consumer or employer, and unsurprisingly neither of them want that, so of course knowledge combined with a respected certificate is worth more than just the knowledge itself.


Would you let an uncredentialled surgeon operate on you?


That's not possible because of regulation. For occupations that don't relate to other people's life or death situations, or security in general, it's reasonable to assume if someone has a skill, they should be able to use it professionally regardless of how they achieved it.


> if someone has a skill, they should be able to use it professionally

Sure, but why would you hire an unlicensed electrician, or surgeon, or car mechanic, or builder, or elevator mechanic, or really anything that matters?

The only areas where this point becomes moot are in areas where certifications already are not an issue, i.e. in jobs that almost anyone can do.


Because they are cheaper? I’ve hired unlicensed people to fell trees in my yard that weren’t really going to fall on the house etc.

Also, I’m fairly certain car mechanics are not licensed where I live. Sometimes licenses are just frivolous, especially in modern day society.


I wonder if those unlicensed people had liability insurance, and, if so, if the insurance companies charged more due to their lack of license?


>Sure, but why would you hire an unlicensed electrician, or surgeon, or car mechanic, or builder, or elevator mechanic, or really anything that matters?

You don't need a licensing system, you just need a reputation system. Like how bonds have a rating system; nobody's stopping you from buying a junk bond, but the system makes it clear to you that it's got a high probability of default.


Ok you can be the first person to receive open heart surgery from an uncredentialed surgeon.


if it was a matter of life and death, of course yes.

If it is not a matter of life and death, there's no possibility it will happen, because of regulations.

Anyway:

More than 250,000 people in the US die every year because of medical mistakes, making it the third-leading cause of death

10 percent of all U.S. deaths are now due to medical errors


> Because knowledge is currency

can I sell it?

Can I buy groceries with knowledge?

Knowledge is a tool!

And it's not even universally true, if you know American English it is perfectly useless in rural China or Japan or Central Africa.

EDIT: I'll add another example that won't upset the American audience.

Numbers in French.

We are used to the decimal way of naming numbers; it won't work in France, where many numbers are counted using a vigesimal (base-20) system.

So 84, i.e. 80 + 4, is quatre-vingt-quatre: 4 x 20 + 4.

My way of counting numbers, which is a basic requirement for kids aged 5, is completely useless in France, even though France is a close neighbour of my country and we have dealt with each other since the dawn of history.
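As a toy sketch of the arithmetic difference (spellings simplified; real French orthography has extra hyphen and "et" rules), here is how numbers in the 70-99 range decompose under the two naming schemes:

    # Toy sketch: decimal vs. French-style decomposition of number names.
    def decimal_parts(n):
        tens, units = divmod(n, 10)
        return "%d + %d" % (tens * 10, units)

    def french_parts(n):
        """70-99 are built on 60 and on 4 x 20 (soixante-dix..., quatre-vingt-...)."""
        if 70 <= n < 80:
            return "60 + %d" % (n - 60)      # soixante-dix-sept = 60 + 17
        if 80 <= n < 100:
            return "4 x 20 + %d" % (n - 80)  # quatre-vingt-quatre = 4 x 20 + 4
        return decimal_parts(n)

    for n in (77, 84, 97):
        print(n, "decimal:", decimal_parts(n), "| French-style:", french_parts(n))
    # 84 decimal: 80 + 4 | French-style: 4 x 20 + 4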


Assume for a second that your goal is to teach knowledge, as you say.

How are you telling whether you are successful at that? Even if you do not care about the personal individual achievement level (or whatever) of the students, you still need to be able to measure, to understand where you are teaching successfully and where you are not, so that you can change and improve your teaching.

As the comment you replied to said, you can't just wish these problems away, and they are not easy things.

The overall thing is not a single problem; these are systems. They can't be "solved" through simplistic answers.


How do we know if schools are working? If you test a teacher instead of a student, the problem still exists, how are you going to test a teacher?


Our brilliant courts have made it illegal to give aptitude tests to prospective employees, so they fall back to college degrees as gate keepers.


Classic "tell me you have never been a teacher without telling me you have never been a teacher". Are there bad teachers? Of course. There are bad employees in all industries. But teaching is an extremely difficult job that underpays so most people are in the profession because they want to help kids.

Some things to understand about teaching. You must always teach to the middle kid in terms of ability and intelligence. By default this already means that some kids will be lost and some kids will be bored. This is made worse by conflating age with competence. Additionally, teachers have no understanding of what the kids are going through at home. Say you have a kid that never does homework. Is that the teacher's fault? Is it because the kid is lazy and just plays Fortnite at home? Or is it because the parent's only job is a night shift and the kid is a de facto parent watching two other kids? Or is it because the parent has a substance abuse problem and the kid hides out at playgrounds until late at night, after everyone is passed out and it is safe to come "home"? Statistically, kids with problems at home also tend to be lower on competence scales. The real problem here is social help for the parents, but we don't have the political will for this. Do you have any idea how often a teacher has had a student's parent come to a conference to discuss concerns about the kid falling behind, just to be told "It is YOUR job to teach my kid, not mine!"? Tell me how testing teachers fixes that. And these are the same teachers that must buy paper/pens/supplies out of their own salary because we ration school supplies.

We have some similar problems at the collegiate level. I worked full time while carrying 12+ credits, paying my own way through college. I had to cut corners and ration my time. This meant lower grades in some classes, but luckily I have the aptitude to get away with it. We are also sending kids to college who shouldn't be there. They don't have a real desire for a professional career outside of something like Social Media Manager. Of course they are going to cheat and use all the tools they have at their disposal, having grown up digital. They aren't interested in the subject matter, they just want to check the boxes and get through it. There is an issue here that needs to be solved at the institution level, namely that kids will always be better at tech than the teachers, but that is silly to lay at the feet of the teachers. In the end they are trying to lay a foundation of knowledge, but the students have to care. Most college classes don't take attendance; is that the teacher's fault too?

Having the best software engineers doesn't mean anyone will use the product. Having the best doctors doesn't mean patients will do what they are told. Having the best trainers doesn't mean people will workout on their own. Having the best therapists doesn't mean anyone will use the techniques suggested in their daily lives.


Computer science is one of the harder subjects to assess knowledge through tests.

In other industries, passing an accreditation exam gives you value in the market. In software engineering, certificates are seen as useless.

This can explain the cynical attitude programmers have against testing.


I have degrees in math and physics. Those degrees gave me close to zero value in the market. Spending X years in school, then having to prove to an employer that you can do barely more than squat is a familiar experience in a lot of fields.

There are tests you can take in those subjects, such as the Graduate Record Exam. Those tests work to some extent because the subject matter is relatively mature, and consistent from one college to another. And yet there are entire fields of math and physics that I've never been exposed to. Their main purpose is to see if you're conversant in a body of knowledge that would prepare you for typical graduate study, not for a job.

Software engineering is a comparatively young field, with less standardization. There are even debates on HN as to whether software engineering is a real thing. There are places where every programmer has the title "engineer" regardless of their background.

I'm only employable because most people hate math and physics so much that they're relieved if anybody offers to do those things for them. That, and I'm pretty good at programming and electronics.


> So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.

Cynically, because they're part of the problem.

Personally I don't think the "obviously bad thing" is the current state of testing. I'd instead say that the problem is the intermingling of education and credentials.

Society doesn't care about education; however, all of the inspiration is geared around it. So you wind up with students who either become disillusioned with the system after realizing it's rife with hypocrisy, or a system that is otherwise structured in a way that creates resentment.

As an anecdote, a friend I went to high school with dropped out when he was barred from participating in the school band, because it was his only source of motivation to show up every day. I don't think that was the intended goal, but when faced with the reality of the situation, the system is unbending.

I think as a result, the ones you see succeeding in college tend to be more driven by either ambition or obligation rather than any actual desire to learn. So in that respect, I think colleges are self selecting for students that are more willing to think cheating is a good idea. And in many respects they may not be wrong.

> Curious, isn’t it, how all these systems seem to fail in the same way?

No...the systems haven't evolved independently. It's no more surprising to me than learning that felines could get covid.

For what it's worth, I'm a community college dropout. The education and mental health systems were absolutely structured in a way where my severe adhd (and its best friends anxiety and depression) went undiagnosed and untreated through my senior year of high school. I loved learning, but there wasn't any way to get an education that could present the coursework in a way that could keep me engaged. And of course my inability to do homework was continually met with being told it was some personal failure on my part and I should just apply myself.

So back to

> So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.

_They're the ones that've made me think we'd be better off scrapping the system and starting over._


I taught high school math for 2 years. Which really isn't enough to diagnose (much less solve) the system's problems. But it did give me a sense of how intractable the problem is.

I find it enormously frustrating when people (not you) complain about "teaching to the test." Teaching to the test is good pedagogy! First, determine what you want students to know/do. Then choose how you're going to assess their knowledge/ability. Then design instruction that prepares them for the assessment. This is called backwards design.


I assumed you would say that programmers should start working for the school system, but your final description of problems is not difficult to solve.

The teachers are the problem.

After all I was also sitting in school for 13 years


> In the case of testing it’s because you choose to focus on the obviously bad thing (current state of testing) rather than the very complex and difficult question behind it: HOW do you measure knowledge? And when you decide how, how do you scale it?

I would actually focus on the question of "Why do I need to quantify everybody's knowledge at a high resolution?"

When I was TAing, I held the position - never accepted I should say - that we should make more courses pass/fail; and instead of investing effort in the numeric grading keys, try to give more meaningful feedback on assignments.

Some alternative suggestions I brought up:

* I suggested that the final grade be a combination of the assessment and a roll of 1D6 points, to hammer it in that the grading is to a great extent artificial. Somehow this was even less popular of a suggestion...

* I once proposed we offer people a perfect passing grade if they just never show up to class nor submit anything, and only people who want to learn would risk an imperfect grade. I really liked that proposal, because it put the two motivations - learning and making the grade - which are often conflated, at direct odds with each other.

Of course none of this was taken seriously - even though I was serious. Kind of.


I feel that school is criminal and I deeply resent how my childhood was spent due to school. So I'm going to opine on it.


Well they do oddly resemble prisons for kids... In both function and appearance often.


I'd say everyone has been a student, so they all have an opinion. It's ripe for bikeshedding. That's the reason, imho.


> So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.

Oh please. Teachers advocate for themselves. Academics are currently waging a war against standardized testing for ideological reasons. Instead of a polemic against people giving their opinions please just tell us what you think.

For my money, the problem with education is that we decided it's not about knowledge but rather about increasing the socioeconomic position of participants. From this it follows that everyone needs a 4 year degree. Education will only function when it's a small number of weirdos who want to be there.

Solve the problem by attacking credentialism, reforming student loans, and bolstering alternative post-secondary education (trade schools, bootcamps).


>In the case of testing it’s because you choose to focus on the obviously bad thing (current state of testing) rather than the very complex and difficult question behind it: HOW do you measure knowledge? And when you decide how, how do you scale it?

This sounds like an entirely different question. When you have a method for testing, you have at least two different measures of effectiveness:

- how well the test measures knowledge when it is taken honestly

- how likely is the test to be taken honestly vs. subverted

I thought this thread was about the second question, but you seem to be focused on the first. But these problems require different kinds of solutions, and crucially, it is much easier (but still not easy) to verify success or failure in addressing the second question (cheating) than the first (predictivity).


This is true for so many other aspects of life as well. Things are the way they are for a reason. Not understanding the deep and complex factors that got the system to where it is dooms you to repeat the mistakes of the past. This is why I go for depth on what I complain and ideate on rather than breadth. Dive deep on something you care about instead of having an opinion about everything. Humans have done well with specialization. If you enjoy breadth, go for it. I just don't see it as very effective.

I suppose that complaining without proposing solutions is akin to protesting. You may not necessarily know what you want specifically, but you don't want the current system.


> Testing .. HOW do you measure knowledge

To me measuring knowledge is a minor reason for having tests. The main reason is to force students to study. If there is no test on the knowledge, most people will just gloss over the details and not learn.


Listening to teachers, yes please! Listening to pedagogy academics, no thanks!


> HOW do you measure knowledge?

This question has no meaning unless you specify what the goal of the measurement is. There are two main options.

1. Measurement as part of education process — for the sake of both teacher and the student.

2. Measurement as part of external qualifications — for the people who would later use the credentials achieved in measurement to accept you to higher education and to extend job offers.

Most of the problems with different measurement strategies happens because people conflate the two.


> These are very hard questions, and it’s frustrating to read the phrase “we need to fix the system” because yes, obviously we do, but agreeing that things are bad isn’t the hard part, and probably input from people who have never worked in the field is of pretty limited value in how to resolve the hard part, and will not do much more than annoy teachers even more.

It's kind of hard to believe this needs to be said, because it is so obviously correct.


The petit-bourgeois elegies here are ridiculous. You take the higher moral ground by establishing that you have "studied pedagogy and know all about it" and then you proceed with providing cliché points on how education has failed. How can you apply criticism to a system you have been indoctrinated by? How fruitful is it gonna be?


i like this thread and wish to tug at it a bit.

i’m not so sure we _can_ fix it or even _should_ fix it. in my opinion, fixing implies a standard of perfection. it’s an imperfect system, formulated by imperfect people - the types we’re going to meet and interact with for the rest of our lives. there are always going to be imperfect ways of measuring the “goal”, be it content domain knowledge, or project completion kpi, or something else.

the positives of an imperfect system that i can think of off the top of my head are that they give teachers the ability and motivation to find creative ways to impart information and knowledge, and it can implicitly educate pupils in how to navigate complex, broken systems.

teachers who come up with novel educational methods are generally heralded for their innovations, but there’s not much else to incentivize them to remain or continue to innovate. not to mention the fact that those innovations may be expressions of their personalities and not an actual template for how every teacher should teach.

the same seems true for students. they find adaptations for navigating those broken systems. some will fall into the stream of the system, play the game, and get high marks. what have they learned? i’d say they have learned a fair amount. some will discover a need to collaborate to survive, and they have learned about themselves. some will complain and resist, but pass based on raw willpower or charm or something else. some will fail but will see gaping holes in the system to explore, exploit, or fill. they’ve also learned.

these are just a few of the dimensions i can think of off the top of my head. i believe that the primary way that we should seek to reform or improve educational systems is through how we treat the educational infrastructure (teachers, staff, materials, services) and the students who are failing to engage the experience due to factors beyond their control (mental health, SES, etc.)


> Cynically, this will never happen because reforms to battle educational issues in any democratic society usually takes more than 5 election cycles to show obvious results (and when the bad results start stacking up current leaders will take the flak regardless).

Well obviously we need to fix the system of the system!


It is perhaps unreasonable for the purveyor of a critical service to demand that they be the only one who is allowed to understand or validate the quality of the service on offer, and to insist that the customer is too naive to be permitted a viewpoint.


Finland is a great example of a world class school system that doesn’t measure “knowledge”. So perhaps trying to measure “knowledge” is the real problem?


I don't think the problem is that complicated - you just can't measure knowledge with a process (or a machine). Only a human can approximate another human's level of understanding.

Trying to create a knowledge factory seems to me a pipe dream. All cheating comes from trying to force learning into a rigid mechanical box.

Solution? My opinion - remove colleges, bring back guilds.

Of course, this is an oversimplification, but the moment you remove the need to print out diplomas, everything does become simpler. The "measuring understanding mechanically at scale" is the hardest problem.


Again, it’s so easy to criticize, point out the “problem” and then offer no solution.

The “knowledge factory” exists for a reason. The way our society is constructed, we need structured specialization (pick a course), verification (you’re OK) and rating (you’re the top 5%) — because our entire society expects these things to work and be available.

It sounds like the only difference in your example is that these things exist but are not centrally verified to be identical, because apparently diplomas themselves are the problem.

People need to be able to improve, be excluded when incompetent and rewarded when excellent, because that is how our society works in all other aspects, and the one thing that will always be true about an educational system is that it will mirror society: and if it doesn’t currently it will in a few decades.

You cannot suggest fundamental changes to an educational system without more or less advocating a revolution in society. No wonder most complaints stop at the problem and never continue to proposed solutions.


> It sounds like the only difference in your example is that these things exist but are not centrally verified to be identical, because apparently diplomas themselves are the problem.

Yes, exactly.

You are using a diploma as an indicator of knowledge, and you have a diploma-giving machine (university) that gives the diploma to anyone that can pass a test. People cheat the test in order to get the diploma. It will always happen, no matter how intricate you make your tests, because you cannot automate knowledge verification.

It isn't a "problem" - it's an "impossibility". And it's one impossibility software companies have gradually started dealing with - most don't care about diplomas anymore, because the correlation between knowledge and having a diploma gets weaker and weaker the more diplomas are printed artificially.

So, that is a pretty good solution, too. Remove diplomas altogether, and let employers measure knowledge in whatever way they see fit. They will bear responsibility for the mistakes of an employee, so it makes sense that they set the criteria. That way, educational institutions will have to make their education useful in the real world, or their reputation will crumble.


> So, that is a pretty good solution, too. Remove diplomas altogether, and let employers measure knowledge in a way they seem fit.

It's not a "pretty good solution", which shows if you start breaking down how you would attempt to achieve this. How do you, first of all, "remove diplomas"? Do you suggest that we fundamentally overturn how our entire society works just to remove cheating?

A "diploma", "certification" or whatever you want to call it, can be issued by many different entities: a collection of nation-states, a state itself, non- and for-profit organizations, even individuals. These all have varying degrees of value depending on the trust placed in the issuing body, from a certificate of having completed Bob's Weekend Sales Course to a state-issued certificate to perform a specific type of surgery.

First of all, which of these are you saying should be "removed"? Only the ones from universities? All of them?

Secondly, how do you remove them? Do you outlaw them?

Thirdly, what happens when they're all gone? How do you certify a surgeon?


> How do you, first of all, "remove diplomas"?

Very simple - just stop giving them to people and let them be forgotten as a concept.

> Do you suggest that we fundamentally overturn how our entire society works just to remove cheating?

Are you suggesting that university diplomas are a fundamental factor of how our society works? If so, would you please explain how?

> First of all, which of these are you saying should be "removed"? Only the ones from universities? All of them?

Only those from the universities. Universities are officially recognized "places to learn" and they should be kept that way. Studying for longer than the "appropriate" number of years should not be frowned upon, but encouraged. The whole process of "verification" should be completely independent of learning.

Bundling "learning" and "verification", the way universities do, inevitably leads to hordes of people who want verification, but not learning - i.e. cheaters.

> Secondly, how do you remove them? Do you outlaw them?

Very simple - stop issuing them (I suppose a country-level ban of university diplomas could do, but politics depend on the country so I can't give you a general answer). If you want a certificate authority, make a certificate authority and make it its sole purpose to verify that people have the knowledge. Leave university out of that.

> Thirdly, what happens when they're all gone? How do you certify a surgeon?

A certificate authority (that only verifies surgeons' skills, and doesn't bundle the process with "learning").


The proposal on the central verification authority is actually how pilot licenses are given. The flight schools have no legal power to examine. For example, theoretical exams are administered by the country's aeronautical authority, which itself only doles out exams drawn from a standardized set of questions. Pilot training was, for me, how education should be. I was not the best pilot, but I now clearly see how that was both my fault and not my calling.

Also, in Portugal and in Poland there is a thing called the national exam, a nationwide exam at certain checkpoints. It is very useful in revealing per-school socioeconomic issues as well as grade inflation (school grade vs. national exam grade). I honestly do not understand why verification is not done independently of teaching at all levels of education. It would also liberate teachers to focus on teaching while having a nationwide benchmark validating their approach to teaching. Ironically, teachers hate to be evaluated.

Another factor that is very hard to handle is that education is an industry that employs a very powerful class, the teachers. More often than not, when education is in the news it is for teachers' labor issues. In Portugal they recently held a strike on national exam days. To give you an idea of how important national exams are, the exam papers are kept in police stations and delivered by police officers to the schools on exam day, to avoid leaks and so nobody has an unfair advantage.

I will never forget that teachers held students' national exams hostage for their negotiations. (They lost, and suffered such a public backlash that their bargaining power was neutered for a few years.)


So you’ve come full circle here, though. What’s the difference between trying to have people not cheat the test at university and not cheat the test at the certificate authority? I was with you until you brought in this part, because it’s literally the same thing now but at a different building, basically.

Employer verification made sense, you mention they have to deal with it if their hire is dumb. This secondary certificate authority idea undermines your entire argument though. Maybe I’m missing something though and you have a good idea for how the CA will mitigate cheating that a university can’t do.


> So you’ve come full circle here, though. What’s the difference between trying to have people not cheat the test at university and not cheat the test at the certificate authority?

Because the certificate authority would be separated from teaching, its sole purpose would be to prevent cheating, and it would be able to focus on that completely. Currently, universities don't have much incentive to focus on preventing cheating, because of overworked professors, or simply because they must print X diplomas a year or disappear.

Besides, when a certificate authority's sole purpose is verifying knowledge, it will become obvious which certificate authority allows cheating - people certified by it will fail at their jobs, thus ruining its reputation.

> This secondary certificate authority idea undermines your entire argument though. Maybe I’m missing something though and you have a good idea for how the CA will mitigate cheating that a university can’t do.

The difference is subtle but important - currently, universities don't suffer much reputational damage from cheaters, because the testing aspect of university is interleaved with the learning aspect - so if a university has good learning opportunities, nobody cares if a certain percentage of its diploma-holders are cheaters. They are on their own.

With certificate authorities, cheating will be naturally devastating, because the certificate authority will (I assume) serve just as a filter for employers - employers will choose employees which hold certificates from a trusted authority, i.e. the one that doesn't let people cheat. So there will be natural incentive to prevent cheating, and the free market will do its thing.


> A certificate authority (that only verifies surgeons' skills, and doesn't bundle the process with "learning").

It seems like you've just moved the cheating problem from one organization to a different organization? How would this new certificate authority measure learning better than universities are currently measuring learning? Are there examples of such certificate authorities existing now? Do they also have cheating problems?


> It seems like you've just moved the cheating problem from one organization to a different organization?

While that might seem redundant, keep in mind that most of the cheating happens because responsibility for both teaching and testing falls on the professor of the subject, and there is not much incentive to prevent cheating when the school must print X diplomas a year or disappear. I assume that an organization whose sole purpose is to certify knowledge can be much more specialized in testing and spend most of its time trying to combat cheating.

> Are there examples of such certificate authorities existing now?

A sibling comment has mentioned pilot schools: https://news.ycombinator.com/item?id=31548177


> My opinion - remove colleges, bring back guilds.

How do you prevent them from becoming corrupt gatekeepers that maximize their own profits while ignoring learning as much as possible?

"measuring understanding mechanically at scale" is exactly the tool to prevent that.


By removing any kind of formal authority by them. They should be places to learn, not a bureaucratic machine for deciding who is "worthy" and who is not.

There could be another organization that specializes in knowledge verification and certification. But that should be completely independent of learning.

Of course, now the question becomes "how do we prevent a verification and certification authority from abusing their power?" - but that question is not particular to this context and can be applied to any human organization of any kind. However, in this particular case, I think that employers would be the verification authorities themselves. They are the ones that need the real-world knowledge, so they should be the ones to measure it.


It sounds more like we need students to be mentors so that the number of teachers scales with the number of students.


Interesting proposal.

Let's say this was formalized as follows:

Each student would be encouraged to serve as a mentor for up to 3 younger students, starting in year 3 of their studies and each semester thereafter, and receive a small stipend for each mentee.

At the start of each semester, prospective mentors would be listed and students would be allowed to seek them out. Both parties would be allowed to know the grades of each other, and the mentees would be allowed to reach out to former mentees of the same mentor. Mentors would also be allowed to do some kind of self-promotion where they could "sell" their abilities as mentors.

After each exam, the mentoring would count as having taken a course for the mentor, with a grade equal to the average grade of their mentees, and it would provide a number of "mentoring credits" equal to the number of mentees passing. This might seem unfair, but the idea is that it would encourage competition among mentors to "catch" the best students, encouraging the mentor to put effort in.
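
To make the bookkeeping concrete, here is a rough sketch in Python; the passing threshold and the data shapes are purely my own assumptions for illustration, not part of the proposal:

    # Illustrative only: PASSING_GRADE and the data shapes are assumptions.
    PASSING_GRADE = 50

    def mentor_results(mentee_grades):
        """Per course: mentor grade = average of mentee grades,
        mentoring credits = number of mentees who passed."""
        if not mentee_grades:
            return 0.0, 0
        mentor_grade = sum(mentee_grades) / len(mentee_grades)
        credits = sum(1 for g in mentee_grades if g >= PASSING_GRADE)
        return mentor_grade, credits

    print(mentor_results([72, 48, 90]))  # (70.0, 2): two of three mentees passed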

For the next semester/course, new mentor student connections could be set up, or the same as the semester before could be kept, if both sides agreed.

When a student receives their final diploma, all their mentoring results would be listed: the courses, the average grade and number of mentees per course, as well as the total students*courses mentored and the average grade across those.

I can imagine a lot of employers would be highly interested in this information, as it could be extremely predictive for some kinds of positions (in particular positions of leadership or teaching). Students who had repeatedly mentored other students who achieved great results would be likely, in the future, to be able to recruit and keep high-quality employees and help maximize the output of a team. In both cases, it might well be the very students they had been mentoring who would be potential hires.

Or, if employed by a university, they would be likely to attract high-quality postgraduate students as well as be effective supervisors for them.

Now, shy or introverted people, or people with poor social skills, might find this unfair. But I think there would still be room for people who focused exclusively on learning the subjects themselves, and these would have more time available for that. These might also care less about those jobs where mentoring success would be seen as crucial.


This will also encourage unscrupulous mentors to help their mentees cheat!


This is what we do in companies with “train-the-trainer” and with satellite/ambassador engineers/teams.

The only thing I know from education that does something similar is “Jena-plan” in elementary school and Teaching Assistants in uni. Nothing in our high school.


A minor critique and support for what you are saying.

The experts of pedagogy (academics and teachers) are rarely digitally literate enough compared to their students (they can’t use technology in any competent or engaging-enough way). This line was crossed about 15 years ago, and most institutions still function digitally like it’s 2005.

Some neat studies out there about this. Profs are smart and know what they don’t know. Academic leadership often doesn’t have the will to modernize. We saw how many colleges resisted modernizing during the pandemic lockdowns, and when they came back they lamented how poor online learning was, after having designed a solution in 2 weeks.

Bureaucracies ultimately serve their own self-preservation.

Pedagogy is a red flag word. People who use it incorrectly often discount themselves pretty quickly. Pedagogy is about how children learn, it has less relevance in higher education, which is more about learning how to learn. Andragogy is how adults learn and I invite anyone to see how often that word is used.

It’s pretty telling at educational conferences how pedagogy is used every 3 words, when how adults (including young adults) learn is quite different from how children learn.

When I hear the word pedagogy used to refer to anything beyond high school, it’s a telltale sign of people using buzzwords, and a sign they might not really know the difference.

If experts really wanted to get into digital learning, rethinking taxonomies based on old ones that don’t seem to bridge the divide might be a start.

Instead, academics have insisted on sharding the digital learning experience among dozens of digital tools for students (how many different things do students have to log into?), perhaps so it will not challenge their job security. Ironically, most institutions have streamlined enrolment systems that are pretty complex but can take your money smoothly.

I think part of this is because too many academics are poor listeners, reluctant to openly entertain ideas and positions that are not their own. There are amazing academics who get all of this and more (including the solutions), but they are often buried in the toxic cultures present at most post-secondaries.

In academia, adoption of ideas is gradual and slow, too slow for the rapid changes that have taken place in society over the past 2-3 years.

Another observation is that most universities only teach how to teach children, and churn out teachers. Maybe that’s why the word pedagogy is so present in academic circles. Do universities have a 4-year degree in teaching university students, like they do for K-12?

In this post, the way WhatsApp and Google Docs are used to learn together, for better and worse, was created by students, not experts. Good on the prof for finding more ways to engage with the material. It’s a big problem.

Institutions have not kept up the skills of their staff; they are decades behind. It will probably be another 10 years before the folks at the top who want to keep things familiar start to retire, and digitally native geriatric millennials can start getting into those roles and help drive change.

You have a good point about election cycles, but it can go both ways and hurt education and curriculum too, if that’s what politicians want.

Looking at students, Covid forced 10-15 years of change to happen in 2, while we have students who have missed a big chunk of their education. There’s a need for our leaders, experts and institutions to recognize and do something about this, but bureaucracies ultimately serve their own self-preservation at all costs.

One choice is to expend all this effort fixing the old institutions; the other is to put the same effort into building new institutions for the future.

Education is no longer measured by hours of butts in seats. Education is no longer like math, unchanged for 500 years, where curriculum can take a few years to make minor tweaks.

It’s interesting to see how much more society has opened up to taking courses from anyone to learn the beginnings of any topic. If you ask me, the clock is ticking on academic brands if they can’t create and revise curriculum faster than the 1-3 years it can take to approve and change a single sentence in a course.

If it’s relevant, I’ve built platforms to deliver online K-12, post-secondary education and industry training certification for an unusually long time. It feels like a weird world sometimes, with the lens still stuck on creating and delivering education as if it were designed to be stored on encyclopedia CDs.

Meanwhile, industry often has to fill its own gaps to build the skills and competencies it needs in people, because education isn’t turning out the people that are needed. The advances industry put in place to keep its people safe shouldn’t be discounted.

Edit: Grammar, clarifications.


> Pedagogy is about how children learn, it has less relevance in higher education, which is more about learning how to learn. Andragogy is how adults learn and I invite anyone to see how often that word is used.

It's always amusing to see lectures on correct usage from people who don't know the difference between etymology and meaning. (And who don't know the etymology either, since etymologically, pedagogy isn't “how children learn” but more like “the act of leading children”.)

In English, especially American English, “andragogy” is mostly used in relation to a particular theory/approach to adult education originating with Malcolm Knowles, who leveraged the same conflation of etymology and meaning to promote it (even when the term originated, pedagogy was well established in its modern, more general sense despite the narrower scope of its Greek roots). Education for different audiences by age or other circumstance is not generally distinguished by different Greek-root terms in English, but by plain English terms (“early childhood education”, “adult education”, “continuing professional education”, etc.).


That’s great. Thanks for the clarifications.

Hopefully you understood the meaning of what I was saying and that we might agree that pedagogy applies/refers to children, not adults.

Is there a term that better refers to adults?


> Pedagogy is a red flag word. People who use it incorrectly often discount themselves pretty quickly. Pedagogy is about how children learn, it has less relevance in higher education, which is more about learning how to learn. Andragogy is how adults learn and I invite anyone to see how often that word is used.

The original root of "pedagogy" relates to children, but I still have a lot more time for someone who uses that industry-standard term than I do for anyone preaching 'andragogy'. It's a niche, culturally-bound and assumption-based theory with little research to actually support any of its claims.

Despite the grandiose name, 'andragogy' is just another 'learning styles' or 'growth mindset' - it's pop psychology designed to sell training courses.


It’s less about not having the time and being constructive.

Happy to use a better term than andragogy - I’m happy to learn what it is.

Any ideas or suggestions?


Scaling solutions invariably rely on parents. Ignoring parents seems like a great way to design broken systems.


Now you increased the burden by a factor of 3…


Why do we need to measure it in the first place?


> and then listen to the teachers and academics

Should we also always listen to the car mechanics, used car salesmen, and cops because after all they are the experts?


I TA'd for a Prolog course at my university (Imperial College London) during the four years of my PhD there. As part of that work I helped correct students' papers. It was pretty clear to me that the students were sharing their code and only changing variable names etc. to make it look different.

It didn't work, because you could see the same, let's say, idiosyncrasies in their code. For example, there might be three or four different ways to solve a coding exercise, and about 60-75% of the papers would solve it the same way, which was not necessarily the best, or even the most obvious, way (what is the most obvious way to solve a Prolog exercise might not be common sense, but that's why you are given a lecture first, and then the exercise).
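
To be clear, I spotted this by eye, but it's also the kind of thing that is trivial to check mechanically, which is why renaming variables hides nothing. Here's a toy sketch in Python (purely hypothetical, not something I actually ran on their papers): Prolog variables start with an uppercase letter or an underscore, so normalising them makes renamed copies collapse to the same text.

    import re

    def normalise(src):
        # Replace each Prolog variable with a canonical name, in order of
        # first appearance, so only the program's structure remains.
        names = {}
        def canon(match):
            var = match.group(0)
            names.setdefault(var, "V%d" % len(names))
            return names[var]
        return re.sub(r"\b[_A-Z][A-Za-z0-9_]*", canon, src)

    a = "append([], Ys, Ys). append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs)."
    b = "append([], Bs, Bs). append([A|As], Bs, [A|Cs]) :- append(As, Bs, Cs)."
    print(normalise(a) == normalise(b))  # True: same program, renamed variables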

What's most interesting is that I saw the same patterns repeated over the three years I TA'd for one of the Prolog courses. I guess they shared the answers between years, or somebody had put them online (I searched but couldn't find them). Or they just copied solutions to similar exercises they found online.

I didn't report the cheating because I felt there was no benefit in doing so. In particular with Prolog, because it's not a language commonly used in the industry, and it's taught at Imperial mainly for historical reasons (there are many of us logicists studying, or teaching, there) I reckoned that most students found it a useless chore and did not understand why they needed to learn it, and why they needed to "waste" time solving those coding problems. So they copied from each other in order to get the job done quickly and then have more time to spend on the things they felt were more useful to them (like learning Python or "ML" I suppose).

I personally thought, and still think, that learning to program in Prolog is useful, just to disentangle a programmer's mind from the particularities of the coding paradigms, and the programming languages, she's most familiar with. At CS schools today, programming is introduced with Python, and I guess it's easy to get into the frame of mind that all programming languages must necessarily work like Python. Studying languages from different paradigms, like Prolog or Haskell, can shake you out of that mentality (it sure did for me, back when I did my CS degree).

The problem is that you can't really force this appreciation of the need to learn different things on students, who are often in a terrible hurry and under terrible pressure to do well in their course, so they can get on with life. The Prolog course I TA'd was mandatory, and so it must have really felt like someone was trying to force the knowledge down the students' throats.

I don't think that's a good idea. You can't teach people that way. They'll just see your obvious effort to force them to learn what you want, and they'll simply take the obvious route around it. And that ends up teaching them a lesson that you really weren't expecting to teach: the world is full of idiots who think they can teach you things, but you know better than they do and you'll show them who's boss.

That's what students do with tests, also. They can see they're a useless waste of time and they can see the obvious way around them is to cheat, and that it's to their benefit to cheat. And so they cheat. I don't have solutions to this. The students shouldn't have to fight the school, and the school shouldn't have to fight the students. The school is there for the students' benefit after all.


> probably input from people who have never worked in the field is of pretty limited value in how to resolve the hard part, and will not do much

This is the story of hacker news. And of most other online forums. And of most meetings. Let’s bikeshed. Hi, I’m on the Internet and I read the whole post title. Let me share my thoughts.

People are stupid, lazy, and uninformed in the general case. People writing in an 8 line comment fields on mobile aren’t going to be the exception.

There is no need to be frustrated. Unless you enjoy that feeling. Then by all means embrace it.

Try this? Dip into the comments with the right expectations. These are off-the-cuff, uninformed thoughts of the masses. Maybe they make you laugh or cry, or maybe something inspires you. Maybe there's a gem buried here somewhere.

Don’t expect HN commenters to know anything deep. If you are in the mood for a deeper thought, go to the library.


The other problem with the "we need to fix the system" arguments is that they often ignore the much greater problems in our society.

Our schools, in the aggregate, aren't that bad. We have a broad spectrum and inequality is severe, but even in the worst-off areas, it's not the schools so much as broader social conditions that are producing lousy academic performance. If kids are getting evicted, they're not going to be able to turn homework in on time. If they're doing nothing all summer, they're going to backslide. I also question the social value of "fun" projects like dioramas in grade school: the result seems to be that middle-class kids' parents do all the work, producing adult-quality work, while the less well-off students turn in projects that look like they were made by kids.

We have ridiculous rates of cheating because we're in a society run by people who cheated and everyone knows it. Corporations cheat; you can't become (or stay) an executive if you don't lie and backstab your way to the top. The fish is rotting from the head, and young people are extremely alienated. This doesn't justify their actions, at an individual level, but it does explain the upsettingly high rate of dishonesty we're seeing.

People also underestimate the power of peer framing and moral drift. Generally, people don't wake up one day and decide that they want to cheat their way through college like some future insurance executive. It happens over time. They start with minor offenses like lifting a sentence without attribution, or looking up one answer on a phone... but, over time, they're plagiarizing whole papers and have stopped doing the actual work... and this is when they usually get caught.

Dishonesty also goes both ways. Grading might be broken, but a world without it would be worse--removing the SAT enhances the preexisting advantages of the rich. Once people become teenagers and realize that advancement in society isn't only based on merit but also requires playing social/nonacademic games (in high school, to be popular and appear "well-rounded" to admissions committees; in college, to get laid but also to get introduced to the best companies; in the work world, to ingratiate oneself to the right people and thus climb the ranks over more deserving but less likeable peers) at which everyone cheats, because everyone has to do so... because global corporate capitalism is itself a cheating system in which most of us are predestined to lose... it becomes harder to make a moral argument to them that cheating is categorically unacceptable.


This comment is saying "No one else here knows as much about this as I do", and little else.

Your only concrete solution, produced by your years of study, seems to be "shut up and listen to educators". Well what are they saying?? How do we fix the problem of cheating and the other issues associated with measuring learning through testing?

If you have so much specialized knowledge about the problem, what does it tell you about how to fix it?


I said no such thing. I never even claimed to be a teacher (which I am not).

What I said was that we should listen to people who are active in the field for proposed solutions to problems we face. I also never said I was active in the field, in fact I said quite the opposite.

I really don’t understand why your comment is so confrontational. I never claimed to have the answers to all problems in all educational systems across the globe, I only suggested that if you want to resolve them you probably shouldn’t do it by having an open discussion involving only programmers (or any other non-pedagogical group of people for that matter).


Is this what you really got from that comment? The implicit point is that the better you want to assess knowledge, the less scalable it is; e.g. giving oral/one-to-one interview-style assessments to university students is not feasible even if it is a better knowledge assessment.

Therefore, saying "fix the system" isn't helpful; everyone knows some fixing needs to be done, but not how, and even those who do know don't have the power to do it. Look at problems like poverty, housing supply, climate change: I can look at all of these and say the system is broken.


> This comment is saying "No one else here knows as much about this as I do", and little else.

No, it's not. Not at all.

But your comment is saying "How dare you suggest that we listen to experts? We all should have a say!". And that's the problem the gp post is pointing out. Do you argue like that with your doctor before surgery? With your lawyer before they defend you in court? With a chef before they cook a meal you ordered in a restaurant? No? Then why is education any different, and why does everybody suddenly claim to know how it should work?



