Perhaps this is another of the early (?) nails in the coffin for traditional higher education.
If it becomes harder to assess if someone learned something (with a grade), the results of that assessment (GPA) become less valuable. Software has traditionally been at the forefront of allowing people with non-traditional backgrounds (bootcamps, other degrees, self-taught) to work in the highest echelon of jobs, because of experience outside of the classroom (open source, personal projects).
ChatGPT and its ilk put more pressure on evaluation of candidates in interviews and should lend more weight to impact/experience based criteria on resumes (vs education-based).
There is a spectrum of people using ChatGPT to cheat vs learn. But, ideally, "cheaters never win", so interviewers and resume screeners will soon be under as much pressure as educators to holistically evaluate candidates beyond the crutch/immediate signal of a degree. They're just further downstream.
I did most of a humanities degree in the early- to mid-'00s and the only courses that relied heavily on long-form out-of-class writing exercises for grades were in the actual language departments (English, foreign languages).
The rest were big on the occasional short quiz in-class to check understanding, and periodic "bluebook" exams that involved writing the equivalent of perhaps 3-5 total pages of typewritten material, by hand, in one class period, in person, in response to perhaps a half-dozen prompts. Basically a series of short, to-the-point essays. Not a ton of outside-of-class paper composition. I doubt they'd have trouble adjusting to remove those all but entirely.
You could force tests to be done in testing centers. My college had these and they were strict about what you could bring; you got up to a whole week to show up on your own time, and you were only allowed paper and pencil, if anything at all, which they provided. Make the final and midterm exams worth roughly 60% of the grade, and it won't matter if students cheat on their homework.
Edit:
Alternatively, have students do presentations of their code from their homework, just as we all do peer review professionally. Let students learn and teach other students.
I think the edit is more the case for the near future.
I think we’re about to see a shift from professors running the same curriculum year over year, not really knowing the students who come and go on a time conveyor belt, to something much closer to the imagination of the parents who are often paying for their kids’ college “experience”.
OR - I see the tools used to cheat also being used to detect cheating.
I think it just moves the definition of “learning” to a higher level of abstraction: do you know what AI tools to use, how to prompt them, and how to understand their output?
I’m reminded of the time when graphing calculators were going to destroy math programs because nobody would “really know” how to do the work. And yet here we are, and math is fine, and calculators are just another tool.
I hear this argument a lot and I think that it's fallacious. Let's go ahead and extend it to another AI tool; in this case, let's talk about Stable Diffusion.
Say you teach a class on fine art and painting. If you allow your students to use Stable Diffusion for all their drawings, would you make the case that they have learned how to paint?
Likewise, you can't really make the case that somebody understands recursion if all they're capable of doing is typing the following prompt into ChatGPT: "change my for loop into a recursive method".
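For concreteness, here's roughly the transformation that prompt asks for, as a minimal Python sketch (the language and function names are my own choice for illustration; the original comment names neither):

```python
# Iterative version: sum the integers 0..n-1 with a for loop.
def total_iterative(n):
    acc = 0
    for i in range(n):
        acc += i
    return acc

# Recursive version of the same computation. Actually understanding
# recursion means being able to point at the base case and explain
# why each call makes progress toward it.
def total_recursive(n):
    if n == 0:                 # base case: nothing left to add
        return 0
    return (n - 1) + total_recursive(n - 1)  # shrink the problem by one
```

A model will happily produce the second version from the first; being able to identify the base case and explain why the recursion terminates is the understanding the prompt never tests.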
And in my experience going through calculus, the use of graphing calculators was heavily restricted. We still had to understand how to calculate derivatives and integrals by hand.
> would you make the case that they have learned how to paint
Well, you’re assuming that “painting” is the physical act of moving a brush on canvas.
But that’s already not true. Plenty of people graduate art school with degrees, despite doing everything on a computer. Are they “painters”? Well, no, but they are artists.
And if you’re talking about a program for artists, where the work is judged on artistic merit (composition, concept, etc), I don’t think it matters what mediums are used.
But if we’re narrowly focused on something more like sign painting, where what matters is brush technique and conforming to customer expectations, sure, AI will reduce the need for such people and will allow those who exist to “cheat”. But who cares?
Most math classes are taught without graphing calculators, and when they are used it is minimal. This is fundamentally different: this is literally people trying to substitute AI for actually knowing anything or for being successfully evaluated on anything. The advocates of “bringing down higher education with AI” also consider competence to be “ableism”.
I support using AI in education because it will be in the work environment. It seems insane that anyone would want to teach students on slide rules and typewriters and then send them out to a world of computers and word processors.
And I have no idea where your “ableism” comment came from. Just trying to inject some culture war?
This business of “ableism” comes up all the time in ChatGPT discussions. I don’t know what your slide rule comment means. Even with calculators, we expected students to be able to calculate without a calculator. I really don’t see how you expect people to do “complex” tasks if they cannot do simple ones. Is this magic?
Higher education will become just an optional prep course for your sit-down conversational AI interview.
Once AI can do a good job vetting candidates, I see no reason for companies not to have an open applicant process where anyone can interview and be evaluated. If you are sharp and know your shit, a degree won't matter and the AI interviewer won't care.
But this is an "all else being equal" scenario. My true belief is that AI will change things so radically that there is effectively an event horizon in the near future; it's impossible to predict what's beyond it.
The event horizon you describe is always there, be it 3D printing, AI, Moore's law, etc. The things they enable are hard to predict.
Think about cloud computing. It changed the game massively for startups and for mere mortals who need enterprise-class infrastructure.
Another constant tension that shows how unpredictable all this is: do you use kernel networking, let the kernel use hardware offloads, or go straight to DPDK? The right choice keeps changing as the hardware changes, the kernel changes, etc.
Once you understand that life is ALWAYS at an event horizon, you understand AI is just another such event.
Predicting the future is for the naive; making the future is the way to go. Currently the AI guys are doing that. But another thing will rise up, it always does.
Why would you even need jobs in this case? Also, why does everyone think we live in some magical fairy tale in which the majority of people with advanced knowledge of a subject acquired it without higher education? Do these people really think everyone is an autodidact? If that were the case, why would they need to cheat on their courses?