
>For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

Does that actually work? I'm long past having easy access to college programming assignments, but based on my limited interaction with ChatGPT I would be absolutely shocked if it produced output that was even coherent, much less working code given such an approach.



It doesn't matter how coherent the output is - the students will paste it anyway, then fail the assignment (and you have to deal with grading it), and then complain to their parents and the school board that you're incompetent because you're failing the majority of the class.

Your post is based on the misguided idea that students actually care about the basic quality of their work.


>> Does that actually work?

Sure. Works in my IDE. "Create a linked list implementation, use that implementation in a method to reverse a linked list and write example code to demonstrate usage".

Working code in a few seconds.
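For reference, the kind of answer that prompt tends to produce looks roughly like this - a hypothetical Python sketch (names and structure are illustrative, not any actual model's output):

```python
# Minimal singly linked list, a reverse method, and example usage -
# roughly the shape of output the prompt above asks for.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push(self, value):
        # Prepend in O(1) by making the new node the head.
        self.head = Node(value, self.head)

    def to_list(self):
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out

def reverse(linked):
    # Iteratively flip each node's next pointer.
    prev, node = None, linked.head
    while node:
        node.next, prev, node = prev, node, node.next
    linked.head = prev
    return linked

# Example usage demonstrating the list before and after reversal.
ll = LinkedList()
for v in [3, 2, 1]:
    ll.push(v)
print(ll.to_list())           # [1, 2, 3]
print(reverse(ll).to_list())  # [3, 2, 1]
```

Which is exactly the point: this is a standard first-year exercise, and a model reproduces it trivially.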

I'm very glad I didn't have access to anything like that when I was doing my CS degree.


Yeah, and forget about giving students skeleton code to fill in; an AI can quite frequently ace a typical undergraduate-level assignment outright. I actually feel bad for people teaching programming courses, as the only real assessment left is in-class testing without computers - but that is a strange way to test students' ability to write and develop code to solve certain classes of problems.


Why do the in-class testing without computers?

We use an airgapped lab (it has LAN and a local git server for submissions, no WAN) to give coding assessments. It works.


At my college, we did in-class testing with pseudocode, because we were being tested on concepts, not specific programming languages or syntax.


Hopefully someone is thinking about adapting the assessments - asking questions on those in-class tests that focus on big-picture understanding instead of details.


I have some subjects, at Masters level, that are solvable with one prompt. One.

Quality of CS/Software Engineering programs varies that much.


Yeah. On the other hand: "implement Borůvka's MST algorithm in CUDA such that only the while(numComponents > 1) loop runs on the CPU and everything else runs on the GPU. Memcpy everything onto the GPU first and only transfer back the component count each iteration / keep it in pinned memory."

It never gets it right, even after many reattempts in Cursor. And even when it gets it correct, it doesn't parallelize effectively - it's a hard problem to parallelize.
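For readers unfamiliar with the algorithm, here is the sequential structure the prompt refers to, as a plain Python sketch (not the CUDA version - in the CUDA port, everything inside the while loop would run as GPU kernels, with only the component count transferred back each iteration):

```python
# Sequential Borůvka's MST sketch. The outer while loop is the part the
# prompt above wants kept on the CPU; the per-edge work inside it is
# what would become GPU kernels.

def boruvka_mst(num_vertices, edges):
    """edges: list of (u, v, weight) for a connected undirected graph.
    Returns the total weight of a minimum spanning tree."""
    parent = list(range(num_vertices))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst_weight = 0
    num_components = num_vertices
    while num_components > 1:
        # Find the cheapest edge leaving each component.
        cheapest = [None] * num_vertices
        for u, v, w in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue  # edge is internal to a component
            if cheapest[ru] is None or w < cheapest[ru][2]:
                cheapest[ru] = (u, v, w)
            if cheapest[rv] is None or w < cheapest[rv][2]:
                cheapest[rv] = (u, v, w)
        # Merge components along their cheapest edges.
        for e in cheapest:
            if e is None:
                continue
            u, v, w = e
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst_weight += w
                num_components -= 1
    return mst_weight

# Example: square graph 0-1-2-3 with one expensive closing edge.
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (0, 3, 4)]
print(boruvka_mst(4, edges))  # 6 (edges of weight 1 + 2 + 3)
```

The difficulty the commenter describes is real: the cheapest-edge step requires per-component atomic minimums on the GPU, and the union-find merging is inherently contention-heavy, so a naive kernel-per-step translation rarely performs well.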


Why are you asking? Go try it. And yes, depending on the task, it does.


As I said, I'm not a student, so I don't have access to a homework assignment to paste in. Ironically I have pretty much everything I ever submitted for my undergrad, but it seems like I absolutely never archived the assignments for some reason.


I was able to get ~80% one-shots on Advent of Code with 4o, up to about day 12 IIRC.


since late 2024/early 2025 it now is the case, especially with a reasoning model like Sonnet 3.7, DeepSeek-r1, o3, Gemini 2.5, etc., and especially if you upload the textbook, slides, etc alongside the homework to be cheated on.

most normal-difficulty undergraduate assignments are now doable reliably by AI with little to no human oversight. this includes both programming and mathematical problem sets.

for harder problem sets that require some insight, or very unstructured larger-scale programming projects, it wouldn't work so reliably.

but easier homework assignments serve a valid purpose to check understanding, and now they are no longer viable.



