Hacker News | respondo2134's comments

except (like it or not) students are in direct competition with each other. Unique assessments would be impossible to defend the first time a student claimed your "unfair" test cost them a job, scholarship or other competitive opportunity.


honest q: what would it look like from your perspective if someone worked in entirely different tools and then only moved their finished work to google docs at the end?


In this case, the school was providing chromebooks so Google Docs were the default option. Using a different computer isn’t inherently a negative signal - but if we are already talking about plagiarism concerns, I’m going to start asking questions that are likely to reveal your understanding of the content. If your understanding falters, I’m going to ask you to prove your abilities in a different way/medium/etc.

In general, I don’t really understand educators hyperventilating about LLM use. If you can’t tell what your students are independently capable of and are merely asking them to spit back content at you, you’re not doing a good job.


> In general, I don’t really understand educators hyperventilating about LLM use. If you can’t tell what your students are independently capable of and are merely asking them to spit back content at you, you’re not doing a good job.

Sounds as though you do understand it.


You shouldn't do that. You can make it clear at syllabus time that this will result in the paper being considered AI-assisted.


you're missing the false positives though; catching 80% of cheaters might be acceptable, but 20% false positives (not the same thing as 20% of the class) would not be. AI-generated content and plagiarism are completely different detection problems.
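To make the parent's point concrete, here is a quick sketch with invented numbers (class size, cheater fraction, and detector rates are all assumptions for illustration):

```python
# Illustrative base-rate arithmetic; all numbers are made up for the example:
# a class of 100 students, 10 of whom cheated, and a detector that catches
# 80% of cheaters but falsely flags 20% of honest work.
class_size = 100
cheaters = 10
honest = class_size - cheaters  # 90

true_positive_rate = 0.80   # fraction of cheaters the detector flags
false_positive_rate = 0.20  # fraction of honest students it flags anyway

flagged_cheaters = cheaters * true_positive_rate
flagged_innocent = honest * false_positive_rate

# A "20% false positive rate" applies to the honest students,
# who here outnumber the cheaters; the innocent flagged (18)
# exceed the cheaters actually caught (8).
print(round(flagged_cheaters), round(flagged_innocent))
```

With these assumed numbers, most of the flagged students are innocent, which is why a false positive rate is not the same thing as a fraction of the class.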


For sure.

False positives are guaranteed with non-deterministic technology.

It's more than slightly comedic that people are amazed when LLM math works exactly as it was designed to.


no, you multiply their result by .8 to account for the "uncertainty"! /s


Except the power imbalance: position, experience, social standing, etc. meant that the vast majority just took the zero and never complained or challenged the prof. Sounds like your typical out-of-touch academic who thought they were super clever.


It's an incredible abuse of power to intentionally mark innocent students' correct answers wrong just to solve your own problem, one you may very well be responsible for.

Knowing the way a lot of professors act, I'm not surprised, but it's always disheartening to see how many behave like petty tyrants who are happy to throw around their power over the young.


If you cheat, you should get a zero. How is this controversial?

Since high school, the expectation is that you show your work. I remember my high school calculus teacher didn't even LOOK at the final answer - only the work.

The nice thing was that if you made a trivial mistake, like adding 2 + 2 = 5, you got 95% of the credit. It worked out to be massively beneficial for students.

The same thing continued in programming classes. We wrote our programs on paper. The teacher didn't compile anything. They didn't care much if you missed a semicolon, or called a library function by a wrong name. They cared if the overall structure and algorithms were correct. It was all analyzed statically.


I understand both that this is valuable AND how many (most?) education environments are (supposed) to work, but 2 interesting things can happen with the best & brightest:

1. they skip what are to them the obvious steps (we all do as we achieve mastery) and then get penalized for not showing their work.

2. they inherently know and understand the task but not the mechanized minutiae. Think of learning a new language. A diligent student can work through the problem and complete an a->b translation, then go the other way, and repeat. Someone with mastery doesn't do this; they think within one language and only pass the contextual meaning back and forth when explicitly required.

"showing your work" is really the same thing as "explain how you think" and may be great for basics in learning, but also faces levels of abstraction as you ascend towards mastery.


It's like with the justice system: if you have to choose between the risk of jailing an innocent person and the risk of letting a guilty person go free, you choose to let the guilty person go free. Every time.

Unless you're 100% sure that a student cheated, you don't punish them. And you don't ask them to prove they're innocent.


It's extremely reasonable to ask students to show their work because tests are testing understanding. The understanding IS the work.


> If you cheat, you should get a zero. How is this controversial.

Because the teacher was knowingly giving zeroes to students who didn't cheat, and expecting them to take it upon themselves to reverse this injustice.


this would be an incredibly tough play. We've seen few success stories, and even when the product is good, building the business around it has often failed. Most of the consumer plays are terrible products with weak execution and no real market. I have no doubt they could supplement lots of consumer experiences, but I'm not sure how they are more than a commodity component in that model. I'm a die-hard engineer, but equating the success of the iPhone to Ive's design is like saying the reason there were so many Apple IIs in 80s homes and classrooms was Woz's amazing design.


but with 10x or 100x the chutzpah


it's notable that there is no talk about defining what exactly AGI is - or even spelling out the three-letter acronym - because that doesn't serve his narrative. He wants the general public to equate human intelligence with current OpenAI, not to ask what this means or how we would know. He's selling another type of hammer that's proving useful in some situations but presenting it as the last universal tool anyone will ever need.


And because it's become apparent that LLMs aren't converging on what's traditionally been understood as AGI.

The promise of AGI is that you could prompt the LLM "Prove that the Riemann Hypothesis is either true or false" and the LLM would generate a valid mathematical proof. However, if you throw it into ChatGPT what you actually get is "Nobody else has solved this proof yet and I can't either."

And that's the issue. These LLMs aren't capable of reason, only regurgitation. And they aren't moving towards reason.


When I ask Claude to debug something, it goes through more or less the same steps I would have taken to find the bug: add some logging, run tests, try a hypothesis...

Until LLMs got popular, we would have called that reasoning skills. Not surpassing humans but better than many humans within a small context.

I don't mean that I have a higher opinion of LLM intelligence than you do, but perhaps I have a lower opinion of what human intelligence is. How many of us do much more than regurgitate and tweak? Science has taken hundreds of years to develop.

The real question is: when do knowledge workers lose their jobs? That is close enough to "AGI" in its consequences for society, Riemann hypothesis or not.


Did you read the whole thread and all of your own comment each time you had to type another half-word? If not, I’m afraid your first statement doesn’t hold.


AGI is pretty clearly defined here: https://openai.com/charter/

> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

So, can you (and everyone you know) be replaced at work by a subscription yet? If not, it's not AGI I guess.


He may be out front because he's the best PR face for this, but make no mistake, there is massive collusion among all the players to inflate this bubble. Across MS, Oracle, AWS, OpenAI, Anthropic, NVIDIA and more, all I see is a pair of conjoined snakes eating their own tails.


it's not just the cost of the vaccine roll-out though; you need to test on your target demographic, and since these are healthy people the bar is very high. If the demographic (like males over 45) shows very little involvement in the infection vectors, then testing might fail on cost-effectiveness, not on the delivery of the vaccine.


Indeed. Generally for HPV, there were modeling studies showing this was probably a good idea before trials started.

