LatencyKills's comments | Hacker News

I mentor CS students at two local universities. The best students are using gen AI to enhance their learning and understanding (i.e. they use it as a tool instead of a crutch). The worst students are using it in an attempt to “level the playing field” and are failing miserably.

It is easy to determine if someone solved a problem using AI because they can’t explain or recreate “their” solution. Detecting cheating in essays is still far more difficult.


You could probably detect essay cheating (AI written) in the exact same way by questioning the student about it - why did they organize the essay in this way, what was their motivation for focusing on X, or expressing something as Y... Of course anyone can concoct an explanation on the fly, but it should be obvious if they are speaking from the experience of having authored it or just coming up with a post-hoc rationalization.

If they had AI write the essay, yet can still explain it as well as if they had written it themselves (ditto for code), then it would tend to indicate that they at least read it and thought about it, which I think should be more acceptable in a learning environment.


> You could probably detect essay cheating (AI written) in the exact same way by questioning the student about it - why did they organize the essay in this way, what was their motivation for focusing on X, or expressing something as Y... Of course anyone can concoct an explanation on the fly, but it should be obvious if they are speaking from the experience of having authored it or just coming up with a post-hoc rationalization.

I wouldn't claim that I am bad at writing (at least in my native language, which is not English); at least, that's what many colleagues tell me. But I do insist that when writing I don't think that way. If I were to answer these questions, my honest answers would be:

"why did they organize the essay in this way": I just wrote down the thoughts that came to my mind, and then gave them some structure that seemed right.

"what was their motivation for focusing on X": Either "because it felt right" or "I had to write at most x pages, and indeed it would have made sense to focus on more topics, so I focused on this arbitrary thing"

So indeed I would claim that a lot of sensible reasons why things are this way actually are post-hoc rationalizations. :-)


> So indeed I would claim that a lot of sensible reasons why things are this way actually are post-hoc rationalizations.

Perhaps, but I still think that responses to questioning about an essay the student actually wrote will come a lot more quickly and naturally (even if they reveal that not much thought went into it) than if the student realizes they are being called out for cheating and has to make something up on the spot about a text they never even read carefully.


As a longtime Go developer, I found the bug and its fix interesting.

If you found something wrong in the post, I'd really appreciate hearing about it.


While I agree that it's not important whether or not someone uses AI to improve a blog post or create code examples, this blog post reads like the output of the prompt "Write an interesting blog post about a goroutine leak". I don't have the expertise to verify whether what is written is actually correct or makes sense, but based on the other comments there seems to be some confusion about whether it is real content or also AI-generated output.


I do have expertise in Go. The bug was real, and the fix makes sense (though I couldn't verify it, of course).
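
For anyone who hasn't read the post: I obviously can't reproduce the author's actual code here, but the usual shape of this kind of leak is a worker goroutine sending on an unbuffered channel after the caller has already timed out and stopped listening. A minimal sketch with made-up names and timings, not the post's code:

    // Hypothetical illustration of a classic goroutine leak and its fix.
    package main

    import (
        "context"
        "fmt"
        "runtime"
        "time"
    )

    // leaky starts a worker that sends its result on an unbuffered channel.
    // If the caller gives up (ctx expires) before the worker finishes, nobody
    // ever receives, and the worker blocks on the send forever: a leaked goroutine.
    func leaky(ctx context.Context) (string, error) {
        ch := make(chan string) // unbuffered: the send blocks until someone receives
        go func() {
            time.Sleep(50 * time.Millisecond) // pretend this is slow work
            ch <- "result"                    // blocks forever once the caller has returned
        }()
        select {
        case res := <-ch:
            return res, nil
        case <-ctx.Done():
            return "", ctx.Err()
        }
    }

    // fixed uses a buffered channel, so the worker's send always completes and
    // the goroutine can exit even when the caller times out first.
    func fixed(ctx context.Context) (string, error) {
        ch := make(chan string, 1) // buffered: the send never blocks
        go func() {
            time.Sleep(50 * time.Millisecond)
            ch <- "result"
        }()
        select {
        case res := <-ch:
            return res, nil
        case <-ctx.Done():
            return "", ctx.Err()
        }
    }

    func main() {
        run := func(f func(context.Context) (string, error), label string) {
            for i := 0; i < 100; i++ {
                ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
                f(ctx) // every call times out before the 50ms worker finishes
                cancel()
            }
            time.Sleep(100 * time.Millisecond) // let any finished workers exit
            fmt.Println(label, runtime.NumGoroutine())
        }
        run(fixed, "goroutines after fixed calls:") // stays around 1
        run(leaky, "goroutines after leaky calls:") // ~100 workers stuck on the send
    }

Another common fix is having the worker select on ctx.Done() around the send; the buffered-channel version is just the smallest diff.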

I just hope HN gets over the "but it might be AI!!" crap sooner rather than later and focuses on the actual content because these types of posts are never going away.


Personally, I just don't like the way this is written. As I said, though, I am not an expert, so I may be outside the target group. I think the original "this is AI" comment is an automatic response that really means "this is low-effort", and in that sense I still think it is valid criticism.


Fair enough - I appreciate your thoughts. I'll keep the "this is low-effort" == "this is AI" equivalence in mind moving forward.


I've done a similar fix, even a slightly more interesting one, but I wouldn't consider it worthy of a blog post, let alone of submitting it to HN.


Even the part where they deploy new code to production without restarting processes?


The bug is somewhat interesting.

The entire "Gradual recovery" part of the post makes absolutely no sense, and is presumably an LLM fabrication. That's just... not how anything works. And deploying three different weird little mitigations flies in the face of the earlier "We couldn't just restart production. Too many active users."


I was an engineer on the Visual Studio team when we first introduced syntax highlighting and code completion. The rollout triggered quite a bit of internal controversy. A sizable group of developers strongly opposed these features—syntax coloring, parameter completion, signature validation—arguing that “real programmers write their code unaided.”

I can’t help but wonder how those same engineers are adapting to the current wave of AI-powered development tools like Claude Code and Cursor.


I will say this: I don't see how these aids can hurt someone's ability to write code.

In that sense, I think there is a difference between those tools and AI. AI can write code fast, but it can definitely also hurt your ability to maintain a codebase if you aren't keeping it in check.

I don't think there's any level at which syntax highlighting could make a codebase worse.


When a coworker pastes a screenshot of their syntax-highlighted code with a black background into chat, it is functionally 50% blank, as my 55-year-old eyes are not good at low contrast. Regular blues, purples, and reds are often unreadable against a black background.

Fortunately, syntax highlighting isn't part of the code; it's just how the code is displayed. There comes a point where pasting images of code into chats, email, and documentation just to preserve syntax highlighting is not only a waste of space, it becomes an ADA issue. Use whatever fonts and themes you like in your editor, but keep those to yourself.


I'm guessing they used the same argument that was made against calculators, printing, etc.: by lessening the burden on your brain, you weaken it.


One guy I knew, when he made a typo, deleted the whole word with backspace and retyped it from the beginning. Any other way of correcting the error he called cheating.

That was a long time ago, and he was still young. But not so long ago I asked a keyboard manufacturer (Ducky?!) to add more macro features for productivity, and I got the answer that it would be cheating to play back macros faster than normal typing speed. Recording a macro on the keyboard and playing it back faster than it was recorded was impossible by design, because they didn't want to support cheaters.

Oh, those coders... I wonder how much code on GitHub is invalid because of cheating...


I'm pretty sure Turbo Pascal/C/etc., and perhaps vim, had syntax highlighting (though perhaps not the other bits) before the first VS release; I'm surprised they hadn't encountered it already.


It wasn't that it was "new" (you're correct, it wasn't). The complaint was that Microsoft engineers were going to use it "as a crutch".

Also, VS (codename Boston) was used as the de facto internal development IDE for a few years before we released it to the public. There were also arguments about shipping those types of features publicly.


I'm in my 50s. About two months into retirement I fell into the deepest depression of my life because I couldn't shake the "who am I without my job?" question. It took almost a year (and therapy) to accept that I still add value without working.


I'm a person who wants to learn anything and everything. So guess what I'd do if I retired?

Work feels pretty stifling to me.


> I'm a person that wants to learn anything and everything.

That is exactly what I do now. Every question I've ever had, I now have the time to devote to answering. I take classes, I volunteer, I mentor Comp. Sci. students. But, more than anything, I still write code. I spent the last few months creating an LLM from scratch, which was incredibly fun.

That said, I have a friend who will probably work until he dies. His only real interest in life is his job. I'm not suggesting that is a bad thing; it's more to the point that "retirement" isn't a panacea for everyone.


I do see this as a bad thing and an abdication of responsibility for one's own life. As was recently put to me after the sudden death of a friend's father (who lived an unusually rich life): everyone dies, but not everyone truly lives.


Ah... we found the person who thinks they can pass judgement on how people choose to live their lives. I didn't say that my friend doesn't love his job (he does) - I said that he'll probably die before retiring.

Stephen Hawking, Einstein, Marie Curie, and Linus Pauling never retired. Did they not "truly live"?


Agree... but that is exactly what MVPs are. Humans have been shipping MVPs while calling them production-ready for decades.


I really wonder what this means for software moving forward. In the last few months I've used Claude Code to build personalized versions of Superwhisper (voice-to-text), CleanShot X (screenshot and image markup), and TextSniper (image to text). The only cost was some time and my $20/month subscription.


> I really wonder what this means for software moving forward.

It means that it is going to be as easy to create software as it is to create a post on TikTok, and making your software commercially successful will be basically the same task (with the same uncontrollable dynamics) as whether or not your TikTok post goes viral.


Is that new though? Software has been hype and marketing driven forever.


So nothing changed


I've been using git worktrees with Claude and it's pretty awesome:

https://www.youtube.com/watch?v=up91rbPEdVc

Pair worktrees with the ralph-wiggum plugin and I can have Claude work for hours without needing any input:

https://looking4offswitch.github.io/blog/2026/01/04/ralph-wi...


Worktrees took way too much setup and hand-holding for me, but https://conductor.build made it easy!


I delayed adopting conductor because I had my own worktree + pr wrappers around cc but I tried it over the holidays and wow. The combination of claude + codex + conductor + cc on the web and claude in github can be so insanely productive.

I spend most of my time updating the memory files and reviewing code and just letting a ton of tasks run in parallel


software is all about wrappers, isn't it? :)

conductor -> multiple claude codes/codexes -> multiple agents -> multiple tools/skills/sub-agents -> LLMs


Sadly only allows sign up with Github.


This is fantastic. I’m currently building a combustion engine simulator doing exactly what you did. In fact, I found a number of research papers, had Claude implement the included algorithms, and then incorporated them into the project.

What I have now is similar to https://youtu.be/nXrEX6j-Mws?si=XdPA48jymWcapQ-8 but I haven’t implemented a cohesive UI yet.


Right on, that's awesome! I think I'm doing more of what you did vs. the other way around. Looks like you're pretty established. How long did it take to build your YouTube channel to what it is? What's that process been like?


My brother left school after ninth grade and struggles financially — he can’t afford basic health insurance, yet he’ll spend $100 on lottery tickets whenever possible.

I understand the utility he’s purchasing: a temporary sense of hope. What concerns me is the implicit misunderstanding of probability. The difference in expected value between purchasing one ticket and fifty is statistically negligible. This isn’t about elitism — it’s simply about recognizing orders of magnitude and the arithmetic reality of vanishingly small odds.


The difference in expected value between purchasing one ticket and fifty is 50x! Buying 50 tickets is 50x as bad as buying 1 ticket.
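
To put rough numbers on it (made-up but plausible figures): if a ticket costs $2 and its expected payout is $1, then the expected loss on one ticket is $1 and the expected loss on fifty tickets is 50 x $1 = $50. The chance of hitting a roughly 1-in-300,000,000 jackpot does also rise fifty-fold, to about 1 in 6,000,000, which is fifty times larger and still effectively zero.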


Was anything he claimed in the article incorrect? Personally, I enjoy these types of historical stories.


I'm not criticizing the article at all.

In fact, I am generally ignorant on the topic of who invented the transistor, nor do I particularly care about who invented what.

The quest for academic fame is something I've always utterly failed to understand.

And if the author were anyone but JS, I'd not have said anything.

What honks me off about this guy, though, is seeing someone who did in fact do early, impactful work on recurrent neural networks believe that:

a) that automatically gives him some sort of special status wrt the rest of humanity

b) because he didn't get the recognition he believes he is due, he has completely stopped doing anything useful in the field, turning instead into an absolute crank that everyone in the AI field makes fun of, on a holy mission to rewrite history and assign credit wherever he believes there was an injustice.

c) every time I see someone with an exceptionally well-working brain waste it, out of ego or sheer stubbornness, on shite like this instead of using it to do more interesting work, it makes me very sad.

Schmidhuber is a textbook example of this, and the other perfect example is Chomsky, a very smart man who, because of his oversized ego and profound stubbornness, basically ended up wasting his entire life's energy on a linguistic dead-end AND a political-philosophy dead-end.

I have a really hard time understanding how the brains of these folks operate: so bright on certain axes and so utterly dumb on others, especially in the total lack of self-awareness.

