
I’ve come to the same conclusion in regards to my own learning, even after 15 years doing this.

When I want a quick hint for something I understand the gist of, but don’t know the specifics, I really like AI. It shortens the trip to google, more or less.

When I want a cursory explanation of some low level concept I want to understand better, I find it helpful to get pushed in various directions by the AI. Again, this is mostly replacing google, though it’s slightly better.

AI is a great rubber duck at times too. I like being able to bounce ideas around and see code samples in a sort of evolving discussion. Yet AI starts to show its weaknesses here, even as context windows and model quality have evidently ballooned. This is where the real value would exist for me, but progress seems slowest.

When I get an AI to straight up generate code for me I can’t help but be afraid of it. If I knew less I think I’d mostly be excited that working code is materializing out of the ether, but my experience so far has been that this code is not what it appears to be.

The author’s description of ‘dissonant’ code is very apt. This code never quite fits its purpose or context. It’s always slightly off the mark. Some of it is totally wrong or comes with crazy bugs, missed edge cases, etc.

Sure, you can fix this, but it feels a bit too much like using the wrong tool for the job and then correcting it after the fact. Worse still, in the context of learning, you're getting false positive signals all the time that X or Y works (the code ran!!), when in reality it's terrible practice, or it's not actually working for the right reasons, or it's not doing what you think it does.

The silver lining of LLMs and education (for me) is that they demonstrated something to me about how I learn and what I need to do to learn better. Ironically, this does not rely on LLMs at all, but almost the opposite.



> I really like AI. It shortens the trip to google, more or less.

Is this "AI is good" or "Google is shit" or "Web is shit and Google reflects that"?

This is kind of an important distinction. Perhaps I'm viewing the past through rose-tinted glasses, but it feels like searching for code stuff was way better back about 2005. If you searched and got a hit, it was something decent as someone took the time to put it on the web. If you didn't get a hit, you either hit something predating the web (hunting for VB6 stuff, for example) or were in very deep (Use the Source, Luke).

Hmmm, that almost sounds like we're trying to use AI to unwind an Eternal September brought on by StackOverflow and Github. I might buy that.

The big problem I have is that AI appears to be polluting any remaining signal faster than AI is sorting through the garbage. This is already happening in places like "food recipes" where you can't trust textual web results anymore--you need a secondary channel (either a prep video or a primary source cookbook, for example) to authenticate that the recipe is factually correct.

My biggest fear is that this has already happened in programming, and we just haven't noticed yet.


These are good questions and thoughts. I don’t think you’re wrong about searching being better before, either.

I started my career around 2005 and honestly, I did great finding what I needed without an LLM. I recall being absolutely amazed with how broad and deep the reservoir of knowledge was, and as my skills in searching improved my results generally got better.

And then they didn’t. It has gotten worse over time now, in part due to Google itself and in part because of how the Internet tailors content to Google. Getting a Quora link in response to a technical question is a crazy farce. Stuff like that didn’t really happen the same way 20 years ago, much like the Pinterest takeover of Google images.

The signal pollution also seems like a huge problem to me. I’ve doubled down on creating real content in the form of writing and software in an attempt to kind of rekindle and revive the craft of, I don’t know, ‘caring enough to do things yourself’ in my life. I miss it the more I read generated text or see generated images and video. I like LLMs and other generative AI, I see the value, but their rapid takeover is seriously disheartening. It just makes me want to make real things more.


>Is this "AI is good" or "Google is shit" or "Web is shit and Google reflects that"?

It's an "I can ask it questions based on the results, ask it to tweak things, and ask it dumb what-if questions in a way that I can't on the web" kind of good.

Honestly, one of the major pluses of LLMs is that they're immediate and interactive.

>Hmmm, that almost sounds like we're trying to use AI to unwind an Eternal September brought on by StackOverflow and Github. I might buy that.

I mean, what's the alternative? You're not going back to web of 90s (and despite nostalgia, web of 90s was fairly bad).

>you can't trust textual web results anymore--you need a secondary channel

I wonder how good LLMs would be at verifying other LLMs that are trained differently - e.g. sometimes I switch endpoints in LibreChat partway through a problem, after I'm satisfied with an answer, and ask a different LLM to verify everything. It's pretty good at catching tidbits.
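
For what it's worth, that cross-check can also be scripted outside the chat UI. A minimal sketch, assuming two OpenAI-compatible endpoints behind the openai Python client - the env vars, model names, and prompt wording here are placeholders, not LibreChat's actual configuration:

  import os
  from openai import OpenAI  # openai>=1.0 client; assumes both endpoints speak its API

  # Two independently trained models behind OpenAI-compatible endpoints (placeholder env vars).
  solver = OpenAI(base_url=os.environ["SOLVER_BASE_URL"], api_key=os.environ["SOLVER_API_KEY"])
  checker = OpenAI(base_url=os.environ["CHECKER_BASE_URL"], api_key=os.environ["CHECKER_API_KEY"])

  question = "Why does this query do a full table scan despite the index on user_id?"

  # First model proposes an answer.
  answer = solver.chat.completions.create(
      model="solver-model",
      messages=[{"role": "user", "content": question}],
  ).choices[0].message.content

  # Second model only verifies; it never saw the first conversation's context.
  review = checker.chat.completions.create(
      model="checker-model",
      messages=[{
          "role": "user",
          "content": f"Question:\n{question}\n\nProposed answer:\n{answer}\n\n"
                     "List any factual errors, missed edge cases, or unsupported claims.",
      }],
  ).choices[0].message.content

  print(review)

The point is only that the checker gets the question and the candidate answer cold, which is roughly what switching endpoints mid-conversation achieves.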


I'm of the same mind, AI is a useful rubber duck. A conversation with a brilliant idiot: Can't take what it says at face value but perfect for getting the gears turning.

But after a year of autocomplete/code generation, I chucked that out the window. I switched it to a command (ctrl-;) and hardly ever use it.

Typing is the smallest part of coding but it produces a zen-like state that matures my mental model of what's being built. Skipping it is like weightlifting without stretching.


I feel like using copilot chat in my editor has really been a boost for me, in the way you describe. But it's also super janky. Like lots of times I'll be having a conversation about my code, and then I say, "what if I were to make this change here" and it comes back, "Sorry, I don't have access to your files. Can you paste the code in?" And I'm like, WE WERE JUST TALKING ABOUT IT. It's like the file fell out of context, but it didn't tell me. Sometimes it's hard to get the current file back in context.

Or I'll go to great pains to be explicit about what I want (no code snippets unless I ask for them specifically, responses hundreds of lines long with dozens of steps it wants me to take), and for a little while it does that, and then boom, back to barfing out code snippets.

People talk about this tool as some kind of miracle worker, but while it is helpful, it's also a source of major frustration for me, because it can't do these most basic things. When I hear people talking about how amazing LLMs are, I'm extremely confused. What am I doing wrong? I really would like to know.



