
Dramatically improving my learning. The ability to ask questions & ask for examples has helped me learn far faster than books, videos, blogs, etc.

Yes, I know the information isn't always 100% accurate. Neither are the books, blogs, courses, etc., I pay for. They also don't let me ask "what about this scenario..." This gives me such a strong base to build from, which I can then verify against other sources.



> Yes, I know the information isn't always 100% accurate. Neither are the books, blogs, courses, etc., I pay for

Those are two completely different things. ChatGPT has no obligation to be correct, and isn't even trying! It is a chatbot. Its prime directive is only to sound believable. Those books, blogs, courses, etc are at least trying to be correct and provide you with reliable information on a topic.

Try using ChatGPT on a topic you know really well.


>Those books, blogs, courses, etc are at least trying to be correct and provide you with reliable information on a topic.

On topics I know really well, ChatGPT is wrong more often than courses, blogs, books, etc. However, I don't think it's prudent to put them in two different categories, where human books are reliable and LLM answers aren't. Both make many mistakes, with a difference in degree that currently favors human books, but neither really belongs in a different category.

Newly published books (let alone blogs) are frequently less reliable than even Wikipedia. They are written by a handful of authors at most, get a limited period of review, and the first edition is then unleashed upon unsuspecting students until the errata list grows long enough that a 4th edition is needed.

The prime directive for LLMs with RLHF is a combination of giving the answer that best completes the prompt, and giving the answer people want to hear. The prime directive for authors is a combination of selling a lot of books, not expending so much time and energy writing the book that it won't be profitable, and not making so many mistakes that it damages their reputation.

Neither books, blogs, nor ChatGPT has any obligation to be correct. Either way, the content being reinforced (whether through money or through training) is not The Truth straight from the Platonic Realm, but whatever readers consider themselves satisfied with.
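To make that concrete, here's a toy sketch (plain Python, invented numbers, nothing resembling any real training pipeline) of what a preference-only objective rewards:

    # Toy illustration with made-up scores: an RLHF-style reward model is fit
    # to human preference ratings; factual accuracy never enters the objective.
    candidates = [
        {"answer": "Confident, fluent, subtly wrong.",   "accuracy": 0.4, "rater_score": 0.9},
        {"answer": "Hedged, accurate, less satisfying.", "accuracy": 0.9, "rater_score": 0.6},
    ]

    # Training reinforces whatever maximizes rater_score, so the believable
    # answer wins out even though the other one is more accurate.
    reinforced = max(candidates, key=lambda c: c["rater_score"])
    print(reinforced["answer"])  # -> "Confident, fluent, subtly wrong."

Swap "rater_score" for "sales" and the same toy model describes the book market.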


> and not making so many mistakes that it damages their reputation.

And that's the difference! Human authors are incentivized to provide reliable information. Reliability is not ChatGPT's concern at all; believability is. It can't even cite a source!


I'm using chatgpt as a coach for my programming learning path and honestly, it's amazing.

My bootcamp Discord feels useless in comparison. ChatGPT is always available, and while it sometimes gives believable but wrong answers, in my experience if you have basic knowledge of the topic they're easy to spot.

It saves me sooooo much time. The time I spend correcting or figuring out ChatGPT's mistakes is minuscule compared with the time I'd spend scanning horrible documentation or testing stuff from Stack Overflow.

And if you feel it doesn't provide useful answers, then just use your search engine.


Don't want to sound rude, but GP was suggesting you try it on a topic you already know really well. If you're a student learning a new topic, by definition you can't know it really well yet.


I think the correct way to use it as a student is as a companion to the textbook. If any exercise or passage is too convoluted, ChatGPT can explain it in a more understandable way.

And step-by-step solving of exercises is great for students too; they can verify against the textbook's answer key.

The key is having a source of truth on hand.


What type of queries are you running?


> Try using ChatGPT on a topic you know really well.

And read what it says carefully. On many occasions I've seen it say something in a subject I know well, and because it started out correct I mentally autocompleted the rest and assumed it had the whole thing right.

E.g. this response to "What is Gell-Mann amnesia?"; contrast it with the actual meaning. [1]

> Gell-Mann amnesia is a term coined by science writer Michael Crichton to describe the phenomenon where people tend to forget information that contradicts their preconceptions or beliefs. The term is named after physicist Murray Gell-Mann, who won the 1969 Nobel Prize in Physics for his work on the theory of elementary particles.

> According to Crichton, Gell-Mann amnesia occurs when people encounter information that contradicts their beliefs, but rather than updating their beliefs to reflect the new information, they simply forget the information and continue to hold onto their preconceptions. This type of cognitive bias can be a significant barrier to learning and can lead to flawed decision-making.

Learning from ChatGPT as if it's similar to "books, blogs, courses, etc" really doesn't seem like a good idea.

[1] https://en.m.wikipedia.org/wiki/Michael_Crichton#GellMannAmn...


I did, and it was wrong in subtle ways for sure, but honestly it was right enough that I was really impressed.

I think if you're aware that it can be confidently wrong, then it can help you explore areas you don't know.

To think of it differently, it felt like learning via comments on Reddit, where a large portion of posts are shockingly, confidently wrong. But with ChatGPT you can inspect those "comments", ask from various angles, etc.

I have this feeling that ChatGPT could, even in its current form, be useful for learning the larger complex picture. It's definitely very bad at reciting facts, but maybe not so bad at conveying ideas.

Either way, I still want it to improve. But I still think it's shockingly impressive, and I'll happily pay if they can make "small" improvements to how it understands information.


> Yes, I know the information isn't always 100% accurate.

It's not even 25% accurate once you start asking anything other than "what is a lion" and "what is 2+2".



