
“However, ever since the chatbot has come into action, it has been negatively impacting students' learning.”

This sentence comes out of nowhere; did ChatGPT write this article? How could someone think something so new is already negatively impacting students across the board?



Dramatically improving my learning. The ability to ask questions & ask for examples has helped me learn far faster than books, videos, blogs, etc.

Yes, I know the information isn't always 100% accurate. Neither are the books, blogs, courses, etc., I pay for. They also don't let me ask "what about this scenario..." This gives me such a strong base to build from which I can then use other sources to verify.


> Yes, I know the information isn't always 100% accurate. Neither are the books, blogs, courses, etc., I pay for

Those are two completely different things. ChatGPT has no obligation to be correct, and isn't even trying! It is a chatbot. Its prime directive is only to sound believable. Those books, blogs, courses, etc are at least trying to be correct and provide you with reliable information on a topic.

Try using ChatGPT on a topic you know really well.


>Those books, blogs, courses, etc are at least trying to be correct and provide you with reliable information on a topic.

On topics I know really well, ChatGPT is wrong more often than courses, blogs, books, etc. However, I don't think it's prudent to put them in two different categories, where human books are reliable and LLM answers aren't. Both make many mistakes, with a difference in degree that currently favors human books, but without either really being in a different category.

Newly published books (let alone blogs) are frequently less reliable than even Wikipedia. They are written by a handful of authors at most, get a limited period of review, and the first edition is then unleashed upon unsuspecting students until the errata list grows long enough that a 4th edition is needed.

The prime directive for LLMs with RLHF is a combination of giving the answer that best completes the prompt, and giving the answer people want to hear. The prime directive for authors is a combination of selling a lot of books, not expending so much time and energy writing the book that it won't be profitable, and not making so many mistakes that it damages their reputation.

Neither books, blogs, nor ChatGPT have any obligation to be correct. Either way, the content being reinforced (whether through money, or through training) is not The Truth straight from the Platonic Realm, but whatever the readers consider themselves satisfied with.


> and not making so many mistakes that it damages their reputation.

And that's the difference! Human authors are incentivized to return reliable information. Reliability is not ChatGPT's concern at all, believability is. It can't even cite a source!


I'm using chatgpt as a coach for my programming learning path and honestly, it's amazing.

My bootcamp Discord feels useless in comparison. ChatGPT is always available, and while it's true that it sometimes gives believable but wrong answers, in my experience they're easy to spot if you have basic knowledge of the topic.

It saves me sooooo much time it's amazing. The time I spend correcting or figuring out ChatGPT mistakes is minuscule in comparison to the time I'd spend scanning horrible documentation or testing stuff from Stack Overflow.

And if you feel it doesn't provide useful answers, then just use your search engine.


Don't want to sound rude but GP was asking about a topic you know really well. If you are a student in a new topic, by definition you cannot know the topic really well.


I think the correct way to use it as a student is as a companion to the textbook. If any exercise or text is too convoluted, ChatGPT can explain it in a more understandable way.

And step by step solving of exercises is great for students too, they can verify with the textbook's answer key.

The key is having a source of truth on hand.


What type of queries are you running


> Try using ChatGPT on a topic you know really well.

And read what it says carefully. On many occasions I've seen it say something in a subject I know well, and it started out correct, so I mentally autocompleted and assumed it had the whole thing right.

E.g. this for "What is Gell-Mann amnesia?" (contrast with the actual meaning [1]):

> Gell-Mann amnesia is a term coined by science writer Michael Crichton to describe the phenomenon where people tend to forget information that contradicts their preconceptions or beliefs. The term is named after physicist Murray Gell-Mann, who won the 1969 Nobel Prize in Physics for his work on the theory of elementary particles.

> According to Crichton, Gell-Mann amnesia occurs when people encounter information that contradicts their beliefs, but rather than updating their beliefs to reflect the new information, they simply forget the information and continue to hold onto their preconceptions. This type of cognitive bias can be a significant barrier to learning and can lead to flawed decision-making.

Learning from ChatGPT as if it's similar to "books, blogs, courses, etc" really doesn't seem like a good idea.

[1] https://en.m.wikipedia.org/wiki/Michael_Crichton#GellMannAmn...


I did, and it was wrong in subtle ways for sure, but honestly, it was right enough that I was really impressed.

I think that if you're aware it can be confidently wrong, then it can help you explore areas you don't know.

To think of it differently, it felt like learning via comments on Reddit. Which is to say, a large portion of Reddit posts are shockingly, confidently wrong. But with ChatGPT you can inspect those "comments", ask from various angles, etc.

I have this feeling that ChatGPT could, even in its current form, be useful for learning the larger complex picture. Very bad at reciting facts, definitely, but for ideas maybe not so bad.

Either way, I still want it to improve. But I still think it's shockingly impressive. I will happily pay if they can make "small" improvements to how it understands information.


> Yes, I know the information isn't always 100% accurate.

It's not even 25% accurate when you start asking anything other than "what is a lion" and "what is 2+2"


Cumulative damage may be very small now, at the beginning of this age, but educators I talk to certainly are having a reckoning about it. Whether this will ultimately be bad for education is an open question, but with the way many classes and homework are designed now, students using ChatGPT to answer questions certainly do learn less.


I 'borrowed' the (ten year old) daughter of a friend and sat her down in front of ChatGPT to see what would happen, proposing she ask for help with her homework.

And yes, the first thing she did was see if it could literally answer it for her. It did. However, once that was done, she proceeded to massively expand the scope of the original homework... I've rarely seen anyone her age as enthused with learning; they're usually already tired of school.

Yes, I was standing by to fact-check it, but it didn't actually make any mistakes. Which is a little unfortunate, I guess... I'd been planning to stress the importance of cross-checking. Oh well. It generally seems to get everything right at the level of questioning children are usually capable of.

(Half an hour later she was asking it why water expands when it freezes. Fifteen minutes after that we were reading a wikipedia article on molecular physics... I dare say this was a productive evening.)

---

I think ChatGPT is an incredible resource for learning, and trying to keep it out of schools is throwing the baby out with the bathwater. It does better at teaching than the teachers often manage.


Productive sessions like this are great, but this is one session with one kid with a new, intriguing technology. It is hardly enough experience to roll it out to schools, and it is certainly a huge jump to suggest it is better at teaching than a human teacher would be.


Education today seems broken; perhaps this will be a way to fix a broken system. After all, they no longer teach abacus use or how to use log tables in schools.


Assuming we're limiting our scope to the modern American public school system this won't help, at least where I'm located. Teachers are required to have Masters degrees and are paid within the range for a part time cashier at Lowe's[1]. We aren't going to fix this system until that changes; when we're paying that badly we're going to get the worst of what's left.

[1] Sourced the salary for teachers near me from Google's linked data. Sourced the Lowe's part time cashier range from an actual job listing in my area.

Edit: fixed footnote, sentence fragment.


Yeah, homework was designed as a way to make monitoring and verifying self-study scalable. It's not a bad solution, but if the availability of AI makes it hard to verify that students did their homework themselves instead of handing it to an AI, then you have to look for alternatives.

Thankfully, in such a world, AI is available to the "teacher" side as well and can serve as a way to both check that the student is doing their job and also to answer their questions, like a personalized teacher of sorts.


The "if I was a teacher" thoughts on how to handle a ChatGPT that can do short answer questions for English, History, and Social Studies classes...

I'd have a set of N questions that shouldn't take long to answer but demonstrate that the material has been read. Yes, ChatGPT can answer them.

The second part would be in class: after the homework had been handed in, pull a name and a number out of a hat and have that student answer that question again, plus a follow-up question on the material.

The oral part means that if the student had someone else do the homework and didn't do the reading, they would show a lack of understanding of the question or its follow-up.

(note: my sophomore English teacher did this... and I messed up answering questions about the Miller's tale which I hadn't read)


This is a brilliant tactic as avoiding being embarrassed in front of the class is a huge motivator. Plus, the value of teaching others is a huge benefit to the student for the subject matter and the practice of communication skills when explaining to the class.


ChatGPT's knowledge doesn't run that deep. If ChatGPT can write a credible essay about a given subject it means your subject was rather generic to begin with.

I was playing a bit with ChatGPT and wondered how much actual knowledge could be stored in that model. So I asked ChatGPT whether it knew the song by Franz Schubert called "Am Feierabend" and, if so, whether it could tell me the subject/meaning of the song. "Certainly I know this song", responded ChatGPT, and then gave me -- with total confidence -- a completely wrong answer about the meaning of the song. In fact I was a bit baffled by the authority with which it spouted this nonsense answer. A truly intelligent system would be aware of the limits of its knowledge, right?

So I think that with a minimum of creativity teachers can come up with questions that can easily stump ChatGPT.


> If ChatGPT can write a credible essay about a given subject it means your subject was rather generic to begin with.

ChatGPT created a pretty good essay for my daughter's homework assignment. She had to write a fictional autobiography from the perspective of a 16th-century noblewoman. Is that too generic? (Side note: ChatGPT did it in Hungarian.)

> A truly intelligent system would be aware about the limits its knowledge, right?

That's a question of definitions. If you ask me, ChatGPT is a truly intelligent system that is ridiculously unaware of its own limitations. It doesn't look like a contradiction per se, I've met very smart, highly functioning megalomaniacs.


> Is that too generic?

No, but as you stated, it's fictional... I can also tell you a lot of fictional science facts, or fictional president names, or anything fictional for that matter.

The goal is to use your _own_ imagination; of course ChatGPT can align sentences in a semi-cohesive manner, that's its whole purpose.


ChatGPT has no concept of trying to be correct with regard to world knowledge. This doesn't just apply to obscure things. For instance, when I ask it about mainstream books and TV shows, it frequently misattributes words or actions to the wrong character. But not only that, it will then proceed to explain why the character said so, and how it reflects on the character.

It's not about awareness or limits of knowledge. From the point of view of a language model, it doesn't matter whether it was Todd or Walter White who killed Lydia, or whether it was Kinbote or Shade who invented a phrase. It only tries to generate a response to your input, such that it is a plausible continuation.


It is a five alarm fire at every university I have a line into. As in, “it’s time to radically rethink your entire course” kind of fire.

I don’t think it’s bad, per se, but ChatGPT has effectively made it pointless to do certain kinds of assignments now.

A lot of professors have been teaching the same way for many years. It’s a reckoning.


I attended a prestigious university. You could pretty much do what you liked (including nothing) for the three years and the degree was awarded for 30 hours of exams in a single week. In the exam room you couldn't copy work or pay someone else to do it or consult a chatbot. You knew the subject or you didn't. So you don't have to change the teaching. You could change the examining.


Yep, back to exams, which there was never anything wrong with in the first place, IMO.


Except for the people who answer badly due to stress and discover they've wasted 3 years off the back of a few bad hours.


This is why we have retakes. If your coursework results in lower grades than fellow students who cheated (with or without AI) there's no such recourse.

Even assuming all the students do it in good faith, there are factors other than knowing the material that can affect their performance on coursework. For example one person may have access to a well equipped, ergonomic, quiet space and plenty of undisturbed time in which to complete the assignment, while another may not.

The major benefit of exams is that everybody is taking them in, as far as possible, consistent and controlled conditions. It's not ideal, but I think it's better than the alternatives.


Some unis only allow retakes if you totally fail. They don't let you optionally choose to retake to try to do better, so no, that's not a solution that works for everyone.


The ultimate reckoning may be that the model of teaching people how to think is now deprecated, as the ability to think and reason can now be outsourced in a way that is much more direct and powerful compared to search engines and the internet.


I think ChatGPT could be harmful as-is, but the tech can definitely be made vastly beneficial with some changes.

Imagine using it interactively as a study partner or tutor rather than as a homework cheat.


There is an article today in the New York Times about ChatGPT and education, indicating that teachers view ChatGPT as a threat. Treat it like a search engine. It's important to not always use it to replace basic reasoning skills, but ultimately it needs to be treated like a tool that is incorporated into the curriculum. Of course, educators won't know how to do this for a number of years. But maybe ChatGPT can explain to them how to do it.


> indicating that teachers view ChatGPT as a threat.

Not what they meant, but it may be somewhat of an existential threat to the profession of teaching; the thing is a killer tutor in subjects it is confident in. It is a perfect fit for language especially, since it can carry a full conversation with you and correct every single one of your mistakes.

Obviously it has major issues with inventing things out of whole cloth and accuracy problems, but human teachers also aren't 100% reliable or knowledgeable about everything, so it's not like it has to be perfect.


I believe this is going to transform the way people learn. Instead of learning in a structured way, they are going to learn exactly what they need.

You don't need to know about molecular physics until you face a task related to it. And ChatGPT can direct you to learn more about it.

I am in my bachelor's right now and I see how I can skip a lot of low-level computer science knowledge until I really need it. The moment I can't go further without learning how to effectively optimize ML algorithms using C++, I will hear from ChatGPT that I need to go that way. Otherwise: skip. It is not good enough to provide a whole curriculum, for sure. But it is sufficient to get directions to which parts I need to learn.

Education is going to be transformed into "tree" | "unstructured" | "NoSQL" learning instead of a clumsy set of blocks everybody needs to go through.

This is my opinion


Won't this just make everyone much more reliant on these technologies, since they can't do any task without asking the "omniscient chatbot"?


> You don't need to know about molecular physics until you face a task related to it. And ChatGPT can direct you to learn more about it

I don't think ChatGPT will really help you learn about actual 'tasks' related to molecules, except by generating short text paragraphs about them.


Maybe in the future teachers will be there to help students use ChatGPT. The curriculum will be only <suggestions> which you follow to explore the subject on your own. At the end of the lecture you share the most interesting finding you have dug up and which experiments you can perform using that knowledge, and collaborate with others who had the same <search graph> (read: interests) in this field as you.


If the majority of young people feel the same as you, the future will probably be very similar to the movie 'Idiocracy'.


Humans tend to not like change, particularly in institutions that are themselves resistant to change. Education today is very much like the education in the early 1900s - designed to educate a compliant workforce who would work in factories. Neat rows of desks lined up before an authority.

Wide availability of these tools will create a new "AI Native" generation. What this will do I don't know. Some effects will be good, some bad.

Narrowing to just me, I find the real value I get while learning is asking questions along the lines of "is this concept like this other concept that I know?" When I get an affirmative, then ChatGPT often goes on to tell me something else that I didn't know. These kinds of interactions are very valuable to me. Example from last night in the middle of a long chat:

"How is the query representation computed? Also, is the dot product between the query and key representations similar to computing the cosine similarity between two vectors?"

"The query representation is typically computed by applying a linear transformation to the hidden states of the input words, often represented as vectors. This linear transformation is typically implemented as a fully connected layer, or a matrix multiplication, followed by a non-linear activation function, such as a ReLU. The resulting query representation vector has the same dimension as the hidden state, and it's used to measure the similarity between the word and the other words in the input sequence.

The dot product between the query and key representations is similar to computing the cosine similarity between two vectors. The cosine similarity is a measure of the similarity between two vectors in a multi-dimensional space, and is defined as the dot product of the vectors normalized by their magnitudes.

The dot product of the query and key representations can be seen as an un-normalized version of the cosine similarity, in the sense that it computes the dot product of the two vectors. The result is a scalar value, which represents the similarity between the two vectors, the larger the scalar, the more similar the vectors are.

The dot product, however, is just one way of computing the similarity, other similarity measure, like the euclidean distance, the L1 norm, or even other more sophisticated similarity measures can also be used. The choice of similarity measure depends on the task and the model architecture, also the dot product has the advantage of being computationally efficient, which is important when dealing with large scale models like transformer based models."
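The relationship ChatGPT describes here (dot product as an un-normalized cosine similarity) happens to check out, and is easy to verify numerically. A minimal sketch with made-up vectors; real attention also divides the dot product by the square root of the key dimension before the softmax, which this omits:

```python
import numpy as np

# Two small vectors standing in for a query and a key representation.
q = np.array([1.0, 2.0, 3.0])
k = np.array([2.0, 0.0, 1.0])

# Dot product: the raw, un-normalized similarity used in attention scores.
dot = np.dot(q, k)

# Cosine similarity: the same dot product, normalized by the magnitudes.
cos = dot / (np.linalg.norm(q) * np.linalg.norm(k))

print(dot, cos)
```

Scaling either vector changes the dot product but leaves the cosine similarity fixed, which is exactly the "un-normalized" distinction the answer draws.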


The ChatGPT Discord is incredibly active and is largely students


Did the verification work for you?


Sorry, what verification?


>This sentence is out of nowhere, did ChatGPT write this article? How could someone think something so new is suddenly negatively impacting students across the board.

Ironically, if it were impacting learning, I think it would be specifically if ChatGPT was NOT being used across the board, but only by a subset of students. So some kids would be doing assignments the hard way while others have the equivalent of a calculator/secretary/genie.

But if every student has the same GPT access then the class can simply raise the expectations for the quality of the student work across the board and it's cool again.


If the model gets as good as they strive for, its outputs would be indistinguishable from those of PhD-level students. There would be no point in augmenting its results with human contributions.


>If the model gets as good as they strive for, its outputs would be indistinguishable from those of PhD-level students. There would be no point in augmenting its results with human contributions

I don't disagree, but in a world where every GPU on the planet is as intelligent as a PhD student, the next year each GPU is twice as smart as the smartest human on Earth, and not long after that every person on Earth is either dead or turned into pure energy or something, and homework isn't on anyone's mind...


That seems worth worrying about, yes.


Fair enough!


> But if every student has the same GPT access then the class can simply raise the expectations for the quality of the student work across the board and it's cool again.

You can’t “raise expectations” to address work being done by an LLM based on its corpus instead of by students based on their knowledge of the material; it fundamentally eliminates both the direct learning use and the assessment use of certain classes of assignment, for those doing it, whether it is consistent across the class or not.


> “However, ever since the chatbot has come into action, it has been negatively impacting students' learning.”

Aka welp students prefer to learn from ChatGPT rather than our teachers who have poor knowledge transfer skills.

I've been using ChatGPT daily and it is an incredible tool. I feel that I learned so much more through that time than if I had to use Google and books.


As long as you can process it in a productive and skeptical way, and not accept it with the same confidence with which it is presented, it can be a powerful learning tool. However, if it teaches kids that they no longer need to structure sentences or logical arguments, and to trust whatever it says, that seems bad in general.

I found ChatGPT to be very useful in two areas:

1. Where the problems are simple and the question requires rudimentary knowledge, but of something very domain specific. I would be 100% able to find a better answer myself by reading up on it, but ChatGPT gets me 90% of the way there, and I can figure out the 10% that is missing or wrong from context.

If you don't have the ability to identify or fix that last 10%, you might suffer for it. You'll get there more quickly, but you'll end up at slightly the wrong place.

2. When the questions are creative in nature: suggest an imaginary setting or a character, describe properties associated with it, etc.


Eat your own dogfood right?


lol, I agree that it came out of nowhere, but it has been affecting students' learning IMO; not sure if it's been for better or worse.


Almost certainly negatively. Why learn or put effort into written work when examiners can be easily fooled with machine-generated prose? Those who lack discipline and self-control, and have no desire to learn, will eliminate themselves.


Sadly, I've learned there is a strong correlation between discipline/self-control and success in life. Most of the wealthy and successful people I know also happen to go to the gym, monitor their diet, and seem to be able to control their temper in a bad situation.

It seems to spill into all areas of your life.


It's called trait conscientiousness.


While letting the slackers slack more, it lets the diligent learn more. I do self-guided study, and it's perfect for generating questions to quiz oneself on based on a text, and creating answers based on that same text. Grounding it in the text this way prevents the AI hallucinations. Not to mention creating flashcards, and simple shell scripts for any task to manage it all.

There are so many ways this revolutionizes learning for erudite people.


My son is 7 years old and he routinely asks me "why learn anything when you can just ask Google?". I can only imagine the impact this will have on that attitude over time.


There are at least two answers to that question:

1. Knowing things lets you know more things faster and they stick: associative memories are very durable because they are mostly groups of references to existing objects in memory. Much less novel information that needs to be encoded. Repetition and memorization matters.

2. Lookup latency: same problem as computer cache misses and L1 vs RAM lookup times. You will take 100x longer to achieve the same result by looking up reference material all the time. Also, important questions typically have answers drawn from many different problem domains (social, scientific, philosophical, ethical), and it's important to have a lot of knowledge memorized to see a solution to a novel problem holistically.


Do you challenge him with simple questions like "How do you know the answer you get from Google is true?"


Yep. I attempt to educate him on epistemology as much as I can. He's a relatively skeptical kid, but he's also stubborn as hell and he gives somewhat of a recursive answer of "you just Google that too".

I try to impress upon him too that some things take hundreds of hours of explanation before you know enough for it to be functionally useful to you, so you can't always just Google everything. You have to learn things and build your foundations up. Alas he hates the idea of learning anything.


How so?

ChatGPT is quite popular on HN and other similar tech-savvy spheres, but is still far from enjoying mainstream success. Maybe like 1% of students have used it yet, and for a month at most, so it's quite the hyperbole to say that it's been "affecting" them already. Or, conversely, if this kind of thing worries you today, you're in for a wild ride ...


My son's (he's 17) peers are all using it for homework at school. Agree on the wild ride though. This is like the advent of the electronic calculator on maths study, but the difference here is everyone has got it at once.


>This is like the advent of the electronic calculator on maths study ...

Absolutely! And I honestly think it will be a net positive, after all.

As others have pointed out, with or without calculators, GPT, or whatever, slackers gonna slack, and those who take it seriously will have another great tool in their hands to do great things with.


It's pretty widespread on tiktok in the student circles I'm in...

And the fact that OpenAI is still struggling to handle the load at peak hours two months after launch tells me they must be seeing pretty big user growth.


Or they haven't properly designed their cloud arch to even moderately scale. We can obviously see their AI and CS talent, but do we have any gauge on their internal systems and IT talent?

I've used ChatGPT maybe 3 times since launch and 2 out of 3 attempts resulted in a "come back later" type message. It would make sense to see which accounts are the heavy users and throttle them in favor of lightly used accounts but I guess they didn't think of that or decided against it for some reason?


In all likelihood GPT is not embarrassingly parallelizable. https://en.m.wikipedia.org/wiki/Embarrassingly_parallel

This changes how easy it is to scale.
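A toy sketch of why a single completion resists being split up (purely illustrative; `next_token` is a hypothetical stand-in for a real decode step): autoregressive generation is a loop in which each token depends on all tokens produced so far, so only separate requests, not one long response, can be fanned out across machines.

```python
# Hypothetical stand-in for one LLM decode step: the next token is a
# function of every token generated so far.
def next_token(tokens):
    return (sum(tokens) * 31 + len(tokens)) % 100

def generate(prompt_tokens, n_new):
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        # Loop-carried dependency: step i cannot start until step i-1
        # finishes, which is what blocks intra-request parallelism.
        tokens.append(next_token(tokens))
    return tokens

print(generate([1, 2, 3], 5))
```

Independent requests have no such dependency between them, so capacity problems like OpenAI's are about having enough hardware for concurrent users, not about splitting one generation across more machines.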



