
My exchange:

Me:

  I have a 12 liter jug and a 6 liter jug. I need 6 liters of water. How do I do it?
ChatGPT:

  Using the 12-liter jug and the 6-liter jug, you can get exactly 6 liters of water in the following way:

  1. Fill the 12-liter jug to its maximum capacity.

  2. Pour the water from the 12-liter jug into the 6-liter jug until the 6-liter jug is full. This will leave you with 6 liters of water in the 12-liter jug.

  Now you have 6 liters of water in the 12-liter jug.
Me:

  Can you think of a simpler way to achieve this?
ChatGPT:

  Yes, a simpler way to achieve this is:

  1. Fill the 6-liter jug to its maximum capacity.

  That's it. You now have 6 liters of water in the 6-liter jug!
I wish I understood why prompting it a second time gets a better answer.
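
As a quick sanity check on the puzzle itself, here is a brute-force breadth-first search over jug states; it confirms that the one-step answer (just fill the 6-liter jug) is the shortest solution. This is a minimal sketch: the state encoding and function name are my own, and only the capacities and target come from the thread.

  from collections import deque

  def solve(cap_a=12, cap_b=6, target=6):
      # State is (liters in jug A, liters in jug B); BFS finds the
      # fewest-moves path from empty jugs to any state holding `target`.
      start = (0, 0)
      parents = {start: None}
      queue = deque([start])
      while queue:
          a, b = queue.popleft()
          if a == target or b == target:
              # Walk back through parents to recover the move sequence.
              path, state = [], (a, b)
              while state is not None:
                  path.append(state)
                  state = parents[state]
              return list(reversed(path))
          pour_ab = min(a, cap_b - b)  # how much A can pour into B
          pour_ba = min(b, cap_a - a)  # how much B can pour into A
          for nxt in [
              (cap_a, b), (a, cap_b),            # fill either jug
              (0, b), (a, 0),                    # empty either jug
              (a - pour_ab, b + pour_ab),        # pour A -> B
              (a + pour_ba, b - pour_ba),        # pour B -> A
          ]:
              if nxt not in parents:
                  parents[nxt] = (a, b)
                  queue.append(nxt)

  print(solve())  # [(0, 0), (0, 6)] -- one move: fill the 6-liter jug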


I bet that when you said a 12 litre jug and a 6 litre jug, it wrongly assumed you required it to actually make use of both jugs in some way (not merely that they were available for possible use), leading to the pointless step.


Seems right! If you make it more of an inventory list of tools, it answers correctly.

> I have two jugs: a 12 liter jug and a 6 liter jug. I need 6 liters of water. How do I do it?

> GPT-4: If you just need 6 liters of water and you have a 6-liter jug, you simply fill the 6-liter jug to the top with water. You'll have exactly 6 liters! No need to use the 12-liter jug in this case.
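
For anyone who wants to check the phrasing effect themselves, here is a minimal sketch using the openai Python client (v1-style API). The two prompts are copied verbatim from this thread; the model name is an assumption, and outputs will of course vary between runs.

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  prompts = [
      "I have a 12 liter jug and a 6 liter jug. I need 6 liters of water. How do I do it?",
      "I have two jugs: a 12 liter jug and a 6 liter jug. I need 6 liters of water. How do I do it?",
  ]

  for prompt in prompts:
      response = client.chat.completions.create(
          model="gpt-4",
          messages=[{"role": "user", "content": prompt}],
      )
      print(prompt)
      print(response.choices[0].message.content)
      print("---")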


This video covers the concept pretty well: https://www.youtube.com/watch?v=IJEaMtNN_dM

It is pretty normal for it to try to incorporate the extraneous details into the reply.


I would bet a high percentage of humans would do the same thing if prompted like that.


I've noticed that the LLMs are all tuned to emit corporate speak.

Everyone I've encountered who adds lots of obfuscating and tangential details to their day-to-day speech (and tries to establish that particular tone of faux-inclusivity and faux-authority) has turned out to be a sociopath and/or a compulsive liar. I find it interesting that LLMs have the same symptom and underlying problem.



