Hacker News

Nah, prompt engineering wouldn't have solved the fundamental issue, which is that the associations between ideas as stored in the weights will be the same between the two AI players, which makes it an easier game for them than for a human equivalent. It'd be like two copies of you playing on a team, having shared all the same experiences right up until the moment the game starts.

And don't get me wrong, it's still a fun experiment! It's just that that clue for 4 would never have worked if a human were playing with another human—there are simply too many other words that would be equally strongly associated:

* Gum: Gum is often wrapped in paper, so 'GUM' is strongly associated with the word 'PAPER'.

* King: A king is a face card, and cards are printed on paper, so 'KING' is strongly associated with the word 'PAPER'. (Repeat for JACK.)

* Light: Paper is a lightweight material.

That's 4 others right there that are at least as closely connected in my head as LAWYER or LOG. The only reason o1 pulled up the same four when guessing as it did when clueing is that it's the same model.
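To make the shared-weights point concrete, here's a minimal sketch of why two copies of the same model agree: if both players rank board words by similarity to the clue using the same association table (here, hand-made toy vectors standing in for learned embeddings—the words and numbers are my invention, not the model's actual weights), their rankings are identical by construction, whereas two humans carry different tables.

```python
from math import sqrt

# Toy "embeddings": hand-made association vectors, purely illustrative.
# A real model stores associations like these in its weights; two copies
# of the same model share the SAME table, so clue-giver and guesser
# rank candidates identically.
EMB = {
    "paper":  [0.9, 0.8, 0.1, 0.7],
    "lawyer": [0.8, 0.7, 0.0, 0.6],
    "mail":   [0.9, 0.6, 0.1, 0.8],
    "gum":    [0.2, 0.1, 0.9, 0.1],
    "king":   [0.1, 0.3, 0.8, 0.2],
}

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def rank(clue, board, emb):
    # Order board words by association strength with the clue.
    return sorted(board, key=lambda w: cosine(emb[clue], emb[w]), reverse=True)

board = ["gum", "king", "lawyer", "mail"]
giver_ranking = rank("paper", board, EMB)    # clue-giver's view of "paper"
guesser_ranking = rank("paper", board, EMB)  # same weights -> identical view
assert giver_ranking == guesser_ranking
print(giver_ranking)
```

Swap in a second, different `EMB` table for the guesser and the rankings diverge—which is the situation any human teammate is in.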

Again, I didn't mean this as a knock, just a warning about drawing too many conclusions from the test!



When I saw those 4 words I thought of "letter" or "writing". (But I likely wouldn't have thought of that cluster while scanning the full board.)

I think "paper" is a great clue, and those 4 words lawyer/mail/log/line match better than gum/king/light.

There's an even better reason for "lawyer"/"paper" than ChatGPT gave: lawyers "serve papers".


That we disagree on this is exactly why who you're playing with matters. I'd never have gotten to LAWYER, and certainly wouldn't have connected LOG. LINE is a very faint possibility. MAIL is the only one I'd have gotten for sure.



