> If you ask specifically about the discrepancy it will usually deny the discrepancy entirely or double-down on the mistake.

I have had the exact opposite experience. I pasted error messages from code it generated, I corrected its Latin grammar, and I pointed out contradictions in its factual statements in a variety of ways. Every time, it responded with a correction and (the same) apology.

This makes me wonder if we got different paths in an A/B test.

How the hell does one A/B test a language model that even the designers don’t fully understand?

Of course, I’m sure that once you start plugging engagement metrics into the model and the model itself conducts A/B tests on its output… hoo boy….
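For what it's worth, the split itself is the easy part, even if the model is a black box: you bucket users, not outputs. A minimal sketch in Python, where the variant names and experiment label are made up for illustration:

    import hashlib

    # Hypothetical variant names; a real experiment would map these to
    # different model checkpoints or system prompts.
    VARIANTS = ["model_a", "model_b"]

    def assign_variant(user_id: str, experiment: str = "reply-style-exp") -> str:
        """Deterministically bucket a user into one variant.

        Hashing (experiment, user_id) means the same user always hits the
        same model, so two people can see consistently different behavior
        without anyone needing to understand the model's internals.
        """
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return VARIANTS[int(digest, 16) % len(VARIANTS)]

You then compare engagement metrics per bucket offline; no interpretability is needed for the assignment itself.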

I pasted error messages from code it generated, and eventually it just kept producing the same compiler error. When I applied the "Socratic method" and explained the answer to it based on Stack Overflow posts, it would at first pretend to understand by restating the relevant documentation I fed it, but once I asked the original question again, it ignored all that progress and generated the same code with the same compiler errors.
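That failure mode is at least detectable mechanically. A rough sketch of the loop being described, where compile_source and ask_model are hypothetical stand-ins for whatever compiler and chat API you're using:

    from typing import Callable, Optional

    # Stand-ins, both hypothetical: compile_source returns the compiler's
    # error text (or None on success); ask_model returns revised code.
    CompileFn = Callable[[str], Optional[str]]
    AskFn = Callable[[str], str]

    def fix_until_compiles(source: str, compile_source: CompileFn,
                           ask_model: AskFn, max_rounds: int = 5) -> str:
        """Round-trip compiler errors to the model, bailing out when it
        starts repeating itself instead of making progress."""
        last_error = None
        for _ in range(max_rounds):
            error = compile_source(source)
            if error is None:
                return source  # it finally compiles
            if error == last_error:
                # the failure mode described above: same error, same code
                raise RuntimeError(f"model is looping on: {error}")
            last_error = error
            source = ask_model(
                f"This code fails with:\n{error}\n\nReturn a fixed version:\n{source}"
            )
        raise RuntimeError("no compiling version after max_rounds attempts")

Comparing the new error against the last one is the whole trick: it distinguishes "still broken but trying something new" from the stuck state described above.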
