
The problem is, no matter how you write the prompt, the way you write it still triggers some intrinsic bias of the LLM.

Even a simple prompt like this:

=

I have two potential solutions.

Solution A:

Solution B:

Which one is better and why?

=

is biased. Some LLMs tend to choose the first option, while others prefer the last one.

(Of course, humans suffer from the same kind of bias too: https://electionlab.mit.edu/research/ballot-order-effects)



Prompt writing can probably take a lot of lessons from survey design. Phrasing, the chosen options, and their order have a massive impact for humans and LLMs alike. The advantage with LLMs is that you can reset their memory, for example to ask the same question with the options in a different order. With humans, that requires a completely new human each time.
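A minimal sketch of that reset-and-reorder check. The `ask_llm` function here is a hypothetical stand-in for a stateless LLM API call (it just simulates a model with a mild first-option bias), not any real vendor's API:

```python
import random

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a fresh, stateless LLM call;
    # simulates a judge that picks the first option 60% of the time.
    return "first" if random.random() < 0.6 else "second"

def positional_preference(solution_a: str, solution_b: str, trials: int = 200) -> float:
    """Ask the same A/B question with both orderings and report how
    often the model picks whichever option was listed first."""
    first_picks = 0
    for i in range(trials):
        # Alternate which solution is presented first; each call is a
        # fresh context, so no memory carries over between trials.
        x, y = (solution_a, solution_b) if i % 2 == 0 else (solution_b, solution_a)
        prompt = (
            "I have two potential solutions.\n\n"
            f"Solution A: {x}\n\n"
            f"Solution B: {y}\n\n"
            "Which one is better and why?"
        )
        if ask_llm(prompt) == "first":
            first_picks += 1
    return first_picks / trials

random.seed(0)
rate = positional_preference("use a cache", "use a queue")
# An unbiased judge should hover near 0.5 regardless of content;
# a rate well above (or below) 0.5 signals order bias, not a real preference.
```

Because the orderings alternate, any genuine preference for one solution cancels out, and what remains is the positional effect itself.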

Half the battle is knowing that you are fighting.


I think there's a lot of alpha left in building a better and more intuitive UX for seed/top-p/temperature etc. The vast majority of users don't get that far.
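For anyone unfamiliar with what those knobs actually do, here is a minimal sketch (plain Python over a toy token distribution, not any particular vendor's API) of temperature scaling and top-p (nucleus) filtering:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature < 1 sharpens the distribution toward the top token;
    # temperature > 1 flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p=0.9):
    # Keep the smallest set of highest-probability tokens whose cumulative
    # mass reaches p, zero out the rest, then renormalize.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = set(), 0.0
    for i in order:
        kept.add(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]

logits = [2.0, 1.0, 0.5, -1.0]           # toy scores for 4 tokens
cold = softmax(logits, temperature=0.2)  # near-greedy: mass piles on token 0
hot = softmax(logits, temperature=2.0)   # much flatter distribution
nucleus = top_p_filter(softmax(logits), p=0.9)  # tail token gets dropped
```

A seed, for its part, just fixes the random draws made from whichever distribution these knobs produce, which is what makes a sampled output reproducible.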


Eh. This is true for humans too and doesn’t make humans useless at evaluating business plans or other things.

You just want the signal from the object-level question to drown out irrelevant bias (which plan was proposed first, which of the plan proposers is more attractive, which plan seems cooler, etc.).



