Hacker News

The example on their website is pretty neat as well: "When I ask you for code, please just give me the code without any explanation on how it works. Bias towards the most efficient solution."
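In chat-completion APIs, an instruction like the one quoted is supplied as a "system" message ahead of the user's request. A minimal sketch of how that message list is assembled (the helper name here is illustrative, not from their site):

```python
# The quoted instruction, used as a system prompt.
SYSTEM_PROMPT = (
    "When I ask you for code, please just give me the code without any "
    "explanation on how it works. Bias towards the most efficient solution."
)

def build_messages(user_query: str) -> list[dict]:
    """Prepend the system prompt to the user's request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Write a function to reverse a string in Python.")
print([m["role"] for m in messages])  # ['system', 'user']
```

The resulting list is what you would pass as `messages=` to a chat-completion call such as OpenAI's `client.chat.completions.create(...)`.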



A fun note is that even with system prompt engineering, it may not give the most efficient solution: ChatGPT still tends to output the average-case solution.

I experimented with this, and doing two passes (generate the code, then "make it more efficient") works best, combined with system prompt engineering to reduce the amount of code output: https://github.com/minimaxir/simpleaichat/blob/main/examples...
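The two-pass approach described above can be sketched as follows. The `complete` callable stands in for whatever chat-completion call you use; the function name and prompt wording are illustrative, not the exact code from the linked example:

```python
from typing import Callable

SYSTEM_PROMPT = "Respond only with code. Do not explain it."

def two_pass_code(query: str, complete: Callable[[list], str]) -> str:
    """Pass 1 generates code; pass 2 asks the model to optimize it."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": query},
    ]
    draft = complete(messages)  # pass 1: initial solution
    messages += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Make it more efficient."},
    ]
    return complete(messages)   # pass 2: refined solution

# Usage with a stub standing in for a real API call:
fake = lambda msgs: f"<code after {len(msgs)} messages>"
print(two_pass_code("Write an is_prime function.", fake))
```

Feeding the first draft back as an assistant message gives the model something concrete to improve, which is why the second pass tends to beat a single "write efficient code" instruction.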


I often struggle to make GPT-4 respect the explicit requirements in my prompts; it is usually inconsistent in how it applies them.


It is impossible to guarantee any output from this system. Anyone telling you otherwise is a liar.



