
The explanation is easy.

An analytic prompt contains all of the facts needed for the response, so the LLM acts as a translator.

A synthetic prompt does not contain the facts needed for the response, so the LLM has to act as a synthesizer and fill in facts on its own.

Converting a complete baseball box score into an entertaining paragraph describing the game is an analytic prompt, and it will reliably produce a factual result.

https://www.williamcotton.com/articles/chatgpt-and-the-analy...
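
Here's a minimal sketch of what an analytic prompt looks like in practice. The box score data and the call_llm() helper are made up for illustration; the point is just that every fact the model needs is already in the prompt, so it only has to translate structure into prose:

    # Build an analytic prompt: all facts are supplied up front,
    # so the model is translating, not inventing.
    def build_analytic_prompt(box_score: dict) -> str:
        facts = "\n".join(
            f"{team}: {runs} runs, {hits} hits, {errors} errors"
            for team, (runs, hits, errors) in box_score.items()
        )
        return (
            "Using only the box score below, write an entertaining "
            "one-paragraph recap of the game. Do not add any facts "
            "that are not listed.\n\n" + facts
        )

    # Hypothetical example data.
    box_score = {
        "Mets": (5, 9, 1),
        "Braves": (3, 7, 0),
    }

    prompt = build_analytic_prompt(box_score)
    # response = call_llm(prompt)  # hypothetical model call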

There’s a bunch of active research in this area:

https://github.com/lucidrains/toolformer-pytorch

https://reasonwithpal.com/
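
For a rough sense of what the PAL line of work is doing: instead of asking the model for a final answer, you ask it to emit a short program, and the interpreter does the arithmetic. This is only a sketch of the idea; the generated_code string below stands in for real model output:

    # PAL-style sketch: the model writes Python, the interpreter computes.
    generated_code = """
    roger_starts_with = 5
    cans_bought = 2
    balls_per_can = 3
    answer = roger_starts_with + cans_bought * balls_per_can
    """

    namespace = {}
    exec(generated_code, namespace)  # run the model-written program
    print(namespace["answer"])       # 11, computed by Python, not the LLM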



Thank you so much!

Your technique of only posing analytical questions is indeed improving the results. It's not great, but I can actually get it to somewhat reliably summarize academic articles if I give it a citation now, which is pretty neat.

It doesn't summarize them well (I gave it a couple softballs, like summarizing McIntosh's "White Privilege: Unpacking the Invisible Knapsack", which almost every undergrad student in the humanities will have written about), but the stuff that it does make up is completely innocuous and not a big deal.

Very cool, thanks again.


It’s amazing how taking time to slow down and approach things in a measured manner can lead to positive results.

It’s not at all surprising that most of the popular conversation about these tools is akin to randomly bashing into walls while attempting to push the peg into whatever “moment we need to talk about”.

What is surprising, though, is that HN is also largely overrun with the same random bashing into walls.

I guess I’m normally in threads about C memory arenas, a topic that probably draws more detail-oriented thinkers in the first place.



