
Isn’t this approach forcing the LLM to adapt? I.e., it throws away tokens that don’t match the grammar.



Well, the grammar itself will be correct, since the sampler enforces it, but the content filling it could be anything at all. It's a bit like how, for some models, changing the prompt template turns the output into garbage. I haven't tried it myself yet, but apparently even OpenAI's implementation of this same principle in their API still has function hallucination issues, even with GPT-4.
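To make the mechanism concrete, here's a minimal sketch of grammar-constrained sampling: before sampling each token, logits for tokens the grammar forbids are masked to -inf, so only grammar-valid continuations can ever be chosen. The toy vocabulary, states, and transition table below are hypothetical; real implementations (e.g. llama.cpp's GBNF grammars) derive the allowed-token set incrementally from a grammar definition.

```python
import math
import random

# Toy vocabulary and a tiny "grammar": for each parser state, the set of
# tokens the grammar allows next. These states/transitions are made up
# for illustration only.
VOCAB = ['{', '}', '"key"', ':', '"value"']
ALLOWED = {
    "start": {'{'},
    "open":  {'"key"'},
    "key":   {':'},
    "colon": {'"value"'},
    "value": {'}'},
}
NEXT_STATE = {
    "start": "open", "open": "key", "key": "colon",
    "colon": "value", "value": "done",
}

def constrained_sample(logits, state, rng):
    """Mask logits for tokens the grammar forbids, then sample."""
    allowed = ALLOWED[state]
    masked = [l if tok in allowed else float('-inf')
              for tok, l in zip(VOCAB, logits)]
    # Softmax over the surviving tokens.
    m = max(masked)
    exps = [math.exp(l - m) for l in masked]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(VOCAB, weights=probs, k=1)[0]

def generate(rng):
    state, out = "start", []
    while state != "done":
        # Stand-in for model logits: random scores over the vocabulary.
        logits = [rng.uniform(-1, 1) for _ in VOCAB]
        tok = constrained_sample(logits, state, rng)
        out.append(tok)
        state = NEXT_STATE[state]
    return "".join(out)
```

Note that the grammar guarantees the output is well-formed (here it always produces `{"key":"value"}` because each state permits exactly one token), but nothing about masking forces the *content* inside a less restrictive grammar to be sensible: that part still comes from whatever probability mass the model puts on the allowed tokens.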





