
Friendly FYI - I think this might just be a web interface bug, but I submitted a prompt with the Mixtral model and got a response (great!), then switched the dropdown to Llama, submitted the same prompt, and got the exact same response.

It may be caching, or the dropdown may not actually be changing the model that's queried, or something else.



Thanks, I think it's because the chat context is fed back to the model for the next generation, even when you switch models. If you refresh the page, that erases the history and you get results purely from the model you choose.
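Roughly, the flow looks like this (a minimal sketch of the idea, not our actual code; callModel is a hypothetical stub):

  type Message = { role: "user" | "assistant"; content: string };

  // Shared transcript: it survives dropdown changes and is reset only
  // by a page refresh.
  const history: Message[] = [];

  // Hypothetical stub standing in for a real completion call.
  async function callModel(model: string, messages: Message[]): Promise<string> {
    return `(${model}'s reply to ${messages.length} messages)`;
  }

  async function submitPrompt(selectedModel: string, prompt: string): Promise<string> {
    history.push({ role: "user", content: prompt });
    // The whole history, including replies produced by previously selected
    // models, is sent to whichever model is selected now.
    const reply = await callModel(selectedModel, history);
    history.push({ role: "assistant", content: reply });
    return reply;
  }

Since history only resets on refresh, a refreshed page gives you clean single-model results.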


Appreciate the quick reply! That's interesting.


You're welcome. Thanks for reporting. It's pretty confusing, so maybe we should change it :)


I've always liked how openrouter.ai does it.

They let you configure chat participants (a model plus params like context or temperature), and then each AI answers each question independently in-line, so you can compare and remix outputs.
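Conceptually it's something like this (a rough sketch of the pattern, not openrouter's actual code; the names and params are made up):

  type Participant = { model: string; temperature: number; contextLength: number };

  // Hypothetical stub standing in for a real completion call.
  async function callModel(model: string, prompt: string, p: Participant): Promise<string> {
    return `(${model} @ temp ${p.temperature}: reply to "${prompt}")`;
  }

  // Each participant answers the same question independently - there's no
  // shared transcript across models, so outputs are directly comparable.
  async function askAll(participants: Participant[], prompt: string): Promise<string[]> {
    return Promise.all(participants.map((p) => callModel(p.model, prompt, p)));
  }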


openrouter dev here - would love to get Groq access and include it!



