I thought he was hinting at using eval.





To make a long story short: LLM responses can be manipulated in my chat app (I want this for testing/cost reasons), so it isn't safe to trust the LLM-generated code. I guess I could make it so that modified LLM responses are never executed.

However, if the chat app were designed to be used by a single user, eval-ing would not be an issue.
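
For what it's worth, here is a minimal sketch of the "never execute modified responses" idea in TypeScript. The ChatMessage shape and the modified flag are hypothetical, not from the actual app; the point is just to gate eval on whether the stored response is still exactly what the LLM returned:

    // Hypothetical message shape: the app tracks whether a stored
    // assistant response was edited after generation.
    interface ChatMessage {
      role: "user" | "assistant";
      content: string;   // code produced by the LLM
      modified: boolean; // true if the response was changed afterwards
    }

    // Only eval code from unmodified assistant responses.
    // Edited responses are treated as untrusted input and never executed.
    function runAssistantCode(msg: ChatMessage): unknown {
      if (msg.role !== "assistant" || msg.modified) {
        throw new Error("Refusing to eval modified or non-assistant content");
      }
      return eval(msg.content);
    }

This only closes the "someone edited the response" hole; a single-user app could skip the check entirely, as noted above.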



