
> I’d love to be able to stream a JSON response down to the client and have it be able to parse the JSON as it goes

why though?



In a non-chat setting where the LLM is performing some reasoning or data extraction, it lets you get JSON directly from the model and stream it to the UI (updating the associated UI fields as new keys come in) while caching the response server-side in the exact same JSON format. It's really simplified our stream-and-cache setup!
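The core trick behind this is parsing a JSON prefix before the stream has finished. A minimal sketch of one way to do it (the `parse_partial_json` helper name is an assumption, not from any particular library): track open objects, arrays, and unterminated strings in the buffer so far, append the closing tokens, and hand the repaired prefix to a normal JSON parser.

```python
import json

def parse_partial_json(buffer: str):
    """Best-effort parse of an incomplete JSON document.

    Scans the buffer, tracking unclosed strings and open '{'/'['
    delimiters, then appends the matching closers so the prefix
    parses as valid JSON. Handles simple truncations; a production
    version would also trim dangling keys, commas, and partial
    literals like `tru`.
    """
    stack = []          # open '{' / '[' delimiters, innermost last
    in_string = False
    escaped = False
    for ch in buffer:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append(ch)
        elif ch in "}]":
            stack.pop()
    completion = '"' if in_string else ""
    for opener in reversed(stack):
        completion += "}" if opener == "{" else "]"
    return json.loads(buffer + completion)

# Re-parse the growing buffer on each streamed chunk and update the UI
# with whatever keys are available so far.
print(parse_partial_json('{"name": "Al'))       # truncated string value
print(parse_partial_json('{"items": [1, 2'))    # truncated array
```

Each streamed chunk just gets appended to the buffer and the whole thing re-parsed; for typical LLM response sizes that is cheap enough that incremental parser state usually isn't worth the complexity.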



