
The idea is great and necessary. It doesn't seem super hard to replicate, but why would anyone build their own solution when something already exists and works fine?

The thing that got me thinking: how do you make sure an LLM won't eventually hallucinate approval, or outright lie about it, just to keep going?

Anyway, congrats, this sounds really cool.

At some point the real tool has to be called, and at that point you can do actual checks that don't rely on the AI's output (e.g., store the text that the AI generated and verify in code that there was an approval for that exact text).
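Roughly like this, in Python (a minimal sketch; all the names here are made up, just to show the shape of the check):

    import hashlib

    # Approval store written ONLY by the human-facing approval flow,
    # never by anything the LLM outputs.
    _approved_hashes: set[str] = set()

    def record_human_approval(proposed_text: str) -> None:
        """Called by the approval UI/webhook after a human clicks approve."""
        _approved_hashes.add(hashlib.sha256(proposed_text.encode()).hexdigest())

    def execute_tool(proposed_text: str) -> str:
        """Deterministic gate: refuses unless this exact text was approved.

        The LLM can claim it got approval all it wants; the check here
        never reads the model's output, only the approval record.
        """
        digest = hashlib.sha256(proposed_text.encode()).hexdigest()
        if digest not in _approved_hashes:
            raise PermissionError("no human approval on record for this payload")
        return do_side_effect(proposed_text)  # the real tool call

    def do_side_effect(text: str) -> str:
        return f"sent: {text}"

Because the approval is keyed to a hash of the exact generated text, even a model that "lies" about being approved can't get a modified payload past the gate.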


yeah, I think that's right - we put humanlayer in between the non-deterministic part (the LLM decision) and the deterministic code (the tool execution logic)
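
For anyone curious what that layering looks like, here's a hypothetical sketch of the pattern (not humanlayer's actual API): a wrapper sits between the LLM's chosen tool call and the real function, and blocks until a human approves the concrete arguments.

    from functools import wraps

    def require_approval(get_approval):
        """get_approval is a deterministic, human-backed callback."""
        def decorator(fn):
            @wraps(fn)
            def gated(*args, **kwargs):
                # The human reviews exactly what will run,
                # not the LLM's summary of it.
                if not get_approval(fn.__name__, args, kwargs):
                    raise PermissionError(f"{fn.__name__} rejected by reviewer")
                return fn(*args, **kwargs)
            return gated
        return decorator

    def console_approval(name, args, kwargs):
        answer = input(f"approve {name} args={args} kwargs={kwargs}? [y/N] ")
        return answer.strip().lower() == "y"

    @require_approval(console_approval)
    def send_email(to: str, body: str) -> None:
        print(f"emailing {to}: {body}")

The LLM only ever gets a handle to the gated function, so the approval step can't be skipped or hallucinated away.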
