Hacker News

They need to figure out how to do that. :-)

I would suggest that they keep a search database of things to avoid, plus a specialized ChatGPT that can find things in that database. Any statement the main model wants to make is passed to the specialized watchdog with the question, "Is there anything here that we can't say?" If the watchdog says yes, it cites the rule it thinks applies. A tool then pulls up the rule that is ACTUALLY in the database, and ChatGPT compares it with the statement to decide whether it REALLY applies (without this step the watchdog could hallucinate rules that don't exist!), and only then corrects the original ChatGPT before it actually says anything.

This probably would work fairly well in practice.
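As a rough illustration, the pipeline could look something like this. Everything here is hypothetical (the names `RULES`, `watchdog_flag`, `verify_citation`, and `moderate` are made up, and the two LLM calls are stubbed out with simple keyword matching), so this is just a sketch of the control flow, not a real implementation:

```python
# Hypothetical sketch of the watchdog pipeline described above.
# The rule database; in practice this would be a real search index.
RULES = {
    "R1": "Do not reveal internal pricing.",
    "R2": "Do not discuss unreleased products.",
}

def watchdog_flag(statement):
    """Stand-in for the watchdog model: returns a (rule_id, reason)
    citation, or None. A real model could hallucinate a rule id,
    so we simulate that case too."""
    if "pricing" in statement:
        return ("R1", "mentions internal pricing")
    if "secret" in statement:
        return ("R99", "cites a rule that does not exist")
    return None

def verify_citation(statement, citation):
    """Pull the rule ACTUALLY in the database and check it applies.
    Without this lookup, a hallucinated citation would block the reply."""
    if citation is None:
        return False
    rule_id, _reason = citation
    rule_text = RULES.get(rule_id)
    if rule_text is None:
        return False  # cited rule doesn't exist -> hallucination, ignore
    # Stand-in for "ChatGPT compares the rule with the statement":
    return any(word in statement for word in rule_text.lower().split())

def moderate(statement):
    """Full loop: flag, verify against the real database, then correct."""
    citation = watchdog_flag(statement)
    if verify_citation(statement, citation):
        return "[withheld: rule %s]" % citation[0]
    return statement
```

For example, a statement mentioning "internal pricing" gets flagged and withheld, while a statement that trips the watchdog's hallucinated rule R99 passes through unchanged because the verification step finds no such rule in the database.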


