
This is fantastic. One rarely discussed use case is avoiding overzealous "alignment": you want models that help advance your goals without arbitrary refusals on benign inputs. Why should Anthropic or OpenAI have filtering authority over my queries? Consider the OpenRouter ToS: "you agree not to use the Service [..] in violation of any applicable AI Model Terms". I'm not sure they actually enforce it, but either way I'd want hardware security attestations proving they can't monitor or censor my inputs. Open models should be like utilities: the provider supplies the raw capability (electrons, water, or inference), while responsibility for usage remains entirely with the end user.
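For the curious, here's a rough sketch of what the client side of that could look like, in Python. Everything here is hypothetical for illustration (the /attestation endpoint, the pinned measurement value, the OpenAI-style chat route), and a real verifier would also validate the report signature against the hardware vendor's certificate chain (AMD SEV-SNP, Intel TDX, etc.), not just compare a hash:

    import requests

    # Hypothetical values for illustration. A real client would pin the
    # measurement published alongside a reproducible enclave build.
    INFERENCE_HOST = "https://inference.example.com"
    EXPECTED_MEASUREMENT = "<pinned-hash-of-audited-enclave-image>"

    def verify_attestation(host: str) -> bool:
        # Fetch the enclave's attestation report and check that the code
        # measurement matches the build we audited. This sketch only checks
        # the measurement field; real TEE verification also validates the
        # report's signature and the vendor cert chain.
        report = requests.get(f"{host}/attestation", timeout=10).json()
        return report.get("measurement") == EXPECTED_MEASUREMENT

    def query(prompt: str) -> str:
        # Refuse to send anything until the attestation checks out, so the
        # prompt never leaves my machine unless the audited code is running.
        if not verify_attestation(INFERENCE_HOST):
            raise RuntimeError("attestation mismatch; refusing to send prompt")
        resp = requests.post(
            f"{INFERENCE_HOST}/v1/chat/completions",
            json={"messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

The point is that the check happens client-side, before any data leaves, rather than trusting the provider's word.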


That's a big reason why we started Tinfoil, and why we use it ourselves. I love the utilities analogy: anything that is deeply integrated into business and personal use (like the Internet or AI) needs verifiable policies and options for data confidentiality.



