Hacker News

How do you ensure "privacy by default" if you are also providing cloud models?


It's not my position to "impose" a preference on a user for an LLM.

Privacy by default means "if you use the defaults, it's private".

Obviously, if your computer is low-end or you've really been loving GPT-4o or Claude, you _can_ use just that external component while still using a local vector DB, document storage, etc.

So it's basically opt-in for external usage of any particular part of the app, whether that's an LLM, embedder model, vector DB, or otherwise.


Good question - this was an excellent write-up, and AnythingLLM looks great. But I'm really curious about that too.

Regardless of the answer, OP you’ve done a hell of a lot of work and I hope you’re as proud as you should be. Congratulations on getting all the way to a Show HN.


It doesn't seem like they are "providing" cloud models. Their backend can interface with whatever endpoint you want, given that you have access. It's plainly obvious when interacting with third-party providers that privacy depends on their data and privacy policies. When has this ever not been the case?

I could just as easily spin up a vLLM instance with Llama 3.1 and connect their application to it, though. Perfectly secure (to the extent I'm able to make it).
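To sketch what that looks like: vLLM exposes an OpenAI-compatible API (by default on port 8000 after something like `vllm serve meta-llama/Llama-3.1-8B-Instruct`), so any OpenAI-style client can talk to it over localhost. A minimal stdlib-only sketch, assuming a server is running locally; the model name and prompt are just illustrative:

```python
import json
import urllib.request

# Local vLLM endpoint -- vLLM serves an OpenAI-compatible API, port 8000 by default
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request aimed at the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello")
# urllib.request.urlopen(req) would send it; nothing leaves the machine
# beyond localhost, which is the whole point of the privacy argument above.
```

Since the request never targets a third-party host, the "depends on their privacy policy" concern simply doesn't apply to this setup.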

This seems like such a pedantic thing to complain about.



