
I wonder where you draw the line between your private data and just a model update (federated learning). E.g., if I analyzed all the model updates from an individual person, wouldn't I probably get a good picture of exactly the private data they wanted to hide?


Indeed - this is where we will combine the Federated Learning implementation with techniques from Differential Privacy and Secure Aggregation, which can give formal guarantees on how much information about any individual person is present within a gradient.
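
To make that concrete, here is a minimal sketch (not the commenter's actual implementation; function names and parameter values are illustrative) of the differential-privacy side of the idea: a client clips its gradient to bound any one person's contribution, then adds calibrated Gaussian noise before the update leaves the device. Secure Aggregation would additionally ensure the server only ever sees the sum of many such updates, never an individual one.

    import numpy as np

    def privatize_update(gradient: np.ndarray,
                         clip_norm: float = 1.0,
                         noise_multiplier: float = 1.1) -> np.ndarray:
        """Clip a client's gradient and add Gaussian noise before sharing."""
        # Bound the L2 norm so one user's data can only shift the update so much.
        norm = np.linalg.norm(gradient)
        clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))
        # Noise scaled to the clipping bound masks individual contributions;
        # together with the number of training rounds, (clip_norm, noise_multiplier)
        # determines the formal (epsilon, delta) privacy guarantee.
        noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                                 size=gradient.shape)
        return clipped + noise

    # The server averages many such noisy updates (ideally via secure
    # aggregation, so it never sees any single clipped gradient in the clear).
    client_grad = np.random.randn(10)
    print(privatize_update(client_grad))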



