> In short, the algorithm is adapted such that the influence of a single user on the model is limited, and noise is added.
But in any case (noise added or not), the user-provided weight updates improve the model in some direction, and I suppose that this fact alone inevitably leaks information about the user. For example, suppose we are training a cat-vs-dog classifier. Evaluate the network on 1000 validation images of cats and record how many it classifies correctly. Then apply the user-provided updates and evaluate again. The difference in accuracy tells us something about the user's images. This won't work reliably in every single case, but statistically, over many rounds, it could paint a picture.
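To make this concrete, here is a minimal sketch of the probe I have in mind (in PyTorch; `user_update` is a hypothetical dict of parameter name → weight delta, as one might receive in a federated-averaging round, and `cat_loader` is assumed to be a cats-only validation loader):

```python
import torch

@torch.no_grad()
def accuracy(model, loader):
    """Fraction of correctly classified examples in a validation loader."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

@torch.no_grad()
def leakage_signal(model, user_update, cat_loader):
    """
    Hypothetical probe: compare validation accuracy on cat images
    before and after applying one user's weight update.
    """
    acc_before = accuracy(model, cat_loader)
    # Apply the user's update to the matching model parameters.
    for name, param in model.named_parameters():
        if name in user_update:
            param.add_(user_update[name])
    acc_after = accuracy(model, cat_loader)
    # Roll the update back so the shared model is left unchanged.
    for name, param in model.named_parameters():
        if name in user_update:
            param.sub_(user_update[name])
    # A positive difference hints that the user's data was cat-like.
    # Noisy for a single round, but it could accumulate statistically.
    return acc_after - acc_before
```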
(Of course, happy to be proved wrong)