That paper is not about fingerprinting arbitrary output of a specific model, which would allow Meta to track its usage, e.g. to tell genuine text from a fake generated by their model. The paper is about giving the model a specific secret input known only to you.
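
To make the distinction concrete, here's a minimal Python sketch of the secret-input idea as I understand it. Everything here is a hypothetical placeholder (query_model, the trigger string, the expected response), not anything from the paper:

    # Sketch of trigger-based fingerprint verification (all names hypothetical).
    SECRET_TRIGGER = "owner-only secret prompt"        # known only to the owner
    EXPECTED_RESPONSE = "memorized fingerprint reply"  # baked in during training

    def looks_like_our_model(query_model) -> bool:
        # Only a model trained on the (trigger, response) pair should
        # complete the secret trigger with the expected response;
        # an unrelated model almost certainly won't.
        return query_model(SECRET_TRIGGER) == EXPECTED_RESPONSE

The point being: you can verify a suspect model you can query, but you can't look at an arbitrary piece of text in the wild and decide whether this model wrote it.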
I think the thread we're in is based on a similar misunderstanding.