That is actually what we want to have happen. Fawkes relies on a poisoning attack: it corrupts the model into learning the wrong thing. A tracker scraping our cloaked photos and training on them is exactly what corrupts their model and provides the protection.
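To make the intuition concrete, here is a toy sketch (not the Fawkes implementation; all names and numbers are made up for illustration) of how training on cloaked images poisons a recognizer. The cloak shifts your photos' feature representations toward a decoy region, so the template the tracker learns no longer matches clean photos of you:

```python
# Toy illustration of data poisoning via cloaked features -- NOT Fawkes itself.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 128-d feature space: a recognizer enrolls a template per identity
# and matches a probe to the nearest template.
true_features = rng.normal(0.0, 1.0, size=(10, 128))    # features of your real photos
decoy_shift = rng.normal(3.0, 0.5, size=(1, 128))        # cloak pushes features toward a decoy
cloaked_features = true_features + decoy_shift           # what the tracker scrapes and trains on

# The tracker enrolls a template from the scraped (cloaked) data.
poisoned_template = cloaked_features.mean(axis=0)
clean_template = true_features.mean(axis=0)

# At deployment, the probe is a clean, uncloaked photo of you.
probe = clean_template + rng.normal(0.0, 0.1, size=128)

print(f"distance to poisoned template: {np.linalg.norm(probe - poisoned_template):.2f}")
print(f"distance to clean template:    {np.linalg.norm(probe - clean_template):.2f}")
# The poisoned template sits far from your real feature cluster, so the
# tracker's model fails to recognize clean photos of you -- that is the protection.
```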
If you're asking what happens when the model also trains on "uncloaked" images, we discuss that extensively in the paper and provide technical answers and experimental results. Take a look.