Very interesting. Certainly good to know for the debate on AI stealing vs repurposing artists' content.
Which version of Stable Diffusion is this? I would think a newer, less overfit model would be much less likely to make these mistakes, especially since diffusion starts from randomly initialized noise.
I don't think this makes a very strong point on its own: human artists can also reproduce art verbatim from a reference, or in some cases even from memory. The more relevant question would be "can it generate original art based on the reference works most of the time?", which this kind of test doesn't prove or disprove.
And just to be clear, I'm not saying AIs are like humans or anything of the sort; this is more like playing devil's advocate.
https://x.com/louiswhunt/status/1874092181281268219
He also did something similar with LLMs:
https://x.com/louiswhunt/status/1868026490300014947