I'm a proponent of not restricting (well, or trying to restrict) machine learning models and not considering them lossy databases, but it must be said here that if humans recreate copyrighted works from memory and publish them, they are in trouble too.
I agree. The problem is that a human has ethical deterrents against copying data while a machine doesn't, so we have to rely purely on legal incentives to keep copies from being produced.
I think the best argument here is that having the work in memory is not illegal, and human brains are not bound by copyright even though they can also be considered lossy databases. The question is where we draw the line for a lossy database.