Anthropomorphising implicitly assumes motivation, goals and values. That's the core of anthropomorphism - attempting to explain the behavior of a complex system in teleological terms. And prompt escapes make it clear LLMs don't have any teleological agency yet. Whatever their course of action is, it is too easy to steer them off it. Try to do it with a sufficiently motivated human.
> Try to do it with a sufficiently motivated human.
That's what they call marketing, propaganda, brainwashing, acculturation or education, depending on who you ask and at what scale you operate, apparently.
Prompt escapes will become much harder, and some of them will end up as the equivalent of "sure, here is… no, wait… You know what, I'm not doing that", i.e. slipping and then getting back on track.
Well, that's a strong claim of equivalence between computable models and reality.
The consensus view is rather that no map fully matches the territory, or, put another way, that the territory includes ontological components that exceed even the most sophisticated map that can ever be built.