All those arguments reduce to "humans have a soul, computers don't, checkmate atheists". Ironically, soon AI will be able to generate articles arguing that AI can't ever think, automating yet another job.
Any model capable of executing a Turing machine (given infinite resources, so in practice bounded, of course) can emulate any other program, even if inefficiently. GPT empirically can execute a simple VM. At best, the 'wrong model' argument would have to show that a specific design collapses once the program to execute grows too large, no matter how much compute/memory is available - but then it's only a technical claim about that one design.
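For concreteness, here is a minimal sketch of the kind of toy VM meant here - the instruction set and the `run` helper are invented for illustration, not anything GPT-specific. Prompting a model to trace an interpreter like this step by step is exactly the "emulating another program, inefficiently" case: each layer of interpretation is itself just another program.

```python
# A toy stack VM of the kind a model can be prompted to step through.
# The instruction set here is invented purely for illustration.

def run(program, stack=None):
    """Interpret a list of (op, arg) instructions on a simple stack machine."""
    stack = stack or []
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "JNZ":  # jump to instruction `arg` if top of stack is non-zero
            if stack.pop() != 0:
                pc = arg
                continue
        elif op == "PRINT":
            print(stack[-1])
        pc += 1
    return stack

# Computes 2 + 3 and prints 5. The point: executing this interpreter is
# itself just a program, so anything that can run one layer can, in
# principle, run the next - only less efficiently.
run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])
```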
This specific article has another problem - it handwaves the definition of consciousness. It has to, because any attempt at an actual definition necessarily either defeats the whole argument (consciousness becomes some property of a computable system) or reduces it to a magical claim about an external soul.