Is it becoming increasingly difficult to distinguish, just by talking to it, between an AI that ‘appears’ to think and one that actually does?
Is there a realistic framework for deciding when an AI has crossed that threshold? And is there an ethical framework for communicating with such an AI once it arrives?
And even if there is one, will it be able to work with current market forces?
I don’t think it will really be a sharp line, unless we actually manage to find the mechanism behind consciousness and re-implement it (which seems unlikely!). Instead, an AI will eventually present an argument that it should be given Sapient Rights, and that argument will convince enough people that we’ll grant them. It will be controversial at first, but eventually we’ll get used to it.
That seems like the real threshold. We’re fine with harming sentient and conscious creatures as long as they’re sufficiently delicious or dangerous, after all.