This is the crux of it for me. The argument that we're different because our current state includes more, and different kinds of, inputs than (current model state + context) seems thin: you could just expand the context to include current physical state and expand the output to include motor control alongside words.

The big difference between us and current AI is our (somewhat illusory) stream of consciousness. We operate on a combination of noisy, decaying context and continuous training. Embody an LLM (or LLM-like architecture) with the right I/O, a starting motivation, and the ability to continuously update its own state, and I think you'd have a hard time arguing that it isn't sentient.
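To make the shape of that loop concrete, here's a minimal sketch in Python. Everything in it is hypothetical scaffolding I'm inventing for illustration (`TinyModel`, `sense_environment`, the decay constant): the placeholder "model" stands in for an LLM-like network, and the environment is a stub. The point is only the structure: sensing, a noisy decaying context window, motor output, and online weight updates all happening in one continuous cycle.

```python
# Hypothetical sketch of the embodied-agent loop described above.
# None of this is a real model or API; it's just the loop's shape.
import random
from collections import deque

class TinyModel:
    """Placeholder for an LLM-like policy: maps context -> action + prediction."""
    def __init__(self, context_len=8):
        self.weights = [random.gauss(0, 0.1) for _ in range(context_len)]

    def forward(self, context):
        # Collapse the context into a scalar "activation" (stand-in for inference).
        activation = sum(x * w for x, w in zip(context, self.weights))
        return {"motor": activation, "prediction": 0.5 * activation}

    def update(self, prediction, observation, lr=0.01):
        # Continuous training: nudge weights toward lower prediction error,
        # a crude stand-in for online gradient descent on the model's own state.
        error = observation - prediction
        self.weights = [w + lr * error for w in self.weights]

def sense_environment(step):
    """Stub sensor: a real agent would read cameras, proprioception, etc."""
    return random.gauss(0, 1) + 0.1 * step

def run_agent(steps=100, context_len=8, decay=0.9):
    model = TinyModel(context_len)
    # Noisy, decaying context: a fixed-size window of recent inputs.
    context = deque([0.0] * context_len, maxlen=context_len)
    for step in range(steps):
        obs = sense_environment(step)
        # Decay old context and inject a little noise, then append the new input.
        context = deque((x * decay + random.gauss(0, 0.01) for x in context),
                        maxlen=context_len)
        context.append(obs)
        out = model.forward(list(context))
        # Output space includes "motor" commands, not just tokens;
        # a real embodiment would drive actuators here with out["motor"].
        # The model continuously updates itself from prediction error:
        model.update(out["prediction"], obs)

run_agent()
```

Obviously a toy, but it shows why the distinction feels thin: nothing in the loop above is conceptually unavailable to an LLM-like architecture, only the I/O and the update schedule differ.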