To me a more important part of sentience is continuous existence. Today's AI is still very discrete. It exists and then it does not, over and over. Even large context windows are still just discrete calculations before the model goes back to sleep. A mostly always-on, streaming-style architecture, where the model is continuously fed input and sensory data and is always processing it, whether or not it is being asked a question, is necessary for true sentience.
Exactly. LLMs are closer to a plinko game than consciousness. And I'd bet they quickly start returning gibberish when you run them in a feedback loop, sort of like an op-amp slamming into a power rail in a badly designed circuit.
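To make the feedback-loop point concrete, here is a toy sketch. The `generate` function is a made-up stand-in, not a real LLM; it just amplifies the most frequent word in its input, a crude proxy for a model reinforcing its own patterns when its output is fed straight back in as input.

```python
# Toy illustration of a closed feedback loop: output becomes the next input.
# `generate` is a hypothetical stand-in for an LLM, not a real model call.
from collections import Counter

def generate(prompt: str) -> str:
    # Stand-in "model": appends the most frequent word of the prompt,
    # amplifying whatever pattern already dominates the input.
    words = prompt.split()
    most_common = Counter(words).most_common(1)[0][0]
    return prompt + " " + most_common

text = "the cat sat on the mat"
for _ in range(10):
    text = generate(text)  # feed the output straight back in

# After a few iterations the loop saturates on a single token,
# like an op-amp pinned to a power rail by runaway positive feedback.
print(text.split()[-5:])  # → ['the', 'the', 'the', 'the', 'the']
```

The real failure mode in LLM self-feedback loops is messier than this, but the structural point is the same: without fresh external input, the loop converges on its own dominant pattern.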
Hence the "mostly" portion of my post. You do not answer single questions and then pause your brain until someone taps you to ask another. During your waking hours you are processing data non-stop (you are even during sleep, really). If you processed data for a second and then paused, with no activity for minutes between questions, I would say you likely do not have sentience, as you would not have the time to consider anything outside that single question.
Even that will be "exist and then don't." Feed it the same recorded stream of data and you will get the same AI at the end. You could copy it, or fork it and start feeding it different input to get a different result.
As long as it's in a computer, its existence won't be as continuous as ours.
No more than 2880000. Although the brain is asynchronous and there are gaps. Dan Dennett described various experiments demonstrating that continuous time is an illusion (of our consciousness).