Of course. If I hire a human servant, I expect him to understand me even when my commands aren't very precise, and to perform precisely the action I wanted. In fact, the best servant is the one who does what you want before you even have to ask, right?
That's exactly what I expect from computers in the future.
I don't know why you're assuming the AI is smart enough to perfectly understand what you want even with incomplete knowledge, yet stupid enough to still need your help even though it's effectively capable of doing everything by itself.
It's perfectly possible to have an automaton that's good at predicting needs and inferring outcomes, without assuming it can set independent goals of its own unprompted.
One is driven by external stimuli in a deterministic way, which makes it a computer like any other.
The other is internally directed - which edges into questions of sentience, free will, and independent mentation, and is unknown territory for CS.
Siri and Viv are already heading in the former direction. Siri has some nice canned responses that can create a fun illusion of a personality, but not many developers - and even fewer NLP designers - are going to tell you Siri is independently goal-seeking.
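To make the "stimulus-driven, not goal-seeking" point concrete, here's a minimal sketch (purely hypothetical, not how Siri actually works): a canned-response assistant whose every output is a deterministic function of the input, with no internal goal state at all.

```python
# Toy reactive assistant: same input stimulus -> same output, every time.
# There is no planning, no persistent goals, no self-directed behaviour.

CANNED_RESPONSES = {
    "how are you": "I'm doing great, thanks for asking!",
    "tell me a joke": "Why was the computer late? It had a hard drive.",
    "what can you do": "I can answer questions, set reminders, and more.",
}

def respond(utterance: str) -> str:
    """Map an utterance to a canned reply; fall back to a default."""
    key = utterance.lower().strip("?!. ")
    return CANNED_RESPONSES.get(key, "Sorry, I didn't catch that.")

if __name__ == "__main__":
    print(respond("How are you?"))         # deterministic lookup
    print(respond("Tell me a joke!"))      # "personality" is just canned text
    print(respond("Plan my week for me"))  # no intent of its own, just a fallback
```

Swap the lookup table for a statistical intent classifier and you get something much more capable, but it's still the same shape: a mapping from external stimuli to responses, not an agent pursuing its own agenda.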