When the AIs are working in service of corporations, this seems incredibly unlikely.
We already see what happens when people's decision-making is coloured by mass-media advertising: an obese population trapped by debts taken out to fuel consumption.
It is in other people's best interests for you to work like a slave, be addicted to unhealthy habits, and run up vast debts in order to buy their products.
We keep allowing those with power to distort the markets, gaining themselves more money and more power at the expense of the little guy. I don't see any reason why AI in the service of the powerful will do anything but accelerate that.
Is it Google's responsibility to? I would say no. If the algorithms detect that an individual goes to a bar every Monday and Thursday night, and Google Now then starts providing information about said bar on Monday and Thursday nights, I don't see the problem.
But I think it would be a problem if, every Monday and Thursday night, Google Now started providing information about AA meetings in the area instead of bar information. It's up to the user to make the choice; Google Now just detects trends and then displays information based on those trends.
I go to the gym every Monday, Tuesday, Thursday, and Friday morning. And each of those mornings Google Now tells me how many minutes it will take me to get to the gym from my current location. Should Google Now start giving me directions to the nearest breakfast place instead? No, not unless that starts becoming my pattern.
If you're trying to change your lifestyle, it's more difficult when you have a bad friend constantly enabling the behavior you're trying to cease.
Google may not have a responsibility to be a good friend, but personally I'd prefer not to have a bad friend always following me around, so I'm a little less excited about this feature.
I think many would rather tell it when to start instead. The hard part of telling it to stop is that you may not even notice it has started, because it's something more nuanced than an obvious diet plan.
It may not be their responsibility (although if they have that information, acting on it would be the morally correct choice). However, responsibility aside, the CEO of the company saying "we're going to make your life better!" by having an AI push products is almost certainly not going to make your life better.
> Should Google Now start giving me directions to the nearest breakfast place instead?
That may depend on how much Waffle House pays for advertising, and that is the problem.
Don't you think that's a pretty severe statement with respect to free will and agency? If I'm just a consumer wired up to a machine that's deciding what's best for me (even with the best of intentions), doesn't that make me less human?
Should I just be an actor playing through a set itinerary of vacations and movies and burgers and relationships? Maybe you think it's that way already, except less perfect than it might be, but that's a pretty frightening notion to me.