At a certain point, is that necessarily a bad thing?
If the AI knows me so well, it's probably something I would buy/want anyway. If my house starts cooking bacon and eggs in the morning based on its analysis of me, and there's a 90% chance that's what I want, awesome.
A lot of people already do it with driving. On my commute to work every day, I go the way Google Maps tells me to. Google is effectively controlling traffic patterns, but it's made driving to work faster.
I realize it could possibly be used for evil, but by the time we get there, most if not all of the people reading this will be dead, so we'll never know.
Because to a company using ML to exert control over your behavior, you are a $. They want to extract value from you, just like every other company, government, etc. If they become really, really good at this, you are going to have a low standard of living (because you are being mined for value) and make incoherent decisions constantly (you will no longer be rational, nor have a functional personality).
To some degree, this has already been happening for decades, since psychologists began formalising advertising into a science. Look at the increasing percentage of lower-class families and how diminished their average wealth is. This isn't caused by time-localized events, like a five-year recession or a housing market bubble. It's the result of extremely fine-tuned and effective advertising that has caused cultural shifts toward greater and more irrational consumerism.
ML allows advertisers to be much, much more effective; they no longer even have to manually understand their market in order to manipulate it. Again, look at the growing disparity between the richest 1% and the rest of the schmucks. That 1% is soaking up the profits of extremely effective advertising, which the rest of us are paying for.
It's not that they know what you want, it's that they make you want it. And that destroys individuality, creativity and the last bit of freedom we still have.
To add to what @neuralRiot said: the mistake in your thinking is that you're imagining the AI as your assistant. But it won't necessarily be your assistant, since you don't actually own it, and especially if you're not paying money, you're not the customer.
To give an example, imagine if Google Maps took you on an alternative route, not because you'd arrive faster, but because on that route there are ads you need to watch, or because GMaps wants to free the road for some high profile travelers and you're just getting in the way.
Waze is already serving commercials while you're driving. And given the weird routes it has taken me on, I now have no idea whether Waze's algorithm chooses certain routes because they're faster, because it wants me to watch ads, or because it wants to take me off the main road to clear it for others.
Companies are already doing evil shit with real consequences. Facebook, for example, ran experiments on manipulating people's feelings. Target figured out a girl was pregnant before her father did. There are plenty of other examples. It's just that people aren't paying attention.
I find the attitude of "I go wherever Google Maps tells me to go" problematic. We should be asking more questions. When we don't, we are basically complying with whatever the government/corporations are asking us to do. That's not freedom.