> They’ll order your food and find the best deals on shopping, swipe your dating profile, negotiate with your lenders, and generally anticipate your every want or need.
Why should they do so?
I mean, seriously.
There is more money to be made in telling you that the AI will buy you the best deal while it actually buys pre-arranged (i.e. paid-for) "okay"-looking deals instead.
Similarly, dating apps and their related ecosystems have a long history of scammy behavior of all kinds, because they want to keep you using the app. And people with money have always found ways to get themselves highlighted more. I.e. there is more money to be made in "swiping for you" in a way that looks honest on the surface but isn't.
etc. etc.
There is basically always more money to be made in systematically deceiving people, as long as you do it well enough that people don't notice, or they don't have a real/realistic choice, i.e. are forced by the circumstances.
So the moment you take the human, and human conscience/morals, out of the loop and also have no transparency, there is a 0% chance of this ending well unless regulators preclude abuse. And AI is pretty effective at removing both transparency and the humanity from the loop. With how things currently look, especially in the US, regulators are more likely to do the opposite, i.e. remove consumer protections.