
I'm not necessarily of this opinion, but his view is that knowing what you had for breakfast and your address is the same as your bank account # or house keys; it's just that people haven't realized this yet.

His argument is that the reason people say house keys or a bank account # are different is that they are protecting something of value. What you do on a daily basis is just as valuable; you just don't know it yet.

The other argument against is "yes, my privacy/daily activity may have 'value', but unlike my bank account or house keys, people can't take something from me just by knowing it." That's more a matter of opinion, but the argument is that by knowing you so well, someone can take your independence. A bit dystopian, but imagine n years from now, ML is good enough to predict your responses and behaviors to 80+% of things. Companies now use it to effectively get you to make decisions they want. Combinations of timing, placement, location, repetition, persuasion, and counter-argument, all to produce a desired result, but unlike present day, extremely effective and possibly without your awareness. If there is a way to get our wetware to do it, ML will figure it out.

Advertising/Marketing/Sales become a game of Go, where you're the Go board. Best ML wins.

We like to think we're fully in control of our decision-making faculties, but study after study shows that's mostly an illusion. Our subconscious mostly decides for us, and then our brains rationalize this away after the fact.

The fear is that by giving away your emotions, your responses, your OCEAN five-factor personality model, your preferences, your habits, and a playground to test all of this in, you are potentially giving up your independence at a future time.

It's all dystopian and it's not guaranteed, but it follows from likely directions and very probable tech.



At a certain point, is that necessarily a bad thing?

If the AI knows me so well, it's probably something I would buy/want anyway. If my house starts cooking bacon and eggs in the morning based on its analysis of me, and there's a 90% chance that's what I want, awesome.

A lot of people already do it with driving. On my commute to work every day, I go the way google maps tells me to. Google is effectively controlling traffic patterns, but it's made driving to work faster.

I realize it could possibly be used for evil, but by the time we get there, most, if not all, of the people reading this will be dead, so we'll never know.


Because to a company using ML to exert control over your behavior, you are a $. They want to extract value from you, just like every other company, government, etc. If they become really, really good at this, you are going to have a low standard of living (because you are being mined for value) and make incoherent decisions constantly (you will no longer be rational, nor have a functional personality).

To some degree, this has already been happening for decades, since psychologists began formalising advertising into a science. Look at the increasing percentage of lower-class families and how diminished their average wealth is. This isn't caused by time-localized events, like a 5-year recession or a housing market bubble. This is the result of extremely fine-tuned and effective advertising that has caused cultural shifts towards greater and more irrational consumerism.

ML allows the advertisers to be much, much more effective; they no longer even have to manually understand their market to manipulate it. Again, look at the growing disparity between the richest 1% and the rest of the schmucks. That 1% is soaking up the profits of extremely effective advertising, which the rest of us are paying for.


It's not that they know what you want, it's that they make you want it. And that destroys individuality, creativity and the last bit of freedom we still have.


To add to what @neuralRiot said, the mistake in your thinking is that you're imagining AI as your assistant. But it won't necessarily be your assistant, since you don't actually own it, and especially if you're not paying money, you're not the customer.

To give an example, imagine if Google Maps took you on an alternative route, not because you'd arrive faster, but because on that route there are ads you need to watch, or because GMaps wants to free the road for some high profile travelers and you're just getting in the way.

Waze is already serving commercials while you're driving. And given the weird routes it has taken me on, I now have no idea whether Waze's algorithm chooses certain routes because they are faster, because it wants me to watch ads, or because it wants to take me off the main road to clear it for others.

Companies are already doing evil shit with real consequences. Facebook, for example, ran experiments on manipulating people's feelings. Target figured out a girl was pregnant before her father did. There are plenty of other examples. It's just that people aren't paying attention.


I find the attitude of "I go wherever Google Maps tells me to go" problematic. We should be asking more questions. When we don't, we are basically complying with whatever the government/corporations ask us to do. That's not freedom.



