>The New York Times once experimented by predicting the moods of readers based on article content to better target ads, enabling marketers to find audiences when they were sad or fearful
Can we maybe go one step back in the discussion and, rather than only discussing what we should do about it, simply ask: does it even work?
There's Zuboff's book about surveillance capitalism that echoes much of what the blog post talks about, that recent Netflix documentary everyone was talking about, and so on, but how much evidence is there that this isn't all just mostly bullshit?
When the Cambridge Analytica scandal broke, they used the buzzword 'psychographic targeting'. Turns out, psychographic targeting doesn't even really work[1]. A relative recently sent me an article about China allegedly using mind-control helmets to control the thoughts of children, complete with a picture of children wearing helmets with blinking lights. I have yet to see a facial emotion detection system that labels Harold[2] of "Hide The Pain" meme fame as anything other than 'happy'.
I'm more afraid of how bogus all these systems are and of the unquestioning power people attribute to tech, which itself enables these firms. It's no wonder they keep inviting Yuval Noah Harari for talks; they must feel flattered.
I think the article does a good job of showing that current technology has the power to be far more persuasive than anything that came before it. You could nit-pick the effectiveness of individual methods, but the broader point stands: technology, and our increasing use of it, makes it possible to implement, test and refine new persuasive techniques at a scale never seen before in human history. The evidence for this is obvious.
It stands to reason that we need to draw lines around what appropriate persuasion looks like, not because these new techniques are definitely abusing our freedoms, but because they might and in principle could. The potential in itself should be enough to take the role of technology seriously and consider where the boundaries should lie.
My point is that I think this kind of thinking is disempowering and counterproductive. Framing technology companies as all-powerful, even if that framing is inaccurate and overblown, makes tech supremacy seem inevitable to people who see no alternative to it. ("If they can predict me this well, they must know better, right?")
I think this is why Silicon Valley loves to invite the critics who argue along these lines. This criticism doesn't puncture their own notion of supremacy; it just makes them look like Bond villains, and they secretly love it.
Showing that in many cases the emperor has no clothes, that complex human behaviour can't be reduced to some ML, and that in many ways it's embarrassing, self-aggrandizing marketing is, I think, the much better way to rein in these firms.
Talking about the limits of how we use technology in society is hardly disempowering - quite the opposite. I would have thought that talking less about the potential supremacy of technology would be the real counterproductive measure in trying to curb that potential reality.
Whether or not the influence of technology is overblown, the fact still stands that we need to draw boundaries around how technology can be used for persuasion. The question should be "where is the line?" not "should we even bother drawing one?".
This is also very common in politics. There is a presumption that the government is corrupt and working counter to public benefit.
In reality, the voting public ultimately has 100% control. Most of the people who spend time lamenting the power of political advertising, lobbying, etc. have never once knocked on a door and tried to change someone's mind about an issue.
Unfortunately, believing in a world view where you are a disenfranchised victim is incredibly convenient. It allows individuals to absolve themselves of responsibility while doing nothing.
You're absolutely right. The voting public is in control and can collectively act to get government to work for shared public benefit. We do have individual and collective responsibility here and empowerment to change things too.
It makes sense that empowered citizens in a society would naturally want to protect their rights to not have their natural human biases gamed or manipulated. Thinking ourselves above human nature and beyond the tactics of persuasion would be naïve but we needn't see ourselves as helpless victims. Instead, we should act to defend our rights and remain empowered and in control. We're not disenfranchised and we can introduce regulation to ensure that our freedom is preserved.
I agree, and that framing resonates much more strongly with me. It is more powerful to say "I don't want to see that" than to insinuate that it is a collective problem, or perhaps a problem for other, inferior people, e.g. halfwit republicans or some such.
For example, regression models aren't exactly AI magic, but left unchecked they compute discriminatory insurance premiums in a way we find morally unacceptable (a toy sketch of this is below).
(addendum: actually, I can't help but feel that, even though it's been around since before I was born, I fully understand how it works, and I use it on a regular basis, regression still is AI magic ;-) )
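To make that concrete, here is a minimal, hypothetical sketch: plain ordinary least squares on made-up data, with every variable name and number invented for illustration. It shows how a mundane regression that never sees a protected attribute can still reproduce discrimination through a proxy feature:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: a protected attribute we are not allowed to price on,
# and a neighbourhood flag that happens to correlate strongly with it.
protected = rng.integers(0, 2, n)
neighbourhood = (protected + rng.random(n) > 0.7).astype(float)  # proxy feature
mileage = rng.normal(12_000, 3_000, n)                           # legitimate risk factor

# Historical claim costs are partly driven by the protected attribute.
claims = 400 + 0.01 * mileage + 150 * protected + rng.normal(0, 50, n)

# Fit an ordinary least-squares "premium model" that never sees the
# protected attribute directly -- only mileage and the proxy.
X = np.column_stack([np.ones(n), mileage, neighbourhood])
coef, *_ = np.linalg.lstsq(X, claims, rcond=None)
premiums = X @ coef

print("average premium, group 0:", round(premiums[protected == 0].mean(), 2))
print("average premium, group 1:", round(premiums[protected == 1].mean(), 2))
# The gap between the two averages is the proxy quietly reproducing the
# historical discrimination, with no 'AI magic' involved anywhere.
```

Nothing here is sophisticated, which is the point: the moral problem comes from the data and the proxy, not from any mysterious intelligence in the model.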
It's a hypothetical. Waiting until the technology exists is probably a bad idea, since at that point there'd be someone with the means and motive to convince you that the technology is entirely benign.
> I'm more afraid of how bogus all these systems are and the unquestioning power people attribute to tech
They say sufficiently advanced technology is indistinguishable from magic, but in the day-to-day business world it manifests more like "technology I haven't bothered to actually understand is indistinguishable from magic". And everybody loves magic!
This is an example of a self-fulfilling prophecy: words aren't avoided because they are bad; they are avoided to game the system, and by being avoided they become "bad words"....
...Sorry, English is not my native tongue.
The industry described here is heavily degraded and removed from reality; it is a sorry mix of gambling, lying and posing. It was predictable that they would swallow AI systems for their "work" as if their lives depended on it. This is blind faith; you could use a prophet to do the same.
There would be no net negative effect if it vanished overnight, aside from slower price formation. Regulation could do so much here compared to rules for advertisement or AI itself.
This is actually a topic about bias, since people use tech for self-validation to a large degree.
[1] https://www.nature.com/articles/d41586-018-03880-4
[2] https://static.independent.co.uk/s3fs-public/thumbnails/imag...