If we know that humans have all sorts of cognitive biases, how is it ok to exploit that fact while at the same time insisting there's some kind of free market?
Say you discover that putting good-looking women next to cars causes car sales to increase. Why does nobody question whether it's legitimate to do so? It's as if there's a line between actively lying ("Studies show that men who buy this car will find many, many women attracted to them") and just putting the suggestion there, for some as-yet-undescribed but effective cognitive bias to do its magic.
Some advertisers even make a joke out of it, e.g. the Lynx ads where the dude is thronged by a huge horde of women. It's a cliché for a good reason.
I suppose most people will just say you have free will and it's your own fault for thinking what was suggested, but I sense this is more of a grey zone than most people are willing to admit. How can the free market work if everyone is so easily affected by suggestion?
---
Of course this also applies to the free market in ideas. In what sense are people free to make up their minds if what they see is decided for them, whether it's the government doing the deciding or Facebook? Isn't this the same as the authoritarian nightmares we've been pointing fingers at?
There is absolutely no limit to this. Putting on a suit for a financial job interview exploits the cognitive biases of interviewers. Tattoos exploit the cognitive biases of hipsters. Putting hockey-stick growth projections in pitch decks exploits the cognitive biases of VCs. Equity grants (pretty much always) exploit the cognitive biases of startup employees. Driving a fancy car exploits the cognitive biases of (a very large portion of) the dating market.
Human beings are not computers. No aspect of human behavior is perfectly logical or rational. You simply cannot ban emotional appeals in principle, because all appeals have an emotional component. This way lies a dystopia far more terrible than "grandpa shared some nonsense on facebook".
Yes, the problem lies not with the technology, but with the gross asymmetries created by wealth and power imbalances. There is an enormous difference in the danger posed by an individual salesperson using cognitive bias on a personal level, and a multibillion-dollar conglomerate using it in an automated, industrial way to exploit millions of people. Those two situations should be regulated very differently.
But even with large corporations, how far would you go, and what counts as a bias? I buy Apple laptops for the irrational reason that I find them pretty. I am biased towards spending money on items I find aesthetically pleasing. Should Apple be banned from making pretty laptops? Should they be legally restrained from overpricing their products because it creates a possibly unwarranted impression of being "premium" to people susceptible to that bias?
Just about everything we do has some component of persuasion or an appeal to biases. It's a huge can of worms.
What's a monopoly? What is a business expense? What's NSFW? There are lots of gray areas in regulation, to the point where ambiguity is more the rule than the exception. This is just another aspect of complex behavior that, now that we see it abused, we realize we need to regulate.
Draw it anywhere to start. Some things are obviously bad, while others are obviously good. Put the line somewhere in the grey area in between and tweak as needed until you find the right balance.
Definitely not easy, but probably doable to a certain extent.
It was already drawn in 1789. It's far worse to accidentally suppress the truth than to share a lie. Falsehoods will reveal themselves in time because of the very fact that they are not true. The opposite is not the case. There are infinite lies but only one truth.
You're making this too easy for yourself. Modern communication technology didn't exist in 1789, and the truth can be difficult to determine for people -- even scientific truth. Outrage cycles are very powerful.
Clearly suppressing a truth is bad. But modern technology causes massive amplification of falsehoods in a way that just didn't exist in the past, and that's bad, too.
The format of communication is irrelevant. At the end of the day, I make my own decisions and so does everyone else. You cannot suppress the bad out of life, because it's humans that cause the bad, by accident or on purpose. Why are people so worried about disproving a falsehood? Is it that hard for you? Do you need to run and hide from a difficult conversation? How do you ever expect to change anyone's mind? Oh, right: you want the government to make them. You're motivated by hate against a fictitious enemy of the past, because it makes it easier to justify your extremely pitiful fortitude.
From a libertarian, individualistic perspective, there is much in what you say. But telling people off for being easily persuadable isn't a viable solution. You have to deal with people as they really are, not as you wish them to be.
From a utilitarian perspective, fake news and persuasive technologies do real harm, and not just to the people who are persuaded. Unless something is done at the group level (i.e. by the state) the harm will continue. You might consider that an appropriate trade-off for liberty, but others can reasonably disagree without being weak.
It’s far worse to accidentally suppress the truth than to share a lie. Falsehoods will reveal themselves in time because of the very fact that they are not true.
A lot has changed since 1789, and there is no reason not to re-examine the assumptions that were made then in light of technological developments over the past two centuries plus.
Now it came about after this that Absalom provided for himself a chariot and horses and fifty men as runners before him. Absalom used to rise early and stand beside the way to the gate; and when any man had a suit to come to the king for judgment, Absalom would call to him and say, “From what city are you?” And he would say, “Your servant is from one of the tribes of Israel.” Then Absalom would say to him, “See, your claims are good and right, but no man listens to you on the part of the king.” Moreover, Absalom would say, “Oh that one would appoint me judge in the land, then every man who has any suit or cause could come to me and I would give him justice.” And when a man came near to prostrate himself before him, he would put out his hand and take hold of him and kiss him. In this manner Absalom dealt with all Israel who came to the king for judgment; so Absalom stole away the hearts of the men of Israel.
We're going to have to figure out the right limit at some point. Otherwise we're heading towards some neural network continually feeding depression vulnerable people content that keeps them in a perpetual state of depression in order to optimize engagement. Perhaps we're already there.
There's a big difference between an accountant putting on a suit for a job interview, and a company having realtime insight into the behavior of every single accountant in the U.S. as they search for a job.
The latter exists and has been used by Google to offer jobs to users for whom they may have an opening. (Or perhaps a competitor has openings and Google wants to ensure the best candidates don't work there, which is what I would do if I were Google.)
That several gargantuan companies have and use this kind of insight into widescale human behavior is more terrible than "grandpa shared some nonsense on facebook."
But you can't just say that bias exploitation should never be banned simply because there's a large variation in how it's done. Obviously some of these things are unacceptable and others are subtle enough to be acceptable.
It'd be like saying that all car driving should be banned because high speed driving is extremely risky. Of course that makes no sense, because there's a huge difference between driving 100mph in a neighborhood and just doing 25mph, and it's also the reason we have speed limits, which draw a line in the sand, past which we say "this is unacceptable."
> You simply cannot ban emotional appeals in principle, because all appeals have an emotional component. This way lies a dystopia far more terrible than "grandpa shared some nonsense on facebook".
True enough. But are you implying (not saying you are, just asking) that ~"nothing can be done"?
"Banning emotional appeals....This way lies a dystopia..." is just one option - might there be other approaches that could improve the situation?
Obviously there are approaches that could improve this situation. But most people like to project the stupidest idea they possibly can on anything they have a negative emotional response for.
This problem space has barely been cracked open, and we're going to have to get very smart about it before it destroys us.
Rallying for mass adoption, generally speaking, should be done after the idea has been discussed in an open-minded setting.
Most ideas become refined as they grow from very small open-minded spaces towards mass appeal. So perhaps this idea can be refined as well; no sense in not trying.
Maybe, but as of yet there don't appear to be any clearly net-positive solutions (IMO) that can curb both emotional appeals and cognitive biases without introducing harms themselves.
Teaching people logic, critical thinking, psychological biases, etc. so they understand them in abstract thinking mode, when the topic is the skill itself, is one thing; teaching people to practice those skills during realtime conversations about unrelated object-level topics (say, politics) is something else entirely.
> I cant think of any downside here.
I can imagine many people who may see a downside of a population who is broadly capable of skilful critical thinking. I don't think it's an accident that we only talk about the crucial need for it, but never talk about actually doing something about it.
> teaching people to be able to practice it during realtime conversations of unrelated object level discussions (say, politics) is something else entirely.
In my experience, once you give people the basic toolkit for introspection, it can kick off a positive feedback loop. Not everyone will become a stoic, but they can make meaningful progress on their own.
> I can imagine many people who may see a downside of a population who is broadly capable of skilful critical thinking. I don't think it's an accident that we only talk about the crucial need for it, but never talk about actually doing something about it.
On the contrary, I think those who see this as negative are few and far between. Democratic power in the western world has never been greater than it is today, so if this is something the people want, there is very little that can stand in their way.
> In my experience, once you give people the basic toolkit for introspection, it can kick off a positive feedback loop.
No doubt it is an improvement over baseline - but do you have any opinions on this specific notion that skills in abstract thinking mode may not successfully transfer to object level thinking, particularly when under pressure (ie: conversations)? I truly believe that this is a real phenomenon.
> On the contrary, I think those who see this as negative are few and far between. Democratic power in the western world has never been greater than it is today, so if this is something the people want, there is very little that can stand in their way.
Oh, I think this is a very small group of people. I believe the general public has wanted a lot of things for a very long time (general welfare for all people, within reason, especially domestically), but improvements along these lines seem rather marginal, despite essentially unending productivity and GDP growth. Something seems rather imperfect here - the advertised benefits of democracy seem more impressive than the results, not unlike TV commercials for a Big Mac Meal versus what you actually get at the restaurant.
>Oh, I think this is a very small group of people. I believe the general public has wanted a lot of things for a very long time (general welfare for all people, within reason, especially domestically), but improvements along these lines seem rather marginal, despite essentially unending productivity and GDP growth. Something seems rather imperfect here - the advertised benefits of democracy seem more impressive than the results, not unlike TV commercials for a Big Mac Meal versus what you actually get at the restaurant.
I think the lack of progress is due to a fundamental misunderstanding of how democracy functions and drives progress. Meaningful progress is only made when persuasion occurs. Living in political or philosophical silos then going to the voting booth only perpetuates the status quo. To drive forward, you have to change someone's mind. Just about anyone can double their political impact on an issue by going out and having a 1 hour conversation with someone who is indifferent on the topic. The biggest trick ever played on the public is telling them they are disenfranchised and that the system is rigged. If people spent 5% of the time and effort they spent lamenting corruption or political advertising on persuasion, change would advance at a rapid pace.
> I think the lack of progress is due to a fundamental misunderstanding of how democracy functions and drives progress.
I assume you mean people's perception of the lack of progress. Actual physical progress is independent of the public's perception of it, I would think.
> Meaningful progress is only made when persuasion occurs.
Makes sense. I pay fairly close attention to the variety and nature of political persuasion, and I find it rather... interesting. I think a fair argument can be made that it is often exerted in directions that seem inconsistent with the general well-being of the population.
> The biggest trick ever played on the public is telling them they are disenfranchised and that the system is rigged.
I agree that that is "a" trick, but I don't agree that it is the biggest one, by a long shot. Actually, I think there's more than a little evidence that suggests there's a fair amount of truth to it.
> If people spent 5% of the time and effort they spent lamenting corruption or political advertising on persuasion, change would advance at a rapid pace.
I can't imagine how this could be true. Who would be the parties being persuaded in this scenario, and how would that exert influence on the amount of change?
And then looking at it from another perspective: compare the change in well being of the average person in China versus the USA over the last 10 to 20 years - does the above explain the entirety of the differential? Or, might there be some other variables in play here?
>> I think the lack of progress is due to a fundamental misunderstanding of how democracy functions and drives progress.
>I assume you mean people's perception of the lack of progress. Actual physical progress is independent of the public's perception of it, I would think.
I totally agree.
>> Meaningful progress is only made when persuasion occurs.
>Makes sense. I pay fairly close attention to the variety and nature of political persuasion, and I find it rather... interesting. I think a fair argument can be made that it is often exerted in directions that seem inconsistent with the general well-being of the population.
I think it depends on what you count as political persuasion. I think it is more than ads and media. It includes conversations at dinner tables and in lunchrooms. It includes school curricula, protesters, and door-to-door political canvassing. I would agree that political advertising and media are often inconsistent with general well-being. I think they are disingenuous and often exerted to benefit a small subset of the population, either corporate shareholders or individual politicians. That said, in my opinion, the influence of advertising and media is relatively small in comparison to the other factors. Sometimes, like a swing voter, it might sway the public a critical 1%, but collectively, the other forces are far more influential.
>> The biggest trick ever played on the public is telling them they are disenfranchised and that the system is rigged.
>I agree that that is "a" trick, but I don't agree that it is the biggest one, by a long shot. Actually, I think there's more than a little evidence that suggests there's a fair amount of truth to it.
See above
>> If people spent 5% of the time and effort they spent lamenting corruption or political advertising on persuasion, change would advance at a rapid pace.
>I can't imagine how this could be true. Who would be the parties being persuaded in this scenario, and how would that exert influence on the amount of change?
The ideal parties to persuade are those with weakly held views. 40% of Americans don't vote. Most of those who do vote have very little information and opinion on down-ballot positions and propositions. If someone wanted to, I think they could persuade an indifferent or marginal neighbor or coworker to vote on a topic with an hour of face-to-face conversation. For context, the average American spends 4,000 hours a year consuming TV and digital media. Anecdotally, I too am part of the problem, because I easily spend dozens of hours a year listening to or complaining about politics, or reading articles to "stay informed".
>And then looking at it from another perspective: compare the change in well being of the average person in China versus the USA over the last 10 to 20 years - does the above explain the entirety of the differential? Or, might there be some other variables in play here?
This is such a huge and interesting question that I'm not sure we could even define the scope or the definitions on HN. I would be happy to follow up via email if you are interested.
> I think it depends on what you count as political persuasion. I think it is more than ads and media. It includes conversations at dinner tables and in lunchrooms. It includes school curricula, protesters, and door-to-door political canvassing. I would agree that political advertising and media are often inconsistent with general well-being. I think they are disingenuous and often exerted to benefit a small subset of the population, either corporate shareholders or individual politicians. That said, in my opinion, the influence of advertising and media is relatively small in comparison to the other factors. Sometimes, like a swing voter, it might sway the public a critical 1%, but collectively, the other forces are far more influential.
This is all true, but my concern is with what seems to go unnoticed, which is: the nature of "conversations at dinner tables and in lunchrooms", or in internet social media, forums, etc., seems to be highly influenced by the initial framing of the stories by politicians, journalists, thought leaders, etc. These topics do indeed get discussed, but are these discussions comprehensive of all dimensions and perspectives? Or is it something more along the lines of: an event occurs, and various entities "get out in front of it", establishing specific narratives and perspectives for the public to argue over, exerting ongoing maintenance of these narratives, discrediting or redirecting discussions that go off script, and amplifying those that stay on script? If one pays very close attention to stories from a meta-perspective, I propose that these techniques can be seen clear as day (and this theory is far from novel; it's as old as the hills).
> The ideal parties to persuade are those with weakly held views. 40% of Americans don't vote. Most of those who do vote have very little information and opinion on down ballot positions and propositions. If someone wanted to, I think they could persuade an indifferent or marginal neighbor or coworker to vote on a topic with an hour of face-to-face conversation.
No doubt. And if executed skilfully, one may actually achieve the watered down proposals tabled by politicians, as opposed to the true democratic desires of the population (see: single-payer Medicare for All). And even when someone is on to this and tries to beat them at their own game?
"Alexandria Ocasio-Cortez rejects left-wing calls to force Pelosi to hold a 'Medicare for All' vote in exchange for her vote for the speaker"
Pay attention to the language and assertions in that article, as well as AOC's hypocrisy (prior complaints vs current stance):
"Gray and Dore pointed out that Ocasio-Cortez had previously suggested she supported holding a House vote on Medicare for All." "We can't even get a floor vote on Medicare for All," the congresswoman said in January. "Not even a floor vote that gets voted down. We can't even get a vote on it."
This is what I would call professional lying (or, Public Relations, Mass Perception Management, etc).
Is no one able to see the amount of persuasion that exists in "the news"? Am I imagining this? Do politicians not have a history of sticking to talking points?
Why does this abstract aspect of the situation so rarely get discussed on an intellectual site like HN? Why do we fight with each other over the same old object level nitpicks that one finds on Reddit? Are we not capable of more than that?
> This is such a huge and interesting question that I'm not sure we could even define the scope or the definitions on HN. I would be happy to follow up via email if you are interested.
I believe this community should put some genuine, honest effort into realizing that abstract perspectives and discussions of these matters are "the" path to eventual resolution, rather than succumbing to our intuitive Pavlovian behavior of reacting to the latest outrage du jour that the media pushes into people's brains. The underlying situations are not that dynamic; they are largely static across time. So why do we not analyze and discuss them accordingly?
Is the way we undertake systems analysis of Project Planet Earth (emotional, tribal, chaotic and disorganized, with no documentation or tasks or timelines) similar to how we do the same for the complex (but far simpler) systems most of us work on at our day jobs?
EDIT: This "fresh daily news for the proletariat to argue about" approach to journalism reminds me of an old Joel Spolsky post:
>> When I was an Israeli paratrooper a general stopped by to give us a little speech about strategy. In infantry battles, he told us, there is only one strategy: Fire and Motion. You move towards the enemy while firing your weapon. The firing forces him to keep his head down so he can’t fire at you. (That’s what the soldiers mean when they shout “cover me.” It means, “fire at our enemy so he has to duck and can’t fire at me while I run across this street, here.” It works.) The motion allows you to conquer territory and get closer to your enemy, where your shots are much more likely to hit their target. If you’re not moving, the enemy gets to decide what happens, which is not a good thing. If you’re not firing, the enemy will fire at you, pinning you down.
>> I remembered this for a long time. I noticed how almost every kind of military strategy, from air force dogfights to large scale naval maneuvers, is based on the idea of Fire and Motion. It took me another fifteen years to realize that the principle of Fire and Motion is how you get things done in life. You have to move forward a little bit, every day. It doesn’t matter if your code is lame and buggy and nobody wants it. If you are moving forward, writing code and fixing bugs constantly, time is on your side. Watch out when your competition fires at you. Do they just want to force you to keep busy reacting to their volleys, so you can’t move forward?
>> Think of the history of data access strategies to come out of Microsoft. ODBC, RDO, DAO, ADO, OLEDB, now ADO.NET – All New! Are these technological imperatives? The result of an incompetent design group that needs to reinvent data access every goddamn year? (That’s probably it, actually.) But the end result is just cover fire. The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features. Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP. The companies who stumble are the ones who spend too much time reading tea leaves to figure out the future direction of Microsoft.
Sound familiar? Why can we only achieve the smallest of accomplishments on matters that have (or would have, ex-propaganda) widespread public support, often for years if not decades, often bi-partisan? Might it be because the public is always reacting to the latest outrageous development, always in a state of confusion, never spending significant amounts of time focusing on one issue until it is fixed?
I'd agree there isn't a downside to being able to identify biases and analyze your emotions, but that doesn't actually curb actions based on biases or emotions. You would have to proscribe biases with respect to particular metrics, and the choice of which metrics to pick is where the problems arise, because you would also, implicitly or explicitly, be prescribing what things to value.
Some economists have tried arguing that advertising adds value to products, but I think it only hinders people from making choices based on the product's merit. It's a distortion of the market.
We wouldn't lose much if advertising were simply forbidden. A traditional definition of it would probably be enough to get rid of most of it. The main problem is that huge sums of money are involved, and almost all media profit from it to some extent, so there's a huge incentive to shut down this conversation.
> Some economists have tried arguing that advertising adds value to products, but I think it only hinders people from making choices based on the product's merit. It's a distortion of the market.
Precisely. Spec sheets, unbiased reviews, ratings systems that vet purchases, those add value. Almost by definition, what we consider advertisement is designed to distort humans from making rational free-market choices. Stretching the analogy: Attention is finite, therefore every ad you consume moves you further from perfect information.
That's a purely pragmatic argument without getting into the whole bit where ads use all sorts of psychological tools to shape behavior.
It would seem that review sites would be a massive beneficiary of a ban on advertising; whether they review in a Consumer Reports manner (scientifically backed, generally) or a Bob's Internet Affiliate Shills or Celebrity Chris' Favorites manner is left as an exercise to the reader.
At least review sites would be something that a user would have to seek out, rather than having the information blasted into their field of attention at times the advertisers found strategically optimal.
Also, if people found themselves regretting purchases of products and services recommended to them by a particular review site, they would be free to use a different review site in future.
Hopefully these sites wouldn't operate on a non-refundable yearly-subscription model, making people reluctant to "waste" the money they had spent. The business model for such sites might be a little tricky, since they shouldn't really be making money from affiliate fees, and web advertising wouldn't exist in this world either.
>At least review sites would be something that a user would have to seek out, rather than having the information blasted into their field of attention at times the advertisers found strategically optimal.
This is the crux of the question. I don't consent to any of it, but just walking down the streets I'm targeted by an assault on my brain from every direction. Ditto picking up my phone or browsing the web (were it not for uBlock). It's not ethical.
As for review sites being shilled, mandate big disclaimers detailing sponsors (a shill praising product X would have to show a big banner saying "this content paid for by X", as is already done in other contexts). Reputation would eventually form around trusted reviewers.
>We wouldn't lose much if advertising would just be forbidden.
We would lose the current ad industry and just replace it with some other form of it.
You can't get rid of advertising, because the existence of the product itself is already advertising. Let's imagine that advertising were banned. How could a car manufacturer still advertise their cars? Put bigger logos on them and sell lots of cheaper models for a while. That way the city will be full of your vehicles, and any time someone sees a car they'll think of yours. The same goes for Coca-Cola and other brands; in that case they just need to take up more shelf space.
Wouldn't that be a great outcome? Cheap cars or cheap cola? That's better than more expensive products (more expensive because the cost of advertising is turned into billboards and banner ads).
For example, if a concert promoter can't buy a 30-second spot on local radio to promote an upcoming show, they will instead pay for the band to be interviewed by a DJ, where the interview is little more than the band talking about the upcoming show. I don't think that's an improvement.
We have to ditch the cognitive bias that “products” are what we should contribute.
Information exchange can take place when people use their imagination to create with raw materials. Stock stores with raw materials. Focus public governance on health, not public-private collusion.
Think like open-source; take computer, make it useful to you.
We're circling the last generation's emotional model for packaging, given their logistics capabilities.
IoT can act as a model here; smaller uni-task gadgets or general compute gadgets, needing less code.
The sunk cost fallacy at scale is how our system works now. That simply props up the ingenuity of those who got there first.
Why the ever-loving hell do we need teachers with high education bills and master's degrees? Were we not able to teach reading for the thousand years before?
This building of social monoliths around billionaires merely follows the old social monoliths built around kings and priests.
We move around atomically but pay a tax on our effort upwards. The result? Decades of inequality growth, deflation of buying power.
It’s not technical change we need but emotional; we don’t owe these people deference.
Pre-packaged experience is no different than “here’s your Bible, Timmy.”
Enforcement doesn't have to be perfect to benefit society. It was once illegal to advertise drugs to the public; after that was legalized, drug companies' marketing budgets suddenly overtook their R&D budgets. In my opinion that was a net loss for society.
From what I can tell, about 70% of drug marketing budgets go to persuading doctors, not consumers, so I don't see how your example applies. In fact, it just shows that companies will find a way to spend marketing money to influence people, given enough ROI, even if they can't target consumers directly.
As an absurd example, if someone were to invent a ray-gun that reprograms people's brains with arbitrary beliefs it would be 100% unethical and illegal to use.
However, ads or fake news articles that take many exposures to reprogram you (through the cognitive-bias back channels you mention) are perfectly fine. ¯\_(ツ)_/¯
With the raygun there's a clear cause and effect. You can fairly easily prove that what changes beliefs is the raygun. You can't do that with ads and fake news articles.
The other point is that with ads and articles you have to engage with them yourself. Something can grab your attention, but you can decide to ignore it anyway.
There isn't a clear cause and effect between smoking a single cigarette and developing cancer, or even between smoking a single cigarette and addiction. For a long time, tobacco companies used such equivocation to avoid regulation of their industry.
Instead, we look at the total effect (and often the intended effect) of multiple small changes, and decide whether they are a net positive for society.
Do we have a total effect of ads and fake news on society? Something that's actually reliable and not propaganda itself? It seems extremely difficult to be able to differentiate between an ad letting people know that a product exists and it directly influencing your decision. Or you could consider letting people know about the product existing as influencing the person, but then it's hard to argue that ads are actually bad.
With tobacco it's far easier to show a correlation, because people can just not smoke. People can't just avoid the effects of advertising, because it's all around us. Even word of mouth can be an ad campaign.
Also, tobacco probably is too regulated. It should be regulated to stop other people from experiencing the negative effects of somebody smoking. Limiting what flavors cigarettes can have and enormously taxing them seems like it has gone too far. People should be allowed to make their own choices in a free society.
> Do we have a total effect of ads and fake news on society? Something that's actually reliable and not propaganda itself? It seems extremely difficult to be able to differentiate between an ad letting people know that a product exists and it directly influencing your decision. Or you could consider letting people know about the product existing as influencing the person, but then it's hard to argue that ads are actually bad.
Thinking out loud, I would be curious to see a study on people who play 'Loot Box/Gacha' style games. Include both people who have been playing for a while, as well as people who 'start' playing during the study. Include non-players.
Measure their other 'vice' behaviors (i.e. Drinking alcohol, nicotine, real gambling, gentlemen's clubs, etc.) over the same time.
My hypothesis is that in those who are introduced to Gacha games, you will see an increase in other 'addictive' behavior, because the content they see will whittle away at their resistance to other addictive behaviors.
>My hypothesis is that in those who are introduced to Gacha games, you will see an increase in other 'addictive' behavior, because the content they see will whittle away at their resistance to other addictive behaviors.
I understand what pattern you're going for, but I don't think the study would show that. The number of people that play gacha games is small compared to the number of people that smoke, drink, gamble, etc. Chances are that you'd likely see the reverse - doing those things could increase your likelihood of playing gacha games (or rather, spending money on them - it's entirely possible to play most of these games virtually for free).
It isn’t exactly on point but at some level the investment industry is aware of the difference and does a fairly good job at walking the line. There’s a reason every prospectus looks the same: it’s meant to convey facts and let investors arrive at their own decision rather than “sell” them on making an investment.
> People should be allowed to make their own choices in a free society.
When the rest of the "free society" has to bear the costs of certain choices, it doesn't seem so illegitimate to argue that it has some interest in constraining those choices.
And there of course is the endless contradiction of being a social creature: on the one hand, being part of a society creates possibilities that were not there before, but also leads to constraints that were also not there before.
Endless political philosophers across centuries have grappled with how best to resolve this paradox, with no definitive answer at this point.
>My hypothesis is that in those who are introduced to Gacha games, you will see an increase in other 'addictive' behavior, because the content they see will whittle away at their resistance to other addictive behaviors.
But we obviously tolerate this to some extent. I think it's reasonable to limit smoking so that it doesn't cause second hand smoking effects on other people. However, going further than that seems odd to me. If they're only hurting themselves then I don't see the problem with it. We allow people to do extreme sports, we allow them to drink alcohol, we allow them to do high stress jobs, we allow them to eat sugar, we allow them to do a long list of things that can hurt them, but we tolerate most of those because people should be free to choose how to live their life. Yet somehow on a small list of things we control people's behavior. If they don't follow it then they're considered immoral. (We don't call them immoral, but that's how we treat them.)
People who are addicted to gacha are psychologically manipulated, not to mention the gambling aspects. Gambling is already regulated across the world.
Just like laws protect the physically weak from exploitation, they should protect the psychologically weak from psychological exploitation.
You wouldn't accept someone holding you at gunpoint demanding your money; you would have no choice but to give up your belongings. Even though it was your choice to hand over your stuff, you were coerced into it. Gacha/loot boxes and similar addictive mechanics prey on your psyche.
This isn't some conspiracy, this is a tried and true method which you can learn about from the whaler himself[1].
> Something can grab your attention, but you can decide to ignore it anyway.
I disagree. It's been shown[0] that repeatedly exposing people to information makes them more likely to believe it's true. As a further example, try to not think about your breathing after reading this sentence :)
> The other point is that with ads and articles you have to engage with them yourself. Something can grab your attention, but you can decide to ignore it anyway.
That's a slippery slope, when we remember that researchers are constantly looking for the right hooks that edge the 'persuasion' closer to entrapment.
Doubly so when we consider neuroatypical people. While many people think of a 'seizure' as something that results in shaking movements, there is a nonzero portion of the population that can have seizures that are more 'internal': the stimulus results in either a locking of focus on an item, or a sort of disoriented haze (at which point, for some, the subconscious is wide open).
Those who take the time to study hypnotism/NLP, and who are more wise than intelligent, understand that we are playing with fire here, folks.
There's this problem in the sciences and higher academe where we give things names that sound like one thing but mean another. Einstein's Special Theory of Relativity is a common example. Ask the average person on the street what it means and they'd say, "Well of course I know what it means, after all, it means that everything is relative."
Well obviously that's not what it means. What it means is that the speed of light is rather impossibly constant. Likewise, economics has this same issue. "Free Market" does not mean everyone has complete free will and has total immunity to persuasion. "Free Markets" defines a philosophy of trade where individuals can own and sell property. Whether or not people are 'easily affected by suggestion' as you put it, has got no impact on the concept of free markets.
> Whether or not people are 'easily affected by suggestion' as you put it, has got no impact on the concept of free markets.
Yes, it does. A free market is not just about individuals owning and trading property. A free market requires that all trades are voluntary. If people are forced, for example by law, to make trades they would not freely choose to make, you do not have a free market: forced trades are not voluntary. But if people are manipulated into making trades they would not freely choose to make if they knew the information the manipulator is hiding from them, those trades are not voluntary either.
You are correct that if you are compelled by the government to do something it's not a free market. But if coca-cola convinces you to try a new flavor through a marketing campaign, it does not eliminate the free market. It was your choice the whole time.
I don't think you can compress the issue down to a "government" vs "corporation" binary.
There are some powerful benefits of free markets over command economies, such as decentralized self-organization, tendency towards the maximization of total surplus, and dynamic adaptation to changing conditions.
But those benefits are conditional on, for one thing, the rational behaviour of individuals.
There are ways to "hack" people such that they stop behaving rationally (with respect to the economy). The threat of violence or incarceration (maybe one you are equating with government intervention) is one of them.
But there are plenty of others: chemical addictions (e.g. cigarettes), systems that prey on the susceptibility of our dopamine-reward pathways (like slot machines, or perhaps your facebook feed), etc.
Good policies in a free market economy are ones that make it more difficult to disrupt the beneficial aspects of a free market. I'd say good policy-making actually increases the free market-ness of markets in a messy, imperfect world.
Historically, our society has drawn a distinction between force (things like direct threats of violence or incarceration), which makes things done under such conditions involuntary, and things that are addictive but which people still voluntarily choose to do.
In some cases, such as smoking, we have ended up still imposing penalties on companies that purvey such products, but the basis for such penalties has not been the addictiveness of the thing, but the physical harms it causes, such as lung cancer. Addictive things that do not cause harms of the same sort, such as slot machines, have not been treated the same.
IMO that distinction is a good one: addiction does have a voluntary aspect that is not present in cases of simple threat by force. Treating addictions as though they simply override the agency of the person is not, IMO, a good idea. And I think a similar distinction can, and should, be drawn between simple threat by force and "persuasive technologies"; there is a voluntary aspect to the latter--people have to choose to believe what they are being told--just as there is with addiction.
Of course we also draw a distinction between acceptable persuasion and fraud or manipulation, which are unacceptable and punishable. That distinction is also a good one, and I think it can help to deal with "persuasive technologies" in a reasonable way.
> if coca-cola convinces you to try a new flavor through a marketing campaign, it does not eliminate the free market.
If the marketing campaign is open, and I'm aware that that's what it is, and it doesn't make any actual false claims (puffery is another matter--that's basically unavoidable), then yes, it's my choice whether or not to be convinced.
If the "marketing campaign" is really efforts behind the scenes to present me with misleading information and to disguise the motives behind it, that's something different.
It is true that efforts of the latter sort are not new. What impact new "persuasive technologies" have on the frequency or success rate of such nefarious tactics is, I think, an open question.
"Individuals can own and sell property" is part of but not the whole concept of free market.
All the economic theory of free market (and all the advantages of it) rely on a few core assumptions, like liberty to trade and set prices, but also lots of buyers, lots of sellers, full competition, low barriers of entry and full information.
Just as it's well known that a market that devolves into monopoly or oligopoly does not work like a free market anymore, or in the case of severely unequal bargaining power, the same applies in the case of information asymmetry, which also is well known to lead to a failure of free market.
So whether "people are easily affected by suggestion" does matter, because if that becomes the case and companies are widely using effective methods to do so, then the resulting economic structure of the competition is not like a free market.
Just a minor nitpick, but it's really special relativity that says that the speed of light is constant (and the maximum anything can travel at). General relativity is much more general theory about gravity and space time.
I see the rest of society buying and liking the typical brands such as Tide, Kraft, Nabisco, Pepsi-Co, what have you, and I shudder. It's really weird, but I've started to have an adverse reaction to all the companies that actually advertise, and I'm always on the lookout for companies that use few ingredients and in general have no commercials.
And this has changed me so much. Even seeing those stereotypical hot-rod style cars that you described as clearly meant to attract women - I actively dislike people who show off, even. I have never had an FB account.
I don't know where I'm going with this but yeah, cognitive biases and such and controlling people. I guess just trying to say, it doesn't work on everyone - not identically and as expected at least.
I suppose there are both philosophical and pragmatic reasons for the current situation:
1. Philosophical - you don't want to get the threat of physical force involved unless there is a very, very good reason for doing so. That is, you don't want to make things illegal. So we forbid outright lying, fraud, and openly misleading customers, but allow the things you describe. If you have a private online platform, why should you be either forbidden to moderate its content or compelled to do so? Yes, tech companies will try to maximize user engagement, but what else should they be trying to maximize? It is up to the rest of society to develop a proper culture and information hygiene to shape their demand, and tech companies will adapt. That culture can develop organically and be passed from generation to generation, or you can try to expedite the process through schools, media and other mechanisms that exist in a society.
2. Practical - people tend to get interested in forbidden things, and they seem to like their biases. In the Soviet Union, for example, perhaps most people were fascinated by the Hollywood movies of the 70s and 80s, and even by the way Western brands were advertised. They seemed 'cooler' than the Soviet products, which were mostly devoid of marketing. Try to eliminate most biases, and people will vote with their feet. Arguably a better approach is to allow things, but in good faith educate people on how best to deal with them and why that is the right way of dealing with them. The same way a lot of parents today would explain to their children why they should stay away from cigarettes, for example, or wash their hands before having food.
I don't think there's anything reasonable about either of these:
1. Are the freedoms of corporations to act without the coercion of the state more important than the freedom of citizens to act without the subtle but effective coercion of corporate persuasion? The case for individual freedom doesn't always imply the absence of action by the state.
2. Would restricting manipulative practices by businesses really lead to more businesses exploring those practices? That may be the case in restricting consumer choices but would it also be true for corporate behavior? I'm not so sure.
Corporations are just voluntary associations of people, are they not? It seems strange to allow something for an individual, but to forbid groups of people to associate with each other and carry out the same activity.
I also think that calling persuasion 'coercion' is misleading. Coercion implies the threat of force or violence. You and your family still have the choice of switching off your TV, not using facebook or twitter, living your life as you want. Assuming you can live in a community of like-minded people such that there are few social costs of doing so for you and your family. I would like to see more such communities, especially among people working in tech.
Restricting some practices by businesses would lead to foreign businesses and cultures becoming more attractive to consumers.
In another discussion on philosophy someone mentioned a principle they didn't have a name for, but applies here as well. Their example was that you can ethically look at a license plate and memorize it, but if you get a network of people at strategic positions of a town to meticulously write down all license plates 24/7 and collect this in a central place, that becomes an unethical/illegal privacy violation.
Enough quantitative change becomes qualitative. Ice is just a bunch of water molecules just a bit differently organized. Graphite is worthless, even though it's the same atoms as diamond, but differently organized.
Many people joining together into a corporation can become its own thing, acting as if it had "goals" on its own that may be not in line with the goals of the individual participants. Corporations are real entities separate from individuals. It's emergence. It's a "socially constructed" entity but no less real.
Yuval Harari describes this very convincingly in Sapiens.
I'm not sure I follow your opening premise. Why is strategic tracking of license plates unethical? I am pretty sure it is not illegal. You can pay a private detective to follow someone around town taking pictures.
In Europe, even before the GDPR, but especially after it, this would be illegal. It's personally identifiable information and tracking it requires consent.
Private investigators - not sure how legal that is around here and what are the constraints.
Example - for a job application I recently needed a background check by a US company. As soon as I filled out my current location as a European country, a consent form popped up. They can't go sniffing around even in the public records and compile stacks of background info on people at the scale a company does this.
Lots of things are public outdoors only with the common sense understanding that it's ephemeral, is just remembered by people's imperfect memory and won't be meticulously categorized and stored somewhere, unless there is consent. Now, you can make small scale notes and a personal journal about whom you got to know at a party etc. You can take tourist shots and have random people in the pictures but you can't follow people around and photograph them specifically as the main subject of the photo without their consent. Public figures and officials like the police don't have such privacy rights on duty.
It's about the scale of it, and a common sense judgment of whether it is systematic data collection or something just happening normally as life goes on. But I'm not a lawyer, I'm just describing the broad principle.
The law and people's intuition may be different in the US.
----
If you don't like this, think about another example: hitting/patting your friend on the back as banter or congratulations or something. How strongly do I have to hit them to commit assault? What if it's my frail grandma? Intent, scale and common sense matter. There is no law saying the exact amount of force I am allowed to apply.
>The law and people's intuition may be different in the US.
I think this is it precisely.
I agree with you that intent, scale and common sense matter. Some of this can be included in the law or left up to a judge/magistrate to decide.
Going back to the premise of the article, before we design a privacy law for targeted advertising, we have to agree on what constitutes a level of manipulation so great it effectively takes agency away from those targeted.
Corporations are obviously far more than just an association of people. There are huge tomes of legislation that define them, give them extra powers & limit their actions: Limited Liability, Tax codes, Health and Safety, Child Labour, Employee exploitation.
Many of these laws are written in blood.
So no, legislation that limited large companies ability to exploit the heuristics and biases of their customers would not be 'strange'. We can also legislate against foreign companies.
This is an argument to scrap OSHA and allow slavery. How else, after all, can businesses be expected to compete globally?
When corporations grow above a certain size (the exact size of which is for another discussion), they tend to gain power against the individual. I'm not sure which is worse - powerful government, or powerful corporation. Both serve their own interests ahead of individual liberty, and collectively it means that people are less free and more beholden to corporate interests.
I don't want to put so many restrictions on small business, because they're more directly driven by individuals (although if their behavior starts having a significant negative impact on individuals, then perhaps that can be investigated). Small business has far less power to negatively impact individuals when compared to multi-billion-dollar multinational organizations.
I believe that society should favor and protect the freedom of the individual over the freedom of the large corporation. Without those individuals, society is useless. Of course, that's sidestepping the issue that society far too often sticks its teeth in where it doesn't belong, and won't bite down where they'd actually be useful.
Your last sentence is honestly a copout (not on your part personally, but in general as an intellectual idea). That basically embodies a race to the bottom, as countries around the world erode freedom, and the rest of the world "harmonizes" so as not to be disadvantaged on the world stage. You see this now with the current copyright situation.
It's kind of the same idea as the GPL protecting the greater freedom of software by restricting some of the freedoms that a developer may wish to have. Many similar arguments can apply too, as for example I've heard it said "GPL's restrictions will just push people to use less-restrictive licenses" That may be somewhat true, but there are also people who value the greater freedom more than their personal freedom to use the software in a non-free manner. Note that this isn't an endorsement of GPL over other licenses, just a parallel to be drawn that is somewhat relevant to HN.
I guess you approached the problem I have with libertarian arguments (which I tend to find intellectually appealing): if a corporation is powerful enough, it becomes barely distinguishable from government in practice, except without any democratic governance. At the extreme end you have a corporation that owns all the land in your nation and you have to abide by their arbitrary rules to be on their private property. Yes, they cannot physically assault you the way a government can, but you may be left with few options but to sign a contract you may otherwise not have wanted to sign (and then the government can physically assault you for breaking that contract).
We are sort of approaching a standard argument between classical and modern liberals, so it would be pointless to just repeat the same tired arguments. Having said that, I would love to see more of thoughtful debate between classical and modern liberals. Usually you tend to see both sides just preaching to their choirs or a very shallow debate at best.
I have a solution: if corporations are just voluntary associations of people, then all the people voluntarily associated with them should be responsible for the corporation's actions. So if the corporation is caught breaking the law, all shareholders become personally responsible and face jail time. No more shielding from personal responsibility with limited liability etc.
>>> Corporations are just voluntary associations of people, are they not?
No. Corporations are a creature of government policy, and receive entitlements that sole proprietorships or general partnerships do not receive, such as limitation of liability.
> It is up to the rest of society to develop a proper culture
Just as the environmentalists say that you can't really throw things away because there is no "away", increasingly there is no "rest of society" that exists completely outside online culture. Especially in the year we've all been quarantined inside.
That point is sort-of pointless. It would be fair to boil a lot of the argument there down to "People don't act randomly. They have reasons for what they are doing. I think their reasons are bad".
I can't argue with that, but the alternatives are worse. If you centralise power, sooner or later the advertising exec gets control of the powerful body, and now you can't choose to resist even if you can see that what is happening is bad.
A key part of the free market is precisely that the world is actually quite predictable. The fact that people sometimes make predictably bad choices doesn't especially undermine the free market. The market doesn't require people make good choices, it just redirects resources to people who make better choices than the average.
You could deploy the same argument in favor of allowing any kind of fraudulent product or service; the full libertarian "caveat emptor" approach. However, not only is this unpopular, prohibiting fraud doesn't in and of itself result in fraudsters taking over the market.
The boundary between overly enthusiastic promises and actual fraud is one that's in different places for different jurisdictions and is constantly at the forefront of litigation as people invent new fraud schemes.
> "People don't act randomly. They have reasons for what they are doing. I think their reasons are bad".
Sometimes the reasons are bad in very objective ways.
A goes down to the market to buy a kilogram of apples. A vendor B advertises 1 kg of apples for a pound. B weighs them out on his scales. A hands over his pound. When he gets home, A finds he has only 800 g of apples. Was A's reason for purchasing the apples from B good or bad?
(laws against short measure have been a thing since at least Roman times; I believe they also had a few product quality laws, although the canonical example there is always German beer law from 1516)
> It's as if there's a line between actively lying ("Studies show that men who buy this car will find many many women attracted to them") and just putting it there suggestively
I doubt pretty girls in ads are actually supposed to mean the goods they advertise make a man more attractive. Who (except teens) would believe that, consciously or subconsciously? You just get attracted yourself and that's enough, simply seeing a pretty girl fires the hormones and neurotransmitters making you feel good about what she advertises, no semantic load necessary.
Nobody would believe drinking Coke will make you live free and happily. Still their marketing messaging is about connecting Coke with a youthful, carefree, liberating experience, friends, fun.
You wouldn't think eating candy bars will make you athletic. But they market it with athletes and active people playing soccer etc.
They don't show some fat dude sitting in the dark in front of the computer shoving candy and chips and coke in his face and becoming diabetic.
The associations can be immediate if repeated often enough. When picking a product you don't reason "okay this will make my life X", but you feel a familiarity, a draw, a positive emotion. Not necessarily consciously. But especially after a stressful day, in the supermarket you will be more prone to emotional, autopilot handling.
> I doubt pretty girls in ads are actually supposed to mean the goods they advertise make a man more attractive. Who (except teens) would believe that, consciously or subconsciously?
How about chewing gum and mint ads? Many try for messages that are essentially "don't get cock-blocked by your bad breath" or something similar. And they work.
I studied "strategic communication" in college (a mix of PR, advertising, marketing, whatever) and I distinctly remember a mentor saying, "When you're selling a drill, you're not selling a drill. You're selling the hole."
The point is, people don't buy things for their own sake. They buy them for what they think the thing can do for them. Any car will get you to point B, but some people will go for a cheap, utilitarian car because they want to save money (or maybe the utilitarian aesthetic is their thing), while others will go for the flashy car for that feeling of sex appeal (even if women don't suddenly fall all over a new car owner, the feeling of confidence is a social benefit to the buyer, even if that's not worth the asking price).
More generally, though, all communication has this sort of color to it. We see anti-privacy legislation being touted as protecting children and fighting crime. Small talk is not really about sports. So I don't think it's realistic to legislate persuasion. I would probably be behind making formal logic a part of public school curriculum, though, so people are better equipped to discern for themselves when persuasion they're exposed to is nonsense (among other benefits).
I think explicit instruction is quite ineffective either way. What you need is teaching by immersion, through examples. Science should not be taught as a series of facts and formulas but as riddles, arguments, why do we think this or that, what does this or that experiment tell us. How could one misinterpret it? Let's try and present this information to influence the reader's thinking: half of you to one direction, half of you in the opposite.
Similarly in history class, can we weave the narrative to make country X look good in this ancient war? What political slogan could country Y come up with to counter that narrative? What facts would you emphasize regarding the Black Plague as a communist or as a Catholic?
In math, would you market a more fuel efficient car by telling the percentage difference in miles per gallon or liters per 100 km?
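That mpg question is a real arithmetic trap, because mpg and L/100 km are reciprocal measures, so the same improvement yields two different-looking percentages. A quick sketch (the 30 → 36 mpg figures are purely illustrative, not from any real car):

```python
# mpg measures distance per fuel; L/100 km measures fuel per distance.
# They are reciprocals, so percentage changes in the two units differ.
LITERS_PER_GALLON = 3.785411784  # US gallon
KM_PER_MILE = 1.609344

def mpg_to_l_per_100km(mpg: float) -> float:
    """Convert miles-per-gallon to the European liters-per-100-km convention."""
    return 100 * LITERS_PER_GALLON / (mpg * KM_PER_MILE)

old, new = 30.0, 36.0  # hypothetical "more fuel efficient" model

mpg_gain = (new - old) / old                                  # gain in mpg
consumption_drop = 1 - mpg_to_l_per_100km(new) / mpg_to_l_per_100km(old)

print(f"{mpg_gain:.1%} better mpg")                # 20.0% better mpg
print(f"{consumption_drop:.1%} less fuel per km")  # 16.7% less fuel per km
```

The conversion constants cancel in the ratio, so the drop is exactly 1 − 30/36 = 16.7%: a marketer would quote whichever of the two numbers looks bigger.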
These are obviously just some examples, but the point is that critical thinking is a cross-cutting concern and cannot be segregated into one class. It should be constantly invoked. Instead of listing fact X, explain how people figured it out. Why did people not see it before and what did they think instead? Was it universally accepted immediately or what were the criticisms?
The thing is, though, this is extremely difficult to teach. You'd need extraordinary teachers with lots of background knowledge, since such debates and brainstorming can quickly go astray of the standard curriculum and quite easily end up in difficult and perhaps unknown territory. Perhaps you could do this with an Internet connected laptop and a projector, diving into Wikipedia rabbit holes in real time. But this would be different each year, and you'd have no way of ensuring a standard curriculum. If you don't make things required for the test, many students just won't listen, no matter how engaging the class may be.
Such teachers would demand higher salaries to be competitive with other places that can make great use of the limited supply of critical thinkers. Maybe it's offensive, but teachers as a demographic aren't that smart in most countries, since the requirements to graduate are pretty low and the social reputation of the profession is also quite low.
But even if you find great teachers, it's even more difficult to test, as it's open ended, not objective and not standardizable.
I think there are many uncorroborated assertions about bias.
Associating emotions to a product or getting attention is an old trick that existed before modern marketing. The smoking example can be generalized for fashion and there are mechanisms like peer pressure involved that certainly incentivize consumption. But they are not overriding your will. Addictions can, but even then I would say there is still a free will involved.
> but I sense this is more of a grey zone than most people are willing to admit.
I do believe ads affect me, but the scope is limited, and regulation would be more draconian than my natural inclination to give products an unjust bonus for boobs. However, it may not even work, because I come to the conclusion that the product must be lacking if you try to sell it with dirty tricks.
> Isn't this the same as the authoritarian nightmares that we've been pointing fingers at?
No, because people can make up their minds; otherwise they would have a lot of cars by now. They lose that, however, if you regulate too excessively, because then the decision is already made for you. There are sensible reasons for regulation, so it is a gray area, but I don't see it as helpful here.
Empirical counter evidence in favor of my free will for any practical purpose: There is no irresistible ad.
Isn't what you're calling free will just the influence of your previous experiences? Which would mean that there should theoretically be a way for powerful advertisers to prevent people from obtaining such experiences in the first place.
The wider culture is supposed to counterbalance this.
Taking things to the extreme: maybe people are by default violent and will kill a few people in their lifetime to get their way. But the culture counterbalances this.
Likewise, everyone old enough to have a sex drive has seen enough advertising with barely-clad attractive women to know the ruse. Of course, people are still vulnerable to these things (cf. onlyfans, etc.), but at some point you have to establish that the rules are clear, and if people still want to indulge the fantasy of a beautiful woman by buying a car, well, whatever.
Sure, we still put backstops on this with drugs and gambling. But the use of sex as a tool for attention-grabbing is way too wide a net to cast.
(I note that vaunted free speech zone America is actually more restrictive in what sexual material can be shown on TV or even marketed; there isn't a US "Babestation" TV channel, is there?)
In Brazil there was a lot of social pressure (from NGOs or similar "civil society" groups reflecting elite class views, but anyway) to curb what was becoming an arms race in beer advertising. Now beer advertising is very modest about female skin, and instead focuses on time with the guys and so on.
TV coverage of the yearly five-day Mardi Gras has also been dialed back very significantly over the past decade. Internationally we can see the case of F1 "Grid Girls". The culture sometimes puts backpressure on this.
(Societies have odd contradictions on sex generically. In relatively relaxed, women-breastfeed-in-public, close-physical-contact among strangers Brazil, topless sunbathing is not only frowned upon, it's illegal. People will snitch on tourists and the police will go and arrest them.)
This is what Slavoj Zizek terms the "cynical function of ideology". People can know the ruse and enjoy it anyway. No one goes to e.g. striptease joints expecting complete sexual gratification.
(The following video analyzes a fragment of the movie "West Side Story" and it's fully SFW)
If you go that far, then the entire legitimacy of legal systems, judicial systems, penal systems, the world order, etc. falls apart. Free will does not really exist, but what is the alternative to pretending that it exists?
Recent scholarship engaging with the impact of digital technology on contract law has suggested that practitioners and researchers will need to give proper consideration to the ‘role of equitable remedies in the context of contracts drafted in whole or part in executable code’. More generally, a raft of challenges stem from the increasingly active role digital technologies play in contractual relations.
Faced with these challenges, instinct may dictate attempting to tame the technological beast with a variety of regulatory responses spanning the full spectrum of possibilities, from legal requirements to voluntary codes of conduct or standards. While regulatory action may be a priority from a public policy perspective, the seeming trustworthiness of algorithms, and the consequent reliance placed on them by contracting parties carry the inherent risk of lack of autonomy and fully‐informed independent decision‐making that, in Australia at least, is addressed by equity through the doctrine of undue influence.
This article explores whether this traditional doctrine can adapt to operate alongside regulation in dealing with some of the challenges presented by algorithmic contracting. Specifically, it focuses on those contracts where algorithms play an active role not only in the execution, but in the formation of the contract, as these are the “algorithmic contracts” that challenge the very fundamentals of contract law.
Cognitive biases are a real problem for free markets, but the question isn't whether free markets are perfect, it's how they compare to the alternatives.
People can make poor choices because of cognitive biases, or they can have choices made for them by other people with cognitive biases. The other people can be unelected, unaccountable leaders, or leaders that are chosen by voters, and politics seems to be where cognitive biases are worst.
In general, I would rather suffer for my own cognitive biases than the biases of elected officials and voters, but that's not to say I advocate for free markets in every scenario, because there is a lot more to consider than individual choice in that discussion.
With freedom, humans can control for this by learning from it. They can see that cars don't necessarily get you women, despite what the ads say. This can also be indirect learning, with someone else pointing it out.
If you start making certain things illegal to say, that can be used against people. For example, given enough money for lawyers, you can sue people for saying "these cars don't get you women" under the same anti-free-speech regulations, by finding holes and exceptions in them. History has shown that lawyers can pull this off.
Nonsense. There are plenty of things you are not allowed to say, and nobody is using them as "loopholes". False advertising is a crime already, for one.
I saw a talk a while ago that argued the sexist marketing of home computers toward boys is likely responsible for the drop in women becoming computer programmers during the 90s.
My family is a living proof of this. I have 2 sisters, I was a boy, and yet, I am the only programmer. GW-BASIC’s evil plan with its marketing oriented towards me, its black-and-white text, and their 10 20 30 LIST instructions, or the white reference book of the Amstrad 8086 in English which, as a French boy of 7, was incredibly opaque (which I still read and memorized by heart), those were all directed towards me.
Or perhaps it is time to admit that talks which draw inferences are just talks, and while my sisters were asking dad to draw horses, I was asking dad to draw me a power point, because things that plug into each other fascinated me.
If sex creates differences in the body, it is only ideology that drives us to affirm it creates no difference, on average, in the brain, and that everything is socially constr... No, I'm not James Damore.
I was on a flight a few years ago while wearing a software company branded hoodie. The person next to me was very excited, since he was trying to get his son to like computers. He bought his son all these toys, read him books, and did activities to encourage an interest in software. His son was six.
I asked if he had other children. He had a daughter who was eight. He was not doing the same things for her.
Enrollment among women in computer science was growing steadily until the 80s, when it turned around and declined. Our biology has not changed during that time frame, and the core job of a software engineer hasn't really changed in that time either. Other engineering disciplines did not experience the same change in trajectory. Innate biological preferences have a hard time explaining the unique trajectory of software.
You don't just need to demonstrate that there exist differences in the brain between men and women, but that the specific differences lead to the observed population-wide outcomes.
Something like Lynx (called Axe here) is probably a great example for a different reason: it would not exist were it not for the very advertising that promotes it. The aversion to natural body odour (that is, not a 'sweaty' smell) is created by advertising, same with the aversion to 'bad breath'.
Banning advertising would kill these products, because they add no value except to solve the problem they created.
Education is the best mitigation. Knowing that we have biases helps us recognize when it is happening. Most of the pushback to advertising that I've seen uses either asceticism or anti-consumerism language instead of cutting to the heart of what's going on.
> If we know that humans have all sorts of cognitive biases, how come it's ok to use that fact while at the same time we insist there's some kind of free market?
Of course humans aren't perfectly rational but I'd argue it's still a good assumption to make, as a society, because the alternative leads to a very disturbing path. Ultimately, assuming individuals don't really know what's best for themselves can be used to justify all kinds of authoritarian measures from limiting speech to straight up enslavement.
It might seem like a stretch to us, but it was actually used to justify slavery in the past, as early as Aristotle, who thought that slaves didn't have the ability to think properly and therefore needed masters to tell them what to do.
Hell, I'd go right before Aristotle. The allegory of the cave from Plato/Socrates is most likely the first widely promulgated non-religious justification in western philosophy of this "the people on the bottom deserve it and are correctly segmented" idea.
Idiot farmers, who are not educated and don't see reality for how it is (only shadows on the wall), cannot be trusted to run your state. Only those who see reality for how it is (those who are dialectically educated in a school) should be anywhere near power.
I think that even today, this notion of "you are low/stupid, therefore we should not listen to you" is almost everywhere in the west.
You can discuss persuasion or cognitive bias away until everyone gets an aneurysm, sure. Some examples: You can't not communicate. So even not persuading is persuading. What if a presidential candidate just didn't give a scheduled speech? How do you communicate information objectively? Casual language creates bias, and so does scientific language, simple language, passive language, active language. "Cognitive bias" might as well be called "cognition", since it is just how the brain works. You have to think "tree" immediately when you see one, even before you've validated that all the leaves are real and that the whole thing is not a projection on a transparent screen. Otherwise you can't function.
But: Big tech throws us in a situation where a small group of people influences our perception on a massive scale. Facebook changes a sentence on their homepage and a billion people read it. Youtube raises some parameter (yeah I know that's not how AI works) by 0.01 and the political opinion about the Grenfell tower disaster changes ever so slightly - for 30 million people. Google's filter has a tiny hole and some troll broadcasts wrong medical information about gout to 200k people.
Every time one of these things happens, the world shakes. Dozens die or survive. Demonstrations form and elections swing. Opportunities are wasted and ideas surface.
I am not arrogant enough to propose an easy solution to this, and I don't think there is one. Just be aware that "I can always go and stab someone" is not a good argument when you are discussing a fully automated drone swarm with kill authority.
I agree scale is very important and it still has too little visibility. Everyone knows the quote "with great power there must also come great responsibility". Well, scale is indeed power. An action applied to one person might not be a big deal. But when the action is applied to or affects thousands, it should require much more consideration and carry much more responsibility. This argument has a lot of applicability in many other areas too.
Twitter's CEO said we can give people the right to speak, but that doesn't mean the right to go viral. I think "viral" is an apt term: these are mind viruses that are 95% harmful.
To your point about scale, once a tweak is made and X or Y "news story" is let loose in the wild, it gets amplified. The scale of the impact isn't linear, it's something more exponential.
I keep coming back to my default idea here - that PII needs to be seen as legally owned by me and only licensed to others for use. The default legal framework should include medical / epidemiology research as freely licensed and commercial use as ... well let's just say i think my license conditions will be expensive.
If an advertising channel then shows ads that breach license they are liable. A fairly simple licensing process will come into play and we can find new ways to fund things
Edit: Yes i do get a lot of the issues around regulation of tech - it's almost like saying regulation of every day life which is really broad. And the different bodies and approaches will also need to be broad. But i am a believer in markets and individual decision making and i also believe that personal information has in the past few years become a genuine new ... commodity? And we need to raise that commodity into visibility - to be able to put prices on it openly. Maybe it won't work, maybe privacy is like a human right and can only be dealt with at that level - but i don't think so - privacy to me seems ephemeral and usually poorly defined. Longer discussion to be had
If you're a US citizen with a Facebook account, your value for Facebook is about $ 200 per year. That's the average, including children, seniors, etc. If you're a tech worker in your 30s, it's probably 3x as much.
If you want to use Facebook without targeted advertising, you need to either convince them that they don't need all that money, or pay it yourself. And that's just Facebook.
In other words: the internet economy without targeted ads will be a very different place. Facebook will survive. imgurl / snopes / fivethirtyeight? Unlikely...
> If you want to use Facebook without targeted advertising, you need to either convince them that they don't need all that money, or pay it yourself. And that's just Facebook.
I hate this argument. It tries to find a solution that sustains the status quo in the face of change. But that's illogical, because if there is a change, then the status quo is going to change too.
The only way Facebook is going to stop doing targeted advertising is if they're forced to, likely due to new regulations. If that happens, Facebook's entire business model will collapse, so they'll either have to make massive fundamental changes to how they make money, or they'll die and be taken over by a competitor.
In either case, users don't "owe" Facebook anything. Consumers are the life blood of the economy, and thinking about how consumers should change their behaviors to sustain a corporation is backwards thinking.
"Without ads" and "without targeted ads" are different things. There's an "ad-centrist" view whereby adverts should be OK so long as they're chosen to go with the content on the page and the community that the site aims at - you know, like all newspapers and magazines used to do.
I'm not sure how imgurl survives given the abuse workload; there is an inevitable death spiral of image hosting sites as they get cluttered by more and more adverts of worse and worse quality.
Not necessarily. I don't know what percentage of advertising revenue is from political advertising, but that's money that Facebook makes off of you that doesn't come out of your pocket. Essentially any time you're shown an ad that aims to convince you to support some idea, or vote for something, or change your behavior in some way (like going vegan) instead of trying to get you go spend money, you provide value to Facebook that doesn't come out of your pocket.
I suspect that this is a lot of the value of "the next billion". Why bring Facebook to people who have little disposable income? Because there are still ad buyers who will pay to reach those audiences if they have votes to give, or even just public support for some cause or another.
Money spent on political advertising absolutely comes out of your pocket. The process is: politicians levy taxes and excise, divert the revenue to linked entities, pay Facebook out of the proceeds. Or they extract money from companies under threat of expropriation via adverse regulation and litigation. In developing countries this is blatant; in developed countries it is disguised (as lobbying, consulting fees etc.).
In fairness, there is an argument that at least some political advertising is consumption by politically engaged citizenry who don't stand to gain financially from the result; witness spending on unwinnable Senate races in the latest election, for example. But in general you have to assume that political advertising comes with some expectation of return, which comes out of the everyday citizen's pocket.
Let me rephrase. Someone advertises to you because they expect something in return. Coca-Cola advertises to you because they expect you to buy coke. If you're too poor to buy coke, you're not useful to them.
Joe's political campaign advertises to you because they want you to hate Bob, or be scared of those other guys, or whatever. This will help put Joe in charge, which is good for Joe and potentially bad for you, but it doesn't necessarily come out of your pocket. It comes out of the pockets of Joe and his friends. That might be taxpayer money, but you might not be the taxpayer paying for it. It might be private money from religious people who want gay marriage to be illegal. You pay for it with freedom, but not with money.
My point wasn't that there's a free lunch here. My point was that there are other mechanisms at play, costs of advertising that don't translate directly into cost of goods and services you buy.
I can't remember ever seeing an ad on Facebook trying to convince me to go vegan or do anything public-spirited.
Ever.
I have however seen plenty of ads trying to get me to spend money on something/someone.
And conflating the political process with the marketing process is absolutely pernicious in a democracy. The Brexit campaign used micro-targeting of individuals with knowingly dishonest ads that played to their biases.
That's not persuasion - that's just manipulation.
Put simply, no one should ever be allowed to run organised campaigns based on lying and distortion for any reason. If you need to lie you don't have anything valid to say in the first place so removing you from the public discourse causes no harm to anyone.
Maybe I wasn't clear. I'm not defending political advertising or saying it's a good thing. Just that it exists, and people will pay Facebook for the privilege of reaching your eyeballs even if they don't think they can get money out of you directly, as long as they think they can get something else out of you, like a vote. So even if you're poor financially, you are still useful to advertisers, because you have something to give.
I agree that most of the time that's a bad thing, although it's possible it can be used by good people trying to convince you to be good, or raise awareness of an important cause. Probably much less frequently, but evaluating good and bad views is very subjective so I'm not going there.
Now perhaps all advertising is manipulative, dishonest, and immoral. That doesn't detract from my point.
No, because it's $200 on top of what you would otherwise spend on the stuff you buy. The cost of marketing has to get baked into the price of goods or the sellers would be losing money. If they weren't advertising, or could advertise for less, they'd be able to lower prices.
Advertising can lower the cost per item, if there was a large startup cost and marginal costs are small. More importantly, facebook ads have caused me to buy more things because the ads showed me things that I thought were useful that I otherwise wouldn't have known about. Honestly, I feel like maybe I should be paying to see FBs ads sometimes.
And what about all the sales that are made through Facebook ads? They support millions of jobs around the world. A less efficient ad machine would mean worse sales or higher marketing costs or both. Some of this excess goes directly to profits, but a good chunk of it supports employment.
Imagine if we got rid of all advertising and what that would do to demand. I know the discussion here is centered on targeted ads, but I think there would be significant economic effects if their incidence was decreased.
The global consumer economy runs on all these extra desires created by ads, and ads do fulfil the function of informing customers.
If I have to see ads (and I really don't), I'm not against tracking ads per se, but the long-term storage of them. Given current technology and governance, there's not a way to separate these two.
In any case, there's significant economic effects to stopping tracking ads.
>If you're a tech worker in your 30s, it's probably 3x as much.
I wouldn't be surprised if it's much more than that. Most businesses sales follow the Pareto principle, 80% of their revenue comes from 20% of their customers.
I always wondered what's stopping a rival to Facebook that offers users a share of the money they are generating. Besides the scale barrier to entry of course. Generally speaking, people like money and will switch products and services that puts money in their pocket.
I think the challenge is that it doesn't solve a user-oriented problem.
Most users care about the enjoyment/utility/stimulation they get out of a social network. Some users care about their privacy. Needing an extra $30-60/year does not rank high on user priorities.
While Facebook might make $200 per US individual, their profit margin is only 30%.
Imagine if Facebook gave 10% to their users, lowering their profit margin to 20%.
Let's say every user gets 10% of the average, or $20/year. As you say, not a big priority. But some people would sign up, who otherwise would not. And some people would keep their accounts, who otherwise would not. Win for Facebook? Hard to say.
Now, what if Facebook gave each person 10% of what they personally brought in? That wouldn't take much more tracking than what they already do. But in this case, you'd have a lot of people maximizing their time on Facebook in order to increase their payout. Win for them, bigger win for Facebook. Heck, Facebook could even get rid of all their stupid dark patterns, and stop the endless research into more dark patterns, improving their bottom line even more. No need to trick people into engaging if they are voluntarily engaging for a small sum.
>Now, what if Facebook gave each person 10% of what they personally brought in? That wouldn't take much more tracking than what they already do. But in this case, you'd have a lot of people maximizing their time on Facebook in order to increase their payout. Win for them, bigger win for Facebook. Heck, Facebook could even get rid of all their stupid dark patterns, and stop the endless research into more dark patterns, improving their bottom line even more. No need to trick people into engaging if they are voluntarily engaging for a small sum.
I still don't think the ROI for the user is there.
The average FB user spends about 35 min per day, or 210hrs/year on the platform. Assuming 10% revenue sharing, that is $0.10/hour.
If the average user were to double their time on the platform, they would make an extra $20/year.
This is a laughable incentive for most people, let alone your most valuable ad targets (people with disposable income).
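The arithmetic above can be sketched out (using the thread's figures of $200/year average revenue per US user, a hypothetical 10% revenue share, and ~35 minutes/day of average usage):

```python
# Back-of-the-envelope check of the revenue-sharing figures discussed above.
# All inputs are the thread's estimates, not official Facebook numbers.

AVG_REVENUE_PER_USER = 200.0  # USD per year, average for a US user
SHARE_FRACTION = 0.10         # hypothetical fraction paid back to the user
MINUTES_PER_DAY = 35          # average daily time on the platform

hours_per_year = MINUTES_PER_DAY * 365 / 60              # roughly 210 hours
payout_per_year = AVG_REVENUE_PER_USER * SHARE_FRACTION  # $20/year
payout_per_hour = payout_per_year / hours_per_year       # about $0.09-0.10

print(f"{hours_per_year:.0f} hours/year on the platform")
print(f"${payout_per_year:.0f}/year payout, ${payout_per_hour:.2f}/hour")
# Doubling usage at the same per-hour rate adds only another ~$20/year.
```

So the ~$0.10/hour figure holds up: even a heavy user stands to gain a trivial sum under this scheme.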
Oh yes, the ROI is crap. But even today with an ROI of exactly $0 there are still vast numbers of people spending the day in their feed or playing whatever the hot FB game is these days or whatever. Getting paid (a pittance) for doing that would certainly drive more engagement. "They're actually paying me to do this!"
>I always wondered what's stopping a rival to Facebook that offers users a share of the money they are generating.
They sort of do. The money is converted into the utility of the product to each user. Technically, they could lower the utility of Facebook and pay you the rest.
I mean, you already have that via the Facebook EULA for example. You give them a license to use your data however they want and in return you get access to Facebook. You can not accept their terms and in return they can not allow you to use Facebook.
Would these limits apply to essays written by people who have for good reason cultivated a following of people generally interested in what they write and positively inclined towards agreement with them?
Maybe this essay should be forced to be presented on essays.com without attribution and compete on the ideas within rather than the implied authority of the author.
(I happen to agree with a lot of the content, but couldn’t completely compartmentalize that this was a persuasive essay against persuasion.)
There are two hilarious subtexts that always accompany these sorts of arguments:
-It's okay to use persuasive technology to push political orientations I agree with
-We want to defend democracy but we implicitly agree as commentators above the fray that people can't be trusted to make the right decisions and have to be manipulated towards our preferred orientation
I sometimes ask myself, in 20 years will we begin to see class action lawsuits directed at technology platforms that use notification triggered dopamine releases for growing engagement?
I thought about it further and lean towards agreement.
You know what makes it such a subjective question? The cigarette supplier sells a substance with an addictive chemical inside of it. Notifications are not a substance, the drug response is your own chemical (dopamine). Still an interesting question though.
Indeed, a case like this succeeding would blow the door wide open for suing anyone or anything that stimulates you to the point of ejaculatory dopamine responses.
But also, are there social media apps where you have to opt in to notifications? At least with native apps, I've only ever seen opt-out behavior, which doesn't really work with that analogy.
(My point is that it's also illegal when the person is of age - there was a fairly big lawsuit against the tobacco sellers for pushing an addictive, harmful product and pretending it was fine!)
Imagine an advertisement so effective that anyone who saw it would immediately buy the product at whatever price was asked, as long as they could afford it.
Imagining this, we can see that there is obviously some limit beyond which advertising must be curbed, and once we admit this, the question just becomes: at what point must advertising actually be curbed?
Of course one could argue that such an advertisement must be for a product so wonderful and useful that everyone would want it - let us say immortality with youth and good looks - but if that were the case such a product should be recommended enough by word of mouth and the evidence of all the old people becoming young and good looking when taking it.
The closest thing is probably a very charming, attractive/successful person soliciting you in person, at a place where you're open to buying that product (let's say a mall, or specialty store) or are a captive audience. Maybe we should ban the way luxury brands are sold, or military recruitment at high schools.
Why limit this to advertising? It can also be used to manipulate you into voting for a certain person or proposition that may not be in your best interest. Emerging technologies like AR, VR, and brain-computer interfaces are ready for someone to abuse them.
If your assumption is that people are helpless to resist persuasion, then who is this incorruptible entity who will protect us from having to make our own choices?
The assumption is that there is some point at which people need to be protected. The scenario I envisage is one where it is extremely obvious: if there were an advertisement that could cause you to automatically buy the product, people would need to be protected, or else the company with the advertised product would soon have all the money and all their customers would end up enslaved to it. The question, of course, is whether there are any less extreme scenarios where people need to be protected.
But I think the real question that needs asking here Winston, is when are you going to just give up and love Big Brother?
You're using the passive voice, "people must be protected", which calls for action without considering whether the cure might be worse than the disease.
> The technology exists to take your likeness and morph it with a face that is demographically similar to you. The result is a face that looks like you, but that you don’t recognize. If that turns out to be more persuasive than coarse demographic targeting, is that okay?
I have wondered if in the future, movies/TV shows will be personalized with your name. I have found that when I watch a movie where the main character shares my name (and therefore other characters say the name a lot when talking to/about him), it makes the experience more immersive. It would seem fairly trivial to substitute in other names in a pretty smooth way (it would get tougher if you had to also adapt things like a business card that is visible on-screen, however).
Of course, this could be taken to the next level if you changed the faces of the actor via deepfake-like technology. I don't know how actors would feel about this sort of thing, but hey maybe it would open up the door for a bunch of new actors, who would essentially be a blank slate for customized faces. Imagine a world where Hollywood actors don't have to be good-looking!
I wonder if countries where English is not a native language have a built-in advantage - media they consume in English runs in a virtual machine of sorts.
I have noticed, for example, that the anglosphere seems to adopt US Hollywood/cultural tropes much more easily and quickly than other countries. Down to, for example, people using the word 'like' every other word, or speaking in a certain tone of voice, being quite pervasive in England but, to my understanding, relatively rarely seen in, say, the Netherlands.
That's been my experience too. Insults directed towards me in English just wash over me like water off a duck's back. Insults directed towards me in my own native language are like a gut punch, in that they feel more real and cutting.
I've been long hypothesizing that my English "self" is running in a sort of a "virtual machine" where I can step back, examine and pick apart the input for the parts that are relevant or beneficial, then continue processing.
I can already imagine movies and tv shows starting,
"Please tell us your name out loud," and then some deepfake voice magic adapts the actors' voices so they pronounce your name the way you pronounce it yourself.
That has to be so weird, hearing your own name in shows/movies.
Visual novels often do that: you put in your name and it is seamlessly included in the text. It is a simple yet surprisingly immersive trick when the game starts talking about you or to you. :)
>The New York Times once experimented by predicting the moods of readers based on article content to better target ads, enabling marketers to find audiences when they were sad or fearful
Can we maybe go one step back in the discussion and not only ask what we should do about it, but first simply ask: does it even work?
There's Zuboff's book about surveillance capitalism that echoes much of what the blog post talks about, that recent Netflix documentary that everyone was talking about, and so on, but how much evidence is there that this isn't all just mostly bullshit?
When the Cambridge Analytica scandal broke, they used the buzzword 'psychographic targeting'. Turns out, psychographic targeting doesn't even really work[1]. A relative recently sent me an article about China allegedly using mind-control helmets to control children's thoughts, attached with a picture of children wearing helmets with blinking lights. I have yet to see a facial emotion detection system that labels Harold[2] of "Hide The Pain" meme fame as anything other than 'happy'.
I'm more afraid of how bogus all these systems are and the unquestioning power people attribute to tech, which itself enables these firms. It's no wonder they keep inviting Yuval Noah Harari for talks, they must feel flattered.
I think the article does a good job of showing that current technology has the power to be a lot more persuasive than it ever has been. You could nit-pick on the effectiveness of individual methods but the broader point is clearly evident: technology, and our increasing use of it, has more potential to implement, test and refine new persuasive techniques at a scale never seen before in human history. The evidence for this is obvious.
It stands to reason that we need to draw lines around what appropriate persuasion looks like not because these new techniques are definitely abusing our freedoms but because they might and in theory could. The potential in itself should be enough to treat the role of technology seriously and consider what the boundaries should be.
My point is that I think this kind of thinking is disempowering and counterproductive. By framing technology companies as all powerful, even if that's possibly inaccurate and overblown, tech supremacy becomes inevitable in the eyes of people who see it as lacking any alternatives. ("If they can predict me this well, they must know better, right?")
I think this is why Silicon Valley loves to invite their critics who argue along these lines. This criticism doesn't deflect their own notion of supremacy, it just makes them look like bond villains, and they secretly love it.
Showing that in many cases the emperor has no clothes, that complex human behaviour can't be reduced to some ML, and that in many ways it's embarrassing, self-aggrandizing marketing is, I think, the much better way to rein in these firms.
Talking about the limits of how we use technology in society is hardly disempowering - quite the opposite. I would have thought that talking less about the potential supremacy of technology would be the real counterproductive measure in trying to curb that potential reality.
Whether or not the influence of technology is overblown, the fact still stands that we need to draw boundaries around how technology can be used for persuasion. The question should be "where is the line?" not "should we even bother drawing one?".
This is also very common in politics. There is a presumption that the government is corrupt and working counter to public benefit.
In reality, the voting public ultimately has 100% control. Most of the people who spend time lamenting the power of political advertising, lobbying, etc. have never once knocked on a door and tried to change someone's mind about an issue.
Unfortunately, believing in a worldview where you are a disenfranchised victim is incredibly convenient. It allows individuals to absolve themselves of responsibility while doing nothing.
You're absolutely right. The voting public is in control and can collectively act to get government to work for shared public benefit. We do have individual and collective responsibility here and empowerment to change things too.
It makes sense that empowered citizens in a society would naturally want to protect their rights to not have their natural human biases gamed or manipulated. Thinking ourselves above human nature and beyond the tactics of persuasion would be naïve but we needn't see ourselves as helpless victims. Instead, we should act to defend our rights and remain empowered and in control. We're not disenfranchised and we can introduce regulation to ensure that our freedom is preserved.
I agree, and that framing resonates much more strongly with me. It is more powerful to say "I don't want to see that" than to insinuate that it is a collective problem, or perhaps a problem for other inferior people, eg halfwit republicans or some such.
For example, regression models aren't exactly ai magic, but unchecked, they compute discriminatory insurance premiums in a way we find morally unacceptable.
(addendum: actually, I can't help but feel that although regression has been around since before I was born, and I fully understand how it works and use it on a regular basis, it still is AI magic ;-) )
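The parent's point about "not exactly AI magic" can be made concrete. Here's a minimal sketch with entirely hypothetical data: a plain least-squares fit that turns a neighborhood "risk score" into a premium. If that score happens to correlate with a protected attribute (e.g. via redlined zip codes), the perfectly "neutral" regression quietly reproduces the discrimination.

```python
# Minimal sketch (hypothetical numbers throughout): ordinary least squares
# fitting a premium from a single feature, e.g. a neighborhood risk score.

def fit_ols(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical training data: (risk score, observed annual claims)
scores = [1.0, 2.0, 3.0, 4.0, 5.0]
claims = [110.0, 190.0, 310.0, 390.0, 510.0]

slope, intercept = fit_ols(scores, claims)

def premium(score, margin=1.2):
    """Predicted claims plus a profit margin."""
    return (slope * score + intercept) * margin

# Two applicants identical except for neighborhood: the model charges
# the one in the higher-scored area far more, no "AI magic" required.
print(round(premium(1.5)), round(premium(4.5)))  # -> 182 542
```

Nothing here inspects why the score differs between applicants, which is exactly the unchecked step the parent comment is pointing at.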
It's a hypothetical. Waiting until the technology exists is probably a bad idea, since at that point there'd be someone with the means and motive to convince you that the technology is entirely benign.
> I'm more afraid of how bogus all these systems are and the unquestioning power people attribute to tech
They say sufficiently advanced technology is indistinguishable from magic, but in the day to day business world it manifests more like "Technology I haven't bothered to actually understand is indistinguishable from magic". And everybody loves magic!
This is an example of a self-fulfilling prophecy. Bad words aren't avoided because they are bad; they are avoided so they can't be used to game the system, and through that avoidance they become bad words...
...Sorry, English is not my native tongue.
The industry described here is heavily degraded and removed from reality, it is a sorry mix of gambling, lying and posing. It was predictable they would swallow AI systems for their "work" like their lives depend on it. This is blind faith, you could use a prophet to do the same.
There would be no net negative effect if it vanished overnight, aside from slower price formation. Regulation could do so much here compared to rules for advertising or AI itself.
This is actually a topic about bias, since people use tech for self-validation to a large degree.
Let me be upfront about my bias: since childhood I felt that TV commercials were a form of assault, assault on the mind. (So it wasn't too surprising to find out that it's literally the domestic use of war propaganda techniques. It feels like assault because it is assault.) So, FWIW, I'm in favor of banning advertising more-or-less comprehensively. (And, yes, I realize that sounds terrible to a lot of people for a lot of reasons. I'm not trying to persuade you that it's a good idea. That would be hypocritical, wouldn't it?)
Anyhow, if you go look at the Wikipedia entry for "Neurolinguistic Programming" you'll see that it's coated in warnings that it's a pseudoscience. So that's where we stand today with regard to state-of-the-art persuasion technology. Most folks have never heard of NLP, many who have are openly skeptical (verging on hostile), and some people have entered the Information Revolution proper. (Meaning they know and use this body of knowledge.)
Given that we have allowed software to fall under patents, and that this knowledge constitutes the software of the mind, I'm reluctant to have it "go mainstream". I'd hate to imagine patent wars over IP that is essentially just structured thought...
On the other hand, ignorance of the "operating system" of the mind causes unspeakable suffering. (I myself was cured of severe depression, just to add a personal, anecdotal, note.) And the differential between folks who know and folks who don't is also problematical. That would seem to argue in favor of rapid and widespread dissemination.
Then there's the problem of the self-referential nature of persuasion and the limits thereof: can you limit persuasion if I can persuade you not to? Either persuasion tech doesn't work and so the laws are unnecessary, or it does work and can be used to affect its own regulation.
The tech and the society around it are the problems, not the solutions. The solutions are cultural. If you want to re-establish a private sphere after its complete erosion in the last 25 years, you will have to take on the entire leviathan. IMO, the next wave of disruptive technologies are going to be about providing just that.
A not-insignificant number of people don't even believe in psychology as a concept, so I'm thinking it's gonna be an uphill battle just to define the problem in a legislatively-useful fashion.
One of the comments has an amazing line “ Persuasion is at the heart of information security. Not “information technology security”, but the security a person has about their ability to make information-based decisions about which actions they can take in their best interest.”
That being said, it seems the event that brought the concern over persuasive technologies to the fore was the election of Donald Trump. New sorts of persuasive technologies may have put Trump over the finish line, but I'm pretty convinced that he ran a competitive campaign without them. He identified policies and cultural grievances (immigration, anti-elite sentiment, trade, and white identity politics, for example) that no one else was talking about, and ran with them. I dislike that these subjects resonated with enough of my fellow Americans to win an election, but they did. To retreat into a mindset of "those rubes must have been tricked" is to deny them agency and is problematic in and of itself.
We live in the attention economy. We have media monopolizing that attention.
It's the Supermarket of Ideas™, right? Maybe the Freedom Speeches™ and Freedom Markets™ camp followers could advocate for some competition.
I dunno, maybe something crazy, like a doctrine of giving interventions their fair share of oxygen. For example: After Alex Jones spins up the peanut gallery, trained psychiatrists can talk them all off the ledge. Make sure they all get a cookie and some nap time.
Highly interesting topic, I am currently planning to write my thesis about persuasive technology. If anyone has articles/ ideas/ stories/ books or papers to share I would highly appreciate it!
I thought about researching how humans react to persuasive technology and start self-regulating by installing adblockers/deleting apps...
I am all ears if there are other interesting questions you might think of :)
I should think that actually having academics access to the internal effectiveness metrics would in and of itself be hugely beneficial.
- What is the range of ads to be served like? I remember the founder of DuckDuckGo saying that the lack of PII made little difference to him - if someone is searching for "PC monitor" then the range of ads seems pretty obvious. Is this true in "persuasive" mode? Are there bidders trying to serve political ads or lingerie when I search for a monitor? How can you tell?
- Is this a zero-sum arms race? A study I cannot find showed that political races are decided by who the public prefers, not by total spend. Congressional races showed that if ad spend was matched the winner stayed consistent - only when one side outspent the other did ads have an effect. If I am outraged by ad X, do I also get served the anti-X advert? Would it work if I was?
(this is the equivalent of asking how many adverts saying fruit and vegetables are good for you it takes to outweigh adverts for sugary drinks - there are probably a lot of metrics out there on that - how comparable is it?)
Thank you for the inspiration, I am currently reading Subprime Attention Crisis (https://us.macmillan.com/books/9780374538651), which seems to pick up on the internal effectiveness metrics and might have some answers regarding your first question.
I am personally intrigued by the idea that people who are aware of persuasive tech, PII, captology and marketing efforts skyrocketing in the attention economy start to mistrust products/services or political agendas that spread their message with the help of such tech.
Might only be my bubble though...
You're probably aware of the "illusory truth" effect, basically that if you repeat something enough times people will perceive it as true. That's probably the original persuasive technology :)
No, I was actually not; my academic paper library is overflowing with papers from neuroscience over economics to tech, but somehow I have not heard of the illusory truth effect.
Thank you, deegles :)
> there are limits to how much alcohol you can drink
Pedantic, but there's only limits on how much you can buy while clearly drunk. Although, I suppose passing out is sort of a limit on how much you can drink.
The article is not honest in the way it presents some of its supporting arguments: it tries to make it seem as if society is solely affected by these technologies in a vacuum, or as if polarization in America just happened magically, but this is far from the truth. When we look at technology like that, we can't do a proper analysis; we become parrots of simple platitudes.
I think there's a clear case for limiting the tactics currently employed by social media. They are using a clear strategy and deploying it to millions/billions through the use of algos. There are multiple ways to attack that and I encourage it before the world collectively loses its mind.
What's amusing is how Schneier himself falls prey to his own 'emotional thinking'--
> Emotional appeals have likewise long been a facet of political campaigns. In the 1860 US presidential election, Southern politicians and newspaper editors spread fears of what a "Black Republican" win would mean, painting horrific pictures of what the emancipation of slaves would do to the country. In the 2020 US presidential election, modern-day Republicans used Cuban Americans’ fears of socialism in ads on Spanish-language radio and messaging on social media. Because of the emotions involved, many voters believed the campaigns enough to let them influence their decisions.
... where he sets up (anchors) a reprehensible example of an emotional appeal (19th century racial bigotry) to reflect a 'reprehensible' example of modern-day emotional appeal (animosity towards a social/economic philosophy.) To reinforce his point he purposefully neglects to mention it was Southern Democrats who spread fear against 'Black Republicans' and instead uses 'Republican' twice (as a callback) to tweak the more modern-day version of that political party.
The irony of Schneier using his long-standing and influential blog to insist we 'have a serious conversation about limiting the technologies of persuasion' is not lost on this reader.
Edit--Feel free to downvote, but perhaps explain where my comment is wrong?
It's more than just amusing - I think it gets to the heart of what arguments like this aim to achieve. They're not about how terrible tactics like this are in general, but about how awful it is that they're being used in the service of the wrong political ideology and goals (that is, not the author's political ideology), and how not only does the full force of the government need to be used to stop this, but not doing so is actually undermining democracy and the legitimacy of the system itself. They're also part of a broader idea that the legitimacy of democracy itself depends on whether the party supported by the pundits and journos wins.
The first example is clearly using Republicans as the victim. If anything that's the opposite of what you're claiming he's pushing. He could have chosen other framing (like "conservatives" as perpetrators) if he wanted to prey on emotions by connecting these two examples. His framing of those two examples, if anything, seems chosen to counter-balance each other.
I'm a little sad to see him falling into the "we are under attack and something must be done!" narrative around free and open discussion on the internet.
By all accounts, the election system in the US is working, most people aren't antivaxxers, and belief in an undetectable wizard in the sky seems to be falling. We are winning the war against misinformation.
Why are so many smart people claiming that we aren't, and that more censorship is required?
> [TFA, Schneier] In this regard, the United States, already extremely polarized, sits on a precipice.
Witchcraft accusations (which honestly in the 80's there were a lot) are way down. People understand glass is not a liquid. All music is now mainstream. String Theory is being exposed as a religion.
But people are very polarised on what they do disagree about. Thirty years ago you wouldn't fire someone for being in the KKK; now you fire someone for naive accidents.
It doesn't offer any indication that social media is the root cause of the current state of political polarization in the US, or any indication that polarization is bad or harmful.
I think people place far too much stock in US partisan screaming. It simply doesn't matter that much.
I'm not arguing that the US isn't polarized, that much is clear. I just don't see any precipice.
Why does social media not pose the same polarization threat to large free countries other than the US? We're not seeing any large-scale global shift toward political polarization, and Facebook is pretty much everywhere in the West.
I wouldn't bet that other countries are exempt. If I remember, Europe has had quite a lot of trouble, due to polarization. I grew up in Africa, where tribal conflicts were absolutely horrifying.
These types of campaigns are usually multi-pronged, sustained and coordinated. They aren't really effective, unless they are all managed by a central director.
Dog whistles only work if you train the dogs.
I think that it's about where the effort is directed. The US is a huge target. This nation has an oversize influence on the world, and it's natural for other nations that either resent this influence, and/or want to exert their own, to do as much as possible to damage the US.
We are taught not to question authority, and punished when we do. Propaganda may be an exploit of early training, and this may be intent rather than side effect.
Over time things should stabilize as we get social antibodies against new forms of persuasion, and a new balance of power between groups that organize differently because of new communication patterns.
For us on HN, Internet may be something that's existed for many decades, but social media and smartphones really changed things for the masses starting around 2008-2010 or so.
Re: can persuasive technology methods create tools/processes of radical neutrality.
I'm actively contributing to a tool called Pol.is[3]. It's designed to be consensus-building tech. It's part of some pretty rad democratic processes in Taiwan[1]. It basically just puts a more approachable visualization over some otherwise intimidating (but basic) statistical methods of "dimensional reduction", like PCA or UMAP. This visualization lets any participant explore all the statements that each statistical group agrees and disagrees on together.
These "opinion groups" are just neutral statistical clusters. That feels important. It's just an integration of gut-feel statements that people perform a hot-take of: agree/disagree/pass. But it turns into this machine for everyone to independently uncover the deep stories of each group. Once groups form, for complex but focussed issues, there are often 3-4 groups, even if you maybe thought it was just "us and them".
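The statistical mechanism behind those groups is genuinely basic, which is part of the point. Here's a toy sketch (my own simplification, not the actual Pol.is implementation; all participants and votes are made up): project an agree/disagree/pass vote matrix onto its first principal component via power iteration, then split participants by sign. On polarized votes this recovers opinion groups with no labels and no moderator judgment.

```python
# Toy sketch of opinion-group clustering on a vote matrix.
# Rows are participants, columns are statements: agree(+1)/disagree(-1)/pass(0).

def first_pc(rows, iters=100):
    """Power iteration for the top principal direction of the vote matrix."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    v = [1.0] * d
    for _ in range(iters):
        # One step of w = (X^T X) v, then renormalize
        proj = [sum(c[j] * v[j] for j in range(d)) for c in centered]
        w = [sum(centered[i][j] * proj[i] for i in range(n)) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return centered, v

# Hypothetical votes: 4 participants x 5 statements, deliberately polarized
votes = [
    [+1, +1, -1, -1,  0],   # participant A
    [+1, +1, -1,  0, -1],   # participant B (mostly agrees with A)
    [-1, -1, +1, +1,  0],   # participant C
    [-1,  0, +1, +1, +1],   # participant D (mostly agrees with C)
]

centered, pc = first_pc(votes)
# Split by the sign of each participant's projection onto the top component
groups = [0 if sum(c[j] * pc[j] for j in range(len(pc))) >= 0 else 1
          for c in centered]
print(groups)  # A and B land in one cluster, C and D in the other
```

The real system does more (multiple components, k-means, consensus scoring), but the neutrality the parent describes comes from exactly this kind of mechanical projection.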
This tool does lots of interesting things psychologically.
One of my working theories of what makes it unique (I admit to having a few) is that it incentivizes the most passionate participants in a discussion to do the hard work of finding ever-more-nuanced "majority opinions" that everyone agrees on beneath the differences. (These are elevated in the UI.) The nature of this incentive is up to the people who are designing the process in which the tool is embedded. For example, they might say that every "majority opinion" uncovered will shape an agenda item in a future livestreamed meeting of powerful people. So now you have the most passionate participants (who might otherwise shake apart consensus if their energy were directed elsewhere) trying to craft clever statements to "manipulate" people in other groups into agreeing. But in essence, what this means is that those passionate people are exploring the consensus statements of opinion groups other than their own. They want to understand the group so that they can drop a statement into the opinion space between groups. But even if the intentions are not noble, and someone is just trying to "trick" others into agreeing, that participant is still doing deep work to delve into the theory of mind of the other groups, as evidenced through those groups' consensus statements. Basically, the most active, passionate and potentially disruptive participants are encouraged to learn about their opponents in order to sway them, and in doing so, they accidentally indoctrinate themselves into empathic bridge positions. And these bridge positions are perhaps very important to stabilizing consensus and resolving conflict[2].
I'll admit that I wonder what a political or democratic system might look like if we used tools like this to elevate the voices of participants who straddle bridge positions between groups. Like what if we had democratic processes where instead of sending ambassadors from the center of opinion groups, we actually elevated those from the boundary spaces. (Maybe this was always the benefit of systems that use random sortition to elect leaders?) The selection of those people would be relatively politically neutral, as it's not a subjective choice, but a statistical one. Yes, one can probably "fake" being moderate, but to truly embody it, perhaps that changes you. And what effect would it have to send these bridge people (even transiently, for just one topic) into a position of power, to pass judgement?
I had an idea in a similar vein after reading the article and I would be interested in getting your thoughts.
As I see it, the problem statement is as follows:
1) individuals are subjected to vast amounts of political persuasion in their daily lives. This persuasion can be purchased by the highest bidder to subtly influence public opinion in a way that is counter to its own interest. Ultimately, this can weaken the marketplace of ideas, rational discourse, and consensus.
My idea is an app, forum, or social group which pairs individuals to discuss issues in a manner that builds consensus.
Users would identify their position on issues as well as how strongly they hold that position. They would then be offered the opportunity to discuss an issue they care deeply about with someone who is indifferent.
This could yield a high persuasion return for time invested, and help build consensus. It could have an effect similar to door-to-door campaigning, but with the ability to target politically active individuals with high-impact topics. One thing that I find particularly attractive is that it would focus the discussion on individual issues, as opposed to platforms or parties.
> This persuasion can be purchased by the highest bidder
Yes. In other words, they're purchasing weighted edges in the social graph: buying new edges (that send peer signals toward the target within a graph of social influence), or amplifying weight of existing edges representing peer relations. The whole attention economy is not actually about the knowledge or content that flows through this network, rather it's all about buying/building/fortifying edges/connections between marks and those who will exert influence on them, in the direction favoured by the buyer. It's the shape of the network that's damaged, not the content flowing through it (which we've never had more access to). Empathy is just an existing biological solution to a similar network problem, maybe even a biological technology (sorry, can't help it with the weird jargon here -- was a biochemist in a past life).
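The weighted-edge framing above can be sketched in a few lines (a hypothetical model of my own, not any real ad platform's API or graph schema): influence on a target is just the weighted share of signals arriving over edges, and "buying persuasion" means adding or amplifying edges rather than changing any content.

```python
from collections import defaultdict

class InfluenceGraph:
    """Toy model: edges[target][source] = weight of source's influence on target."""

    def __init__(self):
        self.edges = defaultdict(dict)

    def add_edge(self, source, target, weight):
        # Amplifying an existing edge just stacks weight onto it
        self.edges[target][source] = self.edges[target].get(source, 0.0) + weight

    def influence_share(self, source, target):
        """Fraction of all influence on `target` arriving from `source`."""
        total = sum(self.edges[target].values())
        return self.edges[target].get(source, 0.0) / total if total else 0.0

g = InfluenceGraph()
g.add_edge("friend_a", "mark", 1.0)   # organic peer relations
g.add_edge("friend_b", "mark", 1.0)
print(g.influence_share("advertiser", "mark"))  # 0.0: no edge yet

# "Buying an edge": the advertiser pays to inject amplified peer-like signal
g.add_edge("advertiser", "mark", 2.0)
print(g.influence_share("advertiser", "mark"))  # 0.5: half of all signal
```

Note that nothing flowing through the network changed; only the shape of the network did, which is exactly the damage the comment describes.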
> My idea is an app, forum, or social group which pairs individuals to discuss issues
It's a neat idea, but to be clear: why would people participate? I feel that without a self-interested reason, audience would bias toward process-oriented and highly empathic people who already care for the sake of caring. (I would participate, for instance.) imho some aspect of participation needs to give the process and participant teeth they didn't previously have, or give the participant power in a larger system. That's why people vote. That's why they run for election. That's why they join boards/committees. My mindset is interested in empathic tools primarily where they marry power-seeking and empathy together again in a new process.
It's a neat idea though, and don't take my words for discouragement pls! It's one thing to seek social/tech tools that sit beside charity or art or altruism in the cultural landscape (which it sounds like you're rolling around), but it's another to seek social/tech tools that sit alongside elections or Robert's Rules or other governance processes (where I believe Polis operates). We definitely need more people looking for opportunities here, so I'm grateful you're thinking on these things :)
>why would people participate? I feel that without a self-interested reason, audience would bias toward process-oriented and highly empathic people who already care for the sake of caring.
My thought is that the "selfish" incentive would be the chance to make an impact on an issue you care about. In exchange, you give someone a chance to talk to you about an issue they care deeply about and you don't.
The underlying premise is that caring and reasonable people can and do have differences of opinion. The "teeth" is access to a likely voter who will listen and doesn't hold a dogmatic position. I, for example, would spend an hour to double my effective voting power.
I have little interest in running for an office, and don't think that politicians are particularly effective about changing the minds of voters. They are tied to a multi-issue platform which is incredibly cumbersome, and disinterests many individuals from the process overall.
Interesting question. Interesting too that we've become so self-confident in our machines and our studies that we think to ask it. Seeking to regulate something like the use of psychology, which we understand so little of, seems like an 18th century pirate asking his captain whether they should regulate the ocean. Like no, we only know a minuscule amount of information about it, about the bare surface of it at that - ask me again in 2,000 years.
I think the FDA was approving medicines long before the precise molecular mechanisms were known. Even with unknowns you can still study outcomes and legislate around them. Instead you seem to propose "oh we don't know enough, so we should just let it run wild".
I do agree that sensible legislation around this is hard to impossible to do. I'd rather see some more research into the effects funded before jumping directly into legislation.
You're right that we don't need to understand something completely before we regulate it.
My point was to emphasize how little we know about psychology and how little good a law that addresses our current technology will do in the long run, since the psychology-focused technology we have 50 or 100 years from now will likely be vastly more dangerous.
The image I have is of a caveman who, upon seeing one of his tribesmen kill another with a rock, proposes to regulate the use of rocks. Good idea maybe, but give it time and rocks will be the least of his troubles.
However it's unfair of me to dismiss our real life solutions to real life problems by pointing to a problem that might come to exist in an imaginary long-term future. We can only make the best with what we have.
Maybe I need to update my mental model of the law from an eternal-and-perfect view to an incremental-and-imperfect view, especially when it comes to the intersection of law and technology.
Yeah, no. Ignorance is a reason to employ caution, not throw it to the winds.
What do you imagine is going to be the horrifying fallout of telling businesses and politicians they're not allowed to build and act on detailed profiles of people?