But why do you think legislation should be the one forcing that choice on everyone, rather than the market allowing consumers to choose?

The issue isn't what we personally like or prefer for ourselves -- it's more about when we think it's legitimate to take away the right of businesses to experiment with business models, and of consumers to be free to enter into the tradeoffs they prefer.

Why shouldn't a consumer be allowed to agree to what you call "extremely invasive spying" in exchange for a free product, if they want to? As long as you're not signing away rights (e.g. you can't agree to sell yourself into slavery) or engaging in physical harm (e.g. food and safety regulations), then what's the moral basis for taking away choice?

After all, what you call "extremely invasive spying" other people might consider no big deal at all, in terms of their own personal data.




Legislation is used in cases where individuals either can't make that choice or where individuals' choices would cause negative externalities for other people.

That's how we get food safety laws, employment laws, waste disposal laws, etc. With regard to food safety, for example, the vast majority of people have neither the equipment nor the knowledge to test whether the food they just bought is contaminated, so it's better for everyone if the law uses the threat of significant penalties to force companies to produce safe food, letting people buy and eat food with 99% confidence that there is nothing nasty in there.

The same can apply for data protection - very few people have the skills and resources to reverse-engineer every single binary they run and inspect network traffic, so it's better if significant penalties just force all companies to not misuse personal data by default.


> the vast majority of people have neither the equipment nor the knowledge to test whether the food they just bought is contaminated,

Back in the late 1800s and early 1900s, food companies actually tried to convince people that this was the right thing, instead of, you know, just selling unadulterated food. The argument was literally "if we just put 'borax' on some sort of ingredients list, people can choose for themselves." Women's magazines and cookbooks explained to housewives how to chemically check their food for contaminants (which required a lot of interesting acids), contaminants which included fun things like formaldehyde and liquefied cow brains (to simulate "cream" on top of really watered-down milk).

Just to be clear, this only helped the companies, and in certain circles the period ~1850-1906 (when the Pure Food and Drug Act was passed) is called "the great stomach ache" because people were essentially eating poison with every meal.

People are awful at understanding their own risk profiles, especially in the face of advertising.


> People are awful at understanding their own risk profiles, especially in the face of advertising.

Advertising is also regulated, by the government. The same one that regulates what goes into food. If the public is vulnerable to manipulation by advertising, the government is also vulnerable to manipulation by lobbying (remember the USDA "food pyramid" that put grains at the bottom due to industry lobbying?). Your argument is inconsistent because it implies that the government is somehow going to simultaneously allow the public to be deceived by advertising and yet correctly regulate the contents of food itself.


It's not inconsistent to say that the government has significantly more resistance to manipulation than a random person.


> That's how we get food safety laws

Yes, and we also have food labeling laws that require companies to truthfully warn consumers about some mildly-unhealthy things (sugar, trans fats) on the nutrition facts label - but, crucially, these laws do not ban those things.

That's the argument being made - that companies should be able to run on business models that aren't extremely bad (extremes being things like selling yourself into slavery or physical harm) as long as they clearly communicate to consumers exactly what they're doing (which, to be clear, is not happening now - a 30-page privacy policy in legalese with lots of "mays" is nowhere close to the "privacy nutrition label" that we deserve).

> The same can apply for data protection - very few people have the skills and resources to reverse-engineer every single binary they run and inspect network traffic, so it's better if significant penalties just force all companies to not misuse personal data by default.

That's a crazy invalid argument. The fact that very few people can reverse-engineer binaries doesn't actually matter. The government also isn't going to reverse-engineer every binary in existence, nor do they test every single product that is sold for banned substances, because that's not the way that (e.g. food safety) laws work.

The way that food safety laws actually work is that the government says "here are the rules for labelling, and here are a list of things that absolutely cannot go into your product at all" and then may occasionally test for compliance. That's exactly how both (a) labelling (but allowing) and (b) banning sale/(mis)use of personal information would work.
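
To sketch what that model looks like in practice - a toy illustration in Python, where the rules, field names, and banned list are all invented for this example, not taken from any real regulation:

    # A regulator publishes (a) labelling rules and (b) a banned list,
    # then occasionally spot-checks products against both.
    BANNED_USES = {"sell ssn", "sell health records"}
    REQUIRED_LABEL_FIELDS = {"data_collected", "purposes", "third_parties"}

    def spot_check(label: dict, actual_uses: set[str]) -> list[str]:
        violations = []
        # (a) Labelling: the label must contain every required field...
        missing = REQUIRED_LABEL_FIELDS - label.keys()
        if missing:
            violations.append(f"missing label fields: {sorted(missing)}")
        # ...and must declare everything the product actually does.
        undeclared = actual_uses - set(label.get("purposes", []))
        if undeclared:
            violations.append(f"undeclared uses: {sorted(undeclared)}")
        # (b) Banning: some uses are illegal regardless of disclosure.
        banned = actual_uses & BANNED_USES
        if banned:
            violations.append(f"banned uses: {sorted(banned)}")
        return violations

Note that enforcement never requires inspecting every product: the same spot check runs whether the rule is "label it" or "ban it".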

Your (invalid) argument has nothing to do with the question of whether it's better for companies not to do thing X at all versus doing thing X but clearly warning customers about it, because the enforcement/detection mechanism is exactly the same in both cases, whereas you're (falsely) implying that it somehow differs.


> > That's how we get food safety laws

> Yes, and we also have food labeling laws that require companies to truthfully warn consumers about some mildly-unhealthy things (sugar, trans fats) on the nutrition facts label - but, crucially, these laws do not ban those things.

The US is a big player in the war on drugs, so yes, they do ban things. Moreover, they use their military and economic power to pressure others to do the same.

> That's the argument being made - that companies should be able to run on business models that aren't extremely bad (extremes being things like selling yourself into slavery or physical harm) as long as they clearly communicate to consumers exactly what they're doing (which, to be clear, is not happening now - a 30-page privacy policy in legalese with lots of "mays" is nowhere close to the "privacy nutrition label" that we deserve).

So in principle you agree that companies should be regulated. You just disagree on the extent.

> > The same can apply for data protection - very few people have the skills and resources to reverse-engineer every single binary they run and inspect network traffic, so it's better if significant penalties just force all companies to not misuse personal data by default.

> That's a crazy invalid argument. The fact that very few people can reverse-engineer binaries doesn't actually matter. The government also isn't going to reverse-engineer every binary in existence, nor do they test every single product that is sold for banned substances, because that's not the way that (e.g. food safety) laws work.

> The way that food safety laws actually work is that the government says "here are the rules for labelling, and here are a list of things that absolutely cannot go into your product at all" and then may occasionally test for compliance. That's exactly how both (a) labelling (but allowing) and (b) banning sale/(mis)use of personal information would work.

> Your (invalid) argument has nothing to do with the question of whether it's better for companies not to do thing X at all versus doing thing X but clearly warning customers about it, because the enforcement/detection mechanism is exactly the same in both cases, whereas you're (falsely) implying that it somehow differs.

I think the main argument is actually not about how easy it is to understand the implications of certain labels, but about power balances. The power imbalance between a single consumer and a company like Facebook is immense. Facebook employs many psychologists to make their product as addictive as possible (the notion that we are rational beings immune to psychological manipulation has long been proven false); moreover, even if they overstep some lines, their many lawyers will ensure that as a consumer you still only have a small chance of redress.

Democracy is fundamentally a tool to balance power, and that's exactly what is happening in this case: people want Facebook et al. to be regulated to balance the power. Admittedly there are problems (e.g. voters are also subject to psychological manipulation), but it's still much better than anything else we have.


>Why shouldn't a consumer be allowed to agree to what you call "extremely invasive spying" in exchange for a free product, if they want to? As long as you're not signing away rights (e.g. you can't agree to sell yourself into slavery) or engaging in physical harm (e.g. food and safety regulations), then what's the moral basis for taking away choice?

Not everybody believes that individual, consumer-level choice is such a sacred cow. Why is a "moral basis" needed to prohibit choices that the broader society deems bad to offer? For example, I would be quite happy to see pyramid schemes regulated out of existence, despite the fact that all participants are making reasonably informed choices to join up (certainly far better informed than people who click agree on the Facebook ToS). Hopefully I don't need to offer a "moral basis" for why I think that's a good idea.


The simple answer to this and all consumer-regulation questions is information asymmetry. A service that wants to monetize privacy or some other sensitive aspect of customers' lives never discloses information in a way that lets the customer easily understand all the consequences. People do not speak legalese, they do not understand technical details, and they generally should not be expected to be experts in T&Cs. For this reason they choose a government that acts on their behalf and levels the playing field by defining the terms that are acceptable to the majority.


This is orthogonal to the GP's post, which was asking "why can't people choose whether they want to pay for things with privacy" (paraphrased).

The solving of information asymmetry is a necessary precondition to that choice, but after you've solved it, why shouldn't you have that choice?


You also have to make sure it's really a choice, for one. If I can't avoid the companies that require users to pay with privacy, that's a problem.


Yup, of course - antitrust and all that.


> But why do you think legislation should be the one forcing that choice on everyone, rather than the market allowing consumers to choose?

Meta is still allowed to do this, so consumers still get a choice. Meta is just now forced to ask consumers for their explicit consent.


Exactly. The legislation is not forcing the choice on any consumer; it’s giving them their choice back by forcing Meta to ask the question in clear terms and to allow users to clearly tell Meta they are not allowed to spy.

Now, if Meta can’t make a viable business model without denying users the choice, doesn’t that mean that their model is flawed? The thing is, for users to understand what they are choosing, they need to understand their right to privacy and why it matters. It took the EU at least a decade (possibly more) to reach the point where they are starting to apply this legislation.


I think that legislation is the right choice when companies have proven to be repeatedly deceptive and manipulative about what they are actually doing.

If companies are hiding or downplaying what their true business models are, then consumers cannot be expected to make an educated or informed choice about which companies to support or use.

When large companies with massive amounts of resources are also using them to spread propaganda, it becomes even harder for people to make informed choices.


So the solution is to use regulation to stop propaganda, enforce transparency, and allow consumers to make a fully-informed choice.


"the market" isn't some omnipotent magical force that inherently provides all options.

Given the current status quo, it's pretty clear that "the market" isn't solving this problem. Legislation is needed because "the market" has failed to regulate itself.

"the market" does not work for consumers, it's shaped for and by sellers.

Left to its own devices, capitalism creates conditions that are explicitly harmful to the population. Look at the sheer number of labor protections we have. "the market" didn't stop employers from enforcing 80-hour work weeks and child labor, and it absolutely would not have without legislation.

Capitalism isn't magic. "the market" doesn't solve problems on its own.


>it's pretty clear that "the market" isn't solving this problem.

Have you considered that it isn't being solved because consumers don't actually mind the extra tracking? All the tracking does is attempt to give them better ads. Better ads provide a better user experience. Personalized ads are a win-win for both the company and the user.


> Have you considered that it isn't being solved because consumers don't actually mind the extra tracking?

This is an interesting hypothesis, but there's no evidence for it.

Here's how we can test it: enact regulation requiring companies to disclose what personal information they collect, how they use it, and what third parties they disclose it to, and put that information in a place as prominent as the price (e.g. "$10/month, plus we collect your name, phone number, email address, and mailing address and sell them to 16 third parties[link to list] for advertising purposes").

Then we'll see how many people are willing to pay for services with their data.
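
To make that concrete, here's a rough sketch in Python of what such a machine-readable disclosure could look like - purely hypothetical; the field names and rendering are my own invention, not any existing standard or regulation:

    from dataclasses import dataclass, field

    @dataclass
    class PrivacyPriceTag:
        # Hypothetical "privacy price tag" rendered as prominently as the price.
        monetary_price: str          # e.g. "$10/month"
        data_collected: list[str]    # e.g. ["name", "phone number"]
        purposes: list[str]          # e.g. ["advertising"]
        third_parties: list[str] = field(default_factory=list)  # buyers of the data

        def sticker(self) -> str:
            return (f"{self.monetary_price}, plus we collect "
                    f"{', '.join(self.data_collected)} and sell it to "
                    f"{len(self.third_parties)} third parties for "
                    f"{', '.join(self.purposes)}")

    tag = PrivacyPriceTag(
        "$10/month",
        ["name", "phone number", "email address", "mailing address"],
        ["advertising"],
        third_parties=[f"broker-{i}" for i in range(16)],
    )
    print(tag.sticker())
    # -> $10/month, plus we collect name, phone number, email address,
    #    mailing address and sell it to 16 third parties for advertising

The point isn't the exact schema; it's that once the disclosure is structured rather than buried in a 30-page policy, a storefront can be required to print it right next to the price.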


Mostly people don't know, and literally can't imagine, how much data there is about them. I accidentally freaked out a friend when they posted a photo on FB and challenged their friends to guess where it was taken, by responding 30 minutes later with a link to the Google Street View photo of the same spot.

That said: The personalisation of the adverts is, at least in my case, pretty terrible — FB and Twitter categorised me as being interested in sports I've never watched, languages I've not learned, and politics of countries I've never lived in.

As someone else replied to me here, it's good that they're bad, because bad ads are easier to ignore.


> The personalisation of the adverts is, at least in my case, pretty terrible — FB and Twitter categorised me as being interested in sports I've never watched, languages I've not learned, and politics of countries I've never lived in.

Part of the reason for the push for opaquely-personalised ads could also be because advertising platforms can effectively defraud their advertisers by serving their ads to people who have no interest in them, which would be impossible in a world where people explicitly choose which types/categories of ads they're interested in.


>advertising platforms can effectively defraud their advertisers by serving their ads to people who have no interest in them

Why would a platform ever do this intentionally? This will just result in lower CPM and advertisers pulling out.

>which would be impossible in a world where people explicitly choose which types/categories of ads they're interested in.

It's possible. You can still make an ad model to give people the worst ads and ignore their preferences.


If people don't care, then why is legislation being passed? There is little monetary benefit to a government to unilaterally enact this kind of legislation without public support or legitimate concern for public safety.

Privacy and data mining by corporations are a huge topic of discussion these days. The fact that the conversation doesn't enter your bubble doesn't mean it's not happening. You should look around more, find some diverse news outlets, and actually get a sense of public opinion before making claims about it.


>If people don't care, then why is legislation being passed?

Because there is a vocal minority who does care and the media makes the collection of data sound scarier than it really is.

>There is little monetary benefit to a government to unilaterally enact this kind of legislation

Except for all of the fines that you can collect under it?

>get a sense of public opinion before making claims about it.

I have, and most people do not care until you start using loaded words that make them think someone is actually spying on them and looking at their data, when in actuality their data is a drop in a pond: it is only processed by computers and is never looked at by another human.


> consumers don't actually mind the extra tracking?

Consumers can opt in to the tracking.

When Apple required opt-in, 10% opted in, 90% opted out.


> But why do you think legislation should be the one forcing that choice on everyone, rather than the market allowing consumers to choose?

I see a lot of debate about market forces vs regulation. In my view, neither one of them will solve every problem; that's why we have both.

As others have mentioned, an issue with Facebook is that the consent was not informed and explicit.

Another group of related issues, and this is where I think the balance needs to tip to regulation, is the network effect/walled garden/near monopoly: choosing another platform doesn't just require signing up to another business model or UX, it also involves being able to communicate with a different group of people. This is compounded by their takeover of Instagram and WhatsApp.

As Facebook only offers one model (pay for the service by sacrificing personal data), it's an all-or-nothing proposition. This might satisfy the US market better than European ones.


It is rational for governments to guard against practices which externalize costs onto third parties, and they have to balance this with a number of factors, including innovation.

It is easier to understand this with health than with privacy, I think. The meat packing industry used to use substances such as formaldehyde to preserve meat longer. This wasn't transparent to the end customers, and the health issues led to lower productivity of the population as a whole. Soldiers eating this meat suffered lower fighting capability and higher rates of illness. The meat packing industry fought against any transparency here, knowing that it would hurt their profits. After an enormous amount of advocacy over decades, regulation was added. There is a balance here - some substances are outright banned and some are just required to be documented on the food. This makes sense, in my opinion, because we can't expect every person to be a food chemist and know what's good and not good. Market makers have relatively large amounts of money to spend to confuse customers with misinformation, if given the option.

Applying this to privacy, the question governments have to answer is what is fair for end users to be allowed to make a choice on, and what is outright harmful such that a rational, informed person wouldn't make that choice. It is a tricky thing to get right and this bill may be overreach, but that is the nature of government and any policy - iteration. But there is always a place for a government to be in the market. Otherwise the market will be dictated by the powerful, not the people. Without perfect information transparency and the ability to interpret that information, there is no such thing as a free market.


> But why do you think legislation should be the one forcing that choice on everyone, rather than the market allowing consumers to choose?

Isn't that the exact point of legislation? Consumers had no choice in the matter, because Meta collected and sold user data even if you had no Facebook / Instagram account.


> But why do you think legislation should be the one forcing that choice on everyone, rather than the market allowing consumers to choose?

You paint a rosy libertarian picture about choice, but choice is not inherently good. I don't want to live in a society that allows consumer data to be used adversarially against consumers to defraud and manipulate them. It is good when laws prohibit fundamentally immoral behavior.


You paint a rosy utopian picture about regulation, but regulation is not inherently good. I don't want to live in a society that allows regulation to be used adversarially against business owners to control them and manipulate the market.

> It is good when laws prohibit fundamentally immoral behavior.

Sure, but that's not relevant, because people don't agree on what "fundamentally immoral behavior" is to begin with (for instance - I see nothing wrong with voluntarily trading some of your personal data away for a product, subject to lots of terms and conditions).

However, even if people agreed on what "immoral behavior" was, you couldn't regulate it away, because you don't have the ability to directly affect laws.

Regulation is by definition a very coarse instrument. Regulation is necessary because people are evil. But who puts laws and regulations into place? Also people, who are evil. The very fact that you can have greedy CEOs who destroy the environment for profit means that you also get corrupt regulators who write twisted laws for a bit of extra money. So, regulation is inherently subject to corruption - that's exactly how you get regulatory capture[1].

So, given this fundamental truth about humanity, the only reasonable thing to do is to design a system that acknowledges the truth about corruption and is designed to resist it. In this particular case, that means putting labelling and transparency regulation into place, which is much simpler and harder to corrupt and depends far less heavily on the moral frameworks of individuals.

Your proposal to regulate the market to conform to your personal moral code is highly authoritarian and in denial of a basic truth about human nature.

[1] https://en.wikipedia.org/wiki/Regulatory_capture


I am glad that regulation prohibits human trafficking, despite the 'loss of choice' business owners have as a result. Prohibition is never perfect, but there is certainly a lot less human trafficking than if the market were less 'authoritarian'.

In 70 years, I suspect people will view the ad/consumer surveillance industry much like we view the 1950s tobacco industry today.


Your response to my comment is as irrelevant as the original one you posted.

I never said anything about "regulation being inherently bad" or "all regulation should be abolished". I certainly never said or implied anything remotely along the lines of "slavery-adjacent things should be legal".

Please make sure that your comments actually have a connection to the thing you're commenting on before posting them.


Let's also have the market allow consumers to decide how much poison they're willing to consume in their food. After all, consumers will be free to make the choice between poisoned and unadulterated food.

Some people are starving; allowing businesses to bulk out their products with poisonous substances would offer much more affordable food to those people. What's the moral basis for taking away that choice?


And that’s fine: that consumer can choose to opt in.


Except they can't. If a service is only profitable with personalized advertising, they aren't allowed to say "if you want to use this service you need to allow us to do personalized advertising".

Which means that in the future if there's a service I might want to use which is in this category and if I don't mind having my data used to personalize my advertising, I don't have a choice: the service won't exist.


That's a bit like saying that motor racing wouldn't be possible without cigarette advertising.

It turns out that banning tobacco advertising did not lead to the end of Formula 1, and making targeted advertising optional won't lead to the end of social media either.


Case in point: Mastodon, an explicitly not-for-profit social network currently growing pretty dramatically.


So essentially you want to take choice away from the majority of consumers because it might make certain business models unprofitable and you want that business model for yourself? If that's not what you want then you need to better explain your point because right now it certainly sounds like that.


> you want to take choice away from the majority of consumers

Huh? My worry here is that privacy regulation will reduce the choices available to consumers. If it's legal for some services to offer "pay with ads + fraud detection" and others to offer "pay with your money", we can see what sort of services consumers prefer. In most areas online, it turns out it's the former. Regulation removes the first option.

> you want that business model for yourself

I'm not interested in running this sort of business; while I used to work in ads I'm not planning to go back. My main perspective here is as someone who uses and enjoys many ad-supported sites (including HN) and doesn't want to see them regulated out of existence or behind paywalls.


You’re straw-manning by conflating ads in general with personalised ads. There’s nothing stopping sites from targeting ads in ways that don’t require the processing of personal information (as your example, HN, already does).


As I wrote in my response to pif above [1] it's pretty likely that the GDPR also requires consent for detecting ad fraud, without which advertising isn't practical on most sites.

[1] https://news.ycombinator.com/item?id=34248454


Yes, one of the purposes of the GDPR is to make it so that such a service does not exist.


If I want that service and someone wants to provide it to me, how is it the role of government to say we can't make that trade?


If I want to take crack cocaine, and someone wants to sell it to me, how is it the role of government to say we can't make that trade?

Sure, hyperbole, but the role of government is to regulate when the externalities of a transaction aren't being accounted for within the transaction.


I'm on board with governments stepping in over externalities, but if A and B make a deal that includes "B will, in addition to observing A's behavior to implement functionality, use these observations to target ads and detect ad fraud" I don't see how externalities enter into it.


I think the normal reasoning is that services that don’t use data in this fashion can’t compete with those that do. Hence users who don’t want to be tracked have no other options. Whether that bothers you is a personal question.

Edit: I think it’s comparable to something like warranties. Enforcing an automatic 1/2/etc.-year warranty on products makes certain business models untenable. Do we consider blanket warranties on end products more important, or an individual’s freedom to forego such a warranty more important? Different societies may reach different conclusions.


Because having something priced as "free" distorts the market. Consumers don't immediately see the price they pay for the product. There are plenty of experiments where people choose between two options that differ by $1 in price: when both options are paid, people make a rational choice; however, when one of the options is free and the other costs $1, a disproportionate number of people choose the free option. [0]

So yeah, I think legislation should strongly push things away from free - because everything has a cost and it should be visible to end users.

[0] - https://www.neurosciencemarketing.com/blog/articles/the-powe...


> Because having something priced as "free" distorts the market.

Only if you consider the "market" to only take price into account - which is a very simplistic model that most people don't adhere to.

You pay for things with your money, time, attention, and personal information. The only problem with the current market is that, while it's extremely easy to see monetary prices (due to effective government regulation, I might add), it's far more difficult (or impossible) to see those other three "prices". Were they equally visible, there would be no problem.

> There are plenty of experiments where people choose between two options that differ by $1 in price: when both options are paid, people make a rational choice; however, when one of the options is free and the other costs $1, a disproportionate number of people choose the free option.

What's happening here is just a "rounding down" of a particular cost to zero, which isn't relevant if the associated cost isn't close to zero. Specifically, if the two options are (1) pay $1 for the thing and (2) pay $0 but expose your IP address to the website, the latter option will get rounded down to zero the vast majority of the time, because people usually don't care about their IP address being seen. However, if the two options are (1) pay $1 for the thing and (2) pay $0 but give them your real name, mailing address, email address, phone number, and SSN, and the buyer is aware of that upfront, there will not be the same asymmetry, because that private information does not round down to zero.
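
A toy model of that rounding-down effect, with made-up dollar valuations purely for illustration:

    # Buyers mentally price each disclosed data item; trivial items round
    # to ~zero, sensitive ones don't. These valuations are assumptions.
    def perceived_cost(price_usd, data_items, valuations):
        return price_usd + sum(valuations.get(item, 0.0) for item in data_items)

    valuations = {"ip address": 0.01, "real name": 2.0, "ssn": 50.0}

    # (1) pay $1, disclose nothing vs (2) "free", expose an IP address:
    perceived_cost(1.0, [], valuations)                    # -> 1.00
    perceived_cost(0.0, ["ip address"], valuations)        # -> 0.01, "free" wins

    # Same "free" offer, but it wants your real name and SSN up front:
    perceived_cost(0.0, ["real name", "ssn"], valuations)  # -> 52.00, it doesn't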

The correct solution is to regulate transparency such that, alongside the "free" sticker price, consumers see a very clear warning label: "you will provide your full name, email address, and phone number, and these will be sold to 17 parties[link] for advertising purposes" - in other words, actually expose the privacy cost as part of the "sticker price" alongside the monetary cost.



