> individual “data rights” have led to unintended consequences; “privacy protection” seems to have undermined market competition;
That opening paragraph already speaks to the over-elevation of the market over any other concerns, so it fits perfectly on "news.ycombinator.com". Human rights, including privacy and data rights, are more important than the profits of some companies.
Most of the examples in the text relate to companies failing to properly implement the GDPR (Amazon sending data to the wrong person, Spotify not asking for 2FA/email confirmation for the bulk download, companies deleting articles even when there would be sufficient public interest, ad vendors failing to ensure compliance and therefore seeing drops in demand, ...), that is, market failures, which this site would probably not call out as such but rather attribute to the legislation.
Market competition is usually the opposite of profits. Market competition usually works against the competing companies and in favour of their customers.
That is true in theory, but it is not (solely) what is at stake here. Here, we are talking about the cost of regulations, and these do eat into the profits of companies. To some extent, these costs could also hurt competition (assuming the competitor had the same data-vacuuming business model). While we cannot say directly whether the (possible) decrease in competition compensates for the compliance costs, looking at the overwhelming opposition by businesses small and large (including FANG), it seems like the costs are higher.
Compliance costs money, what's supposed to be new or particularly bad about that?
Many in this thread seem to argue that launching an internet-based business or entering a market as large as the EU was previously free, and I honestly don't see where this illusion is coming from. Yes, GDPR compliance costs money, just like a whole bunch of other things.
> To some extent, these costs could also hurt competition (assuming the competitor had the same data-vacuuming business model).
Same question: if that data-vacuuming assumption is true, why should the general public care about preserving it? GDPR regulation and its implementations are by no means perfect, but it's not like transparency, forcing companies to think about their impact on their users' privacy, and other aspects don't present benefits as well. I realize that the question of regulation is a matter of philosophy just as much as it is political, but painting GDPR as some kind of killer of good businesses seems, at least, very weird.
In general this might be agreeable. But in these cases these are not direct unintended consequences of the legislation but rather consequences of companies failing to do their due diligence when implementing the GDPR. It would be like companies putting people in bomb disposal suits after something like OSHA is implemented and complaining about the cost and lost productivity. Or to take the EU: recently the CJEU ruled that all work time has to be tracked (not only overtime), and a German company (in Germany this wasn't mandated before) would force employees to install invasive phone apps that track location etc. and use that to determine work time. Employees rightfully complaining should direct their anger at the company, not at a law that would work perfectly well with a standard punch card system.
> But in these cases these are not direct unintended consequences of the legislation but rather consequences of companies failing to do their due diligence when implementing GDPR.
The complexity of additional law is part of its unintended consequences.
How is sending the recordings of one user to another a result of law complexity?
In any case, there's no increased complexity here; the Data Protection Directive (enacted in 1995) already mandated access to one's data (Article 12 - Right of access). The GDPR mostly gave some real teeth to that existing right.
> That opening paragraph already speaks to the over-elevation of the market over any other concerns, so it fits perfectly on "news.ycombinator.com"
I have not found the cultural climate on HN to be opposed to the GDPR at all. Both its goals and its implementation seem to be popular here overall, and most comments I see that are critical of it get enough downvotes to have their text significantly desaturated even if they're making a good argument.
It's my impression, though I have not worked in the field, that people operating ad networks believe some kind of tracking is necessary to prevent click fraud, and do not want to sell ads free of any tracking even when they have customers asking for the option. I don't know how true this claim is or whether alternatives have been adequately explored.
What I do know is that some of the industries that use ads, such as newspapers, are struggling to make enough money to continue operating. Nobody will miss a scummy adtech firm, but people might miss local news outlets. It's valid to talk about the impact on business not just in terms of profits, but also considering the potential positive externalities of a given business's continued operation.
I recall a discussion on the matter. Everyone that wasn't selling ads felt that if it was impossible to sell ads without tracking due to "click fraud", there was no obligation from society to prop up their failing business model by letting them ignore the privacy rights of the public just because it was expedient to do so.
The business model is fine; the EU and you are choosing to kill it. Not only are people arguing for no tracking, they are arguing for forcing other people to live without tracking. So if someone in Europe wants a free service in exchange for their personal information, they don't even have the choice. Additionally, GDPR requires that businesses not restrict their products to only those that are funding them.
Well yeah, that's the meaning of laws/regulation, they apply to everyone. If someone in Europe wants a free service in exchange for [something illegal] well sorry but not allowed. That's not the hard part of this whole debate...
Except for the fact that in this case the "something illegal" is the user's own data, which they should get to decide what to do with. Isn't that the whole point of this discussion? That users have the right to their own data? If they decide they want a free service in exchange for it, they're only 'hurting' themselves after all, and if they really have a right to their data (instead of a right to have their rulers tell them what to do with their data), then they should be able to make that choice! You can't act like everything that's "illegal" is equally bad by grouping "personal data" with other "[something illegal]" items. That's a pretty severe case of equivocation.
I'm not sure I see that, but I'd be willing to consider that possibility if there was any data to back that up instead of hysterical op-eds and articles. I see a lot of those, but not a lot in the way of convincing evidence.
Although FB in general doesn't seem to help the political discourse, I don't think it's anyone's job to decide what ways of communication and discussion are right for a democracy, because if you control where people can speak and what they can say, it's a short step until you control how they vote.
Wait what? I cannot say illegal things are illegal? The only association I made is that things that break the law are illegal, which is a tautology and obviously true. I never said everything illegal is the same.
Personal data has many meanings, and for some people this data might mean their literal death, so of course it's a serious topic. That hasn't been common in the last ~30 years of Western history, but that's just a tiny slice of time and place, so of course it's sensible that many of us want to keep personal data, well, personal.
I apologise, I might have misunderstood your point then. I assumed you were saying something more interesting than a simple tautology. Why state a tautology?
I thought what you were saying is that we shouldn't want it to be legal because it's illegal and therefore bad.
> Except for the fact that in this case the "something illegal" is the user's own data, which they should get to decide what to do with.
And what websites were open with their data collection before the GDPR forced them to be? When I opened the data collection dialogs introduced by the GDPR for the first time, I expected maybe two or three entries, and it was near consistently closer to 40! WTF. Calling it "user choice" when the site owner deliberately omits that kind of information is dishonest at best.
Forcing companies to tell users what they're doing with their data so the users can have informed choice isn't really my issue here.
The issue I have is the rules in what can be done with that data. Because those rules make it so that the users don't get to decide what they're OK with being done with their data.
If I'm being really honest here, the GDPR seemed pretty well intentioned; IMHO it just went a little too far.
Maybe I read a different text than you but I have yet to encounter a scenario in which a user would consent to processing that the law still would not allow. Assuming the consent was gathered according to the basic principles of GDPR, fair, transparent, specific etc.
Maybe I am missing something. Would you provide examples? I would be very glad to learn if there are edge cases I may be overlooking.
The GDPR does not restrict much what you can do with personal data in principle; it does require much more effort in explicitly informing the user (also for updates), and it does grant users inalienable rights over their own data.
Those "inalienable" rights are kind of the issue here: it's telling the businesses and users how they can sell that data and what they can use it for, so it's not a choice by the user on what they do with their data, and more a collective decision with the government.
As for explicitly informing the user about what they do with their data, that part I have no quarrel with.
My point would be that there are a lot of illegal business practices. You as a customer cannot buy expired food.
> it's telling the businesses and users how they can sell that data and what they can use it for, so it's not a choice by the user on what they do with their data, and more a collective decision with the government.
That is true, and that should happen in cases where market incentives do not align with social or public interests.
I see where you're coming from, and it makes sense. Your point of view, if I understand correctly, is that the government should protect consumers from accidentally making bad choices. And I get that. It seems like it would be nice. But I don't think that's a road we want to go down, because if the government gets to make choices for people in one area, why not others? And why are we assuming that the government always knows better than the people who are actually in these situations?
I mean, if I'm being honest my views on government aren't very common, so it'll probably be expedient to agree to disagree. (:
I don't think tracking adtech is a fine business model, but I'm inclined to favor technical solutions over legislative ones. There are several reasons, including: technical solutions are available to everyone, not just specific regions; technical solutions evolve and respond quickly to a changing environment; technical solutions offer users more direct control over what they will accept; technical solutions are not coercive or backed by the threat of violence.
I love how everyone is downvoting you for pointing out something super basic: GDPR is essentially the government deciding that the data of everyone in Europe belongs to the government, to decide what they can use it for and what they can't. That's not personal data rights. That's other people deciding what's best for you.
To make a little jump, this is like saying that child labor regulations mean that your children are actually property of the government. (Not to show it is wrong, but that there is some background context needed.)
Also, I can still give Facebook all my personal data. Facebook simply needs to get my consent to distribute and sell it, and I will forever have some basic control over what data Facebook has on me. The government has little to do in this.
Also (beware the strawman), as far as I know people cannot sell their own organs in the EU. Is this a sign that your body belongs to the state, or that business models built on harvesting poor people's organs are unjust?
So for your examples, I would say yes, yes through its rules and actions the government has clearly shown that it thinks it owns those things. Including our bodies (drug war anyone?). And a business model that pays for organs is not "unjust" but maybe a bad idea for those that would participate, obviously. I mean, the way you phrase that makes it sound like they're going to be kidnapping poor people in the streets to steal their organs if there wasn't a law against selling organs, which doesn't make sense.
> Also, I can still give Facebook all my personal data. Facebook simply needs to get my consent to distribute and sell it, and I will forever have some basic control over what data Facebook has on me. The government has little to do in this.
So long as the government doesn't force companies to provide the basic control, that's how it seems like it should work! (:
> I mean, the way you phrase that makes it sound like they're going to be kidnapping poor people in the streets to steal their organs if there wasn't a law against selling organs, which doesn't make sense.
My understanding is that figuratively speaking that is almost what happened with subprime loans.
Corporations and markets can have a lot of power to perform predatory tactics. If drugs were simply legal, quite a few businesses would sustain themselves on other people's addictions.
One of the main reasons we need regulations is that any sensible and obvious law (like not kidnapping people to harvest their organs) has loopholes (like keeping people poor, ignorant, and devoid of mobility through lack of education, criminal convictions, etc.) so that they will agree to sell their organs.
Organ harvesting is an extreme example and obviously will not happen with or without regulations, but modern free societies are built on the free enterprise (including in the public sphere) of individuals, and consequently they need to handle the cases where individuals gather too much power and can destabilize societies.
Every free society has this problem (including Bitcoin, with a 51% attack) and needs to find a solution that both promises rewards for personal enterprise and creates incentives not to abuse the system (for Bitcoin, IIRC, those are respectively money and the loss of hardware investment).
> So long as the government doesn't force companies to provide the basic control, that's how it seems like it should work! (:
(interpreting as government should not force companies)
My problem with that is that principles do not help us distinguish fair competition from predatory, unethical behavior. In the context of personal data and privacy that is relevant, as we live in a completely different universe from just a few years ago.
Gossip is not illegal, but if you were magically able to listen to every conversation in a 10 km radius that would be a problem. Legal and illegal are often linked to how hard it is to do something and the scale at which you can do it.
Which is a fine argument, but people often don't consider that this kills off all the businesses that rely on those adtech companies. I think part of the problem is that it's not immediately obvious that sites like Google and YouTube only run because of that adtech.
They only rely on ads because that's the path of least resistance. If the GDPR eventually means that there is no Google or no Youtube anymore – which is not very likely – that just means the cost of Google/Youtube doesn't justify its benefits, assuming markets work at all.
This is an astute observation about this site. I too have noticed, on many topics, that the comments here tend to acquire a different character when it's the middle of the night in California.
Many in adtech do support privacy and data protection; after all, they're just people too and want a good experience online.
The problem is that GDPR is well-intentioned but poorly implemented, a common occurrence in politics, with examples in lots of sectors. These kinds of unintended consequences are what happens when politicians don't quite understand the nuances of an industry and focus more on regulation-in-principle and showboating rather than actually effective rules.
No, unintended consequences happen in every change one implements. Show me a law of any consequence, and I'll show you some unintended consequences it brought on.
I think you are confusing unintended with unforeseen, which is a different matter. But I don't think these were unforeseen; some are problems with the implementation, which will be corrected by the companies responsible, and others are just inevitable (if you give people access to something, by definition it makes it easier for a third-party to abuse that access).
These problems were not unforeseen. The politicians were warned over and over again and you can find articles about it going back years.
The implementation problems are with the law, not the companies. There are ways to enforce data protection and privacy without such complex and nebulous laws that aren't even effective against the worst offenders.
I've seen many critics of the GDPR say something to this effect; I've yet to see any make it concrete. Without wishing to put you on the spot, what leads you to that conclusion?
Purely anecdotal, but I've had many debates over the last year on HN with people who claim that GDPR is just protectionism - rather than being a sincere effort to improve human rights online, they argue that it's just a sour-grapes effort to cripple American tech companies.
Pretty funny argument given that the big US companies are benefiting the most from it according to the article.
“The consequence was that just hours after the law’s enforcement, numerous independent ad exchanges and other vendors watched their ad demand volumes drop between 20 and 40 percent. But with agencies free to still buy demand on Google’s marketplace, demand on AdX spiked. The fact that Google’s compliance strategy has ended up hurting its competitors and redirecting higher demand back to its own marketplace, where it can guarantee it has user consent, has unsettled publishers and ad tech vendors.” (Digiday)
Having the intention to harm US companies and actually benefiting them in the implementation are not mutually exclusive.
Political motivations will be complex for anything with as wide an impact, and as complex, as GDPR.
The perception that the US tech companies would be affected the most was certainly factored in during a political process involving thousands of people. What's debatable is how small or great an impact this had, not whether there was any.
> Pretty funny argument given that the big US companies are benefiting the most from it according to the article.
That would be the "unintended" part.
Only the ginormous companies can spend thousands of human hours on compliance while their smaller competitors either leave the market or get steamrolled due to the compliance costs. All this has happened before, and all this will happen again...
> Most of the examples in the text relate to companies failing to properly implement the GDPR (... companies deleting articles even when there would be sufficient public interest, ...)
Some of those examples were deceptively reported. For example, the doctor who asked The Guardian to take down articles about her suspension: she had successfully appealed that case, and a judge overturned her suspension and ordered the record expunged. Her name was dragged through the mud on bad information. This is exactly what the right to be forgotten is meant for.
>> That opening paragraph already speaks to the over-elevation of the market over any other concerns, so it fits perfectly on "news.ycombinator.com". Human rights, including privacy and data rights, are more important than the profits of some companies.
What do you mean? I use news.ycombinator.com every day and I consider human rights, and consumer rights such as privacy, to be extremely important. In fact, I use news.ycombinator.com because there is a very strong current in support of such principles by the users here.
There is also a strong streak of free-market capitalism and technology-first, you-can't-stop-progress techno-optimism, but that is the point. This site offers opportunities for debate.
You're assuming too much if you're extrapolating from a few comments you disagreed with to the entire userbase of this site. HackerNews is not an echo chamber. Not yet, anyway.
Not really. The GDPR stipulates a right to be forgotten [1] with the following exceptions:
> 3. Paragraphs 1 and 2 shall not apply to the extent that processing is necessary:
> (a) for exercising the right of freedom of expression and information;
> (b) for compliance with a legal obligation which requires processing by Union or Member State law to which the controller is subject or for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller;
> (c) for reasons of public interest in the area of public health in accordance with points (h) and (i) of Article 9(2) as well as Article 9(3);
> (d) for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) in so far as the right referred to in paragraph 1 is likely to render impossible or seriously impair the achievement of the objectives of that processing; or
> (e) for the establishment, exercise or defence of legal claims.
If the fundamental thesis is that "the market fails" or that "corporations are irresponsible", then at the very least it should be predictable that some of these regulations will have real negative consequences (and I think we can agree that some of those expressed in this article are real). The goal of this realization is not necessarily to disparage the GDPR, but hopefully to learn and perhaps put a more precise or better iteration in place that avoids these pitfalls. For example:
> Spotify not asking for 2FA/email confirmation for the bulk download
I'm not extremely familiar with the letter of the law, but if it doesn't specify that you need 2FA/email, and there are clear fines/downsides to not complying, I do not see how this is not a predictable issue. The incentive is to comply, since you've already put in the work to make it possible, and there are onerous punishments for not doing so. In other words, a false negative (disallowing the download) can be perceived as much worse than a false positive (allowing the download). This seems built-in: the goal of the law was to have teeth, to ensure the user can get this data. If we just default to "it's the companies' fault for not applying an additional layer of thought to all this", then whether it's true or not (and I agree it is!), it does not realistically solve the problem: establishing blame doesn't provide a path to making this less likely in the future as long as the equation is still heavily weighted towards disincentivizing false negatives. This is another way of saying: if we want to characterize corporations as lazy/malicious/what-have-you, then we can't be Pikachu-surprised-face when they act that way under the letter of the law like some monkey's paw scenario; we should instead say "aw shucks, fool me once" and try to come up with something better.
> companies deleting articles even when there would be sufficient public interest
Similarly, we should try to predict that it is entirely plausible that these rights will be abused, or leveraged, by those they provide an obvious benefit for, even maliciously. Here again it is interesting that we begin with the thesis that "we need these laws because companies have proven not sufficiently responsible on their own" and yet then immediately make a law that is vague and thus defers major parts of the decision to these same companies. Many times this ultimately comes down to litigation where the boundaries of laws are worked out, and this is a very reasonable response: it's only been a year, and the courts will hopefully work out when these rules are misapplied. However, it is on us to make sure we litigate "too much" right to be forgotten (as in the cases here) and not just cases where companies refuse to forget. If not, the courts will send a clear message that it is perfectly fine to blindly abide by every request as the path of least resistance. Again: the premise is that they don't care, and we still haven't figured out how to legislate caring.
> I'm not extremely familiar with the letter of the law, but if it doesn't specify that you need 2FA/email
The Regulation doesn't mandate specific technical implementations anywhere; it leaves that to the industry, which is the expert in that regard. But on the subject of the Right of Access it does explicitly say:
"The controller should use all reasonable measures to verify the identity of a data subject who requests access, in particular in the context of online services and online identifiers"
It seems to me that not using 2FA/email means they haven't used all reasonable measures to verify the identity.
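As a concrete illustration, here is a minimal sketch of what such an email-confirmation step gating a bulk export could look like. The token scheme, names, and TTL are hypothetical choices of mine, not anything the Regulation prescribes; it is just one "reasonable measure" among many.

```python
# Hypothetical sketch of an email-confirmation step gating a bulk data
# export; the HMAC token scheme is illustrative, not mandated by the GDPR.
import hashlib
import hmac
import secrets
import time

SECRET_KEY = secrets.token_bytes(32)  # server-side secret
TOKEN_TTL = 3600                      # confirmation link valid for one hour

def make_export_token(user_id, issued_at=None):
    """Build the token embedded in the confirmation email link."""
    issued_at = int(time.time()) if issued_at is None else issued_at
    msg = f"{user_id}:{issued_at}".encode()
    sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return f"{user_id}:{issued_at}:{sig}"

def verify_export_token(token, now=None):
    """Start the export only if the emailed token is intact and fresh."""
    now = int(time.time()) if now is None else now
    try:
        user_id, issued_at, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    msg = f"{user_id}:{issued_at}".encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and now - int(issued_at) < TOKEN_TTL

token = make_export_token("user-42")
assert verify_export_token(token)             # untampered, fresh link
assert not verify_export_token(token + "ff")  # tampered signature rejected
```

A flow like this costs little to build, which is part of why "we weren't told to use 2FA" rings hollow as an excuse.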
This ideological purity doesn't work well in the real world, and it's not about profits either.
People don't care about privacy as much as you imagine them to, especially if they have to give up everything they get for ads today. One look at what people willingly share to the world on social media shows that.
But powerful monopolies are a problem and market competition is the correct answer to that power. Regulation isn't a magic cure and should be used to place guardrails on the market, but in this case could've been written far better to provide data protection without entrenching the major players even further.
> Every single company on that list deserved to die.
Hi, I'm Brent Ozar, the cofounder of the first company in the list. (Ah, the joys of alphabetical sorting.)
I've written a big long post[1] about why we stopped selling to the EU, but here's the short story: the EU only represented 5% of our revenue, and for that small of revenue, I wasn't prepared to risk the GDPR's fines if any one of the third party tools we use had a problem.
During our GDPR prep with our attorneys, it was completely clear that the third party app ecosystem was in no way ready for GDPR enforcement actions. For example, we use WordPress and WooCommerce to sell online training classes. I'm a database administrator, and I know dang well that WP and WC aren't encrypting student data at rest, nor do they encrypt the other fields where people put student data - let alone how some of the plugins handle student data by storing it in the posts table, which was never designed to handle that kind of thing. If I had to face EU officials, I could never say with a straight face, "Oh yes, I was completely confident in WordPress's abilities to keep customer data secure."
I have confidence that someday, apps like WP and WC will have a better GDPR compliance story that doesn't just meet the bare letter of the law, but also the spirit. When they do, I'll be all about selling to the EU.
I'm doing the preparations that I can - for example, we've got a Privacy Policy that lays out our interactions with other partners, and lets EU folks request their data & delete it.
However, this is just the life of a small bootstrapped business: sometimes, you gotta make choices to focus on your best customers. 5% of my customers were threatening me with regulatory action that might result in huge fines if I let a ball drop. Unfortunately, I only have so many hours in the day. If I have the choice between doing regulatory paperwork for 5% of my customers, versus adding more value for 95% of my customers, I gotta make the obvious choice.
> If I had to face EU officials, I could never say with a straight face, "Oh yes, I was completely confident in WordPress's abilities to keep customer data secure."
Should this business really continue handling user data if it can't guarantee it will be secure down the line?
Well, he himself said as an explanation that he couldn't audit the user data he received to ensure deletion or to ensure that it didn't fall into the wrong hands.
He stated that people contact the company via many different methods: email, twitter, Instagram, etc... GDPR mandates that when the user demands it the company must delete all associated records. This small company doesn't have the resources (or doesn't want to waste time) to go through all emails, all twitter exchanges, etc... and expunge them all every time someone demands it.
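To make the mechanics concrete, here is a toy sketch of what an erasure workflow spanning several data stores could look like. The store names and records are invented for illustration; a real system would also have to reach third-party channels like Twitter or Instagram, which is exactly the part a small company struggles with.

```python
# Hypothetical sketch of a GDPR erasure request fanned out across several
# data stores; the stores and records here are invented for illustration.
from datetime import datetime, timezone

class ErasureRegistry:
    def __init__(self):
        # each "store" is a callable that deletes a user's records and
        # returns how many it removed
        self.stores = {}
        self.log = []

    def register(self, name, delete_fn):
        self.stores[name] = delete_fn

    def erase(self, user_id):
        """Run the deletion in every registered store and keep an audit trail."""
        results = {}
        for name, delete_fn in self.stores.items():
            results[name] = delete_fn(user_id)
        self.log.append((user_id, datetime.now(timezone.utc), results))
        return results

# toy in-memory stores standing in for email archives, a CRM, etc.
emails = {"u1": ["msg-a", "msg-b"], "u2": ["msg-c"]}
crm = {"u1": {"name": "Alice"}}

reg = ErasureRegistry()
reg.register("emails", lambda uid: len(emails.pop(uid, [])))
reg.register("crm", lambda uid: 1 if crm.pop(uid, None) else 0)

print(reg.erase("u1"))  # -> {'emails': 2, 'crm': 1}
```

The hard part isn't this loop; it's knowing every place the data lives, which is where loosely organized channels like social media DMs break the model.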
As far as WordPress plugins go - I get that too. The place where I work has 100s of 3rd party packages. To go through them all would require Y2K level of effort to make sure they comply and/or upgrade ones that don't.
So I am not at all surprised that Brent Ozar didn't think the EU was worth the effort.
This looks like an overzealous interpretation of the law more than anything else. Looking at both the founder's comment and yours, I can only +1 hannasanarion's comment.
And because someone will probably ask "why", here is why:
1) The GDPR was not designed to drown small businesses into expensive processes forcing them into bankruptcy or into cancelling their expansion in Europe. It was designed as to force business owners into thinking twice when they plan on getting rich by exploiting and reselling customer data to third-parties, or by performing "smart" operations on this data (i.e. any company that sticks the words "AI" or "smart" or "neural" or "deep" close to "customer data" in its business model).
2) If a user agreement (or privacy policy) specifies that data requests should only be carried out through medium X (e.g. an email address), then that shuts down all discussion surrounding the "people contact the company via many different methods: email, twitter, Instagram, etc..." argument.
3) "I'm a database administrator, and I know dang well that WP and WC aren't encrypting student data at rest, nor do they encrypt the other fields where people put student data." So what? GDPR nowhere says "student data should be encrypted at rest". It says it should be protected from unauthorized access, but not that it should be encrypted. Encryption is one way to respond to this requirement, and 9 times out of 10, it will be implemented with security flaws much worse than simply enforcing access control to the data. By trying to address a problem that does not exist with a solution that is inadequate, this business owner basically failed at his primary mission: managing risk.
Two arguments raised, two arguments completely wrong. Hence the justified conclusion: I am glad that these businesses shut down or stopped playing with EU citizens' data following the enactment of the GDPR.
> GDPR nowhere says "student data should be encrypted at rest". It says it should be protected from unauthorized access,
The reality is that you can't protect unencrypted data from unauthorized access. You can try, but you can't guarantee it, not when you have hosting partners, for example. Encryption is just one completely reasonable defense mechanism that needs to be part of a larger strategy. I wasn't comfortable defending the company without personally identifiable data being encrypted. You might be. I'm not.
> this business owner basically failed at his primary mission: managing risk.
To the contrary, I succeeded. I eliminated the risk at the cost of 5% of my revenue. I sleep great at night not worrying about the GDPR.
Each item on the list deserves a little blurb on why GDPR "forced" its removal from the EU market.
Examples:
Unroll.me's entire business model was made illegal in the EU.
Hitman: Absolution faced problems in taking ownership of its EU servers.
The two games Loadout and Super Monday Night Combat both claimed not having the resources necessary to comply with GDPR. For perspective Loadout had a peak of 208 concurrent players in 2018[0] while SMNC had a peak of 40 players in 2018[1].
Exactly. Unroll.me continues to operate outside the EU and continues to sell access to people's emails. "We don't charge you for this amazing service" they claim, at no point making it clear what their business model is. They prey on the ignorance of consumers, exactly what regulations (like GDPR) should be protecting consumers against.
I suppose a similar article could be published about the pharmaceutical industry, crying about the cost and consequences of FDA regulation. Doesn't mean we should scrap FDA regulation and "let consumers decide" whether drugs are safe or not.
Drawbridge: cross device matching (ie fingerprinting) service
unroll.me: selling your inbox contents, but "anonymized"
FamilyTreeDNA: proudly letting law enforcement and probably tons of others search your (and your relatives!) DNA, without -- for your convenience -- asking your, or your relatives', consent
Klout: kinda scummy, and also not phased out because of GDPR
>Lithium CEO Pete Hess reported that Klout is a “a standalone consumer-facing service” that no longer fits the focus of delivering customer service solutions. In addition, “recent discussions on data privacy and GDPR are further expediting our plans to phase out the Klout service, giving us a chance to lead on some of the issues that are of critical importance to our customers: data privacy, consumer choice and compliance.” [1]
We're supposed to mourn these companies? We shouldn't trust an author or site whose best choices of companies to mourn are (1) at least in some cases, not gone because of GDPR, and (2) mostly companies we're better off without.
Color me surprised, scammy business are losing millions and exiting the EU. I'm totally happy about the outcome.
Though there is still a lot of abuse and many dark patterns going on, I believe most sites should make it as easy to "opt all in" as to "opt all out" of cookies, for instance.
The problem with any regulation is that it increases the startup costs for smaller businesses.
So as more regulation comes in, it will just end up cementing the large players in place, as they can absorb the costs of any regulation, while smaller businesses will face higher startup costs (which, let's face it, were previously next to nothing).
So while you may be rejoicing now that shitty companies are gone, regulation will just make it harder for these massive companies to be toppled, as it makes it harder for smaller companies to comply.
The EU is trying to push Article 13 through, and any site that has user-generated content will have to have some sort of upload filter to check for copyrighted content. That is going to cost money to implement, and since YouTube hasn't really been able to achieve it, the only people supplying the software will be the likes of Google, Microsoft, etc. So again it will just make things harder for the small business and help the large businesses.
Also, a lot of these regulations are making the web a shittier place. Every time I go onto a site now, I have a stupid cookie and GDPR notice plastered in front of what I want to look at. I already protect myself and don't care about their attempts to track me. It is just an irritation that nobody pays attention to, and it achieves the opposite of what it was intended to achieve.
We need regulations because people will go as far as they can to make more money.
Businesses were upset when their country banned child labor while their competitors' country didn't; same when weekends, vacations and reasonable work weeks were introduced. What about safety requirements, food quality inspections, &c.
Self-regulating markets are a myth; just look at the US insurance and health industries if you want proof.
That's also why in healthy countries you get a lot of free passes when you start a business: lower tax rate for a few years, 0% loans, advisors paid by the state, &c.
> regulation will just make it harder for these massive companies to be toppled as it makes it harder for smaller companies to comply.
Why did no one topple apple, amazon or google in the last 25 years? If anything the lack of regulations when they started allowed them to become the de facto monopolies we all know today.
Some of those companies aren't even 25 years old. They didn't get toppled because they were the young upstarts growing into incumbents.
The problem isn't supporting privacy and data rights, it's doing so in a way that creates unintended consequences which actually worsen the market and UX for consumers. There are better ways this regulation could've been written, but it wasn't. That's the issue.
I'm of the opinion that privacy regulation is a good idea, but it's trivially true that it's an additional burden for start-ups. The "Is it worth it?" question is a legitimate one.
And now those who cannot/do not know how to protect themselves will be unable to start a business on the internet in the EU. Do you think these two groups have to be mutually exclusive?
Without the people who start business type X, we won't have competition in business type X. Therefore a law that makes it hard to start businesses of type X will affect you whether or not you ever intend to start such businesses.
This applies for any X that you care to name, including "internet".
If you believe that you can both pass regulations that make businesses of type X harder to form, and enjoy the benefits of having new businesses of type X around, then there is probably a big flaw in your thinking.
In case of GDPR, X is "businesses abusing people's data", which essentially boils down to "adtech". We don't need more competition in adtech. We need adtech to die.
No, X is "businesses that handle people's data". For whatever reason.
The goal is to regulate adtech. But the effect is to impose regulatory costs on every company that wants to have a discussion forum on their website. (And the upcoming copyright bill is even worse.)
In the case where X is what you describe, then fine. If they can't start their company and simultaneously treat my private data with respect and care, then I don't care for them to exist.
The cost of business going up isn't necessarily a bad thing, if we're getting something valuable in return (IMO we are). The question is whether or not the increased cost is prohibitive, and you have not provided any evidence to suggest that's the case.
> The question is whether or not the increased cost is prohibitive, and you have not provided any evidence to suggest that's the case.
The thousands of companies that just block EU citizens rather than comply seems to suggest that they feel the cost is prohibitive.
As for more direct hard evidence I believe this would fall into the "unseen" category in Bastiat's That Which is Seen, and That Which is Not Seen and is, in effect, calling on someone to prove a negative.
> The thousands of companies that just block EU citizens rather than comply seems to suggest that they feel the cost is prohibitive.
They block EU because they deem compliance not worth the effort (now), usually because they get more than enough from their US markets. This doesn't mean the costs are prohibitive. Thousands more companies didn't block EU citizens. Some companies (notably news sites) even started to offer a superior product to EU citizens (e.g. plaintext news).
Also, even with those blocking EU or shutting down, nothing of importance is lost. These companies have competitors that are less abusive, who do fine.
Handling other people's personal data is a serious responsibility. GDPR imposes regulatory costs, in the same way that health and safety or environmental protection legislation imposes regulatory costs. It's not creating any new costs, it's just properly pricing an externality.
Adtech ought to die. Ideally, I would want to pay for Google and Facebook the same way I pay for Netflix and Spotify. In exchange, I would want them to treat the data about what I do online with the same respect with which my doctor treats my medical history.
The model where Google provides a service and users pay for it is more efficient and more societally healthy than the model where Google provides a free service, a million companies pay to place ads on it, and pass the cost of their AdWords budget onto users who get a 'free' service.
It is a model where consumers get better products, and where millions of creative minds aren't wasted making web pages uglier (or ruining cities with billboards, for that matter). It is a model where competition is also a little easier, because an alternative search engine can undercut Google's prices and carve itself a starting market niche, even if their service is not quite as good as the established competitor; instead of the current model where first you need to be better than Google in every way, and then you have to fight the network effect.
I have no clue how to get to world to switch to this model. It will require that elusive white whale, an online payment mechanic that is truly as frictionless as cash. And it will almost certainly require legislation rather than mere market pressure, because people can see their monthly Google bill but cannot see the vast costs of the marketing industry which they pay for every day.
>The model where Google provides a service and users pay for it is more efficient and more societally healthy than the model where Google provides a free service, a million companies pay to place ads on it, and pass the cost of their AdWords budget onto users who get a 'free' service.
That's cool and all, but people can't pay for it. These fees would add up quickly and you'd basically never go beyond your few webpages that you're paying for, because everything else costs money.
I probably would never have cared about the internet or anything related to computers, if websites had required people to pay. That would not have been an option for me or most people I knew growing up.
Everyone who cares strongly about this issue (not nearly as big a cohort as hn thinks) is against targeted ads. If they ever get their way and laws really end Google/Facebook's business model as GDPR intends, the much larger cohort of people who care more about not paying for services will start caring.
I've started 2 startups in the UK since GDPR (well, 1 that happens to sell 2 different products), and it hasn't really affected me one little bit.
But then again, they're not scummy companies.
Soooooo, bullshit.
I had to put in a few hours' thought about what data I was collecting and how long it was appropriate to keep it.
I happen to know quite a lot about GDPR because I dealt with it at a client I was previously working with, if you want to make it extremely complicated, you can. But you don't have to.
In one we actually track users' behaviour to make better recommendations, but we're open about it and they can disable it if they want. We also delete that data if they delete their account.
It's just a different mindset, it's their data, not yours. You're open about what you're doing and if they want you to delete it, you delete it.
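That mindset translates into surprisingly little code. Here's a minimal sketch (all names hypothetical, in-memory stand-ins for real storage) of the "their data, not yours" approach: behaviour events are only recorded while the user hasn't disabled tracking, and deleting the account cascades to the tracking data:

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    tracking_enabled: bool = True  # the user can turn this off at any time

class RecommendationStore:
    """Toy store of per-user behaviour events used for recommendations."""

    def __init__(self):
        self.users: dict[str, User] = {}
        self.events: dict[str, list[str]] = {}

    def record_event(self, user: User, event: str) -> bool:
        # Respect the user's choice: record nothing if tracking is disabled.
        if not user.tracking_enabled:
            return False
        self.events.setdefault(user.user_id, []).append(event)
        return True

    def delete_account(self, user_id: str) -> None:
        # Deleting the account also deletes the behaviour data tied to it.
        self.users.pop(user_id, None)
        self.events.pop(user_id, None)
```

In a real system the same two rules (a consent check on every write, a cascade on every delete) just live in front of the database instead of a dict.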
There are no costs because no one is enforcing it.
> In one we actually track user's behaviour to make better recommendations, but we're open about it and they can disable it if they want.
If I understand correctly, this is opt-out instead of opt-in... If you were slapped with a fine of some percentage of your revenue for this, you would feel the costs. Not only the cost of the fine, but also of reading and implementing GDPR more carefully. But data protection authorities don't have enough resources to audit even 1 in 100,000 of the companies that ignore GDPR down to this level of detail. So you can live in happy ignorance, believing you are implementing GDPR.
That's not to say that GDPR doesn't help in general. The issue is that it will be a dead law, or a law that hits randomly some very, very small percentage of the companies breaking it.
Having a law that no one implements properly is just a recipe for abuse of power by authorities. "Show me the man and I'll show you the crime" is well known to people who lived under Soviet rule. (And, no! The EU is not the Soviet Union. But some DPAs are in post-Soviet republics, with people who were raised in this mentality.)
"I happen to know quite a lot about GDPR because I dealt with it at a client I was previously working with,"
There we go. You had already done the time investment at someone else's expense.
So thanks for proving my point.
My comments weren't about GDPR but about regulation in general. Any regulation requires more work which makes it difficult for smaller players. You had to do the extra work.
Should we ban food inspections too, since that means smaller players have to do more work? How about automobile safety testing, it's such a hassle for auto makers. Why not get rid of building codes and prohibitions on lead in children's toys while we're at it.
I imagine the anti-GDPR-folks might argue that overly onerous restrictions have been harmful to smaller players. Temperature requirements effectively made Peking duck illegal in California, until a lawmaker representing the Chinatown area proposed a law specifically exempting it: https://www.sgvtribune.com/2015/08/22/peking-duck-is-so-impo...
Because that knowledge is worth thousands to tens of thousands of euros in lawyer time. And you're still not guaranteed to get it right or be covered.
Your example is like saying that everyone that wants any kind of job should know multi variable calculus. When people protest that that's putting too much of a burden on people, you bring up that you got a job just fine, because you learned multi variable calculus in school.
Their example is like saying that if you want to open a restaurant, you'd better take the two-day course on food safety. Equating GDPR compliance with multivariate calculus is just a gross exaggeration. Yes, there are risks; you get those with every venture you start. You're pretty well covered by the technical due diligence we as a sector should have put upon ourselves in the first place, and you can externalise the rest easily, just like people do with many other regulations like taxes/finances.
We should really separate the protection of scummy business models and down to earth stuff like data takeout / account deletion and transparency as to what companies do with user data. The latter is neither rocket science, nor should it be particularly hard for any startup that's over the "my company is a fancy slide deck" stage.
But it's not just that. Read the rest of the thread how much time and effort people had to spend at various companies for compliance. It's not just about data takeout and account deletion.
So the regulation causes problems for people that haven't done anything wrong.
And let's be clear here: people aren't dying; it's mostly ads and shitty data collection. I think it might be better to actually educate the public (which governments are doing) about some of the pitfalls of the internet, rather than regulating the crap out of it.
While this is true, it's exactly that which turned the world (and by extension the world wide web) into a fucking dystopia. Brexit would not have been possible without the whole concept of targeted ads and the data collection that goes with them.
Yep, I think ad tech is utterly and totally evil. And all that to make a buck, or a billion.
I, for one, think that's a disastrously high price to pay for a few successful tech companies.
> People aren't dying,
Actually I disagree here. When you look at the consequences of the technology in countries like Myanmar, The Philippines, Brazil, Cambodia and others and the likes of Mr. Zuckerberg and his ilk giving exactly zero fucks (unless it becomes bad PR) I'm afraid you're definitely wrong on that one.
> Brexit, without the whole concept of targeted ads and the data collection that goes with it would have not been possible.
However, nobody mentioned all the people who didn't bother voting because they were at Glastonbury, which was on at the same time.
I very much doubt that is true. The UK has been a bad fit in the EU and there has been a sentiment for years that we don't want any EU interference. For example many don't want "The EU monopoly money" (not my words mind you) and generally the public is Euro-sceptic.
The papers and politicians were trying to find a scapegoat because, quite frankly, it didn't go the way they wanted. Much like the claims after Trump's victory that Russia hacked the election (there were only a few thousand ads placed on Facebook, which paled in comparison to the Democrats' budget).
Many of the people who voted out were of older generations that don't pay attention to tech. So I find it dubious how much influence the likes of Cambridge Analytica really had.
> Actually I disagree here. When you look at the consequences of the technology in countries like Myanmar, The Philippines, Brazil, Cambodia and others and the likes of Mr. Zuckerberg and his ilk giving exactly zero fucks (unless it becomes bad PR) I'm afraid you're definitely wrong on that one.
Like exactly what? You haven't qualified anything here. You just claimed I am wrong because of what? What adverts, what is happening? This is a very vague claim.
I suspect much like the vote to leave the UK it will be very spurious evidence.
> Like exactly what? You haven't qualified anything here. You just claimed I am wrong because of what? What adverts, what is happening? This is a very vague claim.
Vague claim? Not at all.
I was asking myself if I should actually bother to even answer, but then decided to invest a couple of minutes into some very basic DDG searches. You can find some results below.
Let me assure you that there's a ton more, if you just bother to open your eyes.
I close my argument here, since anything else would be either counter productive or violate site guidelines.
But please don't accuse me of spouting vague claims or not qualifying my arguments just because you seem more interested in a timely Uber or a cheap stay, and fuck the consequences.
If your business case depends on either abusing or being careless with other people's personal data, how are you not a scummy business? That's basically all the GDPR requires of you: don't abuse people's personal data and be careful with it. Both seem like common decency to me.
if you were _already_ complying before GDPR existed (because your business model isn't scummy), then GDPR compliance _should_ cost very little, if at all.
If you weren't complying at all, then adding compliance is very costly after the fact. If you cannot make your business work without complying, then the business must die, as there's no natural right for a business to exist.
> if you were _already_ complying before GDPR existed (because your business model isn't scummy), then GDPR compliance _should_ cost very little, if at all.
But unfortunately, that isn't really how it works. Under GDPR you could still find your privacy policy now isn't written in the correct terms, or your previous consents or notices weren't worded properly and might not stand up any more, or your methods of storing data don't make per-person permanent deletion straightforward. And all of this remains true even if you were compliant with all previous data protection legislation (at least here in the UK) and even if you weren't doing anything sketchy with the data and have no plans to do so in future either.
If nothing else, you probably need non-trivial amounts of management time to understand the new rules, some extra legal advice that you're going to have to pay for, and an update of your key documents to make sure everything uses appropriate structures and wording to comply. That alone could already be a significant cost for a small, bootstrapped business, and that's without changing anything about the actual data you're collecting or how you use it.
Businesses that don’t have security issues when handling private data, obviously.
I agree with the GP in that ease of starting companies should not be the primary goal, putting security and privacy in the back seat. It shouldn't be harder than it needs to be, but not easier at any cost.
No-one seems to have suggested that it’s the one thing we should optimise for, but it is important. Small businesses are the foundation of economies, and every extra overhead ultimately damages those economies and so needs some justification that is of greater value, financial or otherwise. One year on, it’s still not clear to me that GDPR has achieved that greater good, and I write that as someone who is a very strong believer in stronger privacy laws in principle.
Because if you do poorly on the small business front, then they can't grow into bigger businesses. How many EU tech companies do you know of compared to American ones?
Many, but I'm european so it probably doesn't count (;
If you make it harder for companies in order to protect people, it's still a win. I recently visited SF, "the center of innovation" for the startup world. I saw 2 people defecating on the street in 2 weeks, countless people peeing, and at some points had to step over homeless people to walk down the street. If that's the cost of startups and innovation, please don't bring it to the EU.
I'm European too and I really wish people from Europe didn't have an attitude like yours. Some parts of Europe are incredibly poor, but of course we have a much smaller homeless problem, because if you're truly without shelter then you simply die in winter.
I'm from Spain and there are homeless people in Spain, just a lot less than in the US. But it's not because they die in winter, it is because you don't become automatically poor if you lose your job, or if you need an operation, or if you study at university. It's safety nets that avoid people losing everything and becoming homeless.
This all is a reflection of the old debates about the costs of doing technical things right. One way of looking at them is that if something works for the business, we should not pursue better software architecture or improved security or usability. Another way is to analyze and estimate the technical debt and eventually start paying it.

This is exactly what is happening with privacy now: businesses may cry about "removed incentives", "prohibitive costs", "eliminated opportunities" and other BS, but in the end it's just a compliance debt that they are not willing to pay. GDPR identified that debt and the mechanisms for claiming it, that's it. After the dust settles, there will be plenty of best practices and educated people, which will make compliance easy and certain business models unpractical, and business will go on as usual.

Yes, compliance isn't a piece of cake, but there's nothing written in that law which a sane engineer or manager would not implement. Even the right to be forgotten makes sense: information about past crimes distributed via search is a kind of extrajudicial punishment which makes it much harder for people who have already served their sentence to find a job and return to normal life. It's the job of a government to prevent them from committing another crime; it's not the job of a search engine or a news website.
I don't think the law is ambiguous. Usually it's a situation where GDPR has already been violated, or is about to be violated, and the data processor wants to find the least expensive solution to reduce the risks. In other words, it's not "How should we do it?", but rather "How difficult will it be to challenge our solution X in court? What are our chances of winning?" THIS is ambiguous, but it's the same with any regulation.
It is all about risk, ambiguity and individual circumstances. I don't think that is bad, but there is no clear record of what it even is that we are meant to protect.
If you're in the business of "doing free services so you can skim GBs of data from users" or you "sell wholesale data collected without notice", the EU doesn't want you.
If you're doing a good job of keeping user data private, except at the direct request of a user with plain-language, direct permission, then you're doing a good job by the GDPR. Slip-ups happen, and as long as you do your best to stop the bad thing, limit the breach, notify users, and be a good steward for their data, then it's all good.
As a US citizen, I try to make a point of only working with companies that adhere to the GDPR. I know they don't have to do so with me. But it tells me their internal processes are set up to respect the user's rights. And well, running dual systems for different compliance regimes is a tough sell; it's easier to run 1 big system.
> as long as you do your best to stop the bad thing, limit the breach, notify users, and be a good steward for their data, then it's all good
If that regulator happens to like you. There is no schedule of offenses and penalties and due process, only an absurdly high maximum for selective enforcement.
And there are a lot of regulators. Some of them a lot more combative than others. That is my main reason for dislike for the regulations.
Overall I support the regulations, but I really wish the penalties had more documented structure than “We will fine you anywhere from 0 to an 8 digit number (in our case) depending on what we think is right”.
There is due process. If you think a regulator's decision was illegal, you can escalate to the courts. Some member states may not have the best justice system, but that's what the ECJ is for.
There is no explicit schedule – that could be gamed – but that doesn't mean regulators can act arbitrarily. Punishments have to be proportional to the infraction, similar cases have to be treated similarly... The GDPR just does not spell out how public authorities work.
It actually does say that punishments have to be proportional IIRC. I'm not sure if that actually makes a legal difference or if it was included to make the GDPR easier to understand.
And you pay for the lawsuit out of your own pocket. Now you need to run a business and fight a very expensive legal battle against the government. That same government that regulates your business.
I don't see why this changes anything. Lawyers still cost a lot of money. They might not seem like they cost a lot of money to Americans, but that's because Americans earn a lot more money.
>So what? If you have a grievance with an entity, that's the entity you have to fight a lawsuit against.
One of the grievances people have against GDPR is that they don't like how GDPR's enforcement depends so much on the individual person at DPAs. You'll still have to deal with the person afterwards that you sued.
Yes. Each party paying their own fees is a uniquely American thing.
> I don't see why this changes anything. Lawyers still cost a lot of money.
Prohibitively high lawyer fees are a uniquely American thing. The ECHR guarantees practical and effective access to the courts.
> One of the grievances people have against GDPR is that they don't like how GDPR's enforcement depends so much on the individual person at DPAs. You'll still have to deal with the person afterwards that you sued.
That Americans have against the GDPR. Given that the people who actually have experience with European authorities and law don't see these issues, it's very likely they don't exist.
You don't necessarily have to deal with the same person. And even if a DPA always assigns the same person to you, there is no oversight, and that person is petty and cares more about harming you than about their job: we still have the rule of law and a functioning court system. And I can't help but find these continuing insinuations that we don't pretty insulting.
Precisely this. The cost and complexity of complying with GDPR is directly proportional to the scale and complexity of your data processing operations. If you comply with the principles of the legislation - collect the minimum possible amount of data, store it for the minimum possible time and process it only in ways that are essential - then compliance is very straightforward. Things only become ambiguous when you're trying to do something that the GDPR doesn't want you to do.
What's written on those web pages is clear enough for me, and it's the same as my own understanding of personal data. It is rather abstract, and I can admit that it may not be easy for others to understand without some good examples. But it's a complicated topic in general, one that has to be studied beyond reading a single article or the text of the EU law.
What is your opinion based on? Have you both read the law and attempted to bring an organization into compliance with it?
I have, and it is definitely ambiguous. To take a simple example, consider all of the cookie warnings that you now see. Intelligent and informed people disagree on whether they are required, enforceable, or sufficient.
I have to deal with compliance on a daily basis. The cookie warning is a usually misunderstood idea: obtaining user consent for storing and retrieving information from their device. The law applies to local storage and other similar solutions too, and it is the intention to use this data that has to be explained, unless it serves one of the legitimate purposes for which consent is not required (e.g. session ID cookies and auth tokens). Since it is mandatory, it becomes a UX topic, not a legal one: how exactly to integrate the collection of consent into all possible landing pages of your website, so that the user is informed about it prior to any data processing.
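On the engineering side, one way to keep this manageable is to route every cookie write through a single gate. A minimal sketch (cookie and category names are hypothetical), where strictly necessary cookies such as session IDs bypass consent and everything else requires a recorded opt-in:

```python
# Cookies exempt from consent because they are strictly necessary
# (hypothetical names, as is the category mapping below).
ESSENTIAL = {"session_id", "auth_token", "csrf_token"}

# Which purpose category each non-essential cookie belongs to.
COOKIE_CATEGORIES = {
    "_ga": "analytics",
    "ad_id": "ads",
}

def may_set_cookie(name: str, consent: dict) -> bool:
    """Decide whether a cookie may be written for this visitor.

    `consent` maps a purpose ("analytics", "ads", ...) to the visitor's
    recorded choice; the absence of a record means no consent.
    """
    if name in ESSENTIAL:
        return True  # legitimate purpose: no consent required
    category = COOKIE_CATEGORIES.get(name)
    return bool(category) and consent.get(category, False)
```

With a single choke point like this, the legal question ("is this cookie essential, and did the user consent to its purpose?") is answered in exactly one place rather than scattered across the codebase.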
This law has been in effect in the form of various national laws for over a decade. GDPR is only slightly different from the Swedish national Data Protection Law, for example. So yes, this is entirely tech's own fault and debt. We as an industry have ignored these laws for too long, and now we're crying because the debt is being collected. Boo hoo. Cry me a river.
You have to admire the EU's gumption: forcing the payment of technical debt with GDPR, and forcing us to face the hard reality of copyright law with Article 13 (hopefully leading us to abolish it after realizing how ridiculous it is when seriously upheld).
There's something so naive or earnestly human about them. If the U.S. kept being the only relevant legal force on the Western Internet, we'd mull around in gray areas forever.
On the question of where all the internet platforms come from, it is the US that's relevant. Maybe it's not a coincidence that the EU doesn't produce many internet platforms that are any good?
And you attribute this to regulations, rather than to the ground truth that the EU is a hodge-podge of very different cultures, countries, languages and laws that only recently implemented a shared currency, and is totally unlike the huge, wealthy and comparatively homogeneous market that is the USA?
The scope of what is "personal data" under GDPR is much broader than you are assuming, you are only considering the obvious, simple cases.
It also covers an astonishing amount of industrial sensor data used solely for industrial purposes. Unfortunately, for many high-scale industrial sensor data models the technical infrastructure required for compliance literally does not exist. In some cases we don't even have the computer science required to build the compliance infrastructure. But the vast majority of people would be very upset if the business model of some of these companies became "unpractical" and had to go away because GDPR compliance is effectively impossible. No amount of trying to do the "right thing" will make these industrial companies compliant.
There is gross misconception that GDPR only affects ad tech companies or retail or companies with business models involving people. This is far from the case.
Can you point to something which supports this claim?
In all of my reading it's been personal data, and it definitely wouldn't apply to the things people would usually associate with "industrial sensors", e.g. carbon monoxide levels in a space, or even occupancy data (e.g. for lighting/HVAC control) so long as it simply reflects whether an area within a building is occupied.
What's the specific requirement, and what makes it unattainable?
The position taken by every legal team I've worked with is fairly simple: if a sensor platform allows you to incidentally detect the existence of an unidentified individual at a point in space and time, then that sensor generates "personal data". The reason for this is that it is well-known that it is possible to analytically reconstruct the identity of individuals detectable this way with sufficient data. This is consistent with e.g. how ad tech data is treated under GDPR, so it is typically used as the standard for determining if industrial sensing platform data is "personal".
What people don't immediately grok is (1) just how many industrial sensor systems there are these days operated by diverse organizations -- almost every sensor type on an autonomous car, for example, is also widely used in many other industrial contexts, (2) the scale of sensor coverage in most places people occupy indoors and outdoors, which is far beyond what they typically imagine, and (3) how many of these sensors can be used to incidentally identify the presence of a person at a place and time, sometimes in very non-obvious ways. A single sample from a single sensor may not be identifiable, but multiple samples from multiple sensor modalities often are. And the sensor modalities used for industrial sensor systems are increasing in diversity and resolution very quickly, which makes it even easier.
Humans perturb the environment they move through, and we have enough environmental sensors now that we can often track those perturbations across the sensor modalities to create a fingerprint. People have a difficult time imagining how easy this can be in practice until they've seen it done.
Thank you for this very interesting example! However applying this regulation to industrial sensors then is still the only right thing to do. Technical progress must be constrained by the speed with which society can adapt to it and by all the related concerns: if there’s lack of understanding on how to make the technology compliant or there are complications, it’s just that the cost of the technology appeared to be higher than anticipated. Business has to deal with it, just like in all similar situations - see hardware vulnerabilities in Intel chips for instance.
What you're saying makes sense, and I still agree with the GDPR. For example:
Power is used by a house. The meter runs. You pay the bill. The house has an address and a point of contact.
Power is used by the house. Machine learning is applied to map each individual and how they live in said house. The data is then sold to target things the ML algo picked up. You pay the bill. The house has an address and a point of contact, along with a detailed profile of each human in said domicile.
Same sensors exist, yet one violates the GDPR and the other one does not. Can you guess which one?
This is a very interesting list, especially the part about the GDPR increasing the attack surface and shifting where the data's center of gravity sits.
The part about "compliance cost" should be taken with a grain of salt. If you were compliant before, because you respected the users' privacy, the effort was relatively low.
The study about VC having dropped by 50% in the EU because of the GDPR sounds pretty weird to me. Unless of course there’s selection bias and we’re talking AdTech companies mostly.
An interesting number would be: how many people closed down forums and moved their discussion boards to Facebook?
> If you were compliant before, because you respected the users' privacy, the effort was relatively low
This is not true. Even if you are perfectly compliant, you need a complaint-response mechanism and lawyers in the EU ready to react to invalid accusations.
Given GDPR took a complain-investigate model, one also needs to be ready for power-tripping regulators. (Recall the Romanian data protection authority using GDPR to seize sources from a newspaper investigating corruption allegations [1].) Protecting against that requires, if not active lobbying, keeping lobbying connections warm. That costs money.
Ironically (and predictably), I've seen more data being funnelled to Google than before. They have the scale to deal with this crap in each of the EU's (currently) twenty-eight member states.
Ironically, when GDPR came into effect, so many on HN were spreading fake news that companies would be litigated to death by users. Of course, to remove that possibility and ensure only legitimate claims are pursued, the data regulation authorities act as middleman. Such cases of abuse could just as easily have happened if people could sue directly. For example, nowhere does the GDPR imply that you need to hand over a source - that goes for journalists as well as non-journalists. Companies sanctioned have the right to appeal, and if GDPR hadn't existed, the Romanian authorities would probably have just used e.g. tax law to stifle the RISE project.
> nowhere does the GDPR imply that you need to hand over a source
Complain-investigate compliance regimes tend to result in deference due to the cost of investigations and other informal expenses regulators can rain upon the regulated. (It works in finance because financial firms have the margins to support it. Also, the industry regulators are checked by both the courts and a public regulator, the SEC.)
Complain-investigate is thus a terrible structure for a general business law. Strict liability for data loss or misuse (including the rights to data transcripts and deletion) would have been simpler. (Albeit less profitable for European law firms.)
Long story short, GDPR’s aims and technical costs (e.g. deleting user data from backups) are fine. The problem is the compliance structure. It’s fundamentally incumbent-biased, commercially and politically.
GDPR is just the 1995 Data Protection Directive with teeth. If you were compliant with the DPD, you were almost certainly compliant with GDPR by default. The principles are the same and many parts of the legislation were carried over verbatim. GDPR came as a shock only because many businesses had been flagrantly disregarding the (weakly enforced) DPD for many years.
>Even if you are perfectly compliant, you need a complaint-response mechanism and lawyers in the EU ready to react to invalid accusations.
Did American businesses really think that they were immune from prosecution under EU law prior to GDPR? No European business was under any illusions about the extraterritorial reach of American courts.
> Did American businesses really think that they were immune from prosecution under EU law prior to GDPR?
Prosecutors need to build a case before causing costs for the suspected noncompliant. Complaints, and regulators in complain-investigate regimes, can incur costs with zero evidence. This is why most systems reserve such structures for high-margin, high-risk applications, like banking regulation. Deploying it as a general business law is aggressive.
I don't know about it being low effort to be compliant. We spent most of a year with a significant portion of our software engineering teams devoting time to GDPR even though we are not any kind of data collection company. It's the legal requirements -- we had to audit every last piece of software, make little tweaks if necessary, etc, just to ensure we were demonstrably compliant with the law.
I wouldn't be surprised if 150B is actually a low estimate.
> even though we are not any kind of data collection company
Are you collecting data on your customers? If you are, then one of the things your company does is data collection, even if that's not what's in your business plan.
We store enough identifying data to do business with our customers, we do not collect data for data's sake. Not for metrics, nor for ads, not to sell, etc.
The term "personally identifying information" does not occur anywhere in the text of GDPR; the regulations use the term "personal data", which is defined differently.
I raise this issue in almost every thread about GDPR, because although it might seem pedantic, the error strongly implies that people have not read or understood the legislation. The difference between personal data and personal identifiers is integral to GDPR and the legislation cannot be understood without fully understanding that distinction and the implications that follow from it.
Every company receives data about their customer, usually leaked by the customers themselves. How they handle it and what they choose to store / delete differs wildly.
First, if you cared about user privacy, you would store as few data points as possible.
Second, it’s very likely that you have APIs in place that can request all data for a user anyways. If you don’t know what data you have of your users, you don’t give a shit about their privacy, no?
Third, user requests are usually: a) what data do you store about me? b) export all data. c) delete all my data (for real).
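For what it's worth, those three request types map fairly directly onto a dispatcher over whatever internal services hold personal data. A minimal sketch in Python (the registry, the service names, and the `export`/`delete` methods are all hypothetical, not from any real system):

```python
# Hypothetical sketch of a data-subject-request dispatcher. A registered
# service is any object exposing export(user_id) and delete(user_id);
# a real system would enumerate every store that holds personal data.

SERVICES = {}

def register(name, service):
    SERVICES[name] = service

def handle_request(user_id, kind):
    """kind: 'access'/'export' (what do you have / give it to me)
    or 'erase' (delete it for real)."""
    if kind in ("access", "export"):
        # One document aggregating every service's data on the subject.
        return {name: svc.export(user_id) for name, svc in SERVICES.items()}
    if kind == "erase":
        return {name: svc.delete(user_id) for name, svc in SERVICES.items()}
    raise ValueError(f"unknown request kind: {kind}")
```

The point of the sketch is the shape, not the details: if each team already exposes "everything we hold on user X" and "forget user X", the company-wide request handler is a for-loop, not a year-long project.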
The orchestration of a data extract from even a midsized corporation is a significant endeavour.
Someone in the company knowing what data we have on an entity is a significant step away from the entire company being able to access that, because, you know, we take data privacy seriously, so we don’t make it easy to access all data on a single entity.
If your approach to privacy is putting all the eggs in a basket, allowing easy extraction of everything from that basket, and hoping the basket can be kept secure I’d argue your model is weird to begin with.
This is so simplistic. There are many storage solutions for many different use cases.
Some of these are write once and immutable afterwards.
There are relational structures for transaction history that may also link to customers.
These all have to be re-designed in such a way that information can be removed from the system and exported from the system, while keeping essential information (such as past sales records).
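One pattern worth mentioning for write-once stores is crypto-erasure: encrypt each subject's personal fields under a per-subject key held in a small mutable key store, and honour deletion by destroying the key, leaving the immutable log untouched but unreadable. A toy sketch (the SHA-256 XOR keystream is illustrative only; a real system would use an authenticated cipher such as AES-GCM):

```python
import hashlib
import secrets

class CryptoShredStore:
    """Append-only record log; per-user keys live in a mutable side store.
    Destroying a user's key makes their records unrecoverable."""

    def __init__(self):
        self.keys = {}   # user_id -> key (mutable, deletable)
        self.log = []    # (user_id, nonce, ciphertext) -- never rewritten

    def _keystream(self, key, nonce, n):
        # Toy keystream; NOT real cryptography.
        out = b""
        ctr = 0
        while len(out) < n:
            out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]

    def append(self, user_id, data):
        key = self.keys.setdefault(user_id, secrets.token_bytes(32))
        nonce = secrets.token_bytes(16)
        ks = self._keystream(key, nonce, len(data))
        self.log.append((user_id, nonce, bytes(a ^ b for a, b in zip(data, ks))))

    def read(self, user_id):
        key = self.keys.get(user_id)
        if key is None:
            return None  # key destroyed: ciphertext remains, meaning is gone
        return [
            bytes(a ^ b for a, b in zip(ct, self._keystream(key, nonce, len(ct))))
            for uid, nonce, ct in self.log if uid == user_id
        ]

    def forget(self, user_id):
        self.keys.pop(user_id, None)  # the append-only log is untouched
```

Whether a regulator accepts key destruction as "erasure" is a legal question, not a technical one, but it is a common answer to "our store is immutable".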
> Some of these are write once and immutable afterwards.
Got an example of something like that that'd make it impossible to soft delete a person? I'm struggling to think of any datastore in regular use that's write only.
Yeah, as I thought, it's a blockchain/distributed ledger related technology. Hence why I said "regular use". I doubt large numbers of EU businesses are suddenly having to move data from their core ledger to another datastore because of this.
>The study about VC having dropped by 50% in the EU because of the GDPR sounds pretty weird to me. Unless of course there’s selection bias and we’re talking AdTech companies mostly.
Is it really that surprising to you that when the EU effectively bans one of the most profitable business models, venture capital investment in the EU drops by 50%?
To be fair, it's probably just not GDPR but all of these regulations combined. Venture capital can move across borders, why would you invest in a startup in the EU when you could just do it in the US?
> The study about VC having dropped by 50% in the EU because of the GDPR sounds pretty weird to me. Unless of course there’s selection bias and we’re talking AdTech companies mostly.
Could just be noise in the data?
Could be VCs determining that fewer products are actually worth pursuing if the main monetization model for everything is ads?
One thing my company did (we are US-based, but have an international operation as well) to try and mitigate the volume of compliance work to be done was section off software that would be used in the EU from everything else. Previously we had been working on making all of our software 100% internationally universal, but GDPR made that difficult, going forward we're kinda cutting loose the guys in the UK (there's some irony, I guess) to keep up the code that has to be GDPR compliant while the rest of the company focuses elsewhere.
So... anecdotally, I'm not at all surprised if the increased compliance cost made some people reconsider investments in EU businesses, even if they don't rely on ad revenue as a business model.
This isn't a US VC investment forum. This is a US forum subsidized by a startup accelerator, but otherwise quite generally about tech and geeky stuff, and frequented by lots of people from outside US.
The problem I think is happening with the compliance cost/VC stuff is that it's also tainted by the other internet junk (Article 13) that the EU passed soon after when they started doing this whole internet regulatory push.
My question is, if you're a US startup, and you simply ignore GDPR requests, what happens?
Does Europe have some way to require its ISP's to firewall you off or blackhole your DNS? Can they force Amazon to shut off your AWS account? Do your executives risk being taken away in handcuffs to a European jail when they go to Europe on vacation?
If there are no consequences, why don't US tech companies just completely ignore it? (Of course, big players like Google probably have EU-based datacenters and other assets that could be seized to pay their fines. I'm thinking of small, cloud-hosted startups whose employees, bank accounts and physical assets are all on US soil.)
> If there are no consequences, why don't US tech companies just completely ignore it?
Once you grow big enough the EU will inevitably have leverage over you: Servers rented in the EU to lower latency, payment streams from EU customers, offices in the EU to get talent, subsidiaries created for tax reasons, executives on vacation, employees on conferences, money spent on advertising, etc.
If you are a startup in SV the EU might not have much direct pressure it can apply, but how would an investor react when given the choice of "we could spend some more money now, or we could do nothing and be significantly limited once we grow to a certain size, basically unable to do anything significant in one of the largest economies of the world".
> how would an investor react when given the choice of "we could spend some more money now, or we could do nothing and be significantly limited once we grow to a certain size, basically unable to do anything significant in one of the largest economies of the world"
The simplest solution would be ignore GDPR, dominate the American market (which is easier to scale across than the EU), and then use that momentum to launch a simplified version in Europe. (Or buy a competitor.) The scale advantage will almost always outweigh being prepared for multi-market growth from the beginning.
Which gives ample room for a European competitor that does adhere to GDPR to clean up the EU market. We live in a very globalised world and the EU knows the leverage it has -- just as the US knows its soft power extends well beyond its borders.
> Which gives ample room for a European competitor that does adhere to GDPR to clean up the EU market
Agreed. My point was with respect to an American start-up—compliance with GDPR is of lower priority than scaling. The priority, for both, should be scaling.
Advantage goes to the American start-up, however, in launching from a single market. But one might counter-argue that consumers in e.g. China will prefer to do business with European start-ups over American ones due to GDPR. (No evidence for that. But it’s a valid hypothesis.)
Not to mention that avoiding GDPR, a law that shouldn't have needed to be written in the first place, is like walking around with a big sign saying 'we are evil and not to be trusted'. Because if you are to be trusted, a simple cursory check would simply confirm you are already within the GDPR.
We work in the b2b in the financial sector and part of our contracts in Europe is that all of the data is hosted in infrastructure that complies with the GDPR. That could be Google or Amazon, but not Slack or any SV startup.
Or they'll block payment processors from transferring money from EU customers to you
It's hard to imagine what a startup would be doing that makes them interesting enough for the EU to notice and want to levy fines, yet be completely out of reach.
You could probably get tangled if you accept money from EU citizens. If you don't take money from them (or use cryptocurrency), the EU can't really do anything.
If you're only moving packets, you generally have nothing to fear until the EU develops into an empire, at which point there's a good chance that they will have a mandatory firewall mechanism (some members already do impose firewall rules on ISPs through the courts, AFAIK).
If you have no business in the EU, generally the worst they'll do is censor your website.
If you are a US-based startup and don’t sell to EU customers, then I guess it doesn’t really matter if you attempt to comply.
However, most US SaaS-type startups very much want access to EU markets. Ignoring GDPR won’t matter until it does, and then when it does, it will matter very much. For example: you grow and want to establish a presence in the EU, investors with EU ties may be hesitant to get involved, or a potential acquisition is ruined because the buyer has an EU presence and isn’t willing to take on the historical liability.
Yes, there’s a lot in GDPR. If you’re a startup that is making money by selling user data, the cost of compliance will be quite high. But if you are selling an actual product or service that generates revenue by collecting fees from your users, compliance is probably not as hard as you think. And building your startup with user data protection in mind, you’ll find it can be something you use as a selling point.
With more than a year of history, it’s not hard to find easy-to-digest articles that put GDPR in terms that an average person can understand. Integrate those principles and processes into your business, document what you’re doing, and then stick to it. Even without a huge compliance budget - if you do that and nothing else - you’ll be in a much better position than to just ignore it, even if you don’t fear punishment.
No B2C tech company can avoid doing business involving EU member state or UK citizens. You have to assume you’re in-scope unless you have zero contacts with Europe.
> Does Europe have some way to require its ISP's to firewall you off or blackhole your DNS?
Not in a systematic EU-wide way. Courts sometimes force individual ISPs to blackhole websites used for copyright infringement.
I guess if your company ignores the GDPR, it's treated as an illegal organization. So you may still be able to provide your service in the EU, but people cannot legally pay you, including paying you for ads.
GDPR enforcement is through the corresponding data protection offices of the different countries and fines issued by them (now if you don't pay the fines and completely ignore the offices that might be an issue that's escalated)
Some people are making it sound like the EU Cyber police is going to hack your services or parachute and kick their way into your office in SV because a user in Slovenia didn't get their data portability request on time, which is not what is going to happen.
GDPR set the terms of the debate as model regulation, and is inspiring similar legislation elsewhere. California's CCPA is largely similar to GDPR. Tech companies are lobbying in committee to neuter it, but it's not a foregone conclusion. Thus as a startup you should incorporate it in your architecture and roadmap, even if you do not execute on it right away.
Also your US clients may be subject to GDPR and pass it on to you transitively as they are required to do for subcontractors or IT services vendors.
If a US company wants to do business with another company (partnership, approved vendor, platform marketplace, or even having them as a B2B customer), that other company may require them to be GDPR compliant.
I never understood how anyone thinks being forgotten is a right. Wrong, false, libelous, or a limited set of privacy-related information should be correctable and removable. But facts about you and what you’ve done, no. I’m sorry you made an embarrassing mistake. But it’s not worth losing so much public information and enabling bad actors to save yourself from your own actions.
>I never understood how anyone thinks being forgotten is a right. Wrong, false, libelous, limited set of privacy related information should be correctable, removable
It becomes a right when enough people in a democratic society want it to be one, it's that simple. People in Europe believe that the right of individuals to control information about themselves and to not be stigmatized for actions in the past is to be valued higher than public access to it.
I perceive the US attitude simply as a sort of voyeurism. We already know it well from celebrity culture where people's entire lives are picked apart and put on a platter for the public to drool over, I have no interest of seeing it expanded to everyone, so I'm thankful for legislation to give me at least some control over information about me.
The biggest beneficiaries of this might very well be children who have had their entire lives put on the net by their parents without even having the slightest say in it.
>It becomes a right when enough people in a democratic society want it to be one, it's that simple.
That's not what a right is; a right is something that some logical/philosophical moral argument has determined people should have, regardless of what other people think. That is the whole point of rights in the US constitution: to protect people against the government and the "tyranny of the majority". Otherwise you could call something like "the right of the German people to have no Jews within one kilometre nearby" a right if the majority voted on it, and the term loses all meaning.
This Lockean discourse about natural rights and common law and republicanism isn't really a thing in Europe. It's very US-specific; Europeans in general don't believe that their constitutions (if they even have one) are quasi-sacred texts.
At the end of the day, rights and laws are expressions of the public's preferences. Europeans have different privacy rights because they want to have them. That doesn't mean those rights can't be good or bad, but they don't need to be derived from some higher realm of reason.
"The German eternity clause (German: Ewigkeitsklausel) is Article 79 paragraph (3) of the Basic Law for the Federal Republic of Germany (German: Grundgesetz). The eternity clause establishes that certain fundamental principles of Germany's democracy can never be removed, even by parliament.[6]"
"The Parlamentarischer Rat (Parliamentary Council) included the eternity clause in its Basic Law specifically to prevent a new "legal" pathway to a dictatorship as was the case in the Weimar Republic with the Enabling Act of 1933.[7]"
That’s a pretty specific implementation detail of a few specific democracies that doesn’t really refute anything that the parent comment said.
Like, for example, in the UK (which entirely lacks a formal constitution) the right to govern is literally derived from the most divine source: god himself, through his agent, the Queen.
In practice, despite the quasi-sacred and divine foundations of our government, it doesn’t mean jack. If the Queen were to exercise any of her divine powers against the will of the current Parliament it would cause a constitutional crisis and she would have those powers immediately stripped.
Which only proves that Germany is indeed very sensible to how governments can degenerate under demagogic movements.
Recognizing that democracies can be infected does not mean that they must hold to some sacred text. It just means that some paths are deemed too dangerous to even be considered.
There are human rights, civil rights and simple mere rights by law.
Also, in the criminal justice systems of most modern countries, criminals who have completed their sentences are by default considered to have a clean history (unless relevant).
The European correctional system doesn't believe in turning each small crime into a life sentence of marginalization and punishment. That is why we don't have to lock a huge chunk of the population in cages for life: letting people resume their lives after serving their punishment significantly reduces the number of returning convicts and makes society a safer place.
> Wrong, false, libelous, limited set of privacy related information should be correctable, removable
Which is exactly what the law is for.
The examples in the article were deceptively reported. For example, the doctor who asked The Guardian to take down articles about her suspension: she had successfully appealed that case, and a judge overturned her suspension and ordered the record expunged: her name was dragged through the mud on bad information.
Compare that to the US where any kind of accusation, even if it turns out to be false, can easily permanently ruin someone's job, career prospects, or life.
Well, I think it is a bit more complicated. Humans typically do not remember as well as machines. So the intention is probably to bring the internet experience closer to the real-life experience. And the GDPR doesn't seem to make a distinction between embarrassing Youtube videos and simple facts about what someone did (it would probably take decades in court to make such a distinction).
Nevertheless, it seems, the real problem isn't the impossibility of achieving such a 'delete-from-the-internet' mechanism, but rather the low value for average people and the high value for deceitful individuals.
I mean, I like having a legal right to make Google/Facebook/E Corp delete all the data they have about me and my usage, but there will be few occasions when I will make use of that right and even fewer for people who don't even care what they share online. Imposters, on the other hand, will find that right most valuable.
Sometimes the only "mistake" you made is to be born into the wrong minority.
Also, in a lot of countries punishment is more of a corrective action than revenge. This means even convicted criminals have the right to hide their conviction after some time.
It seems like there’s things from both sides of the camp on first glance, can you point out the bias you see? (Legitimate question, and I only quickly glanced.)
I'm judging by "Compliance costs are astronomical" when the supporting evidence is largely estimates from before it went into effect.
So you can't take everything too seriously, but still, it's good to collect more links. Also, the author is being clear about the weakness of some of the supporting evidence.
At least from what I read, it is the slant of the entire piece more than anything in particular. If we removed environmental protections we would also see a boom of growth in industry, but we decided that those benefits don't outweigh the environmental cost. Same here with GDPR: after Cambridge Analytica used digital micro-targeting to heavily manipulate populations and elections, it is simply too high a cost to pay for what is otherwise a pretty shifty industry.
I'd heavily recommend everyone here read the book Surveillance Capitalism; it is an incredible explanation of what goes on behind the curtain.
In short, GDPR has some relatively minor problems and externalities and blogger wants to scrap the entire GDPR because of them....
If you start out with an opinion and look for evidence to support it, yeah, that is biased. But it doesn't mean it's not a useful contribution to the conversation. Certainly it's better than not looking for evidence!
The article vaguely links Cambridge Analytica to GDPR. Is there really a connection, or is the article merely trying to frame GDPR negatively by comparison?
I don't think so. The APIs that Cambridge Analytica were taking advantage of were available long before GDPR became enforceable and are most likely illegal under GDPR because they allowed third parties access to your personal information without your consent - where it was a friend of yours who consented to revealing their information, Facebook would also reveal some information about you.
Cambridge Analytica is a data portability exploit. It leveraged your friends' ability to send your Facebook data to third-party apps. GDPR enforces more data portability, which in some sense allows a larger attack surface for such exploits. The article mentions one example of hackers extracting all your personal data after a takeover of your account.
Framing a law that forces companies to allow you to export your own personal data in a readable format to reduce vendor lock-in as "this will definitely cause the next Cambridge Analytica!" on the basis of "but user authentication might be bad!" is absolutely laughable, and the fact that this article got so popular here is pretty discouraging.
Meanwhile, people started to believe that Google reads their mind, and society changed the way people behave as the Internet transformed the consequences of chatting or posting things in public (or having certain opinions in public)... I think GDPR, like other laws meant to protect citizens from the Internet, failed to protect the user, even if it was a good first try. I hope some standardization will come in the future to handle cookies, some HTML tag standard or whatever.
We have the Do-Not-Track header that should be interpreted as a denial to all cookie notices and GDPR requests but they didn't write that into the law.
Exactly, that's what I mean: it cannot be part of a law... but such a standard must exist and be enforced by the browser, the same way window.alert is a browser pop-up and not a JavaScript one. Cookie rejection should work the same way.
Laws like GDPR are encountering problems being enforced without being too intrusive on the technology and the freedom to create products/standards.
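Honouring DNT server-side would be trivial if the law required it; a hypothetical check might look like this (the function names here are made up for illustration):

```python
def should_track(headers):
    """Treat DNT: 1 as a blanket refusal of tracking cookies.
    Nothing in GDPR currently mandates honouring this header;
    doing so is a voluntary site policy."""
    return headers.get("DNT") != "1"

def set_cookies(response_headers, request_headers):
    # The session cookie is functional and always allowed.
    response_headers.append(("Set-Cookie", "session=abc; HttpOnly"))
    # Tracking cookies only go out if the user hasn't opted out.
    if should_track(request_headers):
        response_headers.append(("Set-Cookie", "ad_id=xyz"))
```

The hard part was never the code; it was that the law left the header without legal force, so almost nobody checks it.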
The ePrivacy regulation is supposed to introduce something like that. Passing it would require the consent of Austria or, of course, a sensible political system for the EU.
I have an interesting question about GDPR and all legal compliance efforts. When GDPR was first announced, I studied it in-depth because I'm the CTO of a company involved in first-party content analytics, and I wanted to ensure we complied.
In addition to making changes internally and technically to ensure compliance, I also prepared a long Google Slide presentation that basically summarized my technical understanding of GDPR, after receiving the advice of several privacy attorneys. The information in this slidedeck was presented to my whole company, as a way to further ensure compliance -- to make sure my employees understood the policy at least as well as I did, since I had spent countless hours discussing the implications of the law -- as well as reading the raw text, which is excellently published/annotated by Algolia here: https://gdpr.algolia.com/gdpr-article-1
My inclination was to publish this deck I had painstakingly prepared publicly, because certainly it would be valuable to others. I publish a lot of stuff publicly on our blog, for example: https://blog.parse.ly/post/author/andrew-montalenti/ -- with the only goal being to share information with the community.
But then, one of my attorneys advised me against this. Basically, the concern was that if I publish something publicly about my understanding of GDPR, and it contains an error of understanding (after all, IANAL), then I could be held accountable for that. That felt really crappy to me -- after all, I'm just doing the best I can, and it seems like there's a lot of misinformation about GDPR out there on the web. Does anyone know anything much about this? To what degree can a company executive get him or herself in trouble for publishing a document that summarizes his or her own understanding of the effect of regulation, if the executive's company is potentially affected by said regulation?
I am not an EU attorney, but typically the risk with publishing something like that is not that you make a mistake, but rather you get it right. The problem arises down the road when your company does something that violates the law. Now your wonderful presentation is used to prove that your company knew it was violating the law, even though the actual circumstances may be a bit more complicated.
Also getting it wrong might indicate they are unintentionally not GDPR compliant and make others aware of that fact. But would that actually be worse than regulators finding out later? Especially when you want to comply?
"But then, one of my attorneys advised me against this. Basically, the concern was that if I publish something publicly about my understanding of GDPR, and it contains an error of understanding (after all, IANAL), then I could be held accountable for that. That felt really crappy to me -- after all, I'm just doing the best I can, and it seems like there's a lot of misinformation about GDPR out there on the web. Does anyone know anything much about this? To what degree can a company executive get him or herself in trouble for publishing a document that summarizes his or her own understanding of the effect of regulation, if the executive's company is potentially affected by said regulation?"
To me this sounds like typical lawyer paranoia. In what way could you be held accountable for publishing your interpretation? You are not giving legal advice.
It may or may not be overly paranoid, but I think the risk would essentially be that a publicly stated incorrect interpretation could be successfully used in court as evidence of failure to comply. I doubt the executive themselves would be held directly liable or be personally punished, it's just that it's a risk for the company that doesn't have any tangible benefit from a legal standpoint - so from a lawyer's perspective, why do it?
That being said - it seems unlikely that his understanding would be inaccurate given the amount of time and research he claims to have done, so the actual risk could be negligible. It might even be conceivable that such a public statement could be used as legal evidence in the company's favor showing that the CEO took every practical step possible to comply to the best of a reasonable and well-informed person's understanding of the law. The public relations boost of giving out good knowledge/guidance (attracting talent, customers/clients) might be sufficiently beneficial to justify the risk.
> To what degree can a company executive get him or herself in trouble for publishing a document
Not from the EU. They are interested in compliance, which you either are or aren't, and it will be explained to you why you aren't.
Possibly from your own company, but I assume your understanding and presentation of the GDPR does not hinge on gross negligence and it's a pleasant normal working environment where making a simple mistake will not lead to retribution.
Other than that there are other companies that may follow your guidelines and will be found lacking. I'm not sure about this one, and it might depend on the legal environment of your country.
Depending on your field of endeavour and location, I'd say it might be worth publishing. If customers can see online that you take the GDPR seriously, it might increase customer confidence, and, should there have been a mistake in your understanding, it might be pointed out to you before it becomes problematic.
“...there’s a lot of misinformation about GDPR out there on the web”
It sounds like your lawyer is telling you not to add to that misinformation. Also, you already said this deck is based on advice from counsel, so you’re maybe dragging them into an endorsement, and there are strict ethical rules about what lawyers can opine about to non-clients.
Anyway, GDPR is not that hard to understand. Just read the source materials. It’s one of the least-difficult legal texts you can take on.
Also, why bother? Like the EU directive before it, there won’t be any meaningful enforcement of these rules. A few examples will be made, but you’ll need to be woefully unlucky to be one of those.
I wish I weren’t so cynical but I’ve been following this area since 1997. It’s just an excuse for lawyers and consultants to rack up fees through careful manipulation of FUD. The intentions of the lawmakers are good, I’m sure, but laws without truly vigilant enforcement are eventually flouted.
I’m being downvoted, but the article itself confirms that a huge, wasteful amount of time and money has been thrown down the drain by people who could otherwise just have read the original text: http://data.consilium.europa.eu/doc/document/ST-5419-2016-IN...
The major benefit that GDPR has brought (at least in our company, and I suspect in other companies as well) is an increased emphasis on not storing user data that is not needed: the recognition that user data is useful, but also a liability.
This will likely mean that certain private data that companies would otherwise have saved, because why not, is no longer saved. That reduces the damage caused by a breach, a benefit which will never show up in numbers and stats.
When GDPR first came out, I found the terms pretty vague. I don't know how I can implement it.
For example, a user's email is Personally Identifiable Information. When the user wants to delete her account, I shall remove her email. This is easy.
But what if in my comment system, another user mentioned her email in a comment. Do I need to remove this comment too? What if this comment has replies too, should I remove all the replies?
What if a competitor makes use of these undocumented gray areas to attack my business?
Because you now need to implement full-text search or even natural language processing to identify personal identities in your comments, and that’s quite challenging. An evil competitor could register several accounts, purposely leave comments containing the personal identities of those accounts on the system, and then request to remove one of the accounts.
Your algorithm needs to parse those comments, identify which comments mention the to-be-removed identity, and remove them without breaking the integrity of your database. Failing to do that, you get sued...
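A toy sketch of one redaction approach (the regex, function name, and placeholder below are all invented for illustration; robust detection of personal identities in free text is much harder than this):

```python
import re

# Simplistic pattern, for illustration only; real-world email detection is
# harder (obfuscations like "alice at example dot com" slip through).
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def redact_mentions(comments, email):
    """Replace one specific email address with a placeholder.

    Redacting only the address keeps the rest of each comment (and any
    replies) intact, instead of deleting other users' content wholesale.
    """
    def sub(match):
        if match.group(0).lower() == email.lower():
            return "[redacted]"
        return match.group(0)
    return [EMAIL_RE.sub(sub, text) for text in comments]

comments = [
    "Contact alice@example.com for details",
    "I agree with bob@example.com here",
]
print(redact_mentions(comments, "alice@example.com"))
```

Redacting rather than deleting is one way to honor the request without destroying the surrounding thread, though whether that satisfies a regulator is itself an open legal question.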
I think we went in the wrong direction in terms of public data. It really isn’t in my best interest as a citizen that our public sector can’t use my data to run more effective, spot health issues sooner or perform city planning based on citizen-mobility rather than educated guesses.
I think it’s absolutely the right direction for private companies though. I know, I know, a lot of you are distrustful of government, but I’m Danish and we generally trust our public sector to an extent that would truly surprise a lot of you.
So with that out of the way, I think it’s a shame that we spend so much public funding burying public data in silos. I think we should absolutely keep citizen data safe, but I think we should also use it and perhaps work to make some of it less sensitive. Because some of it frankly doesn’t have to be sensitive.
In my country we have a social security number. You get it 1-5 minutes after you’re born, and in the olden days, it was used to identify you when you wanted to do things like open a bank account. It’s still used for that to some extent, but in the meantime we’ve created this thing called NemID (soon to be mitID), which is a national 2-factor secure digital identity, that we use to enter online agreements because it turned out that your social security number wasn’t actually safe. We’ve also had leaks and hacks exposing nearly half of the current social security numbers over the past 25 years.
Because a social security number is deemed sensitive by the GDPR, we’re spending hundreds of millions on the bureaucracy around it. It’s by far the most reported thing to our national data protection agency, I think almost 80% of the public cases involve it. And it makes no sense.
Why the hell didn’t we make it illegal to use it as an identifying number instead? It would have saved us so much money.
And that’s just one issue with the GDPR. Another is machine learning and data. This is obviously a sensitive area. I don’t personally think we should trawl through citizen cases to try and find possible alcoholics. Maybe someday, but society has to deem it morally acceptable first.
I do think we should use citizen data to schedule shifts though. It makes no sense to me, to have 10 nurses and 15 teachers do full time scheduling in a city of 60,000-100,000 citizens when an algorithm can do it instead. But we can’t, because the GDPR prevents us from using data that way.
I like the GDPR, but I think it needs a revision for the modern public sector, and I think we should really ask ourselves what we want with our data.
Do we want to spend trillions on a bureaucracy guarding it, or do we want to demystify some of it and put it to good use, so we can spend the trillions on nurses, teachers and better infrastructure?
I think it would help to state your nationality; European governments range widely in trust levels from west to east, and there are a lot of governments in there. I wouldn't trust the Greek government for a second.
[An example: the social security number (which is given to every doctor/pharmacy/etc) contains verbatim the date of birth and sex of the citizen, and this has been deemed lawful]
> the social security number [..] contains verbatim the date of birth and sex (and city of birth) of the citizen
Italy does that; as I understand it, it is meant to be easier to remember, and it was created in a time when things were very different. I agree that it poses issues now, but at least it makes clear that the code is not supposed to be a password.
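For the curious, the birth data really can be read straight out of the identifier. A minimal sketch of the decoding (the month-letter table is the standard codice fiscale scheme; the sample code below is invented, not a real person's):

```python
# Month letters used in the Italian codice fiscale.
MONTHS = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "H": 6,
          "L": 7, "M": 8, "P": 9, "R": 10, "S": 11, "T": 12}

def decode_birth_info(code):
    """Extract (two-digit year, month, day, sex) from a 16-char codice fiscale.

    Characters 7-11 (1-indexed) hold YY, a month letter, and DD, where DD
    is offset by 40 for women -- so the identifier itself reveals date of
    birth and sex.
    """
    year = int(code[6:8])   # two-digit year; the century is ambiguous
    month = MONTHS[code[8]]
    day = int(code[9:11])
    sex = "F" if day > 40 else "M"
    if day > 40:
        day -= 40
    return year, month, day, sex

# Invented code for someone born 15 March 1985, female (day stored as 55).
print(decode_birth_info("RSSMRA85C55H501X"))
```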
Really? Even after they tried to bring in mass surveillance only a couple of months ago? Even when they want to give double prison sentences to people who live in certain areas?
As a foreigner living there you might be less trusting as well. They are constantly changing the rules to make it harder for legal residents to settle.
> I do think we should use citizen data to schedule shifts though. It makes no sense to me, to have 10 nurses and 15 teachers do full time scheduling in a city of 60,000-100,000 citizens when an algorithm can do it instead. But we can’t, because the GDPR prevents us from using data that way.
Making sure the right staff gets to the right citizens at the right time is a massive undertaking. It’s where 60-80% of our staff works. We have something like 1200 teachers out of 7000 employees for instance.
It’s also an area that’s subject to a lot of requirements and shared resources. I mean, you have the regular schedule, which takes up a lot of time on its own because you need to find replacements when someone is missing. If I get sick, no one really cares that we go a day without an Enterprise Architect, but if a teacher gets sick, multiple classes will need a replacement. Then there is the irregular stuff like rehabilitation, special school events or a range of other things.
Basically it’s so complicated that it takes up the time of several full time positions.
It’s not the kind of complicated that’s unsolvable by ML, though. Our neighbouring municipality did a PoC on it, and I’m not sure how they procured the rights to use the data for it (they probably didn’t), and it turned out that they could automate almost all of the planning and scheduling. Humans still had to make decisions, but it would suggest available resources or alternative schedules, making the process much smoother.
That’s a lot of nurses and teachers you could actually put to work nursing or teaching, if it works.
It works by knowing what resources are needed though, and in the case of rehabilitation that involves knowing that someone needs anti-suicidal therapy at 10 am at a certain address. Which is extremely sensitive data, that the algorithm isn’t allowed to access under our adoption of the GDPR.
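To make concrete what the core of such a planning tool computes, here is a toy greedy matcher; every name, subject, and the strategy itself are invented for illustration, and real rostering adds qualifications, labor rules, travel time and much more:

```python
def assign_substitutes(absences, substitutes):
    """Greedily cover each absent teacher's subject with an available,
    qualified substitute. Unfilled slots fall back to a human planner."""
    available = list(substitutes)          # each entry: (name, {subjects})
    plan = {}
    for teacher, subject in absences:
        for i, (name, subjects) in enumerate(available):
            if subject in subjects:
                plan[teacher] = name
                available.pop(i)           # one substitute covers one absence
                break
        else:
            plan[teacher] = None           # escalate: no qualified match
    return plan

absences = [("Hansen", "math"), ("Jensen", "biology")]
substitutes = [("Nielsen", {"math", "physics"}), ("Olsen", {"history"})]
print(assign_substitutes(absences, substitutes))
```

The point of the thread stands regardless of the algorithm: the inputs (who is absent and why, who needs care when and where) are exactly the sensitive data whose access is in question.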
Under the implementation of GDPR, or under derived local rules? Because as far as I understand, humans working on a filing cabinet full of paper can be just as much "data processing" under GDPR as a computer reading a database. If the scheduling person knows "Mr X is sick and needs a replacement scheduled", an algorithm in their place can know that too.
It's certainly something where extra care is required though, and it's easy to see how people in charge go with a rather safe than sorry approach, and extending it into ML poses additional questions.
Our national implementation of the GDPR for the public sector prohibits cross-sectoral access to citizen data.
So while our teachers/nurses could legally use ML for planning as they have a legal right to use the data for planning, our digitisation department can’t build/train/support it and neither can a 3rd party supplier.
Since laws are very open to interpretation, at least until they are tested a few times in the courts, you could interpret them differently than we do. Which I’m guessing is what our neighbouring municipality is doing. They have the advantage of being 10 times bigger than us though, giving them much more influence, so much in fact, that they may end up paving the way for the rest of us.
> Maybe someday, but society has to deem it morally acceptable first.
Err, please let's not make individual freedom dependent on what society finds morally acceptable.
> I’m European, I trust my government in a way many Americans simply won’t understand.
I am as well, and I don't trust my state + federal government. Not to the extent that it seems common in the US, and in slightly different ways, but I certainly want my government to have as little data on me as possible.
Even if you do trust your current administration, imagine that the most extreme party of whichever political side you consider crazy & generally wrong wins with a landslide next month and has all that stuff available that you figured you can trust your government to have.
All entities make decisions for their own benefit 100% of the time. Where you come out ahead is when their interests coincide with yours.
That works out rather poorly against large governments, because businesses can lose customers by misbehaving, but a government with 55% of the vote doesn't need any more than that and can then do anything they want to the remaining 45% of the people. Especially if they can convince their own supporters that the victims are the villains -- then they can set you on fire and still get re-elected.
Don't assume you know what "for their own benefit" means. It doesn't mean not doing things that benefit society, it means not willingly hitting yourself in the face with a hammer.
People do things to help other people because it builds goodwill and reputation, and because they're a large enough entity that doing something that costs them $5 and makes everyone including them $6 each will still result in a $1 profit, and because of Hofstadter's theory of super-rationality, and because they care what happens to their kids, and a hundred other reasons.
Nobody but an idiot does things that are purely destructive to themselves and everyone around them.
"For their own benefit" means literally what the words say -- something that benefits them. Your wrong assumption is that socially beneficial acts can't benefit individuals. That's something we can argue about, but you don't have to be a self-righteous jerk about everything. That doesn't benefit anybody.
> "For their own benefit" means literally what the words say -- something that benefits them.
You have to admit that "it means not willingly hitting yourself in the face with a hammer." and "Nobody but an idiot does things that are purely destructive to themselves and everyone around them." is not the clearest way of expressing that. "Not actively harming everyone" is not the same as "doing things for your own benefit".
> Your wrong assumption is that socially beneficial acts can't benefit individuals.
That would be a wrong assumption indeed, considering society consists of individuals.
(I'm tempted to write more, but I've already spent more than enough time telling people they're wrong on the internet today)
> That's something we can argue about
Even if we actually disagreed on that point: It looks like there's something about us that means that no, we can't argue.
> but you don't have to be a self-righteous jerk about everything.
That is true. But when I perceive someone as arguing in bad faith, then it's hard to resist – even if they aren't and we're just misunderstanding each other.
When you wrote that you believe that Google does not violate the GDPR, to me it was like arguing that the moon does not exist because it would rip apart the earth. That might be an interesting intellectual exercise but I don't have to analyze your theory to know that it's wrong. The GDPR plainly says that you cannot force your users to consent. Google does. Easy to verify from Europe. Case closed.
I hope that explains why I refused to debate that topic.
I think the issue is with power, not intentions. Microsoft may make selfish decisions 100% of the time, but there's only so much impact they can have. The government has monopoly on violence, implemented through army and law enforcement, and can also make your life painful (or end it) in countless less direct ways - like monetary policies, healthcare policies, welfare policies, etc.
Ultimately, what the government can do is only limited by the country's constitution. Not doing beneficial things doesn't reduce its ability to do bad things.
> Microsoft makes decisions for its own benefit 100% of the time.
The nature of the decisions is completely different. Microsoft cannot put you in jail, confiscate your property, etc. Also, Microsoft has outlived a ton of governments who lost the trust of the public.
>I think we went in the wrong direction in terms of public data.
Aren't all the complaints from private companies? You can't use, say, FB data to do some nice thing for society because FB will not give it to you, even before GDPR; maybe they give you some secret access if you pay them a lot of money. The API access is super limited.
People like to say that, but it does. Each member state has had to adopt the GDPR into their national legal system, and it’s true that it’s called various things. In Denmark where I’m from we call it Persondataloven, but for all intents and purposes it’s GDPR.
This overlooks one of the biggest under-reported problems GDPR has created: a large percentage of all industrial sensor data is "personal data" under GDPR. These are systems and companies that nobody associates with collecting or using personal data, because their business isn't about people, but the regulations have defined the scope broadly enough that there is universal agreement among their legal experts that they are liable if they do not treat this data as "personal" under GDPR.
This raises some difficult challenges that the average Internet business doesn't have to deal with:
- Compliance with GDPR requirements for personal data in many industrial settings is operationally impossible. These aren't Internet ad tech databases.
- Some industrial systems aren't the kinds of things you can trivially upgrade to make them compliant in any case. We are talking embedded systems with operational lifespans measured in decades. In many cases there are other strict regulatory compliance requirements around the design and modification of these systems.
- The workloads and data models for some high-scale sensor data models make it technically impossible, given the current state of computer science and hardware, to comply with some obligations under GDPR when handling "personal" data. And for a much larger set of systems, it would be economically implausible even though theoretically possible.
- Sensor data infrastructure software often lacks the basic functionality required to support compliance, as the functionality that the regulators assumed exists for other purposes has no purpose in this context and therefore has never been implemented. There is a disconnect between what is required of the software users and what the upstream vendors can or are willing to provide. These aren't software companies.
- For some specific industry sectors, compliance costs disproportionately fall on EU-based companies by virtue of the fact that their primary operations are in a European country, even though they sell into a global market. That's an economic own goal.
This has become a Sword of Damocles over some industrial companies because their legal teams have studied their exposure to GDPR, identified substantial compliance obligations, and realized that compliance is effectively impossible. It is pretty clear to me that the regulators were so focused on Internet advertising companies and similar that they were completely oblivious to the unintended consequences for unrelated industrial sectors.
I've been studying this problem for a few industrial sectors for a couple years now. You have companies scrambling to find technology that often doesn't exist and in some cases requires hardcore computer science R&D before it could exist. And this is a business opportunity for someone to add a tax to what these companies produce. But the worst part is that this extremely expensive compliance exercise does almost nothing for personal privacy because most of this data was being collected for boring industrial applications.
Many industrial companies measure the operational environment at massive scales now using multimodal sensor networks and platforms thanks to plummeting sensor costs -- LIDAR, hyper-spectral imaging, video, RF/radar, audio, remote sensing, chemical and particulate sensors, et al. The exact mix and scope of coverage varies with industry and company. Sectors are diverse and include automotive, utilities, agriculture, oil and gas, logistics, etc. The sensor data is primarily used to manage risk, increase efficiency, improve safety, adapt to changing conditions, respond to incidents, do preventative maintenance, and similar.
Any sensor platform that can detect the existence of an entity in space and time and is measuring space where people exist, which is most of them, is collecting personal data. A sophisticated party can reconstruct the identity of detected entities in the sensor data in a straightforward way. Typically, the sensor coverage inherently collects data on a large number of people from which it is impossible to obtain consent, whoever happens to be within or wanders into the sensor range. The detectable people in these sensor data models are analytical by-catch. I've demonstrated this to many organizations using diverse exhaust from industrial sensor systems never designed for that purpose.
Some of these data models are incredibly large and fast moving, petabytes per day. Many of them collect data in federated environments that are severely bandwidth-limited and energy restricted; while the data model is very rich, there aren't enough local resources to do anything outside the designed scope. You can neither push compliance operations to the data, since there isn't enough compute, nor can you backhaul it to someplace that does. The aggregate data models can exceed an exabyte, so you aren't indexing where people are (that would be incredibly expensive) and any attempt to brute-force search to identify people for compliance purposes would effectively be a denial-of-service attack on the system.
tl;dr: the scale and scope of external environmental sensing platforms increasingly used by industrial companies inherently allows you to detect the locations of many people in space and time that are unrelated to the business operation. The necessary scale and operational architecture of these systems make GDPR compliance technically implausible.
Listing Klout as a casualty of the GDPR is like listing polio as a casualty of vaccination. Ditto for the vast swamp of ad intermediaries. It shows the legislation working as intended.
The right to be forgotten is the equivalent of the right to no punishment without the law. The deeds must be interpreted and judged by a court, not by a public, basing their opinion on newspaper publications. When someone is denied a job because there were some news about him in Google from many years ago, it's a form of extrajudicial punishment, which is illegal, and must be prevented.
It's far too powerful. It makes sense when you think of publishers and articles about criminal cases.
It becomes much harder in social media. If I type your name in a comment that I own, the platform becomes obligated to destroy my intellectual property to satisfy your right to be forgotten.
If a book is written with a politician's name in the title, can the politician ban this book from the Internet?
> If a book is written with a politician's name in the title,
The politician would need to argue that his/her own name is problematic.
> If I type your name in a comment that I own,
If you write a long comment about how I am a terrible person because of actions I took 10 years ago that are no longer relevant, yes. As far as I understand, the right is not for you as a human to be forgotten, but applies to specific information about you that is no longer relevant.
You are giving a government body the power of total Internet censorship and trusting in their opaque process to determine whether each request is warranted or not.
I can understand many arguments for censorship for the greater good. This is censorship as an individual right enabled by a closed review process.
I believe it will have far reaching implications and it will be fertile grounds for corruption. A massively powerful governance tool is created and all its stated goals are of limited public good.
Every GDPR request I've seen has been public figures (actors, etc.) asking to be forgotten from commercial works (movies) they were in and no longer like. It's ridiculous.
> I can't go to my school or a credit bureau or an insurance company and say that all my past history should be forgotten.
> Why should we enforce such a regulation online?
I'm a bit confused. Exactly the same laws apply to the insurance company and credit bureau. They have to have valid reasons for keeping the data, and if they don't have those reasons they need to get rid of it. Which applies equally to online companies.
OK, can you also post here all your personal details, so I could sell them to anybody who wants to exploit them to earn money? OK, fine, you don't want to post it here. Could I get them from your bank, work, friends, family, post it here and sell to anybody?
This is a strawman. You are talking personally identifiable information and data usage policies which is an entirely different thing than the requirement for all data to be deleted.
> You are talking personally identifiable information and data usage policies which is an entirely different thing than the requirement for all data to be deleted.
Could you explain a bit more what you mean? GDPR only concerns itself with personally identifiable information, and is at its core about the rules for "data usage policies" around it (which of course will involve rules for when to delete data).
Are you sure that GDPR only concerns itself with PII? In other words, is it legal to collect information about users as long as it isn't tied to PII?
I consider PII to be things like name, Social Security Numbers, a credit card #, an email, DOB, etc.
You seem to be suggesting as long as the data isn't associated with the above or can't reasonably be tied to the above then GDPR doesn't apply.
I am in favor of rules around the usage and collection of PII information. i.e. that information should not be shared with other parties without the user's consent and in general access should be restricted.
My main beef with GDPR is the difficulty of implementing such a system, with on-demand wipeout.
"just PII" was a bit too strong, but this is how GDPR defines personal data, which is what it regulates:
> ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person;
Identifiable is core to this definition. It's important to note that it's taken so far that it's enough if others can establish the link, which is why things like IP addresses or photos fall under it, even if I as a website operator can't just go ask ISPs for the user behind an IP.
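One common mitigation is to truncate addresses before storage so they no longer point at a single subscriber, as many web servers' log-anonymization settings do. A sketch of that idea (whether truncation is legally sufficient in a given case is a separate question):

```python
import ipaddress

def anonymize_ip(ip_str):
    """Zero the host bits of an address: keep a /24 for IPv4,
    a /48 for IPv6, discarding the part that singles out a user."""
    ip = ipaddress.ip_address(ip_str)
    prefix = 24 if ip.version == 4 else 48
    net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(net.network_address)

print(anonymize_ip("203.0.113.42"))  # 203.0.113.0
print(anonymize_ip("2001:db8::1"))   # 2001:db8::
```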
> with on demand wipeout.
I keep coming back to that: GDPR only has something I'd call "on-demand wipeout" if your only basis for processing is "I've asked the user for consent", because they can revoke said consent (or if you kept data without justification, of course). If you need the data to fulfill a contract, you can store it as long as that's still true. If you're legally obligated to keep records, the person can't just request you delete them. If you can argue a strong overriding interest to keep some data, you can keep it - although that one is of course open to interpretation as to when your interest actually weighs higher than the person's (an example might be fraud prevention records).
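That logic can be made concrete with a toy model. The basis names below follow Article 6, but the all-or-nothing function is a deliberate simplification for illustration, not legal advice:

```python
# The six Article 6 lawful bases, by informal name.
LEGAL_BASES = {"consent", "contract", "legal_obligation",
               "vital_interests", "public_task", "legitimate_interests"}

def must_erase(record_bases, consent_revoked):
    """Return True if an erasure request must be honored.

    Simplified model: the data goes only when no lawful basis remains,
    e.g. consent was the sole basis and has been revoked. Real
    assessments (like a legitimate-interests balancing test) need
    case-by-case review.
    """
    assert record_bases <= LEGAL_BASES
    remaining = set(record_bases)
    if consent_revoked:
        remaining.discard("consent")
    return not remaining

print(must_erase({"consent"}, consent_revoked=True))                      # True
print(must_erase({"consent", "legal_obligation"}, consent_revoked=True))  # False
```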
> right to be forgotten is too onerous to implement and not present in other domains
Not always, but it definitely is. Take the legal system. The UK has something called the Rehabilitation of Offenders Act. After some varying time - dependent on seriousness of offence - I can legally answer "no" to a potential employer asking "have you ever been convicted?".
Google makes a mockery of that. If the law says a 7 year old minor offence is spent, why not search and media too?
> They do make it harder for smaller businesses to compete
As someone working mainly for smaller UK businesses over my career, I don't see this at all. Complying with GDPR, and its very similar Data Protection Act predecessor, has been fairly trivial.
> a credit bureau or an insurance company and say that all my past history should be forgotten
For both credit rating and insurance history, they age out after 5 or 6 years.
Seems like it's online that is wanting the exception of "everything, forever".
I don't believe that is true. Credit bureaus and insurance companies have information about every place you have lived and worked, and what school you attended.
The actual ratings may be weighted on information from the last X years.
Also, it is much easier to implement a policy where no data can be retained after X years than on-demand wipeout.
Neither has which school I attended, or has ever asked. An insurer has employers, but only those whilst insuring with that company. I suppose my bank could have told the credit rating agency, but they'd have to infer it from the monthly wages deposit. Is that required in the US?
If they are only weighting the last 5 years, they no longer have a business case under GDPR to retain it[1]. Essentially it crystallises in law what should already have been the case.
> it is much easier to implement a policy where no data can be retained after X years than on-demand wipeout
Not sure how when all that changes is the clock.
[1] If my account was fraudulent in some way, or there's a law requiring some retention, there is a business case for retaining longer, and it is permitted.
> Not sure how when all that changes is the clock.
This is most surely not the case. Many data stores are simply dated collections of files.
With fixed expiration for all data you can simply implement GDPR with things like TTLs and making sure that any downstream systems do not consume data older than a certain date.
With individual wipeouts that can happen at any time this becomes much more challenging.
Now every system that uses that data has to have the ability to wipe it at the individual record level, on demand.
This has broad implications, especially depending on how you interpret whether things like derived models, aggregate stats, etc. need to be recalculated in light of GDPR requests.
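The contrast between the two deletion models can be sketched roughly. This is a toy in-memory store, not anyone's actual implementation; all names are illustrative:

```python
import time

class RetentionStore:
    """Toy key-value store contrasting fixed-window expiry with
    on-demand, per-record erasure."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.records = {}  # user_id -> (write_timestamp, data)

    def put(self, user_id, data):
        self.records[user_id] = (time.time(), data)

    def get(self, user_id):
        # Fixed-window expiry: readers simply refuse stale data, and a
        # periodic sweep can drop whole date-partitioned files at once.
        entry = self.records.get(user_id)
        if entry is None or time.time() - entry[0] > self.ttl:
            return None
        return entry[1]

    def erase(self, user_id):
        # On-demand wipeout: every system holding a copy must support
        # record-level deletion, at any time, for any individual.
        self.records.pop(user_id, None)

store = RetentionStore(ttl_seconds=5 * 365 * 24 * 3600)  # ~5-year retention
store.put("alice", {"history": ["loan application", "address change"]})
store.erase("alice")  # an erasure request arrives
assert store.get("alice") is None
```

In a single dictionary the two look similar; the pain appears once copies of the data live in logs, backups, caches, and derived datasets, where the TTL sweep stays one cron job but `erase` has to fan out to every system.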
> I can't go to my school or a credit bureau or an insurance company and say that all my past history should be forgotten. Why should we enforce such a regulation online?
GDPR makes no difference between online and offline businesses: They can keep data if they have a proper justification for it.
https://gdpr-info.eu/art-6-gdpr/ is the core list of reasons for data processing, any company needs to justify its data usage based on them. (Case a., consent, is a bit special, other articles put clear limitations on it so it isn't abused, make it revocable, ...)
> They do make it harder for smaller businesses to compete
Small business person here - implementing GDPR compliance basically meant a few tweaks to our privacy policy to make it clear what data we collect, why we collect it, and how long we retain it for.
Why was it so simple? Because GDPR is honestly not that onerous, and because we already gave a fuck about privacy.
Pottery Barn, owned by Williams-Sonoma, is a weird mention. They sell household goods from mall stores and their online catalog. Their exposure to GDPR should be pretty minimal: ship the product, don't sell your customer list, and do the basic security work they should be doing already.
This seems like a pretty even-handed analysis of the consequences of GDPR. I correctly predicted most of them, and have been highly criticized for it.
It’s truly stunning to me that a community like HN that consists of many current and future startup executives can be so adoring of regulation that has “been the death knell for small and medium-sized businesses“ and for which “compliance costs are astronomical” according to the article. I sense that it is mostly the vocal minority making these comments that ignore the seriously negative consequences of GDPR and paint any company or person that is critical of it as a privacy abuser. I suspect that it is the same small group of abusive users that downvote any comment critical of GDPR into oblivion. Still, it is a very bad look for a community that claims to be so invested in startup culture.
It really is OK to recognize that something with good intent (privacy legislation) can be poorly written and consequently fraught with problems (like GDPR). Any idiot could have predicted that the fine structure they imposed meant potential death for small businesses and a mere speed bump for large ones. GDPR should be torn up and rewritten. The fine structure should be a percentage of revenue, period - not 4% of revenue or €20 million, whichever is higher. That is ludicrous and was designed specifically to drive small businesses out of the market.
Highlighting the cost of privacy is the very point of GDPR. This is the only way companies will stop collecting personal data by default and think very carefully about the consequences of what they keep.
Several specifically pre-GDPR issues are mentioned but still attributed to GDPR.
GDPR is a win for the consumer. Not perfect, but overall a great step forward. But I do think The Right To Be Forgotten is a terrible idea. However, it was NOT introduced with GDPR, and there are several cases prior to GDPR.
Perhaps there are good points to be made. But the article fails to stay sober with the misleading and sensational claims.
Yes, an economy based on deception, one that uses its customers in unknown ways, most of the time in ways entirely unrelated to the product, is failing after appropriate regulation.
All consequences seem entirely acceptable.
Businesses must act with responsibility towards society. The regulator is usually lax, until all hell breaks loose.
Those businesses seem to all be doing fine. They just implemented one of those annoying pop ups that you have to agree to before you can see their site, and they’re still doing all the same things with your data. But with your “permission” now.
All the other businesses, though, the ones who were never planning to do anything bad with your data. Those ones still all had to do a bunch of work and show that same stupid notice that drives their customers away. And they’re a lot less able to defend themselves against the mean spirited user behaviour outlined in the article.
And for added fun, the eu now knows that it can pass silly laws like this as often as it likes, and the whole software world will need to devote a team to do a full sprint implementing another piece of user hostile code that doesn’t help their business.
> another piece of user hostile code that doesn’t help their business.
That's entirely their problem. Compliance with the law wouldn't require user-hostile "code" (I think you meant UI) if the business model wasn't user-hostile in the first place. It wouldn't hurt their business if their business was reasonable.
True. Users prefer "free" services. I can imagine that some do prefer paid services, but the majority expects things on the Internet to fall from the sky.
That said, I think your point doesn't prove those businesses good to have.
> though, the ones who were never planning to do anything bad with your data. Those ones still all had to do a bunch of work and show that same stupid notice
If they're not planning to do anything bad with the data (that is, only use the data for what they need to work with the customer) they don't have to show the notice. It's that simple. RTFR
>“Since 2016, newspapers in Belgium and Italy have removed articles from their archives under [GDPR]. Google was also ordered last year to stop listing some search results, including information from 2014 about a Dutch doctor who The Guardian reported was suspended for poor care of a patient.”
> including information from 2014 about a Dutch doctor who The Guardian reported was suspended for poor care of a patient.
FWIW, according to the Guardian article they reference, that case was not about removing the fact that the doctor was investigated and temporarily suspended (that's public record), but about a website taking such records and publishing a "blacklist of doctors unfit to treat patients". It's more a libel-like case than a privacy one.
> "The judge said that while the information on the website with reference to the failings of the doctor in 2014 was correct, the pejorative name of the blacklist site suggested she was unfit to treat people, and that was not supported by the disciplinary panel’s findings.
The court further rejected Google’s claim that most people would have difficulty in finding the relevant information on the medical board’s Big-register, where the records are publicly held."
That statement sounds to me to be synonymous with "this doctor does not / should not have their license"; given that a court actually decided otherwise, this looks libelous to me.
When did a court decide it was libelous? As far as I understand a court decided it fell under Right to be Forgotten, which surely extends beyond libel laws.
This is a circular argument. The courts decided it falls under the law in question, but that isn’t a justification in itself for the law in question.
> "The judge said that while the information on the website with reference to the failings of the doctor in 2014 was correct, the pejorative name of the blacklist site suggested she was unfit to treat people, and that was not supported by the disciplinary panel’s findings."
Again, IANAL, but what I’m reading does not seem relevant to libel. Namely, the pejorative title of the website has nothing to do with the claims they’re making.
If the doc had a libel case, why the hell did they not go for that, instead?
Sure. But what about the 100% of the population who are not perfectly rational beings with infinite time and energy to do thorough research before getting outraged?
Couldn't agree more. It's a biased presentation of the facts, and could easily be reframed. Take "compliance costs are astronomic", for instance, could just as easily be rephrased "Companies spend more money than ever to ensure the safety of customer data". I know that's certainly the case where I work: it's been a massive amount of work, but it's great for our customers. If it weren't for GDPR, I am certain that management wouldn't have allowed us to do it.
Cleaning up a shitty system is expensive, and there are bumps in the road. It's still worth doing.
> If your account gets hacked, the hacker can use the right of access to get all of your data.
If your account gets hacked, the hacker has access to your account. Duh.
> The right to be forgotten is in conflict with the public’s right to know a bad actor’s history (and many of them are using the right to memory hole their misdeeds).
People can change. Newspapers can exaggerate one's misdeeds.
> And the right to opt-out of data collection creates a free-rider problem where users who opt-in subsidize the privacy of those who opt-out.
Opting out of data collection isn't a thing under the GDPR. Breaking business models that involve people selling their privacy is an intended consequence of the GDPR.
> “Amazon sent 1,700 Alexa voice recordings to the wrong user following data request” (The Verge)
Doesn't sound like a company that can be trusted to ensure people's privacy without regulation.
> “The problem with data portability is that it goes both ways: if you can take your data out of Facebook to other applications, you can do the same thing in the other direction. The question, then, is which entity is likely to have the greater center of gravity with regards to data: Facebook, with its social network, or practically anything else?” (Ben Thompson)
Freedom includes the freedom to make bad decisions.
> “Presumably data portability would be imposed on Facebook’s competitors and potential competitors as well. That would mean all future competing firms would have to slot their products into a Facebook-compatible template. Let’s say that 17 years from now someone has a virtual reality social network innovation: does it have to be “exportable” into Facebook and other competitors?
No more than Facebook has to create a search engine so you can export your search history into Google.
> “About 220,000 name tags will be removed in Vienna by the end of [2018], the city’s housing authority said. Officials fear that they could otherwise be fined up to $23 million, or about $1,150 per name.” (The Washington Post)
The data protection authorities later told them that this is bullshit.
> As of March 20, 2019, 1,129 US news sites are still unavailable in the EU due to GDPR. (Joseph O’Connor)
"Losing" businesses that don't respect privacy is intended. It's kinda flattering that so many US news sites specifically cater to EU residents, making them subject to the GDPR. But frankly: We don't care much about your local news.
> During a Senate hearing, Keith Enright, Google’s chief privacy officer, estimated that the company spent “hundreds of years of human time” to comply with the new privacy rules. (Quartz)
> However, French authorities ultimately decided Google’s compliance efforts were insufficient: “France fines Google nearly $57 million for first major violation of new European privacy regime” (The Washington Post)
The French authorities rightfully didn't care how much time Google spent on not complying with the GDPR.
> Tradeoff between privacy regulations and market competition
Oh no, we might lose the ad market.
> GDPR has been the death knell for small and medium-sized businesses
Companies that cannot safeguard their users' privacy shouldn't exist, not to mention those whose business model is based on infringing on their users' privacy.
---------------------------------------------
The "arguments" ad companies use against the GDPR are just absurd.
EU 2016: We don't want businesses based on violating our citizens' privacy to operate anymore. You have two years to comply.
Ad companies 2018: Evil government! If you force us to stop violating our customers' privacy, we will stop violating our customers' privacy! You will regret this! And why didn't you warn us?
---------------------------------------------
GDPR: You must ask your customers to opt into data collection, letting them opt out is not sufficient.
Ad companies: The evil EU fined us for not complying with the GDPR! That's unfair! How could we know that "letting them opt out is not sufficient" means that letting them opt out is not sufficient? The GDPR is so vague! And we spent so much money on not complying!
> Amazon sent 1,700 Alexa voice recordings to the wrong user following data request
You know the best way to prevent sending 1,700 voice recordings? Not making 1,700 voice recordings in the first place.
Amazon will eventually mess up and leak data to the wrong person. Blaming this on the GDPR is stupid. If Amazon actually respected user privacy they wouldn't make those recordings in the first place.
I just think it misunderstands how people actually behave.
Human behavior is always a challenge for any regulation. These are early days for digital age regulations, even the best efforts will be hit or miss at best I think.
It's not just for protection but also for control, which gdpr excels at. I am able to retain full control over where my data goes, who is allowed to have it, I can make sure it's deleted when I don't want it there, and I can actually request to have it.
I‘m not so sure about that. Sure, in theory this works, but if your data was shared with thousands of companies, how do you know it really was deleted? Furthermore, GDPR still has the thing that businesses are allowed to have a legitimate reason for keeping your data.
So far, as an end user, the GDPR doesn’t feel like anything changed at all. Facebook and Google still gather shitloads of data with zero control on my side. Cookie warnings on every website and loaded with dark patterns (link leads to link, leads to link, leads to link, leads to server timeout). I simply accept these popups and trust uBlock Origin to actually block the whole AdTech shenanigans, instead of relying on their popup bullshit.
>Those are a joke. Considering what's going on, those values should be orders of magnitude higher.
Frankly, I think higher fines and more aggressive fining would even further deepen the business moat that mega businesses are already developing over small and medium.
The article already describes how the current regulatory regime boosted Google and big players 20-40% directly at the cost of small and medium (not in the top 100 or top 50) sites.
Frankly, I think European countries should just take the China approach and ban American companies because there isn't going to be a regulatory structure that works here, and they're only going to hurt their domestic competition by playing this stupid game.
China doesn't ban American companies for that reason; they ban them because political dissidents could use them to communicate privately (Gmail) or publicly (YouTube, Facebook, Twitter, ...). Not all American companies are banned, either.
I'm sorry but anti-trust for foreign companies is kind of a joke. The French government et al literally cannot "break up" Google or Facebook et al, all they can do is ban Google from their territory.
Also, deep and hearty guffaw from me regarding "the business moat is irrelevant". Thinking like that is why American companies dominate European tech and not vice versa, just saying. Thinking like that is why domestic competition just took a 40% hit in Europe as the business went to largely American firms. 40%!
> Frankly, I think higher fines and more aggressive fining would even further deepen the business moat that mega businesses are already developing over small and medium.
You can always charge small companies more than the big ones. Also: witness my world's tiniest violin.
And yes, if that doesn't help, absofuckinglutely block them.
This is not a joke, because the European Agencies believe in teaching and learning more than in punishing. What they want is compliance, not making money.
They will give you warnings and time to fix it rather than fine you. A fine is a solution of last resort, after you have burned them and proved you cannot be reformed.
Exactly, apparently the average amount fines is fairly low. The maximum fine is 4% of revenue or 20 million euros (whichever is bigger). 4% of Google's 2018 revenue would be almost 5 billion euros (with a B). Google could have had it much worse.
Unintended consequence: because children are under the custody of their parents until they become adults, a parent could easily go in and demand the entire history of data a company collected from their child be provided; a gross violation of their right to privacy which some argue doesn’t even exist anyway.
Yes, of course. Imagine a father who rapes his child and abuses his wife. The child is removed from the father by the mother who goes into hiding, but with temporary legal approval.
We don't want the father (who likely retains some element of parental responsibility until the court case has finished) to be able to get the child's location via SARs.
In Europe children are humans and humans have rights. Children are not the property of their parents.
This seems like a problem with the temporary legal approval and not a genuine response to the question. In the US, parents accused of abuse are denied all parental privilege/responsibility at the beginning of the accusation until it is proven that the parent was innocent.
If your argument is that this regulation is fine as long as all other regulations are infallibly drafted and implemented, that only works until the other regulations are fallibly drafted and implemented. But fallible regulations are the rule rather than the exception.
Battered spouses frequently run away without notifying authorities, because leaving gets them out of the situation whereas contacting authorities extends and inflames the conflict.
And, because everything is a trade off, if you pass a law which is highly protective of victims then the abuser can file a false abuse complaint against the victim, then kidnap the children and deny the victim all access to their children until the fraudulent abuse complaint is dismissed.
Parental alienation is a thing. If the mother was making it up you'd be denying the child's right to a family life with its father, and the father's right to a family life with his child.
I was just stating how it works in the US. My mother was accused by neighbors when I was a child (certainly a much weaker tie than parental). I was taken away by the state immediately while they investigated. The claim was determined to be founded and the state took permanent custody.
As a parent, yes. It's a gradient that starts at "no" when they are an infant to "yes, just like any other adult" when they turn 18. My kids are elementary age and I definitely respect their increasing right to privacy, even from me.
This is the centre of a hot button issue in Alberta, Canada's politics. The previous provincial government passed a bill that prohibited schools and teachers from telling parents that their child had joined a GSA [0]. The new conservative government decided to change that aspect of the legislation and controversy ensued. Many believe that this change would put LGBT children at risk [1][2].
Not sure. If you are raising a child and they are hiding information or data from you that could be detrimental to their development, or that may imply intent to hurt or maim others, do you have a right to know about it so you can take action?
Could even be nasty in a case where separated parents have 50/50 custody and one parent uses GDPR to get data on what another parent might be doing with their child and build a case against them.
Not per se, no. But there are many overbearing/controlling parents out there. It's easy to imagine scenarios where parental access to GDPR data could be a bad thing (and yet other scenarios where it could be a good thing).
> And the right to opt-out of data collection creates a free-rider problem where users who opt-in subsidize the privacy of those who opt-out.
The actual fuck? Pardon the language, but I cannot express this strongly enough.
What does ”subsidize” mean? Users who agree to sell their data subsidize those who agree not to sell their data? And not having to rely on data sales puts financial burden on companies and drives them out of business? Good riddance
Can GDPR be used to actually destroy a company operating online in the EU? I'm guessing it could be, by simply overwhelming them with valid requests under GDPR, like "Give me all my data!"
Since the goal is merely compliance with the spirit of the law, the courts likely wouldn't hammer down a company for having issues when trying to legitimately and fully comply with the law.
I would hope that under such an intentional deluge, the court would accept that the reasonable turnaround time for requests would increase substantially.
> The relevant text in the final version (Article 12.5) is as follows:
> Where requests from a data subject are manifestly unfounded or excessive, in particular because of their repetitive character, the controller may either:
> (a) charge a reasonable fee taking into account the administrative costs of providing the information or communication or taking the action requested; or
You should then implement an automatic process. But why would a large number of your users start asking for their data at once? Maybe you did something wrong and they want to move elsewhere, and in this case you would like to keep them hostage?
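A minimal sketch of such an automatic process, with a per-user throttle in the spirit of Article 12.5's "manifestly unfounded or excessive, in particular because of their repetitive character" clause. The cooldown policy and all names here are hypothetical, not anything the regulation prescribes:

```python
import json
import time

# Assumed internal policy: one free export per 30 days; repeats may be
# charged a fee or refused under Article 12.5.
REQUEST_COOLDOWN = 30 * 24 * 3600

last_export = {}  # user_id -> timestamp of last fulfilled request
user_data = {}    # user_id -> whatever personal data the service holds

def handle_access_request(user_id, now=None):
    """Fulfil a subject-access request, or flag a repetitive one for
    the fee/refusal path that Article 12.5 permits."""
    now = time.time() if now is None else now
    previous = last_export.get(user_id)
    if previous is not None and now - previous < REQUEST_COOLDOWN:
        # Repetitive request: do not fulfil automatically; route to the
        # "reasonable fee or refuse" decision instead.
        return {"status": "repetitive", "retry_after": previous + REQUEST_COOLDOWN}
    last_export[user_id] = now
    return {"status": "ok", "export": json.dumps(user_data.get(user_id, {}))}
```

With something like this in place, a deluge of legitimate first-time requests is just bandwidth, and only genuinely repetitive requesters hit the slow path.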