Facebook to Pay $550M to Settle Facial Recognition Suit (nytimes.com)
277 points by i_am_not_elon on Jan 29, 2020 | 100 comments


I find it weird how people compare a fine for some specific conduct to the revenue or profit of an entire company. In this case, the illegal use of facial recognition probably slightly increased engagement with photo sharing features. That's worth something, but it obviously accounts for only a very small portion of the company's revenue.

If the story was “judge declares entirety of Facebook enterprise illegal, punishment is $550M fine” I would get the outrage.


You need to view this in the context of everything Facebook is doing. They might get fined $550m for this one thing, but how many projects is Facebook engaged in that skirt regulations or outright breach them? Dozens? Hundreds? So you can't compare the $550m against the facial recognition alone; you've got to compare it against the entire suite of questionable things they're doing. The point is that the corporate strategy is to skirt these lines, and the success of that strategy is the cumulative success of all the projects, net of the handful of fines they take. So you've got to have disproportionate fines, because you know enforcement is patchy.


When Facebook gets sued for all that other wrongdoing, they can pay out for it. Maybe you boost the fines by weighing the probability of getting caught. Facebook has been fined before and many other legal conflicts are ongoing, so it’s not as though they are getting a pass.

If Facebook gets a company-destroying fine for each act of wrongdoing, then it’s just a race where the first victims to prosecute get compensated. The slowpokes are screwed because the company has nothing left to pay out. This seems like a less fair system than one where fines are proportional to the value extracted or damages caused by the illegal conduct.


While I understand what you are saying, I don't think the law should work like that, especially for companies, where in the end nobody is going to jail.

If we see it just like that, then breaking a law is just a calculated risk, and companies take calculated risks all the time, so they will take it. There is, in my mind, no way that Facebook did not know they were breaking the law.

If we allow the law to be just one parameter in the strategy, then the law is just a tax that you have to pay if you get caught.


Following your logic, they now have dozens or hundreds of projects each risking a $550 million fine.

If I were Facebook, I would be quite uneasy about that.


I ... don't think you're grounded in reality. What do you think Facebook is doing right now that currently breaks the law?

Please, be specific.


Tracking users without their consent, and tracking people who aren't even users [0], is probably one of the most blatant ones for people in the EU.

But it's not like there's a lack of other issues to pick from [1]. Of course, one can insist on it all being "alleged", but let's be real here on Hacker News: if it's technically feasible, it's most likely being done; legalities are regularly just an afterthought to the actual product, particularly when it comes to monetizing user data.

A lot of that might be completely legal in the US, with legal doctrines that undermine privacy expectations on a fundamental level [2], but the US isn't the whole world.

[0] https://www.reuters.com/article/us-facebook-privacy-tracking...

[1] https://en.wikipedia.org/wiki/Criticism_of_Facebook

[2] https://en.wikipedia.org/wiki/Third-party_doctrine


Yes. This is actually a huge settlement, driven by the fact that the Illinois biometric law has statutory damages of $1,000-$5,000 per incident. As the article notes, the Equifax settlement was smaller despite involving far more customers, because proving actual harm for privacy violations is difficult.


The prior media coverage of this suit described it as a $35bn lawsuit, because the working number was 7mm incidents and the media assumed (and plaintiff alleged) Facebook acted “intentionally or recklessly” and went with the $5k/incident damages.

If we assume the 7mm Illinois resident incidents number is correct, Facebook settled for about $79/incident.

On the one hand Facebook would obviously not want to pay any amount in a lawsuit over their facial recognition feature, but on the other hand the law was about 1% as impactful as it had been written to be.
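
For concreteness, the arithmetic behind those figures (a quick sketch; the 7mm incident count is the class size alleged in the suit, not a confirmed number):

    incidents = 7_000_000      # alleged Illinois incidents
    per_incident = 5_000       # BIPA statutory damages, intentional/reckless tier

    headline = incidents * per_incident       # the $35bn from earlier coverage
    settlement = 550_000_000

    print(f"headline exposure: ${headline / 1e9:.0f}B")            # $35B
    print(f"per incident: ${settlement / incidents:.0f}")          # ~$79
    print(f"share of statutory max: {settlement / headline:.1%}")  # ~1.6%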


Even many minor laws include a short jail sentence. Imagine if the penalty were that Facebook had to fully shut down everything for two months.

In comparison, a fine that you'll make up in no time is not a punishment at all, just a fee for the privilege of being allowed to ignore the law.

This fine, for Facebook, is comparable to a speeding ticket for a minimum wage worker. Is the crime on the level of speeding?


I'd argue it is. Nobody's claiming that Facebook has caused direct or serious harm here; they've just broken the rules that we put in place to ensure people who do want to do nasty things with your data can't do it. That's very analogous to speeding, which by itself harms nobody but is part of a regulatory system to minimize car accidents.


People get fined disproportionately to the actual cost of whatever they were doing, regularly, as a disincentive. Quite often it's a significant portion of their paycheck. We consider this normal. Why should it be different for companies?


There's a principle often used of fining triple the financial gain.

Fines for individuals tend to be tied to the crime committed, not to the person's paycheck. Why should it be different for companies?


Some believe that some fines should be a proportion of income, even for individuals.

Fixed financial penalties are inherently regressive; they disproportionately punish the poor and provide no disincentive to the sufficiently rich.

I believe this has been implemented for speeding fines in some countries.


Unfortunately, that's how the justice system works; it's basic game theory. Because we can't, or don't want to, give an infinite budget to justice, we need to make sure corporations and citizens abide by the law while putting the burden of verifying it on them. If we had the time and money to verify every Facebook project, sure, we could have fines proportionate to whatever profit they got from each project. The reality is we don't, so we need to make sure Facebook is self-policing.


What if Facebook paid an independent investigator (selected by the DoJ) to determine the profits generated by a project?


Companies are fined in lieu of having their liberty constrained because it's not practical to do the latter.

If you as an individual perform an action that is punished by 6 months' jail time then all of your activities are curtailed. You're not just prevented from like, picking up a kitchen knife, you're locked in a box.


Maybe this is precedent?


Your move, Clearview AI.

They've been sued in IL as well under the same statute, and Clearview's defense that they aren't a private entity because they "only" do business with public entities (e.g., law enforcement) is definitely not going to fly here. This settlement shows how seriously the state of Illinois takes non-consensual biometric information gathering, especially facial recognition.


If I put information about myself out there on a public site, I still have a right to say that people can't download and use that information?


This is a reductive argument and does not hold up to either social or legal norms.

Making data publicly accessible does not grant users of it carte blanche to use it however they wish. The data may be copyrighted: Viewing it does not grant the viewer rights to republish it or claim it as their own work. An image may contain a likeness: Viewing it does not grant the viewer rights to claim endorsement by that individual, or to use their likeness in ways of which the individual does not approve.

Beyond copyright and likeness rights, a likeness contains biometric information. Having access to that likeness, at least by the laws of the State of Illinois, does not grant the user rights to biometrics derived from that image.


> Viewing it does not grant the viewer rights to republish it or claim it as their own work.

But that's not the question here, right?

The question is whether I can use my high school yearbook pictures to look for information about my classmates.


To be fair here, it kind of is the question, right?

I mean, say you and I are at a neighborhood kid's birthday party with our own kids. Suppose further that you take some pictures, and then post them publicly. Maybe to your FB?

Well, I never really gave you permission to use my image on your FB. Let alone publicly. I know that I certainly would never consent to publishing images of my children publicly. Further, I suspect the birthday party's host wouldn't give you permission to post his/her family's images on FB either.

So here we are, with all of these people on your public FB. None having signed or even given verbal authorization for any sort of release. And now FB starts running FaceRec on all the information you just gave them. To top it all off, my family and I don't even use FB. Never have. So it really can't even be claimed that we consented to facial recognition via the terms and conditions of service.

I outlined all of that to make this point: just because a person's image is in public doesn't mean that the person consented to the image being public. Especially images taken in private spaces (like birthday parties at someone's house).


> So here we are, with all of these people on your public FB. None having signed or even given verbal authorization for any sort of release. And now FB starts running FaceRec on all the information you just gave them. To top it all off, my family and I don't even use FB. Never have. So it really can't even be claimed that we consented to facial recognition via the terms and conditions of service.

> I outlined all of that to make this point: just because a person's image is in public doesn't mean that the person consented to the image being public.

But what part of this would, even hypothetically, give you a cause of action against Facebook? What's Facebook supposed to have done wrong? You have a cause of action against your friend who illegitimately provided your photos to Facebook, not against Facebook who relied on the legal assurance your friend provided that he had the right to post those photos.


They ran the facial recognition software without knowing whether they had every subject's consent. Ignorance isn't an excuse for breaking the law; the onus is on FB to make sure all people have consented. I would not say they have done all they can to check by just putting it in the T&Cs that it's the user's responsibility. They are hiding behind their T&Cs, knowing full well that they could be scanning people they have no consent from.

Ironically, once they run the scans, they should be able to be certain they don't have consent whenever a face doesn't match one of their users, and should throw that data away.


I think you are expecting too much from FB here.

FB offers a service that allows you to share your pictures with your friends, among other things. Before you upload a picture, Facebook asks you if you have all of the rights to the pictures you are uploading (albeit in the fine print of their TOS). FB offers ancillary services based on those images you shared. If you are uploading images you don't have the rights to, how is this FB's problem?


It doesn't mean they did their due diligence to confirm their data was clean of non-consenting individuals. They are clearly abusing their users' ignorance. The signed T&Cs of one person shouldn't be enough when you're dealing with the PII of multiple people, in my opinion. You need everyone's explicit consent.

There is a disparity here between the real world and software services, I think. When my real estate agent's software asked me to put in my roommate's details, I just had to check a checkbox that said he consented. Yet when it came to the paper contracts, we all had to sign individually. We need the latter for software, or else PII is going to keep spilling all over the place and we'll forget we ever had privacy.

I don't think it's unfair to scrutinize a company in such a powerful position as FB, either. They know they have non-consenting people in their photo database, it's just inevitable, and they are hiding behind their T&Cs hoping that's enough in a court of law. Ethically, they've failed already, and clearly in some states they're breaking the law too.


I find this type of logic inconsistent with the innovative culture of tech, and a fair bit hypocritical. On one hand, we have a new, fairly unprecedented technology (at least unprecedented at the scale at which it's used), yet we are relying on tort law that predates the contemplation of anything of this magnitude. Why?

It was reasonable in 1990 to expect that the mechanism you suggest here would be highly effective in most cases. We are in uncharted waters now, and unlike the Titanic, I think we should proceed with caution.


I think I can characterize your position here with a bit of unfairness as "the innovative culture of tech means tech needs to stop being innovative".

I'd like to hear your comments on why in particular that's an unfair characterization, if you're willing.


Wouldn't it be similar to a copyright claim, as with YouTube (or Napster, or LimeWire, or MegaUpload, or...)? If they don't legally have the right to share the media, the default action is to remove it without even investigating the claim (because of the scale Facebook and YouTube operate at).

Assuming Facebook cannot legally host the media, and they choose to do so after being informed of its contention, they are choosing to say they are legally in the clear (or that they don't care about the law, which seems a lot more likely).

IANAL, but it seems simple enough on the surface. Of course Facebook will claim otherwise; without being required to make it easy to report contentious media, very few people will actually do so; and sharing a photo without permission is not the same as copyright infringement. I still think the same basic logic applies, though.


Case law already exists on the right of people who appear in photos to stop the photographer from publishing the photos; the issue is not original to the internet at all.

If I recall correctly, there is no such right, because it would effectively eliminate photography.


This only applies in a public setting though (or anywhere where privacy is not to be expected). So any photos taken of friends/strangers in the privacy of one's home (or car, or work, or...) would be exempt, as I understand it.


There is no right if the photograph was taken in public, in private it's a much different story.


Intent matters in law. Facebook knows a substantial portion of their users will not acquire permission. In fact, their UX at every turn is a honeypot, so they doubly know.


Well, if the birthday party were in, say, Illinois, it's obvious that FB is in violation of Illinois law with respect to facial recognition.

Further, it being a child's birthday party at a private residence, the photographer's action becomes a tort in the vast majority of states. If you were a big enough pain in FB's butt, you could use that to stop distribution of the images, at a minimum.

It's true that the easiest method of guaranteeing ironclad, legally enforceable privacy with respect to this facial recognition stuff is to live in Illinois. But that doesn't mean the situation is hopeless for people who live in other states. It's just a matter of whether or not you want to do the legwork of making the claims, which, depending on your state, may have to be based on anything from a privacy violation, as in Illinois, to contributory copyright infringement if you took the photo and forwarded it to the friend who then posted it.

So it would depend on both the situation and the state and municipality in which you reside. I can't give you one catchall, because there isn't one; these laws are uneven across the country and very much in flux right now.


Data protection principles, which date from about 1995 and have recently been reinforced by the GDPR, make it quite clear what the problem is:

- is the data personal? Well, the point of running facial recognition is to identify the person, so yes.

- does the organisation processing the data have the consent of the person? No.

- is any of the other justifiable or necessary grounds for processing the data present? No.

So it's a data protection infringement.
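
To put that same checklist in code form, here is a minimal sketch of the gate an organisation would have to pass before processing; all names are hypothetical, not any real compliance API:

    from dataclasses import dataclass

    # GDPR Article 6 lawful bases, simplified. Consent is only one of them,
    # but it's the relevant one for a bystander in someone else's photo.
    LAWFUL_BASES = {"consent", "contract", "legal_obligation",
                    "vital_interests", "public_task", "legitimate_interests"}

    @dataclass
    class Photo:
        subject_id: str       # an identifiable person => personal data
        consented_uses: set   # purposes the subject actually agreed to

    def may_run_face_recognition(photo: Photo, basis: str) -> bool:
        """Check for a valid lawful basis *before* computing anything.

        Facial recognition identifies a person, so its output is personal
        (indeed biometric) data; the basis must exist up front.
        """
        if basis not in LAWFUL_BASES:
            return False
        if basis == "consent":
            return "face_recognition" in photo.consented_uses
        # The remaining bases rarely cover biometric profiling of
        # bystanders, which is the checklist's third "No".
        return False

    # A photo uploaded by a friend, where the subject never consented:
    bystander = Photo(subject_id="neighbor", consented_uses=set())
    print(may_run_face_recognition(bystander, "consent"))  # False => infringement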


But beware: if it concerns stuff that is already on FB, or on partner sites with similar ToS, one has signed away one's rights. IANAL, but this is from the FB Terms:

>> Specifically, when you share, post or upload content that is covered by intellectual property rights on or in connection with our Products, you grant us a non-exclusive, transferable, sub-licensable, royalty-free and worldwide licence to host, use, distribute, modify, run, copy, publicly perform or display, translate and create derivative works of your content (consistent with your privacy and application settings). This means, for example, that if you share a photo on Facebook, you give us permission to store, copy and share it with others. [..]

>> When you delete content, it's no longer visible to other users; however, it may continue to exist elsewhere on our systems where:

>> - your content has been used by others in accordance with this licence and they have not deleted it (in which case, this licence will continue to apply until that content is deleted);

The Privacy Policy states that you are responsible for whom you share your content with, after which the above licence applies.

https://m.facebook.com/legal/terms/update

Edit: misread parent, adapted accordingly


How do the FB Terms apply to a person who was photographed but didn't upload the photos to Facebook? None of this applies to them.


Thx. I misread and already adapted the comment. Can't delete anymore, so I deserve the downvotes.


I’m not worried about photos of myself that I upload, but am somewhat worried about photos of me that other people took. For example, there exist photos of me hanging out with my friends who are wearing kink gear, and some of those photos have been posted online. Everyone involved is having a good time and everything’s legal. Some of my coworkers might find the photos offensive, but those people aren’t browsing through kink photo galleries, so they aren’t going to see them. But I don’t want these photos to come up if you search for my name online. I also don’t want to engage in some censorship war trying to get them taken down.

Facial recognition tech, used in this way, seems to throw a bomb into a fragile peace where people can coexist with each other despite having incompatible values.


Well, they're certainly feeling empowered against not only Clearview AI, but IBM as well:

https://www.biometricupdate.com/202001/biometric-privacy-sui...


Wow, I'm glad that the rest of the country isn't following the Illinois example. I remember when Google Arts & Culture released the "what classic painting do you look like" feature, it was blocked in Illinois; I guess this law is why.


That's the price you set for your personal privacy? What painting you look like?


Yes. That user and lots of other users. People even took the photos and showed them on social media.

https://www.cnet.com/news/google-arts-and-cuture-photo-match...

People's opinion of what constitutes "private" varies wildly.


I get that people don't think twice about giving it away. But it's another thing for that to be the moral justification for defending the merits of facial recognition.

Pardon my take, but:

A large uprising is starting to swell in the name of privacy, but the swing voter is concerned about what is essentially the visual equivalent of the "Which Disney Princess are you?" quiz.


> A large uprising is starting to swell in the name of privacy

I think it's more of a tide that comes and goes. Wait till there's another Pokémon Go.


The moral framework that concludes that people need to be coddled against making the wrong decision about sharing their face data for stuff like this is a bit condescending. Why do we assume people haven't considered the risks and decided they think either the odds of worst case scenarios are low or they aren't bothered by them?

It feels a little bit like the gun control debate. Taking away people's freedoms because of the possible worst case outcome can result in a worse overall situation.

If people aren't bothered by how facial recognition is used, stepping in and asserting we know better and they need a legal protection seems premature and overreaching.


No, because I don't see my face as a private thing. Nor is the exact distance between my pupils, or the profile of my nose, or any other information that can be gleaned from a photograph of my face.


It's not that it's a private thing, it's that it's yours to control.

There aren't technical limitations to the use (just as there aren't technical limitations to copying and freely distributing all digital media), so we either institute legal limitations or accept that it's allowed. There are many negative aspects of allowing it, so we should probably make sure we take the time to look at the repercussions of both stances carefully (that is, more carefully than "I don't have a problem therefore allow it").


Of course my face is mine to control. I can wear makeup, grow out or shave my hair, tattoo the whole thing if I care to.

Images of my face created by other people may be a different story entirely. Does Donald Trump or Boris Johnson own every photo of them on the AP newswire?


These arguments are way more nuanced with new tech. What about your fingerprint? Your signature? The cadence of every syllable you say when you call tech support or order takeout? Anyone could do any of these things over a single weekend. Tomorrow, Google could enable some feature on Android to recognize your face or voice on any phone in the world.


This is an interesting opinion in the time of deepfakes.


We'll need a better answer to deepfakes than "You cannot do anything with an image of another person's face."

Unless we want to be governed by faceless legislators or entertained by faceless celebrities, there should be some way to divide responsibility for a person's images that isn't totalitarian in either direction (either "You cannot use a person's face without their consent," which kills visual news as a practice, or "every face is fair game for anyone to use at any time," which feels invasive to the individual).


I agree; in retrospect my comment was a bit facile. In the long run I don't think anyone will have any rights with respect to images of her or his own (clothed or naked) body. We're not in the long run yet. It will take some time for most of us to be comfortable with that.

In my view, it's more important to create a sort of symmetry with respect to images and other data than it is to preserve particular customs that are problematic in light of modern technology. That is, it's probably OK that every e.g. FBI agent has access to thousands of images of me, or even my complete genome, as long as I have access to thousands of images and the complete genome of every FBI agent. We learned in kindergarten that "knowledge is power". Like power, knowledge is not symmetrical. A federal prosecutor having power over me doesn't necessarily mean I have power over that federal prosecutor.

If we've learned anything in the decade just past (to be clear, that's an open question), it is that authoritarian structures are easily hacked by the authoritarians who run them. It may be that e.g. the Department of Justice wasn't always constructed to capriciously surveil and/or construct false cases against innocents (although, was that before or after they harassed MLK and other civil rights leaders?), but ISTM at least the cases of Aaron Swartz and Carter Page, to cover both ends of the political spectrum, show that to be the case now. If the tools of knowledge/power are increasing in power, we all need to have access to those tools.


I'm afraid I fail to see what Swartz has to do with this topic.


> If I put information about myself out there on a public site, I still have a right to say that people can't download and use that information?

It's my understanding that scraping is legal, but as the OP states, you could be fined over half a billion dollars if you use that information. What part of the law/article was unclear?


Technically it's only illegal to use the data because the subjects of the data never gave you express written permission to do so, and in Illinois you have to get that permission before doing any kind of facial recognition.

You don't need that in most other states.


>Technically it's only illegal to use the data because the subjects of the data never gave you express written permission

Yes, many laws require consent. I'd caution against treating people's permission as a problem to be routed around.


> I'd caution against treating people's permission as a problem to be routed around.

This has been Facebook's general stance since it was established.


And like many users, I deleted mine.


In many jurisdictions, yes. In most European countries people are allowed to download publicly available information about others, but they can only use it for private purposes; they cannot use it commercially or distribute it without consent.

I find the notion slightly strange that the internet as a public space is some sort of voyeuristic free-for-all where one forfeits every right. If this were not stopped, it would render the entire public internet hostile and borderline unusable, which would be a shame for a medium with so much potential.


My view is that, whether or not it should matter in principle, collecting a database of such information is potentially dangerous enough to society to require regulation, particularly when it is most dangerous in the hands of the very same government you would ask to regulate itself.

Some powers need to be checked.


Yes, they're called personality rights and are protected in some jurisdictions. A Frenchman who was taking a leak in his own front yard when the Google Street View car passed and photographed him sued Google to erase his image.


If I go out for a walk in the park, I still have a right to say that people can’t murder me and sell my organs?

???

As a (supposedly democratic) society we (supposedly) decide what is acceptable and what is not. These decisions are, in the eyes of the universe, quite arbitrary. You may be discovering that now, but some of us have known it for a while.


How about if someone else puts you out there against your wishes? What then?


While it's very sad to hear this staggering sum called a 'rounding error' and a blip on Facebook's radar, I'm still happy about this news. It shows that fighting back legally is possible so the day we all have to wear recognition-scrambling patches is not as nigh as it could be. Just wish other places would amp up their privacy laws so Facebook would get more than occasional slaps on the wrist.


It's not a blip. It's part of a pattern, including GDPR fines, that has a certain weight. At the end of the day, any fine hurts shareholders at least a little. But even if fines get so high that profit is decimated and the stock plunges, as long as incurring fines is more profitable than other business tactics, and there is enough free cash flow to make liquidation unprofitable to private equity and corporate raiders, Facebook will continue operating as it does.

Same as for any business.



That's good, and I had it turned off already, but one ongoing issue is that I have to crawl through all the various settings every couple of months, since there's always some new privacy-invading feature, and it's obviously turned on by default.


How would that work for people who do not have a FB account?


I don't think someone without an account can be tagged in a photo. More importantly, the recognition engine will not find them, afaik.


Isn't Facebook known for building shadow profiles on non-users?


Friends can likely tag you in their photos, even if you don't have an account.


> David Wehner, Facebook’s chief financial officer, noted in an earnings call with investors that the settlement added to the social network’s rising general and administrative costs, which increased 87 percent from a year ago.

So he literally just framed this as a 'cost of doing business'?


There are 4 broad buckets of costs for most companies: G&A (general and administrative), S&M (sales and marketing), R&D, and cost of revenue.

The cost has to be classified into one of these buckets, and so the CFO was probably just clarifying into which bucket they lumped it.

That's part of the point of standardized accounting: everyone gets a similar basis on which to compare companies.


Oh I see, thanks!


> The cost has to be classified into one of these buckets

Either PR was not consulted or they don’t understand how this comes off.


I'm no fan of Facebook, but here it is in context:

> Total expenses were $12.2 billion in Q4, up 34%.

> Cost of revenue increased 25% and the growth was driven primarily by depreciation related to our infrastructure spend.

> R&D grew 36% and was driven primarily by increased investments in core product as well as our innovation efforts, particularly in AR/VR.

> Marketing and Sales grew 23% and was driven primarily by consumer and growth marketing.

> Finally, G&A grew 87%, largely driven by higher legal fees & settlements. This includes charges related to a $550M settlement in principle we reached this month in connection with the Illinois Biometric Information Privacy Act litigation.

This is standard for earnings calls. The point is for the CFO to add detail to the generally accepted accounting principles (GAAP) financial numbers so that investors understand why they changed. As @joez mentioned, companies have to report financial numbers according to GAAP, which mandates that companies disaggregate costs into specific areas. Both good and bad companies face lawsuits all the time, and the legal fees and settlements will show up in G&A. If G&A expenses change in an unexpected way, it is expected that the company will communicate why.

There's definitely a lot that goes into crafting these calls, but having listened to and read more than a thousand of them, I don't think there was anything poorly done in this specific example.


It's an earnings call. He put it in the terms his audience expected. Nobody made any deal out of it except one HN commenter, so I think it's okay.


As long as the nature of punishment is a fine, it can be modeled and a cost-benefit analysis can be performed. If the forecasted economic benefit exceeds the cost, then the action should be taken, or so the theory goes.

This happens in all industries. Part of the reason many legal settlements lead to a massive jump in stock price is that companies reserve funds for expected litigation. If FB had modeled the settlement cost at $1B and it turns out to be $550M, that's a good result.
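
As a toy illustration of that cost-benefit framing (every number below is made up for the example, none are from the actual case):

    # Expected-value model of "fine as a cost of doing business".
    benefit = 900e6      # projected profit from the questionable feature
    fine = 550e6         # fine if caught and the judgment sticks
    p_caught = 0.4       # estimated probability of enforcement

    expected_cost = p_caught * fine            # 220M
    expected_value = benefit - expected_cost   # 680M > 0 => "do it"

    print(f"expected cost:  ${expected_cost / 1e6:.0f}M")
    print(f"expected value: ${expected_value / 1e6:.0f}M")
    # Unless p_caught * fine exceeds the benefit, a rational amoral actor
    # treats the fine as a fee, which is the argument upthread for
    # disproportionate fines when enforcement is patchy.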


> As long as the nature of punishment is a fine, it can be modeled and a cost-benefit analysis can be performed. If the forecasted economic benefit exceeds the cost, then the action should be taken, or so the theory goes.

The theory has limits though. If, in discovery, prosecutors find out that the company intentionally did something illegal because it believed that the cost of the fine was less than the benefit of the illegal action, then the fine is usually much higher and the specific executives involved often face much stiffer penalties.


There is a difference between an intentional plot to commit one specific illegal act or tort, and recognizing that torts and misbehavior will happen in any large enough organization and budgeting/insuring for it. This happens whether or not it's believed paying damages is cheaper than fixing the problems.


Agreed. I didn't say the theory was invalid, just that it has limits!


True! Sorry about that.


Do you know of any examples when that has actually happened? Genuinely curious, because I can't think of any.


I think a favorable reading is that he described the category. Facebook tracked that fine under "general and administrative".


Every large company is juggling multiple lawsuits at any given time. So, yes.


> So he literally just framed this as a 'cost of doing business'?

How would you prefer he frame it?

Maybe I'm just jaded or cynical now.


7% of yearly profit doesn't sound like a rounding error to me.


Last year FB's profit was $22 billion, so that's about 2.5%, less than 3%.


Biometrics of the face, body, or eye are mostly simple algorithmic transformations of photographic images. I find it hard to believe that "biometric data gathering" from Illinois residents can carry massive civil liability without affecting all public photography and publishing thereof.

There's also a catch-22: if an entity is pursuing legal collection outside of Illinois (for example, of the Flickr dataset from the IBM case), it has no way of knowingly excluding Illinois residents without identifying them, which would require an (illegal) facial recognition dataset in the first place.
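
To make "simple algorithmic transformation" concrete: the biometric template at issue is typically just a fixed-length vector computed from the pixels. A minimal sketch using the open-source face_recognition library (assuming it's installed; the filenames are hypothetical):

    import face_recognition  # wrapper around dlib's face embedding model

    # Derive the "biometric" from each photo: a 128-dimensional vector
    # summarizing facial geometry, distinct from the photograph itself.
    img_a = face_recognition.load_image_file("party_photo.jpg")
    img_b = face_recognition.load_image_file("yearbook_photo.jpg")

    enc_a = face_recognition.face_encodings(img_a)[0]  # assumes one face per photo
    enc_b = face_recognition.face_encodings(img_b)[0]

    # Identification is then just a distance threshold on the two vectors.
    distance = face_recognition.face_distance([enc_a], enc_b)[0]
    print("same person?", distance < 0.6)  # 0.6 is the library's default tolerance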



To put it in perspective, that's like paying a $10k fine per FB employee in a single year.


And to put that into perspective, their total spending is over $1M a year per employee.


Neither of these helped put anything into perspective for me. I don't know how many FB employees there are. Even if I did, I don't know how much they're being paid. And even if I knew that, I don't know how much FB makes per year.


Discussed earlier: https://news.ycombinator.com/item?id=22186790

In any event, the main point was that the fine is 1/100 of yearly spending.
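
Filling in the base numbers the sibling comment was missing (roughly 45,000 employees and roughly $46B in total 2019 expenses; both are approximations):

    fine = 550e6
    employees = 45_000      # approximate 2019 headcount
    spending = 46e9         # approximate 2019 total expenses

    print(f"${fine / employees:,.0f} fine per employee")               # ~$12,000
    print(f"${spending / employees / 1e6:.1f}M spending per employee") # ~$1.0M
    print(f"fine as share of yearly spending: {fine / spending:.1%}")  # ~1.2%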


FB: Let's face it, we're here to make money. It was cost-effective to settle and move on.


Did you expect them to fall on their sword and commit seppuku? Obviously they took the most cost-effective route. Why try to snidely vilify them for acting within the law in paying for their illegal acts? Vilify them for the illegal and unethical acts; settling a lawsuit is neither of those, and is a core part of our legal process.


Parent comment contained no snide vilification.


Tone gets lost in words; I wasn't meaning to vilify.

I'd love to see Facebook actually fight the case to the end, but nope, the numbers say settle and move on.



