Google/Fitbit will monetise health data and harm consumers (voxeu.org)
325 points by hhs on Sept 30, 2020 | 151 comments



The argument they're making here is a little subtle, but it tracks.

They're not claiming Google will hand out a user's biometrics to insurance companies or other parties. They're claiming Google will offer biometrically-based signal as an ad discrimination channel.

At that point, they don't have to hand out the data to allow insurance companies to discriminate; insurance companies could target ads for Cadillac insurance programs at users with XYZ biometrics, so those ads only run for users whose data shows they're low health risks. So it's soft market discrimination: only the extra-healthy get explicitly told about new insurance plan A, and that self-selected group signs up, possibly making A cheaper for the insurer to implement even if they don't do any additional screening.

I think the theory has holes, but it's not nonsense. And it's unclear to me that it demands government regulation (but my bias is American-style law; I don't know how European legal eyes see the story I just laid out).


This is called risk selection, and is already a common practice [1] that doesn't require much in the way of "health data" [2]. I suspect that 80% of the effect can be achieved via "simple" methods like advertising on health and fitness related sites or keywords, partnering with gyms to advertise to members, etc. ACA has some provisions to help address these incentives [3], though it's unclear how effective they are.

The Privacy / Big Tech angle is attention grabbing, but is grossly missing the point. In my opinion, the reality is we need to decide if this type of discrimination is something we as a society want to regulate or not.

[1] https://www.nber.org/papers/w19198 [2] https://www.federalreserve.gov/econresdata/feds/2015/files/2... [3] https://www.kff.org/health-reform/issue-brief/explaining-hea...


I’m not sure it’s missing the point though. I think the specificity and scale that big tech brings can change the impact of these practices significantly.

Some things are not a big risk to society until they get amplified.


Agreed, that's possible, but from what I've seen, big data and predictive ML have for the most part been incremental (at best) over more naive methods.


Yup, yup. The real efficiencies in insurance come from data, not models. This is mostly true because the models which can be used are massively regulated, and must be explained to regulators (which sounds like a fun job).


> argument they're making here is a little subtle, but it tracks. [ ... ] They're claiming Google will offer biometrically-based signal as an ad discrimination channel

It's not impossible that that would happen, but it would be a departure from current behavior. To some degree, Google is selective about what data does and doesn't feed into ad targeting.

Examples:

(1) Emails are sensitive, and Gmail messages are off limits: "Google does not use keywords or messages in your inbox to show you ads. Nobody reads your email in order to show you ads." (https://safety.google/privacy/ads-and-data/)

(2) Drive/Docs/Sheets/Slides (if I'm interpreting this right): "We will not use your content for marketing or promotional campaigns." (https://www.google.com/drive/terms-of-service/)

And they've already stated they don't use health data: "We do not show ads based on sensitive information or interests, such as those based on race, religion, sexual orientation, health, or sensitive financial categories." (https://support.google.com/admanager/answer/140378)

It is very reasonable to ask how your data might be used or misused. But the scenario here supposes that Google will change its general pattern of behavior and go against what it publicly said.


I was shown an Autocad banner ad in a free-to-play game an hour after I got an email with a link to a YouTube video about using Autocad to design a working model-size airplane.

Note I hadn't clicked the link yet; I'd simply read the short paragraph my friend wrote along with the YT link: "hey check this guy out, he is using Autocad for blah blah".

That was the day I stopped using Gmail.


Doesn't it seem more likely that you - a person who is interested enough in Autocad that people email you videos about it - searched for something related to Autocad?


That is what made it so strange and surreal; I have never before or since searched for anything related to Autocad.

My friend and I share an interest in model aircraft and drones and often spoke in person about building a new drone/airplane from scratch as a fun project, and this was the first time Autocad had even been mentioned in our discussions. Another data point: we are both old school and would talk about this over beers in the pub rather than through messaging apps.

The only thing linking me to Autocad was that single email I got in Gmail.


Did your friend search for Autocad, and are you linked by any social networks? I'm suggesting your experience may be explained by other avenues of data convergence, even though on the surface it appeared to be sourced directly from that email.


Hah. I got a home screen notification to use Google's travel product minutes after I got a hotel booking confirmation email.

I'm sure you can come up with reasons this is not breaking T&C but it would have never happened if the two products were in separate companies.

And I'm sure they will "not use" the health data in a similar fashion.


I mean, if you have Google Travel installed, then its integration with Gmail is a feature.

Additionally, I wouldn't find it weird for Calendar or Maps to make suggestions based on Gmail. That is the value proposition of adopting the Google ecosystem.


Sorry, I think this is what Google says it is doing, not what it is actually doing. Google uses every bit of our data to do whatever benefits them, their investors, and the military-industrial complex.


It's not true, but of course, since they're fundamentally a closed institution, there's no way to independently audit the claims that it's not true.


(For the benefit of the doubt, I'm choosing my wording very carefully here): I'm not convinced that Google works in this way, which could be a signal that their systems are so vast even they aren't aware of how users' data moves through it and gets used.

Of course, you could just as easily conclude that they're lying, but I find that harder to believe.

Everyone has some stories about how Google drew a suspicious correlation between products not supposed to be connected. In my case, there was one day when I wanted to watch an MP4 copy of Harry Potter I had on my NAS while on a work trip. Obtained ages ago. I uploaded it to my G Suite Business Google Drive (not shared; nothing was pirated here, despite ripping Blu-rays still being a legal-ish zone). Later that day, YouTube started suggesting clips from Harry Potter to me.

Suspicious. Not damning. Are they scanning my Google Drive for filenames to feed to the YouTube recommendation engine? I don't know. Maybe it was just chance, or maybe a few days ago I clicked on a random article about JK Rowling's latest drama and that was enough to trigger three layers of indirection through the Google Brain, and suddenly I love Harry Potter. (I don't. It's fine. I'm sure after writing this out, though, Google's opinion of my opinion of it will be reinforced.)

But let's say I never watched those videos recommended to me. Is the YouTube recommendation engine data fed to their ad engine? What if I never click on something, but the system knows it presented it, and maybe I like Harry Potter anyway? I don't know. But it seems feasible, doesn't it? And if that's the case, then the data traveled two steps, where each step on its own is weird but reasonably OK, while the two together look less OK.

We're really talking about the absolute cutting edge here; Google is building a brain processing exabytes of data. If you're in engineering: have you ever worked in a job where you felt like you knew how every part of the system worked? I work at a company with 10 people and I still don't feel that. Why would Google be any different? Teams talk to other teams. A middle manager two jumps up approves it. They get access to a database. Data travels around. An engineering lead at the top thinks they have a grasp on everything, a grasp spuriously assembled from reports written by people who themselves understand some non-100 percent of what's going on in their silo. Things fall through the cracks; it happens all the time. And in this case we're talking about volumes of data so big that no one could even look at it all to verify what's in there is what they think is in there (and if they did actually have humans look at it, that'd raise concerns of its own).

This is my pet theory: that the desires and realities of Google are so different that even what they say is untrustworthy, because no one there knows what's actually going on anymore.


The internal truth is that these policies are policies, but they're maintained by human beings. Human error can occur resulting in the mixing of data. In general, that would be an error and corrected as soon as it's discovered. It's also hard for those errors to occur (data is siloed, and the process of correlating between silos is designed so that someone has to eventually submit an auditable document that outlines what data they want to correlate, why they need to, and why that's the right solution in the space of alternatives).

Regarding the spooky coincidence effect: the truth is that people are a lot more predictable than they assume they are. More often than not, when you're doing a thing related to X and then 10 minutes later you see an X ad, it's because you're actually in the bucket of people who like X and the buckets are highly correlative. You might be in that bucket because you like Y and 95% of people who like Y like X, and a search for Y, followed by doing something related to X, triggered the X ad because of the Y search. There's also huge confirmation bias on those coincidence ad views; everyone has a story of one, but I've yet to meet anyone who said "I sat down and recorded every ad I was shown for a week. 90% of them correlated with an activity I was doing in an unrelated Google product between 5 and 10 minutes before the ad was shown."


> the desires and realities of Google are so different that even what they say is untrustworthy, because no one there knows what's actually going on anymore.

I can absolutely see this happening: at their sheer size they lose control, since everything is probably siloed and confusing.

Due to a tax issue last year, I urgently needed a specific form from a former GIANT company I worked for, per my accountant.

It took over six days of calling, emailing, and getting transferred around the bureaucracy, because they had recently outsourced that HR function to another company and moved all the data, and the Director of HR for that entire 1000-person department couldn't even get me the form and didn't know what to do! She and I had to put in a ticket together to the help desk, and eventually someone at the outsourced help desk escalated it enough that we found out the physical forms were actually in transit to the East Bay offsite location and had not been digitized yet, which is why they could not provide the form without a 14-business-day turnaround.

It was the most absurd and frustrating experience, and it definitely made me happy I'd moved to a more midsized org. We might not be an industry titan, but at least we are small enough to be functional and not get entirely lost in bureaucracy.


This argument rests on an undesirable business practice: essentially, we don't want insurance companies to discriminate on the health of potential buyers, so let's stop Google from acquiring that data. But if that's what we want, it makes more sense to ban the thing we don't want to happen. Namely, don't allow insurance companies to discriminate on the health of buyers.


Health insurance only risk-pools as a side effect of its ignorance. In a high-information world, it risk-smooths but does not risk-pool.

It's completely bonkers that we insist on doing it this way.


There are plenty of cases where plausible deniability allows bad actors to continue to exploit loopholes.

In this case, the health insurance industry could use statistical tricks to counter the allegation that they are discriminating at a significant level.


There's also the good old argument that your argument has been used with respect to Google for decades, and it has yielded the exact sort of outcomes this paper is warning against.


Can you give an example?


What I don't get is this: the purpose of insurance is to average out risks in life. The more insurance is 'tailored' and personalised, the more it defeats itself.

If an insurer could see the future, he would give me a policy that costs (cost of my future accidents + X% profit)/(my lifetime)

Essentially turning insurance into a bad savings account with a negative interest rate.


This is what insurers already do. Using whatever information they can get on all their anticipated customers, they make it so the expected value of your costs is less than the expected value of all your payments. Going purely by expected value, insurance is a losing game for the customer.

The reasons it's a great idea are (1) you don't get to borrow from your future savings account's balance, and (2) insurers often encourage not-totally-rational customers to practice better habits to reduce costs. The first point is what customers want: avoidance of bankruptcy.


That's not quite true. A pool has less variability than an individual.

My expected cost of future accidents might be N, with variance M >> N.

The insurance pool can absorb it if I have a catastrophic outcome, but I can't.

So it may be worth buying an insurance contract with a negative expected value to hedge risks.
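
To make that concrete, here's a toy simulation (Python; the probabilities and loss amounts are invented for illustration) comparing the year-to-year cost variability one person faces against the per-member cost of a pool that shares losses:

    import numpy as np

    rng = np.random.default_rng(0)
    n_members, n_years = 10_000, 1_000

    # Annual loss per person: usually nothing, sometimes minor, rarely catastrophic.
    # These probabilities and amounts are made up for illustration.
    losses = rng.choice([0, 500, 100_000], p=[0.90, 0.09, 0.01],
                        size=(n_years, n_members))

    individual = losses[:, 0]     # one person's losses, year by year
    pooled = losses.mean(axis=1)  # per-member cost when the pool shares losses

    print("mean:  individual = %8.0f  pooled = %8.0f" % (individual.mean(), pooled.mean()))
    print("stdev: individual = %8.0f  pooled = %8.0f" % (individual.std(), pooled.std()))

The means come out the same, but the per-member standard deviation collapses, which is why a contract with a slightly negative expected value can still be rational to buy.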


> If an insurer could see the future, he would give me a policy that costs (cost of my future accidents + X% profit)/(my lifetime)

But what happens when (cost of my future accidents) is very high? Are you as an individual just uninsurable? Is that the best outcome?

Distribution of cumulative healthcare costs maybe looks something like a lognormal distribution. Most people fall into a "normal" range but then there are outliers who can cost far, far more. But even though their individual costs are high, their absolute numbers are low compared to the total pool. So everyone can pay something like a normal expected cost, plus a small fraction, plus the profit overhead, and thereby cover those outliers.
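
As a toy illustration of that shape (Python; the lognormal parameters are invented, not calibrated to real healthcare data), a small fraction of people dominates total spend, yet the pooled premium stays within reach of everyone:

    import numpy as np

    rng = np.random.default_rng(1)
    costs = rng.lognormal(mean=8.0, sigma=1.5, size=100_000)  # annual cost per person

    pooled_premium = costs.mean()   # what a single shared pool must charge everyone
    median_cost = np.median(costs)  # what the typical person actually incurs
    top1_share = np.sort(costs)[-1000:].sum() / costs.sum()

    print("median cost:    %10.0f" % median_cost)
    print("pooled premium: %10.0f" % pooled_premium)
    print("top 1%% share of total spend: %.0f%%" % (100 * top1_share))

The mean sits well above the median because of the tail, but it's still a bounded, payable number; that gap is exactly the "normal expected cost plus a small fraction" described above.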

Large employers will negotiate pooled coverage like this for their employees. That's why the 21-year-old with zero health issues pays the same as the 60-year-old on dialysis. But it's the fact that this is essentially a monopoly market on those employees that allows this. They can create a single pool because there's no alternative market. (At least functionally, because anyone going to the external market doesn't get the employer subsidizing the plan.)

Without that monopoly and subsidy, market forces come into play. For those people who have a high likelihood of lower costs, if an insurer can create a pool targeting people with those same signals, then they can offer that pool lower rates and capture business. However, that necessitates that the remaining pools become more expensive. This is a classic race to the bottom, but with the added complexity of a zero-sum game.

Your stated position is the logical end-game of this direction. But it's not the only functional solution. One could also switch instead to a single pool over the entire population -- aka, single payer.


So, what happens in less mental markets (like auto or home) is the following:

- New source of data becomes available; Insurer A starts using it.

- They price policies based on the new information, causing some customers rated as high risk to move to other insurers without access to this data.

- Insurer B realises that they are making less money (or, more realistically, losing more) and invests in the data source.

- Higher-risk customers end up paying more for insurance.

- Smaller competitor comes in and insures higher-risk customers (Quote Devil in the UK are a good example here), and offloads the risk to reinsurers. Additionally, because not all high-risk customers will claim, the smaller competitor can make money.

- New source of data becomes available; rinse and repeat.

Note that the difference between insurers having more granular data vs aggregated data is not less price discrimination; it's price discrimination based on coarser features (zip/post code, age, etc.).


Calling the home insurance market less mental to someone in Tampa Bay is laughable. The only reason I'll even be considered by insurance companies other than the public-run Citizens is because I'm not in a flood zone [0]. Most every other company won't cover Florida, because why include high-risk areas if you have a choice? Just exclude them and be done with it, right?

And that's the whole point behind my prior comment. As long as insurers can directly exclude individuals -- or indirectly exclude through price discrimination -- this is a completely sane way for them to behave. But it's not the only way to build the system, and this is definitely one of those times to ask, "Is this the best this society can do? Is this what our society actually wants?"

And I also have a feeling auto insurance is more sane in the UK because medical costs are offloaded to public healthcare. In the US they severely cap medical payouts. We're talking tens of thousands, which is laughably small here for anything serious; that will cover one ER visit. Instead, they give a big number for liability coverage of injury caused to others. So if the incident wasn't your fault, you're relying on that other person's liability coverage to make you whole, not your own insurance. Unless of course they were uninsured and you have uninsured motorist coverage. And if you're found at fault, then better hope you have some other way to cover that gap.

[0] But I still go with Citizens as long as they'll keep me. Because I'd rather pay directly into the public system than pay some for-profit company that's just going to beg for public bailouts when something big happens anyway.


Yeah, I worked in the industry for a while based in Europe, so that's the context of my comment. I deliberately avoided health insurance, both because I don't know a huge amount about it, and because it's so mental in the US.

I do agree with you in terms of the societal benefit piece, but was just trying to give some insight with respect to how the industry works now (without taking a position on how it should work).

My personal belief is that we should only allow individual level features to be considered, because otherwise you end up in a position where people living in richer areas both have more money and pay less for insurance (which is rational from an insurer perspective).

That being said, insurance doesn't really make any money, and is probably something that would be better provided by the state (especially for things like auto, which people are forced by the state to have).


> Yeah, I worked in the industry for a while based in Europe, so that's the context of my comment.

Just want to let you know that I appreciate the info. Cheers!


Insurance should pool random risks, but it's not meant to pool all risk. You know your pool of 100 people all have a 5% chance of a heart attack. You don't know who will actually have a heart attack; that risk is pooled. But you don't pool people with a 1% and a 20% chance of a heart attack together if you can avoid it.


Personally, I'm more worried about companies outside the healthcare markets getting access to health data than about insurance companies. But it seems like everyone is in agreement on that.

I'd bet money that with consistent access to sleep cycles, heart rate, blood pressure, ecg, and body temperature combined with device analytics (screen time, content watched/listened to) you could easily come up with a behavioral model that identifies users suffering from depression and anxiety.

And while it could be useful and positive (for example, they already can identify and target smokers with smoking cessation ads), I'm more afraid of malicious actors targeting people already prone to destructive behavior with feedback loops that worsen their symptoms and profit off it.

For example: User 123 is experiencing a panic attack. Send notification that Uber Eats now delivers alcohol.


Google has specifically said it _won't_ do this sort of thing (health data will not be made available for advertising, either within Google or [obviously] externally).


Joke's on them: Fitbit hurts me [1], and being disabled, insurance companies already discriminate against me [2].

On a serious note, don't insurance companies already partner with fitness-monitor makers? They claim to provide incentives for good health by offering premium discounts, but then again it could also mean higher premiums if any health issues are detected by these fitness monitors, irrespective of their accuracy.

[1]https://abishekmuthian.com/my-experience-with-fitbit-charge-...

[2]https://abishekmuthian.com/insurers-are-putting-the-lives-of...


I had tingling pain with a Fitbit and I solved it: loosen it fully at night and the tingling stops.


Sorry, that's not it for me or many others who face the same issue. As I've mentioned in the blog, I've loosened it to the max with multiple Fitbits and I always end up with pain, unless I switch off the heart rate monitor.

When the heart rate monitor is switched off, there is no pain/tingling even if I wear it tight, and this has been corroborated by many others on Fitbit/Apple Watch forums.

But since most of these cases are attributed to wearing it too tight (or the manufacturers want to project it that way), the heart rate monitor issue is not getting enough attention.


That is interesting.

I also switch from left to right. Maybe that makes a difference?

The other thing is that the two explanations are not mutually exclusive. Once you loosen it fully it moves around on your arm, which might move the harmful parts away from whatever is vulnerable.

Do we know what exactly causes the tingling? My left hand is still numb almost a week after I was first injured.


Switching hands doesn't make a difference.

Although loosening might help you place the heart rate monitor at a different spot or avoid direct contact with skin, in my (our) experience switching off HR produced definitive results immediately.

The reason why HR does this is speculative (I've covered it in my blog), and unless consumers demand strong action from the manufacturers, we're not going to find the reason.


> They're not claiming Google will hand out a user's biometrics to insurance companies or other parties. They're claiming Google will offer biometrically-based signal as an ad discrimination channel.

This cuts to the heart of the common, disingenuous, privacy bromide of "we don't sell your personal information." Of course you don't – you sell sophisticated insights about me derived from it.


Sophisticated, anonymized insights.


When the user clicks on specially targeted ads, they reveal themselves to be part of the group and are individually identifiable.


To a first approximation, yes. There isn't any authentication around following an ad link, so technically anyone could click the ad.

But practically, no; nobody's going to randomly click an ad link someone else handed them for funsies. In the common case, the ad company will be able to do conversions on the target demo (and make strong assumptions about the target demo, even if Google doesn't explicitly leak that data).


I wonder if it'd make sense to have a plugin that'd share ad links in a p2p network just to screw with these people.


That's click-fraud, and Google at least attempts to identify it (so they're not charging for false ad views).

No idea if they share the "That click was fraud" signal with their clients for this kind of display ad.


Click fraud sounds like an ominous term. But let's be real here. We aren't truly opting into all this tracking. I'd rather call it anonymous clicks. As for Google trying to resist it, I strongly suspect there isn't a network actively trying to give privacy back to the people.

On a related tangent, CloudFlare creating a Google Analytics tracker could help lessen Google's tracking. But this battle should've been waged about 10 years ago. It's a shame that Hydra has many formidable heads.


What about AdNauseam? Same basic premise.


I think that calling ad-sharing "click-fraud" is a little ridiculous. I can understand why someone would call an extension like AdNauseam click-fraud, but this would presumably be people legitimately clicking on the ad.


I'm using the industry technical term, in the sense that the algorithms that could protect an advertiser from screwing up their metrics already exist to protect Google from screwing up the payment model.


Don't forget that Google allows some ad buyers to run their own JS inside the app. They could embed additional tracking and would get the hit without the user even clicking.


Not in a way that reveals demographics; that JS is sandboxed and can't generally figure out why the ad was run.

There's room for abuse, but Google will slap wrists of advertisers who try it.


You don't need to figure out why the ad was run if you target the ad to people who match the demographic you want to identify. It runs because it fits the match.

I don't know whether Google will slap any wrists. It went so far that their ads hijacked the browser and redirected to scam sites. I've been on the debugging end of a few such attacks over the years, and it happened once every other year or so on sites with a healthy amount of traffic. The scammers do try to fly under the radar (only targeting specific sites, only targeting mobile and specific demographics to make it harder to track). If Google simply didn't allow them the access, the problem wouldn't exist.


They allow the access because "good behaving" advertisers can use it to do animation and such without having to load high-bandwidth media files.

It is a feature of their display ads system that user privacy is protected, and advertisers shouldn't be able to resolve an impression to an individual user when the ad is displayed (though they can, of course, begin a business arrangement with a user if the user clicks the ad; that's called a "conversion"). Scammers who abuse the JS layer fly under the radar because Google will ban their display ads account if they get caught abusing what the JS allows them to access (such as trying to data-scrape parameters off the user's machine to estimate whether they're seeing someone they've already seen).

"3 Ad Serving. (a) Customer will not provide Ads containing malware, spyware or any other malicious code or knowingly breach or circumvent any Program security measure." [https://support.google.com/adspolicy/answer/54818?hl=en]


Let me just point out that the root problem here is the broken US "health insurance" system.

That targeted ads might cause these kinds of problems is a side effect of this.


I believe the article in question relates to a European Commission investigation.

In the US, this kind of data aggregation doesn't even make waves regarding antitrust. Google just has to comply with HIPAA and it's basically good to go (they already had "Google Health" as an app for self-reported data ages ago, which nobody stopped).

If Google started feeding this data to insurance companies, it'd be out of compliance with HIPAA.


>They're not claiming Google will hand out a user's biometrics to insurance companies or other parties.

Make no mistake though: those days are coming. It's just a matter of how soon. And no government will give a shit to stop it.


That's an odd prediction, given how unpopular the idea is.


Unpopular to whom, the HN crowd? The crowd that no one pays attention to? I'll tell you who it's not unpopular to: HMO execs that would love another excuse to raise your premium.


I'm going to come across here as an odd outlier, and that's OK. I accept it. Being in IT for 20 years has shown me that none of these companies can be trusted. Since I have to have a mobile phone, I chose Apple. I don't believe that there are "lesser evils". You're evil or you're not. Full stop. I do my best to use open source tools (save iPhone) in my house. Granted, there are no open source TVs, Roku's, etc, but they are not tracking me. My TV is set to be dumb, I don't use "real" information when I sign up for transient things. I use email aliases for everything with Fastmail. I have no apps save the ones that ship with the phone and I don't use the health app. I use the phone, texting, and camera. That's it. I don't have an iCloud account.

The field of companies is narrowing with FAANG buying up everything of value or anything that may threaten them. I get it. It's all about money. While I don't agree with everything that Richard Stallman says, he's right more often than not. We are increasingly giving our lives to FAANG. The tracking is insidious. The sharing of information is insidious. We can fight back should we wish. But how many people are willing to give up their conveniences for their freedoms? I'd wager not too many. I'll admit I bought a flip phone for the purposes of avoiding any virus apps that may be required in future should that happen. I just want to stay off the radar and I have a right to do so.


> Granted, there are no open source TVs, Roku's, etc, but they are not tracking me.

Generally our views are broadly aligned, though the above sentence gave me pause. Are you suggesting specifically that Roku doesn't monitor ("track") what you watch? If so, you should really spend a little time with Roku's privacy policy[1] wherein they say quite explicitly that they do.

I chose Roku pretty early on for our household, and I have regrets about that choice, but I can't bring myself to replace our Roku hardware with Apple TV hardware given the latter is quite a bit more expensive, and then I'm left to decide whether to spend more money on a third-party remote or be stuck with the terrible one that comes in the box.

[1]: https://docs.roku.com/published/userprivacypolicy/en/us


Just get pihole and put it on your network. Roku and Amazon Fire Stick are constantly calling home. Even every interaction with the remote is logged and sent back.

You can see it happening and block it. Roku also shuts down and stops you from using many apps if it can't call home for a long time.


Er...so the solution is to block the traffic, but then eventually that makes the device stop working unless you allow the traffic again?


Mine is an opinion-only post, but...

> You're evil or you're not. Full stop.

... I'm afraid I can't agree.


At the individual level I agree that people are not 100% good or 100% evil. Corporations may have good people that work there and help with tasks that are mostly evil. But companies that are truly evil and offer no product, no benefit, no public good, no service, do not exist. A truly evil company offers nothing and expects everything in return. So from my perspective it's impossible for a company to be 100% evil. But some larger companies will get as close as they can.

Any for-profit company that is sufficiently large will eventually behave in ways that are evil.

The problem here is corporate bylaws and shareholders. Many corporate bylaws state the only purpose of the company is to pursue profits; everything else is secondary.

These things create a perverse incentive to continuously show growth.

It's not enough to be profitable as a company. The company is expected to continuously expand their market share.

If they own a relatively small market share they can innovate, improve their product, get better at marketing or sales and expand their share that way.

But eventually, if they are already the dominant player in a market, to continue to grow they must snuff out competition with anti-competitive behavior and expand into new markets, often by buying companies already in those markets. And if a C-level executive or board member refuses to put profits over people, they can be ousted by shareholders or sued for securities fraud.

This behavior pattern of infinite expansion of market share is the symptom created by the perverse incentives of shareholders and will eventually drive any for profit company that is large enough and has shareholders to behave in ways that we consider evil.

And while I am sure the solution is likely reclassification into B Corporations or non-profits, getting companies to reclassify is not going to be easy.


I appreciate your comment, but let's be honest, good and evil are binary like night and day. I don't cotton to the idea that one's morals can be suspended in "twilight". There are ways to make tons of money and be completely moral.


> let's be honest, good and evil are binary like night and day.

Now why would you think that?

In fact I'm pretty sure you're wrong: good and evil are various shades of gray, never white nor black. In real life, there aren't entities (corporations or humans or whatever) that are "pure good", nor are there entities that are "pure evil".


Furthermore, by not differentiating between shades of gray, there's no way to encourage good behavior from companies/people: they're all equally bad, so you treat them the same even if one is making more of an effort to do the right thing.


Indeed. Saying that good and evil is binary might be... evil ;)


In a philosophical sense yes, but there’s degrees to both of them, and once you consider practical issues your options will often be a mix of good and evil.


Thank you for your comment, but please clarify how a moral person can see "degrees" of evil and allow themselves to make money with a conscience, knowing full well they are not being lawful good (Sorry, my D&D youth)?

As for myself, I work for non-profits largely because I want to do good for others. I cannot see taking advantage of another's data, for instance, without their consent (real or imagined) to make a profit and then share it with them. Perhaps I'm a goody-two-shoes, but I'll accept it. I cannot fathom working a job where my existence is to bleed out as much profit from others without them knowing, or if they do know, them not having a voice or a way out short of not using the devices of modernity.


Do you work for a non-profit in the United States, Canada, the UK, France, or another western country and pay taxes?

Do your taxes contribute to murdering foreigners in a desert far away?

Are you evil?

This isn't a personal attack, I just really struggle when I see moral absolutism and binary thinking. The world isn't binary even though our work with computers and human systems often lead us to pretend that it is.


Do you eat meat? I don't, for moral reasons. If you do does that make you evil in my eyes? Are you in fact evil?

I could do this all day. A clear and unambiguous view of what's wrong and right is a risky thing - you will be compelled to fix it, without limits to your interference.

That was provocative and I think you're a good person but I don't think it's so easy, or safe, to draw a dividing line. But that risks not drawing a dividing line, so more dangers lurk.


Thank you for your comments. I agree with you that this can be taken to the nth degree and it benefits no one. I guess I could say that I do my best to avoid "grey" areas if at all possible. If I don't stand for something, I fall for anything.

As far as meat goes, I don't eat red meat. Fish, yes. Chicken, don't really care for it. I could easily get by on fish, rice, beans, salads, etc. I see your point. My eating fish would be evil to someone else since a life is lost in doing so. Beef is nasty to me because I hate even seeing fat on food. As an aside, to me, nothing is better than fried fish or a salad made from chilled chickpeas, lime juice, cilantro, diced Roma tomatoes, and red onion. Add Serrano or Jalapeno peppers if you like spicy.


So there's no nuance to anything people do, there are only good or evil people? That's quite an extreme view.


I think the idea of morality being a spectrum is pretty self evident. There's good, great, bad, worse, and neutral (like the action of me sitting in a chair right now). The reason it's important is in avoiding "'perfect' is the enemy of 'better'" issues.


> I'll admit I bought a flip phone for the purposes of avoiding any virus apps that may be required in future should that happen. I just want to stay off the radar and I have a right to do so.

iOS 13.7+ can track your location for Covid contact tracing, even if there is no "virus app" installed. There should be an opt-out setting.

https://www.cnn.com/2020/09/01/tech/apple-google-contact-tra...


Yes, mine is turned off. Look at some European countries now. England has QR codes at the entrances to all public spaces. Download and install the app or provide your details that they hold for 21 days. No, thank you. Will. Not. Comply.


Germany is introducing a fine for providing incorrect contact details to restaurants. Former East Germans have living experience with "papers please".

We need all test statistics to be normalized, e.g. with transparency on PCR cycle threshold used by the test vendor and laboratory.

Given Bluetooth security weaknesses, it's only a matter of time before a legal challenge occurs against an iOS proximity assertion which has a large economic consequence, e.g. person returns to country B with PCR CT30 after being near a person in country A with PCR CT40 "positive" test result.

Should Person B be quarantined (e.g. they are expected to perform in a high-value sporting event) if they have no symptoms and there's a broad discrepancy in PCR cycle thresholds? Lawyers will sort this out after enough money is at stake to justify thorough collection of scientific evidence to challenge local policy assumptions syndicated by global iOS.


> Germany is introducing a fine for providing incorrect contact details to restaurants

The older I get, the more true the adage "those who do not learn the lessons of history are doomed to repeat them" becomes to me.

Why is it that every few generations we have to repeat the mistakes of the past, in versions ever more vicious and horrific, in order to learn (for a couple of generations maybe) that it was a bad idea?


To preface, I agree with your stance.

But it leaves me wanting more. I mean -- to be blunt, we know that the vast, vast majority of people will never stand up and agree with you, even if some unidentified part of them wants to. The world is marching in lockstep toward authoritarianism again, and cheering it along the whole way.

Where does all this go? What can we do? Disappear into a countryside somewhere and live off the land? What's the endgame for those of us who don't want to play anymore?

I don't have an answer to this. And without an answer, symbolic acts of rebellion become meaningless to me, and I suspect that's 99% of the reason nobody else stands up either. Without an alternative, there's no point in fighting back.

So how do we create an alternative? Not just advocate for one... create it. So that people flock to it.


I think one of the best voices in this regard is a man named Dave Cullen out of Ireland. His YouTube channel, Computing Forever, has always been spot on in regard to the plandemic and other authoritarian happenings. He has some fascinating guests as well. I've yet to find him wrong.

https://www.youtube.com/user/LACK78

I don't know what the answer is as far as getting people to wake up. Most people are utter sheeple when it comes to adhering to the diktats of government. They implicitly trust government and they shouldn't. Very few governments are really in lockstep with their people. Iceland is one, the Swiss do fairly well in this regard, as does Finland and Denmark. Again, very few.


Someone didn't like my question heh, just wanted to say thank you for your reply. Will check him out.


Would you consider something like the Shield TV open source? There are some proprietary drivers, for things like Dolby Vision I think, but probably as open as you can get. [1] Or you could certainly set up a Pi / NUC to run Linux and Plex to have more control.

[1] https://developer.nvidia.com/shield-open-source


As an anecdote, I previously tried to run Plex without any connections back to the mothership, and they were doing some super crazy stuff (server _and_ client side) to ensure that it wouldn't work if those servers/IPs were blocked.

If you really want to avoid external parties, you should probably be running Kodi, compiled yourself, and firewalled off from internet access for good measure. (This assumes you _are_ worried about tracking, and so only use offline media, and not Netflix etc)


Is it ignorant of me to wonder if the majority of the issues raised in this piece are either portrayed/worded poorly or just not very compelling?

>First, Google has the incentive and ability to favour the adoption of Fitbit over rival wearables on the user side, and to simultaneously undermine the ability of others to offer competing products to insurers and health providers.

I guess the argument is that Google buying Fitbit would mean other non-Apple wearables would be at a disadvantage by nature of there being a first-party alternative, but is that inherently monopolistic or anti-competitive? Isn't that essentially what Google has been attempting to do with Nest or the Pixel phone line?

After multiple reads, I fail to understand what blocking this merger does other than temporarily delay Google from doing what the tech industry is going to continue to do, exploit the ignorance of policy-makers and the general public to harvest and sell our personal information. Does blocking this merger even accomplish anything other than delaying what will be inevitable without legislative action?

E: forgot some words


> Is it ignorant of me to wonder if the majority of the issues raised in this piece are either portrayed/worded poorly or just not very compelling?

I think you're just missing the overarching point that Google's position becomes stronger in a nonlinear way when you plug persistently updated, pollable health data into what they already have.

without resorting to name calling your argument seems to be: “we’re going to hell anyway so why shouldn’t we let google take us there as fast as possible?”

my naive response would be that we should slow our descent into hell by any of the few means possible, while we wait for more humans to understand the danger to them that google, amazon, microsoft, apple ... are.


It becomes stronger in a nonlinear way, but not in any fashion that American antitrust would block.

European antitrust is a different story, and if the concern that FitBit being a "blessed" device on Android pushes out other biomonitors has teeth, so be it. I personally find that flavor of reasoning too conservative (by that model, GMail bundled with the Android OS is a faux pas because it makes it harder for other email providers to compete), but there are differing theories of what antitrust is for.


You can sync Garmin (many devices) health tracker data locally via USB cable or wireless receiver, then parse/graph with OSS scripts:

Code: https://github.com/tcgoetz/GarminDB & https://github.com/mrihtar/Garmin-FIT

Wireless receiver: https://buy.garmin.com/en-US/US/p/10997 & https://www.thisisant.com/
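
For a sense of how simple the local route can be: once you've copied a .fit file off the device over USB, a few lines of Python with the fitparse library (a third-party OSS parser; GarminDB above does this far more thoroughly) will pull out the samples:

    # pip install fitparse -- a minimal sketch, assuming a .fit file copied off the watch
    from fitparse import FitFile

    fitfile = FitFile("activity.fit")  # hypothetical filename

    # Each "record" message carries one sample (timestamp, heart rate, etc.)
    for record in fitfile.get_messages("record"):
        ts = record.get_value("timestamp")
        hr = record.get_value("heart_rate")
        if hr is not None:
            print(ts, hr)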


Thanks, I guess I can make use of one. I was thinking there might be no more devices left that don't store your data on some shitty website you have to log into.


Garmin has a user-data website too :) But their devices have been around long enough to enable self-managed biometric data.


Imagine trying to explain this to your elderly mother... She doesn't want to learn software development (even basic levels) just to use a wearable health monitoring device


Business opportunity for home device for use by mere mortals, e.g. smartphone UI to data on local NAS. ANT can sync automatically when within range.


this article appears to be an opinion piece, speculating what a combined google/fitbit could do, not an official announcement of what they will do. but the headline is written in the tone of an official announcement from google. seems a bit misleading.


It's crazy to me that people will run non-free software on something that is designed to capture biometrics... but then again I've been looking for a decent open source alternative and there is nothing out there.


In this case I think the multi-device syncing capability is the thing that adds value. Or at least being able to sync the data gathered by the device to the cloud so you can access it through a separate interface.

If that is the case, then whether you prefer a Google solution to an open source solution depends very much on your threat model. Google may use that data to target ads, but running your own solution you risk losing the raw data to a security flaw in the open source code or infrastructure. At least with Google you get world-class infosec to protect your data (from everyone except Google...)


I would be happy with a commercial product that simply stored data in a place of your choosing, like to a PC, in a parseable format. I don't think a product that was completely self-managed (no cloud) could be competitive anymore, so that's pretty much left up to OSS. I really miss software that didn't require internet connectivity being the norm.


An "open data store" as a standardized service could be interesting... a good chunk of backends are just "add this item", "get all items since X", etc.

If there was a service that gave you an endpoint that you could distribute to other services to do just that (one endpoint per service, keep things isolated), or alternatively host your own and give that endpoint to services... it'd be cool. The tricky part is getting other companies to agree to use your common API.
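
A minimal sketch of that idea (Python/Flask 2.x; the endpoint shape and token scheme are invented for illustration), with one isolated, append-only store per service token:

    from flask import Flask, request, jsonify

    app = Flask(__name__)
    stores = {}  # token -> list of items; hand each service its own token

    @app.post("/<token>/items")
    def add_item(token):
        # "add this item"
        stores.setdefault(token, []).append(request.get_json())
        return jsonify(ok=True)

    @app.get("/<token>/items")
    def get_items(token):
        # "get all items since X" (X = offset into the append-only log)
        since = int(request.args.get("since", 0))
        return jsonify(stores.get(token, [])[since:])

    if __name__ == "__main__":
        app.run()

As you say, the hard part isn't the code; it's getting other companies to agree on the common API.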


You can do that with the data/charging cable that comes with Garmin watches. It exports the data on a USB-MSD. I don't recall whether it's a CSV or some other format but they make no effort to obscure it.


Good to know, thanks!


The idea of open source software is not anywhere near the public consciousness.


That's the plight of open source for biometrics: wait around for something to show up and it never comes, and you have to be an expert in the space to DIY something that will compete.


Is there any brand of wrist sensor that doesn't require you to upload your data to their servers by any chance?

I love the Fitbit sensor, but I wish I could opt out of the server side of things.


Try Apple Watch.

You can disable the iCloud backup of Health data: https://support.apple.com/guide/iphone/back-up-your-health-d...


That's what I do. It'll sync with my phone locally over bluetooth once I'm back in range, but that's it.


There's a fantastic piece of free software for Android called Gadgetbridge [0] that allows you to use many of these "smart watches" without the first-party app or servers. It supports Pebble, Mi Band, Amazfit Bip, HPlus, and more.

[0] https://gadgetbridge.org/


Garmin doesn't require you to upload any data to their servers to get basic heart rate and other information. You can choose to upload your data to get more detailed information but the basic statistics are available without creating an account.


Most of the older Polar HR sensors didn't use cloud services, but they typically worked with an auxiliary chest strap (not just a wrist strap), and they were mostly for monitoring your performance during physical activity, not necessarily for the 24h wellness type of use case that people usually look for these days.

If you are into DIY, you don't necessarily need to be a DSP guru; there are already quite a few 'good enough' ICs, like the MAX30102 (Maxim) for HR and oximetry. It doesn't need any major analogue considerations; you just need to implement the communication side (I2C) to extract data.
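
To give a flavor of how small that communication side is, here's a hedged sketch (Python with smbus2 on something like a Raspberry Pi; the register addresses and config values are from the MAX30102 datasheet as best I recall, so verify them before use):

    # pip install smbus2 -- sketch only; check the MAX30102 datasheet before
    # trusting the registers/values below.
    from smbus2 import SMBus

    ADDR = 0x57             # MAX30102 I2C address
    REG_MODE_CONFIG = 0x09
    REG_LED1_PA = 0x0C
    REG_FIFO_DATA = 0x07

    with SMBus(1) as bus:   # I2C bus 1 on a Raspberry Pi
        bus.write_byte_data(ADDR, REG_MODE_CONFIG, 0x03)  # SpO2 mode (assumed value)
        bus.write_byte_data(ADDR, REG_LED1_PA, 0x24)      # LED current (assumed value)

        # One sample is 3 bytes per LED channel, MSB first, 18 significant bits.
        raw = bus.read_i2c_block_data(ADDR, REG_FIFO_DATA, 6)
        red = ((raw[0] << 16) | (raw[1] << 8) | raw[2]) & 0x3FFFF
        ir = ((raw[3] << 16) | (raw[4] << 8) | raw[5]) & 0x3FFFF
        print("red:", red, "ir:", ir)

The raw numbers still need filtering and peak detection to become a heart rate, but that's the "no DSP guru required" part; there are plenty of published algorithms to crib from.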


The most bizarre thing about Fitbit is that I can't even view my data on my phone when I don't have an internet connection... The data is literally synced to my phone over Bluetooth and THEN transmitted to the cloud. But for whatever inexplicable reason you can only view it after uploading it... Very annoying; when hiking you often don't have an internet connection.


Mi Band. $25, weeks of battery life and data can be extracted locally.

They've tried to make usage of 3rd-party apps harder with the latest version (color display). It can still be done, though; the old versions are excellent (Mi Band 3).


There's an SDK that can be used to pull the data from a Garmin to a server you control.


There are alternative apps for the Mi Band that work purely on-device.

Mi Band Master is one of them. The setup is a bit more involved with newer bands, but a one-time thing. Works really well.


Pine64. Of course, their smartwatch is currently a dev kit and doesn't do anything at all unless you write software for it. Some good progress is being made - expect a first release "soon" where the watch will do something - though probably not something very smart. Still, today, if you don't want to write all the code yourself, you shouldn't bother.


Samsung Galaxy Watch / Samsung Gear series and Apple Watch can both sync data only to the phone without uploading it to cloud.


Health privacy, while kind of important in a vacuum, is maybe the most overrated issue of our day.

Not only do I not care if my health data is out there but I wish it were.

Right now healthcare is barely using data; it's absurd. Let's focus on making good diagnostic AI and preventative medicine to let people live forever.


100 times this. I know privacy is a good thing but the amount of FUD over health data privacy is unreasonable.

I feel this hysterical reaction to aggregating health-related metrics is in fact an impediment to running good scientific experiments. The current sample sizes are always small and the results inconclusive, wasting thousands of man-hours and government grants funded by our tax money.

Only to gain what? Apple and Google not storing my step counts on their servers, because otherwise in future they'll go rogue and show me cheeseburger ads when I'm getting fat? Give me a break.


Health is a deeply personal thing to many people. It's not surprising why people want to keep it secret, and I don't feel right asking them to give that up because there might be other benefits. Health privacy is something the world won't suffer much for having.

And health care analytics don't need any more data. There's already a ton. It's held by hospital systems, insurers, and governments. But the data sucks for analytics because it's chosen, coded, and siloed for payments.


I believe lots of regulation plays a part in making these health datasets not interoperable and hard to migrate to some common schema and database


And when you can't get insurance coverage because said AI says you have a disqualifying condition?


Then solve the real underlying problem...insurance... NOT the segmentation of valuable health information that could help advance medical discoveries (with that data being handled properly)


We haven't solved that underlying problem yet, and the problem isn't insurance. It's that many kinds of actor have motivations to misuse individual, detailed health data in ways that can be seriously harmful to the individual.

I don't know if there's a way to solve that problem, even in principle. It seems like an intrinsic conflict of interest thing.

Until that's solved, that will remain a major and rational reason why people care about health data privacy.

There's no point saying "ignore health data privacy and fix the real problem" until people can fix the real problem. Which as I've said, might not be possible even in principle.

Meanwhile, the thing people can do for themselves is focus on the privacy of their healthcare records and demand that it be maintained. Institutions and companies can incrementally develop improvements to aggregating data that don't simply discard that privacy, like differential privacy for smart data aggregation, and zero-knowledge systems like the Google/Apple coronavirus alert system.
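
For the curious, the core of differential privacy is just calibrated noise added to aggregate queries. A toy sketch (Python; the epsilon value here is arbitrary):

    import numpy as np

    def dp_count(values, threshold, epsilon=0.5):
        # Count of users above a threshold, made epsilon-differentially private.
        # A counting query has sensitivity 1 (adding/removing one user changes
        # the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
        true_count = int(np.sum(np.asarray(values) > threshold))
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    resting_hr = np.random.normal(70, 10, size=10_000)  # fake cohort data
    print(dp_count(resting_hr, threshold=90))

No individual's record moves the published number by more than the noise can plausibly hide, which is what lets aggregates be shared without exposing any one person.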

Perhaps eventually we'll end up with a way to aggregate health data that is more helpful for diagnostics and population-level understanding of diseases, while safeguarding individuals from harmful consequences.


For anyone who didn't see the Reuters article from yesterday:

https://www.reuters.com/article/us-fitbit-m-a-alphabet-eu-ex...


What about Amazon and Apple? Why is Google the only recipient of scrutiny? Especially since the merger hasn't happened yet.

You should be frightened about all of them.


Why is Google the only recipient of scrutiny?

They're not. The EU is investigating Amazon[1] and Apple[2] as well.

Your complaint is really "Why isn't every article written to be a general point about all big companies?" as if there's no value in writing about specific examples. What's wrong with writing about Google in particular? A focused article is just as useful and interesting as a more generic one.

[1] https://ec.europa.eu/commission/presscorner/detail/en/IP_19_... [2] https://ec.europa.eu/commission/presscorner/detail/en/ip_20_...


Apple makes money from people buying their hardware and services. They don’t make a significant amount of money from selling targeted advertisements. Amazon and Google do.


Maybe that's what Google has been trying to do here with diversifying their revenue sources, similar to what they are attempting to do with GCP. Look at the amount of control they have been adding on Pixel phones on what apps and services can access, even Google's.


The difference is Google doesn't stop monetizing your data when they start selling premium hardware.


How do you know this to be true? Are they using Nest data to monetize something?

Apple also tried their hand at iAd; in the end, companies will do whatever works best for them. Considering Apple has made privacy a marketing feature for their devices, I am quite sure Google would want that playbook as well for their premium devices.


They don't monetize your data when you pay them (GSuite).


[flagged]


You're missing the point. Apple makes money by people buying their products, and Apple sells more products when people trust them. Therefore, they have a strong financial incentive to be trustworthy and at least appear to care about user privacy. The best way to appear to care about user privacy is to actually care about it: they make no money selling private information to 3rd parties, so there's no point in doing it.

Google, on the other hand, makes some money selling stuff to consumers, but they make a lot more money selling your data to whoever wants to buy ads. That's the business they're in.

No one is saying "Apple is inherently good and Google is inherently evil". They're both amoral massive corporations that care about profits above all else. It just happens to be the case that Apple makes more money when they protect user privacy, and Google makes more money when they violate user privacy.


I agree that they have an incentive to care about privacy because they have so far been incompetent at using data for the benefit of their users. However, they still collect a lot of data in order to try to get better at providing user benefit. Consider that on iOS you cannot get your location without also giving your location to Apple or install an app without telling Apple. Competing platforms do not suffer from these privacy invading issues.


> and Apple sells more products when people trust them.

I am interested to know how you have estimated/measured this increase. Is it 1%? 20%? What does your data show?


Just like Apple keeps changing the App store rules when they decide they can get more money by taxing developers, Apple will reduce privacy when they decide they can get more money, because they never have enough money. And it won't hurt Apple sales, because their competition is no better.

You claim Apple sells more because people trust them. Unless people are going to stop using smartphones, they have no choice but to use Google or Apple devices, privacy violations and all.


Only because they have margins of more than 20 times the cost of manufacturing. The day they stop having success selling iPhones at such huge margins is the day they will use all that data to make money from targeted advertising. Some think that day will never come because they have loyal buyers, but only time will tell.


First line of the abstract: "The European Commission is conducting an in-depth investigation of the Google/Fitbit deal." They are pursuing a specific legal angle to investigate Google/Fitbit.


Apple doesn't have your health data. It stays on device, and any work with the data (statistics) is also done on the device, not somewhere in the cloud. They don't want your data; it's not their business model. Sure, Health data is backed up over iCloud, but this is encrypted, and again, Apple doesn't have a way to view the data outside of an Apple device.


Do you have a source for iCloud backups being end-to-end encrypted?? As far as I know, everything in iCloud is just encrypted (i.e. Apple has the decryption key). In fact, law enforcement often asks for data from iCloud because unlocking the physical phone is harder.


Health data in particular

https://support.apple.com/en-us/HT204351

From that page, under "Back up your Health data":

"Health information is stored in iCloud and gets encrypted as it goes between iCloud and your device, and while it's stored in iCloud. End-to-end encryption requires iOS 12 and two-factor authentication. To stop storing your Health data in iCloud, go to Settings > [your name] > iCloud and turn off Health. If you aren't using iCloud, you can back up your information in Health by encrypting your iTunes backup. The information that you create or gather about yourself is under your control, and it's encrypted with your passcode when you lock your iPhone."


> Apple doesn't have a way to view the data outside of an Apple device.

> Apple doesn't have your health data.

I don't believe that any of this is true.

iPhone or no, your data is "out there", and you don't own it.


The scrutiny happens due to the upcoming merger. It requires permission of the European Commission, which raises those questions.


I don't understand this whataboutism.

Google being under scrutiny here doesn't exclude others from being under scrutiny.


if an ml algorithm could influence me to work out more, i would accept the consistent nonsense ads about weight watchers


I think the point is more that your insurance will become more expensive because you haven't walked enough between the age of 20 and 40, and therefore you're more prone to a heart attack.

Or maybe your mortgage is more likely to be denied


How did the mortgage and insurance companies get my private data in my Google account?


google bought the insurance startup


What insurance startup? Are we talking hypotheticals here or concrete action Google is undertaking?

I agree that Google + FitBit + Google owning a health insurance company would be suspect, but I'm unaware of them buying a health insurance company. Certainly, if the concern is government intervention, the intervention should hold off until an insurance startup is actually on the table?


Alphabet owns a company called Verily, which is entering the health reinsurance space:

https://www.theverge.com/2020/8/25/21401124/alphabet-verily-...

Verily already has its own smartwatch though.


Oh, totally hypothetical, sorry to frighten; I was just continuing the theme I saw in the previous posts.

I do see insurance startups on HN from time to time, so I think the idea isn't totally far-fetched, though.


Does anyone have recommendations for a privacy-conscious alternative to the Aria scale and mobile app?


Wait until their AI enables customers to target people grieving from the death of a loved one.

This is borderline sickening.

I think they could feasibly make the case for selling data to drug and research companies for the purposes of medical advancement ... but beyond that it's ridiculous.

I'm generally realpolitik about these kinds of issues, but even this to me feels evil.


If someone here works for Google fitbit please drop me a line at www.koalasleeve.com


Why don't people just count their steps or breaths and use pencil and paper? I don't see these products doing anything exceptionally useful, and now they are harming people.

Unless you are training to a specification (e.g. exert this many ft-lbs of force), it's just motivation.


Most people aren't mindful enough or free enough to do those things. By the latter, I mean they're multitasking. And then some people have ADHD and other things.



