Yep, I posted this hoping to raise awareness, but the reaction was not what I expected. In the US, even a meritless legal threat will require hiring a lawyer to ensure you are in the clear, which requires a significant amount of money, in addition to the stress. Researchers should never be putting anyone in that position.
Just going to tag along to my comment above. I can't help but notice the difference in tone between the responses to this article and the ones to yesterday's (https://news.ycombinator.com/item?id=29599553), which I had not seen. It seems those who participated in yesterday's thread held quite a different opinion.
As the author, I wonder how much me being not quiet, but not an asshole, about being nonbinary, and having furry stickers in the article, makes people start hitting the vitriol button.
As a person who has watched this kind of thing happen a lot: I think a lot. I had a bunch of friends on tumblr who wrote and exchanged very similar fiction and art. The ones who were known to be trans, female, enbies, or non-white got harassed a lot. The one who was understood to be a white cis male got left alone. And this is on tumblr, where the ostensible position of a large part of the user base is that white cis males are Bad People by default...
So yeah, pretty sure it's that. I almost never get harassed by people who think I'm cis male; the bulk of the harassment came from people who thought I was transmasc.
Probably the furry stuff too, which is honestly sort of terrifying. Do these people not know how much infrastructure relies on stressed and overworked furries?
You obviously don't need anyone to tell you this, but for passersby for whom this has not been on their radar before: this is almost certainly true. HN commenters will on occasion be superior jerks towards anybody, but this is some loud posters' instantaneously adopted position when they see a name that doesn't code as male--for another example, see many `rachelbythebay.com` posts, where you have randos getting sniffy and assuming she's incompetent or junior for some Real Interesting Reasons.
By my estimation it is real. The proprietorship of this community (whom I like and appreciate personally, and who I think are operating in good faith) has often made noises about how it should be better. It's not, and it should be.
I guess it's the wording. "Without my consent" is used to imply a strong violation of personality rights, at least in some circles [0]. It's also pretty much a repeat of yesterday's post. At least to me, this makes the issue feel a bit overblown.
[0] The irony that the whole problem is based on wording that implies a lawsuit is not lost on me.
Getting “informed consent” is one of the big, guiding principles for research done on humans. My guess is the author deliberately used that language of the scientific community, to make clear that they did not agree to be part of the research.
Yeah, but the idea of “informed consent” is broadly misunderstood. There is no constitutional right to informed consent. Not all human subjects research requires informed consent, or even consent. There are other institutional ethical lapses that are much more dangerous, and there are also ethical obligations that rest on the researcher, not the institution.
Righteous indignation over something like this is dangerous, not least because it can lead to science being much harder (more bureaucratic and more expensive) for all scientists to do. “More oversight” and “dissolve the IRB” both put too much responsibility on the institution.
Sometimes, we should just blame the people who did something rude and stupid, not the institution.
> Not all human subjects research requires informed consent — or even consent.
There have been multiple examples of this going horribly, horribly wrong. (Naturally, the worst examples were government-funded and run during the Cold War.)
As a society, we have since concluded that at bare minimum, people should know they are being experimented on--and even that isn't enough to stop things from going badly.
This is why the IRB exists in the first place. A major part of its purpose is to prevent this sort of thing from causing undue harm, e.g., forcing people with limited incomes to seek legal counsel because they believe they're about to be sued into the ground. One of the rules generally agreed upon for this is that experiments with human test subjects must inform those humans up-front what they're getting into.
To argue that the response to this "can lead to science being much harder" is an ethically wrong defense. We know it makes certain kinds of research harder; that's the point. There are certain kinds of research that directly harm their subjects, and we don't do that to people. More than that, people have a right to decide whether they want to be involved in a study, as they may personally feel endangered by it (e.g., someone who has a PTSD response to being sued may not want to deal with being fake-sued).
To say that calls to dissolve the IRB "put too much responsibility on the institution" is flat-out false. This IS the IRB's responsibility--they approve or reject studies like this specifically to avoid ethical problems like this one. To claim that this isn't the IRB's responsibility is like claiming that it's not the responsibility of the law to revoke a driver's license when someone has been driving drunk, or that it's not the responsibility of the Food and Drug Administration to reject approval for foods that contain dangerous contaminants.
I'd recommend reading up on why not all human subjects research requires informed consent. For instance, there are exceptions for human subjects research that involves normal educational practices. This carve-out was made because of the difficulty of getting unanimous consent from all parents during normal classroom instruction. With greater oversight and full informed consent, a lot of educational research simply wouldn't happen.
So, to reframe this: "can you think of scenarios where institutional oversight could cause harm to society?" There are tradeoffs in ethical domains, and usually a lot of work has been done to find a middle ground.
The example you gave is not relevant. The researchers in this study are already directly contacting everyone they need consent from, and if an individual declines, the rest of the sample can still be studied (unlike in a classroom setting, where everyone's physically in the same room and it's impossible to study any of them in isolation).
Further, the study would still have worked if the researchers had simply asked for the information as researchers, instead of lying about their identity and making thinly-veiled threats of legal action if the subject doesn't comply.
I'm primarily concerned with the study, since A) that's the topic at hand, B) it involves technology and privacy, which I care about, and C) people keep comparing this to pentesting, just like last time, and I also care about security research.
I apologize for assuming that you were defending the study; I figured that was the topic of conversation, after all.
If there is nuance, but it does not apply to this situation, then it is worth saying that this nuance does not apply to this situation--so that I'm less likely to misinterpret what you're saying.
Wait, your motivating example is keeping parents out of the loop... do you give this example to show that it usually goes really badly when informed consent is missing?
Or do you mean to argue that it's okay to experiment on kids without consent, because the end justifies the means?
Or a third option, that I'm overlooking currently?
Yes, that’s correct. Because conducting experiments on things like “does this approach to teaching fractions work better” is important for society. The ends are good and the means are reasonable. We aren’t injecting kids with chemicals; we can only experiment with “normal” educational practice. It shows why nuance is important, and why requiring informed consent isn’t always the most ethical choice for society.
Ethical action involves nuance! It is very comforting to think that the world is black and white, good and bad. But it isn’t. Why is this so difficult to communicate?
> In the US even a meritless legal threat will require hiring a lawyer to ensure you are in the clear which requires significant amount of money, in addition to the stress.
So what happens when a site receives a CCPA inquiry from an actual person concerned about privacy instead of a researcher under a fake identity? The site still needs to determine if the law applies to them and if so what they must do to satisfy their obligations, so a real inquiry should be as costly and as stressful as a research inquiry.
Does this suggest that privacy laws such as CCPA (and GDPR) which create obligations for sites to deal directly with users on privacy matters are a bad idea? Should such laws instead require users to go through some state agency as an intermediary which would then only contact the site on behalf of the person if the agency determines that the user's data at the site is covered?
It would have been possible to make the requests without a threat of suit. The thinly veiled threat came from a portion of the email that quoted a specific section of the law and used legal verbiage to get people to respond within a certain time frame (as required by the law).
This was taken as a legal threat. The request would have been just as valid without it.
Though due to the nature of the requests, they were not actually subject to that specific section of the law, so the demand for a response within 45 days had no genuine legal foundation.
> So what happens when a site receives a CCPA inquiry from an actual person concerned about privacy instead of a researcher under a fake identity?
They conclude that it's a Princeton research study and throw it in the trash.
This is part of the harm this study has done; because the researchers were not upfront about who they were and what they were doing, they have introduced uncertainty about the CCPA process.
> Should such laws instead require users to go through some state agency as an intermediary which would then only contact the site on behalf of the person if the agency determines that the user's data at the site is covered?
That could be a good idea. It would depend, of course, on that agency being well-staffed and well-trained (both of which are separate from being well-funded, which can help). There's a giant pile of messy problems that can crop up due to negative influences from, say, corporations that want to sell more data.
That said, it would be nice to have an org that can do the minimum legal work necessary to figure out if the claimant has a leg to stand on. That would not only minimize the harm of this sort of ill-advised study, but also make it harder to use threats of legal force to coerce smaller site owners.
This is just FUD. I've yet to meet a lawyer who won't do a cursory evaluation of your case for free. It's in their interest to know if you're bringing them an easy win.
> I've yet to meet a lawyer who won't do a cursory evaluation of your case for free.
You don't get out much, do you?
I've known tons of lawyers who won't look at their watch and tell you the time unless they get a tenner for it. To be fair, they are used to folks trying to extract highly valuable services from them for free, so it's sort of a defense mechanism.
I have (and have had) many friends who are lawyers. A few will help me out with quick consults for free. I even have one chap who has gone beyond that, and I'm grateful. I'm quite aware of the value of their services, and always offer (and am willing) to pay, even if they decline to invoice me.
I guess it depends on the lawyer. Not long ago I was shocked when a lawyer (not somebody we know, just a random phone book lawyer) stayed on a call with my girlfriend for a half hour talking about her father's estate and charged nothing for it.
It's also worth noting that under US law, a lawyer who gives you legal advice can be held liable if that advice causes trouble down the line.
This creates even more incentives to have a paywall--one, it keeps people from bugging you for free legal advice that can bite them and you in the ass later, and two, it ensures that the people who do get advice from you have followed your procedures for setting up an account with you.