I can add some details, knowing AlgorithmWatch a bit.
This is a very earnest and credible German NGO, with most of the active people coming from academia and with collaborations with highly-regarded news organizations.
Their message is very critical of platforms, but in the international context (i.e. compared to some US books and articles), it's very restrained. I would trust them as much as the NYU team (which means: pretty much completely).
There are two important things to note here:
- Facebook did NOT shut down this research right away! AlgorithmWatch decided to end their Instagram research project for fear of retribution. It's the chilling effect at work. The quote in context is:
"This is why we were very surprised when, in early May 2021, Facebook asked us for a meeting. Our project, said Facebook, breached their Terms of Service, which prohibit the automated collection of data. They would have to “mov[e] to more formal engagement” if we did not “resolve” the issue on their terms – a thinly veiled threat." (from the original article)
- The tool used here is similar to the NYU project and a past AlgorithmWatch project in that it scrapes content client-side (a rough sketch of what that can look like is below). This has great advantages for the science side (i.e. platforms can't send you fake data to fool your research), but brings the immense unresolved issue of user liability. You don't want your data donors to be liable for helping science.
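To make the client-side approach concrete, here is a rough, hypothetical sketch of what a browser-based data-donation script can look like (this is NOT AlgorithmWatch's actual plugin; the CSS selectors and the donation endpoint are made up for illustration):

    // content-script.ts: hypothetical data-donation sketch, not the real tool.
    // Runs in the volunteer's own browser and only reads posts the page already rendered.

    type DonatedPost = {
      author: string | null;
      caption: string | null;
      seenAt: string; // when the volunteer's browser rendered the post
    };

    const DONATION_ENDPOINT = "https://research.example.org/donate"; // placeholder URL

    function collectVisiblePosts(): DonatedPost[] {
      // "article" is a stand-in selector; a real feed needs more robust matching.
      return Array.from(document.querySelectorAll("article")).map((el) => ({
        author: el.querySelector("header a")?.textContent ?? null,
        caption: el.querySelector("figcaption, h1")?.textContent ?? null,
        seenAt: new Date().toISOString(),
      }));
    }

    async function donate(posts: DonatedPost[]): Promise<void> {
      if (posts.length === 0) return;
      await fetch(DONATION_ENDPOINT, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(posts),
      });
    }

    // Re-scan whenever the feed's DOM changes (infinite scroll, new posts, etc.).
    new MutationObserver(() => void donate(collectVisiblePosts()))
      .observe(document.body, { childList: true, subtree: true });

The point is that everything above happens after the platform has already served the page to a real, logged-in volunteer: that's why it's hard to feed researchers a sanitized fake feed, and also why the ToS risk lands on the volunteer rather than the platform.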
Finally, there are currently many ongoing regulatory debates in the EU and AlgorithmWatch is involved in those. It seems possible that there will be some sort of required public accountability for platforms in the future.
I personally would love to see some sort of mandated accountability that provides credible researchers with data (and real data, not the Social Science One clusterf* that Facebook set up) instead of simply handing universal keys to states' law enforcement arms.
I agree that it's frustrating how CA is used as an argument to end all arguments.
But we have to acknowledge that Facebook and other platforms are under threat of exploitation from malicious actors, that they often do want to legitimately prevent abuse and that they have to do something.
My biggest gripe is of course the laughable shallowness of the debate. I once had a Facebook representative tell me that they can't share political ad data because the data is so large "it doesn't fit on a usb stick". Come on. Even a generous back-of-envelope estimate, say ten million ads times a kilobyte or two of text metadata each, is on the order of 10-20 GB, which fits on a cheap USB stick. That's lawyers speaking bullshit, and it's tiring.
People love to meme that the regulatory action taken against FB didn't do anything because the fine was too small... but the truth is that this is exactly the reality that came out of it.
Exactly. This (strict enforcement of terms and conditions and locking down user data) was the desired outcome of the actions taken, and will probably result in fewer leaks down the road which is fine with me. I’m actually glad they seem to be serious about it by even going after these “controversial” cases. I frankly don’t understand what people want them to do.
I don't know if this is connected, but the company I work for provides marketing services. One service we provide is marketing videos on Facebook on behalf of our clients. We would post the videos on their pages and promote them. We could then publicly see the number of views they would get.
Well, in the last month Facebook has changed how that works. We now have to add the videos to their Ads Manager and promote them through there, and they will no longer automatically get added to the client's page to be seen publicly.
That's fine, we can do that. The kicker here is that Facebook removed 2 years of videos from all of our clients. We have hundreds of clients and thousands of videos.
I wonder now if this is an attempt to stop people from researching this data, since the videos have been removed.
I just have no other explanation for removing 2 years of videos. Note that these videos are not political in any way; they are generic 15-20 second animated ads, all unique and custom written and produced for each client.
Funny that Facebook had no issue with Cambridge Analytica until Cambridge Analytica became an issue. Let's also ignore the fact that Cambridge Analytica was a private foreign company illicitly compiling data to influence US elections. But yes by all means, it's no different than a research group and volunteers studying an algorithm. You definitely nailed it.
AlgorithmWatch is a non-profit, works with highly regarded international news organizations and has open sourced the tools [1].
If you want to compare them, then the analogy is more like:
"This is the German version of Mozilla doing research on Facebook's Ad library" (which Mozilla found to be severely flawed some years ago).
Then, let's push for a regulation where the researchers will go to jail if there is a data leak on their side.
Surely the regulation should be non-controversial since the researchers are 100% trustworthy like you mentioned.
Unless there is such protection, FB is still liable.
For FB, if the researchers turn out to be good, FB will benefit nothing. If the researchers turn out to be bad, FB will be fucked. I wouldn't want to play this game either.
What if the data leak was out of their control? In other words, should researchers be obliged to field advanced, expensive security teams to protect against the liability this suggested regulation would create?
I 100% agree with your sentiment, but I think the answer is incredibly complex.
Honestly, I genuinely regret having anything to do with that platform. It's such a pain; we've integrated OAuth into a client's platform at her request, and every two months we get threatened to either provide them with stuff or deal with other misc. BS that we have to reply to, or they threaten to shut us out. This last week we got an email saying our integration doesn't work and we need to fix it or be shut down. Ironic, as it works with no issues… About two months ago we had to provide them with an account to our platform. That involved creating a fake Facebook account and handing the credentials over to their dev team (something that I believe is against their ToS, but whatever). We only grab the bare minimum of information on registration and only use it to log users in. I can't imagine anyone having to rely on them for anything more serious.
We’re planning on transitioning our users out of using their Facebook accounts to log in.
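For context, the "bare minimum" kind of login integration described above usually boils down to something like this rough sketch: a standard OAuth 2.0 authorization-code flow that asks only for the basic profile scope and keeps nothing beyond an id and a name. This is a generic illustration, not the actual integration; the client credentials, redirect URI and API version are placeholders.

    // oauth-login.ts: generic minimal-scope social-login sketch (placeholder values throughout).
    import { randomBytes } from "node:crypto";

    const AUTHORIZE_URL = "https://www.facebook.com/v19.0/dialog/oauth";     // version is a placeholder
    const TOKEN_URL = "https://graph.facebook.com/v19.0/oauth/access_token"; // version is a placeholder
    const CLIENT_ID = process.env.FB_CLIENT_ID!;         // placeholder credential
    const CLIENT_SECRET = process.env.FB_CLIENT_SECRET!; // placeholder credential
    const REDIRECT_URI = "https://app.example.com/auth/callback"; // placeholder

    // Step 1: send the user to the consent dialog, asking only for public_profile.
    export function buildLoginUrl(): { url: string; state: string } {
      const state = randomBytes(16).toString("hex"); // CSRF token, keep it server-side
      const params = new URLSearchParams({
        client_id: CLIENT_ID,
        redirect_uri: REDIRECT_URI,
        state,
        scope: "public_profile", // bare minimum: no email, no friends, no posts
      });
      return { url: `${AUTHORIZE_URL}?${params}`, state };
    }

    // Step 2: exchange the returned code for a token, then fetch only id + name.
    export async function handleCallback(code: string): Promise<{ id: string; name: string }> {
      const tokenParams = new URLSearchParams({
        client_id: CLIENT_ID,
        client_secret: CLIENT_SECRET,
        redirect_uri: REDIRECT_URI,
        code,
      });
      const tokenRes = await fetch(`${TOKEN_URL}?${tokenParams}`);
      const { access_token } = (await tokenRes.json()) as { access_token: string };

      const meRes = await fetch(
        `https://graph.facebook.com/me?fields=id,name&access_token=${access_token}`
      );
      return (await meRes.json()) as { id: string; name: string };
    }

Nothing beyond that id and name ever needs to be stored, which is about as minimal as a social login can get.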
I'll repost my comment from yesterday. It's unfortunate, yet not surprising, that Facebook's legal team would want to threaten and sue. Of course, Facebook's own behavior is most likely 1000x worse for users than the research that they are trying to prevent. As a consumer, if this bothers you, express your preference by uninstalling Instagram and whatever other product that belongs to Facebook.
I think the best way to go forward is to start a whole lot more initiatives that are similar to this and have FB shut all of them down, essentially building an anti-trust case against themselves.
All related research fields as a whole need to ostracize FB research/ers unless they mend their ways. If they had their way, anyone doing packet capture would be blocked...
"In May, researchers say Facebook asked to meet with the project leaders and accused them of violating the platform’s terms of service. Another objection was that the project violated the GDPR, since it collected data from users who had not consented to participate."
Aww, GDPR keeps on giving – to those who can afford the legal manhours. Thank you, Jan Philipp Albrecht. You've ruined the internet.
It is quite an interesting question whether this was in fact a GDPR issue.
You have a user of an electronic tool providing data (potentially PII, per the GDPR's definitions) to a third party without the consent of the people the data is about.
This coming from FB, the company that massively nudges people to share their whole phone books when installing any one of their apps.
So can we expect FB to stop this practice as it is structurally comparable?
Lmfao, sure, if you take Facebook's arguments at face value even though they are obviously wrong. But hey, who cares if the argument is true if it's a launching point for a moronic personal grievance to be aired.
On the criteria of "legal manhours" the internet was ruined by Facebook/Google et al long before the GDPR was around. Plenty of other options for their lawyers to abuse.
Not exactly ruined, but it introduces a lot of problems. First, because it's a very WEAK privacy protection regulation, where explicit consent is only one of the six legal bases (consent, contract, legal obligation, vital interests, public task, legitimate interests) you can use to legally collect personal information.
Second, because some of those other bases are absurd: legitimate interest, really? What does that even mean? And let's not even get started on legal requests for interception of personal data.
Third, because it gives people a false sense of security while pretending their privacy is being respected. But the GDPR is in fact much weaker than some previous privacy regulations, including the French "Informatique et Libertés" law from 1978. The GDPR is a huge regression for privacy online: most of the privacy invasion that was illegal under French law (and others) is now perfectly legal, and in terms of UX we now have "consent" popups everywhere, destroying the very concept of consent* and forcing everyone and anyone to use JavaScript to use websites (JavaScript being the enemy of security and privacy on the web).
EDIT: Why do you think those regulations passed without even strong debate/opposition? In the rare case regulators want to make a good consumer protection regulation, there is systematically strong opposition from the lobbies, and the law is usually taken down or rendered meaningless. The "rendered meaningless" part didn't even have to take place with GDPR because the law was insignificant to begin with.
* Explicit user consent is slowly starting to be interpreted the way sexual consent is, i.e. you can refuse without negative consequences. But even that basic interpretation is taking time to become unanimous.