
Interesting article, though I find some of the conclusions it reaches somewhat unexpected:

> The issue is not just about making graphic content disappear. Platforms need to better recognize when content is right for some and not for others, when finding what you searched for is not the same as being invited to see more, and when what's good for the individual may not be good for the public as a whole.

> And these algorithms are optimized to serve the individual wants of individual users; it is much more difficult to optimize them for the collective benefit.

This seems to suggest that the psychological effects of recommenders and "engagement maximizers" are not problematic per se, but that today they are simply not used with the right objectives in mind.

I find this view problematic, because "what's good for the public" is so vaguely defined, especially if you divorce it from "what's good for the individual". In the most extreme cases, this could even justify actively driving people into depression or self-harm if you determined that they would otherwise channel their pain into political protests.

If we're looking for a metric, how about keeping it at an individual level but trying to maximize long-term wellbeing?




As the popular quip on HN goes, 'The majority of mobile web tech fortunes have been built on avoiding the costs of consequences and regulation, while retaining all the profits.'

> when what's good for the individual may not be good for the public as a whole

Is as good a summary of what Facebook has done wrong as anything I've read.

The problem is not that Facebook and its ilk are inherently evil, but that they seem willfully ignorant. Ignorant that past a certain scale they have an obligation to the public: an obligation very different from the laissez-faire world The Facebook started in.

The internet majors seem to be gradually awakening to this, but I'd argue that only Apple (with their stance on privacy) and Google (with their internal privacy counterparties) really grok the change. And to be fair, both have business models that can tolerate having principles.

When you've got a recommendation algorithm that could push someone to suicide or change an election outcome, you have a responsibility to optimize for more than corporate profit.


If that's the correct interpretation, then it should have been written as:

"when what's good for the business may not be good for the public both as a whole and as a set of individuals"


I guess a clearer analogy is via environmental economics and externalities.

The byproduct of dominant market share in an industry where you influence people's thoughts is toxic responsibility.

And currently, some large companies are avoiding and externalizing the costs of that responsibility.


(And to be clear: I'm talking about corporate management with the ignorance comment. Employees have pushed to deal with these issues at many companies.)


Now, maybe I'm biased, having lived in a country that started policing the Internet by telling people it was fighting child pornography, quickly evolved into a black hole of censorship, and blocked Wikipedia a couple of years ago because it didn't fit its own narrative.

I see the Internet as a great force multiplier. Want to watch courses from top professors for free? Here you go. Want to buy a yacht? Here are some videos reviewing the 10 best yachts. Endless entertainment to last you a million years? Check. Want to slit your wrists? Here are five pro tips to make it quick and painless. It certainly makes everything orders of magnitude easier, as it's supposed to.

If I'm seeking information or encouragement about suicide, technically an algorithm that provides me exactly that is just doing its job, and I don't see why we would want to change (or, god forbid, police) that. What I'd see as a problem is when the algorithm becomes more eager to find this content than I am, fights with me to have its point of view accepted (like messing with elections), or becomes fixated on providing me the content even if I've changed my mind. So maybe the best way forward is to enable the user to tweak the algorithm, or at least make it more responsive to changes in their mood/wishes.


> What I'd see as a problem is when the algorithm becomes more eager to find this content than I am, fights with me to have its point of view accepted (like messing with elections), or becomes fixated on providing me the content even if I've changed my mind. So maybe the best way forward is to enable the user to tweak the algorithm, or at least make it more responsive to changes in their mood/wishes.

That absolutely would be the way forward. However, my impression from blog posts in which engineers explain the rationale and iteration process behind recommenders and curation algorithms is that development is most often motivated by growth, with the metrics actually considered being "user engagement" and "user growth".

As such, I would argue that recommenders always had an "agenda" separate from that of the user, it was just commercial rather than political: Keeping the user on the site for as long as possible.

As such, I'm pessimistic that, under the current incentive structure, sites would make their algorithms adjustable by users just like that: doing so would simply be a bad business decision.
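To make the "user-tweakable algorithm" idea concrete, here is a minimal sketch (Python, with made-up field names and a hand-written scoring formula, purely illustrative; real recommenders use learned ranking models) of what it could look like: the platform's engagement prediction gets blended with the user's own stated preferences via a weight the user controls.

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        # Hypothetical per-post signals a recommender might already have.
        topic: str
        predicted_engagement: float  # model's estimate of clicks/watch time, 0..1

    @dataclass
    class UserControls:
        # Knobs exposed to the user instead of being fixed by the platform.
        engagement_weight: float = 0.5   # 0 = ignore the engagement prediction entirely
        muted_topics: set = field(default_factory=set)
        preferred_topics: set = field(default_factory=set)

    def score(post: Post, controls: UserControls) -> float:
        """Blend the platform's engagement prediction with the user's own settings."""
        if post.topic in controls.muted_topics:
            return 0.0  # the user said "no more of this", so stop pushing it
        preference = 1.0 if post.topic in controls.preferred_topics else 0.5
        w = controls.engagement_weight
        return w * post.predicted_engagement + (1 - w) * preference

    # Example: a user who has muted a topic and dialed engagement-chasing down.
    controls = UserControls(engagement_weight=0.2, muted_topics={"self-harm"})
    posts = [Post("self-harm", 0.9), Post("woodworking", 0.4)]
    ranked = sorted(posts, key=lambda p: score(p, controls), reverse=True)
    print([p.topic for p in ranked])  # ['woodworking', 'self-harm']

The point is not the formula itself but who owns the knobs: as long as engagement_weight is effectively set by the business rather than the user, the "agenda" stays commercial.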


I don't get this line of reasoning. Humans are a LOT better than algorithms at creating horrible feedback loops, and we never hold those responsible.

Hell, if there's one thing you keep reading in psychological books about suicide, it's how institutions ostensibly meant to help people reinforce suicidal thoughts. One way is by placing suicidal people together, at which point they also advise one another on how to go "painlessly" (hell, I remember discussing painless ways to commit suicide several times with a group of friends on the playground in high school; not at all often, once or twice in 6 years).

(I must say, now that I know a lot more about medicine, the advice I remember, slitting your wrists in the bath, is pretty bad advice. Peaceful? Sure. But it takes a very long time and is easy to screw up in so many ways. Hell, the water simply going cold is probably going to save you, and of course it will go cold.)

The second thing they do is even worse: making communication about it impossible. This is done through repression, for example by locking people in their rooms (or worse: isolation rooms).

I've yet to hear a single story of people being held responsible. Why should Facebook face this sort of scrutiny?


Scale.

Your local suicide prevention feedback loop really only encompasses your community, and revamping that system is left up to the people most affected by it. (The community)

Facebook/Google et al. are everywhere, and are increasingly becoming everyone's problem. Google in particular has become so unreliable at finding what I'm actually looking for without an overly specific query, because it insists on pushing its idea of what it thinks I want rather than what I actually want.

Honestly, I'm almost at the point of figuring out how to write and provision my own web crawling and search indexing infrastructure, just because I find I simply cannot rely on other search engines to give me a true representation of the web anymore.
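For what it's worth, the crawling and indexing part is the smaller problem; a toy version fits in a few dozen lines. This sketch assumes the third-party requests and beautifulsoup4 packages and deliberately skips the hard parts (robots.txt, rate limiting, deduplication, ranking, scale):

    # Toy crawler + inverted index; illustrative only.
    import collections
    import re
    from urllib.parse import urljoin, urldefrag

    import requests
    from bs4 import BeautifulSoup

    def crawl(seed_urls, max_pages=50):
        index = collections.defaultdict(set)   # word -> set of URLs containing it
        queue, seen = list(seed_urls), set()
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                resp = requests.get(url, timeout=5)
            except requests.RequestException:
                continue
            soup = BeautifulSoup(resp.text, "html.parser")
            # Index the visible text of the page.
            for word in re.findall(r"[a-z0-9]+", soup.get_text().lower()):
                index[word].add(url)
            # Queue every absolute link found on the page.
            for a in soup.find_all("a", href=True):
                link, _ = urldefrag(urljoin(url, a["href"]))
                if link.startswith("http"):
                    queue.append(link)
        return index

    # index = crawl(["https://example.com"])
    # print(sorted(index.get("domain", set())))

Storage and ranking at any real scale are, of course, where the actual work is.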


But even if humans are bad at this, how is that an argument to allow Facebook to do this on purpose?


Facebook is not doing this on purpose. Facebook is allowing communication about this, which of course serves a purpose and actually mostly helps prevent people from carrying it out.


They are not leading people to problematic posts on purpose. However, from what we know, I think we can reasonably assume they are tuning the recommender to maximize engagement, which leads to more problematic and controversial posts being recommended.


I think you'll find in the psychiatric literature that if there's one thing that helps a lot AGAINST suicide, it's engagement. As long as you keep the patient engaged, there is little danger of suicide (with the significant exception of a patient who came in determined to commit suicide and is executing a plan). Which is why I'm saying that even keeping people engaged with strategies for suicide still works against suicide.

Of course, engagement is expensive when humans have to provide it, and is therefore often explicitly not done in clinical settings. Or, to put it differently: hospitals are surprisingly empty places for the patients staying in them, and psychiatric hospitals are no different.

Because you effectively can't do it with humans, engagement of any kind, even discussing the suicide itself, is actually helpful in preventing the "slide towards suicide".

A very recurring element in descriptions of suicide is a long history of the patient's reaction/interaction/engagement constantly dropping and "somberness", suicidal thoughts and discussions slowly increasing, then suicide attempts. Then, days or sometimes less before the actual suicide, you see a sudden enormous spike in engagement with staff, and while we obviously can't ask, it seems deliberately designed to mislead. And staff often "fall for it". That spike seems designed to make staff give the patient the means for suicide, to somehow keep them from responding to it, or to get information out of them (essentially acting when they're not looking for some reason, such as a shift-change meeting).

When push comes to shove, once enough will to commit suicide exists, nothing even remotely reasonable will prevent the suicide. So knowledge about suicide mechanics seems to me much less destructive than people obviously think.

Therefore, knowledge about suicide doesn't matter much. People see the two as obviously associated and assume causation, but knowledge of suicide is not what causes suicides. It is therefore not "dangerous knowledge".


> though I find some of the reached conclusions somewhat unexpected

> > The issue is not just about making graphic content disappear. Platforms need to better recognize when content is right for some and not for others, when finding what you searched for is not the same as being invited to see more, and when what's good for the individual may not be good for the public as a whole.

Isn't this a good thing? It's very easy for politicians and bureaucrats to simply say "ban everything", so it's welcome that they're saying "it's complicated, we don't want to ban everything, we do want to make it harder for some people to access some content". It's a more honest discussion.


It sounds only slightly better. In essence it's still the government telling people that it knows better than they do. This kind of thing is bound to produce false positives. We also know that the government doesn't care about false positives, because I almost never see a politician address abuses of involuntary commitment.


Why do we need algorithms, AI, etc.? Seems like over-engineering. Either don't put your kids online, or, if you do, the sites need to send the parents a report of all their activity -- if parents don't take responsibility to review or control what their children get up to, no AI can do it for them.



