I saw a heartbreaking YouTube mini-documentary on Facebook moderators.[1] The American workers seem to get the "lighter" flags like animal abuse, which is more than enough to cause trauma with daily exposure. I shudder to think what the offshore moderators go through when dealing with human/child abuse flags.
In my view, Facebook clearly wants this to be a temporary evil, so they can use the data from human moderators as a training set for automated ML-based moderation. But I wonder how long people will have to endure this for those models to reach an acceptable efficacy.
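For concreteness, a toy sketch of that idea (this is not Facebook's actual pipeline; the data and names below are invented purely for illustration): moderator verdicts become training labels, and a classifier learns to score new content.

    # Illustrative only: train a removal classifier on hypothetical
    # moderator decisions. The tiny dataset is made up for the example.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Each record: flagged content plus the human verdict (1 = removed).
    decisions = [
        ("graphic threat of violence against a user", 1),
        ("happy birthday grandma, love you!", 0),
        ("detailed instructions for hurting someone", 1),
        ("photos from our hiking trip last weekend", 0),
    ]
    texts, labels = zip(*decisions)

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # Score new content; high scores could be auto-actioned or, more
    # realistically, pushed to the front of the human review queue.
    print(model.predict_proba(["another violent threat"])[:, 1])

The catch, of course, is that a model like this needs enormous volumes of labeled examples before it's trustworthy, which is exactly the grim interim period in question.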
> In my view, Facebook clearly wants this to be a temporary evil, so they can use the data from human moderators as a training set for automated ML-based moderation. But I wonder how long people will have to endure this for those models to reach an acceptable efficacy.
Alternatively I wonder if it's worth it to put any number of human beings through this kind of suffering for something as worthless as social networking.
Yes, this is a new thing AFAICT. Magazines that frequently published reader content didn't have to sift through content nearly as vile when going through letters - death threats and the like have always been a thing, but the cost of creating and spreading vile content has collapsed in this modern era. I've never heard of anyone getting a visit from the FBI over vague death threats on Facebook.
This kind of content came with the internet, not social networks. Traditional forum moderators have had to deal with really vile content for a long time now, along with the operators of almost any other media sharing website.
Unless you want to make it illegal for individuals to widely share content without going through an editor, this kind of problem isn't going away any time soon.
It really depends on how any particular site functions. Some sites function better than others. For example, "this kind of problem" doesn't seem to plague us here on HN.
I'm guessing that most of the stuff you hear about Facebook and ML is basically hype. Even after Christchurch, the best they could do to counteract the spread was content fingerprinting. Human moderators are likely their long-term solution.
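For anyone unfamiliar, "content fingerprinting" here means perceptual hashing: reduce an image to a compact signature that survives re-encoding and small edits, then match new uploads against a blocklist of known-bad signatures. A toy average-hash sketch of the idea (production systems use far more robust hashes, PhotoDNA-style; the blocklist and review hook below are hypothetical):

    # Toy perceptual fingerprinting: not any platform's real algorithm.
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Downscale, grayscale, then threshold each pixel against the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        # Number of differing bits between two fingerprints.
        return bin(a ^ b).count("1")

    # Hypothetical usage against a blocklist of known-bad fingerprints:
    # blocklist = {average_hash(p) for p in known_bad_images}
    # if any(hamming(average_hash(upload), h) <= 5 for h in blocklist):
    #     send_to_review_queue(upload)  # hypothetical hook

Fingerprinting only catches re-uploads of already-known material, though, which is why it fails against novel content and why humans stay in the loop.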
> Moderators could previously count on 45 minutes every week with a counselor, or two hours a day for those viewing images of child sexual abuse, with a minimum quota of one visit per quarter. Today, moderators find themselves barred from even this scant mental health care unless “their productivity was high enough for that day.”
I kind of think if you need hundreds of people looking at horrible content all day, then maybe your business is messed up and shouldn’t exist. Humans can survive without Facebook. It’s not like they’re hospice workers or criminal investigators. The only goal is to make Facebook shareholders rich.
I knew someone who did content moderation for BlackBerry (née RIM) back when BBM was huge. Any platform that enables communication can be used for illegal and horrible content. What’s changed is the fidelity. The worst I heard of on BBM at the time was illicit/child pornography shared as images and human trafficking coordinated via text. The ability to share high-res video means the horror today is more visceral for those who have to watch it.
Having said that, there’s no excuse for how Facebook and their subcontractors treat the people doing this job. They can easily afford to do better.
If it looks to an outsider like you work for company F, but you actually work for and are managed by company A, then company F has deemed you not worth the trouble of direct responsibility. They don't even see you as expendable, they just don't see you.
For what it's worth, I believe that the most egregious offenders (think violence, murder, rape, child porn) should be directly referred to the police with all the data Facebook has on them.
This at least should slow down the torrent of shit that has to be moderated over time...
Showing them to children is, at least in Germany, a criminal offense. I do hope there are similar public decency laws in the US.
In any case, people sharing murders who are not journalists or citizens trying to draw attention to an issue (think gang violence or political oppression) should have their sanity investigated.
Heard a story a while back on NPR about how Youtube is outsourcing the trauma of moderating terrible content to the Philippines. It was a glimpse into a world I had no idea existed.
Maybe that's just a convenient source of desperately poor Catholic workers.
But I wonder, maybe they could recruit people who enjoyed viewing horrible stuff. At least, if they were self-aware enough to know that the stuff they liked was horrible.
To further that thought, they could just wire up sensors to these people's brains and flash the images for them to see. If the sensors notice arousal: mark photo as explicit.
In the dystopian future, they're going to not have enough psychopaths for this kind of work, so they're going to wire up prisoners in lawless parts of the world and modify the software to detect distress... 20 hours a day, 7 days a week.
We already have prisoners making license plates as slave labor. All it takes is a couple congressmen in the pockets of a big social network to make it mandatory for “rehabilitation” to moderate social networks.
In one of Hannu Rajaniemi's dystopian futures, people sell "API access" to their brains. And ultimately, after uploading, are sold as digital slaves. Used as components of Kubernetes-like clusters.
>"Facebook, Accenture, and WeCare may try to feign ignorance or implement common liability limiting language in their response. We hope all parties do not succumb to these common and repeated trends, and instead do what is right instead of what you are legally allowed to get away with."
Well here's an interesting thought. Let's say you train a generative network on illegal imagery. Is the combination of network and weights illegal to possess? What about the images it generates?
Hopefully a small group of licensed psychologists and therapists can set up a small office across from the FB office in Austin and start serving the moderators privately.
Yeah I'm sure the therapy will be covered by the amazing mental health benefits these moderators receive through the no-cost health insurance provided by Accenture. If not, they'll have no trouble at all paying out of pocket with that cushy $14.50/hr they're making!
There is something askew about mental health coverage in the US. I think regulations were passed to try to get mental health conditions treated more like others, but there seems to be a shortage of counselors, social workers, and doctors, much more so than for other kinds of medical professionals.
I have this feeling that there's something about the incentive structure under the ACA that is choking off the availability, but I'm not sure what exactly it is. I'm very curious about the economics of running a mental health practice these days.
If you can't pay $70-140/hr to talk to a social worker every week or two, they may well offer you the option of online chat for $150/month. But I feel like there is a mystery there, because why should it cost that much? And yet, the people who charge that much seem desperately overworked and barely functional. Where does the money go? Not to doctors, because everything is done by non-MDs these days. Not to insurance, because deductibles are in the thousands.
The issue is that mental health providers are grouped separately from other doctors and health care professionals on the insurance side. Different insurers, accordingly, have dramatically different networks of mental health professionals which vary in size and quality mostly depending on the reimbursement rate. Low-paying insurers have worse networks. Some don't even manage their own mental health benefits. In the bay area, for example, Blue Shield's benefits are managed by another insurer named Magellan with famously shitty rates and a correspondingly shitty network of available professionals.
Perhaps, but specific problems of the bay area or some insurance networks being insufficient are beside my point. The providers that exist are generally in-network with the insurance plans I'm aware of in my area, it's just that they seem to be overtaxed and mostly not taking new patients.
I'm saying, in an area where $50K a year is a decent living, and people take for granted that an actual MD is not available in most cases, why is $70+/hr not translating into supply?
It reminds me of my former employer, which was not healthcare, but they were billing clients something like five times what they paid staff, and yet their response was never to try to increase the hours billed and hire more people, but only to squeeze and squeeze costs and shrink through attrition as though they were losing money.
As a matter of fact, I really should ask someone I know who works for NAMI.
I’ve been wondering this too, especially since I’ve always heard that fields like psychology are “over-saturated” and that there are tons of people with psychology degrees who can’t find jobs. Seems like if this is true there is a chance for a win/win here (people get jobs, and mental health care gets cheaper).
The majority of the excess psych grads picked the degree because it was "easy"; few are qualified for work in the field, and the hard-to-enter graduate programs filter for that.
When? Certainly not during the day: "Accenture told contractors they could go to the attached parking garage to 'stretch their legs' but to go no further".
Also, counseling isn't cheap. These people are already complaining about working at Uber to make ends meet, and now they should be paying out of pocket for the trauma their job inflicts upon them?
[1] https://www.youtube.com/watch?v=bDnjiNCtFk4