> Many years ago a colleague who works in defence told me about a job posting he'd seen but was having a moral struggle with.
This is a good struggle to have. What's ironic in many cases is that we don't experience these quandaries in other jobs, but the ethical and moral ramifications still exist. The early days of search at Google or social at Facebook probably didn't elicit the same kind of thought process as a lethality engineering post. (Anecdotally, some years ago I spoke with an acquaintance at Google who told me he enjoyed working there precisely because he was working on privacy issues that cut against some of the advertising side of the business.)
I've worked in telecommunications, industrial systems engineering, and energy. There are ethical and moral issues in the work that I've done/do as a contributor in each of those domains, even though I'm not involved day-to-day in decision making that feels particularly moral.
One of the base assumptions we probably need to make in our work is that whatever we do will always be misused in the worst possible way. If we explore that idea, it might give us some sense for how to structure our output to curtail the worst of the damages.
> The early days of search at Google or social at Facebook probably didn't elicit the same kind of thought process as a lethality engineering post.
It did for at least one person (me). I was 16 in 2004 with 11 years of dev experience, trying to decide whether to go out to SV, go to college for CS, or do something else. I was from the same city/community as Larry Page and in Zuck's age group, so it wasn't an absurd consideration to try. Lots of things went into my decision to do something non-CS related for college, but morals were one of the reasons I didn't go to SV (I objected to the professionalization of the web + Zuck creeped me out + I didn't agree with cutting out humans/curators from the search process like Google did).
It's just that until very recently, people either thought I was lying OR that I was just batshit insane. Who is invited to a gold rush and doesn't go?
I'm sure not, and hopefully the description I provided isn't a blanket one. And, to be clear, I'm also not trying to say that working for any of those organizations is per se unethical. I don't think that this is the case.
The point, rather, is that ethical and moral considerations are actually much nearer to us than they might appear at first blush. Sometimes this happens by the mere nature of the work (killing people more efficiently) and sometimes by scale (now when we surface search results, we directly affect what people learn, where they shop, how they receive advertisements, and so on, none of which was true in 1999). Navigating this isn't easy (indeed, one can argue that killing people more efficiently produces a morally good outcome; I'm not saying it's a good argument, only that it can be made), but we don't routinely equip people to think about it.
To make matters worse, our cultural assumptions shift over time. The Google/Facebook difference is illustrative. Page and Brin are a generation older than Zuckerberg, and their assumptions about what it means to be moral are probably not the same. These assumptions also change with circumstance--when you scale a business from a garage to a billion dollars, it's hard to keep your moral compass pointed at true north (assuming such a thing exists).
Anyway, I think a deep skepticism about human nature and the utility of technology is probably very useful in these situations.
But is the world better off if moral people avoid immoral jobs?
I believe the world shows there is more than enough supply of talented people willing to do immoral jobs, so removing yourself from the pool of candidates makes little difference.
Alternatively, one could work in an immoral job and make a difference from the inside.
Why not do that? Perhaps to feel impotently virtuous, or perhaps the work couldn’t be stomached by the virtuous, or perhaps the virtuous but weak are scared of losing their virtuousness...