The programmer still decides whether that code gets written, since they’re the one writing it! If you write or review a piece of software, even if the spec was written by the PM/business, you’re endorsing whatever that spec says and all of its ethical implications. “Just following orders” is a famously poor defense at this point.
> You write a tool for, let's say, recognizing faces. Will it be used for logging onto a computer? Tracking dissidents? IDing corpses? Who knows.
I mentioned this in another comment, but I'll say it again:
Irrespective of any legal/ethical concerns, yes, I would like to know! If my boss just came to me and said "build a facial recognition system", I would certainly ask how it's going to be used. Not because I care about ethics, but because it's a basic aspect of the job. You can replace "facial recognition" with "CMS" and I'd still ask.
If they tell me the facial recognition is for logging into computers, and then later decide to use it to track dissidents, that is a different concern. But I'll at least ask!
> What if you start with something 100% ethical, but your company pivots to an unethical application?
If they pivot after my work is done, I won't feel responsible. If they never used it for the original application and pivoted straight to this, I might get upset and quit, but my conscience would be clear.
> If they pivot after my work is done, I won't feel responsible.
So if you invented dynamite you wouldn't feel responsible for its use?
But let's make it a bit more personal. You write an awesome OSS YAML parser. It's so good that the GFW of China uses it as a main component, and this gets published in the news.
What would you do? Nothing you did changed, but suddenly your work is powering an unethical component.
Dynamite is a good example of why the philosophy is complete bullshit. How about you blame the stupid evil fuckers who started all of the futile wars to try to get rich in the quagmire that was European geopolitics, instead of the person who made explosives safer?
It is the same sort of stupid blame-shifting involved with the "Hippocratic oath for X" nonsense. Oaths are majorly outmoded in the zeitgeist anyway, because everyone recognizes lies are commonplace.
> How about you blame the stupid evil fuckers who started all of the futile wars
And I fully agree. Expecting people to individually bear the burden of "some oath" is a fool's errand.
My point was that software on its own, much like a fridge, is amoral. You can use it to store your groceries, or you can use it to store corpses.
That said, there are some extreme cases (like a gun) that have very limited non-violent uses. And IMO, those should be regulated instead of depending on people Doing The Right Thing™.
> So if you invented dynamite you wouldn't feel responsible for its use?
I would if I were inventing dynamite, but that's not what this scenario is.
A person working for a knife manufacturer need not worry about it being used for murder, as that's not its primary use. And facial recognition is a lot less harmful than even that.
Trust me: I work for a company that produces certain goods used for all kinds of good and nefarious purposes depending on who buys it. My conscience is clear.
> But let's make it a bit more personal. You write an awesome OSS YAML parser. It's so good that the GFW of China uses it as a main component, and this gets published in the news.
> What would you do? Nothing you did changed, but suddenly your work is powering an unethical component.
I wouldn't do anything:
1. This is milder than the knife scenario above. Of course I don't care if people use it in a poor way - unless there is a straightforward technical mitigation I could apply. In your example, given that the source code is available, that is not an option.
2. There's a certain hypocrisy in releasing something as open source and then complaining about how it is used. If it bothers you, then modify your license! (The JSON license's famous "The Software shall be used for Good, not Evil" clause is one real attempt at exactly that.)
That's hypocritical. Nobel didn't invent dynamite because he wanted people to blow themselves up. He invented dynamite because nitroglycerin was a horrid mess used in mining.
He definitely didn't have an easy technical solution to the problem of people misusing dynamite.
You can either say "do nothing" in both cases, or "do something" in both cases.
Right now your boss has no reason not to tell you: people at GitHub knew their software was being used to support ICE while families were being separated. People at Microsoft knew that Microsoft had contracts with the military. Google engineers knew about Project Dragonfly.
Right now bosses don't even have an incentive to lie about it, because no engineer is obligated to give a shit about the society they live in, broadly speaking.
This is a good time to point out that a significant chunk of the population isn't opposed to working for the military, for ICE, or for defence contractors who make weapons; they don't view that work as unethical. Moreover, the origin of Silicon Valley, and indeed the entire internet, is DARPA contracts and weapons manufacturing.
Any oath would either not be taken by those people, would be watered down so far as to be meaningless, or would require the entire industry to refuse to make weaponry. The first and second are ineffective and the third is ludicrous.
This is a great point, and reasonable people can disagree about when an application's abuse or potential for abuse crosses the line. The same goes for pivots or for general-purpose code that's used elsewhere. (Is it ethical to contribute to internal tools at Facebook? What if those tools make other engineers way more effective at doing things that ultimately undermine democratic systems?)
My point here isn't to dictate what software is or isn't ethical, but to argue that if a program is unethical, its ethical implications are the responsibility of the engineer(s) who wrote it.
Exactly. I can’t believe all the blame-shifting I’m reading in this thread! It’s as if software engineers are suddenly these powerless victims, lacking agency over their work, only capable of saying “yes, boss, whatever you say, boss!”
If a civil engineer’s manager told them to design an unsafe building or bridge, they wouldn’t just say, “Sure thing, manager! One death trap coming right up!” It is their ethical duty to design it safely.
A bridge is limited to a single purpose, like an appliance. If you insist on veto power over every outcome of what you build, that means you can only ever build sealed appliances for hapless consumers, not unfettered tools that empower clever human beings who will use them in unanticipated ways. Having sworn a Hippocratic oath, are you allowed to work on LLVM, which half of all evil apps probably depend on?
I could get behind a requirement that code be reliable and fit for purpose, though very few of us have any experience with the formal methods that might get us there, and most don't want to work that way.
Imagine if your manager copied the safe bridge you designed with a magic replicator and used that exact same design somewhere else. You tell him that the bridge was not designed for that location and that it will collapse in 5 years. Your boss fires you, but you are still responsible for the collapse of the bridge.
Let's go further into absurdity. The engineer kidnaps the manager's daughter and blackmails the manager into taking the bridge down. Is it ethical to force someone else to be ethical, even if it's only possible through unethical means? What if a hero saves the daughter? Would the hero be liable for the collapsed bridge?
Eh, this isn't a great analogy either. If I'm an engineer who develops a single beam in a bridge, is it my fault if someone assembles those beams in a way that is dangerous? At what point does a function become unethical? Do you now have to use only software made in your country by ethical developers?
Software isn't a bridge and comparisons fall apart quickly.
Engineers at the beam companies just certify that the beam meets its specs.
I'm not sure why software would be any different. Bridges are complicated and made up of versatile submodules, just like software. Some other software engineer eventually designs the "bridge" and selects "beams" for the structure. If those beams fail to meet their specs, then the engineers who stood by them are at fault. If the bridge fails because the beams weren't used in accordance with their spec, or didn't have a spec at all, then the engineers who approved their use in the bridge are at fault.
Wow, I’m realizing what an unpopular opinion this is on HN! Yes, as a software developer you should absolutely be accountable for the ethical concerns around what that protobuf you’re moving from one API layer to another is used for. You’re not a code monkey. Ask, and refuse if it’s unacceptable. I have quit jobs where the ultimate purpose of what I was building was evil.
EDIT:
From the original OP:
> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be.
They are accountable to themselves and their own conscience before both their bosses and their users. I understand this is an uncomfortable line of thinking if your employer asks for ethically questionable project work, but I’d argue that if this is the case for you, it warrants career introspection.
There is a gigantic difference between building an unethical software product and abusing an ethical software product for unethical purposes. The developer is not the user. Do you not understand that?
Sure, same as the LLVM example someone else pointed out. Good points. I’ll qualify my opinion, then: to the extent that engineers can know the ultimate application of their work, they should be responsible for ensuring it is used ethically.
So the engineer writing a binary search while knowingly working on “Project Orbital Death Ray” or “Voter Suppression 2.1” should know better. I hope we can at least agree on that one.
The engineer writing a linked list or moving around Protobufs for some open source toolset gets a pass, because their project as they understand it is ethically neutral. BUT there will be that engineer who then takes those tools and integrates them into “Project Orbital Death Ray”. That’s maybe where accountability should begin.
Everyone’s talking about the managers taking the blame, and yes, they’re culpable too. But at the end of the day, an actual software developer’s fingers type the code in. If that developer knows what they’re working on, they need to bear the responsibility, too.
This argument is more akin to saying it’s the builder’s responsibility to decide whether a bad architectural decision should be built or not. They might bring it up, but it’s not really expected that they get to decide.