But ultimately there's always a software engineer involved in the creation of software - and that's not true of any of the other roles you mentioned. Since software engineers are necessary and sufficient to produce software, they should always be held responsible, and any oath should fall on engineers.
> To say it’s on the engineer to do no harm puts them in the tenuous position of doing the job or being replaced by someone who will.
Well, yes - if there were no tradeoffs there would be no point in having an oath to begin with. But there are software engineers today, including some on HN, who do things more harmful and unethical than medical malpractice, and they are personally culpable for the decision to do so - just as their replacements would be if they refused. I would also like to see laws criminalizing those individual engineers' conduct - maybe you're alluding to the same thing? - but an oath is a good start.
So management bears no blame for requiring illegal work be done, on pain of termination? Said another way, engineers now need to be technical and legal experts in the business domain?
(Remember employees in the US depend on the company for health insurance. Saying 'no' could cost a lot more than just one's position.)
Most software engineers are not like doctors. We have little autonomy over what is created. Our responsibility is primarily the how. And with devops sometimes the actual deployment and maintenance itself.
> Said another way, engineers now need to be technical and legal experts in the business domain?
Consider something like the 737 MAX debacle – did the programmers writing the MCAS code actually have enough aviation domain knowledge and understanding of where the component fit in the overall system to realise it was a threat to people's lives?
I don't know, but my guess is the most likely answer is "No".
From my limited information, the MCAS code was primarily causing problems in association with incorrect readings from damaged sensors. Of course, one could argue that this is an engineering failure because the MCAS failed to account for wrong sensor input, but when you consider the legal implications of an MCAS fallback, there is actually not much that can be done on the software side.
The MCAS is an optional component that reduces certification and training costs. It is definitely possible to fly the plane without accidents even with a disabled MCAS. So why can't the MCAS be turned off automatically when sensors fail? Because that changes the classification of the plane and therefore requires pilots to be certified for a new machine and receive new flight training for both MCAS and non-MCAS modes.
If the software engineer were under a Hippocratic oath then he would have to refuse to build the MCAS entirely, but not because the idea of an MCAS is inherently unethical; no, he would have to refuse because the company he works at wants to use the MCAS for an unethical purpose (namely, to operate MCAS and hide its existence even when it is unsafe to do so).
This is basically a reverse audit, but the software engineer has no authority to conduct such an audit, and even if he were allowed to, the business has no obligation to give him the necessary information to determine whether the MCAS will be used unethically.
> he works at wants to use the MCAS for an unethical purpose (namely, to operate MCAS and hide its existence even when it is unsafe to do so).
You think a programmer, handed a spec and asked to implement it, can be expected to know that their employer (or the employer's customer) wants to use it for an "unethical purpose"?
Again, I can't know for sure, but I doubt the programmers who wrote MCAS (who most likely didn't even work for Boeing, but rather some subcontractor) actually knew, or could have known, how the code fit into Boeing's larger purposes.
With all of that being said, were these software engineers (probably subcontractors) even given access to actual MCAS readouts or, more likely, only virtualizations of expected readouts? These people probably didn't account for this type of faulty readout because the virtual machine never put out that type of fault.
Most development in these large companies is so compartmentalized that it's next to impossible to see the whole structure from a software engineer's perspective. You need to be at a management level to understand how most of the pieces really come together, which is the only place where one of these "oaths" might have an influence. At that point, however, the selection is so goal-oriented that I doubt people would take that oath.
Generally, there is somebody (typically in software assurance or systems engineering) who is supposed to ensure the fidelity of the simulator. Additionally, the hazard analysis or failure modes and effects analysis should trace to specific test cases.
Of course, there are all kinds of pressures that make these fall through the cracks. I vaguely remember an article stating that some of these documents in the case of MCAS were not up to date.
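To make "trace to specific test cases" concrete, here's a toy sketch of the kind of traceability check that process implies - every listed hazard has to map to at least one test case. All IDs, names and severities below are made up for illustration, not from any real FMEA tooling:

    // Toy sketch (hypothetical IDs/names): every hazard must trace to >= 1 test case.
    type Hazard = { id: string; severity: "catastrophic" | "hazardous" | "major" };

    const hazards: Hazard[] = [
      { id: "HAZ-001", severity: "catastrophic" }, // e.g. uncommanded nose-down trim
      { id: "HAZ-002", severity: "hazardous" },
    ];

    // Trace matrix: hazard ID -> test case IDs that exercise it
    const traceMatrix: Record<string, string[]> = {
      "HAZ-001": ["TC-101", "TC-102"],
      // "HAZ-002" missing: the check below should flag it
    };

    const untraced = hazards.filter(h => !(traceMatrix[h.id]?.length));
    if (untraced.length > 0) {
      console.error("Hazards with no traced test case:", untraced.map(h => h.id));
    }

And when documents like this go stale, that check can come back clean not because everything is covered, but because the hazard list itself is out of date.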
What the actual hazard analysis showed is that Boeing did not have the technical insight at the right level.
The HA listed MCAS as "hazardous" rather than "catastrophic". Meaning those in charge of that process document did not realize MCAS had the ability to down the airplane. I know it's tempting to arm-chair quarterback this, but let's assume they should have realized this hazard.
To your point, maybe the programmer doesn't have the systems knowledge to make those calls, but the process is predicated on somebody having both the technical acumen and the responsibility for those decisions. This process broke down though.
> You think a programmer, handed a spec and asked to implement it, can be expected to know that their employer (or the employer's customer) wants to use it for an "unethical purpose"?
No, imtringued does not.
imtringued wrote:
> This is basically a reverse audit, but the software engineer has no authority to conduct such an audit, and even if he were allowed to, the business has no obligation to give him the necessary information to determine whether the MCAS will be used unethically.
imtringued is saying that it would be impossible for a software engineer to determine whether what they were asked to do was ethical or not.
> If the software engineer were under a Hippocratic oath then he would have to refuse to build the MCAS entirely, but not because the idea of an MCAS is inherently unethical; no, he would have to refuse because the company he works at wants to use the MCAS for an unethical purpose (namely, to operate MCAS and hide its existence even when it is unsafe to do so).
One, it's not going to be clear from the request that the MCAS would be used in unethical ways.
Since the Hippocratic oath is the argument here, how many software developers want to work in a system closer to physicians? A national cartel controlling membership and licensure - tough luck if you want to hire more developers because there's an artificially limited supply. Mandatory academic training - goodbye self-taught developers. Follow-on training with pay 1/5th or less of your attending physicians - I know residents in specialties where attendings are paid $500k a year to start, and they are making $60k a year. Brutal shift work - residents work 70-80 hours a week easily. Toxic leadership - I've heard horror stories of residents being forced to lie on ACGME forms regarding their hours under penalty of being outright fired from their residency slot, which would make it nearly impossible to get a job as a physician (mainly because you'd have to apply to a different residency program and explain your termination).
I know they're not suggesting bringing the entire medical education & training structure over to tech workers, and everyone here likes to think that they're brilliant and changing lives every day but most of us are just throwing shitcode JS into a computer for 3-4 hours a day for an ad tech company and not much more. The comparison falls apart pretty quickly.
>actually not much that can be done on the software side.
This isn't exactly true. There are mitigations (both software and non-software) that are expected to be done depending on the hazard analysis. One of the items discovered is that Boeing mischaracterized the MCAS hazard (it should have had a "catastrophic" hazard class). In addition, they didn't appear to follow their own process for the dual inputs required even for the lower severity class assigned. The "optional" part of MCAS was the secondary sensor reading into the software.
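To put the "dual inputs" point in software terms, here is a minimal sketch of the kind of cross-check people are talking about - compare both AoA sources and inhibit automatic trim when they disagree. The threshold and names are made up for illustration; this is not Boeing's actual logic:

    // Illustrative only - not Boeing's actual design, names, or thresholds.
    const AOA_DISAGREE_LIMIT_DEG = 5.5; // hypothetical disagreement tolerance

    // Return whether automatic trim commands (MCAS-style) should be permitted
    // given two independent angle-of-attack readings.
    function autoTrimAllowed(aoaLeftDeg: number, aoaRightDeg: number): boolean {
      const disagreement = Math.abs(aoaLeftDeg - aoaRightDeg);
      // If the sensors disagree, trust neither for automatic trim; annunciate to the crew instead.
      return disagreement <= AOA_DISAGREE_LIMIT_DEG;
    }

    // A failed vane reading 74.5 deg against a plausible 15 deg would inhibit the system:
    console.log(autoTrimAllowed(74.5, 15)); // false

Whether the plane is then allowed to keep flying in that degraded mode is exactly the certification question raised upthread.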
>The MCAS is an optional component that reduces certification and training costs
No. The MCAS was a "necessary" component for pitch stability - without it, a 737 MAX in a pitch-up attitude would, in the absence of correcting inputs, pitch up further and further until it stalled. Without it, the airframe is uncertifiable, full stop.
I'm certain that's not correct. Everything I've read on it has said MCAS was specifically a software modifier put in place to allow the plane to respond substantially the same as a regular 737 without the larger engines, in order to avoid additional training for all 737 pilots worldwide.
Most aircraft, in a "pitch up attitude" will increase their angle of attack as thrust is applied. The issue was that the MAX would do so in a more radical way than the regular 737 did, and so the software was put in place to limit that so it flew like a regular 737 as far as the pilots could see.
Conceptually, MCAS wasn't a bad idea. The execution and using it as a replacement for training and not informing pilots of the flight characteristics changes between the models was stupid.
Although to be fair my summary wasn't entirely accurate - it wasn't that a MAX was outright dynamically unstable with no control input, as I described, but rather that it was not stable enough to produce a monotonic increase in stick force as AOA increases, which can cause the combined system of pilot + flight dynamics to be unstable, since the pilot relies on stick force as an indicator.
> Most aircraft, in a "pitch up attitude" will increase their angle of attack as thrust is applied
This is both incorrect and irrelevant. Most aircraft will climb when power is applied, but will not change their AOA unless the thrust axis is off-center. To a first approximation, power controls climb rate, and stick input changes AOA. Change in behavior under different power settings has little to do with the problem with the MAX. The problem with the MAX is that at high angles of attack - i.e., when the stick is held back, causing the air to meet the wing (and the engines) at a steeper angle - the engines, which are flung forward, start producing lift of their own and produce a pitch-up moment. This means that the further the pilot pulls the stick back, the less hard they have to pull. This is a dangerous inversion that increases the control order of the system, as it breaks the usual assumption that a given stick force will result in a given AOA, more or less.
Right -- it didn't give the exact same feedback to the pilot that the regular 737 did, which was why MCAS was created. The aircraft is no more or less unstable than a regular 737.
The original 737 does exactly the same thing the Max does with respect to producing a pitch-up moment -- as does nearly every other aircraft. It's just not nearly as pronounced as the Max is.
I'm sorry, none of that is correct. Did you read my link? The aircraft doesn't meet FAA regulations without the MCAS.
>The Boeing 737 MAX MCAS system is there ONLY to meet the FAA longitudinal stability requirements as specified in FAR Section 25.173, and in particular part (c) which mandates "stick force vs speed curve", and also FAR Section 25.203 — "Stall characteristics".
A doctor refusing to do a procedure because he worries for his patient is seen as a good guy doing what is right; he is, in his opinion, saving a life. In addition, he is trying to avoid the massive cost of a medical malpractice suit.
A software developer refusing a job because it does not meet his ethical parameters is just an unemployed software engineer.
I think one of the issues is that the domain of software is vast. One developer may be working on a basic CRUD app while the next is working on safety-critical code on a vessel going to Mars.
There are definitely areas where the prudent thing for a developer to do is raise a dissenting opinion, if not halting work. What seems lacking is clear industry consensus standards to back up that decision.
If a doctor says no, it's because of legal liability and risk to licence.
For the same reason, it's harder to hire some rando budget doctor, because the field is gatekept by the requirement of a licence, and liability.
You can't magically make Engineering the same without the same conditions. Add barriers to entry that see my pay rise, or have cheap programmers with no liability.
Engineering already has these barriers, they just aren’t required or enforced.
I’ve never been on a project that requires a software product stamped by a licensed engineer. NCEES dropped the software license because there was so little demand, compared to, say, civil engineers who consider a license a rite of passage to career growth.
You need a licence to practise medicine; if there is something called a "license" for engineering but it isn't required in order to practise, then it's not the same. If we have the same barriers, but not enforced, then we don't have the same barriers.
That’s not quite right. You actually do need a state license in the US to practice engineering for the public except for a few basic instances:
1) you work for the federal government
2) you work under a licensed engineer
3) you work under an industrial exemption
There are differences depending on the state. There has been a more concerted effort to remove #3 recently due to both political reasons and the technical issues in this thread. Most people performing engineering work under an industrial exemption don't actually realize it. Again, this is different state to state. For example, in some states you can't start a business with "engineering" in the name unless you have a certain percentage of owners/principals with an engineering license.[1]
What actually happens is that terms get conflated in common parlance. "Engineer" and "engineer" are not necessarily the same. For example, a computer engineer may work under an industrial exemption (due to working in a manufacturing service) while a software engineer does not. Legally, an "Engineer" claims an explicit responsibility to public safety.[2] Apropos of the headline article, there is a distinction with this difference.
From my experience, software engineers don't rotate nearly as often in traditional cyclical and defense businesses (auto, aerospace) as they do in the FAANG technology sector. There are a lot of grey beards who weren't so grey when they started.
That's completely the wrong comparison and context.
The reason MCAS came about was because management wanted to try to fudge a larger engine into an outdated design created for a different purpose rather than do the engineering and certification necessary for the new requirements and to update the system.
Management wanted to save money. Of course the engineering leadership did not want to fudge something -- they wanted to do proper engineering. But the people in charge just wanted to save money, and the engineering leadership could not do anything otherwise, even though they knew that just making the engine larger and compensating did not make sense from an engineering standpoint.
By the time it got to the MCAS, that was far down the line of the decision to not do proper engineering.
Which demonstrates the irrelevancy of a Hippocratic oath for engineers. Because the complexity of the system is such that no single engineer understands it all, and thus no single engineer is (or feels) responsible.
Blaming it on management is also irrelevant, since management merely takes the advice of engineers and makes financial/business tradeoffs to maximize profit. If the engineers cannot tell that the MCAS system could fail this way (due to the complexity), management will not question it.
If the engineering leadership said "this is not a good idea," and management said "it saves money"...
Engineering failed to convince management. Management didn't have the understanding that it was a bad idea.
Is it now no one's fault?
If management merely takes the advice of engineers (and others who specialize in the things that they do not), and they choose to ignore it because they do not understand the things they do not specialize in, I believe it's reasonable to assume that management is more at fault than engineering (I'm not sure there is a situation here where any party is faultless).
Hippocratic oath for engineers is irrelevant, correct. But management does not take the advice of engineers. As I said, the engineers wanted to do proper engineering, but management wanted to save a buck, so they instructed engineering to fudge a cheap solution.
I'm not saying engineers taking an oath helps; it's about the executives... maybe something about the thread structure implies that, but I actually only read the comment above.
> So management bears no blame for requiring illegal work be done, on pain of termination? Said another way, engineers now need to be technical and legal experts in the business domain?
This is a red herring. The oath is not needed merely for illegal work. In fact, the more common use cases will likely be legal. It's a common sentiment, but: don't conflate ethics with legality.
> Most software engineers are not like doctors. We have little autonomy over what is created. Our responsibility is primarily the how. And with devops sometimes the actual deployment and maintenance itself.
This is not a dichotomy - there can be a spectrum. You can restrict it to those who do know what the product is used for, or at least have good guesses for them.
And while not everyone is this way, I wouldn't really want to work in a job for long if I'm not told what the code I'm writing is for. It's not even an ethical concern for me - it just makes for a boring job. Ideally I want people to tell me the problem they are solving and give me some leeway in crafting a solution. Don't come to me with a solution and ask me to implement it.
How much autonomy do you think doctors have? They're not in the operating room inventing new procedures. They are following careful scripts, adapted for the intricacies of one particular human body. There are times where they need to qualitatively improvise, yes, but that's generally only when something has gone horribly wrong.
Also, the Hippocratic Oath is not terribly complicated. It basically says, I will not furtively and maliciously hurt people in an abuse of my authority, and I will try to heal them when they are sick. I don't think it's a lot to ask software engineers to agree not to create knowingly malicious software. It actually addresses exactly the problem you describe. If everyone has taken this oath, and adheres to it, there is no "someone else" to do that evil work.
Last point, there is a wide variety of software engineering work out there. Some of it may be mindless of the bigger picture of what is actually happening, but for any sufficiently advanced behavior to emerge out of a complex software product, some engineer at some level has to have some idea of the path they are going down to create or allow that behavior. And every engineer has the ultimate autonomy over how and what is created because it is our hands on the code. If you don't understand that, you don't understand the power of the profession.
> So management bears no blame for requiring illegal work be done, on pain of termination?
No, I wouldn't say that. In many cases, management and engineering share the blame jointly and severally since they both have an opportunity to stop it.
> Said another way, engineers now need to be technical and legal experts in the business domain?
Engineers should know enough about their business domains to understand the ethical impacts of their work. Ethics and law are orthogonal, so thankfully this is generally much easier than being a legal expert.
> (Remember employees in the US depend on the company for health insurance. Saying 'no' could cost a lot more than just one's position.)
Thankfully, the healthcare safety net in much of the US is far better than it gets credit for, and the pay and availability of opportunities for software engineers in the US has generally been quite good. I'm sympathetic to this argument in general, which is one reason I don't think there should be an oath for, like, Amazon warehouse workers, but I'm far less sympathetic for anyone making 5+ times the median US income.
> Most software engineers are not like doctors. We have little autonomy over what is created. Our responsibility is primarily the how. And with devops sometimes the actual deployment and maintenance itself.
Your responsibility as framed to you by the business is the how, but upstream of the how is the question of whether or not to do it at all. If you contribute to a piece of software, you've tacitly answered yes to that question.
> Engineers should know enough about their business domains to understand the ethical impacts of their work.
This would be more reasonable in the era of 10-20 years in the same company or industry. Needing to job hop every 2-3 years for a decent raise, and software skills applying to a vast array of industries, makes it less reasonable IMO.
> ...but I'm far less sympathetic for anyone making 5+ times the median US income.
Not everyone here or in software makes that kind of money. Some of us in the Midwest--or who aren't as skilled at negotiating--don't pull down nearly that much.
> This would be more reasonable in the era of 10-20 years in the same company or industry. Needing to job hop every 2-3 years for a decent raise, and software skills applying to a vast array of industries, makes it less reasonable IMO.
Yeah, to be clear, I don't think software engineers have an infinite level of responsibility for understanding the ethical implications of their work. If you were a software engineer at a credit rating agency in 2006, and you didn't see the ethical dilemma because you didn't anticipate that contagion would be exacerbated by the shadow banking system to bring down the global economy, you get a pass. But if your prospective employer is, like, locking children in cages, or spreading disinformation on political candidates, you should probably find that out during the interview process.
> Not everyone here or in software makes that kind of money. Some of us in the Midwest--or who aren't as skilled at negotiating--don't pull down nearly that much.
Good point - I'm also in the Midwest and make less than that, for what it's worth. I've naturally had FAANG in mind as I type these comments, and more generally I think salaries for the more unethical roles tend to skew higher.
> Yeah, to be clear, I don't think software engineers have an infinite level of responsibility for understanding the ethical implications of their work.
Yep. That's more the responsibility of product managers, upper management and chief architects/engineers.
Ethics and law are not orthogonal. Look at the definition from Wikipedia: "Ethics or moral philosophy is a branch of philosophy that 'involves systematizing, defending, and recommending concepts of right and wrong behavior'."
Ethics forms the basis on which law is built.
And, given that, it is not simpler to make an ethical decision, it is harder. The decision should be worth being the basis for a law; how is that simpler than following the law?
Even if you take the definition of an ethical decision from Wikipedia: "An ethical decision is one that engenders trust, and thus indicates responsibility, fairness and caring to an individual." These words can bear negative connotations - if I beat people, I should be trusted that I will beat people, I should beat people fairly and I should care to beat an individual thoroughly.
Yes, I fully see that you are talking about principles. It can be seen that it is easier to make decisions from principles. But you can misguide yourself about the application of these principles.
Edit to add: If you are being asked to perform illegal or unethical acts as part of your employment, then perhaps termination is an ideal course of action? Unless of course your personal enrichment outweighs legalities or ethics in your worldview?
And I wasn't implying engineers should be entirely blameless. Everyone has a limited understanding of legal systems too complex for one person to fully grasp. And workers far below the level of decision makers should be judged according to evidence of their knowledge and responsibility. Likewise, those who give orders should bear more responsibility.
All these "companies take on a life of their own" arguments sound a lot like executives priming the pump of potential jurors with excuses. If decision makers cannot bear responsibility because of a company size or organizational structure then we can make some sizes and structures illegal before they stumble/march into devastating incompetence.
> ...then we can make some sizes and structures illegal before they stumble/march into devastating incompetence.
Was with you until this part. Just hold them personally liable if someone gets hurt should they create an uncontrollable system and predictably fail to control it.
Right. My point was in response to excuses being made elsewhere that the nature of large companies mean these executives cannot be personally liable. So if we accept that the nature of huge companies is no one can be liable (I'm not convinced yet) then it would be time for capping sizes or outlawing structures.
Keep in mind the US already has laws around corporate structures and conflicts of interest. (Even if they're selectively applied.)
The nature of any size corporation is to have one person in charge. In terms of assigning responsibility I'd think that works better than the alternative you'd get by breaking it up. Namely a bunch of cooperating smaller firms only doing part of the job each, and able to point the blame at each other.
We heard the "too complex to understand" excuse a lot regarding the pricing of subprime debt. Except a lot of people did understand it was a problem. It's basically the "I'm too stupid to know what I was doing" defense. If we accept that defense and try to make regulation to protect them from failing (as was done in finance back then), we basically allow stupid people to continue to be in charge rather than being replaced as they need to be.
It may be fine in some countries, but saying that you’ll make some organization sizes and structures illegal, barring other criminal activity, smells like a violation of the freedom of association.
It doesn't matter if it's a good freedom, the chances of the US repealing the 1st Amendment any time soon are basically nil. You'd have a better chance of getting Apple/Amazon/Google to voluntarily split up their own companies out of the goodness of their own hearts -- it just isn't going to happen.
The only argument that actually matters here is whether or not restrictions on corporate structure actually do violate freedom of association or not.
I'm reasonably skeptical that they do, given that the 1st Amendment hasn't stopped us from enforcing antitrust and monopoly legislation in the past. Yeah yeah, Citizens United and all that, but we regulate companies all the time.
But I'd still want an actual lawyer to weigh in on that, I wouldn't feel confident saying that there aren't limits on how far we can go in that direction.
Antitrust doesn't violate the first amendment, so clearly limits on corporate scale aren't unconstitutional, so the legal defense is insufficient and the moral question stands.
> I'm reasonably skeptical that they do, given that the 1st Amendment hasn't stopped us from enforcing antitrust and monopoly legislation in the past. Yeah yeah, Citizens United and all that, but we regulate companies all the time.
> But I'd still want an actual lawyer to weigh in on that, I wouldn't feel confident saying that there aren't limits on how far we can go in that direction.
It doesn't necessarily hold that because one thing is legal, everything is legal. For example, we have 1st Amendment restrictions on threats and libel, but in the US hate speech is still protected speech. 1st Amendment exceptions are generally pretty narrow and specific in the US.
In the same way, clearly some corporate regulation is OK. It does not follow that there's literally no limit on what the government can dictate about how a company can operate. I would prefer to get input from a lawyer before asserting that so confidently.
Because it allows people to associate with whom they choose to. Remove that and you’ve opened the gate to legal racism, legally institutionalized homophobia, banning of religion; the list is endless. The five freedoms are the pillars of our Constitution. Without them, we are no better than China or Russia or even any third-world hellhole you care to mention.
I don't see that even a little bit; your cause and effect isn't explained.
I should phrase it differently. Why is an absolute freedom of association more important than the freedom from being harmed by large associations with amoral machinations? The original argument asks that if large corporations inherently obscure moral outcomes, maybe they are immoral, which is an argument that puts these two moral axioms in conflict. Simply stating that one side wins is thought-terminating; it's important to argue for why it's better.
Morality is highly variable, depending on the observer's belief system. Legality is the only framework that we can establish in common. Ethics comes in second, as it can be established by a group and does not bind those outside the group.
We can take lessons from other engineering domains. Disregarding exemptions, engineers who design and build for "the public good" have someone who is ultimately responsible. A newly minted civil engineer, for example, may be low on the hierarchy but has to work under the direct supervision of a licensed Professional Engineer. That licensed PE is the one responsible, legally and ethically, for not just the "what" but also the "how". They may work for a project manager but ultimately it's their stamp that allows the build.
As someone who works in safety-critical code, nothing irks me more than when people absolve themselves by saying "I'm just a programmer, that's not my job/problem". We need to hold ourselves to a higher professional standard.
> Most software engineers are not like doctors. We have little autonomy over what is created. Our responsibility is primarily the how. And with devops sometimes the actual deployment and maintenance itself.
If you're an actual engineer of any kind, you always have some choice on this. You make architectural decisions every day, and you generally work for places that do, in fact, take your input into consideration. If you don't work for any of those kinds of places, then you are still responsible because you wrote the code to enable it. You can always say "no". There are consequences for that, for sure. You can always quit as well. And it may still get made. But it won't be by you.
And sometimes, that's still better than the alternative.
You are missing the point; the idea is that an average developer at a major bank has a very poor understanding of exactly what the impact of his code will be. He generally has neither the information (secrecy, need-to-know basis) nor the understanding of the financial system.
On the contrary, a doctor's decision affects the life of an individual patient in a very clear and understandable manner.
Just check out the cases of the VW software devs who added code for emission test cheating. Check out the case of the dev who was copy-pasting code at Toyota.
Illegal stuff is illegal because not knowing the law is no excuse. For your own safety and good, you better have some grasp on legal stuff in your domain. Don't have to be an expert.
I don't think the drone operator is a good example. There are situations in which unethical things must happen, no matter what. If wars could be fought without killing we'd be doing that already.
> Since software engineers are necessary and sufficient to produce software, they should always be held responsible, and any oath should fall on engineers.
That's not true in any way. Lots of software is written by people who don't even have a degree, others by some who have a computer science degree, but not an engineering one, etc.
The other issue is that software is rarely unethical. The unethical bit often comes from the way it is used.
And I'd have to agree with OP. In an idealist world you could assume software engineers would all be ready to quit their job at any sight of an unethical affair - even, say, launching something to production with a known vulnerability, or without anything but the most rigorous security review process having passed. But in practice, you're not going to achieve this result unless you put a framework in place to incentivize software engineers towards being ethical. If you allowed them to sue their employer, and made it so they more often win the lawsuit, for asking them to build something unethical, or for insisting that they do so even after the SE said it was unethical, or for retaliating in any way against an SE refusing to build something on grounds of ethics, then you'd maybe start to see results. Otherwise, it won't happen, and you've only created a scapegoat that makes it even easier for companies to push for unethical software, since they can now just blame the SEs they coerce into building it anyway.
The problem is, software can be so complex that it's possible to make it so no one programmer can understand the whole picture. The tasks can be divided so the individual programmers are given orders like "do X in condition of Y", which themselves seem harmless and lawful, but combining them leads to malicious behaviours.
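A contrived sketch of how that division of labour can look, loosely modelled on the emissions-cheating cases mentioned elsewhere in the thread (every name and value here is hypothetical):

    // Task A, handed to one programmer: "report whether operating conditions are steady".
    // Looks like an innocuous diagnostics helper.
    function isSteadyStateRun(wheelSpeedKph: number, steeringDeg: number): boolean {
      return wheelSpeedKph > 0 && Math.abs(steeringDeg) < 1; // wheels turning, steering untouched
    }

    // Task B, handed to another programmer: "select a calibration profile from a mode flag".
    // Also looks harmless in isolation.
    function selectCalibration(conservativeMode: boolean): string {
      return conservativeMode ? "low-emissions-profile" : "performance-profile";
    }

    // Only whoever writes (or specifies) the integration sees that "steady state"
    // is really being used as a proxy for "we are probably on a test bench":
    const activeProfile = selectCalibration(isSteadyStateRun(50, 0.2));

Neither function is unethical on its own; the spec that wires them together is.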
This is precisely the problem. Few physicists knew they were all collectively building a nuclear bomb because the big picture was well compartmentalized. Feynman only found out because he picked the locks of his colleagues to piece it all together.
Just finished "Atomic" by Jim Baggott, a history of the development of the bomb. From my reading, lots of the physicists knew. They pushed forward as they didn't know the extent of the German project (which had stalled due to lack of resources and a belief that, while possible, a bomb wasn't practical due to the need for a large amount of material). The safe-cracking was for amusement.
Can you point to the part of the book where it is stated that only a few physicists (not including Feynman) knew this? I remember it very differently. From my memory:
- lock picking was for fun and didn't gain info
- some warehouse workers didn't know about the critical mass of uranium and stored the stuff too closely together
Maybe you meant the part where it is clarified that the weapon was not developed against Germany but strategically against the USSR (if I remember that part correctly).
How about a compromise then? Software engineers are responsible for the negative consequences of the software. Any and all responsibility they take is then also shared with every person that is above them in the hierarchy.
E.g., a developer does something, society finds this unethical and punishes them. The developer's boss, the boss's boss, the boss's boss's boss, etc., up to the CEO all get punished in the same way. Furthermore, to avoid companies trying to shield themselves from this by putting their developers into a different company, it will apply to software that you get from someone else too.
Suddenly this doesn't sound very appealing anymore, does it?
> Any and all responsibility they take is then also shared with every person that is above them in the hierarchy.
Currently, if I write software that performs illegal actions -- let's say software that allows me to use unlicensed Adobe products -- at the request of my boss and their boss, all three of us would be legally liable.
That makes no sense. Management just needs to decide on an ethics standard for the entire company and install an audit process that maintains that standard. That's the entire problem. Don't ask a handful of employees to do some charity.
I think something should already kick in if you create tracking pixels to read canvas data to identify users, or generally work on fingerprinting. Especially if it is for a purpose as "benign" as advertising, an industry that is notoriously toxic and would have no problem selling every kind of data it gets its hands on. It is fine to generalize it that way in my opinion. These techniques directly conflict with the spirit of the law in most countries regarding privacy.
Aside from that, the quantification of attributes/properties of people can have negative implications for many people. Oversharing is a problem on the net, but at least here people just endanger themselves.
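For anyone unfamiliar with what "read canvas data to identify users" means mechanically, it boils down to something like the sketch below - render something, serialize the pixels, hash the result, and rely on the fact that different GPU/driver/font stacks serialize slightly differently. This is a toy illustration (browser environment assumed, and the hash is deliberately simplistic):

    // Toy canvas-fingerprinting sketch - illustrative, not production tracking code.
    function canvasFingerprint(): string {
      const canvas = document.createElement("canvas");
      const ctx = canvas.getContext("2d");
      if (!ctx) return "no-canvas";
      ctx.font = "14px Arial";
      ctx.fillText("fingerprint-test \u{1F600}", 2, 16); // text + emoji exaggerate rendering differences
      const serialized = canvas.toDataURL();              // pixel data as a data URL
      let hash = 0;
      for (let i = 0; i < serialized.length; i++) {
        hash = (hash * 31 + serialized.charCodeAt(i)) | 0; // toy 32-bit rolling hash
      }
      return hash.toString(16); // roughly stable per device/browser combination
    }

The point being: in a typical browser none of this prompts the user, and they have no realistic way to notice it happening.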
> But ultimately there's always a software engineer involved in the creation of software - and that's not true of any of the other roles you mentioned. Since software engineers are necessary and sufficient to produce software, they should always be held responsible, and any oath should fall on engineers.
Nah, it's not the same at all. The fundamental difference between creating a program and medicine is that creating a program only has to be done once, or at least only by a few.
Medicine, on the other hand, has to be redone with each new patient. If the Hippocratic Oath works to prevent 99.9% of doctors from doing a harmful procedure, then you've hit a home run. Sure, you will never completely stop some bad egg removing a perfectly good limb because a patient suffering from Xenomelia offered enough money. But who wouldn't call a thousand-fold reduction a huge win?
We demonstrably have the 0.1% of programmers who are willing to break any oath. They make malware, and willingly take out Sony as mercenaries because Kim Jong-Un got pissed off at a movie. All that 0.1% has to do is write the program once. Thereafter you are not trying to discourage hordes of highly skilled professionals from doing it again; you are trying to stop a legion of dark net operators copying the thing and selling it to anyone. An oath is a waste of time under those circumstances.
1) New regulation forces some branches of software engineering to have some type of oath.
2) Now some software jobs will require only oath takers to do.
3) A well-paid and powerful new caste of software engineers is born.
4) They are highly paid and have a powerful lobby working for them.
5) The oath takers become very picky and only work on jobs with minimal risk. The ones that do screw up have an armada of lawyers, because of course they have a new association with deep pockets.
6) Innovation stalls for a while.
7) Big corps start outsourcing some of the oath-taking jobs. These engineers are not bound by the same regulation. Screw ups happen, people die at some point.
8) Maybe we should have the outsourced engineers also take an oath? Back to square 1
This is exactly what I found happened with medical doctors in Canada (I don't know about the US). Not saying doctors are not doing a good job, and I can't imagine the stress and pressure they operate under. But suing for malpractice in Canada can be challenging, to say the least. I have a personal account of a family member who was grossly mistreated, and all the doctor did was change hospitals - nothing more than a slap on the wrist.
> But there are software engineers today, including some on HN, who do things more harmful and unethical than medical malpractice...
I'm having a hard time trying to find examples of this, outside the field of armament development.
And in those fields where a software failure may result in death, e.g. aircraft development, proof of a software engineer willingly causing it would likely result in jail time already.
The big question is: who has ultimate visibility on the consequences of a particular project? Very frequently software engineers are asked to work on projects where they only know one side of the picture. The executives in the company are the ones who know the ultimate context of what they're doing.
With a combination of local engineers, remote engineers in other countries, mechanical turk and some sleight of hand, I wonder if you could craft a nefarious project where nobody knows the whole picture.
Not sure you understand how software is made. A programmer doesn't decide what to write or when to write it, and they are lucky to be included in how it's made.
Programmers get specs and write programs to match those.
At no point is it the programmer's responsibility to talk about the moral compass of the project and where it fits into society.
An oath to do no harm? You first need to give programmers the power to decide the fate of projects on their own, the way only a doctor can decide on medicine or treatment.
The programmer still decides whether that code gets written, since they’re the one writing it! If you write or review a piece of software, even if the spec was written by the PM/business, you’re endorsing whatever that spec says and all of its ethical implications. “Just following orders” is a famously poor defense at this point.
> You write a tool for let's say recognizing faces. Will it be used for login onto computer? Tracking dissidents? IDing corpses? Who knows.
I mentioned this in another comment, but I'll say it again:
Irrespective of any legal/ethical concerns, yes, I would like to know! If my boss just came to me and said "build a facial recognition system" I certainly will ask how it is going to be used. Not because I care about ethics, but it's a basic aspect of the job. You can replace "facial recognition" with "CMS" and I'd still ask.
If they tell me the facial recognition is for logging into computers, and then later decide to use it to track dissidents, that is a different concern. But I'll at least ask!
> What if you start as something 100% ethical. But your company pivots to unethical application?
If they pivot after my work is done, I won't feel responsible. If they never used it for the original application and pivoted to this, I may get upset and quit, but my conscience would be clear.
> If they pivot after my work is done, I won't feel responsible.
So if you invented dynamite you wouldn't feel responsible for its use?
But, let's change it a bit more personal. You write an awesome OSS yaml parser. It's so good, that GFW of China uses it as a main component, and this gets published in the news.
What would you do? Nothing you did changed, but suddenly your work is powering an unethical component.
Dynamite is a good example of why the philosophy is complete bullshit. How about you blame the stupid evil fuckers who started all of the futile wars to try to get rich in the quagmire that was European geopolitics, instead of the person who made them safer?
It is the same sort of stupid blame shifting involved with the "Hippocratic oath for X" nonsense. Oaths are majorly outmoded in the zeitgeist anyway because everyone recognizes lies are commonplace.
> How about you blame the stupid evil fuckers who started all of the futile wars
And I fully agree. Expecting people to individually bear the burden of "some oath", is a fool's errand.
My point was software on its own, much like a fridge, is amoral. You can use it to store your groceries, or you can use it to store corpses.
That said, there are some extreme cases (like a gun), that have very limited non-violent uses. And IMO, that should be regulated, instead of depending on people Doing The Right Thing™.
> So if you invented dynamite you wouldn't feel responsible for its use?
I would if I were inventing dynamite, but that's not what this scenario is.
A person working for a knife manufacturer need not worry about it being used for murder, as that's not the primary use. And facial recognition is a lot less harmful than even that.
Trust me: I work for a company that produces certain goods used for all kinds of good and nefarious purposes depending on who buys it. My conscience is clear.
> But, let's change it a bit more personal. You write an awesome OSS yaml parser. It's so good, that GFW of China uses it as a main component, and this gets published in the news.
> What would you do? Nothing you did changed, but suddenly your work is powering an unethical component.
I wouldn't do anything:
1. This is milder than the knife scenario above. Of course I don't care if people use it in a poor way - unless there is a straightforward technical mitigation I could do. In your example, given that the source code is available, that is not an option.
2. There's a certain hypocrisy in releasing something as open source and then complaining about how it is used. If it bothers you, then modify your license!
That's hypocritical. Nobel didn't invent dynamite because he wanted people to blow themselves up. He invented dynamite because nitro-glycerin was a horrid mess used in mining.
He definitely didn't have an easy technical solution to problem of people misusing dynamite.
You can either say in both cases do nothing, or in both cases do something.
Right now your boss has no reason not to tell you - people at GitHub knew their software was being used to support ICE during the time in which families were being separated. People at Microsoft knew that Microsoft had contracts with the military. Google engineers knew about Project Dragonfly.
Right now bosses don’t even have incentive to lie about it because no engineer is obligated to give a shit about the society they live in broadly.
This is a good time to point out that a significant chunk of the population isn't opposed to working for the military, for ICE, or for defence contractors who make weapons; they don't view that work as unethical. Moreover, the origin of Silicon Valley, and indeed the entire internet, is DARPA contracts and weapons manufacturing.
Any oath would either not be taken by those people, would be watered down so far as to be meaningless, or would require the entire industry to refuse to make weaponry. The first and second are ineffective and the third is ludicrous.
This is a great point, and reasonable people can disagree about when an application's abuse or potential for abuse crosses the line. The same goes for pivots or for general-purpose code that's used elsewhere. (Is it ethical to contribute to internal tools at Facebook? What if those tools make other engineers way more effective at doing things that ultimately undermine democratic systems?)
My point here isn't to dictate what software is or isn't ethical, but to argue that if a program is unethical, its ethical implications are the responsibility of the engineer(s) who wrote it.
Exactly. I can’t believe all the blame-shifting I’m reading in this thread! It’s as if software engineers are suddenly these powerless victims, lacking agency over their work, only capable of saying “yes, boss, whatever you say, boss!”
If a civil engineer’s manager told them to design an unsafe building or bridge, they’re not going to just say, “Sure thing manager! One death trap coming right up!” It is their ethical duty to build it safely.
A bridge is limited to a single purpose, like an appliance. If you insist on veto power over every outcome of what you build, that means you can only ever build sealed appliances for hapless consumers, not unfettered tools that empower clever human beings who will use them in unanticipated ways. Having sworn a Hippocratic oath, are you allowed to work on LLVM, which half of all evil apps probably depend on?
I could get behind a requirement that code be reliable and fit for purpose, though very few of us have any experience with the formal methods that might get us there, and most don't want to work that way.
Imagine if your manager copied the safe bridge you designed with a magic replicator and now uses that exact same design somewhere else. You tell him that the bridge was not designed for this location and that the bridge will collapse in 5 years. Your boss fires you but you are still responsible for the collapse of the bridge.
Let's go further into absurdity. The engineer kidnaps the daughter of the manager and blackmails the manager into taking the bridge down. Is it ethical to force someone else to be ethical even if it's only possible through unethical means? What if a hero saves the daughter? Will the hero be liable for the collapsed bridge?
Eh, this isn't a great analogy either. If I'm an engineer that develops a single beam in a bridge, is it my fault if someone assembles those beams together in such a way that is dangerous? At which point does a function become unethical? Do you now have to only use software made in your country by ethical developers?
Software isn't a bridge and comparisons fall apart quickly.
Engineers at the beam companies just certify that the beam meets its specs.
I'm not sure why software would be any different. Bridges are complicated and made up of versatile submodules, just like software. Some other software engineer eventually designs the "bridge" and selects "beams" for the structure. If those beams fail to meet their specs, then the engineers who stood by them are at fault. If the bridge fails because the beams weren't used in accordance with their spec, or didn't have a spec at all, then the engineers who approved their use in the bridge is at fault.
Wow, I’m realizing what an unpopular opinion this is on HN! Yes, as a software developer you should absolutely be accountable for the ethical concerns around what that protobuf you’re moving around from one API layer to the other is used for. You’re not a code monkey. Ask, and refuse if it’s unacceptable. I have quit jobs where the ultimate purpose of what I was building was evil.
EDIT:
From the original OP:
> Software engineers are accountable to their bosses before their users, no matter how high minded we like to pretend to be.
They are accountable to themselves and their own conscience before both their bosses and their users. I understand this is an uncomfortable line of thinking if your employer asks for ethically questionable project work, but I’d argue that if this is the case for you, it warrants career introspection.
There is a gigantic difference between building an unethical software product and abusing an ethical software product for unethical purposes. The developer is not the user. Do you not understand that?
Sure, same as the LLVM example someone else pointed out. Good points. I’ll qualify my opinion then. To the extent that the engineer can know the ultimate application of their work, he or she should be responsible for ensuring it is being used ethically.
So, the engineer writing a binary search, knowingly working on “Project Orbital Death Ray” or “Voter Suppression 2.1” should know better. I hope we can at least agree on that one.
The engineer writing a linked list or moving around Protobufs for some open source toolset gets a pass because their project, as they understand it, is ethically neutral. BUT there will be that engineer who then takes those tools and integrates them into "Project Orbital Death Ray". That's maybe where accountability should begin.
Everyone’s talking about the managers taking the blame and yes they’re culpable too. But at the end of the day an actual software developer’s fingers type the code in. If that developer knows what he is working on, he needs to bear the responsibility, too.
This argument is more akin to saying it's the builder's responsibility to decide whether a bad architectural decision should be built or not. They might bring it up, but it's not really expected that they get to decide.
Although it would be optimal for the management/leadership to not be pursuing unethical developments, a software engineer having the fallback of "I can't implement this in good faith" is another layer of defence (to society).
It would probably also allow for legal pushback against being terminated for refusing to implement the unethical thing.