The Need for a Digital Geneva Convention (microsoft.com)
240 points by doener on Feb 15, 2017 | 82 comments



The Geneva Conventions have whatever teeth they have precisely because they don't prohibit war. But these recommendations would more or less prohibit signals-based spycraft. It's reasonable to want governments not to spy on each other. It's not reasonable to expect it, especially since --- probably unlike most violations of the Geneva Conventions --- there's a Prisoner's Dilemma problem with adhering to these conventions.


I don't see how a Geneva-like Convention could even forbid something whose existence and/or source the parties are generally trying to obfuscate anyhow. If the armies fighting the forces of Dictator Bob the Third (long may he reign) suddenly die from mustard gas, it's pretty easy to guess where that came from, and for the international community to take action based on that. Sure, it's not 100% certain, but we know how to deal with that in the physical world. But when your signals get intercepted or some hack occurs, figuring out who did it reliably enough to respond can be very difficult. The entire game theory of the original Conventions is much harder to apply cleanly here.

I say this not to trash the idea, because it's a beautiful idea, and possibly a very good one. But if anybody wants it to get any traction, a much more sober analysis of the game theory and who gets what is going to have to be done. This is, alas, a wish list, not a practical treaty framework.


> It's reasonable to want governments not to spy on each other.

Is it? I think governments spying on each other is sort of a good thing, myself. Insofar as governments have conflicting interests, it makes sense that they would attempt to understand each other's strategies and seek to outmaneuver them.

I know some people argue that the world would be a better place if everyone minded their own business, but that's kinda like kids saying we could solve crime if everyone would stop being bad. It's a form of wishing the world away, reflecting an unwillingness or inability to engage with it. Uncertainty and informational asymmetry are realities of life; hoping to eliminate them entirely seems irrational to me.


I'm just saying there's an argument to be had. I have mostly your perspective on this as well.


Well, that's what "reasonable" means. Not that it's the correct view, but that it's understandable and, well, reasonable. Some people may have that view, and I don't think they're being unreasonable. The opposite view is also reasonable.


Also, the Geneva Conventions were developed following widespread experience of the horrors of traditional warfare. "Cyber" warfare does not yet have that same effect on policy-makers and the general public.


Worst-case scenarios, such as damage to critical infrastructure like dams, might have high fatalities. The key is to avoid a war before it happens.


Would existing protections in the Geneva Conventions against targeting noncombatants apply to damaging infrastructure that leads to civilian fatalities?


Dams and nuclear reactors are specifically covered under the "dangerous forces" provisions. But you can target highway bridges, ports, locks that can fail safe, etc.


And bridges and ports are among the priority targets for traditional air attacks to hamper enemy troop movements, supply lines and production capacity. I see no reason to prevent cyber attacks against them.


Probably the better question is whether everyone who might want to damage such infrastructure is in fact (a) a state actor who (b) was one of the signatories to the Geneva Conventions.


And few much care that governments spy on each other. Spying is a cornerstone of peaceful assurance and verification. I'm much more worried about my government spying on me. I'd rather be spied on by a foreign government than my own. They will keep quiet. They aren't going to knock down my door and freeze my bank account if I use the word "Syria" in an email to my mom.


> I'd rather be spied on by a foreign government than my own. They will keep quiet.

Not among allies. If you can spy on everyone but your own citizens, you only need 2 nation-states cooperating to spy on everyone. Which is already occurring at a minimum between Britain and the U.S.

There's no easy way to prevent the sharing of civilians' private information without some kind of exception, and that exception cannot be precise without artificially limiting a country's intelligence capabilities. In practice that means such a limit will be ignored or, more likely, will have an easily abused carve-out (e.g. terrorism) that makes it more or less toothless.


The "five eyes" are referred to by a common name for a reason. (The US and four Commonwealth states.)


>They will keep quiet.

No they won't. The problem is that countries you wouldn't even expect will trade data to get around domestic spying restrictions. If, say, the US compiles big databases on German citizens, and the German government does likewise with Americans, German and American spies can trade and thus get their hands on data they would like to have but can't legally collect.

Once your data is out there, it's out there.


> But these recommendations would more or less prohibit signals-based spycraft.

That depends on the definition of "cyberattack", a term that in casual use covers both the insertion of monitoring capabilities and more destructive actions (e.g., Stuxnet).

I think there is a good case that the prohibition on the kind of indiscriminate harm to civilians now widely accepted as prohibited by international law (whether under Protocol I for those who are parties, or as customary international law in the US perspective) when achieved by means of physical attack ought to apply equally to harms caused foreseeably by cyberattacks. Mere non-destructive espionage (which isn't purely passive) targeting civilians is probably a different issue.


EXACTLY! This should focus instead on outlawing cyberwar operations that indiscriminately affect civilians (and therefore could be used to justify a conventional or nuclear retaliation to a cyberwar act)!

The list of things to be banned should look more like this (though I'm not a legal expert):

- No targeting of critical infrastructure (being specific: hospitals, water treatment, power plants, the electricity grid, public schools, public universities, etc. - not simply "tech companies, private sector") - and by no targeting I also mean no planting of dormant backdoor-enablers, even if they do nothing - if you have the IMPLANTED CAPABILITY to damage a hydroelectric dam, for example, it means you have already TARGETED it and therefore violated the convention

- No stockpiling of vulnerabilities for more than X months by government agencies - any gov agency should BE REQUIRED to publicly disclose any 0-day they get their hands on after at most 12 months or something like that (short-term stockpiling is OK, you need some level of healthy conflict, but the playing field needs to be leveled a bit) - and of course this law would not be enforced, but at least your congress, for example, would always have a good reason to "fire the head of the NSA for not enforcing adherence to the 0-day chapter of the Digital Geneva Convention", and this would put the balance of power where it should be

- Gov agencies should be required by law to help defend companies that hold private data (the state exists to defend the people, and defending their data is part of it, even if it happens to be on some company's private servers and people signed some semi-legal-EULA-thingy letting the company do whatever with it - from the gov's perspective, defending that data is still about defending the people who pay the taxes) - and companies should be able to sue the gov if they believe the gov had the technical capabilities to defend them, but did not help them using those capabilities

- Inciting a lethal-force retaliation against a non-lethal cyberwar operation should be considered equivalent to "starting a war without provocation" - if Wall Street is hit by a cyberwar attack causing $10bn in damage, you are NOT justified in retaliating militarily - just fix your fucking defenses instead of getting innocent people killed for the lack of technical competence that got you hacked in the first place - those with the most $$$ to lose from cyberwar are also those with the most $$$ in general, therefore they would be the least affected by it, and have the least right to retaliate violently.

Just as tptacek said, the point should not be to prevent war (war can be quite healthy sometimes), but to make the game "at least somewhat fair", to level the playing field (as in "let the smartest man win!"), and to prevent "civilian" victims (if you don't want to play the game, it should be OK, and you and your business should be protected from side-fire - this is what gov agencies should mainly do: protect civilians and legitimate businesses from cyber attacks, while letting the game play on!).

(EDIT+ note: I'm not American, and I believe the things above should still hold even if US citizens and businesses would have somewhat of an "unfair advantage" globally because of how much your gov has already invested in cyberwar... if it's an advantage bought by fairly earned $$$ you can call it a fair advantage... if it's not, then that's a different conversation to have :) )


This is wrong on so many levels. The worst is that it undermines the actual Geneva Convention by borrowing its name to advocate something totally different. The Geneva Convention doesn't ask for "no fighting", but rather gives boundaries to keep fighting within "human-ish" levels.

I think everyone agrees that hacking is not inhumane...


> The worst is that it undermines the actual Geneva Convention by borrowing its name to advocate something totally different

By "the actual Geneva Convention" do you mean one of the four Geneva Conventions of 1949, or do you mean one of three previous Geneva Conventions that were predecessors of the First through Third of the 1949 Conventions?

> The Geneva Convention doesn't ask for "no fighting", but rather gives boundaries to keep fighting within "human-ish" levels.

As the article indicates, the new convention would parallel the protection of civilians in the Fourth Geneva Convention of 1949 by protecting civilians from being targets of nation-state cyberattacks, not prohibiting cyberattacks generally.

Though I think a better model would be the protection of civilians in Additional Protocol I (1977) to the Geneva Conventions of 1949.


>As the article indicates, the new convention would parallel the protection of civilians in the Fourth Geneva Convention of 1949 by protecting civilians from being targets of nation-state cyberattacks, not prohibiting cyberattacks generally.

Out of their six points, four simply make cyber attacks harder or nearly impossible, one exempts civilian targets from attacks, and one guarantees government help in defense and cleanup.

I struggle to see much more here than an attempt by tech companies to prevent cyber war, and to have themselves treated as untargetable combatants if cyber war still occurs.


I think you lack imagination. Right now computer security may not seem like an essential concern, but consider a few of these cases:

1. Self-driving cars getting hacked (and killing passengers?)

2. The stock market being hacked, or retirement accounts getting hacked and losing data (we don't know how much money you had)

3. Election fraud

We're building a public infrastructure that is highly susceptible to security threats, meanwhile governments around the world only intensify their capacity to do inhumane things to this infrastructure (rather than building up the defenses).


And when that hacking results in a power plant in a region being taken offline, and after several days on emergency power several babies in incubators die, is that still humane?


Taking down key enemy infrastructure during wartime is generally legitimate, even if it's clear that civilians will die as a side effect. The current treaties prohibit explicitly targeting civilians, but as long as the target has a military use as well, it's valid even if it's mixed with civilians. Bombing the defense ministry (with civilian offices in the next building), armament factories (with civilian workers inside), power plants necessary to run these factories (and everything else, including hospitals), bridges or railways (with civilians on them) - all these targets generally are not prohibited.

As it generally wouldn't be a violation to bomb that power plant from a plane or sabotage it via infiltrated spies, sabotage by hacking shouldn't have special treatment. Doing so should be just as [il]legal according to the exact same criteria that determine whether it'd be allowed by conventional means.


And when a power plant in a region is taken offline via conventional means? That isn't covered by the Current Laws of Land Warfare, why should a digital version be any different?


> And when a power plant in a region is taken offline via conventional means? That isn't covered by the Current Laws of Land Warfare

While details of the context will matter, an attack on such a plant would potentially implicate any or all of Arts. 51-56 (exc. 53) of Protocol I to the 1949 Conventions (the US is not a party to Protocol I but has traditionally viewed its terms as declarative of and redundant with pre-existing customary international law.)


I think that it is covered by the laws of war: as long as the military benefit of attacking the power plant is sufficient to outweigh the collateral damage, it's legal. If you're wrong, and you lose, then you hang after the war is over.


You are correct. I should have said that it's not prohibited.


That sounds beyond gross negligence (better described as maliciousness, really) by whoever set up that power plant control network. They probably belong in prison.


We do have international treaties prohibiting attacks on, e.g., nuclear reactors. For these facilities, a Digital Geneva Convention is needed.


Do we have such treaties? Can you point me at one of them? IIRC nuclear reactors have been explicitly targeted (https://en.wikipedia.org/wiki/Vulnerability_of_nuclear_plant...) and would be so in the future, although common sense states that both parties would want to avoid a Chernobyl-like event in their battlezone.

Furthermore, this would be a perfect example where a "digital Geneva convention" would not be needed; you'd want a treaty expressing mutual agreement that everyone understands that destruction of such facilities is not in anyone's best interest, and everyone agrees not to do it in principle - without any reference to specific means of destruction; the same principles should apply for digital attacks as for bombing or insider sabotage.

The Geneva convention Protocol I states that facilities like nuclear reactors "may be attacked but only in ways that do not threaten to release the dangerous forces", and that seems a perfectly valid description - there is no blanket prohibition to attack them, you are explicitly allowed to bomb or sabotage the nuclear plant in a way that disables it but not in a way that causes a meltdown; so you'd be also allowed to hack it in a way that disables it but not in a way that causes a meltdown.


Do those treaties apply only to non-digital attacks?


One could probably make a case that, e.g., Additional Protocol I protections apply equally to cyberattacks were one litigating a case; the problem is that laws of warfare are "laws" that rely more on everyone sharing an understanding and treating them as reciprocal moral rules than on enforcement through litigation. So an explicit recognition is vastly better than an ambiguity that you think a court would resolve in your favor if it had the opportunity, even more so than in the case of, say, domestic criminal law.


Or taken the other direction, this is equivalent to saying that we need a Geneva Convention for bridges, roads and buildings.

I do like the idea of a digital weapons non-proliferation agreement, but I suspect that it would be even less enforceable than the nuclear version.


> ...but rather gives boundaries to keep fighting within "human-ish" levels.

Let's say 'within profitable levels'.

Not all weapons are that good at enabling an army to advance, even if lots of people are killed - e.g. poison gas. Civilians get instantly killed by the stuff, but armies are tooled up to survive it, or can be. With not much profit in poison gas compared to a cruise missile laced with depleted uranium, it makes sense to ban the poison gas that anyone with a degree in chemistry can make in their front room rather than the missile.

Instead of cyber war being the threat, those conventional and nuclear weapons are the threat, although that's hard to imagine if you're living in Kansas.

What we need are some jolly clever OSS licenses that have clauses in them 'not for military use'. Microsoft could put that in their license today and go heavy on the enforcement - 'Sorry Lockheed Martin, you can't be using Excel to design Echelon 2, as that is in violation of the EULA; here is the cease and desist...'.

In this way we can make the military contractors fall out of the loop and be as IT savvy as North Korea.

'Don't be evil' is something we forgot about; however, again, if Google just 404'd on the military due to some EULA, then we wouldn't have so much evil in the world.

Seems that 'cyber terror' is the new al-Qaeda, i.e. make-believe.


> What we need are some jolly clever OSS licenses that have clauses in them 'not for military use'.

That may be 'open source,' for some definition of the term, but it wouldn't be free software.

Nor would it be right: it is the right of any people to band together in their own self defence, and munitions are part of that. The rights to own a gun, encrypt a file and operate a computer are one and the same.


> I think everyone agrees that hacking is not inhumane...

People have committed suicide over leaked information.


Agreed.


Inverse = better. Enact products liability legislation that holds software and hardware makers financially liable for damages due to defects in their products, and watch the attack surface available to state and non-state actors shrink demonstrably.

Not only will more secure solutions be engineered, but greater resources will be expended by companies like Microsoft, Apple, Cisco, etc. on UIs, educational efforts, and features that enhance security.

There is no stopping a state actor through treaty. It's unrealistic and will cause compliant countries to unilaterally disarm at the expense of non-compliant actors (e.g. Assad's chemical weapons use in Syria).


This would kill all open-source software.


It only makes sense to apply such rules to closed-source and proprietary software.


It should be noted that the Geneva Conventions were signed immediately after a war, the signatories were all nation-states (no independent non-state groups), they were an ongoing process, and not every nation-state signed them. They have also been violated quite frequently. In fact, if it weren't for the constant peacetime indoctrination of military officers in signatory nations, they would be violated a lot more in time of war.

It also should be noted that for a long time there has been a solution for non-state actors inciting violence upon another state: military trial. Pirates, no matter what Disney may have you believe, were not the most friendly folks. The Royal Navy and others hanged quite a few in an effort to establish open seas.

We already are at war, it's just nobody wants to come out and say it. So now is a terrible time to bring this up. Belligerents are going to stall and hem and haw while they seek tactical advantage. That's not conducive to a frank discussion. I can't see any of the major nations or third parties participating in any kind of honest way, sadly.

If we want to start hanging hackers (the bad kind), and I'm not so sure we aren't just a few decades away from doing that, we need to start having an honest discussion about just what constitutes an "open internet". And I imagine there are several nations right now, including the U.S., that do not have one. That's a great discussion to get started, but this idea is way, way premature and is based on a ton of assumptions which are not true. Loosely the analogy might work, but I doubt the authors have thought through exactly what they're leading us into.


Am I the only one who sees this call as deeply misplaced?

The Geneva conventions were written and accepted by governments. Those governments were acting on behalf of their people and had (some kind of) legitimacy. Furthermore, they were acting with a goal of universal humanity and common interest.

This call is Microsoft asking for global legislation for protecting itself and other tech companies, which have no legitimacy whatsoever to represent the common interest.


That is how I read it as well. Something written with absurdly transparent levels of self-interest.


Yo Microsoft, have you looked at the work done by e.g. EFF?


EFF is what I immediately thought of as well.


When our own government (USA) has violated the Geneva Conventions many times, we must keep in mind what the real purpose of the conventions seems to be: a tool to be used against 'the other guys'. The strong against the weak.


> a tool to be used against 'the other guys'. The strong against the weak.

Recently some African countries have withdrawn from the International Criminal Court on that basis; only the weak are prosecuted. It's inconceivable that a U.S. leader would be prosecuted, for example.

However, the GC are used by everyone against everyone, including the strong against the strong, so the strong don't escape scrutiny. I believe U.S. law requires the government to obey them. For example, the Bush administration crafted careful (and sometimes convoluted) legal arguments that their actions complied with the GC - they respected the GC enough to feel they couldn't just ignore the rules.

On the other hand: The Geneva Conventions are not followed or implemented perfectly, but neither is any law or rule. International governance is anarchy; there is no real authority; in that realm, nothing will get nearly 100% compliance.

Do the GC have a positive impact? Now militaries and governments are legally bound and their people trained to follow these rules, and accusing someone of 'violation of the Geneva Conventions' carries weight. Imagine the world without them.


Not that I disagree with your overall point, but:

>Recently some African countries have withdrawn from the International Criminal Court on that basis; only the weak are prosecuted

Actually, only two (Gambia and Burundi) have withdrawn arguing this. And in both cases, everyone knows the withdrawal is related to their own human rights abuses.

South Africa has announced its intention to withdraw (I don't know if it's effective yet) because it believes the ICC hinders its mediation efforts in conflict zones (i.e. we cannot help Uganda settle with the LRA because we're bound to arrest LRA leaders as soon as they set foot in South Africa).


Agreed, thanks for clarifying. I was not addressing the issue carefully. However, I thought that criticism was more widespread than the countries that withdrew.


Have you read any of the Geneva Conventions, let alone those specifically to which the USA is a signatory? Their rules for the treatment of prisoners are limited to uniformed combatants with specific insignia, not random terrorists or insurgents hiding among civilians.


I am surprised by the number of naysayers here.

Question: Do you have something better to suggest? (something that could realistically get implemented)

Conventions like the one suggested here are the baseline for agreeing on common rules. They allow accountability and boundaries, and once accepted they open the way to discussing possible sanctions.

I would much rather see positive steps like this one than no progress at all.


I can't predict if it would actually be better, and I fear you'll say it can't be realistically implemented, but I would suggest the nuclear option: a Manhattan Project level of R&D into decentralization.

Problem is, those with the ability to make it happen are essentially the same people running the massive spy infrastructure while singing platitudes about privacy and security.


This particular suggestion is a pie-in-the-sky joke.

> Such a convention should commit governments to avoiding cyber-attacks that target the private sector or critical infrastructure or the use of hacking to steal intellectual property.

That basically translates to "plz don't hack things." It's as though cyber attackers would stop if only someone would think to ask nicely.


The baseline for common rules is something that would be desirable by all major players.

Gas attacks and torturing POWs are things where the disadvantage of being on the receiving side is much greater than the practical advantage of being allowed to use such techniques; all major combatants would generally prefer to fight in a war without poison gas or tortured POWs, so they're prohibited.

For at least some "serious players", the practical advantage of being able to use or threaten cyberwarfare (or nuclear attacks) is greater than the disadvantage of potentially being on the receiving side. At least some major combatants would really prefer to fight a war where cyberattacks against civilian networks are used, so such attacks can't and won't be prohibited or meaningfully restricted; the suggestions are simply futile unless they somehow manage to show that all the countries who currently seem to benefit from unrestricted cyberwarfare actually suffer from the status quo.


Build actual secure software. Build phones that CANNOT be unlocked or backdoored by any government, by design. Unless we protect our data from the ground up, using protocols designed to be resistant to censorship or pressure from centralized authorities, people are going to continue hacking private citizens.

The solution is technological, not political. We've already seen that governments change, fall, and generally avoid laws put in place to curb their power. We can't rely on them to protect free citizens' interests anymore.


Such a Digital Geneva Convention will be utterly useless, and perhaps even counter-productive, until we have solved the attribution problem.[1]

Until that is solved, we risk misuse of a Digital Geneva Convention to impose sanctions against innocent players.

Like with the real Geneva Convention, prevention is harder and not that sexy, but leads to much better long-term results: Invest in improving software security, run bug bounties, enforce accountability at least for non-free software. And, of course, resolve the conflict of interests within the state, e.g. by making clear that police and intelligence are going too far when they buy zerodays[2] and spread malware[3].

[1] Right now, attributing attacks to their origin is idle speculation. Every larger attack is initially attributed to the currently popular scapegoats, with "evidence" that essentially amounts to reading coffee grounds.

[2] Buying zerodays creates an incentive for people to keep the vulnerabilities they find secret, instead of publishing and fixing them. Moreover, it creates an incentive to insert such "bugs" (backdoors) in the first place.

[3] ... or force others to build backdoors into their software, which is in effect almost the same as spreading malware.


One particular issue that makes this difficult to enforce is the nature of nation-state cyber weapons. Unlike physical weapons, which can be regulated by tracking their location or use, cyber weapons are undetectable by design. Less advanced cyber weapons from weaker programs like North Korea's or Iran's teams can be found, but the capabilities of major players like the USA, China, or European countries likely extend far beyond what a company would be able to detect.

The technology that advanced nation-states are using to conduct attacks is highly classified and most likely farther-reaching than most people realize. The documentary Zero Days gives a pretty good overview of how nation-state cyber attacks have translated into physical attacks (e.g. taking large power grids offline, derailing trains, subverting anything that has a PLC, etc.). This technology has been around for almost a decade. Without knowledge of what kind of weapons they have, we won't be able to detect them. If these capabilities fell into the hands of non-nation-state actors (terrorists), the damage they could do could be analogous to a nuclear weapon.

Even if we do discover an attack, it's even harder to attribute a piece of software to a country. Do you think an advanced nation would leave marks saying "foobar virus copyright X team 2017"? There's plausible deniability as well because one country could frame the other to make it look credible and we'd have no way to know what the truth was.

This program may work well for normal hacking attacks by people or lesser nation-states but it will not affect the missions for more advanced countries. Maybe it's worthwhile for stopping some attacks and setting a precedent but it won't be a silver bullet. I agree with the sentiment though, innocent civilians and companies should be left out of the crosshairs. It would be good if major software companies could work together to mitigate damage from attacks.


Nation states have gravitated to cyber because it carries less risk than espionage in the field, and because of the difficulty of attribution. Despite a mountain of evidence, Russia denies the DNC hack. The US pretends Stuxnet didn't happen. And N. Korea obviously is not owning up to Sony.

This puts tech companies in a bind. They want to innovate on society changing ideas like autonomous vehicles, but with nation to nation cyber attacks, they are potentially putting civilian lives at risk by doing so.

Will nation states play nice? No one expected direct attacks on private companies -- then Sony happened. No one expected attacks on a US election. And no one expected an attack on the grid without declaration of war -- but then Ukraine. Without defined international norms, anything is on the table -- even in peacetime.


Commit to nonproliferation of cyberweapons

A cyberweapon performs an action which would normally require a soldier or spy, and which would be considered either illegal or an act of war if performed directly by a human agent of the sponsor during peacetime. Legal issues include violating the privacy of the target and the sovereignty of its host nation. Such actions include (but are not limited to):

Surveillance of the system or its operators, including sensitive information, such as passwords and private keys[0]

[0] https://en.wikipedia.org/wiki/Cyberweapon


Civil or international warfare are bad for business, but trying to handshake them away with some sort of business compact is about as realistic as 'peace in our time.' It's going to happen and the time has come to place bets. Trying to postpone this reality by having a lightbulb moment and asking for the rulebook isn't going to work. Sorry.


Does anyone else see irony in Microsoft advocating good behavior?

Just yesterday I was saying that some industries self-regulate well; I even used video games as an example. Even with their cleaner behavior the past few years, I don't want companies like Microsoft anywhere near actual regulation of anything vaguely connected to human rights. I see too many ways for them to abuse even an advisory role in such regulation.

Maybe they really have turned over a new leaf and Microsoft is nothing but angels, but it is too soon to tell in my opinion.


Politics is about building coalitions. A good way to ensure nobody joins your coalition is to pre-sort parties based on some arbitrary measure of "goodness".


Politics is also about knowing when to trust that former serial killer who wants your house keys for no good reason, and when not to.


This was exactly my point.

How do we know Microsoft won't do something shady to strongly favor themselves or hurt others? They have done it in the past with standardization groups.


Just what we need: more asymmetric warfare.


Can we start smaller? Say with a digital bill of rights?


"Following highly visible and even challenging negotiations, in September 2015 the U.S. and China agreed to important commitments pledging that neither country’s government would conduct or support cyber-enabled theft of intellectual property."

LOL. Does anyone actually believe that?


The full text is:

"The United States and China agree that neither country’s government will conduct or knowingly support cyber-enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or commercial sectors" [1] (emphasis mine).

Not a meaningful commitment since the U.S. does not (officially) do this and the concept of a purely-commercial Chinese company defies delineation.

[1] https://obamawhitehouse.archives.gov/the-press-office/2015/0...


Sort of http://www.darkreading.com/attacks-breaches/china-still-succ...

There are certainly challenges, but the threat/intelligence communities have made a lot of strides in attribution.

If you can attribute attacks, you can retaliate with sanctions, etc. So if you are credible in your threat of retaliation and the retaliation is meaningful and proportional, reducing cyber conflict may be possible.

This is certainly a challenging topic, but to draw a parallel; many people were sceptical of the Iranian nuclear deal, but even the Israelis admit that the Iranians look to have stopped developing nuclear weapons capabilities.


> many people were sceptical of the Iranian nuclear deal, but even the Israelis admit that the Iranians look to have stopped developing nuclear weapons capabilities.

It's incredible that people thought Iran would risk devastating sanctions just to pursue a risky nuclear program that would remain open to foreign intervention and espionage indefinitely. If Israelis really want peace, they might consider electing a government that actually promotes that.


no


If Microsoft actually cared, they would donate to or support the EFF.


Because the analog one worked wonderfully?


> 1. No targeting of tech companies, private sector or critical infrastructure.

In peacetime, the last item is okay — clearly in wartime it's appropriate to degrade a foe's infrastructure, consistent with the accepted laws of war and humane concerns.

I think the private sector ought to be generally off-limits — but surely there are times when that might not be the case. Do spies never duck through a dry cleaners'?

I'm negative about privileging tech companies vs. the private sector in general.

> 2. Assist private sector efforts to detect, contain, respond to & recover from events.

Sure, that sounds reasonable.

> 3. Report vulnerabilities to vendors rather than to stockpile, sell or exploit them.

I can't possibly imagine that will or should ever happen. Nation-states have a duty to their citizens to be able to conduct offensive & defensive cyber operations; a necessary condition of doing so is the ability to stockpile & exploit vulnerabilities.

There's a gain-loss calculation to be made for reporting any vulnerability: does the gain to national defense of closing that vulnerability outweigh the loss to national defense of being able to exploit it against an adversary? I see absolutely no reason to believe that the answer is automatically 'yes,' or even mostly 'yes.'
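
To make that calculation concrete, here's a toy sketch of the tradeoff as an expected-value comparison (purely illustrative; the function name and all of the numbers are hypothetical, not any agency's actual equities process):

    # Toy model of the vulnerability gain-loss tradeoff described above.
    # All inputs are hypothetical estimates, not a real methodology.
    def should_disclose(p_rediscovery: float,            # chance an adversary independently finds/uses the bug
                        expected_domestic_damage: float, # expected cost if it is used against your own side
                        expected_offensive_value: float  # expected value of exploiting it yourself
                        ) -> bool:
        expected_loss_from_stockpiling = p_rediscovery * expected_domestic_damage
        return expected_loss_from_stockpiling > expected_offensive_value

    # Example: 30% rediscovery risk against $100M exposure vs. $20M offensive value
    print(should_disclose(0.3, 100e6, 20e6))  # True -> report it to the vendor

The point of the sketch is only that nothing forces the comparison to come out in favor of disclosure; it depends entirely on the estimates plugged in.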

> 4. Exercise restraint in developing cyber weapons and ensure that any developed are limited, precise and not reusable.

Restraint of course is laudable. Limited & precise capabilities are obviously a good thing. Trying to limit reuse, though, seems impossible to ensure in the general case, and not really desirable anyway. Why restrain makers of software munitions from using one of the most powerful tools in a software developer's toolkit: reuse?

> 5. Commit to nonproliferation activities to[sic] cyberweapons.

Meh, I always thought nonproliferation in general is either a case of pulling up the ladder behind oneself (on the part of states which have already achieved a capability) or an exercise in wishful thinking (on the part of those who think that the genie can be crammed back into the bottle). As applied to cyberweapons, I have difficulty understanding what this is even supposed to mean.

> 6. Limit offensive operations to avoid a mass event.

This is already addressed by the existing laws of war, particularly the principle of proportionality.

Overall, I imagine this is really meant to be a starting point, not a draft: there is absolutely no way that a serious person can expect point 3 in particular to universally hold.


>>> Nation-states have a duty to their citizens to be able to conduct offensive & defensive cyber operations; a necessary condition of doing so is the ability to stockpile & exploit vulnerabilities.

You speak of war as though it's some necessary and good force, as opposed to a zero-sum game that has lost all use in modern society.

Am I the only one who thinks that we as a race need to look to an era 200 years out where the very concept of "war" and "nation" is obsolete?


We need to get better at following the current Geneva Convention before we can hope to enforce a digital version.


The two are not mutually exclusive, and the set of people who could work on either of these is probably mostly disjoint. "Don't fix problem X because problem Y is more important" isn't a helpful stance if we can work on both.


Sort of. We can learn from why the Geneva Conventions aren't respected while WTO rulings are.



My comment was meant to point out how frequently the Geneva Convention is ignored by nations today. Clearly we need a similar convention to protect our increasingly critical digital infrastructure. However, our inability to enforce the current convention does not bode well for enforcing this one. It's tough to see how we can compel nations to comply until international law as a whole is better respected by all nations.


And presumably, like the Geneva Conventions, 'police' actions will be outwith that - it's ironic that if a soldier in wartime used ammo that police regularly use, they would be committing a war crime.


The Hague Conventions restrict that, not the Geneva Conventions.



