This is interesting, but runs contrary to my understanding of how Ethereum works. I'm clearly missing something, any chance you (or anyone else) could elaborate more?
My understanding was that the decentralization of Ethereum would mean that everyone watching the contract would need a copy of the decryption key. If that's the case, what prevents someone from publishing keys early? Or is it that the key isn't stored in Ethereum, and Ethereum is only being used as the consent to publish?
If the key is being stored somewhere else and just waiting for the contract to validate, how do we prevent a censor from just attacking that system?
If the key is being stored somewhere else and just waiting for the contract to validate, why not also store the contract on the same machine and do checkins directly into that? Would that be significantly less secure/reliable?
Killcord treats Ethereum as the project's backend API. The smart contract is pretty simple in construction by design. Writes are restricted to one of two accounts (the owner account and the publisher account), and the publisher account is further restricted to only allow writes to the publishedKey variable in the contract. Reads are open to the public.
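To make the read path concrete, here's roughly what polling that variable looks like from web3.py; the node URL, address, and ABI below are placeholders, not killcord's actual deployment:

    from web3 import Web3

    CONTRACT_ADDRESS = "0x..."   # placeholder, not the real deployment
    CONTRACT_ABI = [...]         # placeholder: the contract's ABI JSON

    # Any Ethereum node will do; reads don't need a privileged account.
    w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
    contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

    # Solidity auto-generates a getter for a public state variable,
    # so anyone watching the chain can poll publishedKey like this.
    key = contract.functions.publishedKey().call()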
As stated in other responses, the decryption key is stored on trusted systems that run the owner or publisher killcord projects.
As for attacking the system, this is something to think about. So why did I choose Ethereum for this?
Why Ethereum - The contract code (backend API) and variable state are written to the blockchain, so availability is dictated by the network itself, which is made up of around 20K nodes (give or take). Of course, as others have mentioned, the other aspect of this is internet access for the publisher and project owner.
For the publisher, this can be accommodated by running the publisher in a geographically distributed set of trusted systems. What do I mean by trusted systems? These are systems that meet your risk profile. The code can run on AWS Lambda in multiple regions, or on a Raspberry Pi, or in a datacenter in Iceland; the more, the merrier.
For the owner... If you are cut off from checking in, the system assumes something bad is afoot. This is why it's important that anything put in killcord is something you really want to publicly disclose. Killcord should really only be a system that runs on your behalf in the case that you go MIA, and only for data you genuinely want released in that event.
Killcord is described as resilient and resistant. The resilience is undefined, and the resistance is defined as censorship resistance. I'll ignore the censorship resistance, as it doesn't seem to have any qualities different from any other Ethereum contract.
I don't see what this project is resilient against. In fact, it seems unable to recover from issues such as the trusted third party publishing early.
How is key confidentiality preserved? The integrity of the keys? What if the keys are changed or deleted? How are DoS attacks protected against, so that early disclosures can't be forced?
There are quite a few issues with the project. Unfortunately, killcord doesn't seem ready for prime time as a key-management method. Killcord seems equivalent in intended operation to a non-blockchain HSM, but all the protections of an HSM, all the key management, all the security controls are gone. This actually introduces security issues instead of solving them.
What is the actual problem that killcord is attempting to solve? There are likely more robust designs, such as secret sharing, that will solve the target problem.
Killcord is designed to let the public know that a killcord project exists, where to find the encrypted payload, and how to check the status of the killcord project.
Unpublished secrets are currently stored in clear text in a config file in the owner and publisher project folders. This isn't meant to replace an HSM or secret manager, by any means. Though I've got some ideas on how to incorporate systems like Vault, Chamber, or other secret stores in the future.
It is also, indeed, early alpha, and dealing with secret management for the owner and publisher is absolutely top of mind.
See also https://github.com/petertodd/timelock and similar projects. There might be a way to combine these two concepts plus ephemeral keys as used in perfect forward secrecy, so that the switching technology isn't a single decision to publish a key, but rather time-locking a share of a Shamir-split secret and constantly rolling it forward as the pings happen -- or letting it run out and reveal enough shares for anyone to decrypt.
I think it's really, really hard to guarantee that information has been destroyed, especially in a decentralized system, so you won't have the assurance that information was (1) available to encrypt, then (2) unavailable to anyone because it was destroyed, and then (3) somehow recovered, recalculated, or discovered to once again allow decryption. That feels isomorphic to the problem of time travel.
But maybe combining these technologies will provide a way to compartmentalize the risk of early disclosure sufficiently to satisfy some use cases.
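For anyone unfamiliar with the timelock idea, here is a toy Rivest-Shamir-Wagner style puzzle in Python: the creator uses the factorization of n as a trapdoor to compute the unlock value instantly, while anyone else is forced into T sequential squarings. The parameters are illustrative, not from the linked project:

    # Toy time-lock puzzle: T sequential squarings gate the secret.
    def make_puzzle(secret, T, p, q):
        n = p * q
        x = 5                                  # random base in practice
        e = pow(2, T, (p - 1) * (q - 1))       # trapdoor shortcut via phi(n)
        y = pow(x, e, n)                       # x^(2^T) mod n, computed fast
        return n, x, (secret + y) % n

    def solve_puzzle(n, x, blob, T):
        y = x
        for _ in range(T):                     # inherently sequential work
            y = y * y % n
        return (blob - y) % n

    n, x, blob = make_puzzle(secret=42, T=100000, p=1000003, q=1000033)
    assert solve_puzzle(n, x, blob, T=100000) == 42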
1. Client generates necessary files (including keys and payloads).
2. Encrypted payload is placed on IPFS.
3. Keys are placed on a trusted publisher (potentially a single point of failure).
4. A smart contract running on the EVM continuously checks for pings from clients. If the client doesn't check in within some pre-defined policy window, the trusted publisher will notice and publish the keys to the smart contract, visible to everyone (sketched below).
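A rough sketch of what step 4 could look like, in Python rather than killcord's actual code; the getter and setter names, account, and key variables are hypothetical:

    import time

    THRESHOLD = 72 * 3600        # e.g. 72 hours without a check-in

    while True:
        # hypothetical getter: when the owner last checked in (unix time)
        last = contract.functions.lastCheckIn().call()
        if time.time() - last > THRESHOLD:
            # only the publisher account is allowed this write
            contract.functions.publishKey(DECRYPTION_KEY).transact(
                {"from": PUBLISHER_ACCOUNT})
            break
        time.sleep(600)          # poll every 10 minutes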
Is that... good? I mean, I understand that security isn't black and white, and really you're just trying to make it harder for someone to attack you, not impossible. But how much do you gain by decentralizing just the trigger?
Since the trigger logic fundamentally relies on you doing something, it seems like that logic could be local to your machine; your machine could query any number of public websites/platforms/IPs and it would still be pretty difficult for anyone to censor you.
It also seems like a party that wanted to force you to publish early would not be hampered in any significant way by Ethereum. In either scenario, all they have to do is incapacitate you or block the IPs that your machine is looking at.
I still feel like I'm missing something. Would anyone be willing to break down a (fictional or real) scenario where adding Ethereum to this equation blocks an attack?
There are a bunch of attack vectors, but most fall on the trusted publisher and client itself. IPFS and Ethereum are, by assumption (difficulty-wise), "secure".
Assuming both client and publisher's internal systems are intact, then you have two attack vectors:
There's the false positive attack vector, where you can shut down the client's network access and force the secret to be prematurely leaked.
There's the false negative attack vector, where you can shut down the trusted publisher's network access and indefinitely keep the secret "safe".
However, in general, the first attack is not as worrisome as the second for these kinds of applications. The second can be mitigated: there are many ways to distribute the trusted publisher using a crypto threshold scheme such that, as long as no more than some threshold of the trusted publishers are shut down, the secret will still be released in case of client shutdown.
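To make the threshold idea concrete, here is a toy Shamir split in Python: any k of n publishers can jointly reconstruct the key, while fewer than k shares reveal nothing. The field size and parameters are illustrative only:

    import random

    P = 2**127 - 1   # a Mersenne prime; toy field, not production-sized

    def split(secret, k, n):
        # random degree-(k-1) polynomial with the secret as constant term
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = split(123456789, k=3, n=5)
    assert recover(shares[:3]) == 123456789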
I imagine the second attack may be mitigated by the fact that the publisher might be easier to hide than with direct access. E.g., if your dead man's switch were just some daemon running on a machine somewhere that you have to ping periodically, attackers could find the IP address of the daemon by watching your network traffic.
In the OP, you and the daemon (aka the trusted publisher) communicate exclusively via the blockchain, so it will be a lot more difficult to find the daemon's location.
Not sure if this is in any way better than just accessing the daemon through Tor though.
These are valid points and anyone thinking about using killcord should be aware of these.
As for the second attack vector, the publisher is built with idempotence, so it is important that a killcord owner configures n publishers in geographically diverse areas to mitigate the false negative attack vector.
The one attack that I can see it blocking is that it allows for 100% untraceable monitoring [edit: of the dead man's switch by the system-that-should-send-the-message]. Since every bit of data pushed to Ethereum goes to every single full node, you can't find out who has the keys to the secret data and will release them.
Oh, cool, this would actually help protect against a lot of things!
If you can set up a deadman's switch and there's no way to figure out who it belongs to, that should make it significantly harder to find out which publisher to attack.
Contrast that against 'every day at 5, I publish a signed checkin to Facebook, Twitter, Reddit, Dropbox, my blog, and a hundred other sites simultaneously.'
In that scenario, blocking or faking the trigger isn't the attack vector. The attack vector is that it's really obvious who the trigger belongs to, so to find the publishing IP an attacker can just monitor who connects to those domains.
I guess the trick is actually getting Ether anonymously, but that's not the hardest problem in the world to solve.
Hey Gang. Author of killcord here. I'm honored and humbled this was submitted to HN and I'll be reading through the comments to answer questions and respond to feedback. I started this project after a thought experiment in using newer decentralized tech for internet activism.
Neat project! I thought up a trustless scheme for this a while back, but it's beyond my means to implement:
You can encrypt an entire circuit with homomorphic encryption, which users can run without decrypting its internal state. Construct a device like so:
Inputs:
1. Ethereum block
2. Previous run-state (encrypted) or zeros.
Outputs:
1. Next run-state (encrypted)
2. Decryption key (if triggered) or zeros (if not).
Internal state:
0. Hash difficulty range
1. Hash of previous block seen
2. Pubkey to scan for
3. Counter of # blocks seen without a tx signed by pubkey.
If you feed the device more than 1 week of blocks without a tx from pubkey, the accumulator hits zero and it spits out the secret.
An attacker would have to mine 1 week of blocks at hash power comparable to the real network's in order to trick the device into spilling its guts. If you die, and don't send txs for a week, anyone with the device can play a week of blocks into it and the secret will pop out.
Unfortunately, homomorphic encryption is still too slow for this to be quite feasible. Food for thought though! And you can build this today with SGX, if you trust that.
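In the clear, i.e., before wrapping it in FHE or SGX, the device's transition function amounts to something like the Python below; block parsing and the two helper predicates are hypothetical stand-ins:

    BLOCKS_PER_WEEK = 7 * 24 * 60 * 4      # ~15-second Ethereum blocks

    def step(state, block):
        # ignore blocks that don't chain onto the last one we accepted,
        # or whose proof-of-work is outside the expected difficulty range
        # (difficulty_ok and contains_tx_signed_by are hypothetical helpers)
        if block.parent_hash != state["prev_hash"] or not difficulty_ok(block):
            return state, None
        state["prev_hash"] = block.hash
        if contains_tx_signed_by(block, state["pubkey"]):
            state["counter"] = BLOCKS_PER_WEEK    # check-in seen: reset
        else:
            state["counter"] -= 1
        if state["counter"] <= 0:
            return state, state["secret"]         # trigger: emit the key
        return state, None

Under FHE, the same function would run over encrypted state, so whoever holds the device learns nothing until the trigger fires.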
And a landing page that omits this fact (but contains a download and instructions for a command-line tool). If you're thinking "wait, I can't put a self-publishing secret on the Ethereum blockchain, how does this even work?", the landing page leaves you hanging.
Even if the web front end is taken down, the contract is still on-chain, so it can be accessed via a web3 browser, a client, or even Etherscan's "read contract" tab.
That just lets you see the contract. The decryption keys by necessity can't be located on the Ethereum chain at all and have to be held by a trusted third party/system that watches the contract and releases the keys when the check-in doesn't happen. If the attacker is able to locate and disable that system, then the killcord is essentially defused; the owner would have to manually publish or have set up backups.
Right. The decryption keys aren't on the blockchain until they are "published". If all publishers are compromised or shut off before that happens, the killcord project has been terminated.
Yeah, though finding them should be fairly hard, because from a network-traffic perspective they should look like normal-ish non-mining Ethereum nodes, and no direct communication between owner and publisher should exist after initial setup. Anyone planning on using this for serious matters should make sure their trusted publishers are hosted anonymously (as far as is possible) or spread out jurisdictionally enough to make attacking them all impractical.
Given that the trusted party is required for this to work, is there any point at all in having it depend on the Ethereum blockchain, other than perhaps as a weak form of anonymity network?
The purpose of killcord + Ethereum for public disclosures is that leaning on Ethereum as an API backend ties the project's availability to the fact that taking down the entire Ethereum network is difficult, whereas running your own backend resiliently is hard.
That being said, I'm working on the concept of "providers" so that storage, payload, and backend are pluggable and you'll be able to use whatever backend you are comfortable with.
The trusted party can be factored into a decentralized network as well. This is what our team is working on with the Keep network (we've considered dead man switches as potential applications for a while).
As far as I can tell, Ethereum isn't actually doing anything interesting here - it's just being used to transmit pings to the server, which could just as easily be done with, for example, TCP/IP.
Anyone thinking of using it needs to consider at least two threat models.
1) The key custodian can decrypt your information, either willingly or through coercion.
If you use the same key to sign and encrypt the message, or if you do not sign it at all, they may also be able to impersonate you.
2) A third party who would gain from the information being disclosed can force its release through a denial attack.
Never use a dead man's switch as a bargaining chip or as an insurance policy if you do not intend the information to be released to the public, or if you are not comfortable with the information being released the moment the switch is set up rather than when it would be activated.
The only manner in which this or any similar setup does not expose you to additional risk is if you only use it to ensure the release of said information in a timely manner, and there is no adversarial motive to release it sooner.
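On point 1, the impersonation risk goes away if check-ins are signed with a key the custodian never holds. A minimal sketch with PyNaCl, assuming you keep the signing key and hand out only the encryption side:

    from nacl.signing import SigningKey
    from nacl.public import PrivateKey, SealedBox

    sign_key = SigningKey.generate()     # stays with you; signs check-ins
    enc_key = PrivateKey.generate()      # its public half encrypts payloads

    signed = sign_key.sign(b"checkin:2018-06-01")
    box = SealedBox(enc_key.public_key)
    ciphertext = box.encrypt(b"payload")
    # A custodian given enc_key can decrypt the payload but still cannot
    # forge a check-in, because they never see sign_key.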
There is a lot of hate for the trusted-party setup of this, which seems reasonable.
It seems like you could create a dead man's switch using arbitrary participants. You distribute a secret exponent to every participant, and then, to attempt to activate the dead man's switch, each participant raises the passed value k to the power of their secret s mod p and passes the result to the next participant. As long as you act as a participant each time and raise the passed value to some invalid s, the answer that is arrived at won't be the final secret.
As long as you participate every round, the wrong answer will be arrived at, but as soon as you don't participate, the right answer will be arrived at.
Any singular party refusing to cooperate would destroy the dead man's switch, so malicious activation would be tough.
Designing it so it can tolerate failures would be the hard part.
EDIT: I am wrong, this isn't that great. It's really hard to hide information that can be recovered without a secret being revealed.
Good point. You'd likely also want to encode something that is opaque as to who exactly has participated, only really showing whether this is the last step, plus a way for individuals to tell if they have already added their secret.
The really bad part is that if the poisoner happens to be the last step, then the preceding step would produce the secret before handing it over to be poisoned.
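A toy Python version of the scheme makes the flaw concrete: the chained exponentiations commute, and if the owner's poisoning round comes last, the true value already exists in the clear one step earlier. The modulus and exponents are illustrative:

    P = 2**127 - 1
    g = 5

    def pass_token(value, exponents, poison=None):
        for s in exponents:                # each participant in turn
            value = pow(value, s, P)
        if poison is not None:
            value = pow(value, poison, P)  # owner alive: spoil the result
        return value

    shares = [11, 13, 17]                  # participants' secret exponents
    key = pass_token(g, shares)            # revealed if the owner is absent
    spoiled = pass_token(g, shares, poison=99)
    assert key != spoiled
    # Flaw: in the spoiled run, `value` equaled `key` just before the
    # poison step, so whoever runs last saw the secret anyway.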
I built exactly what you’ve described, using semi-homomorphic encryption (addition of integers, used plainly as we were under the noise threshold of participants). Luckily for me though, I got to punt on some of the really hard questions of trust — the nodes that were communicating are adversarial, but the outside “organising” network was the government and “us” (company I worked for). It’s a really fun problem. I highly recommend taking a crack at it, or even just reading the literature regarding digital voting — you need to prove that one vote was cast for a given person, and no more, without ever tying back any specific vote to said person, and with a huge range of attack vectors!
So a lot of these comments seem to be criticisms of potential vulnerabilities (which is par for hacker news really). I'm curious if there are better alternatives out there that aren't vulnerable to the same issues, like a single point of failure or attack?
It's vulnerable in that whichever threshold N you choose allows for N participants to conspire to publish ahead of time, or M - N + 1 of the M total to conspire not to publish after the fact.
Interesting. I hadn't seen this, although I implemented something effectively the same, except that it requires all keys (which could be any number ≥ 2) to be combined to reveal the secret (or any information about it).
You're boned. But then, most systems, including Ethereum, are based on the assumption that the miners aren't majority-controlled by an adversary. That may or may not be a sound assumption.
The whole idea is kind of predicated on whoever you're worried about wanting the information not to get out more than they want to get to the person holding the dead man's switch. If they are more concerned with getting to that person than with whatever information that person has threatened to publish, no level of security on the switch matters; it just becomes part of the cost of getting to the owner.
Have any legal systems weighed in on a dead man's switch?
I get the premise, where typically it's illegal to take an action that releases confidential or censored information.
But, to governments, especially ones that want to keep information secret or censored, I'm not sure that negating that sequence and failing to stop the release of information (that you willingly put in a dead man's switch) will get you out of trouble.
Unless you're dead of course. But, I've seen this process promoted for living people to release information and I'm not sure it's any better than just posting the content anonymously, but with the added risk of accidentally releasing the information.
You're describing a different problem. Killcord solves the Insurance Policy problem:
Suppose you're a whistleblower, who exfiltrated gigabytes of unredacted data from the NSA. So far you've leaked only redacted excerpts, but the NSA might kill you to stop your leaking.
However, the NSA really doesn't want the whole archive leaked, or it would blow their agents' covers.
So, you put the whole archive up on the net, encrypted, and set up Killcord to decrypt unless you keep checking in. This keeps you alive, since the NSA knows it'll leak if you're dead.
This addresses the check-in aspect of killcord, but it doesn't address the payload-address broadcasting and decryption-key publication aspects of killcord.
I am in the early stages of building a "providers" abstraction for killcord so that the backend, publisher, etc. are pluggable. Using this bitcoin pattern for check-ins could be really cool.
This is left up to the killcord project owner to set the publisher threshold. If the project owner is concerned about congestion the owner should increase the time allotted to the threshold.
No, if you have ordered keys and only you know the order, there is no way to do it unless you give the order, and even then there’s no way to confirm the order is correct without trying it.
The way around this is to threaten not to kill the target, but rather to kill their whole family or those they care about, viciously and painfully, and to be ready to do it if the order is wrong and there is an automated leak.
Well sure, just like you could give them the wrong private key.
I always find these arguments against coercion attacks unconvincing. "Well, they can force you to give them information A, but for some reason not force you to give them information B." No, they'll put you in jail and force you to give them all the information needed to send check-ins, period.
Yes. In its current form, if someone gets the project owner config file, they could continue to check in indefinitely.
I've been toying with the idea of optionally encrypting the owner config with a passphrase to mitigate this. It would even be possible to have a secondary "duress password" that pretends to decrypt the config, but publishes instead.
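A sketch of that duress idea in Python; nothing like this exists in killcord today, and the three helpers are hypothetical:

    import hashlib, hmac

    def unlock(passphrase, real_hash, duress_hash):
        # a real implementation would use a KDF (scrypt/argon2), not bare sha256
        digest = hashlib.sha256(passphrase.encode()).hexdigest()
        if hmac.compare_digest(digest, real_hash):
            return decrypt_config(passphrase)   # hypothetical helper
        if hmac.compare_digest(digest, duress_hash):
            publish_key_now()                   # hypothetical: trigger release
            return fake_config()                # keep up appearances
        raise ValueError("bad passphrase")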