The flaw may be assuming everything else can be equal in the real world. Obscuring the algorithm has downstream consequences that may/will reduce overall security.
For example, hiding the algorithm from whitehats may prevent/discourage them from hunting/reporting bugs.
> For example, hiding the algorithm from whitehats may prevent/discourage them from hunting/reporting bugs.
Yeah, this is a serious concern; I guess it depends on the use case. For sure "security by obscurity considered harmful" could be true, but that's the thing people overgeneralize and fight over when it should really be weighed against the circumstances.
We measure password and cryptographic key security based on their entropy (keyspace) and speed (key tests / second). Given current attacks (GNFS), a 2048-bit RSA key has ~112 bits of security^1 and would take ~20,000 years to brute force using every computer ever made^2. Passwords and cryptographic keys are selected as the single point of obscurity in these systems so that many eyes may secure the other components. If the system is otherwise secure, then it is as weak as the passwords/keys which are (hopefully) picked to be very strong.
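To make that concrete, here's a back-of-the-envelope sketch of the arithmetic; the aggregate key-test rate is an assumption of mine, not a figure from the footnotes:

```python
# Back-of-the-envelope: years to exhaust a keyspace by brute force.
# The aggregate test rate is an assumption, not a measured figure.

SECONDS_PER_YEAR = 3.15e7

def brute_force_years(security_bits: int, tests_per_second: float) -> float:
    """Worst-case years to try every key at the given strength."""
    return 2 ** security_bits / tests_per_second / SECONDS_PER_YEAR

# Assume ~10^22 key tests/second across "every computer ever made".
print(brute_force_years(112, 1e22))  # ~1.6e4 years for ~112-bit security
print(brute_force_years(128, 1e22))  # ~1.1e9 years for 128-bit security
```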
Most individuals defending algorithmic security through obscurity believe that hiding the algorithm improves security. That may be true in an extremely technical sense (the attacker must recover the algorithm first), but it is very misleading and unprofessional commentary. Algorithmic security through obscurity is at best calculated in difficulty-to-reverse-engineer (or difficulty-to-steal), which doesn't provide per-use(r) specificity (per-user password) nor scale in complexity (a 256-bit key is generally 2^128 times stronger than a 128-bit key, but doubling the algorithm length increases reversing time by slightly less than a factor of 2).
Algorithmic security through obscurity provides negligible security, but what's the harm? Why should we care? Attempting to hide the algorithm provides a false sense of security, limits review to "approved" parties, and induces legal/social efforts to "protect" the secret. The limited review is particularly noteworthy since it promotes bugs in both the algorithm and the implementation. The end result is a facade of security, some very unhappy whitehats, some very happy blackhats, and more users betrayed through poor security practices.
> "In return for the hollow credits, ConocoPhillips paid Green Diesel $18 million, according to court documents. Shell got stung for $14.4 million, BP for $13.6 million, Marathon Oil for $12.4 million, Exxon $1.2 million. All these companies also were forced to buy new RINs to replace Rivkin’s phony ones."
> Federal agents were watching, as was Houston attorney David Fettner. He’d been appointed by a court to find and seize Rivkin’s property on behalf of commodities trader VicNRG, which had sued Green Diesel and other Rivkin companies for selling it $3.8 million in bogus RINs. “Rivkin left a trail of unhappy people behind him,” he says.
Doesn't seem like everyone else was happy
> Well, given that the buyers bought just the numbers and did not go to the authorities immediately, it would seem that everybody in the deal was happy except for the EPA.
You'd be happy buying $1000 from someone for only $100, until you discovered the money was essentially worthless (counterfeit).
That's a bit like saying a ripped-off investor should have bought stock, not mutual funds.
These companies needed the RINs, not the fuel. If they had bought the fuel with the RINs they could have been assured the RINs were legitimate, sure. But that has all kinds of associated costs (quality control, finding buyers, the cost of the fuel itself, etc.). A legitimate biodiesel producer is probably better positioned to deal with those costs, so it makes sense for him to keep the fuel and sell the RINs to companies that need them.
And that's precisely the problem. As soon as you start selling something totally abstract in lieu of something that was presumably produced all control mechanisms fail.
Which is why I'm not a fan of these constructs. Even a legitimate biofuel producer will see better margins on the RINs than on the fuel...
I think you just don't understand how a cap and trade system works. You should try doing a google search to read up on it. It's confusing at first, but there's nothing inherently uncontrollable about it. The only reason these scams are allowed to happen is the EPA doesn't have enough manpower to constantly watch over every facility to make sure fuel is actually being produced.
> The only reason these scams are allowed to happen is the EPA doesn't have enough manpower to constantly watch over every facility to make sure fuel is actually being produced.
So, if you don't have that manpower don't set up a scheme like that.
I mean, in the end it's impossible to completely prevent any scams from occurring. Even if we used a tax-based system people could still lie, and we still wouldn't be able to see if they're telling the truth. It's not feasible to constantly check up on everyone.
They did verify that the RINs were being produced properly, which is what led to the guy being caught.
So I'm not sure what your argument is here. If you understand cap and trade systems, then you know that they aren't inherently gameable (which you just admitted). So are you saying that you think the EPA needs more funding?
How does producing a bunch of biofuel achieve anything? Why is that something we want to incentivize? If this was about credits for sequestering x tonnes of carbon from the atmosphere then I can see how that should be worth a credit, but I don't see how it makes sense to single out biofuel any more than anything else those crops could have been used for.
Again, I recommend reading up on what cap and trade systems are. The goal isn't to produce biofuel, it's to reduce the amount of oil being used. Using biofuel instead of oil results in the production of less pollution.
This is like saying banks should ship gold bars to each other, or maybe we should give up on money and use barter. (Money is an abstraction, after all.)
Abstraction is just how finance works. Yes, there is a risk of fraud, particularly in newer abstractions, but that's why we need auditing and enforcement.
And without sufficient auditing and enforcement you'll get scams and your goals will not be achieved. You're essentially betting the farm on the honesty of corporations controlled by people who now have a direct incentive to cheat. That's putting an awful lot of faith in humanity right where it has the biggest chance of failing (at the CEO level).
Note how in the article the people working there were saying they were 'expecting an audit any day now', but as long as their paycheck depended on not being audited they were all fine with it.
It's rather strange how the CEO is the only person indicted.
> And without sufficient auditing and enforcement you'll get scams and your goals will not be achieved.
Auditing and enforcement are critical. Shipping around physical gallons does not get rid of that need. Physical gallons could just as easily be water if nobody checks. I don't see why you blame the abstraction.
The consumer can't put the water in her tank and drive her car away. Since she has skin in the game, everyone upstream from her does too. It is reasonable to blame an "abstraction" that removes consumer interest entirely, and doesn't substitute some other control of equivalent strength. The minute I heard of "cap'n-trade", which was decades ago, I expected exactly this sort of scam. Those who designed this scheme, unlike the useful fools who provide political cover for it, did so with exactly the same thought.
The consumer gets 100% gasoline in either case. Forcing the oil companies to buy biodiesel does not force them to give it to consumers. The RIN abstraction is a completely separate issue from whether anyone actually wants biodiesel.
Firms will be happy to buy a product wholesale that they cannot in turn distribute to consumers? What are they, farm-aid charities? Let's try to remain focused on the plausible.
Who said anything about them being happy? But that has absolutely nothing to do with whether they are forced to buy gallons they don't want or RINs they don't want.
Now you're talking about maintaining two sets of books, and somehow regularly disposing of vast amounts of some unknown substance that isn't fit for use in automobiles. That sort of scheme won't even get started before it falls apart, because it relies on the silent cooperation of dozens of people throughout the organization. TFA describes schemes that lasted for years, because they depended on the actions of only one or two people. You assume that liquid matter is as easy to store and transport as ephemeral ID numbers, but it really isn't.
You make it sound like anyone at the oil company cares what they're buying. They want to tick a box. As long as they buy something that has the legal weight of biodiesel, and isn't expensive to get rid of, they're happy. One set of books, no distributed conspiracy.
Or they could never even bother to ship the gallons. That's not a 'scheme' of any sort at the oil company, it's just the business focusing on the part where it makes a profit and dealing with the regulation of buying biodiesel in the most minimal way possible.
Okay, but the system isn't going to be abandoned based on one case of fraud. It's gotta be a lot more widespread than that.
It's similar to how we don't see stores abandoning credit cards despite the impressive amount of fraud we see reported on Krebs. Instead the holes get fixed (eventually, with lots of foot-dragging).
The article reports an interesting story but it doesn't seem big enough to have that kind of impact.
I'm not defending the system. It is quite evidently flawed. I'm objecting to the implication that the oil companies are complicit rather than victims of fraud.
>Even a legitimate biofuel producer will see better margins on the RINs than on the fuel...
It certainly seems to create an incentive to cut costs at the expense of quality. I don't understand why Green Diesel didn't just continue to produce poor quality fuel and thereby sell legitimate RINs.
> > but without the battle-tested implementations.
> "I don't know what's wrong but I'm kinda afraid to try".
Battle-tested implementations have dealt with (at least some of) these threats previously. New approaches often miss the lessons of past efforts, leaving themselves vulnerable to old attacks.
Only in the revoked list = 1, 2, 5, 7, etc. Unlike storing all sessions, you only store the revoked integers, which takes way less space and achieves the same revocation you wanted.
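A minimal sketch of that idea (the names and the in-memory set are illustrative; in practice the revoked IDs might live in Redis):

```python
# Sketch: revoke sessions by storing only the revoked IDs, not every session.
# The in-memory set is illustrative; in practice this could be a Redis set.

revoked_session_ids: set[int] = {1, 2, 5, 7}

def revoke(session_id: int) -> None:
    revoked_session_ids.add(session_id)

def is_session_valid(session_id: int) -> bool:
    # A token that verifies cryptographically is still rejected if its
    # session ID has been revoked.
    return session_id not in revoked_session_ids
```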
> That's optimistic
Nobody should store
> leaving themselves vulnerable to old attacks
Every new line of code is probably a vulnerability. That's life. BTW, JWT indeed had a super stupid bug with alg=none before, and I would simply throw away the "header" from the JWT.
A salt is random data mixed into a hash, not a secret used to cryptographically sign or encrypt data. It sounds like your JWT consists of a userID and a sessionID (stored in Redis).
Why not just store your sessionID in a cryptographically signed HttpOnly cookie? In most use cases, it'd be less ambiguous, better protected from JS attacks, and equal-or-less vulnerable to CSRF.
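A minimal sketch of the signed-cookie idea using only the standard library (the secret, names, and cookie layout are assumptions for illustration; real frameworks do this for you):

```python
# Sketch: a signed session cookie using only the standard library.
# SECRET_KEY and the cookie layout are illustrative assumptions.
import hmac, hashlib

SECRET_KEY = b"replace-with-a-long-random-secret"

def sign_session(session_id: str) -> str:
    mac = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{mac}"

def verify_session(cookie_value: str) -> str | None:
    session_id, _, mac = cookie_value.rpartition(".")
    expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(mac, expected) else None

# Set-Cookie: session=<sign_session(sid)>; HttpOnly; Secure; SameSite=Lax
```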
> I just wanted to show the common idea of "cryptography is black magic, you should never touch it, you can only do bad things" wrong.
Per the other thread, this is overwhelmingly likely to decrease overall security without any practical benefit.
> My scheme adds security through obscurity which may be worth the trouble.
It adds potential side channels and likely no benefit over KC alone. You've sidelined many potential problems by focusing on "if the composition is properly implemented," but that's a huge problem. The likelihood of properly implementing the composition is vanishingly small. It is very likely to be improperly implemented and provide less security than KC alone (and it costs more!)
Okay, this is how I'd ad-hoc implement the scheme I described without the security problems you are commenting on. A and B know each other and exchange the custom cipher securely. Imagine they each have a Raspberry Pi which does nothing else than take each IP packet sent from the other side and reverse the custom transform on the data. For each packet that it sends to the other party, it applies the custom transform to the data. Now A and B can route their internet traffic through their Raspberry Pi and get security by obscurity for their communication on top of the usual security. Even if the custom transform is simple, it's overwhelmingly probable that no automated TLS-break-tool will be able to break the custom-transformed TLS traffic to the other party.
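For concreteness, here's a toy sketch of what such a custom transform might look like (the specific operations and constant are my own illustration, not a recommendation):

```python
# Toy sketch of a "custom cipher" CC: a fixed, keyless, reversible byte
# transform applied to packet payloads on the Raspberry Pi. The particular
# operations (byte reversal, XOR with a constant) are illustrative only.

MASK = 0x5A  # arbitrary assumed constant shared by A and B

def cc_forward(payload: bytes) -> bytes:
    return bytes(b ^ MASK for b in reversed(payload))

def cc_inverse(payload: bytes) -> bytes:
    return bytes(reversed([b ^ MASK for b in payload]))

assert cc_inverse(cc_forward(b"already-TLS-encrypted bytes")) == b"already-TLS-encrypted bytes"
```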
Were CC and KC both performed on the Pi? That introduces side-channel attacks.
Does CC include implementation flaws enabling remote access? An attacker may use the Pi to enhance attacks against the system executing KC.
Does the Pi include any remotely exploitable flaws? See before.
If we assume a perfect/non-exploitable CC/Pi, we may not degrade the security of KC. That key word, may, is the problem; it's the kind of question professional cryptographers spend years analyzing. If we spent a month thinking about this, we might identify other requirements needed to avoid weakening KC. This is not recommended.
Systems which rely upon cipher-obscurity are not secure. Most amateur cryptosystems are trivially defeated without any knowledge of their internals (FBI has some nice articles on cryptanalysis of criminal ciphers). Advising amateurs to rely upon homegrown ciphers is unprofessional and encourages bad risk mitigation strategies.
We also disagree about the difficulty of breaking non-keyed bijections (trivial) versus AES-at-scale ("not trivial"). The cost of the latter easily exceeds $1B. The former would take a trained cryptanalyst less than a month. Is your data worth <$10,000?
> Were CC and KC both performed on the Pi? That introduces side-channel attacks.
I specifically said the RPi does nothing else but CC.
> Does CC include implementation flaws enabling remote access? An attacker may use the Pi to enhance attacks against the system executing KC.
> Does the Pi include any remotely exploitable flaws? See before.
The interface from the PC to the RPi should of course have the same security as any internet-facing interface. Thus, remote access to the RPi wouldn't pose a risk for using the protocol beyond the obvious risks that you always get in such a scenario.
> Systems which rely upon cipher-obscurity are not secure. Most amateur cryptosystems are trivially defeated without any knowledge of their internals (FBI has some nice articles on cryptanalysis of criminal ciphers). Advising amateurs to rely upon homegrown ciphers is unprofessional and encourages bad risk mitigation strategies.
The system I described is not trivially defeated because defeating it implies defeating KC.
Did you read my comments?
> We also disagree about the difficulty of breaking non-keyed bijections (trivial) versus AES-at-scale ("not trivial"). The cost of the latter easily exceeds $1B. The former would take a trained cryptanalyst less than a month. Is your data worth <$10,000?
To break my cipher an attacker would need to solve _both_ problems.
My point is that _if_ AES is broken without our knowledge, using the system I described can still make untargeted-surveillance impractical. Isn't that an interesting property for a protocol using a custom cipher?
> Thus, remote access to the RPi wouldn't pose a risk for using the protocol beyond the obvious risks that you always get in such a scenario.
That risk didn't exist without the custom cipher construct (KC-system is still independently vulnerable as it was without the CC-system). This means the construct has increased the attack surface, potentially critically.
> The system I described is not trivially defeated because defeating it implies defeating KC.
Your system introduces potential new vectors to defeat KC. If KC were not broken, this construct likely weakens KC. If KC were broken, this construct may provide some minor protection, but it is extremely unlikely to provide enough additional protection to actually shield the data from an adversary capable of breaking KC. Given KC is a peer-reviewed secure ciphersuite and CC is not, the emphasis should be on keeping KC secure - not weakening it to introduce an untested (likely insecure) ciphersuite in a custom composition. This is doubly the case given the significant and on-going real-world costs of implementing and maintaining this custom solution.
> My point is that _if_ AES is broken without our knowledge, using the system I described can still make untargeted-surveillance impractical.
An adversary capable of automating AES decryption would already have automated weak-cryptosystem decryption. This adds nothing except cost, complexity, and faux security.
This protects only against an attacker who has broken AES so thoroughly that they can essentially surveil TLS traffic en masse at no cost.
To a targeted attacker, TLS is trivially identifiable through packet analysis. After a few handshakes, your transform (as mentioned elsewhere, reverse some nybbles and XOR against a constant) will be fully understood and broken, likely with less effort than it cost you to build it in the first place.
Are you just attempting to argue the pedantic point that some theoretical subset of homebrew crypto applications may actually be secure? Because taken as practical advice your position requires a lot of awfully strong assumptions.
Every crypto is homebrewed - just maybe not in your home.
Everyone cooks just with water.
The scheme I talked about is not entirely homebrew. It consists of a mainstream cipher KC and a custom cipher CC to unite the best of both worlds: the robustness of mainstream crypto with the obscurity of homebrew crypto.
> Every crypto is homebrewed - just maybe not in your home.
> Everyone cooks just with water.
The problem with applying this definition of "homegrown" is that it willfully ignores any distinction implied by the term and thus renders it semantically meaningless. This is a form of straw man.
Regardless, even if we assume that all crypto, at the time of writing, is equally likely to be safe, I posit that the security and cost-of-implementation benefits achieved by leveraging published techniques far outweigh the benefit of having an obscure fingerprint. This is because previously published methods have the advantage of selection and iterative hardening based on peer review.
Furthermore, I posit that even if you wrap your data in a matryoshka doll of encryption, each of these layers will be more secure when implemented using proven techniques.
For the same reasons I'd also argue that even if you were to develop your own cipher you would benefit more by publishing it than by keeping it a secret.
Another way to think about it is that "an attacker reading the documentation" should not be a failure mode of well-implemented crypto.
Speculating even deeper on the subject, it occurs to me that in the face of a global adversary (of whose automated cryptanalysis your proposal aims to thwart) displaying a unique fingerprint may actually be detrimental to the security of your data as it may flag it specifically for deeper inspection and manual analysis.
This is almost always less secure than KC alone, when KC is a well-known secure cipher.
A simple example would be a CC that hex-encodes the plaintext before applying some transformation to the data (before you laugh, this exists in enterprise systems today). This means CC would effectively double the size of the underlying data (0xA1 -> 0x4131, i.e. the ASCII bytes "A1") and substantially degrade the security of a block-based cipher (a 32-bit block now covers effectively 16 bits of plaintext).
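To make the expansion concrete, a tiny sketch (the specific block below is just an example):

```python
# Hex-encoding doubles the data before it reaches the block cipher, so each
# cipher block now covers half as much real plaintext, drawn from only 16
# possible symbols (0-9, A-F).
plaintext = bytes([0xA1])
encoded = plaintext.hex().upper().encode("ascii")  # b"A1" == bytes 0x41 0x31

block = b"0123456789abcdef"                        # one 16-byte block of data
print(len(block.hex()))                            # 32 -- now spans two blocks
```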
edit: It would be better to compose CC(KC(P)) so CC can't leak any information about P or degrade KC. Any reluctance to show the world the output of CC should suggest the low-practical-value of CC.
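A minimal sketch of that ordering (assuming the third-party `cryptography` package; `cc_forward` is a stand-in for whatever custom transform CC is):

```python
# Sketch of the CC(KC(P)) ordering: encrypt with the peer-reviewed cipher
# first, then apply the custom transform to the ciphertext, so CC never
# sees (or leaks) the plaintext. Assumes the third-party `cryptography`
# package; cc_forward is a placeholder custom transform.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def cc_forward(data: bytes) -> bytes:
    return bytes(b ^ 0x5A for b in data)   # placeholder custom transform

key = AESGCM.generate_key(bit_length=256)  # KC: AES-256-GCM
nonce = os.urandom(12)

kc_output = AESGCM(key).encrypt(nonce, b"attack at dawn", None)
wire_bytes = cc_forward(kc_output)         # CC only ever touches ciphertext
```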
Why does expanding the underlying data that way "substantially degrade the security of a block-based cipher"? Can you think of a generic way that an adversary can gain an advantage against the block cipher because of this transformation/restriction of the plaintext?
(I appreciate the link to Matthew Green's post; I don't think his analysis of the effect of composition is as pessimistic as yours.)
CC doesn't get to see the plaintext in the construct I explained.
CC is just a bijective function on (blocksize of KC) bits.
The author of the mentioned blog ignores the fact that cascading ciphers like the one I described break automated cryptanalysis, which is a necessity for mass surveillance in a world with widespread cryptography.
I think cascading ciphers might be a good idea, but if you're following Kerckhoffs's principle, then once your system achieves significant use and there is a cryptographic weakness, you should assume an adversary will exploit it even if it's different from the weaknesses of other systems.
I guess there's an economic argument to be made that millions or billions of pairs of communicating parties could develop their own individual means of at least obfuscating their communications so that nobody could expect to find searchable plaintext after decryption. But the economic effort that the pairs of parties invested in creating their obfuscations may have been wasted because if the same level of time or effort had instead been spent to improve mainstream cryptography, it might have yielded major qualitative security improvements for the "official" stuff.
I'm well aware that it doesn't stop a devoted attacker. But it needs a devoted attacker. That is the important point, which you also recognized (besides many others). Even a devoted attacker has a harder time, because cracking a code with a known algorithm and unknown key is easier than cracking a code with an unknown algorithm and unknown key. Even more so if the unknown algorithm's implementation is not available.
I think your argument that the effort put in custom ciphers maybe should be put in mainstream ciphers instead is interesting.
Tinkering around with custom ciphers can teach you a lot. Maybe you don't have the knowledge, or any idea how to attack/improve mainstream ciphers.
However, if you can make a difference for mainstream ciphers, of course that's what we need.
Just to put the issue in an extreme perspective, suppose that the best mathematically possible attacks against AES-256 in some setting only reduce the attacker's work by the same factor as the best mathematically possible attacks against AES-128. (There's no proof of this now, but it's conceivable that it's true.) In that case, the decision to use AES-256 in a particular application instead of AES-128 improves security against cryptanalysis of AES by a factor of 2¹²⁸ in the attacker's work. (Maybe cryptanalysis of AES isn't actually the weak point anyway, but let's set that aside because that's what inventing new ciphers tries to address.)
If this hypothesis is true, the work that Daemen and Rijmen did to invent AES-256 and the work that a particular implementer did to implement it will produce an almost inconceivably vast security benefit against this particular threat.
The reason this is important is the kind of disproportionality between the effort of Daemen and Rijmen and the AES reviewers and implementers, and the magnitude of the resulting security benefit. They might have spent a total of 500 person-years on making AES-256 work well, and received a security improvement of 340 trillion trillion trillion-fold (2¹²⁸) relative to whatever the security of AES-128 is. Whereas a homegrown cipher that isn't very mathematically sound might be developed with 1 person-year of effort and end up making an attacker do, let's say, 100 trillion operations. In my hypothesis, Daemen and Rijmen and other folks then got somewhere between a trillion trillion trillion and a trillion trillion trillion trillion trillion trillion trillion times better security return on their effort.
Now you might reasonably point out that if you use a standard, known cipher, the attacker's costs for a direct brute-force attack are purely computational and don't involve research and development, or attempting to suborn or hack your correspondents or colleagues to discover the principles of operation of your system. Whereas if you do have a homegrown mechanism in play, an attacker incurs these other kinds of novel and sort of one-off costs, notably including making other human beings think about stuff more.
The point that I've taken from a lot of the security experts who've talked about this, though, is that the scaling benefits are the important factor here, again especially if you want to make a system that many people could use for a long time. When the limiting factor is computer time, which is really only likely to be true for systems created, refined, and reviewed by experts, you can sometimes get the really absurd security ratios that are hard to even think about, and require your adversary to spend more money than exists in the world, build more computers than can be made from all the silicon on Earth, consume more energy than the Sun outputs, etc., etc. When the limiting factor is human reasoning, you might say "but that would require human cryptographers to think about my system for 1 year!". But if that's so, that may actually happen, and in any case you can't easily get the order of magnitude of the costs and resources required up to "inhuman" levels.
The point I'd take from your idea is that it could be valuable to try to make adversaries incur diverse costs in attacking your system, especially if you don't know what capabilities and resources your adversaries do and don't have. This is kind of akin to what's happened with key derivation, where people have proposed KDFs that are very CPU-intensive and also KDFs that are very memory-intensive, and if there are other sorts of resources that you could make an attacker burn, there are probably people trying to invent KDFs that burn those, too. It's not clear to me that there's a genuinely scalable way to require human analytical effort as one of those resources, but if there is, that could be a useful property for communications systems to have for defense in depth. But making up a new cipher by hand for every system is probably not going to provide that property very reliably, or be a very effective use of resources, again when other uses of resources can improve security to a staggering extent.
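As an aside, the standard library already exposes both flavors of resource-hard key derivation; the parameters here are illustrative, not tuned recommendations:

```python
# Resource-hard key derivation from the standard library: PBKDF2 mostly
# burns CPU time; scrypt additionally burns memory. Parameters below are
# illustrative, not tuned recommendations.
import hashlib, os

password = b"correct horse battery staple"
salt = os.urandom(16)

cpu_hard = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
memory_hard = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
```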
You're right; I misread your construct as KC(CC(P)). Your construct [CC(KC(P))] shouldn't be weaker than KC, unless information or resources are shared by CC and KC (such as keys). Shared information or resources may introduce side channel attacks. Per the previous link, this is likely only practicable on entirely separate machines.
Any entity that can break AES at-scale will undoubtedly find any unreviewed cryptographic protocol trivial to break. Any such at-scale effort would already include attacks against typical bad-custom-crypto (because they're extremely easy and common), in addition to the AES attacks. Cascading ciphers, particularly weak ones, will not stop the NSA.
edit: addressed information leak if CC & KC share keys/resources
Thanks for agreeing that we don't lose security when using my construct.
> Any entity that can break AES at-scale will undoubtedly find any unreviewed cryptographic protocol trivial to break.
Yes, but it would involve highly paid cryptanalysts. The reason for my first comment is, first of all, to disprove the root of this thread. Secondly, it makes surveillance more expensive while it's free for us.
> Thanks for agreeing that we don't lose security when using my construct.
I don't agree. The construct may not degrade security under several caveats. Most implementations are extremely likely to share resources, which will introduce weaknesses. I'd wager those weaknesses would degrade security much more than the composition would enhance it, but it'd depend on the exact situation.
> Yes, but it would involve highly paid cryptanalysts. My proposal is first of all to disprove the root of this thread. Secondly, it makes surveillance more expensive while it's free for us. That's what cryptography is all about: making their life harder while not so much for us.
My exact point was that those cryptographers would already need to develop generic attacks for all the non-standard (read: non-secure) cryptosystems out there. Composing a homegrown cipher with a peer-reviewed secure cipher will not make their lives harder. It will make maintaining and improving the system harder. The net result is overwhelmingly likely to be detrimental.
And it has the added benefit from the NSA's point of view that your connection/data is precisely fingerprinted as 'homebrew-crypto-1629: refer to analysis cell 2865JQ'.
Then the computer in sub-basement 19 goes 'ding!' and sends an automated SWAT team to your house.
To fingerprint the connection/data by the used cipher they would need to break KC. If they are able to break KC, they can also fingerprint you when you only use KC.
You have said multiple times that the outer wrapper was CC, and the offline, inner wrapper was KC. All they need to do is infer it is CC output, either through online attacks, anomalies in its statistical distribution, block size, etc. Also, certain modes like ECB have watermarking attacks, which inherently reveal that ECB was used.
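For example, here's a toy demonstration of how ECB structure survives (assuming the third-party `cryptography` package; the repeated plaintext is contrived for illustration):

```python
# Toy demonstration: identical plaintext blocks encrypt to identical
# ciphertext blocks under ECB, a pattern no keyless outer transform hides.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

plaintext = b"A" * 16 * 4                   # four identical 16-byte blocks
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

blocks = {ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)}
print(len(blocks))                          # 1 -- all four blocks are identical
```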
RC4 is a common stream cipher whose output can definitely be detected. In contrast, AES seems much harder.
The output of KC is not necessarily completely random. It can contain some metadata. Like "encrypted with KC4096bits, initialization vector is 490282348992489, length of ciphertext is 26728 bytes".
When you pass this through a bad cipher it may create a characteristic fingerprint.
Wait are you telling me there are special classes of inputs that are unsafe to encrypt using the standard algorithms?
This sounds scary and please tell me more. In particular what happens in the pathological case where the input consists of 32/64/128/256 bit blocks, and in each block all bits are zero except the last one which may be one or zero?
If you restrict the search space of the input, you also reduce the possible outputs, and that makes it easier to narrow down the key. However, it's only really a concern with very small inputs, another channel of attack (e.g. the ones illustrated in the article), or if you know something specific about the input.
TL;DR don't base64 without a reason because it gives the attacker more bytes with fewer values to work with.