Million Dollar Curve (cryptoexperts.github.io)
309 points by aburan28 on Feb 23, 2016 | 91 comments


The thesis of this paper seems to be that there aren't many alternatives to 25519. But anybody who watched the fiasco of the IRTF trying to standardize a recommendation on 25519 knows that's not true. Browser vendors were sold on DJB's curve and asked IETF to add it to TLS; given that simple charter, people came out of the freaking woodwork with alternatives.

The "randomness derived from lotteries" property seems like a gimmick. Generating plausible random seeds isn't the hard problem with generating new curves. Rather, the problem is coming up with curves that are performant on common hardware without any patent-encumbered techniques.

It's also not clear to me how the $1MM curve is a meaningful alternative to 25519. Almost the entire paper is given to considering how to generate random parameters. Having accomplished that task, the paper simply generates a random Edwards curve, using their lottery generator to find a random prime, d, and base point, filtered against the SafeCurve requirements. So they end up with (in effect) a randomized version of Bernstein's curve. If 25519 somehow falls, why wouldn't this curve fall with it?

Compare this paper to those of 25519:

https://cr.yp.to/ecdh/curve25519-20060209.pdf

... or to FourQ:

https://eprint.iacr.org/2015/565.pdf

Also: the swipe at Brainpool seems disingenuous. Zero is the number of people who believe that Brainpool's mathematical constants were arranged specifically to create a backdoor. The point of BADA55 is that Brainpool's use of mathematical constants as seeds isn't dispositive, and that the performance-targeting parameters of 25519 are equally credible.


I don't think that's the thesis. I think the thesis is that there could be a "BADA55 construction" even in Curve25519, even though all of the individual pieces are justifiable: for instance, 2^255-19 is justifiable because it is the prime with the smallest offset below 2^255. They knew the entire curve before trying to make it popular. They could equally have used 2^255+95, because it's the smallest prime larger than 2^255, with the justification that the prime needed to be larger than 2^255. With enough arbitrary decisions, a "one-in-a-million" vulnerability is possible. The premise of this curve is that there are no arbitrary decisions, since you commit to supporting the curve before even knowing what it is.

I think the analogy is someone shuffling a deck of cards and taking a peek at the first card, then betting someone $20 that it's the ace of spades. The process is justifiable, because you don't have any control over what the top card happens to be. It's just that you know what card it is and wouldn't make the bet if you didn't know it was already the ace of spades. Similarly, Curve25519 might have been constructed and happened to have the "one-in-a-million" vulnerability. How many other "Curve25519s" are there that we didn't hear about? Imagine that, instead, the procedure was to shuffle the cards after you commit to the bet.

I really doubt there's anything wrong with 25519. I feel like if anything, if djb et al knew about some vulnerability in curves that 25519 happened to have before releasing it, they would have put the weakness as one of the failing criteria to pick a new set of parameters.
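The rigidity claim about the prime upthread is easy to verify by anyone: 2^255-19 really is the nearest prime below 2^255. A quick sketch in pure Python (the fixed Miller-Rabin bases are only proven deterministic for much smaller inputs, but are more than adequate for checking these particular candidates):

```python
def is_prime(n: int) -> bool:
    """Miller-Rabin with fixed bases: proven deterministic only below
    ~3.3e24, but astronomically reliable for the candidates tested here."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def nearest_prime_below(power_of_two: int) -> int:
    """Largest prime under 2^power_of_two: the 'rigid', no-choices answer."""
    n = 2**power_of_two - 1
    while not is_prime(n):
        n -= 2
    return n

print(nearest_prime_below(255) == 2**255 - 19)  # True
```

Running the same search upward from 2^255 instead finds the alternative "rigid" choice mentioned above, the smallest prime larger than 2^255; the point being that even this tiny search already has a direction to pick.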


I'm pretty sure the authors aren't suggesting that Curve25519 has been tampered with, because the FAQ for this paper suggests (a) that they continue to recommend Curve25519, and (b) that they are proposing this curve because they're concerned that there aren't any good alternatives to Curve25519 with trustworthy seeds.

Curve25519 follows roughly the same generation procedure as Microsoft's NUMS: it uses minimal parameters that satisfy a performance/security goal. NUMS starts from a security level, selects the smallest prime that satisfies that level, and then selects the smallest Edwards 'd' that passes security criteria. Curve25519 selects a prime of sufficient security as close as possible to a power of 2 (making it sparse and thus fast to do math on in software), and then selects a minimal Montgomery A given security criteria.

$1MM and 25519 represent two different schools of thought about how to generate curves. $1MM says, generate random unstructured parameters, and come up with a randomness procedure that is difficult to impeach. 25519 says generate minimal parameters that achieve particular performance goals. Both schools of thought address the concern that the curve generation procedure might be untrustworthy, but in different ways: the former by somehow proving randomness, the latter by removing degrees of freedom.

25519's school of thought has pretty much won the Internet.


I really appreciate the level-headed discussion in this thread so far, especially the comment I'm replying to.

It's a stark contrast to the CFRG mailing list. (At least, so far, no one has tried to derail discussion here with "hey check out my custom cipher it's soooo secure but you need to compress the data before encrypting it or else you can observe a repeated structure out of it".)

I like 25519's school of thought. If you use the smallest possible value for a given performance/security goal, there's less room for conspiracy theory (provided the person making the theory understands what's even going on).


Indeed. One might be biased, given one was quite... um... vocal in the whole process! I personally feel Curve25519 and Ed448-Goldilocks were very thoroughly discussed, absolutely fine, and they are the final CFRG recommendations. I am quite certain of that, and I do believe we have proved that just about as well as anyone possibly can.

(I'd like to thank everyone who suffered through the whole thing. I am just a participant there, I do not speak for anyone at all except myself, and hold no special anything.)

Both of the CFRG curves are rigid 'safe' curves, with (unlike the NIST curves) solid criteria given for each and every aspect of their definition, all of which were argued in the open, and fully-reproducible parameter generation algorithms are included in RFC7748 - https://datatracker.ietf.org/doc/rfc7748/

CFRG made the decision to select rigid 'safe' curves pretty early on, so that everyone could see the exact reasons why we picked what we did at every stage, in the open, and that therefore absolutely no shenanigans were afoot. Specifically, we ended up picking one curve over a near-Mersenne prime (2^255-19), and one curve over a trinomial golden-ratio Solinas prime (2^448-2^224-1) - because they're about the right sizes, and rigidly structured in a way that makes them almost uniquely fast in software. (We voted on specific sizes, and had supporting reasons even for that.) Both curves are also, very handily - thanks djb & Mike Hamburg! - already present in the existing literature, and indeed quite a few examples of X25519 (DH) and Ed25519 (sig) deployment in particular already exist.

During the middle of the process, some people (some of whom were hardware developers involved in the selection of the "Brainpool" curves, a similar set of random prime field curves) showed up pushing for a new set of curves with random, unstructured prime fields.

CFRG didn't go that way, and we discussed that decision and the reasons why thoroughly. Mostly because the performance of curves over random, unstructured primes absolutely sucks in software (I'm paraphrasing here, but we are talking an order of magnitude difference, at least), but also because, given we were even arguing about endianness for God's sake, we'd probably never have been able to agree on a sound method for selecting random parameters, unbiased in the face of future scrutiny, let alone one that we were positive we could convince sceptical third parties about. It would appear this curve comes from the hardware developers who really wanted new random curves in any case, and have tried to do just that anyway. Good luck to them there.

The only possible advantage really with a random prime field is if some big problem exists with using elliptic curves over prime fields with a sparse bit pattern that doesn't also apply to random prime fields. We argued about that for a while (you may be detecting a pattern here...!), and the conclusion was that we simply don't know of any good underlying reason that would happen that wouldn't also pose a huge threat to random prime fields, or elliptic curve-based cryptography in general (e.g. big practical quantum computers - in which case, arguing about which elliptic curve to use would be pointless and maybe we just ought to burn our worldly possessions and go live in caves. I'm paraphrasing a bit here! <g>).

The only thing that popped up was that if one has old crypto hardware that one is trying to reheat and present as new crypto hardware with go-faster stripes, and one is trying to get better performance from it by abusing its pre-existing RSA multiplier to do your ECC maths as well (like, for example, a certain hardware developer, ahem, might do), the side-channel blinding technique there would not work properly, because it's only designed to work with unstructured prime fields as you usually see in RSA, rather than a field with long runs of consecutive 0s or 1s in there. So, um, friendly reminder, as I said at the time: if you have an RSA multiplier, please don't abuse it to implement your elliptic curves. (Note: this would also present an issue if you did it with the NIST curves, which are also so structured.)

All the software out there that I've seen that implements Curve25519 appears to be at least timing side-channel resistant (this also being a goal, and why we went with Montgomery/Edwards forms specifically to simplify this, because short Weierstrass form is a bloody nightmare by comparison).

If you are tasked with designing new crypto hardware today, I do feel you'd be better placed doing what a friend of mine suggested: instead of bolting new bits on decades-old designs like most people seem to, actually go and design new open hardware (RISC-V?) and try to restore some verifiability in that. Put all the crypto in open-source, verifiable software which the user loads and verifies (a friend's prototype design just uploads the whole executable EEPROM to the host for comparison with a known-good image whenever you connect it, and zeroises the data EEPROM in hardware whenever the executable EEPROM is written by the host) - and then ship the hardware absolutely blank. Then you hopefully have one less problem, as you don't really have dedicated crypto hardware as much as a solid general-purpose microcontroller with side-channel protection. (I wonder what impact that might have on export controls, if any?) In any case, you probably also have enough problems to worry about in your supply chain already without risking becoming the next Gemalto... or, God forbid should things not go well, Apple. Just a thought there. I know several people are working on open hardware now; I've heard of a couple of dedicated hardware implementations of 25519 floating around, too.

I don't know of anyone who actually wants to use this so-called "million dollar curve". CFRG aren't recommending it. I'm certainly not. Not because it's bad. Just because - sorry, but as I said at the time - it's kind of... pointless? If you're using the Brainpool curves already, there's not really any reason you couldn't use this... but unless: a) you believe the Brainpool curves are deeply flawed somehow, enough to avoid, but these somehow aren't; and, b) you believe there's a huge breakthrough in number theory that affects structured primes but not unstructured primes or elliptic curves in general; and also, c) you believe nobody's going to show up with a quantum computer in the time period you're worried about... I just don't - personally! - see why you would bother switching. Clearly these people do. I suppose we simply disagree there, which wouldn't be the first time.


> we were even arguing about endianness for God's sake

Why is that? Were you scared that developers might have made a mistake while typing constants into their implementations?


Because there is a little-known IETF/IRTF bylaw that no important standards debate can take place without relitigating endianness. You might think I am joking, but I am not. It's Postel's Second Law.


Quite! Some like big endian, because tradition ('network byte order'); little endian, because simplicity (current CPUs); mostly it's actually just classic bikeshedding¹.

Since X25519 and Ed25519 had already been deployed by a fair few people, all of whom used little-endian, recommending the opposite without a compelling reason would have been a bit too silly. (Upstream working groups are encouraged to consider the field elements as opaque octet strings, to try to avoid arguing the exact same point again.)

[1] http://bikeshed.com/


"Curve25519 was designed to be as fast as possible, with no security compromise. This is both a strength and a potential weakness:

    a strength because it gives a valid argument that no trapdoor was introduced in the design,
    a potential weakness because Curve25519 uses a very specific prime field. As of now, no attack exploiting this specificity is known.
For applications where speed is paramount, Curve25519 is probably the best option. But for most applications, where losing a little on the efficiency side is "not a big deal", Million Dollar Curve is probably the safest choice."

Sorry, no. Many more eyeballs have been on Curve25519, and DJB's reputation stands behind it; both of these give me _more_ confidence in it than in any new alternative.

Secondly, the implementation is as important as the design. Again, Curve25519 has vetted implementations, something which takes time.

Finally, the notion that speed is a liability is ludicrous. The lack of speed is a major liability.

Move along, nothing to see here.


I was always curious about one thing with DJB. Even DJB must admit that centering almost everything in the modern crypto world around DJB is not _secure_. Curve25519 became the default in OpenSSH, is used in Signal's Axolotl Ratchet (and in our project, actor.im, too), and so on. It is going to be everywhere. And all the lines lead back to one specific person.


So? Curve25519 provides secure parameters for ECC, why not use it everywhere? How's that not secure? Is RSA secure? Is it because there were 3 people involved?


Good point. RSA security depends mostly on how you generate your private keys, while ECC security depends on which parameters and which curve were chosen. Russian crypto experts don't fully trust DJB: they found the last iteration of DJB's parameter selection for Curve25519 a bit questionable. Changes were made for "better performance", but no one could determine what exactly was sped up. I don't know the details, but when the NSA has tried to compromise curve parameters, it has almost always been by adding such "performance optimizations".


> RSA security depends mostly on how you generate your private keys, while ECC security depends on which parameters and which curve were chosen.

No. RSA security depends on getting your parameters right and padding.

http://www.cryptofails.com/post/70059600123/saltstack-rsa-e-...

http://framework.zend.com/security/advisory/ZF2015-10

> Russian crypto experts don't fully trust DJB: they found the last iteration of DJB's parameter selection for Curve25519 a bit questionable.

Tell them to publish their findings and propose a better solution.

> Changes were made for "better performance", but no one could determine what exactly was sped up.

What "changes" exactly? The word "changes" implies there was an early draft with vastly different parameters.

> I don't know the details, but when the NSA has tried to compromise curve parameters, it has almost always been by adding such "performance optimizations".

If you don't know the details, try doing some research. Knowledge is healthy.


Sorry, you sound like those crypto nuts from various cryptography mailing lists. Curve25519 has been discussed to death during CFRG selection process (see https://news.ycombinator.com/item?id=11161315). If you (or these "Russian cryptoexperts") have legitimate concerns, please publish them and save the world!


Thank you, I will forward it to them.


Plus, Curve25519 and EdDSA have djb's stamp of approval.


I think the team at cryptoexperts is way more impressive.


What's the other curve work they've done that makes you say that?

I agree with the implication that an appeal to Bernstein isn't dispositive. But Curve25519 isn't just credible because of Bernstein; it's also been scrutinized intensively by lots of other research groups.


from their FAQ:

> We, at CryptoExperts, actually use Curve25519 and recommend it to our partners. Yet, we think that people should not rely on the same few safe curves that are currently out. Our methodology allows to easily produce safe alternatives.


That's an answer to somebody's question, but not to mine. :)


[deleted]


> I mean, I'm not trying to be a conspiracy theorist, but certainly similar things have happened.

Could you list some?


Also it's called "Crypto Experts," which in this particular field somehow pegs my contrarian indicator button.

If they'd claimed to have designed a modern alternative to SSL using modern crypto primitives that's one thing. That's hard and requires solid crypto and coding knowledge and I wouldn't trust it without a lot of peer review, but it's something a competent crypto-understanding developer could pull off. Ordinary mortals can do things like combine a cipher with an authentication algorithm correctly if they take the time to study the state of the art and avoid previously understood pitfalls.

That's using crypto as a developer. But this is making crypto.

Creating a new cryptographic primitive is serious deep ninja-god black magic voodoo stuff that is beyond mere mortals... and this is coming from someone who is very anti-elitist when it comes to most things like this. There's not a huge pool of people I'd trust to attempt it, and even if it came from someone like DJB I still wouldn't use it until it's been in publication for at least a few years and attacked by many Ph.D's and others. Salsa and ChaCha are some of the newest ciphers in common use and those are now... what... a decade old? And they've been beat up pretty badly by researchers too, so they've passed enough of a gauntlet to be trusted with something.

So even if they did this new curve, I wouldn't use it for at least 5-10 years.

Edit:

The other thing I really don't get is why a new venture like this would start off by trying to push an entire new curve. Unless there is some good reason not to trust C25519, there are many other more pressing concerns in the crypto world that could be tackled. We could really use good end-user crypto software that is solidly engineered and offers good UX. That would be of immensely more value than yet another ECC curve with no clear benefit over existing curves. (Other than maybe NIST but that's another matter.)


I think the attitude that rolling your own crypto "is serious deep ninja-god black magic voodoo stuff that is beyond mere mortals" has a lot to do with why we've been blithely taking the NSA's deliberately broken crap.

While I know it's hard, very hard, we shouldn't be discouraging people from making up new stuff; let a thousand flowers bloom around the NSA's walled garden. What we should be doing is getting rigorous about testing and verification of new crypto, be it from the NSA or from the good guys.


I don't think anyone wants to discourage people from learning about, playing with, and trying to invent new crypto.

The usual problem here is that people new to crypto frequently don't treat their whizzy new supercool algorithm as a toy that has almost certainly been done, cracked, improved, cracked and eventually abandoned. For whatever reason, it is entirely too easy for people new to crypto to convince themselves they've made a really cool discovery. There's even a cliche for this: "Anyone can create a crypto algorithm that they themselves cannot crack."

Absolutely, learn, play, and try to make something great. But keep perspective. You wouldn't invite your loved ones to be the first to test your first attempt at a home-brew parachute; similarly, don't use your home-brew crypto to protect important things.


We should put this part of the thread to bed, because CryptoExperts is the real deal, and it's kind of silly to debate "homebrew crypto" on a thread about their research.


CryptoExperts must be one of the most impressive private research and consulting companies that I know of (in cryptography). Antoine Joux and Pascal Paillier are there...


CryptoExperts is for real. But the Million Dollar Curve seems gimmicky.


If I have a very large setup that does a lot of crypto, is there not a chance that this can save me a shit ton in cpu cycles?

One of the main reasons that admins didn't turn on crypto by default until the Snowden leaks was the CPU overhead on the servers. (I don't know about Google's case, but that's when they turned it on.)

It's not hard to imagine that if I was doing a lot of crypto (or a lot of connections) that strong efficient crypto would be on my list of things to keep track of.


No, the $1MM curve will cost you CPU cycles. If you want a fast curve, use 25519.


Sigh

The issue here is that we need a trusted source of "nothing up my sleeve" numbers: https://en.wikipedia.org/wiki/Nothing_up_my_sleeve_number

This is why the RFC 3526 groups use primes derived from Pi: https://tools.ietf.org/html/rfc3526

I'm sure that it would be difficult to tamper with the results of lotteries held all over the world. But it's even harder to tamper with the value of Pi.


Sigh. This is incorrect. Pi does not give the "nothing up my sleeve" property, because if I have a desired bit sequence in mind, I can find it in Pi, then retroactively claim that because it comes from Pi it wasn't "up my sleeve". Specifically, the methodology of transforming the bits of Pi into the desired key was "up my sleeve".

What the authors propose here is much better. The methodology and protocol for transforming the seed digits is agreed to ("committed") ahead of time, before the digits are known, and only later are the numbers generated.
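A toy sketch of that commit-then-reveal ordering, using only hashlib (the real Million Dollar Curve protocol is far more elaborate; the `PROCEDURE` string and draw values below are invented placeholders):

```python
import hashlib

# Toy commit-then-reveal flow. The key property: the derivation procedure
# is fixed and published BEFORE the random digits exist, so it cannot be
# tuned, BADA55-style, toward a chosen output.

# Step 1: publish a hash of the exact derivation procedure.
PROCEDURE = "seed = SHA-512(concatenated draw results of lotteries L1..Ln)"
commitment = hashlib.sha256(PROCEDURE.encode()).hexdigest()

# Step 2: the lotteries are drawn in public; nobody controlled the digits.
draws = "07 19 23 31 42"  # hypothetical future results

# Step 3: anyone re-checks the commitment and re-runs the procedure.
assert hashlib.sha256(PROCEDURE.encode()).hexdigest() == commitment
seed = int.from_bytes(hashlib.sha512(draws.encode()).digest(), "big")
```

With Pi, by contrast, the "digits" are available to the designer before the procedure is fixed, which is exactly the loophole described above.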


Isn't this circular reasoning? If you do that, you have another constant to justify: the offset into pi you used. Go find BADA55 in Pi, and relay back to us what the offset is.†

You don't need lotteries to break this logjam. You just demand the minimal parameters that satisfy the security and/or performance considerations for your problem. LaMacchia and Costello did this for NUMS without even needing recourse to constants: they just start with the target security level, find the minimal prime for that security level, and then the minimal suitable (given, for instance, SafeCurves) d for an Edwards curve.

My friend Peter found it: http://pi.nersc.gov/cgi-bin/pi.cgi?word=BADA55&format=hex --- so: justify the binary offset "2733181964". :)
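Tangentially, anyone can reproduce that kind of lookup themselves: the Bailey-Borwein-Plouffe formula extracts hexadecimal digits of pi at an arbitrary position without computing the preceding ones. A sketch (function name mine; float precision makes it reliable only for modest offsets):

```python
def pi_hex_digits(start: int, n: int) -> str:
    """Hex digits of pi, 0-indexed after the hexadecimal point, via the
    Bailey-Borwein-Plouffe digit-extraction formula."""
    def series(j: int, d: int) -> float:
        # fractional part of sum_k 16^(d-k) / (8k + j)
        s = 0.0
        for k in range(d + 1):
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = d + 1
        while True:
            term = 16.0 ** (d - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s % 1.0

    out = []
    for pos in range(start, start + n):
        x = (4 * series(1, pos) - 2 * series(4, pos)
             - series(5, pos) - series(6, pos)) % 1.0
        out.append("%X" % int(x * 16))
    return "".join(out)

print(pi_hex_digits(0, 8))  # 243F6A88 (pi = 3.243F6A88... in hex)
```

Those leading hex digits, 0x243F6A88, are exactly the first Blowfish P-array constant, which is the "leading digits of pi" school of seed selection in action.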


The IETF (or whoever) could pre-specify a nothing-up-my-sleeve-number algorithm.

For example they could choose Pi and say that the following are the official NUMSN for each order of magnitude: 1, 41, 592, 6535, 89793, 238462, 6433832, ... and if a number is needed with a specific property (e.g. 40 digits and prime) then the nearest number to a NUMSN is the official NUMSN for that purpose.

Then any future standard that used a constant they claimed was a NUMSN but wasn't compliant with RFC 31415: Procedures for Generating Constants would be suspicious.
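The digit-group scheme proposed above is mechanical enough to pin down in a few lines (ignoring the "nearest number with a required property" refinement; names are illustrative):

```python
# The official NUMSN per order of magnitude: consecutive digit groups of
# pi = 3.14159265..., of lengths 1, 2, 3, ...
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def numsn(max_len: int) -> list[int]:
    out, i = [], 0
    for length in range(1, max_len + 1):
        out.append(int(PI_DIGITS[i:i + length]))
        i += length
    return out

print(numsn(7))  # [1, 41, 592, 6535, 89793, 238462, 6433832]
```

Because the slicing rule is fixed in advance, there is exactly one official constant per size, with no room to shop among alternatives.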


> Sigh. This is incorrect. Pi does not give the "nothing up my sleeve" property, because if I have a desired bit sequence in mind, I can find it in Pi, then retroactively claim that because it comes from Pi it wasn't "up my sleeve". Specifically, the methodology of transforming the bits of Pi into the desired key was "up my sleeve".

Pi has not been proven to be normal, so you can't be completely sure that you could always find any particularly long (backdoored) sequence.


No naturally occurring constant has ever been proven to be normal. It's a really hard property to prove. However, pi has been calculated to billions of digits and it certainly seems to be normal. It even passes statistical randomness tests.


If primes derived from pi are not suspect, then that surely goes for primes derived from 1/pi, e, sqrt(2)... as well? What about bit order, byte order?

The problem is that there are lots of small choices involved in picking "nothing up my sleeve" numbers and deriving data from them. Each small choice may seem trivial in isolation, but taken together, the parameter space is so large that you can derive virtually anything from "nothing up my sleeve" numbers.

Bernstein et al. made a point of exactly this in the paper https://bada55.cr.yp.to/bada55-20140722.pdf by deriving the string "BADA55" from "nothing up my sleeve" numbers.


But, come on. Here's the process they used:

We begin with 17 natural constants: π, e, Euler gamma, √2, √3, √5, √7, log(2), (1 + √5)/2, ζ(3), ζ(5), sin(1), sin(2), cos(1), cos(2), tan(1), and tan(2). We extend this list by including reciprocals. We then convert to constants between 0 and 1 in three different ways: take the fractional part; divide by 256 and discard any minus sign; or take all bits starting from the most significant (i.e., divide by an appropriate power of 2), again discarding any minus sign. Removing duplicates produces 73 starting seeds.

For the hash function, we are using the 10 variants of Keccak listed above. As length for each seed, we use either 20, 32, 48, 64, or 128 bytes, or the block size of the Keccak configuration (6 choices). The seed is either stored in big-endian or little-endian format (2 choices). In order to update the seed during the generation procedure, we are using a counter of length either 0 (i.e., the seed itself is incremented), 2, 3, 4, or 8 bytes (5 choices), either at the beginning or at the end of the seed (2 choices). We either truncate the seed to the right length or round it to the right length (2 choices); note that in many cases these are identical. Finally, we are using several different ways to update the counter between generation of a and b and to choose the order of generating a and b (8 choices). All in all, we have 73 · 10 · 6 · 2 · 5 · 2 · 2 · 8 = 1401600 possible configurations, mostly different, with a high probability to find a procedure that produces the desired vulnerability.

They ended up using a picky encoding of cos(1) and a counter that ticked 184 times, and they got a = 0x7144BA12CE8A0C3BEFA053EDBADA555A42391AC64F052376E041C7D4AF23195EBD 8D83625321D452E8A0C3BB0A048A26115704E45DCEB346A9F4BD9741D14D49.

It's not like they simply got a=BADA55... from pi.
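For what it's worth, the configuration count quoted above multiplies out as claimed, and it lands almost exactly on 2^20, i.e. the "one-in-a-million" scale of freedom:

```python
import math

# Factors quoted from the BADA55 paper: seeds, hash variants, seed lengths,
# endianness, counter sizes, counter positions, truncate/round, a/b ordering.
choices = [73, 10, 6, 2, 5, 2, 2, 8]
total = math.prod(choices)
print(total, round(math.log2(total), 1))  # 1401600 20.4
```

So the attacker in the BADA55 model gets roughly 20 free bits of choice, enough to hit any 1-in-a-million curve property, but well short of picking a full 128-bit pattern directly.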


There is so much variation in the methods that have already been used to derive "nothing up my sleeve" numbers that you could easily come up with 2^10 equally plausible methods. The 24 chosen bits for "BADA55" might be overkill; depending on the circumstances, being able to pick "A55" might be enough.


I agree that the collection of all nothing-up-my-sleeve numbers anyone has ever used gives you a good starting point to find a 1/10000 curve flaw, but I'm not advocating for allowing cos(1), sqrt(2), 1/pi, &c. I'm saying: "use the leading digits of pi, or the first N digits that pass some security function if you need to be picky".


> If primes derived from pi are not suspect, then that surely goes for primes derived from 1/pi, e, sqrt(2)... as well?

If someone uses 1/pi, the first question they face is going to be "why not pi". If you need a series of values, sqrt(p) is reasonable, but if you're just looking for one value then using sqrt(17) is going to look suspicious.

> What about bit order, byte order?

What about them? Primes don't have bit or byte orderings. They're just integers.

If you say "I generated this prime by taking Pi, writing it as a series of bytes, permuting those bytes, and then interpreting those as an integer", people will be very suspicious.


Check out the Wikipedia article https://en.wikipedia.org/wiki/Nothing_up_my_sleeve_number for some examples:

ARIA uses 1/π (indeed, why not pi?)

MD5 uses sin(n) (why not cos(n) or sqrt(n)?)

SHA-1 and SHA-2 use square roots and cube roots of small primes (why not sin(n) or pi?)

DFC uses e.

NewDES uses the United States Declaration of Independence.

RC5 uses both e and the golden ratio.

BLAKE uses "a table of 16 constant words which are the leading 512 or 1024 bits of the fractional part of π". (Here you see how things like bit or byte order can give you more choices.)


I suspect the reasoning is thus:

Pi & friends are all well and good when all you're doing is filling out a few initial bitfield states in a pipeline, but when you're designing a highly complicated process by which to deterministically produce a safe but "random" curve, the process of designing that process itself is going to have some unjustified choices in it. The meta-algorithm, if you like, could have been designed to produce a specific result given the input pi.

Publishing & rigidly defining the method before choosing the seed would seem to alleviate all possibility of foul play.


> Publishing & rigidly defining the method before choosing the seed would seem to alleviate all possibility of foul play.

Except, how do you know the seed was not secretly chosen first? There are probably ways around that again, but this rabbit hole is deep.


I can't imagine many things harder than secretly rigging >30 worldwide lotteries.


Could the selection of primes derived from Pi be optimized to find weaker ones vs. stronger ones? E.g.: calculate a very large set of them, then pick the ones that are weakest, and publish that as a standard. (Of course you wouldn't let on that you had chosen the weakest; you could pick an arbitrary function that gets you the weak ones, which would then look really good, because pi is objective and that function is objective.)


Problem: pi is not unpredictable.


It doesn't need to be. The group you're using is part of the cryptosystem, not part of the key.


And the cryptosystem might be backdoored by the way you choose the constants. (There are way too many constants to choose from.)


It really isn't backdoored, though. BADA55 made a legitimate point that curves shouldn't be required to derive from simple math constants, but it does a less good job of showing that math constants are themselves necessarily untrustworthy. You'd need an awfully funky-looking set of constants to get the degrees of freedom BADA55 proposes.


Right. Hence "nothing up my sleeve" numbers. It's hard to fit a backdoor into Pi.


I used to think so until I read Carl Sagan's Contact.


Why did you choose Pi? And not e? (see what I'm doing?)


Because pi is the simplest constant that has already been used as a magic constant in other cryptographic algorithms. Next question?


That's completely not true.

Example sha-1:

> The constant values used are chosen to be nothing up my sleeve numbers: the four round constants k are 2^30 times the square roots of 2, 3, 5 and 10. The first four starting values for h0 through h3 are the same with the MD5 algorithm, and the fifth (for h4) is similar.


2^30 times the square roots of 2, 3, 5 and 10

versus

the first N digits of Pi, minus the leading 3 (what Blowfish used).



Look at the BLAKE example there: it's simply the leading digits of pi.

The rest of the examples seem like they prove my point. 1/pi? sin(n)? First N primes? Cubes of first N primes? I agree! Super sketchy! Why would you use any of those? Just use pi.


You probably don't want to use BLAKE as the poster child for this argument, since it uses two sets of constants of distinct provenance: the initialization vector, which uses the same constants as SHA-2, i.e., the square roots of the first 8 primes, and the constants used in the compression function, which are indeed the expansion of pi. BLAKE2 got rid of the pi constants.


WAT, there was no pi in that list


> Of course, one should not conclude that cryptographic algorithms using similar constants are systematically insecure (certainly, some designers are honest!) and we will not dispute the right to trust those algorithms.

I'm reminded of a controversial aspect of the design of DES. The values of the S-boxes were unjustified, and some people feared they constituted a backdoor (much like what's alleged with Dual_EC_DRBG). It turns out that in fact they were specially chosen to strengthen DES against a cryptanalytic attack that the NSA was trying to keep secret.

https://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA.2...


My crypto prof in uni commented on this. (This is from memory). Apparently, the number of rounds (16) seemed to have been chosen arbitrarily. Many years later, when attacks on Feistel(?) ciphers (of which DES is one) became public, it turned out that these attacks only worked up to .... 15 rounds! So the speculation was that the NSA knew about the attack, and chose to set the round number just out of reach...


This story is kind of true, but is missing some detail. The first differential attack on DES broke up to 15 rounds, that is correct [1]. 2 years later, however, the same team broke the full 16 rounds, albeit with a hefty amount of chosen plaintexts [2]. Coppersmith disclosed the DES design choices a year after that, showing that the S-boxes had been changed to increase resistance to differential attacks, in 1994 [3]. Additionally, Matsui's linear cryptanalysis of DES didn't seem to have been taken into account and also broke through the full DES [4].

Another interesting anecdote is the tale of Skipjack, which unlike DES was an entirely NSA-designed block cipher. After it was disclosed in 1998, an impossible differential attack broke exactly 31 out of 32 rounds, and I believe that is still the best known result today [5].

[1] http://link.springer.com/chapter/10.1007%2F3-540-38424-3_1

[2] http://link.springer.com/chapter/10.1007%2F3-540-48071-4_34

[3] http://simson.net/ref/1994/coppersmith94.pdf

[4] http://link.springer.com/chapter/10.1007%2F3-540-48285-7_33

[5] http://link.springer.com/chapter/10.1007%2F3-540-48910-X_2


My understanding is that NSA's changes to DES were all in the S-boxes, and that it was the S-box changes that hardened it against differential cryptanalysis. I think there's even a story that tries to explain how they did it (randomly generating S-boxes and selecting the ones least vulnerable to the attack).


Don Coppersmith wrote an article in the IBM Systems Journal that said as much.


The attack is "differential cryptanalysis" -- essentially, if you have a black-box encoder, carefully chosen plaintexts sent through the system can leak information and substantially weaken DES as described.

Academic literature started discussing differential cryptanalysis in the early '90s if memory serves -- NSA helped harden against it in the 1970s!

Current wisdom is that the NSA is not nearly so far ahead of the cryptography curve these days. But, it could just be speculation. :)


What about the weakdh stuff? See e.g. http://www.scottaaronson.com/blog/?p=2293


That's less about advances NSA may have made ahead of industry/academia and more about the operational advantages NSA has in deploying already-plausible attacks.


I wonder how many global sources of randomness they've discarded before finding one that is both plausibly safe and exploitable.


I get the concern with NIST but I'm unsure about the concern with Curve25519. Is there any reason to suspect shenanigans there to be plausible?

Also I'm not sure that future public lotteries are really that great. Technically such a thing could be rigged, though it would be hard. A truly good source would be something cosmic and public like sunspot records in the future that could be independently verified and recorded by multiple people and could not be tampered with unless God wants to backdoor our crypto.


Yes, I hate the idea of future public lotteries, published ahead of time. The entire idea is to be resistant to a nation-state attack; lotteries seem pretty far down the chain.

If you're in love with lotteries, you could do something like order all lotteries in the world in a published list, and then choose them based on the last digits of bitcoin block hashes over a certain period.

I'm not sure what bitrate you could get through sunspots, but I agree we shouldn't worry about subversion from that angle.


It's based on lotteries from different countries all around the world. No way ONE state adversary could rig this. It's also set in the future; you need to commit to the time frame before you know the draws. I think their plan was to tweet it and sing it on SoundCloud (among other things).


> No way ONE state adversary could rig this.

What about multiple state adversaries? It would be hard as hell but it could be done.

I'd still opt for an astronomical source. If God wants to backdoor our crypto we're SOL, but other than that it would be pretty solid. I also don't get what's wrong with nothing up my sleeve constants. Hashing the word "YOLO" is mathematically unlikely to be a rigged constant to almost "probability of falling through the floor due to random quantum fluctuations" degrees.
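To make the "YOLO" point concrete: a one-in-a-million backdoor would require the chooser to search roughly 2^20 candidate seeds, and a short, fixed, public string leaves essentially no room for that. A sketch of the idea (the string "YOLO" and the choice of SHA-256 are just this thread's example, not any real standard):

```python
import hashlib

# Derive a candidate 256-bit constant from a public, human-chosen string.
# The defender's argument: the space of "plausible" strings is tiny, so an
# attacker can't search it for a seed that yields a weak curve.
seed = int.from_bytes(hashlib.sha256(b"YOLO").digest(), "big")
print(f"{seed:064x}")
```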


> What about multiple state adversaries? It would be hard as hell but it could be done.

Highly unlikely; there is too much money in play, too many processes to corrupt... Cryptography is usually made solid enough that there are easier ways to break the system.

http://cryptoexperts.github.io/million-dollar-curve/#lotteri...

> what's wrong with nothing up my sleeve constants

They're not unpredictable. There are a million constants you could choose from if you wanted to make a backdoor. I think the BADA55 paper is about that.


Why not just use Bitcoin blocks directly? A SHA-256 hash of 100 consecutive blocks should be sufficient.
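A sketch of what that beacon might look like (the block hashes below are fabricated placeholders; real ones would come from a Bitcoin node or block explorer):

```python
import hashlib

# Fold 100 consecutive block hashes into a single seed. With real data,
# anyone running a node could recompute and verify the result.
fake_block_hashes = [
    hashlib.sha256(f"block-{height}".encode()).digest()  # stand-in for real hashes
    for height in range(400000, 400100)
]
seed = hashlib.sha256(b"".join(fake_block_hashes)).hexdigest()
print(seed)
```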


I wondered about that, but Bitcoin mining is increasingly dominated by a small number of big players, and they also have the hardware to do exhaustive searches on hash codes to find inputs that give the answer they want. I'm not saying it's necessarily a bad idea, but it's not immediately obvious that this is going to be harder to subvert than a selection of national lotteries.


Could it be something like stock market values on specific dates?

If an actor can rig the stock market, the problems are different, I would think.

Doesn't the cosmic source have an issue of who can actually make that measurement? Won't you need specific tools and knowledge? Or do you mean using astronomy databases created by reputable institutions?


Sigh, yet another attempt at solving the wrong problem.

There is no shortage of proposals for how to generate curves. There is a shortage of implementations that you can just "npm install" or whatever and use.

For ECC, I can name you one library that people like me can use - libsodium, based on djb's Curve25519/Ed25519. It got into the TLS standard not because it's so much better than anything else but because it has an implementation that works, is available for many languages, and has decent documentation.

Of course it helped that djb spent a lot of time securing the thing against side-channel/timing attacks, employed the best coding practices, and wrote and presented papers on it, but no amount of paper-writing is a substitute for sitting down and actually building the thing in a production-quality way.


Elliptic curves are going to be rendered obsolete by the introduction of quantum computing via Shor's algorithm.

25519 is already trusted and fast, and in all likelihood will last until we need to shift away from this type of cryptography.

Seems as though this is pretty futile and doesn't need to exist.


From a practical point of view, having sources of randomness/entropy isn't very useful. A CSPRNG will do a great job of generating random numbers. It will create secure TLS connections and an ssh key that's never been seen before. However, its Achilles' heel is that it MUST be seeded with random numbers. That's hard to do in a predictable state, such as a headless VM.

If the seed's no good, it can't establish a secure TLS connection to fetch random data (à la the Ubuntu pollinate service).

The next thing to push crypto security forward is easy, efficient, and trustworthy ways to generate this seed.
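On Linux today, the practical consumer-side answer is the getrandom() path, which blocks until the kernel pool has been seeded; in Python that surfaces as:

```python
import secrets

# A CSPRNG needs one good seed, then can serve unlimited keys. secrets
# (backed by os.urandom, which uses getrandom() on modern Linux) blocks
# until the kernel's pool has been seeded -- the headless-VM problem
# described above is everything that happens before this call can return.
key = secrets.token_bytes(32)  # a fresh 256-bit value
print(key.hex())
```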


Yeah, you're missing the point a little. If you want to generate and standardize a random curve (like, in this paper, the "random" version of Curve25519, which doesn't rely on specific magic parameters carefully chosen for speed), you have a big problem: even though your CSPRNG might be perfectly fine, nobody is going to believe that you actually used it, because you can't prove that you did.

Hence: lotteries.


One thing that is ignored is that you can't be sure that a lottery isn't rigged. We've seen cases (Serbia) where lotteries were rigged, so it's very possible that someone with enough money could rig your entropy source.


It's relatively rare that lotteries AREN'T rigged. It's a bunch of money sitting RIGHT THERE to the people implementing the lottery. And like any other situation where human beings have a GIGANTIC incentive to commit fraud, they often do: https://thehornnews.com/authorities-lottery-may-be-rigged/ http://www.nydailynews.com/news/national/lottery-employees-r...

"State lotteries in Colorado, Wisconsin, and Oklahoma have confirmed they paid jackpots worth $8 million to Tipton associates, including his old college roommate, Robert Rhodes. Investigators are looking at payouts in the other 37 states and U.S. territories that used random-number generators from the Iowa-based association, which administers games and distributes prizes for the lottery consortium." NOTICE THAT 40+ STATES AND US TERRITORIES ACTUALLY USED ONE RIGGED RNG FROM THIS CORRUPT DUDE IN IOWA

I don't know what these curves are used for, but if there is anything at stake, it's worth looking into whether some of the people working on this project had college roommates that are now in lottery sinecures.


https://simple.wikipedia.org/wiki/Elliptic_curve needs more input, so if anyone here is feeling up to it...


Hahaha they haven't seen all the past instances of lotteries being rigged? And they don't think it will happen in the future?


Is it conceivable that a government agency could manipulate lottery results specifically to break this?


cf. the FAQ


Doh, thanks!



ELI5


more garbage over the garbage !!


I'm donating 35 dollars to Million Dollar Curve. Who wants to match me?



