As a compact-ish explanation: a "standard" wideband RF system in an EW or RF reconnaissance platform covers 0-18 GHz (DC up through the Ku band), or at least as much of it as possible (with Ka/mmW becoming common on new systems), and it has challenging requirements compared to a communication system. Communication systems are simpler to design, since both sides of the link cooperate and filter out a wide swath of potentially interfering signals, but a military system wants to see as many signals as possible so they can be collected or jammed. Given those requirements, it has not been advantageous to use an integrated RFSoC in the past. If a company were spending millions of dollars designing a complicated front end, they might as well pick a separate ADC/DAC that maximizes the performance they cared about, rather than go with the "easy" integrated RFSoC option that might not have the absolute best performance. The industry is only now getting to the point where something like a direct-sampling ADC/DAC integrated into a Versal can process massive bandwidths at high enough bit rates to do useful things for military applications; it may actually be worth it now, because you can push to really high data rates, and the additional processing might make up for a small loss in ADC/DAC performance. Give it a couple of years for these to make it into new designs and get fielded.
So I guess the tl;dr is that it's not that defense doesn't like integrated packages; they just haven't been worth it given the design goals. Defense does move slowly, but this is more about being able to field "military-grade" solutions that work well in challenging RF environments, and once that is possible the government will start to pay for it.
> The prime-counting function approximation tells us there are Li(x) primes less than x, which works out[5] to one prime every 354 odd integers of 1024 bits.
Rule of thumb: Want a 1024-bit prime? Try 1024 1024-bit candidates and you'll probably find one. Want a 4096-bit prime? Try 4096 4096-bit candidates and you'll probably find one.
The approximate spacing of primes around p is ln(p), so ln(2^1024) = 1024*ln(2), and ln(2) = 0.693, so if you are willing to absorb the 0.693 into your rule of thumb as a safety margin you get the delightfully simple rule above. Of course, you'll still want to use a sieve to quickly reject numbers divisible by 2, 3, 5, 7, etc. (this easily rejects 90% of candidates), then do a Fermat primality test on the remainder (which, if you squint, is sort of like "try RSA, see if it works"), and then do a Miller-Rabin test to really smash down the probability that your candidate isn't prime. The probabilities can be made absurdly small, but it still feels a bit scandalous that the whole thing is probabilistic.
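For the curious, that whole pipeline fits in a short sketch. This is stdlib-only Python; the function names and round counts are my own choices for illustration, not anyone's production code:

```python
import random

# Small primes for the quick sieve step; rejects most candidates cheaply.
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def passes_sieve(n):
    # Fine for big candidates; would wrongly reject the small primes themselves.
    return all(n % p != 0 for p in SMALL_PRIMES)

def fermat_test(n, a=2):
    # "Try RSA, see if it works": a^(n-1) == 1 (mod n) whenever n is prime.
    return pow(a, n - 1, n) == 1

def miller_rabin(n, rounds=40):
    # Write n-1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # definitely composite
    return True                   # composite with probability <= 4^(-rounds)

def random_prime(bits=1024):
    while True:
        # Fresh random odd candidate each time (rejection sampling),
        # top bit forced so the result is a full `bits`-bit number.
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if passes_sieve(n) and fermat_test(n) and miller_rabin(n):
            return n
```

Note that the Fermat test alone is fooled by Carmichael numbers (e.g. 561 passes base 2), which is exactly why Miller-Rabin does the final smashing.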
EDIT: updated rule of thumb to reflect random candidate choice rather than sequential candidate choice.
It's been a while since I've looked at the literature on RSA prime generation, but I seem to remember that picking a random starting point and iterating until you find a prime is discouraged because primes aren't evenly distributed so key generation timing could reveal some information about your starting point and eventual prime choice.
I'm not sure how realistic an issue this is given the size of the primes involved. Even if an attacker can extract sensitive enough timing information to figure out exactly how many iterations were required to find a 1024-bit prime from a 1024-bit random starting point, I'm not aware of a good way to actually find either value. You do also introduce a bias, since you're more likely to select primes without a close neighbor in the direction you are iterating from, but again I'm not sure how practical an attack on this bias would be.
Still, to avoid any potential risk there I seem to remember best practice being to just randomly generate numbers of the right size until you find a prime one. With the speed of modern RNGs, generating a fresh number each time vs iterating doesn't seem like a significant penalty.
Yes, excellent point! I originally omitted this detail for simplicity, but on reflection I don't think it actually achieved much in the way of simplifying the rule so I changed it to reflect reality. Thanks for pointing that out.
EDIT: the rush of people offering up sieve optimizations is pushing me back towards formulating the rule of thumb on a consecutive block of numbers, since it makes it very clear that these are not included, rather than implicitly or explicitly including some subset of them (implicit is bad because opacity, explicit is bad because complexity).
I've seen many implementations that relied on iterating (or rather, they used a prime sieve, but it amounts to the same thing in terms of the skewed distribution). While maybe not a problem in practice, I always hated it, even for embedded systems etc. I always used pure rejection sampling, with a fresh random number in each iteration.
Encountering this by chance is exceedingly unlikely, of course, if p and q are randomly generated. In probability terms it amounts to the first half of p (or q) being all zeros (apart from a leading 1), so roughly 2^(-n/4) where n is the bit size of the modulus N. So for RSA-2048 the probability of this happening is on the order of 2^-512, or in base-10 terms 0.0000000...0000001, with roughly 150 zeros before the one!
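A quick back-of-the-envelope check on that zero count, just converting 2^-512 to a power of ten:

```python
from math import log10

# 2^-512 == 10^(-512 * log10(2)); the exponent's magnitude is the number of
# leading zeros after the decimal point, give or take one.
exponent = 512 * log10(2)
print(round(exponent))  # 154, so "roughly 150 zeros" checks out
```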
> Rule of thumb: Want a 1024-bit prime? Try 1024 1024-bit candidates and you'll probably find one.
Where "probably" is 76% [1], which is not that high depending on what you are doing. For example, you wouldn't be OK with GenerateKey failing 24% of the time.
To get a better than even chance, 491 [2] 1024-bit candidates are enough.
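Under the plain prime-number-theorem density estimate, you can reproduce numbers in this ballpark in a few lines. (The exact figures depend on which density approximation the footnoted calculations used, so treat this as a sketch rather than a reproduction of [1] and [2].)

```python
from math import log

bits = 1024
# Density of primes among uniformly random 1024-bit integers,
# per the prime number theorem: ~1/ln(2^1024).
d = 1 / (bits * log(2))

def p_at_least_one(k):
    # Probability that at least one of k independent candidates is prime.
    return 1 - (1 - d) ** k

print(p_at_least_one(1024))   # roughly 0.76

# Smallest k giving a better-than-even chance under this approximation.
k = 1
while p_at_least_one(k) <= 0.5:
    k += 1
print(k)                      # ~491-492, depending on the estimate used
```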
Iterating over some huge search space in an essentially sequential manner is generally not going to be nearly as performant as simply selecting an odd number at random. You could try using a generating polynomial instead, such as f(x) = x^2 + x + 41, but even that isn't going to help much in the long run. (There are Diophantine equations which may one day prove useful for generating random primes; however, AFAICT finding efficient solutions is still currently considered a hard problem.)
Yes, but the more we mix sieve rejection into candidate selection the more we complicate the rule of thumb. "Reject even numbers as prime candidates" is probably OK to leave as an exercise for the reader, as is the equivalent "round every candidate to odd" optimization. The point about random vs sequential is well taken, though, and it doesn't complicate the rule of thumb, so I changed it.
Neither takes a significant fraction of the time required to reject a candidate. The cheapest rejection test is a bignum division by 3, and something like 2/3 of candidates will need more expensive further tests.
Oh, he only busted the Great Depression, won WWII, built half of the infrastructure that we keep kicking the expiration date on, and negotiated 80% of the beneficial fine print in your employment contract. Don't you think he could have done a bit more?
My list would be: 1. FDR, 2. Carter, 3. Teddy. Carter because he sacrificed his career to fix inflation (Republican attempts to rewrite history notwithstanding), and Teddy because he wasn't merely an excellent man with excellent politics, but also because whenever present-day Republicans try to claim the man without claiming his politics I can turn it into a teachable moment, and putting him on a list with the other two is the perfect bait.
He didn't end the Depression; it clearly continued right up to WWII. You can debate how things might have gone if he had been allowed all his ideas (some of which were as undemocratic as what Trump wants).
He steered us to join the war which did end the depression.
Whether it be the New Deal or non-isolationist policy, his direction led us out of the Great Depression, which started before his presidency and ended before he died.
I think this kind of counterfactual is pretty impossible to do.
Do you mean that it would have been won without any US involvement? Or do you mean the US involvement would have proceeded similarly with a different president?
From my cursory sense of things it seems like both are probably true. The U.S with FDR obviously contributed to it ending when it did, but so did other countries who were there earlier and sacrificed more. The U.S seems like more of a winner in the sense that they sacrificed relatively little while getting the most out of it, but I don't know if that's a good use of the term "won" in the context of a world war.
Infrastructure was mostly built in Eisenhower's era, not FDR's. Helping Soviets during WWII was a major mistake and it can be personally attributed to FDR - a radical leftist - himself. Many people around him advised him of the dangers of helping Commies.
U.S. should have ignored Soviet-German war. Then finish Commies with nukes.
If they'd done that, they'd go down in history as worse than the worst of communism. It was bad enough that they dropped two on the Japanese, which scores American civilisation a questionable footnote in the history books: "only people to use a weapon this terrible".
The problem with unprincipled aggression is that, sooner or later, other people match it. The US ended up doing much better by defeating the communists without directly fighting them - one of the few wars the US unambiguously won - and why people don't want to learn that lesson is one of the great mysteries. Victories through overwhelming prosperity are both decisive and comfortable.
But that is the point! Get rid of everyone who wasn't friendly/under control, who could match it. Thus achieving worldwide democracy for all nations who could support it immediately, and unlimited time to get everyone who can't, prepared (with potentially unlimited violence applied to force them to). Achieve a sustainable hegemony.
I love cheap and reliable TP-Link routers as much as the next guy, but it's definitely also a security issue. The CCP almost certainly has a backdoor. Maybe a respectable one in the form of an undisclosed bug or the ability to lean on an update provider, but the point stands: it's absolutely a security issue and denying this is cope.
Routers are going to be a bit more expensive and a bit less reliable for a while. We'll live.
Probably a better approach than the futile attempt to excise all routers with backdoors or bugs would be to continue the ongoing efforts to make network security router agnostic.
I thought Shor's algorithm could attack ECC too and the lattice crypto with the sci-fi crystal names (Kyber and Dilithium) was the response?
If I go to https://www.google.com using Chrome and Inspect > Security, I see it is using X25519Kyber768Draft00 for key exchange. X25519 is definitely ECC, and Kyber is being used for key encapsulation (per a quick google). I don't know to what extent it can be used independently; it's new, so they are layering it up until it has earned the right to stand on its own.
It's new so they are layering it up.
At https://pq.cloudflareresearch.com/ you can also see if your browser supports X25519MLKEM768; the X25519Kyber512Draft00 and X25519Kyber768Draft00 variants are deprecated ('obsolete'?).
The 50 000 000th prime is 982451653, but fun fact: you may have already memorized a prime much larger than this without even realizing you have done so!
2^255-19
This is where Curve 25519 and the associated cryptography (ed25519, x25519) gets its name from. Written out, 2^255-19=57896044618658097711785492504343953926634992332820282019728792003956564819949.
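If you want to check that expansion yourself, Python's bignums make it a one-liner, and the digit count is a nice sanity check too:

```python
p = 2**255 - 19
assert p == 57896044618658097711785492504343953926634992332820282019728792003956564819949
print(len(str(p)))  # 77 decimal digits
```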
You could have memorized an even larger one if you are familiar with the full name of the Mersenne Twister PRNG: MT19937 is named for its period, 2^19937-1, which is a prime number (in fact, the largest known one at the time of Zigler's writing). To my knowledge, no larger prime number has been used for a practical purpose.
https://oeis.org/A004023 "Indices of prime repunits: numbers k such that 11...111 (with k 1's) = (10^k - 1)/9 is prime."
OEIS says "19937 ones in a row" isn't prime, but "1031 ones in a row" is.
And "8177207 ones in a row" is at least a probable prime. (Which you can maybe remember as a seven-digit phone number, or as either BITTZOT or LOZLLIB depending on how you prefer to hold your calculator. But those mnemonics are wasted if (10^{8177207}-1)/9 turns out to be merely pseudoprime.)
10 years ago, I burned about 6 months of project time slogging through AMD / OpenCL bugs before realizing that I was being an absolute idiot and that the green tax was far cheaper than the time I was wasting. If you asked AMD, they would tell you that OpenCL was ready for new applications and support was right around the corner for old applications. This was incorrect on both counts. Disastrously so, if you trusted them. I learned not to trust them. Over the years, they kept making the same false promises and failing to deliver, year after year, generation after generation of grad students and HPC experts, filling the industry with once-burned-twice-shy received wisdom.
When NVDA pumped and AMD didn't, presumably AMD could no longer deny the inadequacy of their offerings and launched an effort to fix their shit. Eventually I am sure it will bear fruit. But is their shit actually fixed? Keeping in mind that they have proven time and time and time and time again that they cannot be trusted to answer this question themselves?
80% margins won't last forever, but the trust deficit that needs to be crossed first shouldn't be understated.
Jargon and shared context are barriers for newbies. In a new field they simply don't exist yet. The avenues for accidentally excluding people (or intentionally but I like to be charitable) don't exist yet.
You certainly can, provided you have a truck, and a DOT certified fuel trailer, and a transfer pump.
That system also allows you to participate in oil futures as an end user, not to mention it lets you keep your generator up and running for a very long time.
Downside is, modern E10 gasoline tends to absorb water from the air over time, so the fuel isn't stable long-term. Most guys doing this are running diesel cars/gensets for that reason.
The model is: go to a truck depot with a 300-gallon trailer, fill up the trailer and truck, park the trailer at home. Then fuel the truck off the trailer until it needs to be filled again, and repeat. Do understand that you can get a larger tank, but anything over 1000 gallons requires a placard/permit to haul around. That's per single tank, so in theory a legal-length 5th-wheel trailer could have multiple tanks under that limit and be compliant. If you want the tanks attached to the vehicle itself, the maximum size is 150 gallons, hence why semi trucks have multiple fuel tanks that are each smaller than that.
Really the only difficulty is finding a place nearby that is willing to sell that much fuel to an individual.
Is running a generator for long periods cost effective? If someone lived where the grid wasn't reliable, and could make the initial investment for all this fuel storage stuff, why not do solar?
Usually it's a backup thing, not a 24/7 thing. If you're really out in the sticks, solar/battery/genset is a very likely complete system. I know of a couple places way up north in Saskatchewan that have what amounts to a mining village running off a big portable genset backed up with batteries, and one of them I know was just on a very big genset, and they were able to pay for a battery bank by downsizing their generator significantly due to the disparity between peak load and average load. I'd share the article I read about it, but, in spite of my normally excellent google-fu, I've been unable to find it.
Either way, people forget about this, because if you have any sort of power generation, doesn't matter the method, if you go over capacity, you have brownouts/blackouts/grid failure/etc. What is deployed now, for the most part, is sufficient genset capacity to ramp into peak load, and most of it is under-utilized the majority of the time. Batteries eventually pay for themselves because of this, as they allow for peak load handling in addition to allowing generating capacity to stay online and remain profitable outside of peak load events. It really is revolutionary.
More or less, gridscale batteries are what dams/reservoirs were to water systems, and if you consider the impact of us as a species being able to save water for later use at-will, the promise we're looking at is going to really change the way we live.
Here in Ukraine, the grid is no longer 100% reliable for obvious (and highly intentional) reasons. Solar is somewhat inconvenient to use in large cities, and cloudy days are common. A variety of generator types and sizes can be seen instead. Living in an apartment, I have an Ecoflow Delta 2 Max as my first line and a 3.5kW gasoline/natgas generator as my second line. Most of the time, there is some power at some hours of the day, so the Ecoflow can store it. If it runs out, I can run an extension power cord down from my balcony, lug the generator down into the yard, and use it to charge the Ecoflow. I do have plans to buy a solar panel and put it on the building roof with a Starlink, but it would require negotiating with the neighbors and stuff.
Gasoline storage really is an issue, though. Currently I just hope I'll have enough money to buy it as needed.
"You mean I can't just drive the car? I need to think about how to find fuel for it every couple of days? And then I have to drive there and hope nothing explodes?"