I think so. Yes. IIRC, the first time I saw this leaked was a combined leak by a Navy SEAL and a member of the Executive. The SEAL leaked that they powered down Bin Laden's computers to take his hard drives after they shot him. The Executive member said that the drives were encrypted and that it would take a few days to get the data. Jihadis are known to use a custom version of PGP with 2048-bit RSA keys. They either used that, a COTS drive-encryption program (unlikely), or reviewed and adapted an open-source drive-encryption program. In either of the likely scenarios they would have been using 2048-bit RSA. Therefore, it is highly likely (given that the NSA had target motivation even if the drives weren't well encrypted) that the smooth barrier does not exist and the NSA can factor 2048-bit RSA on an hours-to-days time scale.
Also, it was leaked that NSA TAO had a 70%+ success rate compromising Chinese systems. Even with the tech companies giving them secret zero-days for an extended period of time, anyone who has been a blackhat knows you don't get to a 70% success rate through exploits alone. Therefore, it's highly likely they can decrypt VPN/SSH (TLS) traffic encrypted with AES-256/RC4-128/3DES and/or break the RSA/EC public-key cryptography used. As you noted, the leaked slide seems to indicate that.
Breaking RSA is just a matter of managing to factor the modulus into its prime factors faster than anyone else, isn't it? Unless there is some sort of flaw inside the RSA algorithm that allows the encryption to be broken more easily.
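Right, for textbook RSA the two are equivalent: once you have the factors of the modulus, the private key falls out immediately. A toy sketch with tiny primes (pure stdlib Python; real moduli obviously can't be trial-divided like this, which is what the whole NFS discussion is about):

```python
from math import isqrt

# Toy RSA keypair (tiny primes for illustration only)
p, q = 61, 53
n = p * q                      # public modulus (3233)
e = 17                         # public exponent, coprime to phi(n)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

msg = 42
cipher = pow(msg, e, n)        # "encrypt"

# Attacker's entire job: recover a factor of n. Trial division works here;
# at real key sizes this step is the (G)NFS.
f = next(i for i in range(2, isqrt(n) + 1) if n % i == 0)
recovered_d = pow(e, -1, (f - 1) * (n // f - 1))
assert pow(cipher, recovered_d, n) == msg   # plaintext recovered from factoring alone
```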
Do you have more information on the smooth barrier? I did a quick google but didn't see much relevant.
I'm not sure it's relevant whether the B-smooth barrier exists or not, since that assumes use of the NFS.
There's a reason the NSA is pushing folks to use Suite B ciphers, including Elliptic Curve over specific curves. It's not unreasonable to think that NSA mathematicians have proven some relationship between EC and prime number theory in general.
Interesting comment! Yes, I have been trying to avoid EC because some of the random-walk stuff I read made me uncomfortable given standardized curves. I always thought the NSA's desire for vector registers was strictly due to the block size of ciphers (particularly Russian ones). This was definitely true when DES/3DES were in use. Then again, I thought Bluffdale was just to crack old Russian intercepts with GPU-like custom hardware. BTW, a Cray hardware engineer and I talked about how Cray was trying to pivot into bioinformatics since the gov biz was no longer robust (in 2004, IIRC?).
The whole reason the USG rescued Cray in the late-1990s/early-2000s was to ensure the continued availability of large-memory-image vector supercomputers. Part of this may have been due to it being less costly than converting their processing systems from vector codes and algorithms to massively parallel distributed-processing ones. At that time the cluster interconnects were much, much slower in terms of both bandwidth and latency than they are today. Solving very large sparse matrices would have been tougher on an MPP system than on a vector one. You can read about some of this history in Bamford's "The Shadow Factory."
There have been a number of very cost-effective hardware approaches proposed for significantly accelerating both the sieving and linear algebra components of the NFS. Many of these proposals could have successfully and cost-effectively attacked a 1024-bit number in the 2003/2004 era, when the process node was around 130 nm. Today's process would have features at the 32-nm or 22-nm size, which amounts to roughly a 100-fold increase in performance since 2003. (See http://tau.ac.il/~tromer/cryptodev/ for an overview.)
Combine this specialized hardware with an algorithmic improvement that gets to O(log n) or O(n log n)....
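For a sense of what that hardware is up against: the heuristic GNFS running time is L_n[1/3, (64/9)^(1/3)] = exp(c · (ln n)^(1/3) · (ln ln n)^(2/3)). A back-of-the-envelope sketch (ignoring the o(1) term and all constant factors, so the absolute numbers are rough):

```python
import math

def gnfs_cost_bits(bits):
    """Approximate log2 of the heuristic GNFS work factor for a `bits`-bit
    modulus: L_n[1/3, c] with c = (64/9)^(1/3), o(1) term dropped."""
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

for b in (512, 1024, 2048):
    print(f"{b}-bit modulus: ~2^{gnfs_cost_bits(b):.0f} GNFS work")
```

This puts 1024-bit RSA somewhere in the 2^80-2^90 range and 2048-bit around 2^110-2^120, which is why a 100-fold hardware speedup matters so much more at 1024 bits than at 2048.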
AES appears fine. The NSA, and the USG in general, made a very strong effort in the 2000s to move all civilian command-and-control systems for satellites to AES-256 with TRANSEC capabilities. A brute-force attack on AES-256 with a quantum computer should be on the order of 2^128 operations with currently known quantum search algorithms. AES-128 looks weak at 2^64.
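Those 2^128 / 2^64 figures come from Grover's algorithm, which gives a quadratic speedup for unstructured key search (it's a search algorithm, not a factoring one), effectively halving the key length:

```python
# Grover's quadratic speedup: a k-bit symmetric key takes ~2^(k/2) quantum
# operations to brute-force instead of ~2^k classical operations.
def grover_work_bits(key_bits):
    return key_bits // 2

for k in (128, 256):
    print(f"AES-{k}: classical 2^{k}, quantum ~2^{grover_work_bits(k)}")

assert grover_work_bits(256) == 128   # AES-256: still far out of reach
assert grover_work_bits(128) == 64    # AES-128: the "looks weak" figure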
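Those 2^128 / 2^64 figures come from Grover's algorithm, which gives a quadratic speedup for unstructured key search (it's a search algorithm, not a factoring one), effectively halving the key length:

```python
# Grover's quadratic speedup: a k-bit symmetric key takes ~2^(k/2) quantum
# operations to brute-force instead of ~2^k classical operations.
def grover_work_bits(key_bits):
    return key_bits // 2

for k in (128, 256):
    print(f"AES-{k}: classical 2^{k}, quantum ~2^{grover_work_bits(k)}")

assert grover_work_bits(256) == 128   # AES-256: still far out of reach
assert grover_work_bits(128) == 64    # AES-128: the "looks weak" figure
```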
If the NSA can break something, they need to assume that their primary opponents can do so too, or will be able to soon. China specifically comes to mind here. They cannot release cryptography suites with known vulnerabilities. It is widely thought that securing one's own signals is more important than intercepting and decrypting one's enemies'.
I think everything on the internet needs to be moved to Suite B protocols with forward secrecy enabled. AES-GCM overcomes the known attacks against AES-CBC (e.g., BEAST and padding-oracle attacks) and adds the authentication that AES-CTR alone lacks.
I get the impression that the NSA is eight to ten years ahead of public-domain cryptographers in some areas. I think this gap is shrinking slowly. However, I have also heard that the NSA is preventing publication of some papers developed in the public domain for national security reasons.
As the size of a semiprime increases, the number of smooth numbers that can be discovered (the "yield") by the GNFS with academically known optimal polynomial-selection algorithms decreases. With a reduction in smooth candidates, the GNFS sieve step can be wholly unsuccessful. If a smooth barrier exists (i.e., a semiprime size at which smooth yield becomes deficient), factoring time degenerates from the improved GNFS rate to old-school factoring rates, due to the need to pivot to a slower algorithm. Yield decay has been observed below 2048 bits. If 2048-bit keys are easily factorable by the NSA, that suggests no such barrier is a challenge for them.
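A toy illustration of the yield idea: the density of B-smooth numbers (numbers whose prime factors are all ≤ B) falls off as the numbers being sieved grow while the smoothness bound stays fixed. Trial-division sketch, stdlib only, with a tiny bound B = 100 rather than anything sieve-realistic:

```python
def is_b_smooth(n, B):
    """True if every prime factor of n is <= B (naive trial division)."""
    for d in range(2, B + 1):
        while n % d == 0:
            n //= d
    return n == 1   # anything left over is a prime factor > B

def smooth_yield(lo, hi, B):
    """Fraction of integers in [lo, hi) that are B-smooth."""
    return sum(is_b_smooth(n, B) for n in range(lo, hi)) / (hi - lo)

# Same bound, larger numbers -> noticeably smaller yield.
print(smooth_yield(10**3, 10**3 + 2000, 100))
print(smooth_yield(10**6, 10**6 + 2000, 100))
```

The drop-off is the Dickman-rho behavior; the open question in the comment above is whether, at NFS-relevant sizes, the yield degrades faster than the academically known polynomial-selection methods can compensate for.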
I don't recall the source of the Executive comment. It was kind of buried in a news piece with a broad focus that I read. I'll look for it. Unfortunately, I can't recall the exact language to do a good search and find it. Sorry.
The Executive and Legislature couldn't keep something secret to save their lives. And, JSOC leaks like a fucking sieve. If I can't find that particular leak on the web, I'm sure there will be another one soon with the same info. Every guy likes to talk to pretty news reporters and seem important.
I know nothing about this stuff, so apologies for my naiveté, but what technical barriers prevent us from changing from 2048 bit encryption to something of a much much greater magnitude? 2,048,000 bit (or whatever).
There's no evidence that any encryption is broken (other than people misusing it (edit: or broken protocols like PPTP)). Anyone could do this kind of thing given enough motivation and money. Determining a VPN's users? Just monitor all inbound connections to the VPN service. Now you have the IPs of the users. The IP alone might be enough to identify the user, or a search on that IP might show them logging into other services that reveal their identity. Pretty simple.
"Show me all VPN startups in country X, and give me data so I can decrypt and discover users."
Holy crap. Is all encryption broken?