
The new RK3588 boards look promising. The ITX3588J comes with 4 SATA ports and a PCIe 3.0 x4 slot you could use to add more, but it isn't shipping yet.


Don't need to be a superpower: https://spire.com/aviation/.


Although GPON is shared, it is not half duplex! I'm not exactly sure why the asymmetrical version won out, perhaps due to cheaper optics back in the day?

Latest iterations (NG-PON2 and XGS-PON) are 10G symmetrical, and XGS-PON is now cheap enough that GPON doesn't make much sense. AT&T Fiber in the US has been deploying it for the last few years.


> Although GPON is shared, it is not half duplex!

Seems like you are right! Although most deployments are asymmetric (2488 / 1244 Mbps). I'd guess that even for ones with symmetric bitrates, the effective upstream transfer rate would be lower than downstream due to synchronization and larger inter-frame gaps.


This was posted recently: https://tonsky.me/blog/font-size/


Yeah, ML libraries are a huge offender. The Torch 1.8.1 manylinux wheel is 1.6G unpacked, with 1.1G going to CUDA support.



If you zoom in on the highways, you can see RGB ghost images of moving vehicles. Is that an artefact of using 3 distinct filtered inputs or something else?

[ps. The frequency shift is definitely reflecting vehicle direction - right side of highway is BGR, the other side is RGB. Delays in the sequence of 3 filtered captures?]


Yes, the different bands are not collected at the same time across the whole scene, so you get ghosting with moving objects. The SkySat sensor [1] is split in two halves: panchromatic and RGBN (where N is near-infrared). It takes lots of captures as the satellite travels that are later aligned and combined into one version.

[1] https://directory.eoportal.org/documents/163813/5615117/SkyS...


I wish Bazel was more explicit about things leaking in from outside the workspace, but you can always vendor the toolchain to be more hermetic. For example see https://github.com/grailbio/bazel-toolchain which sets up LLVM and Clang for you.

https://github.com/bazelbuild/bazel/issues/4558


Ok, but that decision would have to be made by the project maintainer, in this case Google, not the person using Bazel to compile protobuf. (And not particular to Bazel -- a developer can make any build system effectively hermetic by vendoring everything.)

In my view the challenge here is that a dependency changes (e.g. /usr/include/stdio.h is upgraded by the system package manager, or two users sharing a cache have different versions of a system library) and Bazel doesn't realize that it needs to rebuild. It would be a pretty heavy hammer if the way to fix that requires every OSS project to include the whole user-space OS (C++ compiler, system headers, libraries) in the repo or via submodule and then be careful that no include path or any part of the build system accidentally references any header or library outside the repository.

And maybe this issue just doesn't need to be fixed (it's not like automake produces build rules that explicitly depend on system headers either!) -- my quibble was with the notion that Bazel, unlike CMake or whatever, provides fully hermetic builds, or tracks dependencies carefully enough to provide an organization-wide build cache across diversely configured/upgraded systems.


> Ok, but that decision would have to be made by the project maintainer, in this case Google, not the person using Bazel to compile protobuf.

No, that's up to the user. If you include protobuf in your Bazel workspace and have a toolchain configured, protobuf will be built with that toolchain (this is also how you would cross-compile). Bazel workspaces are still a little rough around the edges and interactions between them can be very confusing, as Google doesn't need them internally (everything is vendored as part of the same tree).

Under the hood there's a default auto-configured toolchain that finds whatever is installed locally in the system. Since it has no way of knowing what files an arbitrary "cc" might depend on, you lose hermeticity by using it.

Of course, the whole model falls apart if you want to depend on things outside of Bazel's control. However in practice I've found that writing Bazel build files for third-party dependencies isn't as hard as it seems, and provides tons of benefits as parent mentioned.

For example, last week I was tracking down memory corruption in an external dependency. With a simple `bazel run --config asan -c dbg //foo`, I had foo and all of its transitive dependencies re-compiled and running under AddressSanitizer within minutes. How long would that take with other build systems? Probably a solid week.
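For context, the `--config asan` flag in that command isn't built into Bazel; it comes from a user-defined config in `.bazelrc`. A minimal sketch (the `asan` config name is arbitrary; the compiler/linker flags are standard Clang/GCC AddressSanitizer options):

```
# .bazelrc: define a reusable "asan" config so that
# `bazel run --config asan -c dbg //foo` rebuilds foo and all of its
# transitive dependencies with AddressSanitizer instrumentation.
build:asan --copt=-fsanitize=address
build:asan --copt=-fno-omit-frame-pointer
build:asan --linkopt=-fsanitize=address
```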


From Krebs' tweets:

The NSA's Neuberger said this wasn't the first vulnerability the agency has reported to Microsoft, but it was the first one for which they accepted credit/attribution when MS asked.

Sources say this disclosure from NSA is planned to be the first of many as part of a new initiative at NSA dubbed "Turn a New Leaf," aimed at making more of the agency's vulnerability research available to major software vendors and ultimately to the public.


More like someone with some common sense decided to capitalize on disclosing issues once other countries have the zero days too. "Oh well, guess we can't use this anymore, Bob; China has been exploiting it over the past week. Call Microsoft, let's at least get some free PR in exchange for having to give this up."


They have probably done that for a while (this is the first public attribution, not the first disclosure); but they are now blowing their trumpet because they need some good PR. Why?

Snowden.


Much more likely the bad reaction to Eternal Blue.


EternalBlue would not have received that much coverage had it not happened after Snowden proved that the American public cannot trust the agency. They had been dragged into the foreground before without repercussions, because reactions were limited to the IT world. Snowden made it a general-public issue, and now they are forced to shape up.


You write that like it's a bad thing.


You can do the right thing for the wrong reasons.


An alternative angle that could make sense is that it shows that they're not purely intent on hoarding exploits (particularly dangerous ones) and are willing to report them to software vendors in order to reduce everyone's risk profile.

That'd be more of a communal-good, de-escalation approach. There's certainly something to be said for the fact that it displays the talent and expertise available too though (i.e. helping for recruitment).


The tweet* from the call with reporters - a cynical person might think instead that the NSA figured that, given the similarity to the LE and FF flaws, it was not much longer before a hostile actor would find this crypt32.dll flaw, so it was time to notify MS.

* https://twitter.com/briankrebs/status/1217125030452256768


Didn't the FBI or NSA push for flawed elliptic curve crypto in the past?

Could be they knew about it for a while and had milked it hard until they caught someone else using it. Or, like the parent said, previously discovered flaws meant that someone might catch this one, too.


It was Dual_EC_DRBG, a PRNG.


There is no evidence that the US pushed flawed curves.


>There is no evidence that the US pushed flawed curves.

"Reuters reported in December that the NSA had paid RSA $10 million to make a now-discredited cryptography system the default in software used by a wide range of Internet and computer security programs. The system, called Dual Elliptic Curve, was a random number generator, but it had a deliberate flaw - or “back door” - that allowed the NSA to crack the encryption."

https://www.reuters.com/article/us-usa-security-nsa-rsa/excl...


"Dual Elliptic Curve" is an RNG, a PKRNG, that works by using a public key to encrypt its state, which is then directly revealed (as public key ciphertext) to callers (for instance: in the TLS random blob). The problem with PKRNGs has nothing to do with elliptic curves; you could design one with RSA as well. The problem is that for a given public key, there's also a private key, and if you have that private key you can "decrypt" the random value to reveal the RNG's state.
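To make the PKRNG idea concrete, here's a deliberately toy sketch in Python — textbook RSA with tiny parameters, nothing like real Dual_EC — showing the structure the comment describes: the generator's output is its own internal state encrypted under a public key, so whoever holds the matching private key can decrypt any output and recover the state.

```python
# Toy PKRNG backdoor sketch (illustration only; NOT Dual_EC, NOT secure).
# The "random" output handed to callers is the RNG state encrypted with a
# public key. Whoever holds the private exponent d can decrypt an output
# to recover the internal state and predict everything that follows.

p, q = 61, 53
n, e = p * q, 17          # public parameters baked into the RNG
d = 2753                  # the backdoor: private exponent, e*d ≡ 1 (mod φ(n))

def next_output(state: int) -> tuple[int, int]:
    out = pow(state, e, n)               # revealed to callers as "randomness"
    new_state = (state * 7919 + 1) % n   # internal state update (kept secret)
    return out, new_state

def backdoor_recover(out: int) -> int:
    return pow(out, d, n)                # private-key holder decrypts the state

state = 1234
out, _ = next_output(state)
assert backdoor_recover(out) == state    # attacker recovers the internal state
```

Since the state-update function is public in such designs, recovering one state lets the key holder predict all subsequent outputs — which is exactly why it's a backdoor rather than a weakness anyone can exploit.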

That's not a flawed curve that NSA pushed; it's a much more straightforward cryptographic backdoor.


"random number generator"


>a new initiative at NSA dubbed "Turn a New Leaf,"

More like "do the actual job they are paid to do"


They are paid to collect intelligence for the benefit of the American people, not American companies. Luckily Citizens United hasn't stretched that far.


Their mission also explicitly includes information assurance:

Mission Statement: "The National Security Agency/Central Security Service (NSA/CSS) leads the U.S. Government in cryptology that encompasses both signals intelligence (SIGINT) and information assurance (now referred to as cybersecurity) products and services, and enables computer network operations (CNO) in order to gain a decision advantage for the Nation and our allies under all circumstances."


They've got to balance both roles.

IIRC, in earlier times the government didn't use as much COTS stuff, and civilian computer systems weren't so critical, so the roles were easier to separate. The NSA developed whole series of secret encryption algorithms for the exclusive use of the government/military, and civilian algorithms weren't approved to secure classified communications.

https://en.wikipedia.org/wiki/NSA_cryptography


I always wondered why Barr, Comey, and basically every AG I've paid attention to consistently want to break encryption for the populace.

I guess it makes sense that proponents of those changes would be OK with breaking it for the proles if they thought their own secrets were protected.


You don't see how a lack of critical vulnerabilities in software infrastructure is of benefit to citizens?


No, I don't see how this is part of foreign intelligence/surveillance/espionage work. It is good that these vulnerabilities are fixed, of course. But shouldn't that be at least a separate partially independent branch of the NSA? Otherwise you get a large conflict of interest.


Their job is to collect signals intelligence and execute cyber warfare operations. Not whatever you think it is.


Their job is more than that.

"The National Security Agency/Central Security Service (NSA/CSS) leads the U.S. Government in cryptology that encompasses both signals intelligence (SIGINT) and information assurance (now referred to as cybersecurity) products and services, and enables computer network operations (CNO) in order to gain a decision advantage for the Nation and our allies under all circumstances."

[1] https://www.nsa.gov/about/mission-values/


So...SIGINT and CNO. Exactly as I stated.


Security assurance isn’t necessarily cyber warfare. To have the high ground is not the same as using it offensively, hence the expectation of defensive posture as part of the NSA’s mission (although admittedly some offensive activities are to be expected, depending on the situation, such as Stuxnet and Iran).


Not sure if you’re just being snarky, but the NSA’s stated mission includes helping with cyber security: https://www.nsa.gov/about/mission-values/


It also involves breaking enemy cyber security (signals intelligence).

It's actually a rather fascinating incongruity, since we live in a world where "the enemy" is more likely than not to be using the same software systems that the NSA themselves are, and that therefore any exploitable flaws they find in enemy systems are pretty likely to be just as exploitable in their own. (And that similarly, disclosing the flaw in order to fix the issue in their own systems is very likely to result in "the enemy" fixing the flaw as well.)

A couple years ago the White House released a document explaining the process they use for deciding what vulnerabilities they keep secret: https://www.cnet.com/news/white-house-trump-administration-h... noting that "In the vast majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest". Though from what we've seen in past leaks, it's pretty obvious they don't reach that conclusion for all vulnerabilities they find.


And what do you think the end state of all that cybersecurity research is?


NSA has long had an explicit offensive and defensive mandate. They even recently created a cyber defense directorate:

https://www.washingtonpost.com/national-security/nsa-launche...


NSA has both attack and defense mandates and organizations. Currently, the attack org has priority, but it's not like the defense org does nothing. So if the attack org doesn't want a vuln, they can let the defense org reveal it for PR points.


I like the NSA being more active, but the concept of trusting the NSA on crypto is just never gonna happen. Their core mandate is being able to break it, so the whole concept is a non-starter.


This kind of logic is attractive on message boards but makes little sense in the real world.

What NSA needs are NOBUS ("nobody but us") backdoors. Dual_EC is a NOBUS backdoor because it relies on public key encryption, using a key that presumably only NSA possesses. Any of NSA's adversaries, in Russia or Israel or China or France, would have to fundamentally break ECDLP crypto to exploit the Dual_EC backdoor themselves.

Weak curves are not NOBUS backdoors. The "secret" is a scientific discovery, and every industrialized country has the resources needed to fund new cryptographic discoveries (and, of course, the more widely used a piece of weak cryptography is, the more likely it is that people will discover its weaknesses). This is why Menezes and Koblitz ruled out secret weaknesses in the NIST P-curves, despite the fact that their generation relies on a random number that we have to trust NSA about being truly random: if there was a vulnerability in specific curves NSA could roll the dice to generate, it would be prevalent enough to have been discovered by now.

Clearly, no implementation flaw in Windows could qualify as a NOBUS backdoor; many thousands of people can read the underlying code in Ghidra or IDA and find the bug, once they're motivated to look for it.


I mean, the 0-days in the Shadow Brokers dumps wouldn't count as "NOBUS" backdoors either, but the NSA was sitting on those like a dragon hoarding gold.


Those aren't vulnerabilities NSA created, unlike Dual_EC, which is.


Neither is this crypt32 vulnerability, which is part of the analogy the parent comment is making.


NSA disclosed this CryptoAPI vulnerability. What's the lesson to draw from that?


My point is that the structural "NOBUS" framework the parent was trying to construct has glaring, recent counterexamples, and can't really be used to holistically describe their behavior over the past couple decades.

Of course I applaud responsible disclosure, and if they continue down that direction they have the possibility of rebuilding some of the trust they've broken in modern times.


You've lost me. What are the glaring counterexamples to NOBUS? The NOBUS framework says that NSA introduces vulnerabilities and backdoors only when it has some assurance that only NSA will be able to exploit them. It doesn't follow that NSA would immediately disclose any vulnerabilities they discover.


...the parent is literally talking about it in the context of today's crypt vulnerability and using that as example of their cohesive NOBUS framework.

> Clearly, no implementation flaw in Windows could qualify as a NOBUS backdoor; many thousands of people can read the underlying code in Ghidra or IDA and find the bug, once they're motivated to look for it.

The counterexamples are the hoards of critical 0-days they've been sitting on, some of which have led to a body count of Five Eyes citizens.

Like I said, disclosing is a step in the right direction, but they don't get a cookie for the first major disclosure in decades.


I don't think anyone should give NSA a cookie. I think it's useful to be able to reason through where NSA is (relatively) trustworthy and where they aren't.


Right, but in the absence of everyone using their NOBUS-backdoored software presumably the next best thing would be to hoard zero days and hope they can work as pseudo-NOBUSes.


That's certainly true; NSA is chartered to exploit vulnerabilities and certainly hoards them. But that doesn't address the question of whether you should trust NSA "on crypto". Here, they're the ones disclosing the crypto flaw; there's no need to "trust" them, because they're clearly right (Saleem Rashid worked out a POC for this on Slack in something like 45 minutes today).

Should you trust them about Dual_EC? Obviously not: the sketchiness of Dual_EC has been clear since its publication (the only reason people doubted it was a backdoor was that it was too obviously a backdoor; I gave them way too much credit here).

Should you trust them about the NIST P-curves? That depends on who you ask, but the NOBUS analysis is very helpful here: you have to come up with a hypothetical attack that NSA can exploit but that nobody else can discover, otherwise NSA is making China and Russia's job easier for them. Whatever else you think about NSA, the idea that they're sanguine about China is an extraordinary claim.


> Sources say this disclosure from NSA is planned to be the first of many as part of a new initiative at NSA dubbed "Turn a New Leaf," aimed at making more of the agency's vulnerability research available to major software vendors and ultimately to the public.

Sounds like "we find so many critical bugs... we don't need all of them to achieve our goals, so let's blow some of them for PR"


I think it's more like, "We find so many critical bugs, let's blow some of them for PR once we discover that adversaries are using them too."


Bull... A more likely scenario is that they've been sitting on this for years and finally saw another actor using it in the wild.


So... exactly what I said?


That's not exactly true, for example see https://github.com/grailbio/bazel-toolchain


I am glad there is work going towards bazel-ifying the toolchain! And maybe now that 1.0 is out, we will see more work like this.

However, I think my point still stands -- this work looks pretty experimental. And most tutorials, including Bazel's own [0], recommend using the default configuration, which uses the system compiler. Most of the examples I have seen also rely on the system compiler.

[0] https://docs.bazel.build/versions/master/tutorial/cpp.html


For the record, in the beginning Bazel did not support using an auto-configured system compiler at all. The only things it supported were using a controlled build environment with all paths hardcoded (including compiler paths), or using a checked-in toolchain. At a later stage, a dumb autoconf-y "detect the system toolchain and try to use it" mechanism was added, and it became the default.

This is because configuring a checked-in toolchain in Bazel takes tremendous effort and very specific knowledge about something 99.9% of developers do not know and DO NOT WANT to know or spend their time on.


BGRA is a "weird" convention in the graphics world that goes back to at least OS/2 bitmaps, when there was no wrong answer, and has stuck since then. Since early GPUs targeted Windows (where BGRA is the native format for GDI/D2D), it makes sense that it made it this far :)
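As a quick illustration of what the byte-order difference actually means, here's a sketch converting one 32-bit BGRA pixel to RGBA by swapping the red and blue channels:

```python
def bgra_to_rgba(px: bytes) -> bytes:
    """Reorder one 4-byte BGRA pixel into RGBA (swap blue and red)."""
    b, g, r, a = px
    return bytes([r, g, b, a])

# A pixel stored as B=0x10, G=0x20, R=0x30, A=0xFF in BGRA order
assert bgra_to_rgba(bytes([0x10, 0x20, 0x30, 0xFF])) == bytes([0x30, 0x20, 0x10, 0xFF])
```

On a little-endian machine, that BGRA byte order is exactly a packed 32-bit `0xAARRGGBB` integer, which is why it was a natural fit for Windows-era hardware.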

