> Python has been the defacto standard in scientific/data/academic programming for decades
In my experience (genomics) this is simply not true. Python has caught on over the last 5 or so years, but prior to that Perl was the de facto language for genetic analysis. It's still quite heavily used. Perl is not a paragon of simplicity and clarity.
I feel like trying out various languages/frameworks would affect compsci labs a lot less than other fields, since the students probably have some foundational knowledge of languages and have already learned a few before getting there. Might be easier for them to pick up new ones.
(a) While I'm upfront that my observations are based on the fields I have experience in, there is no such justification in your comment for the claim that "It is true broadly for computation in academia."
(b) Interpreting "niche" as "small" (especially given your "true broadly" claim): Computational genetics is huge in terms of funding dollars and number of researchers.
I don't understand this comment. What are you suggesting Google will do? Surreptitiously insert code into your binary during a patch?
If you are running in Google Cloud, it's their machines and they have the power to do pretty much whatever they want anyway. How would this feature affect anything?
Once you're dependent on their packaging process, why would they ever have to introduce whatever tracking or restriction they want "surreptitiously"? It'll be a part of their internal monitoring and diagnostics package.
This paper from 2010 has some info on why this may be the case [0]: "These results indicate that the protection of bacteria on the leaf surface by biofilm formation and stomatal colonization can reduce the antimicrobial efficacy of irradiation on leafy green vegetables."
I wonder how the author feels about leaf vacuums like this, which is what I use: http://a.co/d/dGh7M4R
It's electric, so no fumes, and since it sucks instead of blowing I don't have to worry as much about annoying passers-by with dust. Sadly it still makes noise, of course, though it's far quieter than the backpack blowers the author is talking about.
I have something similar. It grinds whatever it sucks and creates a lot of dust unfortunately. Hopefully your model is better. I think it comes down to bag quality.
The article mentions a blower that runs on batteries and is super expensive. Yours has a cord and is cheap. It seems yours is better as long as extension cords can be dealt with.
Maybe somebody more familiar with TLS can set me straight here, but I find it surprising that SNI still exists and there isn't much effort to replace it. To me it seems like an odd hole to punch into the encryption layer. Back when I was doing security research I wrote a TLS MITM and I distinctly remember thinking "wow thanks SNI this makes it so much easier".
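To make the "easier" part concrete, here's a toy sketch (nothing clever, just to show how little work it takes): the hostname can be read straight out of the plaintext ClientHello, since SNI is sent before any encryption starts.

    # Toy sketch (assumes a full ClientHello in `hello` as raw bytes): extract the
    # SNI hostname, which travels unencrypted before the TLS handshake finishes.
    import struct

    def sni_from_client_hello(hello):
        if hello[0] != 0x16 or hello[5] != 0x01:   # TLS handshake record, ClientHello
            return None
        pos = 9 + 2 + 32                           # record+handshake headers, version, random
        pos += 1 + hello[pos]                      # session id
        (n,) = struct.unpack_from("!H", hello, pos)
        pos += 2 + n                               # cipher suites
        pos += 1 + hello[pos]                      # compression methods
        (ext_total,) = struct.unpack_from("!H", hello, pos)
        pos += 2
        end = pos + ext_total
        while pos + 4 <= end:
            ext_type, ext_len = struct.unpack_from("!HH", hello, pos)
            pos += 4
            if ext_type == 0:                      # server_name extension
                (name_len,) = struct.unpack_from("!H", hello, pos + 3)
                return hello[pos + 5:pos + 5 + name_len].decode()
            pos += ext_len
        return None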
Thanks for the links. Sounds like the main reason it hasn't been addressed is DNS, if the client is just going to make a DNS request before their TLS session, the host is effectively leaked anyway. Sadly DNSSEC wouldn't address this as it only provides integrity and not confidentiality.
A Firefox Nightly can be configured to do D-PRIVE (specifically DNS-over-HTTPS to Cloudflare) and do eSNI, and thus connect to a default configured Cloudflare site without any indication to other parties about which one...
Some other cloud providers or CDNs have made interested noises, if those noises weren't just for the public record they might begin doing the exact same thing in short order, especially if the Firefox tests go well.
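For anyone wanting to try it, the about:config prefs involved are roughly these (a user.js-style sketch from memory; pref names can change between Nightly builds, so double-check):

    // user.js sketch; verify the exact pref names in about:config
    user_pref("network.trr.mode", 3);          // 3 = DNS-over-HTTPS only, 2 = DoH with fallback
    user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
    user_pref("network.security.esni.enabled", true);   // encrypted SNI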
This is a step in the right direction but it's not perfect. The name of at least one domain that the responding server must host is still leaked. This can be a non-issue if the same IP is hosting hundreds of domains (e.g. CloudFlare) or pointless if it's just hosting one site.
I just had an idea that might be able to work around this though:
1. Create a new TLD: .ip. All *.ip domains are valid IP addresses (in some encoding, e.g. 74-125-138-139.ip, or anything else; toy sketch below) and they always resolve to the IP address specified.
2. Automatically issue certificates for each host for each of the IPs that they serve on. (Thank you, Let's Encrypt.)
3. Every new connection made can just use the ip-domain as the eSNI originating host, because you know that every host is serving https://ownip.ip.
This doesn't solve the fact that server IPs are still fairly unique, so a reverse DNS lookup might be enough to find the host, but it doesn't leak any more information than what the IP header already leaks, and it doesn't require leaning even more on increasingly centralized proxies like CloudFlare.
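A toy sketch of one possible encoding for step 1 (IPv4 only, names made up; any reversible scheme would do):

    # Toy encoding sketch for the hypothetical .ip TLD (IPv4 only).
    import ipaddress

    def to_ip_domain(addr):
        ipaddress.IPv4Address(addr)            # validate before encoding
        return addr.replace(".", "-") + ".ip"

    def from_ip_domain(name):
        return name.removesuffix(".ip").replace("-", ".")

    print(to_ip_domain("74.125.138.139"))      # -> 74-125-138-139.ip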
Certificates already support IP addresses. They just need to be public (i.e. not RFC1918 space) and for legacy browser compatibility the IP needs to be in the commonName and subjectAltName.
In terms of browser compatibility the situation is:
The address must appear as a SAN ipAddress to work in modern browsers like Chrome and Firefox
BUT
The address must appear as either a SAN dnsName or as the X500 series Common Name to work with older Microsoft SChannel implementations.
Key root programme rules and the Baseline Requirements mean that:
IP addresses must not appear as a SAN dnsName (they're IP addresses; writing them out as text doesn't make them part of the DNS system) but only as a SAN ipAddress
The X500-series Common Name must be the textual representation of one of the SANs (doesn't matter which one).
As a result the only compliant certificates for IP addresses that also work in older IE / Edge releases do this:
1. Write exactly one IP address as a SAN ipAddress
2. Write the same IP address, but as a text string as the Common Name.
There are a LOT of certificates that do something else. Some of them work but aren't compliant (and so get finger-wagging from people like me); some are compliant but don't work on older Windows systems (which may be OK if you're building a new system for, say, CentOS users, and who cares if it works on Windows?). Only the pair of traits I described above manages to be compliant while also working, and it's limited to a single IP address per certificate.
Hey, at least Windows 10 finally groks SAN ipAddresses; in another decade we might not need a workaround.
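For concreteness, a rough sketch of a CSR with exactly that shape using Python's `cryptography` package (the address is a placeholder; a public CA still has to agree to issue for it):

    # Sketch: one SAN ipAddress plus the same address, as text, in the Common Name.
    import ipaddress
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    addr = ipaddress.ip_address("192.0.2.10")  # placeholder public IP
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (
        x509.CertificateSigningRequestBuilder()
        # 2. the same IP, written out as a text string, as the Common Name
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, str(addr))]))
        # 1. exactly one SAN ipAddress entry
        .add_extension(x509.SubjectAlternativeName([x509.IPAddress(addr)]), critical=False)
        .sign(key, hashes.SHA256())
    )
    print(csr.public_bytes(serialization.Encoding.PEM).decode())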
Is this just a restriction on current CAs? I have a self-signed certificate on my router (more out of curiosity than any practical benefit), and it comes up fine on https://192.168.1.1/
Yes, this restriction applies only to public CAs. The purpose is to prevent someone from getting, for example, a 192.168.1.1 cert and then using it on another network in a MITM attack.
Worth keeping in mind that without SNI, we wouldn't have anywhere near the current level of HTTPS adoption either.
It wasn't that long ago I had to sell clients on a separate IP address just to set up HTTPS. Let's Encrypt using SNI allowed me to secure everyone for free.
This is super cool to play around with and really illustrative.
One thing I noticed about the game is that I was able to "solve" some puzzles by accident when I went to erase a set of edges at once. I guess at some step while removing edges the puzzle was solved, but the game both registered the puzzle as complete AND applied the rest of the erasures afterwards. This led to me moving to the next screen when the network didn't satisfy the requirement, so I didn't get to see the actual solution.
I would recommend changing the game so that it either waits for the user to let go of the mouse button (i.e. all the edits are applied before checking for correctness) or ignores edits for a set period of time after the solution is found.
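A minimal sketch of the first option, with made-up names, just to show the shape of it:

    # Hypothetical sketch: buffer erasures during the drag, apply them all on
    # mouse-up, and only then run the single correctness check.
    class PuzzleScreen:
        def __init__(self, puzzle):
            self.puzzle = puzzle
            self.pending = []                  # edges erased during the current drag
            self.solved = False

        def on_drag_over_edge(self, edge):
            self.pending.append(edge)          # no mutation, no check mid-drag

        def on_mouse_up(self):
            for edge in self.pending:
                self.puzzle.remove_edge(edge)
            self.pending.clear()
            if self.puzzle.is_solved():        # check once, after every edit is applied
                self.solved = True             # trigger the next-screen transition here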
"The planetary magnetosphere comes into play with the charged dust particles. The rotation of Saturn’s magnetic field generates an electric field directed radially outward from the planet. This field will eject small, electrically charged particles, into interplanetary space. Once that happens, the interplanetary magnetic field “frozen” into the solar wind can accelerate the dust particles to high speeds, creating the bursts of particles recorded by cosmic dust detectors. Evidence for this interpretation came from the changes in direction of the stream particles when the interplanetary magnetic field changed direction, in close timing with the Sun’s rotation."
So my guess is that a planet with no atmosphere, and a magnetosphere weak enough that it accelerates the charged dust just enough to keep it moving without ejecting it from the planet, could have something like this happening. I'm not a planetary scientist and I don't know whether a storm would be plausible, so take it with a grain of salt and hope someone else jumps in on the question.
But if we're talking about "common" dust storms: without an atmosphere there would be no winds, so no common storms. And since air density on Mars is much lower than on Earth (around 1% of ours), high-speed storms on Mars would not be as severe as on Earth; wind speeds that would overturn a car or uproot trees on Earth wouldn't even push a human off his feet on Mars.
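Rough back-of-the-envelope for that last point (my numbers, not from the article): the push a wind gives you scales with dynamic pressure,

    q = \tfrac{1}{2}\,\rho v^{2}, \qquad
    v_{\text{Mars}} \approx v_{\text{Earth}}\sqrt{\rho_{\text{Earth}}/\rho_{\text{Mars}}} \approx 10\,v_{\text{Earth}}

so with Martian air at roughly 1% of Earth's density, a ~100 km/h gust on Mars pushes about as hard as a ~10 km/h breeze here.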
This reminds me of a great book I read, "Into Thin Air"[0]. If you're interested in getting a first-hand account of climbing Everest, and what effect novice climbers have had on it, definitely give it a read.