Hacker News | gluejar's comments

Leanpub. Please don't think of it as "crowdfunding"; think of it as "audience building".


It looks like a great platform, even if their user base seems not that big. Is that right?


Should we worry that a Deep-Packet-Inspection vendor has the same owner as an SSL certificate vendor?


We have at least one other example of such a merger: Blue Coat + Symantec, with Blue Coat's CEO becoming Symantec's CEO. And now Symantec's certificate business is gone because it couldn't be trusted.

The article also seems to imply that Comodo is selling its CA business "because of what happened between Google and Symantec." They seem to try to spin it as an opportunity for Francisco Partners, but I wonder why Comodo was suddenly interested in selling its CA business - is it because their infrastructure was just as shaky and insecure as Symantec's? Certainly something to think about.


All the more reason to build on top of Google, Dropbox, AWS... treat them as vanilla endpoints for encrypted traffic

Do nothing in AWS that isn’t hidden, encrypt local and store on Dropbox...

Keybase and similar should be used as new examples of the kind of apps and features to offer

End-to-end encrypted messaging, git, and file sharing

I'm done with Google and the like for personal use, and soon at work, where we're migrating from AWS and Google G Suite

It’s market evolution. Out with Kodak and in with digital cameras.

Only now it’s social media and services 1.0

Since it’s just software it can happen much faster and more frequently


I worry about it too. My company uses Fortinet with DPI enabled, which strips the target server's SSL cert and replaces it with its own self-signed cert. So, with Comodo's CA, maybe SonicWall DPI could become transparent to end users. Yes, they give assurances in the press release, but who knows.
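One way a client can notice this kind of interception is certificate pinning: compare the SHA-256 fingerprint of the certificate actually presented with a previously recorded one. A minimal sketch (the byte strings below are stand-ins for real DER-encoded certificates, not actual cert data):

```python
import hashlib

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a (DER-encoded) certificate."""
    return hashlib.sha256(der_cert).hexdigest()

# Fingerprint recorded earlier, e.g. from a trusted network
pinned = fingerprint(b"real cert for mail.example.com")

# The DPI middlebox swaps in its own certificate for the same hostname
presented = b"DPI box's cert for mail.example.com"

if fingerprint(presented) != pinned:
    verdict = "interception suspected"
else:
    verdict = "fingerprint matches pin"

assert verdict == "interception suspected"
```

Note that pinning catches the swap even when the middlebox's cert chains to a publicly trusted CA, which is exactly the scenario the comment worries about.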


An important thing about the way Public Key cryptography works is that if you try shenanigans like this you're obliged to supply the client with the smoking gun as part of your scheme. The signed certificates prove beyond doubt what happened, and they are automatically delivered to the client as a necessary part of the initial SSL/TLS connection.
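The "smoking gun" property comes from the asymmetry of signatures: only the private-key holder can produce a signature that verifies under their public key, so a misissued cert is self-incriminating. A toy RSA sketch of that idea (textbook-sized numbers, no padding - far too weak for real use, purely illustrative):

```python
import hashlib

# Classic textbook RSA parameters -- toy-sized, never use in practice
p, q = 61, 53
n = p * q                # 3233, the public modulus
e = 17                   # public exponent
d = 2753                 # private exponent: e*d = 1 mod lcm(p-1, q-1)

def sign(message: bytes) -> int:
    """Only the private-key holder can compute this."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int) -> bool:
    """Anyone holding the public key (n, e) can check it."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

cert = b"CN=victim.example, issued by RogueCA"
sig = sign(cert)
assert verify(cert, sig)                   # the smoking gun verifies
assert not verify(cert, (sig + 1) % n)     # any altered signature fails
```

Because verification needs only the public key, anyone who receives the signed certificate during the TLS handshake holds durable proof of what the CA signed.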

Imagine if you had a fool-proof way to murder people, but it requires you leave their corpse in a public square with a copy of your photo driving license and a signed confession. Now, perhaps for some reason you are politically untouchable so you will never see justice. Still though, by this method absolutely everybody will know you did it, so it doesn't seem like a good idea anyway.


Next year Google Chrome will require all certificates to be published in Certificate Transparency logs, which will be a significantly better source of proof: people who are in a better position to recognize an attack as an attack will also have a straightforward way to see the certificates.


Is each cert chained to the previous somehow? Like a field from one hashed into the next, so that you can detect gaps in the issuance? That way they can't even issue a secret cert for a one time national security op without breaking the chain.


Kind of. The way CT works is that everything submitted to a CT log is chained, so you can validate that the CT log's record of issuance is complete—that is, everything it has received has been digitally signed into its record, and you can verify that nothing has been removed from the record. So if you're presented a cert with a CT timestamp, you can validate that a CT log server saw and accepted that certificate (making it part of the public record). The signed timestamp (or timestamps--you're supposed to present stamps from a minimum of three different CT logs to make Chrome happy) has to be embedded in the cert or presented as part of the TLS handshake. (See https://www.certificate-transparency.org/how-ct-works for more info.)
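The tamper-evidence idea can be sketched with a simple hash chain (real CT logs use a Merkle tree, which additionally allows efficient inclusion and consistency proofs, but the append-only intuition is the same):

```python
import hashlib

def chain_log(entries):
    """Toy append-only log: each head hashes over the previous head,
    so altering or removing any entry changes every later head."""
    head = b"\x00" * 32          # genesis value
    heads = []
    for entry in entries:
        head = hashlib.sha256(head + entry).digest()
        heads.append(head)
    return heads

def verify_chain(entries, heads):
    """Recompute the chain and confirm it matches the published heads."""
    return chain_log(entries) == heads

certs = [b"cert-for-example.com", b"cert-for-example.org"]
heads = chain_log(certs)

assert verify_chain(certs, heads)   # intact log verifies
assert not verify_chain(            # a swapped-out entry is detected
    [b"cert-for-evil.example", b"cert-for-example.org"], heads)
```

In CT proper, the log periodically publishes a signed tree head, and auditors check that successive tree heads are consistent extensions of each other.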

All of that doesn't prevent someone from issuing a certificate from a public CA and /not/ submitting it to a CT server: there's no easy way to detect that. If someone did that, though, they would have to present the certificate to your browser without a CT stamp attached. Both Firefox and Chrome are working on implementing mandatory CT validation, at which point your browser will yell and scream if it is presented a cert from a public CA that doesn't have an associated CT stamp. (Right now, if you want to check CT timestamps on certs, you need a plugin (there's one for Firefox, e.g., at https://www.elevenpaths.com/labstools/certificate-transparen... although I can't vouch as to its completeness).) At that point, sneakily grabbing certs from a public CA won't do you any good because it will be obvious they're not legitimately issued.


> At that point, sneakily grabbing certs from a public CA won't do you any good because it will be obvious they're not legitimately issued.

An interesting problem in this design is how to persuade users that they've encountered something genuinely important that it would be helpful for them to tell someone else about. (Maybe browsers can store such questionable certificates offline and gossip about them to other TLS servers later.) It's not very common for people to be persuaded that errors on their computer matter and that other people will care about them... but this one does! :-)

I know HPKP has a report method which one could imagine generalizing somehow to CT inclusion failures, but, in many attack scenarios involving use of misissued certs, the victim's network connection is controlled by the attacker. In that case, the attacker will probably not want to allow the victim to report the attack to another server in real time.


Interesting idea! I know the CT spec recommends that clients archive CT data received for later review, and it defines the concept of a CT 'auditor' that constantly reviews the logs from CT log servers looking for malfeasance. It would be interesting and very useful to amalgamate anonymized data from browsers into one of those auditors and crunch through it to compare to CT records.


Yes.


This would be a violation of root policies and would certainly cause the CA to be distrusted nowadays. It would also be detected with a high likelihood due to HPKP (and CT in the future). The economics of buying a CA for this purpose don't make sense.


You might have missed yesterday's news from Chrome announcing that the HPKP feature is being considered for removal.


The deprecation timeline is being synced with the rollout of CT and CT enforcement headers (Expect-CT). This provides roughly the same detection (if not prevention) capabilities.
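For reference, an Expect-CT response header looks roughly like this (the report-uri host is a placeholder):

```
Expect-CT: max-age=86400, enforce, report-uri="https://ct-report.example.com/report"
```

With `enforce` set, a compliant browser refuses connections whose certificates lack valid CT information; without it, violations are only reported.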


Would it be a violation if a company or person were only using it to crack open traffic into or out of their network? On my network, shouldn't I be able to do just about anything I want?


It would be a violation to do it with Comodo's publicly-trusted PKI, yes. You can do whatever you want with a private root that's manually deployed to the clients within your network.


Stop by the repo and say hello. We can always use more help with maintenance and improvements; a bit of history doesn't hurt either!

I've been surprised how often new contributors mention that it's their first time contributing to an open source project. We need help to make sure it's not their last.


Is there any reason to do this without cooperating with https://github.com/vhf/free-programming-books which already has over 4K commits and 16K forks?


I agree they should cooperate, but I'm curious about the 16k forks.

What's the point in forking a project like this? Adding changes and submitting pull requests would be the obvious thing, but that doesn't seem to be the case, because there are only 4K commits and fewer than 30 open pull requests. So does anybody know what's up with all the forks?


I'm guessing the forks are a form of bookmarking. At least for my part, I fork every single repository that might even be slightly interesting for me in the future. Hell, I even fork repos that I explicitly hate, just so I can keep tabs on them or learn more about them.


I guess bookmarking would explain it, but it seems like the wrong tool for the job, IMO. Forking means you have to do extra work to get the latest version of the code, even if you're not making any local changes. Using the "watch" or "star" feature or an actual bookmark means you'll always go straight to the most up-to-date version.

> Hell, I even fork repos that I explicitly hate, just so I can keep tabs on them or learn more about them.

I don't know what it means to "hate" a repo, but whatever it means, forking the repo actually means you won't be keeping tabs on them because your personal fork will have no activity until you pull from the original. And if you need to go to the original to get updates, why not just go straight there?


You don't get it, sorry. Not in a mean way. I just like to learn about things I do not like, because sometimes I change my mind. Forking a repo means cloning it to my local network because of some automation I have, and I learn applications by playing with code.


There is a "Star this repo" option for just this case.

But forking has its benefits if you want to make sure that the original repository does not disappear.


I'm not sure if this is what you're talking about, but unfortunately, I've had a few repos disappear because the original was served with a DMCA. Github is fast when presented with a DMCA (but also equally fast when presented with a counter-DMCA notice, thankfully). You'll need to clone it off of Github if you really want to ensure its survivability.


You can 'watch' repos. You should do that instead of forking them.


You hate some repos?


Do you like systemd?


Touché


I've been struggling to imagine how hypothes.is annotations can maintain quality and relevance. Will there be a reputation or social layer?


Now in EPUB and kindle, too. https://unglue.it/work/153041/


The title should be "Engineering Security". Subtle difference, but a meaningful one.


Ok, we reordered those words and added a date.


I think you really have to understand that at its heart, Let's Encrypt is not about free certs as much as it is about automatic certs. If you just want a cert, definitely use an established provider. But a year from now, LE will be making this a "set and forget" thing, which is how it should be. LE is NOT a painless way to get certs for legacy infrastructure. I found this out by using it for an Elastic Beanstalk hosted site. I just wrote about it at https://go-to-hellman.blogspot.com/2015/11/using-lets-encryp...
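The "set and forget" part boils down to a scheduled job that renews whenever a cert gets close to expiry. A sketch of that decision logic (the 30-day window is a common convention, not anything mandated; an actual renewal step via an ACME client would replace the asserts):

```python
from datetime import datetime, timedelta, timezone

RENEW_WINDOW = timedelta(days=30)

def needs_renewal(not_after: datetime, now: datetime) -> bool:
    """True once the cert is inside the renewal window before expiry."""
    return not_after - now <= RENEW_WINDOW

now = datetime(2015, 11, 15, tzinfo=timezone.utc)
assert not needs_renewal(now + timedelta(days=60), now)  # plenty of time left
assert needs_renewal(now + timedelta(days=10), now)      # time to renew
```

Run from cron (or a systemd timer), a check like this plus an automated ACME issuance is the whole maintenance story - which is why the billing-free model matters.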


Head of Let's Encrypt here.

You nailed it. It's important that our certs be free because we can't automate a billing interaction. If we had to charge then sysadmins couldn't just type a command and be on their way. Automated renewal could fail because billing info was out of date. This stuff has to just work, reliably, if we're going to expect the entire Web to use TLS.


On the plus side, Amazon could choose to automate IAM SSL storage and renewal through Let's Encrypt so it would be fully automatic. Might take a bit until they do that though...

Paging /u/jeffbarr?


Don't feel so bad; it doesn't run on Mac OS X or even RHEL. Use the Docker container - it worked for me.



I used to work in the same research field as Xi. A very nice guy who did some excellent work. It's outrageous what's happened to him.

You might be asking, what's the FBI looking at his emails for?

So here's a random fact. Xi was a Professor of Physics and Materials Science and Engineering at the Pennsylvania State University up to 2009. Guess who was a graduate student in Materials Science and Engineering at the Pennsylvania State University from 2006 to 2008, had Xi on his thesis committee, and attracted a huge amount of attention from the FBI. Yep. Ross Ulbricht. https://etda.libraries.psu.edu/paper/9710/4335 The idiots at the FBI must have been looking for Chinese links to Silk Road! Because isn't that in China?


Statement by Professor Xi on the Dismissal of the Federal Indictment and Legal Defense Fund http://www.xiaoxingxi.org/

