dc352's comments

Technology building blocks are rarely the problem. The problem is scaling them up.


Akamai certificate renewal with a couple of Lambda functions running in AWS. We built this as a tool to help ourselves and wonder if there are more people who would find it useful.
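For context, the expiry-check half of such a setup might look roughly like this. This is a minimal sketch only; the scheduling, the second (renewal) Lambda, and the actual Akamai API call are all assumptions left out of it:

    # Hypothetical sketch of the expiry-check Lambda; the real tool's
    # internals and the Akamai renewal call are not shown.
    import datetime
    import socket
    import ssl

    def days_until_expiry(hostname: str, port: int = 443) -> int:
        # Connect over TLS and read the leaf certificate's notAfter date.
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        not_after = datetime.datetime.strptime(
            cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        return (not_after - datetime.datetime.utcnow()).days

    def handler(event, context):
        # A scheduled (e.g. EventBridge) trigger could call this and kick
        # off a second, renewal Lambda once the threshold is crossed.
        if days_until_expiry(event["hostname"]) < 30:
            return {"action": "renew"}
        return {"action": "none"}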


... and we've now fixed a CORS issue that caused some problems pulling the list of test servers :(


The only way to do it (I'm lazy, so I didn't read any of the documents; this is an engineer's gut feeling) ... is to use ECDH, which provides EC params in ServerKeyExchange. CryptoAPI might have used those and just pulled the public key from the cert.


I suspect you're overcomplicating the attack with all the math; we can ignore most of it.

The only channel the attacker has into the MS Crypto API is the TLS protocol, and you can only inject data where the protocol makes it relevant. The only option for that is ECDH, which allows the server to supply EC parameters for the Diffie-Hellman exchange.

My bet is that the MS Crypto API took those parameters as correct without checking them against what's in the certificate. I.e.:

ServerKeyExchange: here's the EC spec, we just need the public key.

Certificate: ah, here's the public key; we have the EC params, let's run the math.

:)
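To make that concrete, here's a toy sketch of the check I'd expect; every type and field name below is invented for illustration and has nothing to do with CryptoAPI's actual structures:

    # Toy model of my guess above; all names are invented, this is not
    # CryptoAPI's real interface.
    from dataclasses import dataclass

    @dataclass
    class ECParams:
        curve_oid: str      # e.g. a named curve such as P-256
        generator: tuple    # base point G = (Gx, Gy)

    @dataclass
    class Certificate:
        ec_params: ECParams  # the curve the cert's key actually lives on
        public_key: tuple    # point Q = (Qx, Qy)

    def accept_offered_params(cert: Certificate, offered: ECParams) -> bool:
        # The check I suspect was missing: the offered EC parameters must
        # match the certificate's own curve. If the stack skips this and
        # "just runs the math", an attacker can pick a generator G' for
        # which they know a private key d' with d' * G' == Q.
        return (offered.curve_oid == cert.ec_params.curve_oid and
                offered.generator == cert.ec_params.generator)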


Our disks in London went down at about 8:45pm UTC (the 10-minute 100% disk-utilization alert triggered at five to), and DO's recovery message went out at about 2am UTC. We switched our service (keychest.net) back on at 3:15am.


That would be pretty cool, but to have it you need a solution that tolerates high network latency, i.e., pretty much a cold back-up. For a while I thought that was a last-century option, but having experimented with it for some time now, it's the option with the lowest impact on system performance. More importantly, it's reasonably resilient.
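To illustrate how simple the cold path can be (purely a sketch; PostgreSQL, the bucket, and the paths are all assumptions, not our actual setup):

    # Hypothetical nightly cold-backup job: dump the DB and ship it off-site.
    import datetime
    import subprocess

    import boto3  # AWS SDK; any object-store client would do

    def cold_backup(db_name: str, bucket: str) -> None:
        stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
        dump_path = f"/tmp/{db_name}-{stamp}.sql.gz"
        # Unlike streaming replication, this only loads the primary while
        # the dump is actually running.
        with open(dump_path, "wb") as out:
            dump = subprocess.Popen(["pg_dump", db_name],
                                    stdout=subprocess.PIPE)
            gz = subprocess.Popen(["gzip"], stdin=dump.stdout, stdout=out)
            dump.stdout.close()
            gz.communicate()
            dump.wait()
        boto3.client("s3").upload_file(dump_path, bucket,
                                       dump_path.lstrip("/"))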


I've now read your comment about four times, and all I have come up with is "huh?"

Literally thousands if not millions of organisations operate multi-DC infrastructure across the planet.

Is it harder than setting up a single box in one DC? Yes. Is it harder than setting up a mini-cluster of boxes in one DC? Yes. Is it rocket science? No.


That wouldn't be at the top of my list. We use "Volumes" for databases and they were inaccessible for about 6 hours. I don't think any DNS is involved in mounting them. But hey, there's always a lot of crap hidden behind the scenes :)


I would be absolutely amazed if DNS was not involved in mounting a block storage volume.


We are still looking into it; I'm in touch with DO and hope to learn more from their support. Unfortunately, their response time is currently around 2 days.

I guess my point here was that our database should be resilient to this kind of infra issue and ideally self-heal when these are transient events.
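By self-heal I mean something as mundane as retrying with backoff when a volume or DB connection briefly disappears; a minimal sketch of the pattern (connect() is a placeholder, not our code):

    # Generic retry-with-backoff for transient infra failures.
    import time

    def connect_with_backoff(connect, attempts: int = 6, base: float = 1.0):
        for i in range(attempts):
            try:
                return connect()  # placeholder: opens the DB connection
            except OSError:
                # transient outage: wait 1s, 2s, 4s, ... before retrying
                time.sleep(base * (2 ** i))
        raise RuntimeError("gave up after %d attempts" % attempts)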


Yes, the work to get the cluster resilient is terrific. Just suggesting moving away from the problematic VPS if a root cause can't be found.


You can test yourself at https://beta.keychest.net

