Well, apparently it's not even an issue for gmail users:
To unblock myself, I switched to a personal @gmail.com address for the account. Gmail's own receiving infrastructure is apparently more lenient with messages, or perhaps routes them differently. The verification email came through.
So it's only an issue for people paying for Google's hosted email—a much smaller set!
Have you been reading the thread? https://news.ycombinator.com/item?id=46952590 There are a lot of reasons why browsers need to care about whether CAs are issuing insecure certificates to XMPP or SMTP servers (or credit card machines).
> […] there are a lot of reasons why browsers need to care about whether CAs are issuing insecure certificates to XMPP or SMTP servers (or credit card machines)
Why does having the clientAuth capability make a certificate "insecure"?
GP gave a very good reason that non-web-PKI usage reduces security; you just refused to accept it. Anybody who has read CA forum threads over the past two years is familiar with how big a policy hole mixed-use certificates are when dealing with revocation timelines and misissuance.
"it's complicated" is not the same as "it's insecure". Google feels like removing this complexity improves security for web-pki. Improving security is not the same as saying something is insecure. Raising security for web-pki is not the same as caliming non-web-pki usage is insecure or is degrading security expectations of web-pki users. It's just google railroading things because they can. You can improve security by also letting Google decide and control everything, they have the capability and manpower. But we don't want that either.
How would dialback-over-TLS be "more vulnerable to MITM", though? I think that claim was what led to the confusion; I don't see how TLS-with-client-EKU is more secure than TLS-with-dialback.
Firstly, nobody is actually calling for authentication using client certificates. We use "normal" server certificates and validate the usual way, the only difference is that such a certificate may be presented on the "client" side of a connection when the connection is between two servers.
The statement that dialback is generally more susceptible to MITM is based on the premise that it is easier to MITM a single victim XMPP server (e.g. hijack its DNS queries, or install an intercepting proxy somewhere on the path between the two servers) than it is to mount the same attack against Let's Encrypt, which has various additional protections such as performing verification from multiple vantage points, always using DNSSEC, etc.
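For anyone unfamiliar with how dialback actually authenticates the peer: the receiving server looks up the claimed originating domain in DNS itself and asks that server to confirm a key, so an attacker who merely holds a certificate still has to intercept that callback. A minimal sketch of the key generation recommended by XEP-0185 (HMAC-SHA256 keyed with a hash of a per-server secret); function names here are mine, not from any particular implementation:

```python
import hashlib
import hmac

def dialback_key(secret: bytes, receiving: str, originating: str,
                 stream_id: str) -> str:
    """Dialback key per XEP-0185: HMAC-SHA256 keyed with SHA-256(secret)
    over 'receiving originating stream_id'."""
    key = hashlib.sha256(secret).hexdigest().encode()
    msg = " ".join((receiving, originating, stream_id)).encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_dialback_key(secret: bytes, receiving: str, originating: str,
                        stream_id: str, presented_key: str) -> bool:
    """The originating server recomputes the key when the receiving server
    calls back (via its own DNS lookup) and asks whether it is valid."""
    expected = dialback_key(secret, receiving, originating, stream_id)
    return hmac.compare_digest(expected, presented_key)
```

Since only a server that can answer the DNS-directed callback for the originating domain can confirm the key, forging an identity requires hijacking DNS or the server-to-server path, which is the MITM trade-off being discussed.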
If an attacker gets a misissued cert not through BGP or DNS hijacks, but by exploiting a domain validation flaw in a CA (e.g. https://bugzilla.mozilla.org/show_bug.cgi?id=2011713) then it's trivial for them to use it as a client certificate, even if you're requiring the serverAuth EKU. On the other hand, dialback over TLS would require the attacker to also MitM the connection between XMPP servers, which is a higher bar.
The good news is that since Prosody requires the serverAuth EKU, the misissued cert would be in-scope of Mozilla's root program, so if it's discovered, Mozilla would require an incident report and potentially distrust the CA. But that's reactive, not proactive.
You're not wrong. PKI has better protections against MITM; dialback has better protections against certificate leaks/misissuance.
I think the ideal approach would be combining both (as mentioned, there have been some experiments with that), except when e.g. DANE can be used ( https://prosody.im/doc/modules/mod_s2s_auth_dane_in ). But if DANE can be used, the whole CA thing is irrelevant anyway :)
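For reference, DANE pins a certificate (or its public key) via a DNSSEC-signed TLSA record, so verification largely reduces to hashing what the peer presented and comparing it against the DNS data. A toy sketch of the matching step from RFC 6698; real deployments also need DNSSEC validation of the TLSA lookup, which is omitted here:

```python
import hashlib

def tlsa_matches(presented_der: bytes, tlsa_data_hex: str,
                 matching_type: int = 1) -> bool:
    """Compare presented certificate (or SPKI) bytes against TLSA record
    data. matching_type per RFC 6698: 1 = SHA-256, 2 = SHA-512."""
    algo = {1: hashlib.sha256, 2: hashlib.sha512}[matching_type]
    return algo(presented_der).hexdigest() == tlsa_data_hex.lower()
```

With usage 3 (DANE-EE) this check alone authenticates the peer, which is why the CA ecosystem becomes irrelevant once DANE is in play.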
> Firstly, nobody is actually calling for authentication using client certificates. We use "normal" server certificates and validate the usual way
I'm not sure I understand this point. You authenticate the data you receive using the client's certificate. How is that "nobody is calling for authentication using client certificates"? Maybe there's some nuance I'm missing here but if you're authenticating the data you're receiving based on the client's certificate, then how is that "validating the usual way"?
There is a lot of confusion caused by overlapping terminology in this issue.
By "client certificates" I mean (and generally take most others in this thread to mean) certificates which have been issues with the clientAuth key purpose defined in RFC 5280. This is the key purpose that Let's Encrypt will no longer be including in their certificates, and what this whole change is about.
However when one server connects to another server, all of TCP, TLS and the application code see the initiating party as a "client", which is distinct from say, an "XMPP client" which is an end-user application running on e.g. some laptop or phone.
The comment I was responding to clearly specified "I don't see how TLS-with-client-EKU [...]", which was more specific; however, I used the vaguer term "client certificates" to refer to the same thing in my response for brevity (thinking it would be clear from the context). Hope that clarifies things!
Completions run using a much simpler model that GitHub hosts and runs themselves. I think most of the issues with Copilot that I've seen are upstream issues with one or two individual models (e.g. the Twitter API goes down, so the Grok model is unavailable, etc.)
Interesting, but it feels like it's going to cope very poorly with actually safety-critical situations. A world model trained on successful driving data feels like it's going to "launder" a lot of implicit assumptions that would cause a car to get into a crash in real life (e.g. there's probably no examples in the training data where the car is behind a stopped car, and the driver pulls over to another lane and another car comes from behind and crashes into the driver because it didn't check its blindspot). These types of subtle biases are going to make AI-simulated world models a poor fit for training safety systems where failure cannot be represented in the training data, since they basically give models "free rein" to do anything that couldn't be represented in world-model training.
You're forgetting that they are also training with real data from the 100+ million miles they've driven on real roads with riders, and using that data to train the world model AI.
> there's probably no examples in the training data where the car is behind a stopped car, and the driver pulls over to another lane and another car comes from behind and crashes into the driver because it didn't check its blindspot
While there is most likely going to be some bias in the training of those kinds of models, we can also hope that transfer learning from other, non-driving videos will at least help generate something close enough to the very real but unusual situations you are mentioning. We could imagine an LLM serving as some kind of fuzzer to create a large variety of prompts for the world model, which, as we can see in the article, seems pretty capable of generating fictive scenarios when asked to.
As always, though, the devil lies in the details: is an LLM-based generation pipeline good enough? What even is the definition of "good enough"? Even with good prompts, will the world model output something sufficiently close to reality that it can be used as a good virtual driving environment for further training/testing of autonomous cars? Or do the kinds of limitations you mentioned still mean subtle but dangerous imprecisions will slip through, leaving a data distribution too poor for this to be a truly viable approach?
My personal feeling is that we will land somewhere in between: I think approaches like this one will be very useful, but I also don't think the current state of AI models means we can get something 100% reliable out of this.
The question is: is 100% reliability a realistic goal? Human drivers are definitely not 100% reliable. If we come up with a solution 10x more reliable than the best human drivers, one that maybe also has some hard proof that it cannot exhibit certain classes of catastrophic failure modes (probably with verified-code-based approaches that, for instance, guarantee that even if the NN output is invalid, the car doesn't try to make moves outside a verifiably safe envelope), then I feel like the public and regulators would be much more inclined to authorize full autonomy.
When bandwidth matters, when you don't want to over- or under-provision, when you need multiple seats: if you build a project with a small team, Render is going to be quite expensive because of the per-seat cost, while Railway offers unlimited seats on its paid plans. The whole pricing model is just different; I found myself leaning more toward Railway when doing the calculations.
This applies only to Heroku Fir and Cedar apps (apps that run in Heroku Private Spaces). Heroku Common Runtime apps still use a shared wildcard certificate, and their domains are not discoverable like this.