I would trust ChatGPT code about as much as I trust code produced by any human. All the Therac-25 code was written by a human, so what exactly is the argument here? At least when you tell ChatGPT that its code is wrong, it agrees and tries to fix it. Ok, it usually fails at fixing it, but it doesn't refuse to acknowledge that there is a problem at all, unlike what happened with the Therac-25.
I like to think that it is not about who (or what) writes the code in the first place, but about the review and testing procedures that ensure the quality of the final product. I think. Maybe it is just hopeless.
In general we would like developers/engineers to know as much as possible about the things they're engineering. ChatGPT-based development encourages the opposite.
So because ChatGPT exists now, less experienced programmers will be hired to develop critical software under the assumption that they can use ChatGPT to fill the gaps in their knowledge?
Even in that case, I would argue that this is entirely a problem with the process, and should be fixed at that level. An experienced programmer doesn't become any less experienced just because they use ChatGPT.