Zing! But well, it depends on the algorithm. Some aren't that complicated to understand, like linear regression. Others, like DNNs, are basically impossible. But with ML you're at least always testing the code you don't understand in the process of training the parameters. That's better than the minimum effort people put in when using copilot code. And many will make just that minimum effort and release untested code they don't understand.
Well, I think this overestimates people outside the HN echo chamber again. Most senior ML people we see in big corps have no clue what they are doing: they just fiddle with knobs until it works. They would not be able to explain anything; their whole process is: copy some code or a model, change parameters, train until convergence, test for overfitting. When AutoML started gaining traction I hoped they would be fired (as I do not think they are doing useful work), but nope: those companies have trouble hiring enough of them.
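To make that loop concrete, here is a minimal sketch of the knob-fiddling workflow (purely illustrative: the dataset, model, and parameter grid are made up, not anyone's actual setup): pick a model, vary one hyperparameter, train, and compare train vs. validation accuracy to spot overfitting.

    # Illustrative only: synthetic dataset and a single "knob" (max_depth).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

    best = None
    for max_depth in (2, 4, 8):                      # "fiddle with knobs"
        model = GradientBoostingClassifier(max_depth=max_depth, random_state=0)
        model.fit(X_tr, y_tr)                        # "train until convergence"
        train_acc = model.score(X_tr, y_tr)
        val_acc = model.score(X_val, y_val)
        # a large train/validation gap is the overfitting check
        print(f"max_depth={max_depth}: train={train_acc:.3f} val={val_acc:.3f}")
        if best is None or val_acc > best[1]:
            best = (max_depth, val_acc)

    print("keeping max_depth =", best[0])

Which, to the parent's point, does exercise the model against held-out data even if the person running the loop couldn't explain what's inside it.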
No. You could use copilot to generate code you do understand and double-check it before committing. It's similar to just copying and pasting from Stack Overflow.