
Isn't "Code you don't understand" the definition of AI/ML?



Zing! But it depends on the algorithm. Some aren't that complicated to understand, like linear regression. Others, like DNNs, are basically impossible. But with ML you're at least always testing the code you don't understand in the process of training the parameters. That's better than the minimum-effort way of using Copilot code. And many will make just that minimum effort, releasing untested code they don't understand.
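
To make the linear regression point concrete: the whole fitted model is a couple of numbers you can read off and sanity-check, which is exactly what a DNN's millions of weights don't give you. A minimal sketch in plain numpy (toy data made up for illustration):

  import numpy as np

  # Toy data: y is roughly 3*x + 2 plus noise (invented for illustration).
  rng = np.random.default_rng(0)
  x = rng.uniform(0, 10, size=100)
  y = 3 * x + 2 + rng.normal(0, 1, size=100)

  # Ordinary least squares on the design matrix [x, 1].
  A = np.column_stack([x, np.ones_like(x)])
  (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

  # The entire "model" is two interpretable parameters.
  print(f"y ~ {slope:.2f} * x + {intercept:.2f}")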


Well, I think this overestimates people outside the HN echo chamber again. Most senior ML people we see in big corps have no clue what they're doing: they just fiddle with knobs until it works. They couldn't explain any of it: copy a model, change some parameters, train until convergence, test for overfitting. When AutoML was gaining traction I hoped they'd be fired (I don't think they're doing useful work), but nope: those companies have trouble hiring more of them.
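
The workflow is basically this loop. A caricature in scikit-learn (dataset, model, and parameter grid all made up for illustration):

  from sklearn.datasets import make_classification
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import GridSearchCV, train_test_split

  # Stand-in for "copy code/model": a stock classifier on synthetic data.
  X, y = make_classification(n_samples=1000, random_state=0)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  # "Change parameters and train": try a grid of knobs, keep the best.
  grid = GridSearchCV(
      RandomForestClassifier(random_state=0),
      param_grid={"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]},
      cv=5,
  )
  grid.fit(X_train, y_train)

  # "Test for overfitting": compare training score to held-out score.
  print("train:", grid.score(X_train, y_train))
  print("test: ", grid.score(X_test, y_test))

No understanding of why the winning combination works is required at any step, which is the point.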


I'd say that's "Code you (should) understand doing things you can't understand (and possibly can't audit)."

The art and practice of programming hasn't changed much over the last 50 years. 50 years from now, though, it will be utterly unrecognizable.


> Isn't "Code you don't understand" the definition of AI/ML?

We don't need to understand the process to evaluate the output in this case. Bad code is bad code no matter who/what wrote it.
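
For example, a unit test evaluates the output identically whether a human or Copilot wrote the function (hypothetical function and tests, just for illustration):

  import re

  def slugify(title: str) -> str:
      # Could be hand-written or Copilot-generated; the tests don't care.
      return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

  assert slugify("Hello, World!") == "hello-world"
  assert slugify("  Copilot & Co.  ") == "copilot-co"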


No. You could use Copilot to generate code you do understand and double-check it before committing. It's similar to just copying and pasting from Stack Overflow.



