
It seems like you might be deep in the ML rabbit hole. Zoom out a bit. A hash function is "just a function." Every image-labeling black box can be thought of as a hash from images to label vectors.

OP's comment effectively asks whether there is another, more grokkable function that maps/hashes inputs to the same labels.

Granted, that question boils down to "can we create human-understandable models?" which is the whole point of this discussion.

It's a good question, though. If we had black-box-like spaghetti code performing the same task, I predict that the comments here would be very different.



Thanks for explaining that perspective. I was definitely seeing things too narrowly.

Based on your phrasing of the issue, we could frame the problem as: can we reduce the number of parameters in an ML model to the point where humans can understand all of them? That's related to an active research area: minimizing model size. ML models can have billions of parameters, and it's infeasible for a human to evaluate all of them. Research shows that you can sometimes prune 97% of a model's parameters without hurting its accuracy [0]. But 3% of a billion parameters is still far too many for a person to evaluate. So I think the answer so far is no: we can't yet create human-understandable models that perform as effectively as black boxes.

[0] https://openreview.net/forum?id=rJl-b3RcF7
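To make the pruning idea concrete, here's a toy sketch of magnitude pruning, the simplest technique in this family: zero out the smallest-magnitude weights. This is illustrative only (pure Python, tiny weight list); the cited work operates on real network tensors and involves retraining, which isn't shown here.

```python
def magnitude_prune(weights, sparsity=0.97):
    """Return a copy of weights with the smallest `sparsity`
    fraction (by absolute value) zeroed out."""
    k = int(len(weights) * sparsity)  # how many weights to drop
    if k == 0:
        return list(weights)
    # Threshold = the k-th smallest absolute value.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    # Keep only weights strictly above the threshold.
    return [w if abs(w) > threshold else 0.0 for w in weights]

weights = [0.01, -2.5, 0.003, 1.7, -0.0004, 0.9, -0.02, 0.11, 3.0, -0.05]
print(magnitude_prune(weights, 0.9))
# Only the single largest-magnitude weight (3.0) survives at 90% sparsity.
```

Even at 97% sparsity, though, the surviving weights are still an opaque pile of numbers, which is the interpretability problem in a nutshell.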



