
I think that term already has usage as a proxy for "lowest sampling variance"; for example, the Gauss-Markov theorem shows that OLS is the most efficient (lowest-variance) linear unbiased estimator.
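To make "efficiency = lowest sampling variance" concrete, here's a quick simulation sketch (pure numpy, all names mine): OLS and a deliberately wasteful alternative are both unbiased linear estimators of the slope, but OLS has the smaller sampling variance, as Gauss-Markov guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 2000
x = rng.normal(size=n)  # fixed design across trials

ols_est, alt_est = [], []
for _ in range(trials):
    y = 2.0 * x + rng.normal(size=n)  # true slope = 2, iid noise
    # OLS slope estimate: sum(x*y) / sum(x^2)
    ols_est.append((x @ y) / (x @ x))
    # Another unbiased linear estimator: OLS using only the first half of the data
    alt_est.append((x[: n // 2] @ y[: n // 2]) / (x[: n // 2] @ x[: n // 2]))

# Both are unbiased, but the OLS estimator's sampling variance is lower
assert np.var(ols_est) < np.var(alt_est)
```

Both estimators center on the true slope of 2; the half-data one simply scatters more around it, which is exactly the sense of "inefficient" at play here.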

I guess this is echoing your point 2, but I would generally have said that "principled" statistical models are less efficient these days than DL (e.g., HMC is much slower than variational Bayes). Priors are usually overrated; I think the bigger risk is that people make basic mistakes because they don't understand the assumptions behind "basic" machine learning ideas like train/test splits or model selection. I'm not sure it warrants a lot of panic, though.
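As an example of the kind of basic mistake I mean, here's a sketch (pure numpy, setup invented for illustration) of selecting features before splitting: on pure-noise data, selection that sees the test half produces a spuriously good held-out correlation, while selection done only on the training half correctly finds nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 2000
X = rng.normal(size=(n, p))
y = rng.normal(size=n)  # pure noise: no real signal anywhere

train, test = slice(0, 50), slice(50, 100)

# Wrong: pick the 5 features most correlated with y using ALL the data,
# then "validate" on a held-out half that already influenced the selection
top_leaked = np.argsort(np.abs(X.T @ y))[-5:]
beta, *_ = np.linalg.lstsq(X[train][:, top_leaked], y[train], rcond=None)
leaked_r = np.corrcoef(X[test][:, top_leaked] @ beta, y[test])[0, 1]

# Right: select features using only the training half
top_clean = np.argsort(np.abs(X[train].T @ y[train]))[-5:]
beta2, *_ = np.linalg.lstsq(X[train][:, top_clean], y[train], rcond=None)
clean_r = np.corrcoef(X[test][:, top_clean] @ beta2, y[test])[0, 1]

# The leaked pipeline looks predictive on noise; the clean one hovers near zero
assert leaked_r > clean_r
```

Nothing here requires a prior or an exotic model; the failure is purely in not understanding what the train/test split is supposed to be protecting you from.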



