
you'll be a programmer - that is what counts, and how good a programmer you are will determine your success. never put all your eggs in one basket (not saying you shouldn't become an ML expert, though - that's pretty damn nice). as a PhD, you are probably good enough.

as to ML, its adoption is hyped. it is powerful, but not in the way most people talk about it.

support vector machines and Bayesian learning have been around since the 70s/80s (ninja edit: SVMs since 1963! Markov chains since the 1950s, Bayesian learning/pattern recognition since the 1950s), but adoption has been slow due to the nature of business, which is now drooling over ML since neural networks beat a few older algorithms.

due to the hype, more businesses will opt for ML now, but the craze will plateau and ML will become another tool in your arsenal.

so basically, you really have nothing to worry about - use your PhD to do interesting things, come up with novel research, and/or develop your own product.

don't let job-security worries get in the way of enjoying what you want to do now. you're already good and in STEM (and if you don't feel good enough, work on yourself until you do).



> support vector machines and Bayesian learning have been around since the 70s/80s (ninja edit: SVMs since 1963! Markov chains since the 1950s, Bayesian learning/pattern recognition since the 1950s), but adoption has been slow due to the nature of business, which is now drooling over ML since neural networks beat a few older algorithms.

This is one of the hardest things about convincing managers and leads. They think things like CRFs and Markov models are "new" methods and too risky, so they opt for explicit rule-based systems built on old search methods (e.g. A*, grid search), which hog memory and CPU. Those methods rarely work on the interesting problems of the modern day.

They can understand the rule-based methods easily. They have a hard time leaping to "the problem is just a set of equations mapping inputs to outputs, and the mapping is found by an optimization method."
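That framing fits in a few lines. Here is a minimal sketch (a hypothetical example in plain Python, not from the comment): the "equation" is a linear map from inputs to outputs, and the mapping is found by gradient descent on squared error.

```python
# Sketch of "inputs mapped to outputs, mapping found by optimization"
# (hypothetical example). Fit y = w*x + b by plain gradient descent.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1

w, b = 0.0, 0.0
lr = 0.02
for _ in range(5000):
    # gradients of mean squared error with respect to w and b
    gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # converges toward w = 2, b = 1
```

No rules anywhere: the program never states "output rises by 2 per unit of input"; the optimizer discovers it. That is the conceptual leap the comment describes.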


I explain it with the infinitesimal-step method; done right, with the hill-climbing metaphor, it usually lands. but it does take away the magic of "wooo, neural" :p
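The hill-climbing metaphor itself can be sketched in a few lines (a hypothetical example, not from the comment): nudge the parameter a tiny step each way and keep whichever move improves the objective.

```python
# Minimal hill-climbing sketch (hypothetical example): try a tiny
# step in each direction and keep the best - the "infinitesimal"
# intuition behind gradient-based training.

def objective(x):
    return -(x - 3.0) ** 2  # single peak at x = 3

x, step = 0.0, 0.01
for _ in range(1000):
    x = max((x - step, x, x + step), key=objective)

print(round(x, 2))  # climbs toward the peak at 3.0
```

Once the climber reaches the peak, both neighboring steps score worse and it stays put; replacing the finite step with the gradient gives the continuous version used in practice.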



