I just finished an introductory project for my machine learning course. I had to write a program in MATLAB that used Bayesian regression and KNN to do character recognition. Our professor gave us 2400 feature vectors of horribly written characters, and we had to write this damn program so that it would recognize the characters when we fed it 2400 more horribly written, but unclassified, characters. It was a mess.
However, I did learn a few things: 1) KNN is a pretty good beginning classifier. 2) Character recognition is a really tough problem because of the ridiculous dimensionality. 3) Machine learning is chock full of "black magic" type techniques.
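For anyone curious what the KNN part actually boils down to, here is a rough sketch of the idea in MATLAB. This is not the assignment code; the names (trainX, trainY, testX, k) are just placeholders, and it assumes numeric class labels and a MATLAB version with implicit expansion (R2016b or later):

% Minimal k-NN classifier sketch (hypothetical names, not the course solution).
% trainX: N-by-D matrix of training feature vectors
% trainY: N-by-1 vector of numeric class labels
% testX:  M-by-D matrix of unclassified feature vectors
function predY = knn_classify(trainX, trainY, testX, k)
    M = size(testX, 1);
    predY = zeros(M, 1);
    for i = 1:M
        % Euclidean distance from the i-th test point to every training point
        diffs = trainX - testX(i, :);       % implicit expansion (R2016b+)
        dists = sqrt(sum(diffs.^2, 2));
        % Take the k nearest neighbors and vote by majority label
        [~, idx] = sort(dists);
        predY(i) = mode(trainY(idx(1:k)));
    end
end

That is the whole trick: no training phase to speak of, just distances and a vote, which is exactly why it makes such a good first classifier and also why the dimensionality hurts so much.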
I'm planning on launching my own website pretty soon, though. I hope to put up some real-world applications of machine learning for everyone to see and use freely. Some of the theory really needs to be distilled.