
>Historically, symbolic techniques have been very desirable for representing richly structured domains of knowledge, but have had little impact on machine learning because of a lack of generally applicable techniques.

Is this generally true? I mean, "impact" can be measured in different ways, but this paragraph gives the impression that symbolic logic was always orthogonal to ML. However, there was clearly a lot of research in that area.

Here is just one example:

http://www.doc.ic.ac.uk/~shm/Papers/lbml.pdf

Frankly, I don't understand why the field of symbolic AI was so thoroughly abandoned. Contrary to popular belief, it did deliver results - a lot of them. It had a good theoretical foundation and years of practice. It could easily do a lot of neat tricks, like having the system explain why it made a certain decision. Not just that: you could add those tricks after you had implemented the core functionality. And most importantly - it was scalable downwards. You could take some ideas from a complex system, put them into a much simpler system on vanilla hardware (i.e. a normal application) and still get very interesting and useful results.
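To make the "explain its decisions" point concrete, here is a minimal sketch (my own toy example in Python, not taken from the paper above): a forward-chaining rule engine where every derived fact remembers the premises that produced it, so a why-trace falls out for free. All rule and fact names are made up for illustration.

    # Toy forward-chaining rule engine with an explanation ("why") trace.
    # (Illustrative sketch only; rules and facts are invented.)

    rules = [
        # (conclusion, premises)
        ("mammal",    ["has_fur", "warm_blooded"]),
        ("carnivore", ["mammal", "eats_meat"]),
        ("tiger",     ["carnivore", "has_stripes"]),
    ]

    def infer(facts):
        """Forward-chain to a fixpoint; map each fact to its premises (None = given)."""
        known = {f: None for f in facts}
        changed = True
        while changed:
            changed = False
            for conclusion, premises in rules:
                if conclusion not in known and all(p in known for p in premises):
                    known[conclusion] = premises
                    changed = True
        return known

    def explain(fact, known, depth=0):
        """Print the derivation tree for a concluded fact."""
        pad = "  " * depth
        premises = known.get(fact)
        if premises is None:
            print(f"{pad}{fact} (given)")
        else:
            print(f"{pad}{fact} because:")
            for p in premises:
                explain(p, known, depth + 1)

    known = infer(["has_fur", "warm_blooded", "eats_meat", "has_stripes"])
    explain("tiger", known)

Running it prints the chain of rules that led to "tiger", all the way down to the given facts - exactly the kind of explanation facility that came almost for free in symbolic systems.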



I think it partly disappeared and partly rebranded. But mostly it disappeared, because commercial motivations pushed a lot of money towards machine learning research. That's just a hunch, though.



