>Historically, symbolic techniques have been very desirable for representing richly structured domains of knowledge, but have had little impact on machine learning because of a lack of generally applicable techniques.
Is this generally true? I mean, "impact" can be measured in different ways, but this paragraph gives the impression that symbolic logic was always orthogonal to ML. However, there was clearly much research in that area.
Frankly, I don't understand why the field of symbolic AI was so thoroughly abandoned. Contrary to popular belief, it did deliver results - a lot of them. It had a good theoretical foundation and years of practice. It could, with ease, do a lot of neat tricks, like systems explaining why they made certain decisions. Not just that, you could implement those tricks after you implemented the core functionality. And most importantly - it was scalable downwards. You could take some ideas from a complex system, put them into a much simpler system on vanilla hardware (i.e. a normal application) and still get very interesting and useful results.
I think to some extent it disappeared, to some extent it rebranded. But mostly it disappeared, because commercial motivations pushed a lot of money towards machine learning research. But that's just a hunch.
This reminds me of a book titled Grammatical Inference [1]. If you are interested in the blog post, you might be interested in the book as well. It's a different kind of "learning" though.
Grammar inference is a very interesting (tho quite different) topic. We have absolutely no idea how humans do it, and we know that it's really, really hard (nearly impossible!) to do given just raw text. Super interesting topic!
I'm working in the lab that built Spaun [1], which might be of interest to you. We use the Vector Symbolic Architectures you talk about in combination with Deep Learning and other stuff to build brain models.
As far as I know, my lab is one of the few places mixing symbolic learning with Deep Learning. Do you or anyone else know of other modern attempts (Reinforcement Learning?) with this approach?
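For anyone wondering what Vector Symbolic Architectures look like in practice, here's a minimal sketch (my own toy example, not our lab's code) of Holographic Reduced Representations, the VSA flavor underlying Spaun's semantic pointers. Binding is circular convolution, and unbinding uses its approximate inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # vector dimensionality

def symbol():
    """A random unit vector standing in for an atomic symbol."""
    v = rng.normal(0.0, 1.0 / np.sqrt(D), D)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution (via FFT): fuses two vectors into one."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def approx_inverse(a):
    """Involution of a: its approximate inverse under circular convolution."""
    return np.concatenate(([a[0]], a[:0:-1]))

# Encode a tiny structure as bundled role/filler bindings.
subj, obj = symbol(), symbol()
dog, cat = symbol(), symbol()
sentence = bind(subj, dog) + bind(obj, cat)

# Query: what filled the subject role? Unbind, then compare to the vocabulary.
guess = bind(sentence, approx_inverse(subj))
print("dog:", round(float(np.dot(guess, dog)), 2))  # close to 1
print("cat:", round(float(np.dot(guess, cat)), 2))  # close to 0
```

The point is that a fixed-width vector can carry role/filler structure and still be fed to a neural network, at the cost of some noise.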
Oh! Was Murray Shanahan's lab involved with this? Cool. I've chatted with Murray a very tiny bit on Twitter about all this. That paper is part of what inspired me to write this blog post, because I was dissatisfied with the approach they propose.
It felt to me like they had found a way to restrict "symbolic" data to a very narrow domain where issues like hierarchical structure were totally absent, and vectorial representation was therefore quite straightforward, and I wanted to figure out an alternative: instead of restricting the kinds of symbolic data involved, handle the full spectrum. I had the symbolic vector idea in my head for about two years as an answer to the representation problem, but I hadn't seen what the right answer was to the other aspects, at least not clearly. That paper motivated me to start thinking about it again, and this time I saw the answer. :)
> This post aims to provide a unifying approach to symbolic and non-symbolic techniques of artificial intelligence.
This is the sort of thing people build academic careers on.
I'm extremely skeptical that a blog post will live up to this claim; I'd expect anything that even approaches unifying the techniques to be on the order of a Ph.D. thesis or an extensive postgraduate research report.
It's a very small unification, and a big, eye-catching claim.
Besides, I said "aims", not "succeeds"! :)
I certainly don't think this is The Big Solution, tho. It's far more mundane. Unifying the techniques is nice, but all that means is that we don't have to think about GOFAI and ML as fundamentally distinct. It doesn't mean that any truly deep problems have been solved, just that there's now a reasonably reliable way to use both that isn't just "simultaneous" but rather "fused".
Are there any examples of reading a graph back from this representation? It doesn't seem straightforward, unless I'm missing something. How do you handle the nondeterminism caused by the lossy transform into a vector?
Reading a graph back can't be done with any certainty, nor is it intended to be. The representation is meant for building inputs to ML systems, not outputs. Building symbolic outputs should be done via the other techniques mentioned in the blog post (e.g. structural editing), so that you can take advantage of the notions of correctness inherent in the symbolic representation.
As for the lossiness, it's not clear to what extent it even needs to be handled, but the `k` mentioned in the definition of symbolic vectors is a parameter: the larger `k` is, the less lossy the representation.
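To make that concrete, here's a hedged sketch of the construction as I read it: represent a labeled graph by counting small labeled substructures up to size `k`. The post counts subgraphs; for brevity this counts labeled paths of at most `k` edges, which has the same flavor and the same lossiness trade-off:

```python
from collections import Counter

def path_counts(nodes, edges, k):
    """nodes: {node_id: label}; edges: {(src, dst): label}.

    Returns a Counter mapping label sequences (the coordinates of the
    symbolic vector) to how often each occurs in the graph.
    """
    adj = {}
    for (src, dst), lbl in edges.items():
        adj.setdefault(src, []).append((lbl, dst))

    counts = Counter()

    def walk(node, seq):
        counts[tuple(seq)] += 1
        if len(seq) < 2 * k + 1:  # bound paths at k edges
            for edge_lbl, nxt in adj.get(node, []):
                walk(nxt, seq + [edge_lbl, nodes[nxt]])

    for node, lbl in nodes.items():
        walk(node, [lbl])
    return counts
```

Two different graphs can produce the same counts, which is exactly the lossiness in question; raising `k` distinguishes more graphs at the cost of a bigger vector.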
It depends on what you mean by that. It assumes a fixed set of node labels, but not node identities, since we're counting subgraphs. So for instance, multiple distinct nodes labeled `X` or edges labeled `Y` will contribute to the same count, despite having distinct nodes involved.
I don't think you need to do anything special for them. Your algorithm for counting subgraphs shouldn't go in an infinite loop when fed a cycle, but assuming you have such an algorithm already, getting the vector representation should be straightforward.
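For instance, using the path-counting sketch from the earlier comment (again, a stand-in for whatever subgraph family the post actually uses), the depth bound is what guarantees termination on a cycle, and the label-based counting is visible too:

```python
# Two distinct nodes, both labeled "X", joined into a cycle by "Y" edges.
nodes = {1: "X", 2: "X"}
edges = {(1, 2): "Y", (2, 1): "Y"}

for seq, n in sorted(path_counts(nodes, edges, k=2).items()):
    print(seq, n)
# ('X',) 2                       <- both X nodes land in the same count
# ('X', 'Y', 'X') 2
# ('X', 'Y', 'X', 'Y', 'X') 2   <- the cycle, cut off after k edges
```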
> However, there was clearly much research in that area.
Here is just one example:
http://www.doc.ic.ac.uk/~shm/Papers/lbml.pdf