> If you'd prefer to take this discussion to email we can.
Sure. I'll take a crack at these followup questions first, though!
> I think that my question was a bit too ambiguous. I think that I meant like an area where logic programming isn't currently commonly used (to me, the network configuration tool seems very out of place for example) but it would be a match made in heaven. I'm aware of expert systems :-).
It doesn't surprise me to see Prolog in a configuration system, since configuration is a standard use of expert systems--for example, Xcon (https://en.wikipedia.org/wiki/Xcon). Getting a manager at Microsoft to allow the use of Prolog is the surprising thing to me! :)
> That's what I was asking about. E.g. would it make sense to e.g. write an OS in a logic/relational language? What sort of CPU support would be required to make this a serious effort? I think what I'm getting at is that I feel like e.g. Prolog isn't truly general purpose. I get that it's turing complete but large portion of programs are much slower in prolog (as you say yourself). Is this something that's "inherent" to logic languages (like even if HW support were better, will it always (i know i know, no absolutes but you get the gist) be MUCH slower or could it be just slightly slower)?
You can play all sorts of "extra-logical" tricks in Prolog to improve performance, at the cost of giving up declarativeness. Still, Prolog is not the language I'd reach for first when writing system software, as you point out.
Have you looked at Mercury? Mercury is a logic programming language that borrows heavily from Haskell. One cool thing about Mercury is that you define high-level relations, specifying types and other annotations for each relation. The Mercury compiler then compiles the high-level declarative relations into lower-level, more efficient functional definitions. One high-level relation might be compiled to many functions, which are dispatched to at run time based on the groundness of the arguments passed to the relation. Mercury is specifically designed for systems programming, and also has support for parallelism.
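The mode-dispatch idea can be sketched in a few lines of Python. This is only an illustration of the concept (the function names and dispatch-by-`None` convention are mine, not Mercury's actual compiled output): one declarative `append/3` relation becomes different specialized functions depending on which arguments are ground.

```python
from itertools import product

# Sketch of Mercury-style mode dispatch for an append/3 relation.
# A single declarative relation append(Xs, Ys, Zs) is compiled to
# different procedures depending on which arguments are ground.

def append_in_in_out(xs, ys):
    """Mode (in, in, out): both inputs ground -> a deterministic function."""
    return xs + ys

def append_out_out_in(zs):
    """Mode (out, out, in): result ground -> enumerate all splits."""
    return [(zs[:i], zs[i:]) for i in range(len(zs) + 1)]

def append(xs=None, ys=None, zs=None):
    """Dispatch on groundness; Mercury resolves this statically, not at run time."""
    if xs is not None and ys is not None:
        return append_in_in_out(xs, ys)
    if zs is not None:
        return append_out_out_in(zs)
    raise ValueError("insufficiently instantiated")
```

So `append(xs=[1, 2], ys=[3])` runs as an ordinary list append, while `append(zs=[1, 2])` enumerates every way to split the list, and each mode gets code specialized to its determinism.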
I should point out that Mercury relations aren't really full relations, like they are in miniKanren. Mercury compiles away unifications for efficiency, which means that Mercury can't handle more sophisticated uses of logic variables. For example, the quine-generating query would be impossible to specify in Mercury. But the resulting code is much more efficient.
Greg Rosenblatt has applied a few of the Mercury compiler techniques to the relational interpreter used by Barliman, which has sped up some synthesis queries by orders of magnitude. He suspects there are more gains to be made from borrowing Mercury optimizations.
> Like I realize that they are slower because they are fundamentally doing a search but I'm wondering if e.g (and I'm really just trowing ideas on the wall here) it's possible to leverage type systems and hw support to considerably reduce the runtime search space and do most of it on compile time. Are you familiar with liquid types (you mention dependent types, so I'm guessing yes?)?
I'm not sure what you mean by hardware support, exactly. Are you talking about using accelerators like GPUs or FPGAs? If so, as far as I can tell miniKanren doesn't map well onto these accelerators, although the Intel Xeon Phi might be a decent match.
I suspect that running synthesis tasks on a beefy server, or even on a cluster, would result in a significant improvement to performance. Static analysis and compiler optimizations would probably make a much bigger difference, though.
I'm slightly familiar with liquid types. There may be ways for dependent types to prune parts of the search space statically, although I'm not sure. I'd like to explore combining miniKanren and Idris, to see where that leads.
> In your work, you seem to be gravitating towards dynamically typed languages. Wouldn't a lot of work be easier in like Haskell or some other type language? I'm sure you have good reasons for not having switched and I'm interested in hearing those. Why are Lisps better for your work?
I like Scheme because it has very powerful meta-programming capabilities, it is easy to write meta-circular interpreters, it is trivial to write DSLs (without having to write parsers or whatever), and the abstract syntax and concrete syntax are essentially the same. Being able to just call 'read' to parse an expression, and being able to redefine built-in binding forms, for example, can be extremely handy.
Haskell is a language chock full of interesting ideas. I don't love it, though. The syntax, lack of macros, all the extensions in GHC, etc., make for a language that is powerful but complex. I like languages that are powerful but small, containing only a few concepts. miniKanren itself only has three core operators; the relational interpreter uses an extended version of miniKanren with six operators, but we hope to get that down to five.
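To make "three core operators" concrete, here is a rough microKanren-style core in Python: unification (`eq`), disjunction, and conjunction, plus `fresh` for introducing logic variables. This is a deliberately simplified sketch using eager lists rather than the lazy streams a real implementation needs (so it cannot run the harder queries, like quine generation), and the names are mine rather than any particular implementation's.

```python
# A minimal microKanren-style core: eq, disj, conj (+ fresh for variables).

class Var:
    def __init__(self, n): self.n = n

def walk(t, s):
    # Follow variable bindings in substitution s until a non-bound term.
    while isinstance(t, Var) and t.n in s:
        t = s[t.n]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if isinstance(a, Var): return {**s, a.n: b}
    if isinstance(b, Var): return {**s, b.n: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None: return None
        return s
    return s if a == b else None

# A goal maps a (substitution, fresh-variable counter) state to a list of states.
def eq(a, b):
    def goal(state):
        s, c = state
        s2 = unify(a, b, s)
        return [(s2, c)] if s2 is not None else []
    return goal

def fresh(f):
    # Introduce a new logic variable and pass it to the goal constructor f.
    def goal(state):
        s, c = state
        return f(Var(c))((s, c + 1))
    return goal

def disj(g1, g2):
    return lambda state: g1(state) + g2(state)

def conj(g1, g2):
    return lambda state: [st for st1 in g1(state) for st in g2(st1)]

def run(goal):
    return goal(({}, 0))
```

For example, `run(fresh(lambda q: disj(eq(q, 1), eq(q, 2))))` yields two answer states, one binding `q` to `1` and one binding it to `2`.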
In short, Scheme fits my programming style better. I'm not averse to using other languages, though. And miniKanren has been ported to dozens of host languages, including multiple Haskell implementations.
As far as the types making things easier, I'm not sure this is true. Many ML and Haskell programmers have looked at miniKanren and said, "this would be so much better with types!" After playing around with the implementation for a while, they usually say, "taking advantage of the types without losing the flavor of miniKanren is much harder than I thought." As far as I can tell, the only interesting embedding of miniKanren in a typed language is Dmitry Boulytchev's OCanren (https://github.com/dboulytchev/OCanren). Dmitry had to hack the OCaml type system at a low (and potentially unsafe) level to make the embedding possible. I would be interested in exploring OCanren for writing relational interpreters. I'm not exactly sure what it would buy us, though.
There are lots of interesting connections between dependent types and logic programming. I'd like to explore combining Idris and ideas from miniKanren, for example. To me, combining Idris and miniKanren seems more interesting and promising than combining Haskell and miniKanren, which has been done many times without apparent benefit.
Now, in contrast to the host language used for the embedding of miniKanren, implementing a typed language in the relational interpreter definitely makes synthesis easier. In other words, implementing a relational interpreter for an ML-like language, using miniKanren embedded in Scheme, is probably far better for synthesis than implementing a relational interpreter for Scheme, using miniKanren embedded in Haskell.
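One way to see why a typed object language helps synthesis: types rule out most candidate terms before the search ever considers them. Here is a toy Python sketch (my own example, not the Barliman/miniKanren machinery) that enumerates applications `(f x)` over a few atoms, with and without type checking.

```python
from itertools import product

# Each atom is a (name, type) pair; "a->b" is a function type written as a string.
ATOMS = [("0", "int"), ("true", "bool"), ("not", "bool->bool"), ("succ", "int->int")]

def untyped_apps():
    # Every application (f x), ignoring types entirely.
    return [(f, x) for (f, _), (x, _) in product(ATOMS, ATOMS)]

def typed_apps():
    # Only applications where f is a function and x matches its domain.
    out = []
    for (f, tf), (x, tx) in product(ATOMS, ATOMS):
        if "->" in tf and tf.split("->")[0] == tx:
            out.append((f, x))
    return out
```

The untyped enumeration produces 16 candidates; the typed one produces only `(not true)` and `(succ 0)`. The gap grows rapidly with term size, which is one intuition for why type information speeds up search.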
I definitely plan to investigate relational interpreters for typed languages (Michael Ballantyne has already done some work in this area, and showed great improvements in synthesis). I'm also open to exploring embeddings of miniKanren in typed languages. This seems like a much trickier problem than you might think, though. OCanren looks like a promising start, and I look forward to playing with it more.
> I was actually thinking about wording it slightly less absolutely but I was interested if there were any problems where logic programming would be a bad idea no matter what so it would be basically "never".
I think this largely depends on what you consider "logic programming". Mercury seems applicable to systems programming problems that aren't an ideal match for traditional Prolog, for example. And Curry allows functional programming and logic programming to be blended in interesting ways that blur the lines between the paradigms.
There are also approaches like Answer Set Programming, and even Answer Set Programming Modulo Theories, that use powerful solvers to solve problems that would be hopelessly inefficient with naive Prolog (or miniKanren) search.
So I'll stick with my original answer, and say it's far too soon to tell. :) We do need new languages and approaches, though!
> I think that's part of what I was asking. What would be a "killer" app for logic programming? Are there any problems out there that aren't currently being solved with logical programming but would really benefit from them?
Such a tiny percentage of programmers have even heard of Answer Set Programming; it's possible there are killer apps for ASP lying around, just waiting for someone to connect the ideas.
I suspect killer apps will be found for probabilistic logic programming, as the languages and implementations improve, and as more people become comfortable with probabilistic programming.
And, of course, I'm hoping program synthesis using logic programming will be a killer app! :)
> Btw can you recommend some literature on the intersection of logic/constraint/relational programming and music? I've ordered this book https://amazon.com/dp/1848212887/
> I'll check out the links you posted. I'm actually working full-time on this music composition app (logic/relational programming isn't the main contribution of the application but I'm just trying to see if there's some low hanging fruit I could do easily and blow minds). If you have any sort of input into that I'm be interested. I saw on your CV that you supervised some undergrad research in this area so I'm guessing that you might not be unfamiliar with that intersection.
> I did see like the top google links when I searched for the combination of those keywords (e.g. https://vimeo.com/55677313) so I think I'm interested in some sort of meta analysis.
I'd be happy to talk to you about this in more depth, maybe via hangouts. I think it depends very much on what you want to do. For example, synthesis of music that pleases humans is still an open problem. But other problems seem more tractable.