So this appears to be a language to specify logic systems. This makes it especially odd how unstructured the website appears.
If the language is so simple and uses only 3 or 4 operators, why not just start with a short definition of those operators, and maybe a list of short examples? That would go a long way towards explaining to me as a casual visitor why I should care about this.
So, this project is the web site of a famous (arguably THE most famous) book on relational programming. It's a book that uses a dialogue format to teach the language, how it's implemented, and its principles. The principles of relational programming are the important part. It's not strictly incorrect to say "MiniKanren specifies logical systems", but it doesn't really capture the spirit of relational programming.
The book is linked from the page, and the book itself is well reviewed on average (and gets an excellent rating from me, personally). The first 3 paragraphs explain this and link to a video with more details.
MiniKanren's had some extremely interesting results presented in it, most famously a relational bidirectional interpreter (which can suggest programs that transform input in a specified way). William Byrd (one of the primary authors of the work) still gives an excellent talk on the subject because it's not well known. [0]
I can't help but notice the link to the book, the exhaustive list of papers and talks, the links to videos, and an invitation schedule. It seems like there was an awful lot of information you just ignored. As a friendly bit of advice: PLT, PLR, and in general the deeper and more modern end of algorithmic work is not going to present a punchy, javascript-framework-esque 60-second animation for you.
So if you are going to pursue more on MiniKanren (and I really recommend it, it's amazing stuff) then I strongly recommend you gird yourself for rifling through references and books as a first resort rather than a last.
I disagree with you, but to really understand why, you will need to read my book. It may sound like a lot of effort, but I can assure you that my disagreement is as intricate as the celestial orbits, as deep as the darkest ocean, and as divine as the heavens themselves.
The scholars and holy men have spent years studying why I disagree with you, and they will confirm that, once you too have spent years studying, you will not only disagree with your former self, but understand, with the first true clarity of your adult life, that the sacred knowledge cannot be grasped by the clumsy hands of a dilettante.
"Please, sir", I hear you ask, "can you not spare some crumb of wisdom from the table of your enlightenment? Some small morsel to nourish me on my journey?" I am afraid not. Summaries are nothing but fornications of the mind: easy pleasures to tempt you away from the righteous path of long and laborious study to an uncertain end. The truly wise know that it is better to know nothing than to know a little.
And if, in some fit of insouciance, you begin to think it is not worth your time to engage with the entire depth (as the ocean, recall) of my argument before knowing if it is true, or useful, or relevant to you? Then I can offer nothing but pity. Do you not understand that it is you who must prove yourself worthy of my ideas? You who must justify your relevance to me?
The ego of students these days is astonishing. It's no wonder they never learn anything.
I like to imagine a hypothetical, parallel universe where, instead of spending the same amount of time writing your overwrought post, you focused on the first paragraph of the simple, accessible webpage that is linked: you'd have seen a link directly to a series of interactive code samples. An exact summary of "what it's good for" is found in the second paragraph (as noted by others in this thread). Such an investment of time would surely get you below the fold, where you'd see the links to multiple talks and papers. There's a whole section entitled "Recorded Talks", with dozens of videos offered by professionals far more qualified to talk on the subject than either you or I, as implementors of such software systems, laid out in an archival format. Talks by such amazing people as Byrd, Friedman, Stokke or Nolen.
I like to think that in that same universe, the poster I responded to also took that time, so that I could have instead spent more time on my OTHER, way more fun and interesting post where I go into a use case. Maybe I'd have actually reformatted some source code from the prototype I'm contractually forbidden to share, to give a more in-depth look at how one might use these principles (which are, as I suggested, not at all obvious) in industry applications.
I think that alternate universe is one I'd like to live in. Instead, I'm stuck here in this one. A universe where instead of doing more cool posts about the electromechanics of power tools and their restoration, you instead write a prickly little post defending someone's request for a thing that was literally offered in the first paragraph of the source page they asked about. One where I spend time writing this boring post as well.
Truly, we live in the darkest timeline. Probably the darkest part about this timeline is that my post you responded to, devoid of novelty and missing the talk link I was going to copy-paste (leaving a blank footnote annotation), has gotten over 3x the positive moderation of my other post where I describe an active use case.
But thanks for your feedback. You're right, I shouldn't post here anymore.
Good answer. Actually a 10-line blurb on the core operations of Minikanren and some output from programs would be pretty useless. It would only serve to confuse the reader because names like "fresh", "conde", "numbero" etc aren't very descriptive (you can argue whether they're good names at all).
Maybe having a little blurb about the meta-circular interpreter might be useful as a way of "exciting" the audience by presenting a cool result.
Glibly: Not every subject is amenable to a 20 second summary animation with pictures of rich people on computers in cafes in the background.
More seriously: They explain it, they link to a whole book, and they link to a video with that crazy bidirectional interpreter. What else do you actually think would help? It's very unusual looking Scheme code, it's not going to be readable to the average viewer. I have written Lisp for over a decade and the code was nonsense to me until I read the book.
I haven't had the chance to build anything that uses them. But having become familiar, I think there are lots of potential applications. I always thought that Prolog was strange. Logic programming makes a lot more sense as a library IMO.
Minikanren (and core.logic) is very fun to explore. It's such a brain bender and it seems well suited for some PL things. E.g. say you have some custom interpreter. If you implement that interpreter in Minikanren, you can run it backwards and answer the question "what program, if any, evaluates to this result". To generate a quine, you make both the input and the output of the interpreter the same logic var, and have Minikanren solve for it.
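That "run it backwards" trick is easier to see with something smaller than a full interpreter. Here's a rough Python sketch of a microKanren-style core (an illustrative toy only; real implementations use lazy interleaving streams, which this eager-list version omits) that asks which pairs of lists concatenate to [1, 2, 3]:

```python
# A tiny microKanren-flavoured core, sketched in Python rather than the
# original Scheme. eq/conj/disj/appendo follow the usual naming
# conventions, but this is a toy, not the real implementation.

class Var:
    """A logic variable, distinguished only by identity."""

def fresh():
    return Var()

def walk(term, subst):
    # Chase variable bindings until we hit an unbound var or a value.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if a is b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

# A goal maps a substitution to a list of substitutions (success states).
def eq(a, b):
    def goal(subst):
        s2 = unify(a, b, subst)
        return [s2] if s2 is not None else []
    return goal

def conj(g1, g2):
    return lambda s: [s2 for s1 in g1(s) for s2 in g2(s1)]

def disj(g1, g2):
    return lambda s: g1(s) + g2(s)

# Lists encoded as ("cons", head, tail), terminated by "nil".
def appendo(l, r, out):
    def goal(subst):
        h, t, rest = fresh(), fresh(), fresh()
        return disj(
            conj(eq(l, "nil"), eq(r, out)),
            conj(eq(l, ("cons", h, t)),
                 conj(eq(out, ("cons", h, rest)),
                      # Delay the recursive call so goal construction
                      # terminates (the poor man's eta-delay).
                      lambda s: appendo(t, r, rest)(s))))(subst)
    return goal

def reify(term, subst):
    term = walk(term, subst)
    if isinstance(term, tuple):
        return tuple(reify(x, subst) for x in term)
    return term

# Run append "backwards": which X ++ Y == [1, 2, 3]?
X, Y = fresh(), fresh()
onetwothree = ("cons", 1, ("cons", 2, ("cons", 3, "nil")))
solutions = [(reify(X, s), reify(Y, s)) for s in appendo(X, Y, onetwothree)({})]
print(len(solutions))  # 4 splits of a 3-element list
```

The same machinery, pointed at a relational interpreter instead of appendo, is what makes the quine trick work.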
On the other hand, I was unable to find a use for it in any real-world problems (in my case: scheduling and portfolio optimization). I always use a linear constraint solver instead.
I've been looking at the Clojure implementation, core.logic, but can never figure out what to use this paradigm for. Can anybody name the cases where logic programming really shines?
It allows you to specify complex rules for workflows: who may sign off on which patches, etc. Those rules are written in Prolog, but I guess they might as well be written in some other logic language.
The abstract doesn't say what "declarative, logic-based specification" is used in the implementation, but I know that two of the authors do a lot of work in Datalog, so it's Datalog.
Sure, I can name one I've prototyped (but never gotten to deploy in prod) in industry.
I proposed an embedded μKanren in an authorization framework for API endpoints. It relied on a system like JWT (or, preferably, something less broken like Macaroons) to provide a series of "auth grants" on the request, auth grants being parameterized lines like:
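(The real grant syntax is part of what I can't share; hypothetically, picture parameterized predicates along these lines:)

```
allow(service="billing", action="read",  resource="invoice/*")
allow(service="billing", action="write", resource="invoice/{owner}")
```

i.e. each grant is a small term that can later unify against the policy annotations on an endpoint.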
The system would then perform a kind of logical unification with the request, the cryptographically validated request token, and the API policy. When a request comes in, the request token is merged with an "all is forbidden" default set (which is not a literal hash table; it's a policy annotation for the map that contains them). Thus it's very easy to create arbitrarily complex security policies, and so long as they're logically consistent, the interpreter can tell you so (and if a request fails, it can tell you why it doesn't have access).
What was also interesting is that you could unify the API endpoint annotations with an empty set of grants, and it would produce a series of example permissions that would be valid (or you could merge them with a user's insufficient grants, and it could tell you what access they were missing). This worked well for internal documentation, and I had hoped it would also be applicable to internal tooling, to explain to an operator what access they needed to request in order to use a tool (and even where to request it).
If you're curious, for most requests the time complexity and memory costs were very modest (approaching but not passing 1ms in the worst case I saw, which for API endpoints written in Python is usually dwarfed by garbage like JSON serialization).
IMO it sits at a much higher level of programming. You use logic programming to explore ~infinite program states, all in extremely concise (nearly fully declarative, of course) ways.
In Bratko's Prolog book you quickly get to do graph processing; some algorithms are 4 clauses long and almost word for word what you have in mind. It's mind-boggling to me.
You also have bidirectional reasoning: you don't have to write a function A => B and then another one for B => A. It's even more telling with multivariate functions. I knew a guy who needed something similar for a large application: he needed a way to reason backwards from incomplete parameters.
I would look at μKanren, the smallest miniKanren cousin. I like Bodil Stokke's introduction [1], but if you are not afraid of Lisp, you can read/re-implement the original(?) paper [2] :)
Is it basically a bunch of combinators for depth-first search?
So if you think "I kinda want to brute-force a solution for these hard-coded constraints", you might instead want to create a simple logic-programming environment first :-)
Last time I played with these, I used them to implement a toy cryptographic protocol verification system for a term project. It barely worked :D
AFAIK, type-checkers often use some sort of logic programming at their core.
It is useful for problems where the solution is difficult to find: any kind of problem with many conflicting constraints, and maybe an optimisation condition. Such problems are very different from most programming, where there tends to be an obvious way to compute what you want (although of course you can always optimise it further).
This sounds like a layout engine for a GUI toolkit. If you have a bunch of widgets with optional min and max widths and other constraints and you put them all in the same row, the layout engine needs to compute the real size of each widget (also constrained by the width of the parent widget, which is also dynamic and resolved by the layout engine save for the top level window widget) so it can be rendered.
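As a toy illustration of that row-layout problem (a hand-rolled sketch, not a real layout engine or solver; the function name and the (min, max) representation are made up here): give each widget its minimum first, then hand out the leftover parent width evenly, clamping at each widget's maximum.

```python
def layout_row(parent_width, constraints):
    """constraints: list of (min_w, max_w) per widget.
    Start every widget at its minimum width, then distribute the
    remaining space in rounds, clamping at each widget's maximum.
    If the row is over-constrained, minimums win."""
    widths = [mn for mn, mx in constraints]
    remaining = parent_width - sum(widths)
    growable = [i for i, (mn, mx) in enumerate(constraints) if mx > mn]
    while remaining > 0 and growable:
        share = max(1, remaining // len(growable))
        for i in list(growable):
            mx = constraints[i][1]
            grow = min(share, mx - widths[i], remaining)
            widths[i] += grow
            remaining -= grow
            if widths[i] >= mx:
                growable.remove(i)  # this widget is maxed out
            if remaining == 0:
                break
    return widths
```

A real engine also handles preferred sizes, stretch weights, and over-constrained rows, which is where a proper constraint solver starts to pay off.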
You can use standard constraint solvers for these kinds of problems. And although you can use miniKanren in such an application, you are missing out on a lot of its power if you do.
Examples of where such systems shine are program generation, theorem proving, knowledge databases, program verification, and other applications where the search space is huge but the rules are relatively simple.
Ok, thanks for answering. I guess what I'm really looking for are some practical use cases that can serve as an example, otherwise it will never cross my mind to use logic programming when I need to decide on technologies.
Languages of all kinds of paradigms are Turing complete. But we still use different languages for different purposes, because some programs are more convenient to write in a certain paradigm that maps well to their problem domain.
In Prolog for example, you can easily define concatenation of cons-lists as
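(something like the usual two-clause definition):

```prolog
concatenate(nil, L, L).
concatenate(cons(H, T), L, cons(H, R)) :- concatenate(T, L, R).
```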
and then generate all 4 possible solutions for the query concatenate(X, Y, cons(1, cons(2, cons(3, nil)))).
Of course Prolog has built-in lists that'd allow simply writing append(X, Y, [1, 2, 3]), but the ease of defining certain simple algorithms and then running them backwards is what makes logic programming so useful in my opinion.
If SQL can express an equivalent program equally succinctly, I'd very much like to see how.
I took a class with Daniel Friedman, the creator of Minikanren and author of The [Little|Seasoned|Reasoned] Schemer. One of the homework assignments was to write a typechecker for the simply-typed lambda calculus in minikanren.
(Most of the semester, we were writing interpreters for the lambda calculus, and doing things like rewriting the interpreter in continuation-passing-style (so that every recursive call was a tail call).)
Be forewarned: MiniKanren has no "not" operator. (I invested a lot of time teaching myself MiniKanren, working through most of "the book" (The Reasoned Schemer), before I found that out.)