Implementing and using a miniKanren was fun and enlightening. And it helped me appreciate how incredibly optimized and fast SWI-Prolog is for relational/logical programming. If someone knows of a miniKanren/language combination that outperforms SWI-Prolog and has good developer ergonomics for relational/logical programming, I’d love to hear about it.
IIRC SWI-Prolog can be embedded in Python if that's your thing. Interfacing with Scryer Prolog might require a bit more work but ought to be doable, and if they haven't surpassed SWI in some performance metrics yet I'd be surprised.
I'm kind of a lisper but I find it easier to get things done in a proper Prolog than the kanrens.
It hasn't been merged into the main branch yet, but you can embed Scryer in Python now[1], and it's quite enjoyable. I've also embedded in Java, Clojure, and elisp[2] :)
The Clojure one is bundled up into a library called libscryer-clj, which I haven't released yet, but it operates the same way as libpython-clj[3], libjulia-clj[4], and libapl-clj[5].
As an aside, Scryer is hella ergonomic. clpz+reif is an amazing and amazingly powerful combination, and Scryer's DCG philosophy over double-quoted strings is chef's kiss -- it really delivers on the original mission of Prolog. Ediprolog[6] is a fantastic REPL for Emacs.
And honestly if you haven't seen Markus Triska's work[7], by God you are missing out on one of the true joys of life.
Yeah, I really like Scryer, it's been my go-to Prolog for quite some time.
You wouldn't know a way to get it working under Termux on Android? I tried to build it a while back but got inscrutable compiler errors and didn't follow through. Maybe building Trealla would be easier, since it's implemented in C.
> if they haven't surpassed SWI in some performance metrics yet I'd be surprised
I was/am also anticipating performance gains from Scryer. Which is why I made a point to request up to date Scryer benchmarks in the SWI forums. Still, for free/open Prolog, SWI-Prolog is hard to beat: https://swi-prolog.discourse.group/t/porting-the-swi-prolog-...
I've come to appreciate, over the past 2 years of heavy Prolog use, that all coding should (eventually) be done in Prolog.
It's one of few languages that is simultaneously a standalone logical formalism, and a standalone representation of computation. (With caveats and exceptions, I know). So a Prolog program can stand in as a document of all facts, rules and relations that a person/organization understands/declares to be true. Even if AI writes code for us, we should expect to have it presented and manipulated as a logical formalism.
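To make that concrete, even a trivial program reads as a set of declared facts and rules (a minimal sketch; names are illustrative):

    % Facts and rules as a document of what we hold to be true.
    parent(tom, ann).
    parent(ann, joe).

    % G is a grandparent of C if G is a parent of a parent of C.
    grandparent(G, C) :- parent(G, P), parent(P, C).

The same text is simultaneously a logical theory and an executable program: ?- grandparent(tom, C). answers C = joe.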
Now if someone cares to argue that some other language/compiler is better at generating more performant code on certain architectures, then that person can declare their arguments in a logical formalism (Prolog) and we can use Prolog to translate between language representations, compile, optimize, etc.
Recently, there was an HN post[0] of a paper that makes a case against pure logic languages in favor of "functional logic" ones, which they exhibit with Curry[1]. The setup argument is that Prolog specifies depth-first backtracking, which strongly limits it relative to full SLD resolution, causing fatally sharp edges in real-world usage.
Being fairly naive to the paradigm, my interpretation is that writing real Prolog programs involves carefully thinking about and controlling the resolution algorithm, which feels very different from straight knowledge declaration. I believe the cut (!/0) is the go-to example. Is that your experience with Prolog in practice?
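(For concreteness, the classic illustration, sketched minimally:

    % Without the cut, backtracking into the second clause would
    % also (wrongly) return Y even when X >= Y. The cut commits to
    % the first clause: control, not just logic.
    max(X, Y, X) :- X >= Y, !.
    max(_, Y, Y).

Here the program's meaning depends on clause order and the search strategy, not only on the declared logic.)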
The real meat of the paper, however, is in its case that functional logic languages fully embed Prolog with almost 1-to-1 expressivity, while also providing more refined tools for externalizing knowledge about the intended search space of solutions.
Thoughts? How are you using Prolog, logic, or constraint programming? What languages and tooling in this arena do you reach for? What is some of your most hard-earned knowledge? Any lesser-known, but golden, websites, books, or materials you'd like to share?
> What is some of your most hard-earned knowledge?
1. If you find yourself straying too often from coding in relations, and instead coding in instructive steps, you're going to end up with problems.
2. Use DCGs to create a DSL for any high level operations performed on data structures. The bi-directionality of Prolog's clauses means you can use this DSL to generate an audit trail of "commands executed" when Prolog solves a problem for you, but you can also use the audit trail and modify it to execute those commands on other data.
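A minimal sketch of that idea, with illustrative names: a DCG relating stack states to a list of commands can be run forwards to record an audit trail, or backwards to replay an (edited) trail against other data.

    % commands(S0, S) describes command sequences taking state S0 to state S.
    commands(S, S) --> [].
    commands(S0, S) --> [push(X)], commands([X|S0], S).
    commands([X|S0], S) --> [pop(X)], commands(S0, S).

    % Forwards, generating a trail:
    % ?- phrase(commands([], [b,a]), Trail).
    % Trail = [push(a), push(b)] .
    % Backwards, replaying a trail on other data:
    % ?- phrase(commands([z], S), [push(a), pop(a)]).
    % S = [z].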
How do you debug DCGs? I get "false." instead of "syntax error at line 23", which is unacceptable for bigger inputs.
Also DCGs for high level operations? Do you mean "use DCGs to parse strings that contain instructions" or do you parse things other than strings with DCGs? I'm assuming you take the parsed instructions and run them through some kind of interpreter that does the execution and audit trail.
>> How do you debug DCGs? I get "false." instead of "syntax error at line 23", which is unacceptable for bigger inputs.
You need to include exception handling in your DCG rules. For example, in Prolog-like pseudocode:
pink_apples([]) --> [].
pink_apples([A|As]) --> [A], { red_apple(A) -> throw(error(type_error(pink_apple,red_apple),_)) ; true }, pink_apples(As).
% Raises type error:
ERROR: Type error: `pink_apple' expected, found `red_apple' (an atom)
ERROR: In:
ERROR: [12] throw(error(type_error(pink_apple,red_apple),_10070))
When called from a source file, the error output will list the line in the source file where the exception was raised. There are more tools to debug the error besides this.
> How do you debug DCGs? I get "false." instead of "syntax error at line 23", which is unacceptable for bigger inputs.
I also sympathize. "false" as the default failure mode is a challenge with Prolog. Most Prologs I've used have good debugging/stepping features (see the spy and trace predicates); logical debugging of pure monotonic Prolog can often help (as explained by Markus Triska); and you can easily write (or use existing) meta-predicates that assert a called predicate must not fail, throwing an exception otherwise. For example: here the ./ is supposed to look like a checkmark, so `./ true.` is true and `./ false.` throws an exception.
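A minimal sketch of such a meta-predicate (the operator name and priority are illustrative and might clash in some systems):

    :- op(900, fy, './').

    % ./ Goal succeeds iff Goal does; if Goal fails, throw instead.
    './'(Goal) :-
        (   call(Goal)
        ->  true
        ;   throw(error(goal_failed(Goal), _))
        ).

    % ?- ./ member(b, [a,b,c]).   % succeeds
    % ?- ./ member(z, [a,b,c]).   % throws goal_failed(...)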
I sympathize. I nearly dropped Prolog for this reason until I learned about term_expansion/2 and goal_expansion/2.
If you consider what Prolog is doing incorrect, _make it_ incorrect.
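A minimal sketch of that idea using goal_expansion/2 (the must/1 wrapper is illustrative): goals you declare must succeed are rewritten at load time so that failure throws rather than quietly returning false.

    % Rewrite must(G) at load time into a version that throws on failure.
    goal_expansion(must(G),
                   (   call(G)
                   ->  true
                   ;   throw(error(goal_failed(G), _))
                   )).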
DCGs can be used to convert any data structure to a sequence. Actually, they are capable of any graph-to-graph transformation, so they could produce a sequence of commands.
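For instance, a minimal data-structure-to-sequence sketch:

    % Flatten a binary tree into the sequence of its leaves.
    tree_leaves(leaf(X)) --> [X].
    tree_leaves(node(L, R)) --> tree_leaves(L), tree_leaves(R).

    % ?- phrase(tree_leaves(node(leaf(a), node(leaf(b), leaf(c)))), Ls).
    % Ls = [a, b, c].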
The oft-cited Markus Triska has some great work on this.
He mentions SICStus, GNU Prolog, XSB, Ciao, Scryer, Trealla, and Tau, but does not mention SWI-Prolog at all. I remember using SWI-Prolog back in the day; did it somehow fall out of favor, or is there some animosity between implementations and Markus is just not in the SWI-Prolog camp?
My 2 cents, it's hard for me to believe there is any animosity there.
You don't have to take my word for it, you can just see what he says and does.
But I've seen nearly every one of his videos and blog posts and read through many previous comments on HN, and there's not a single disparaging remark in any of them. In fact he even makes his philosophy clear in this post [1]:
Things that I, at least, always do without even thinking about it. For instance, when I mention Scryer Prolog in Internet discussions, I always try to mention at least one other Prolog system too in a flattering way, out of respect and admiration for other systems and also to encourage more cooperation between systems.
What I gather is Markus strongly advocates for ISO-compliant Prolog implementations and especially open source ones, because that's where his heart is right now. But one thing to remember is that Markus has also contributed tens of thousands of lines of code to SWI (check out the author tag in many of the SWI libraries), has co-authored many papers with Jan Wielemaker, and there is plenty of professional respect there. This is like trying to understand the nuanced reasons why Stephen Hawking disagrees with Einstein (or whoever) on something. Probably in agreement about 99.999% of most things but in strong disagreement on a 0.001% that probably doesn't matter much to you and me, about whether or not black holes are Humperdink-Blazensort compliant or... waves hands stuff.
As you've seen from the comments though, SWI is an extremely successful, established system. Nearly every book and example you will read uses SWI. It has great libraries, great IDEs, FFI, embedding, support, documentation, all kinds of great stuff -- SWI is already REALLY successful, and for very good reasons!
So my guess is he's just trying to give a voice to the little guys who are up and coming and are in line philosophically with his beliefs about the language and open source. I personally would describe that as "support"; I don't think I would use the word "animosity".
> This is like trying to understand the nuanced reasons why Stephen Hawking disagrees with Einstein (or whoever) on something.
I like the analogy!
> So my guess is he's just trying to give a voice to the little guys who are up and coming and are in line philosophically with his beliefs about the language and open source.
That makes perfect sense. Thanks for answering! It's an interesting insight into the world of "Prologs" for a complete outsider. I had only used SWI-Prolog at university and remember it having a variety of modules, web development libraries and other such things, so at least superficially, to a novice, that was pretty impressive.
So first, let's keep in mind that with no execution model, Prolog is still a "syntax" for Horn clauses. It's still a way to document knowledge. Add SLD resolution and we can compute.
The paper (intentionally I presume) orders clauses of a simple predicate to illustrate (cause) a problem in Prolog.
But what I actually find is the more time spent in Prolog, the more natural it is to express things in a way that is clear, logical and performant. As with any language/paradigm, there are a few gotchas to be experienced. But generally speaking, SLD resolution has never once been an obstacle (in the past 2 years) of coding.
The general execution model of Prolog is pretty simple. The lack of functions actually makes meta-programming much clearer and simpler. A term is just data, unless it's stated as a goal. It's only a valid goal if you've already defined its meaning.
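A minimal sketch of that distinction:

    % A term is just data until it is used as a goal.
    ?- G = member(X, [a,b,c]).             % G is only a term; nothing runs.
    G = member(X, [a, b, c]).

    ?- G = member(X, [a,b,c]), call(G).    % now G is a goal.
    G = member(a, [a, b, c]),
    X = a .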
So I'd be concerned that Curry gives up the simplicity of Prolog's execution model, and ease of meta-programming. I struggle with the lack of types in Prolog, but also know I can (at least in theory) use Prolog to solve correctness problems in Prolog code.
I'm currently using SWI-Prolog. Performance is excellent, it has excellent high-level concurrency primitives[0] (when was the last time you pegged all your cores solving a problem?), and many libraries. I might be one of the few people who has committed to using the integrated editor (PceEmacs) despite being a Vim person. PceEmacs is just too good at syntax highlighting and error detection.
At the same time, I'm a huge fan of Markus Triska. His Youtube[1] stuff is mind-expanding (watch all of it, even if you never write Prolog). He has an excellent book online[2]. I admire the way he explains and advances pure monotonic Prolog, and I appreciate the push for ISO conformance and his support for Prologs that do the same (SWI is not on that list).
If you want to learn Prolog, watch all of Markus Triska's videos, read his book, and learn what Prolog could be in a perfect world. Then download SWI-Prolog, and maybe break some rules while getting things done at a blazing speed. Eventually you'll gravitate to what makes sense for you.
The Art of Prolog is a classic "must have". Clause and Effect is a good "hit the ground running" (on page 70 you're into symbolic differentiation via term rewriting).
>> So I'd be concerned that Curry gives up the simplicity of Prolog's execution model, and ease of meta-programming.
That was my concern with the paper listed above also. Functional syntax is to my mind needlessly over-complicated. Types are useful but you can roll your own if you need them, and Prolog makes that easy enough.
>> I'm currently using SWI-Prolog. Performance is excellent, it has excellent high-level concurrency primitives[0] (when was the last time you pegged all your cores solving a problem?), and many libraries. I might be one of the few people who has committed to using the integrated editor (PceEmacs) despite being a Vim person. PceEmacs is just too good at syntax highlighting and error detection.
Hah! Hello fellow PceEmacs >> vim user :D
Markus Triska's stuff is good and he's undeniably an expert in Prolog programming, I mean duh, but he and a couple of others have needlessly caused a rift in the Prolog community (mainly between themselves and everyone else) by being so stroppy about the ISO non-conformance of SWI-Prolog. I hope Markus is reading this. The Prolog community is very small and dwindling and we can't afford such drama. We need more Prolog compilers, yes; ISO conformance is good, yes; but SWI-Prolog is a robust and battle-tested implementation, and it is the ISO standard that should be leaning on SWI's experience of being a real-world Prolog used by real programmers, and not just the other way around.
Still checking every now and then to see if SICStus has been open sourced. I used Prolog daily during my PhD and SICStus had such nice features. E.g. it could raise an exception when no more heap space can be allocated or when a 'call' would not finish within a given time. These features made it much easier to use Prolog in real-world systems (this was a parsing system, and when parsing a very large corpus this was highly preferable over simply crashing the interpreter).
Maybe things have changed, but this wasn't possible with SWI at the time. Even worse, most C extensions would use malloc directly, making it impossible to track allocations done by extensions.
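(For what it's worth, current SWI does ship a time-limited call in library(time); a minimal sketch, where parse/2 stands in for the real workhorse:

    :- use_module(library(time)).

    % Give each sentence a 5-second budget; on timeout, record it
    % and move on instead of crashing the whole corpus run.
    parse_with_budget(Sentence, Result) :-
        catch(call_with_time_limit(5.0, parse(Sentence, Result)),
              time_limit_exceeded,
              Result = timeout).

Whether the heap-exhaustion case is as tractable today, I don't know.)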
>> Still checking every now and then if SICStus has open sourced.
There used to be a free (though proprietary) version of Quintus, the predecessor of SICStus. It might still be around somewhere if you look hard enough.
I figured it would be a good introduction to Prolog, but to date there doesn't seem to be any Prolog interpreter that lets me copy the things in the book to play with them?
Strongly 2nd checking out Markus Triska's work. It's practically poetry.
It's really surprising to see someone at his level, a software reliability researcher no less, retain that level of passion about something. Usually folks with that level of experience are really grumpy and lose the evangelistic flair. I tend to maintain that level of evangelism about things I'm passionate for but no one would ever accuse me of being a "reliability researcher", lol.
In fact, I was/am a hardcore lisp hacker, and never in my life did I think I'd find something that would even come close to lisp for me, because it's FUN! But I'll be damned if Prolog isn't turning out to be even more fun in ways I didn't expect.
In fact the only thing I really miss about lisp at this point is structural editing (paredit and such).
And it's not like it's either/or, you can use both. But learning pure Prolog is really delightful if you follow the breadcrumbs (gold nuggets, honestly) Markus left.
You’re in good company — the most influential AI academic of all time, the kooky grandfather of AI who picked up right where (when!) Turing left off, the man hated by both camps yet somehow in charge of them, agrees with you. I’m talking about Marvin Minsky, of course. See: Logical vs. Analogical (Minsky, 1991) https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...
…the limitations of current machine intelligence largely stem from seeking unified theories or trying to repair the deficiencies of theoretically neat but conceptually impoverished ideological positions.
Our purely numeric connectionist networks are inherently deficient in abilities to reason well; our purely symbolic logical systems are inherently deficient in abilities to represent the all-important heuristic connections between things—the uncertain, approximate, and analogical links that we need for making new hypotheses. The versatility that we need can be found only in larger-scale architectures that can exploit and manage the advantages of several types of representations at the same time.
Then, each can be used to overcome the deficiencies of the others. To accomplish this task, each formally neat type of knowledge representation or inference must be complemented with some scruffier kind of machinery that can embody the heuristic connections between the knowledge itself and what we hope to do with it.
He phrases it backwards here in comparison to what you’re talking about (probably because no one in their right mind would have predicted the feasibility of LLMs), but I think the parallel argument should be clear. Talking about “human reasoning” like Simon & Newell or LeCun & Hinton do in terms of one single paradigm is like talking about “human neurons”. There’s tons of different neuronal architectures at play in our brains, and only through the ad-hoc minimally-centralized combination of all of them do we find success.
Personally, I’m a big booster of the term Unified Artificial Intelligence (UAI) for this paradigm; isn’t it fetch? ;)
UAI seems reasonable, as we need both discrete and non-discrete systems to start talking in a meaningful way.
Was wondering recently - given that a lot can be done with predicate logic, and given that DNA is a sort of grammar, is there anything more powerful than these formalisms in math in general that is actually put to work somewhere, anywhere?
Minsky is right. The "rift" between symbolic and sub-symbolic, or learning and reasoning has only impeded progress. The problem is it's very hard to be an expert on both at once, and it's getting harder and harder as more and more work is done on both.
Since you mentioned Hinton, he has worked hard to entrench the idea that symbolic AI failed. I was listening to a lecture he gave [1] and he went on and On and ON about how symbolic AI was a stupid idea and it was conclusively proved wrong by neural nets. He went on for so long bashing symbolic AI to the ground that at some point I started wondering whether he's deep down worried that there might be a resurgence of it right on time to address the limitations of neural nets with respect to reasoning, before he and his mates have the chance to figure out how to overcome it. Which, well, good luck with that.
You’d be astonished how correct this is. I know several top PhD people well versed in ML who openly admit they know nothing about SQL, which implies they also know very little Prolog, and very likely are ignorant about everything that is grammars and state automata.
Top surprise was when my high school classmate, who went on to win two gold medals at the IMO and has been doing quant mathematics for finance for more than 16 years, openly admitted he knew nothing about grammars and was like ‘is this useful at all…’. I was amazed how this is even possible. But it is - he went the probabilistic and symbolic way, I went the discrete and graph way.
On the other side I’m completely oblivious to what people use complex analysis for, even though I know a little DSP, some electronics, even some nano opto-electronics, and can also explain the Fourier Transform to people. Even though I know what derivatives, nabla and vector fields are, I can’t put them to work for me…
Science is never done in isolation, and the whole LLM thing seems from another planet to many people because it was devised in an ML silo and also an enterprise silo.
I'm also not a cross-disciplinary expert, to be clear. When I say it's hard, it's because I find it hard! My strength is in discrete maths and logic. I can deal with continuous maths because I need to keep abreast with the latest statistical machine learning developments but I don't think I would ever be able to contribute directly to say neural networks research, unless I turned it into a logic-based approach (as has been done in the past). To be perfectly honest, if deep learning didn't happen to be the dominant approach to machine learning, which forces me to pay attention to it, I doubt I would have followed my own advice and looked far beyond my narrow band of expertise.
But, that's why we're supposed to have collaborations, right? I can pair up with an expert on neural nets and we can make something new together that's more than what we can each do on our own. In the process we can learn from each other. That stuff works, I've seen that, too, in practice. I'm working with some roboticists now and I've learned a hell lot about their discipline and hopefully they're learning something about mine. I am convinced that in order to make progress in AI we need broad and wide collaborations, and not just between symbolists and connectionists, but also between computer scientists and biologists, cognitive scientists, whoever has any idea about what we're trying to achieve. After all, a computer scientist can only tell you something about the "artificial" in "artificial intelligence". We study computation, not intelligence. If we're going to create artificial intelligence we need to collaborate with someone who understands what that is.
The hard part is to kick people out of their comfort zone and to convince them that the other experts are also, well, experts, and that they have useful knowledge and skills that can improve your own results. And seen from the other side of the coin, from my point of view, it's very difficult for me, as an early career researcher, to convince anyone that I have useful knowledge and skills and something to contribute. It takes time and you have to make your name somehow otherwise nobody will want to work with you. But that's how academia works.
It's just not a great mechanism to ensure knowledge is shared and reused, unfortunately.
Prolog was a neat exercise, but for practical programming you might want to combine both logical and functional programming. I think 'Curry' does that.
Has anyone tried to mine an LLM's world model via sampling, to extract all relations it believes to be true (like 99%+ certain) into Prolog-like clauses? I think this is a way to achieve reliable world/domain models in a logical sense (non-probabilistic). Probably the brain doesn't do it, but it could be cool anyway. Seems like a good sampler could somehow mine this info by probing statements like "I believe that X is true or false" for all X imaginable, then trying to generate relations for these, etc.
Prolog (and logic programming in general) is much older than you think. In fact, if we take modern functional programming to have been born with John Backus' Turing Award presentation[1], then it even predates it.
Many advancements to functional programming were implemented on top of Prolog! Erlang's early versions were built on top of a Prolog-derived language whose name escapes me. It's the source of Erlang's unfamiliar syntax for more unlearned programmers. It's very much like writing Prolog if you had return values and no cuts or complex terms.
As for penetrating general use, probably not without a major shift in the industry. But it's a very popular language just on the periphery, even to this day.
> Erlang's early versions were built on top of a Prolog-derived language whose name escapes me.
AFAIK, Erlang was originally implemented in Prolog and the original VM was inspired by the Warren Abstract Machine targeted by some Prolog implementations. It was also inspired by PLEX, but PLEX wasn't a Prolog derivative.
Oh, apparently I only had half the story. I remembered reading that it was written in a committed-choice derivative of Prolog for concurrency reasons, but apparently that was just a failed experiment that happened early on.[1]
With the right prompting you can get LLMs to output pretty much anything :)
> What is the airspeed velocity of an unladen Eurasian blue tit?
> I’m not sure. The specific airspeed velocity of an unladen Eurasian blue tit hasn't been studied or widely documented in the same way that birds like swallows have been. It would likely depend on many factors like the bird’s weight, wing shape, and wind conditions. If you’re looking for general information about bird flight or blue tits, I can help with that!
I've played with and seriously used many languages in my career. My experience is that pure functional (done Elm style) is productive and scales well to a larger team. Dynamic stuff like Ruby/Javascript always has more bugs than you think, even with "full" test coverage. I'm not smart enough to make sense of my own Scheme meta-programming when I revisit it months later. I have loads (but dated) experience with Java and it (and peers) are relatively easy to read and maintain.
Prolog is very surprising, because it is homoiconic and immensely powerful in metaprogramming, BUT ... the declarative style and execution model reins in the complexity/readability. A term is just a term. Nothing happens when you create a term. If/when a term is a goal, then you match it with the head of an existing predicate (something you've already coded). So it never gets too messy.
Now, the biggest problem with Prolog is that it's so flexible, you'll perpetually be realizing that you could have coded something much more cleanly. So you do that, have less code, it's nicer, etc. Doing this on a large team might not scale without effort.
>> I'm not smart enough to make sense of my own Scheme meta-programming when I revisit it months later.
Then be smart enough to comment your code :P
>> Prolog is very surprising, because it is homoiconic and immensely powerful in metaprogramming, BUT ... the declarative style and execution model reins in the complexity/readability.
Iiiish? This is from one of my yesterday's commit messages:
* New look and look-around actions in the Basic Sim Environment allow
for looking ahead in eight direction. This does get a liiittle bit
complicated, or rather there's the usual millefeuille of abstraction
layers on top of abstraction layers all the way down pou that mou ta
spasei kapoia stigmi but OK.
The Greek-lish interjection says approximately "that is going to bite me in the ass down the line". Because it will. The better I get with Prolog the more I worry nobody will be able to maintain my code but myself, and my future self will hate me with deep, burning passion.
>> The Greek-lish interjection says approximately "that is going to bite me in the ass down the line". Because it will. The better I get with Prolog the more I worry nobody will be able to maintain my code but myself, and my future self will hate me with deep, burning passion.
Isn't that similar to the GP's "I'm not smart enough to make sense of my own Scheme meta-programming when I revisit it months later."? Commenting your Prolog won't help?
It's similar. Commenting my code (which I do, almost religiously) helps. It still taxes my brain to follow it.
There's a certain kind of abstraction that is easier to write than read and understand. My current project is full of it, not least because a set of low-level predicates performing "primitive" operations on a foundational data structure are automatically generated, and then everything else is built on top of them with a concentrated effort to avoid code duplication.

There's a bunch of actions that move an agent around a map, or look from the agent's position around the map, in discrete directions (currently). The easiest way to implement these would be to implement one, say "step_north" or "look_north", and then copy/paste it with small changes however many times I need. Instead I opted for parameterised "step" and "look" actions that I instantiate as needed. It's kind of the obvious thing to do, but starting from a "step" action, in my zeal to DRY (or NRM, I guess) I ended up creating a chain of predicates six or seven links deep that makes it harder to trace the execution of a top-level (step or look) action, just because I have to keep in mind each link in the chain and what exactly it does; and that's not obvious, because some links compose new predicates from their arguments, so I always need to have a clear model of how that happens in my mind. I could keep the chain shorter by using a higher level of abstraction, but that would just make it even harder to debug.
Prolog makes it easy, even pleasant, to program like that, but it doesn't make it any easier to read and maintain that kind of code than any other language as far as I can tell.
Maybe the solution is to not do any metaprogramming and just copy/pasta, DRY be damned, and get it over with. But I find that this, too, makes it harder to debug because after a while all the instances of the copy/pasted code blur together into one smudgy fudge that has a downright hypnotic effect.
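(For the curious, the parameterised-action idea reduced to a minimal sketch, with illustrative names:

    % One table of directions instead of step_north, step_south, ...
    direction(north,  0, -1).
    direction(south,  0,  1).
    direction(east,   1,  0).
    direction(west,  -1,  0).

    % One generic step, instantiated per direction as needed.
    step(Dir, X0/Y0, X/Y) :-
        direction(Dir, DX, DY),
        X is X0 + DX,
        Y is Y0 + DY.

The trouble described above begins when such definitions are themselves composed by further layers of predicates.)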
I found it completely impenetrable in college for all but the simplest problems and I tried to re-read the textbook recently and I didn’t do much better.
Which textbook was that? I think that Bratko ("Prolog Programming for Artificial Intelligence") is probably the most friendly to beginning programmers with a background in more mainstream languages.
As someone that went through a degree where Prolog and LP were cherished, I would say yes; however, LP might be even weirder to get into than FP.
Many folks on our degree couldn't be happier when they didn't have to see Prolog ever again, while me and others went on to take our chances at the national LP challenge across universities.
Tarski's World was a good way back then to dive into what LP is all about, without being programming language specific.
Out of curiosity, in which university did you study for your degree?
I did my MSc at Sussex Uni, which was one of the centers in the UK where logic programming was developed, but when I got there in 2014 there was no trace of that history. From conversations with professors and past students it seems that Sussex tried to ram Prolog hard down students' throats and that caused a furious backlash, so that nobody wanted to hear about it anymore after the '90s to early 2000s.
> It's one of few languages that is simultaneously a standalone logical formalism, and a standalone representation of computation. (With caveats and exceptions, I know).
Would you be able to formulate all those "caveats and exceptions" in Prolog?
For example: the logical core of Prolog, along with its resolution model (for the logical part), is nondeterministic (something can have no, one, or many solutions), but only one solution is explored at a time. So it's a "meta-logical" thing to express something like "the set of solutions for ...". Given that the core of Prolog is Turing complete, you can still get Prolog to compute anything; you might just not have a nice way of declaring it in pure Prolog.
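A minimal sketch of the distinction:

    % member/2 is a relation; solutions are explored one at a time.
    ?- member(X, [a,b,a]).
    X = a ;
    X = b ;
    X = a.

    % "The set (here, bag) of solutions" needs a meta-logical builtin.
    ?- findall(X, member(X, [a,b,a]), Xs).
    Xs = [a, b, a].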
Prolog has an interesting history of people discovering ways to express things that are simple, powerful and elegant. And yet despite the simplicity, these ways of expressing things were not immediately evident. DCGs are a prime example.
I'm curious what your thoughts are on this paper [0]Lessons from the evolution of the Batfish configuration analysis tool. Initially they used Datalog but have since migrated to binary decision diagrams for performance reasons, the ability to more accurately model semantics, and more deterministic execution.
Not GP, but if Prolog has been around for 52 years, has many implementations old (SICStus/Quintus, SWI, yap, ciao, xsb, eclipse, many more) and new (Scryer, Quantum, Strawberry, Tau, Trealla, ...) then it follows it's not doing so bad after all.
I have to say the "expert system" quote comes from lack of insight and perhaps outdated CompSci lecture notes when even at the height of "expert systems" around 1990 or before, Prolog's backward-chaining was seen as the opposite of an expert system's forward-chaining.
It may have been the case that what was understood as an "AI language" was anathema following the AI crash around 1988/89 (the original "AI winter").
What also may have contributed to the impression of Prolog becoming less used around 2000-2012 or so is the effect of W3C's/TBL's "Semantic Web" and description logics efforts capturing a portion of the academic and commercial attention in graph and logic databases, as well as in other applied logic domains such as formal verification (in a quite direct sense considering research grants for OWL, etc.)
Early systems were largely closed source, proprietary, licensed commercial systems. In some ways Prolog is 52 years old and in some ways it's 5 years old.
"Prolog" is like Lisp, a wide array of superficially similar languages that actually are quite diverse.
Mind you, in that sense, Java and C# are more or less the same language, which has Prolog programmers nodding their heads and Java and C# developers screaming.
Nope. Prolog has been an ISO-standardized language since 1995, and the spec was updated in 2012. Where older "legacy" Prolog implementations such as SWI, YAP, and SICStus deviate from the standard is generally pretty well-known to Prolog practitioners, and the convener of ISO 13211 can actually verify claims of ISO conformance; for example, [1] is a link to the ISO certification of Quantum Prolog (the web app at [2]).
It's true however that people are quick to conflate Prolog with constraint-logic programming libs, "expert systems" (RETE-style forward-chaining systems and other "rule engines"), or random "functional-logic" programming languages. The misunderstanding of Prolog and logic by Lisp programmers has been ongoing since the 1980s, probably because at one point Prolog and Lisp were seen as competing "languages for AI" for some reason, even though they have very little in common.
Also, on the Lisp side, there's Common Lisp with ANSI (and maybe closer-mop from Quicklisp) standardizing the core, and then Scheme is its own universe of compilers with official (SRFI) and their own extensions and libraries.
In theory, Prolog is the king of languages. Simultaneously a logical formalism and (with a resolution system) a language for computation, AND the ultimate meta-programming language, as it's homoiconic but only goals are evaluated (there is no eager/lazy evaluation fuss - a term is just a term), and goals can only succeed (and have any consequence) if there is already a matching clause.
In practice, there are some very performant and maintained implementations with small but helpful communities.
Also in practice. With all of this power, it's clear that anything could be done (well) in Prolog, but it's not always clear what that way might be. DCGs are an example of a beautiful, elegant, simple, powerful way of building parsers (or state machines) that was not immediately evident to the Prolog community for some time. The perpetual conundrum as a user will be "I could do it this way, but there are certainly better ways of doing this, and I have many avenues I could explore, and I don't know which might be fruitful in what timeline".
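To give a flavour of that elegance, a minimal parsing sketch (using portable character-code comparisons):

    % A nonempty sequence of decimal digit codes.
    digits([D|Ds]) --> digit(D), digits(Ds).
    digits([D])    --> digit(D).
    digit(D)       --> [D], { D >= 0'0, D =< 0'9 }.

    % ?- phrase(digits(Ds), [0'1,0'2,0'3]).
    % Ds = [49, 50, 51].   % the codes of '1', '2', '3'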
I've been using Prolog daily for the past 1.5 years. I've also implemented and used a Kanren in Elm, and there is simply a world of practical difference.