 Oct 29, 2018  Introduction to Series: Declarative Programming in Healthcare
 Nov 04, 2018  A Practo-Theoretical Introduction to Logic Programming
This post is part of a larger series of posts that examines the use of declarative logic programming in implementing a healthcare-specific risk score called the HCC Risk Score. The scope of this series runs through such topics as:
 a healthcare-intensive dissection of SAS code and its translation into Python and PyDatalog
 a generic introduction to Logic Programming with Prolog and SQL (this post)
 discussions on the role of declarative programming in a technical organization
 the difference between forward-chaining and backward-chaining as an implementation strategy in logical/declarative technologies
 the choice of a forward-chaining embedded language called CHR for reimplementing our code from PyDatalog
This post has the following characteristics:
 It is long. Go ahead and scroll down and see all the effort we put into it.
 It is written to convince imperative programmers that the logic paradigm is worth both understanding and using in their day-to-day activities.
 It is also a tutorial that tries to bring something new to the dozens of existing Prolog blog posts rather than a rehash of the exact same ancestor(x,y), parent(x) example.
 It strikes a balance between the theoretical and the practical, and will err on the side of explanation rather than rigor (there might be some things on the academic side that are not quite right).
While this post will eventually introduce Prolog as an example of a Logic Programming language and compare its basic features to SQL and databases, our hope is not to claim that Prolog is all that there is in the logic programming world. We hope to expose the wider world of logic technologies as worthy of being interrelated with each other. If you ever asked yourself "What do an enterprise rules engine, a SAT solver, and Prolog have to do with each other?" well then this is the post to whet your appetite. We are not sure we know the exact answer to this question, the image at the top of the post notwithstanding.
At the end of this post, we hope the following table makes sense. This table shows a surprising correspondence between two different technologies. By the end of this post, we hope it ceases being surprising (as both technologies are built upon first-order predicate calculus).
| Prolog/Datalog | SQL |
| --- | --- |
| Facts sharing same predicate name | Rows in a table |
| Arity of predicate | Number of columns in a table |
| Predicate rule | View |
| Conjunction between two goal clauses sharing a common variable | Join |
| Goal clause submitted at top of REPL | Query |
| Rule using recursion | Recursive CTE |
| Rules with multiple heads | Union |
About you, the ‘typical’ imperative programmer
Many programmers are imperative programmers, whether their language is Java, JavaScript, C, C++, Python, or any of a dozen other variants. Many other programmers are functional programmers (Haskell, F#, Scala, Clojure, etc.) and have a spiritual kinship with logic programming via the declarative mindset to modelling programs. Many other programmers use SQL only, and unbeknownst to them, are tantalizingly close to the logical paradigm.
Many 'general purpose' programmers don't view SQL as programming for its perceived lack of Turing-completeness*, lack of complex data structures, lack of variables, and lack of classic staples like loops and if-statements. We encourage these programmers to be a bit more ecumenical. * In fact SQL is Turing-complete
Most programmers might be rightfully puzzled by our advocacy for logic programming since the word 'logic' is something already baked into any programming. We daily use conditionals in if-statements and to terminate loops. We know about booleans and how to use AND, NOT and OR to compose our conditionals. Is this not logic? Aren't we already familiar enough with the fact that our data and processing translates down through layers of abstraction to the ultimate in logic: voltage differences representing truth and not-truth?
Well, yes…
One problem in the introduction of logic programming to the wider programmer community is disentangling the notion of Logic (with a capital ‘L’) from its less formal meanings, and in doing so acknowledging that a word like ‘logic’ has many different definitions to many different audiences, from the lay person, through the programmer, to the mathematician and the logician.
Strangely unheralded outside of academic circles, most of these logical systems are not new, having been discovered/invented in the mid-1800s through the early 1900s and into our modern era. They undergird most mathematics and therefore science, and in their formalization sought to explain and generate everything from how we think to the totality of knowable knowledge. Luminaries in this early effort include Frege, George Boole, De Morgan, Leibniz, C.S. Peirce, Hilbert, Russell, Peano, and Gödel.
Our aim here is to surface this all-too-familiar conflation, and to enumerate and judiciously survey some of the formal Logic systems. But as we do so, we hope to be practical by explaining how we daily use these systems in more popular technologies and programming languages, regardless of paradigm. Later on in this post we'll show how a simple Prolog program, equipped with facts and rules, is essentially the same thing as a SQL database equipped with tables and queries.
Visuals Help Us Understand
A visual like the one above has been woefully absent from the Internet. It captures some basic notions of the interrelationships of these formal systems with each other. For the purposes of this discussion, it is worth focusing on the center of this diagram, as it captures the most 'practicality.'
If these logical systems are the seed to mathematics, then these two formal systems are its tasty healthy germ from which most of our intuitive usage of logic arises, especially in programming. Here we see that propositional calculus is subsumed within predicate calculus, but not vice versa.
Another problem with learning these Logical systems is that they go by alternate names, dependent on the source. Here are some of their other names:
| Propositional Calculus | Predicate Calculus |
| --- | --- |
| Propositional Logic | Predicate Logic |
| Zeroth-Order Logic | First-Order Logic |
| Zeroth-Order Calculus | First-Order Predicate Calculus |
| Sentential Logic (from the root word for 'sentence') | Relational Calculus (are you thinking Relational Algebra and SQL? Well, you should be) |
| Sentential Calculus | |
| Statement Logic | |
Propositional Calculus
Let's start with a bold statement that is more-or-less correct, and state that Propositional Calculus is kinda the same thing as Boolean Algebra. This captures the essence of what is being represented and calculated.
This formal system, this zeroth-order logic, has a syntax and rules for generating well-formed sentences out of this syntax. This is not a new idea to a programmer, as all programming languages, being formal systems, start with a base set of symbols as syntax and the rules by which this syntax may be textually composed so that it may be well-formed.
If a program is well-formed then it can be compiled or interpreted – i.e. it can be understood. If the program is not well-formed (intermixing tabs with spaces in Python, missing a semicolon in Java, etc.) it will not compile – it is not a well-formed formula/program.
A propositional calculus has the following language constructs:

| Symbol | Meaning (if you want to ascribe meaning!) |
| --- | --- |
| ¬ | Not (or negation) |
| ∧ | And |
| ∨ | Or |
| p, q, r, ... | A proposition, i.e. a value that has a truth value. These symbols are italicized because they can be introduced by the 'programmer' or logician and are countably infinite. They are part of the formal syntax but you can use whatever symbol you want. |
| → | Material conditional, or "if, then" (e.g. P→Q means "if P then Q"), equivalent (in a truth-table sense) to ¬(P∧¬Q) |
Here is a propositional formula (or program):

    ¬p ∨ q ∨ (r ∧ ¬q) ∨ (r ∧ q)

With p standing for "it is raining", q for "there are twenty peanuts in the cookie jar", and r for "it is pouring", we could translate this into English as: "it is not raining, or there are twenty peanuts in the cookie jar, or it is pouring AND there are not twenty peanuts in my cookie jar, or it is pouring AND there are twenty peanuts in my cookie jar". This is well-formed and has a truth value under some model of what we know. It is up to you to decide if it is meaningful.
It is at this point, we must make an apology, because even here, in this simplest and most basic of all formal logics, there is a wealth of nuance and explanation that could be explored – the above table is just one example of "a" propositional calculus. For instance, in our last row, we introduced a syntactical form called the 'material conditional' that exists in most forms of propositional calculi, but does not have to exist in the language. This seems like a cheat, since our promise was to explain "the" propositional calculus. In fact, it is even possible to remove our intuitive symbols for "and"s and "or"s and rely on just one, the Sheffer stroke ↑ (also known as NAND) – it would be an equally valid propositional calculus.
The challenges here are numerous:
 We are at the lowest level of all axiomatic formalisms. Science and programming depend upon math, which in turn depends upon logic, which is itself anchored in propositions of truth and non-truth. We are so low in our understanding that it is difficult to decouple the notions of 'meaning' and 'syntax' as we consume our own tails.
 Because of this attempt to capture intuitive notions of logic in symbols, we can't help but prefer certain propositional calculi that more accurately map to our notions of meaning, in the logical sense. However, our desire is to decouple meaning (semantics) from syntax. The former would devalue a system that used the Sheffer stroke ↑, but the latter would love it.
 Because we are interested in providing a foundational syntax, we have to be careful about language versus metalanguage (modus ponens and ⊢).
 There are many syntaxes for all these combinations. Some are simply matters of substituting different characters (like using the "!" symbol or the word "not" for ¬, or "&" or the word "and" for ∧). Some include symbols that could be crafted from others, and are therefore not fundamental.
The point here is that we are only trying to introduce what a propositional calculus is, not trying to explain it completely. The takeaway is that we have created a syntactical language that has some basis in our sense of what a logical statement is. This syntactic language features a base set of symbols, a base set of propositions, and a base set of rules about how to combine these together to create new, well-formed sentences/formulae/programs.
From a practical standpoint, the following Python code shows a propositional formula in its conditional:
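A minimal sketch of such a conditional (the variable names are assumptions matching the peanuts example above):

```python
# A propositional formula in a Python conditional: every variable is a
# fully instantiated truth value (a proposition), composed with not/and/or.
is_raining = False        # p: "it is raining"
twenty_peanuts = True     # q: "there are twenty peanuts in the cookie jar"
is_pouring = True         # r: "it is pouring"

# (not p) or q or (r and not q) or (r and q)
formula = (
    (not is_raining)
    or twenty_peanuts
    or (is_pouring and not twenty_peanuts)
    or (is_pouring and twenty_peanuts)
)

print(formula)
```

Note that there are no quantifiers and no objects here – only fully instantiated truth values combined with connectives.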
The problem discovered by the metamathematicians, philosophers and logicians in the 19th century was that there are many statements that cannot be expressed by such a logical formalism. For that we need to move on to Predicate Calculus.
Predicate Calculus
Predicate calculus is just propositional calculus with the addition of two more things:
 Predicates: P(A)
 Quantifiers: ∀ and ∃
If you have ever seen symbols like ∀ (meaning "for all") and ∃ (meaning "there exists") then this is an indication that you are in, at least, a first-order predicate calculus. These symbols are called quantifiers. Furthermore, if you've ever seen a statement like P(A) or ∀A.P(A) then you know you are in, at least, a first-order predicate calculus (but maybe higher).
By introducing predicates (also known as relations, also known as tables), we have done a couple of things.
 First, we have created a reusable means to make statements about things (objects) that are not inherently logical, like people or cars or diagnoses, or whatever you want – each exists in a space called a "domain of discourse". These non-logical data are lifted into the logical world by being inserted into a predicate statement. The predicate P is a template that may be instantiated into propositions. So while the predicate P might stand for "is an employee," P(daniel) will stand for "Daniel is an employee". While "Daniel" alone has no logical meaning, the statement "Daniel is an employee" certainly does – it is either true or not true, or under a closed-world assumption it is in the database or it is not.
 Second, by employing quantifiers, we have allowed a means to wrangle the infinite. In the lower-level propositional calculus, the propositions (the p, q, r statements shown above) are fully instantiated truth values and don't speak to an entire class of objects. So to express a statement about integers (non-logical objects) or people (more non-logical objects) that applied to all of them, we would have to create an infinite (or unwieldy) propositional calculus formula. In predicate calculus it is enough to say ∀H.P(H), which could be interpreted as saying: "For all humans (H), H has a parent." A propositional calculus would have to enumerate all humans in propositions that spoke just to them: "daniel has a parent" ∧ "methuselah has a parent" ∧ "adam has a parent" ∧ …∞
Predicates can have different arities, which is just another way of saying that they can take multiple arguments. So while P(A) has an arity of 1, the predicate Employee(daniel, manager) has an arity of 2, and could be interpreted as "daniel is an employee who has the level manager". Again, according to how we model the world, this statement is either true or not true.
From a practical standpoint, the following Python code shows a predicate formula in its conditional:
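A minimal sketch (the domain and the has_parent predicate are assumptions; Python's all() plays the role of a universal quantifier over a finite domain):

```python
# A predicate formula in a Python conditional. The list is our (finite)
# domain of discourse, and has_parent is a predicate that lifts
# non-logical objects (names) into truth values.
humans = ["daniel", "methuselah", "adam"]  # hypothetical domain of discourse

def has_parent(h):
    # Toy model: in this world, every human has a parent.
    return True

# forall H . has_parent(H), restricted to our finite domain
everyone_has_parent = all(has_parent(h) for h in humans)
print(everyone_has_parent)
```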
Together, the use of
 quantifiers
 predicates
 all the syntactic forms of propositional calculus,
allow us to create well-formed programs/formulas in first-order predicate calculus.
Here is an example of a first-order predicate calculus statement:

    ∀h.(House(h) → PhysicalObject(h)) ∧ ∀p.(Person(p) → ∃h.(House(h) ∧ Owns(p, h)))

Loosely translated, this says: "for all things that are houses, they are therefore physical objects, AND for all people, each person owns a house."
It turns out that we, as humans and programmers, tend to like statements that use the 'material conditional' → (if/then) in their formulas, as these formulas correspond to our notion of knowledge representation and calculation. Lest the point be lost, it is this logic, the first-order predicate calculus (and actually the subset that uses the 'material conditional' in a specific form called Horn clauses), that we are referring to when we speak of 'Logic' in Logic Programming (*terms and conditions apply).
Something Practical: The Bi-Directionality of Logic Programming
Consider the following pair of definitions, the top one in Python and the bottom one in Prolog:
There is something being shown here that some people call “mindblowing” and others call “weird”. The top line is a Python function/subroutine (it could very well be C or Java or Haskell because it will exhibit the same behavior) to add two numbers and return the value. The bottom line is a Prolog predicate to do the same.
The top line shows that calculation is unidirectional, we see that parameters are holders into which data flows (green arrows), and the return value (the overall calculation of the function as a whole) returns data “out” of the function (red arrow). These directions are baked into the paradigm. This does not seem controversial or even noteworthy, but such a unidirectionality does not exist in Prolog and most relational/logic programming languages. As you can see above, the Prolog “subroutine” returns values through all of its parameters depending upon how you choose to use it.
A statement like:

    stunningResult = plus(2, 4)

while utterly mundane in most programming languages, is nearly meaningless in Prolog. It just doesn't add up.
Nitpicky Prolog programmers will say that this statement actually has meaning, but it will be something completely different. In this case, if we were to first capitalize our variable StunningResult, the code above would unify our variable with a 'function symbol' or 'complex term' with two values. This is Prolog's manner of creating a record-like datum. It is not a calculated value assigned to a variable.
The bottom line of the above table shows that calculation (in Prolog or Logic) is multi-directional, and as "weird" as this might seem at first, it provides a stunning capability that is absent from non-relational paradigms. This capability allows us to use the plus/3 predicate in multiple modalities. In the table below we show calls to this predicate in different modalities. The code is run from within a Prolog REPL (read-eval-print loop) and shows the actual results if the plus/3 predicate were properly defined.
Goal clauses submitted as questions to the top level of the Prolog REPL, with explanations:

∃Sum. plus(2,4,Sum) ?
Does there exist a value for the logic variable Sum that makes the predicate with its two instantiated parameters true? In this situation, we supply (instantiate) the first two parameters, and ask for the result to be returned to us in the variable Sum. This is like how most imperative and functional functions/subroutines work, though in those paradigms the returned value is syntactically separate from the input parameter list and effectively becomes the rewritable replacement for "plus(2,4)".

plus(2,4,9) ?
Is relating 2 and 4 and 9 in this manner true? In this situation, we supply all three parameters, and in doing so we are asking Prolog to prove or disprove the query. As you can see, Prolog evaluates the three instantiated parameters and decides that this is false according to its logical theory of the world.

plus(2,4,6) ?
Like the one above, we pass in three instantiated parameters, but Prolog can prove that this is true according to its logical theory of the world.

∃Y. plus(2,Y,5) ?
We get subtraction for free, by passing in an instantiated X, an instantiated Sum, and an uninstantiated Y. The green arrows show that the instantiated data flows into the predicate, and Prolog tries to prove that there exists some Y which would make this overall statement true.

∃X∃Y. plus(X,Y,5) ?
Does there exist an X and a Y such that they add up to 5? Give me all the Xs and Ys that add up to 5. This is the second part of Prolog's "mind-blowing" aspect that a lot of people new to Logic programming don't quite grok. We have multiple answers that will make this query true.

Here is an example of adding another constraint:

∃X∃Y. (plus(X,Y,5) ∧ length([1,2,3,4],X)) ?
Does there exist an X and a Y such that they both add up to 5 and (,) the length of the hardcoded list [1,2,3,4] is equal to X?
The Classic Prolog Tutorial, with a Dash of Comparative SQL and Comparative Predicate Calculus
The Internet/blogosphere is replete with introductory Prolog tutorials. Most of these tutorials rehash the same example (ancestor, parent) that has been around for at least fifty years. We will do the same at this point, but we will then add two new considerations:
 How Prolog is a “predicate calculus”
 How Prolog is extremely close to being SQL (or vice versa)
The second point, stated as it is, might lead some readers to assume that this means Prolog is underpowered, which is far from the truth. Prolog has been a general-purpose programming language for forty years now, and can be used to program anything that can be coded in C or Java or Pascal, etc. There are numerous examples, easily searchable, where Prolog has been used to implement everything from traditional business logic, to GUIs, to the "hard problems" where it excels.
The Ancestor Parent Example
Here is the classic code that captures a full Prolog program:
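A reconstruction of the program follows (the particular atoms and clause order are assumptions based on the queries discussed later in this post; the preloaded SWISH copy linked below is the authoritative version):

```prolog
% Facts: what we know about our little world.
male(terah).
male(abraham).
male(isaac).
female(sarah).

father(terah, abraham).
father(terah, sarah).
father(abraham, isaac).
mother(sarah, isaac).

% Rules: abstraction and composition over the facts.
son_of(Child, Parent) :-
    male(Child),
    parent(Parent, Child).

parent(Parent, Child) :- father(Parent, Child).
parent(Parent, Child) :- mother(Parent, Child).

ancestor_of(Ancestor, Person) :-
    parent(Ancestor, Person).
ancestor_of(Ancestor, Person) :-
    parent(Parent, Person),
    ancestor_of(Ancestor, Parent).
```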
Please note that we have preloaded this code into an online Prolog REPL provided by SWI-Prolog's SWISH environment so that you can follow along:
https://swish.swi-prolog.org/p/algorex_prolog.pl
What can be seen here is that nearly all of Prolog programming is two things (in actuality these are just two manifestations of one thing – a Horn clause):
 Facts
 Rules
Just the Facts
A fact is a predicate/relation/table that states some truth that is always true in the program. These are the constants that you know about your problem domain, and they roughly correspond to the numerous locations in imperative or functional programming where you have 'hardcoded' (or loaded from a configuration file) data values that are near and dear to your needs. In this toy example we have loaded four fact predicates: male/1, female/1, mother/2, and father/2.
Maybe you already figured it out, but it is worth stating now: all atoms/constants in Prolog begin with a lowercase letter, and conversely all variables begin with an uppercase letter. This is the opposite of most programming languages – Erlang, having been developed from Prolog, keeps this legacy syntax choice (PDF). There is no special reason for this. It was a choice made decades ago. Just keep it in mind.
These facts express things we know about the atomic values of the people in our example. We know that Terah is male. We also know that he is the father to Abraham. From our REPL we can query these facts and prove to ourselves that we know what we know:
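For example (assuming the male/1 and father/2 facts just described are loaded):

```prolog
?- male(terah).
true.

?- father(terah, abraham).
true.
```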
What is great at this point is that, even with these facts (and no other programming), we can change one of these instantiated atoms to a variable, which is always capitalized, and get all the children of Terah:
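A sketch of the query (the second answer assumes Terah's children in the loaded facts are Abraham and Sarah):

```prolog
?- father(terah, Child).
Child = abraham ;
Child = sarah.
```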
Rules
The other first-class citizen in Prolog programming is the rule, which always has the bigram of a colon followed by a dash ( :- ) to indicate its presence. In the code, let's look at the son_of/2 rule.
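A sketch of the rule, with variable names matching the reading given below:

```prolog
son_of(Child, Parent) :-
    male(Child),
    parent(Parent, Child).
```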
This rule, and all rules, follows the syntactic form:

    Head :- Body.

where 'Head' is all the stuff to the left of the colon-dash, and 'Body' is all the stuff to the right. Here it is, fully annotated:
A rule is to Prolog as a function or subroutine is to other paradigms. It is a means to abstract and compose. We abstract by creating a new name and we compose by using other predicates in the body.
This rule states: “a child (Child) is the son of a possible parent (Parent), if that child is male, and the possible parent (Parent) is, in fact, the parent of that child”.
Let’s run it in our online SWIPL REPL:
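Assuming the facts and rules sketched earlier (the SWISH link above has the authoritative program), the session might look like:

```prolog
?- son_of(isaac, abraham).
true.

?- son_of(Son, terah).
Son = abraham.
```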
Given this human-readable example, you can begin to see how this is just first-order predicate calculus. Let's translate it to a human-readable predicate-calculus statement, and then show how it maps:
 “for all variables Child and for all variables Parent, if male(Child) is true and parent(Parent,Child) is true then son_of(Child,Parent) is true”
Here is a visual of this code annotated to show the correspondence:
Already, with this example we have seen the things that make a predicate calculus a predicate calculus. We have some of the underlying truth-connectives that come from propositional calculus:
 ∧ (AND) but denoted in Prolog with a comma (,)
 → (if/then) but denoted in Prolog with a colon-dash (:-)
And we have the quantifiers and predicate symbols that predicate calculus adds to propositional calculus:
 son_of/2, male/1, parent/2 are all the predicate symbols
 ∃ and ∀ (though they are hidden, they follow a simple rule we will explain now)
The Hidden Quantifiers
Prolog has a simple translation for the two quantifiers that can be seen above.
 All the variables (GrandChild and GrandParent) that appear anywhere in the 'Head' of the clause are always universally quantified (for-all, ∀)
 All the variables that appear only in the 'Body' of the clause are always existentially quantified (there-exists, ∃)
One last gotcha for the quantifiers: all variables that are submitted to the top-level REPL are always existentially quantified. This is not actually a gotcha, because the REPL interpreter treats queries submitted to the top level as a 'Body' only, or, in other words, as a clause that has no 'Head'.
Rule Example: parent/2
The rule for parent/2 has two parts, one that uses the underlying father/2 fact and one that uses the underlying mother/2 fact. It can be read as: “a person (Parent) is the parent of another person (Child) if the possible parent is the father of the child, OR the possible parent is the mother of the child.”
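A sketch of the two clauses:

```prolog
parent(Parent, Child) :- father(Parent, Child).
parent(Parent, Child) :- mother(Parent, Child).
```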
This syntactical convenience can be found throughout other programming languages (primarily functional languages) and allows us to compose the entire meaning of what being a parent is into two disjunct clauses. It is still one rule and could be written as:

    parent(Parent, Child) :- father(Parent, Child) ; mother(Parent, Child).

where the semicolon is Prolog propositional syntax for OR (∨).
Nitpicky Prolog programmers will understand that this is not just a syntactical convenience, but a consequence of the algebra of propositions. In fact, this makes perfect sense when you understand that a Prolog program is the conjunction of all its clauses. By algebra, we can transform the 'material conditional' operator back into the Horn-clause form, do some factoring and refactoring magic, and see that these two forms of disjunction are the exact same – search for the phrase "puzzling questions" in the following: http://faculty.nps.edu/ncrowe/book/chap14.html
With either form we get our answer:
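For example (assuming the facts sketched earlier):

```prolog
?- parent(Parent, isaac).
Parent = abraham ;
Parent = sarah.
```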
Rule Example: ancestor_of/2
This last rule, ancestor_of/2, shows something that all programmers should be familiar with: recursion. In the body of one of the clauses that defines ancestor_of, we make reference to ourselves. Just like many functional languages, Prolog has no explicit looping syntax and uses recurrence/induction to iterate.
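A sketch of the pair of clauses (a base case plus a recursive case):

```prolog
% Base case: a parent is an ancestor.
ancestor_of(Ancestor, Person) :-
    parent(Ancestor, Person).
% Recursive case: an ancestor of my parent is my ancestor.
ancestor_of(Ancestor, Person) :-
    parent(Parent, Person),
    ancestor_of(Ancestor, Parent).
```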
The effect of this rule is to allow us to get all ancestors, or descendants (remember, Prolog allows us to query whichever way we want) of a person. Let’s go in one direction:
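A sketch of the session (assuming the facts reconstructed earlier, under which Terah is reachable through both Abraham and Sarah):

```prolog
?- ancestor_of(Ancestor, isaac).
Ancestor = abraham ;
Ancestor = sarah ;
Ancestor = terah ;
Ancestor = terah.
```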
Here we ask for all the ancestors of Isaac, and find what the logical modelling of our problem domain tells us, that Abraham, Sarah and Terah are all his ancestors (note the duplicate).
Going the opposite direction:
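A sketch of the reversed query (same assumed facts; Isaac is again reachable along two paths):

```prolog
?- ancestor_of(terah, Descendant).
Descendant = abraham ;
Descendant = sarah ;
Descendant = isaac ;
Descendant = isaac.
```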
We see what Prolog can prove would have to be true to make the top-level query true.
There is a lot more to say about how Prolog works as a general purpose language. We will not go into more detail here, because there are plenty of other excellent tutorials out there. Not covered here but obviously of concern to anyone interested in using Prolog as a general purpose language are
 How does Prolog handle state?
 How does Prolog interact with the outside world (input/output) like files or databases?
 How is Prolog compiled/interpreted or run as an executable?
 How does Prolog modularize?
 How does Prolog interact with other languages (foreign function interfaces)?
 How does Prolog allow us to create data structures?
 What is Prolog’s approach to typing?
 How do I do “real” stuff, like with sin and cos and matrices?
Again, there are plentiful answers to these questions and in their answers you will see a lot of criticism regarding how the abstract concept of Prolog as a Logic paradigm loses its purity to answer many of the above questions. These are similar in nature to the criticisms levied against many functional programming languages (with the possible exception of Haskell) for their lack of purity.
Intermediate-level Prolog programmers learn to accept that Prolog, having been inspired by Logic, should still be viewed as powerful (more powerful than imperative modalities) because of how its underlying implementation – a unification engine – searches through a properly modeled (and pruned) space of facts and rules to arrive at conclusions in a wonderfully terse, declarative, and simple fashion.
Finally: the ‘Surprising’ Correspondence Between Prolog and SQL
Here is a SQL file you could load directly into SQLite. It does exactly the same thing as our Prolog program.
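A sketch of such a file (the schema, rows, and view names are assumptions chosen to mirror the Prolog facts and rules discussed in this post):

```sql
-- Facts become rows in tables.
CREATE TABLE male(name TEXT);
CREATE TABLE female(name TEXT);
CREATE TABLE father(parent TEXT, child TEXT);
CREATE TABLE mother(parent TEXT, child TEXT);

INSERT INTO male VALUES ('terah'), ('abraham'), ('isaac');
INSERT INTO female VALUES ('sarah');
INSERT INTO father VALUES ('terah', 'abraham'), ('terah', 'sarah'), ('abraham', 'isaac');
INSERT INTO mother VALUES ('sarah', 'isaac');

-- The two disjunct parent/2 clauses become a UNION inside a view.
CREATE VIEW parent AS
    SELECT parent, child FROM father
    UNION
    SELECT parent, child FROM mother;

-- The son_of/2 rule becomes a view that joins on the shared variable.
CREATE VIEW son_of AS
    SELECT parent.child AS child, parent.parent AS parent
    FROM parent
    JOIN male ON male.name = parent.child;
```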
Facts are Rows. Tables are Predicates/Relations
As shown above, a Prolog fact with an arity of two is exactly the same thing as a table with two columns. Just as Prolog states these as universal facts, so does a SQL table. In many interpretations of a database, membership in the table is equivalent to a statement of truth – if "abraham" and "isaac" are related to each other in this table, that means it is true that "abraham" is the father of "isaac". If someone isn't in this table, then they are not related by fatherhood.
A View is a Rule
The correlation shown above is not just an accidental equivalence – it is a real equivalence. Just as the Prolog son_of/2 predicate accomplished both abstraction and composition, so does the view. Behind the scenes, a query against the view resolves to a query against the underlying tables (and/or other views). So too does the predicate rule resolve to the underlying facts (and/or other rule predicates).
Union for Disjunct (OR) Rules
The above correspondence shows how the concept of set union (which is what SQL UNION is) corresponds to the multiple clauses of a multi-clause rule. Again, this is an exact correspondence and is made even more delicious when you remember that set union (denoted ∪) is the exact same thing (isomorphic) as propositional OR (∨).
While evolving from different needs, set theory, with its set algebra, was found to be the exact same thing as propositional logic, with its Boolean algebra.
Joining is Conjunction (AND) in a Clause
This correspondence is a little more difficult to see, but is still present. The actual correspondence is between Prolog's conjunction (,) and a natural join, but we can still see the same effect here. Above we see that the AND'ing of two clauses in the body allows us to use one variable (Someone) as the column/positional variable that is the join column. Prolog makes this (in our opinion) more obvious to see.
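A small illustration (assuming the male table and parent view sketched earlier in this post):

```sql
-- The conjunction in
--   son_of(Child, Parent) :- male(Child), parent(Parent, Child).
-- becomes a join on the shared variable (the child's name):
SELECT male.name AS child, parent.parent AS parent
FROM male
JOIN parent ON parent.child = male.name;
```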
SQL has Recursion Too!
Finally, we see that the recursion we introduced in the Prolog example can be accomplished in SQL too. The introduction of common table expressions in SQL:1999 filtered into most SQL implementations through the mid-to-late 2000s and should now be universal. Nevertheless, a lot of people are still unaware of the convenience of this construct (starting with the keyword WITH). Its recursion capabilities allow for useful hierarchical queries that are founded upon strong theoretical results.
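A sketch of the recursive query (assuming a parent view with columns (parent, child), as in the SQL schema discussed earlier):

```sql
-- The ancestor_of/2 rule as a recursive CTE.
WITH RECURSIVE ancestor_of(ancestor, person) AS (
    -- base case: a parent is an ancestor
    SELECT parent, child FROM parent
    UNION
    -- recursive case: an ancestor of my parent is my ancestor
    SELECT a.ancestor, p.child
    FROM ancestor_of AS a
    JOIN parent AS p ON p.parent = a.person
)
SELECT ancestor FROM ancestor_of WHERE person = 'isaac';
```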
In this example above, we have replicated the exact same ancestor_of/2 query from Prolog.
 The yellow box shows the UNION-OR correspondence.
 The blue box shows the base case that all recursions require.
 The red box shows the clause that implements recursion (note how the SQL joins against itself like the Prolog predicate calls itself).
 The green box shows the recursion more explicitly.
Last Thoughts
The purpose of this post was to provide a different sort of introduction to Logic programming and a bit of a peek at Prolog. We hope we accomplished that.
There is still a lot more to say about Logic programming and Prolog, but for our purposes the main concern is for the ability of declarative programming to aid us in creating better and more maintainable code.
In the following posts, we will examine the implementation strategies that Prolog-like systems use, specifically backward-chaining. The comparison between backward-chaining and forward-chaining is extremely useful to know when thinking about when you might want to use a specific logic system – hint: the data-driven domain of calculating HCC Risk Scores is probably inappropriate for a backward-chaining engine.
At the very top of this post was an image that shows just a sliver of the wide ecosystem of logicinfluenced or logicbased technologies. There are so many other languages and logic systems that we have not even mentioned yet. There is exciting work going on to apply entirely different logic systems to very hard and interesting problems.
Prolog and other logic approaches were part of some of the original AI research in decades long past and have been outshone in recent years by the success of statistical and probabilistic models. As we all know, things go in cycles, and it would not surprise us if the efforts in probabilistic logic/knowledge representation and computation catch up to the successes already demonstrated. What's more, this sort of "catching up" may be less of a competition and more of a holistic backfilling and explanation, with a focus on interpretability.
Inductive probabilistic logic and constraint logic programming for Bayesian models are just some of the interesting research areas that hint in this direction.