A program which could derive theorems given formation rules in any modal logic?
There are multiple modal logics, each with its own formation rules.
Suppose someone asked the intuitive question, "If phi were a theorem in a modal logic M, what general, related conclusions might be drawn from that being so?" (See What is the modality of a statement that follows from a necessary statement? )
I am picturing a program which can generate all theorems of a logic L which include a specific formation rule R (for example, the rule □P ⊢ P). There are two criteria I would have it fulfill: a) there must be a bound on how many theorems it generates; and, if necessary, b) an algorithm which "recognizes" interesting vs. non-interesting theorems.
For a), I wonder what choices there are for defining the bound. It could be the number of applications of a formation rule (does this have a name? The "order" of a theorem?), or maybe there are other, better ways.
For b), I anticipate this is needed because, since we are seeking interesting conclusions, we want a way to filter out the large number of seemingly boring or trivial theorems.
(Also, in case brute-force generation of all theorems is too combinatorially explosive, I am interested in algorithms which make more efficient choices in what theorems to generate.)
Lastly, I would have such a program run over all modal logics L featuring formation rule R, so as to see what the implications of such a rule are across various logics, to compare them.
Can anyone please present a blueprint of such a program, in a programming language (of their choice)? I want a clear conceptual grasp of such a program more than implementable code, which I can write myself once I see the pitfalls, best practices and common design patterns and so on.
NB I originally asked this question elsewhere and have copied across a very good CC-SA answer.
1 answer
The following answer comes from Jason Rute, all credit goes to him:
Ok, I am going to bite. As the commentators have already said, I think you are proposing a much more ambitious and difficult program than you realize. If you could get this to work well, it could conceivably be a paper in Nature or Science (like AlphaGeometry). Asking on a chat forum how to do this is a bit like asking on a chat forum how to build a self-driving car. (Or pre-2010 asking how to make a Go-playing program that could compete with top-level humans.) Nonetheless, let me give you my thoughts.
Background in Logic
First, there isn't anything particular to modal logic here. This question holds for any deductive calculus $L$, where you are trying to find interesting formulas $\phi$ such that $L \vdash \phi$.
In a logic class, $L$ is usually presented as a list of Hilbert-style axioms with one inference rule, modus ponens, and a proof is just a list of facts, each of which follows either from an axiom or from modus ponens. In practical logic, it is usually better to use a natural deduction system with more inference rules and fewer axioms. But either way, you can imagine just generating proofs systematically or randomly with this approach.
But you quickly realize the combinatorial explosion. Even a rule like $A \vdash A$ stands for infinitely many theorems, and what you fill in for $A$ depends entirely on the result you plan to prove.
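As a concrete illustration of the forward style, here is a minimal sketch that closes a finite seed set of theorem instances under modus ponens (the tuple encoding and the seed formulas are illustrative assumptions, not a real modal calculus; a real system would instantiate axiom schemes, which is where the explosion comes from):

```python
# Formulas are tuples; ("->", a, b) is the implication a -> b.

def modus_ponens_closure(theorems):
    """Repeatedly apply modus ponens until no new theorems appear."""
    known = set(theorems)
    changed = True
    while changed:
        changed = False
        for f in list(known):
            # If f is an implication whose antecedent is known, add its consequent.
            if isinstance(f, tuple) and f[0] == "->" and f[1] in known:
                if f[2] not in known:
                    known.add(f[2])
                    changed = True
    return known

# Illustrative seed: concrete theorem instances, not axiom schemes.
seed = {
    "p",
    ("->", "p", "q"),
    ("->", "q", ("box", "q")),
}
closure = modus_ponens_closure(seed)
# "q" and ("box", "q") are derived from the seed
```

With concrete formulas this terminates, but once axiom *schemes* are instantiated over all formulas up to some depth, the closure blows up, which is the motivation for working backwards instead.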
So another approach is to work backwards, starting with a goal to prove. Then one can use "unification". For example, if you want to prove $\vdash \square (p \rightarrow q) \rightarrow \square (p \rightarrow q)$, you can work backwards to apply the rule $A \vdash B \implies \vdash A \rightarrow B$ (where say $A$ "unifies" with $\square (p \rightarrow q)$). So now the goal is $\square (p \rightarrow q) \vdash \square (p \rightarrow q)$. This now follows from the rule $A \vdash A$. There is even a theorem called cut-elimination which basically says you can prove any theorem working backwards like this (without ever having to make a lemma or come up with some magic term which makes the proof work). Practical theorem provers generally work in this backward style, more or less. (Although, of course, users still break big theorems into smaller lemmas.)
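The unification step above can be sketched as standard syntactic unification over formulas-as-tuples (the `?`-prefixed variable convention is an assumption of this sketch; this is the textbook algorithm, not any particular prover's implementation):

```python
# Variables are strings starting with "?"; everything else is an atom
# or a compound ("op", arg1, ...).

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings to their current value."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs-check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t)
    return False

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None on failure."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        if occurs(a, b, subst):
            return None
        subst[a] = b
        return subst
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Unify the rule conclusion ?A -> ?B against the goal from the example,
# square(p -> q) -> square(p -> q):
goal = ("->", ("box", ("->", "p", "q")), ("box", ("->", "p", "q")))
s = unify(("->", "?A", "?B"), goal)
# s binds both ?A and ?B to ("box", ("->", "p", "q"))
```

Backward search is then a tree search: unify the goal against each rule's conclusion, and recurse on the instantiated premises.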
Practical tools
Now, practically, you could create your own proof system for modal logic, or use Metamath, which is a meta-logical system where you can implement your own calculus and prove theorems in it.
Generating random formulas in Metamath is not new; see, for example, the paper Learning to Prove Theorems by Learning to Generate Theorems.
Solving for "interesting" theorems
Controlling the spam of bad formulas is going to be the largest challenge in such a system, and I don't believe there is a good solution yet. But let me tell you ideas that have been explored a bit. (Although, see What are some automated theorem generators? What background logic do they use and what heuristics do they use for theorem-goodness? for a better survey.)
- Brute force. One can just try to brute force formulas (systematically or randomly) and see what one gets. One could also play with the parameters as well.
- One can use machine learning language models to synthesize proofs. This would require one to have a good set of training data, which you unfortunately don't have.
- One can use machine learning to generate conjectures and you can see if they are provable. We are much better at proving theorems from a goal, and there are a number of systems that can do this. (The one-sentence description is that many of these provers are trained to suggest the next step of a proof from the current goal. Then one puts this into a tree search.) So if one can create a system that generates good conjectures, then one can also create another system to prove those conjectures. Again, one needs data for both halves of this system.
- One can generate formulas systematically/randomly and then select the interesting ones. Those could be used to train some of the machine learning components above. This is what the AlphaGeometry paper does. The real problem here is that it is very hard to come up with a good criterion for "interesting". Here are some proposals:
- Short theorems.
- Theorems that an automatic solver can't solve.
- Theorems that are useful as lemmas to other proofs.
- Theorems that are in some sense simpler than their proofs (which is roughly the AlphaGeometry criterion).
- A minimal subset of maximally useful theorems. This is hard to make formal, but see DreamCoder for a similar idea in program synthesis.
- Useful lemmas which can be extracted from other theorems. See the REFACTOR paper.
- Lemmas which are "surprising". There are many ways to formalize this using various notions from information theory, and it isn't clear whether any of them work.
- One can create a feedback loop where one builds up a library of interesting (and non-interesting) theorems and then starts to train machine-learning models to do tasks on it. As these machine-learning models get better, then so do the generated theorems, creating a system that eventually learns new mathematics. Again see DreamCoder for a rough sketch of how this could look in theory.
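A couple of the proposals above can be combined into a toy scoring function. Everything here (the size measure, the stand-in "solver", the scoring formula) is an arbitrary assumption meant only to show the shape of such a filter, not a claim about what actually works:

```python
def size(formula):
    """Number of symbols in a formula-as-tuple (a crude 'shortness' measure)."""
    if isinstance(formula, tuple):
        return 1 + sum(size(a) for a in formula[1:])
    return 1

def trivially_true(formula):
    """Stand-in for a cheap automatic solver: here it only spots A -> A."""
    return (isinstance(formula, tuple) and formula[0] == "->"
            and formula[1] == formula[2])

def interest_score(formula):
    """Combine 'short theorems' with 'theorems a trivial solver can't dismiss'."""
    if trivially_true(formula):
        return 0.0
    return 1.0 / size(formula)  # shorter theorems score higher

candidates = [
    ("->", "p", "p"),                            # a trivial instance of A -> A
    ("->", ("box", "p"), "p"),                   # the T axiom
    ("->", ("box", ("->", "p", "q")),
           ("->", ("box", "p"), ("box", "q"))),  # the K axiom
]
ranked = sorted(candidates, key=interest_score, reverse=True)
# the trivial A -> A instance ranks last
```

In a real pipeline this score would be learned rather than hand-written, which is exactly the feedback loop described in the last bullet.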
Conclusion
So to conclude, this is certainly an interesting project, but it isn't a trivial one, and I hope I've inspired you or others to work on solving it!