
Review Suggested Edit

Approved.
This suggested edit was approved and applied to the post 10 months ago by Julius H.

The following [answer](https://proofassistants.stackexchange.com/a/2721/1781) comes from [Jason Rute](https://proofassistants.stackexchange.com/users/122/jason-rute); all credit goes to him:

---
Ok, I am going to bite. As the commentators have already said, I think you are proposing a much more ambitious and difficult program than you realize. If you could get this to work well, it could conceivably be a paper in Nature or Science (like [AlphaGeometry][1]). Asking on a chat forum how to do this is a bit like asking on a chat forum how to build a self-driving car. (Or, pre-2010, asking how to make a Go-playing program that could compete with top-level humans.) Nonetheless, let me give you my thoughts.

### Background in Logic

First, there isn't anything particular to modal logic here. This question holds for any deductive calculus $L$, where you are trying to find interesting formulas $\phi$ such that $L \vdash \phi$.

In a logic class, $L$ is usually presented as a list of Hilbert-style axioms with one inference rule, modus ponens, and a proof is just a list of facts, each following either from an axiom or from modus ponens. In practical logic, it is usually better to use a natural deduction system with more inference rules and fewer axioms. But either way, you can imagine just generating proofs systematically or randomly with this approach.

But you quickly realize the combinatorial explosion. Even a rule like $A \vdash A$ stands for infinitely many theorems, and what you fill in for $A$ is entirely dependent on the result you plan to prove.
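To make the combinatorial explosion concrete, here is a toy sketch (my illustration, not part of the original answer; the tuple encoding of formulas is invented) of forward generation from a single Hilbert-style axiom schema. Even a tiny pool of formulas to instantiate the schema with produces a quadratic number of theorem instances, and adding connectives or nesting depth makes the pool itself blow up.

```python
# Forward generation from the Hilbert-style schema A -> (B -> A),
# instantiated over a small pool of formulas, plus one modus ponens pass.
from itertools import product

def imp(a, b):
    """Build an implication formula as a nested tuple."""
    return ('->', a, b)

# Small pool of formulas to instantiate the schema with
atoms = ['p', 'q']
pool = list(atoms) + [imp(a, b) for a, b in product(atoms, repeat=2)]

# Axiom schema: A -> (B -> A), instantiated over every ordered pair
theorems = {imp(a, imp(b, a)) for a, b in product(pool, repeat=2)}

# One round of modus ponens: from A and A -> B, conclude B.
# (Here it yields nothing new, since no theorem's antecedent is itself
# a theorem yet -- but the theorem set is already quadratic in the pool.)
derived = {f[2] for f in theorems if f[0] == '->' and f[1] in theorems}
theorems |= derived

print(len(pool), len(theorems))  # prints: 6 36
```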
So another approach is to work backwards, starting with a goal to prove. Then one can use "unification". For example, if you want to prove $\vdash \square (p \rightarrow q) \rightarrow \square (p \rightarrow q)$, you can work backwards to apply the rule $A \vdash B \implies \vdash A \rightarrow B$ (where say $A$ "unifies" with $\square (p \rightarrow q)$). So now the goal is $\square (p \rightarrow q) \vdash \square (p \rightarrow q)$. This now follows from the rule $A \vdash A$. There is even a theorem called cut-elimination which basically says you can prove any theorem working backwards like this (without ever having to make a lemma or come up with some magic term which makes the proof work). Practical theorem provers generally work in this backward style, more or less. (Although, of course, users still break big theorems into smaller lemmas.)
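The backward step above hinges on matching a rule's conclusion schema against the current goal. A minimal sketch of that matching (my illustration, not part of the original answer; the convention that uppercase strings are metavariables is invented):

```python
# Match a rule schema (metavariables are uppercase strings) against a
# concrete formula, returning a substitution dict or None on failure.
def unify(pattern, term, subst=None):
    subst = dict(subst or {})
    if isinstance(pattern, str) and pattern.isupper():  # metavariable
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            subst = unify(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

# Goal: |- box(p -> q) -> box(p -> q)
goal = ('->', ('box', ('->', 'p', 'q')), ('box', ('->', 'p', 'q')))

# Apply the rule "A |- B  ==>  |- A -> B" backwards: unify its
# conclusion schema A -> B with the goal, yielding the premise A |- B.
subst = unify(('->', 'A', 'B'), goal)
print(subst['A'] == subst['B'])  # prints: True -- the new goal is A |- A, an axiom
```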
### Practical tools

Now, practically, you could create your own proof system for modal logic, or use [Metamath][2], which is a meta-logical system where you can implement your own calculus and prove theorems in it.

Generating random formulas in Metamath is not new; see for example the paper [Learning to Prove Theorems by Learning to Generate Theorems][3].

### Solving for "interesting" theorems
Controlling the spam of bad formulas is going to be the largest challenge in such a system, and I don't believe there is a good solution yet. But let me tell you about some ideas that have been explored a bit. (Although, see [What are some automated theorem generators? What background logic do they use and what heuristics do they use for theorem-goodness?](https://proofassistants.stackexchange.com/questions/603/what-are-some-automated-theorem-generators-what-background-logic-do-they-use-an) for a better survey.)
* Brute force. One can just try to brute-force formulas (systematically or randomly) and see what one gets. One could also play with the parameters as well.
* One can use machine learning language models to synthesize proofs. This would require one to have a good set of training data, which you unfortunately don't have.
* One can use machine learning to generate conjectures and then see if they are provable. We are much better at proving theorems from a goal, and there are a number of systems that can do this. (The one-sentence description is that many of these provers are trained to suggest the next step of a proof from the current goal. Then one puts this into a tree search.) So if one can create a system that generates good conjectures, then one can also create another system to prove those conjectures. Again, one needs data for both halves of this system.
* One can generate formulas systematically/randomly and then select the interesting ones. Those could be used to train some of the machine learning components above. This is what the [AlphaGeometry paper][1] does. The real problem here is that it is very hard to come up with a good criterion for "interesting". Here are some proposals:
  * Short theorems.
  * Theorems that an automatic solver can't solve.
  * Theorems that are useful as lemmas in other proofs.
  * Theorems that are in some sense simpler than their proofs (which is roughly the AlphaGeometry criterion).
  * A minimal subset of maximally useful theorems. This is hard to make formal, but see [DreamCoder][4] for a similar idea in program synthesis.
  * Useful lemmas which can be extracted from other theorems. See the [REFACTOR][5] paper.
  * Lemmas which are "surprising". There are many ways to formalize this using various notions from information theory, and it isn't clear whether any of them work.
* One can create a feedback loop where one builds up a library of interesting (and non-interesting) theorems and then starts to train machine-learning models to do tasks on it. As these models get better, so do the generated theorems, creating a system that eventually learns new mathematics. Again, see [DreamCoder][4] for a rough sketch of how this could look in theory.
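As a toy illustration of the generate-then-filter idea above (my sketch, not part of the original answer; the truth-table check stands in for a real prover, and the "interestingness" criterion is invented): enumerate small propositional formulas, keep the tautologies, and discard trivial instances like $A \rightarrow A$.

```python
# Generate-then-filter over the propositional fragment: enumerate small
# formulas, keep tautologies (checked by truth table), filter trivia.
from itertools import product

ATOMS = ('p', 'q')

def formulas(depth):
    """Enumerate formulas (tuples over 'not' and '->') up to a nesting depth."""
    if depth == 0:
        yield from ATOMS
        return
    yield from formulas(depth - 1)
    subs = list(formulas(depth - 1))
    for a in subs:
        yield ('not', a)
    for a, b in product(subs, repeat=2):
        yield ('->', a, b)

def evaluate(f, val):
    if isinstance(f, str):
        return val[f]
    if f[0] == 'not':
        return not evaluate(f[1], val)
    return (not evaluate(f[1], val)) or evaluate(f[2], val)

def is_tautology(f):
    """Stand-in for a prover: check every truth assignment."""
    return all(evaluate(f, dict(zip(ATOMS, vs)))
               for vs in product([False, True], repeat=len(ATOMS)))

def interesting(f):
    # Invented criterion: discard the trivial schema A -> A
    return not (isinstance(f, tuple) and f[0] == '->' and f[1] == f[2])

found = [f for f in formulas(2) if is_tautology(f) and interesting(f)]
print(len(found) > 0)  # prints: True -- e.g. p -> (q -> p) survives the filter
```

Even at depth 2 the filter is doing real work: most generated formulas are not tautologies, and of the tautologies, the trivial $A \rightarrow A$ instances dominate without the extra criterion.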
### Conclusion

So to conclude, this is certainly an interesting project, but it isn't a trivial one, and I hope I've inspired you or others to work on solving it!

  [1]: https://www.nature.com/articles/s41586-023-06747-5
  [2]: https://us.metamath.org/index.html
  [3]: https://proceedings.neurips.cc/paper/2020/file/d2a27e83d429f0dcae6b937cf440aeb1-Paper.pdf
  [4]: https://arxiv.org/abs/2006.08381
  [5]: https://openreview.net/forum?id=827jG3ahxL

Suggested 10 months ago by The Amplitwist