Stanisław Purgał and Cezary Kaliszyk
Proceedings of the Thirty-Fifth International Florida Artificial Intelligence Research Society Conference, 2022.
pdf
doi:10.32473/flairs.v35i.130648
Abstract
Existing approaches to learning to prove theorems focus on particular logics and datasets. In this work, we propose Monte-Carlo simulations guided by reinforcement learning that can work in an arbitrarily specified logic, without any human knowledge or set of problems. Since the algorithm does not need any training dataset, it is able to learn to work with any logical foundation, even when there is no body of proofs or even conjectures available. We practically demonstrate the feasibility of the approach in multiple logical systems. The approach is stronger than training on randomly generated data but weaker than the approaches trained on tailored axiom and conjecture sets. It however allows us to apply machine learning to automated theorem proving for many logics, where no such attempts have been tried to date, such as intuitionistic logic or linear logic.
BibTeX
@inproceedings{spck-flairs22,
  author    = {Stanisław Purgał and Cezary Kaliszyk},
  title     = {Adversarial Learning to Reason in an Arbitrary Logic},
  booktitle = {Proceedings of the Thirty-Fifth International Florida Artificial Intelligence Research Society Conference, {FLAIRS} 2022},
  editor    = {Roman Bart{\'{a}}k and Fazel Keshtkar and Michael Franklin},
  doi       = {10.32473/flairs.v35i.130648},
  url       = {https://doi.org/10.32473/flairs.v35i.130648},
  year      = {2022}
}