
Property Preserving Embedding of First-order Logic

13 pages · Published: April 27, 2020

Abstract

Logical reasoning as performed by human mathematicians involves an intuitive understanding of terms and formulas. This includes properties of the formulas themselves as well as relations between multiple formulas. Although vital, this intuition is missing when supplying atomically encoded formulas to (neural) downstream models.
In this paper we construct continuous dense vector representations of first-order logic which preserve syntactic and semantic logical properties. The resulting neural formula embeddings encode six characteristics of logical expressions present in the training set and further generalise to properties they have not explicitly been trained on. To facilitate the training, evaluation, and comparison of embedding models, we extracted and generated data sets based on TPTP's first-order logic library. Furthermore, we examine the expressiveness of our encodings by conducting toy-task experiments as well as more practical deployment tests.
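To make the idea of a formula embedding concrete, below is a minimal illustrative sketch of a recursive encoder that maps first-order formula trees to dense vectors, so that structurally related formulas can be compared by vector similarity. This is not the architecture from the paper: the names (Node, FormulaEncoder), the vector dimension, and the combination rule are all hypothetical choices for illustration only.

import torch
import torch.nn as nn

class Node:
    """A formula parse-tree node, e.g. Node('forall', [Node('x'), ...])."""
    def __init__(self, symbol, children=None):
        self.symbol = symbol
        self.children = children or []

class FormulaEncoder(nn.Module):
    """Hypothetical recursive encoder: embeds each symbol, then folds
    child vectors into the parent bottom-up to get one vector per formula."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.vocab = {s: i for i, s in enumerate(vocab)}
        self.embed = nn.Embedding(len(vocab), dim)   # one vector per symbol
        self.combine = nn.Linear(2 * dim, dim)       # merges symbol with child summary

    def forward(self, node):
        idx = torch.tensor([self.vocab[node.symbol]])
        sym = self.embed(idx).squeeze(0)
        if not node.children:
            return torch.tanh(sym)                   # leaf: symbol embedding alone
        child_sum = torch.stack([self.forward(c) for c in node.children]).sum(dim=0)
        return torch.tanh(self.combine(torch.cat([sym, child_sum])))

# Example: embed "forall x. p(x)" and "exists x. p(x)" and compare them.
vocab = ["forall", "exists", "p", "x"]
enc = FormulaEncoder(vocab)
f1 = Node("forall", [Node("x"), Node("p", [Node("x")])])
f2 = Node("exists", [Node("x"), Node("p", [Node("x")])])
print(f"cosine similarity: {torch.cosine_similarity(enc(f1), enc(f2), dim=0).item():.3f}")

In a property-preserving setting such as the one the abstract describes, an encoder of this general shape would be trained so that distances or auxiliary classifiers over the resulting vectors reflect logical properties of the formulas; the training objectives used in the paper are not reproduced here.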

Keyphrases: first order logic, logical properties, neural embedding

In: Gregoire Danoy, Jun Pang and Geoff Sutcliffe (editors). GCAI 2020. 6th Global Conference on Artificial Intelligence (GCAI 2020), vol 72, pages 70-82.

BibTeX entry
@inproceedings{GCAI2020:Property_Preserving_Embedding_First,
  author    = {Julian Parsert and Stephanie Autherith and Cezary Kaliszyk},
  title     = {Property Preserving Embedding of First-order Logic},
  booktitle = {GCAI 2020. 6th Global Conference on Artificial Intelligence (GCAI 2020)},
  editor    = {Gregoire Danoy and Jun Pang and Geoff Sutcliffe},
  series    = {EPiC Series in Computing},
  volume    = {72},
  publisher = {EasyChair},
  bibsource = {EasyChair, https://easychair.org},
  issn      = {2398-7340},
  url       = {/publications/paper/Cwgq},
  doi       = {10.29007/18t1},
  pages     = {70-82},
  year      = {2020}}