
Augmenting Neural Networks with First-order Logic

Tao Li, Vivek Srikumar

2019, [arXiv]

What's New

It presents a framework for augmenting neurons with rules from external knowledge, and demonstrates its effectiveness, particularly when training data is scarce.

Illustrative Example

Source: Author

Major Contribution

  • Framework for incorporating first-order logic rules into neural network design
  • Experiments on augmenting neural networks with first-order logic at the following three levels:
    • intermediate decisions (i.e., attention);
    • output decisions constrained by intermediate states;
    • output decisions constrained using label dependencies
  • Validation of the framework on three use cases

How It Works

  • Augmenting Attentions

    Source: Author

    • As can be seen above, attention is modified with external knowledge; the rules can be specified as follows:

      $R_{1}: \quad \forall i, j \in C, \; K_{i, j} \rightarrow \overleftarrow{A}_{i, j}^{\prime}$

      $R_{2}: \quad \forall i, j \in C, \; K_{i, j} \wedge \overleftarrow{A}_{i, j} \rightarrow \overleftarrow{A}_{i, j}^{\prime}$

    • Here, K_{i,j} is the external knowledge relating tokens i and j, e.g., word relatedness from ConceptNet.

    • The following results were achieved.

    Source: Author
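    The rule R_1 above (external knowledge implies augmented attention) can be sketched in a soft-logic style: where K says two tokens are related, the unnormalized attention score is boosted before the softmax, so the augmented attention tends to satisfy the implication. This is a minimal sketch, not the paper's exact implementation; the additive boost, the scaling factor `rho`, and the binarized knowledge matrix are assumptions for illustration.

    ```python
    import numpy as np

    def augment_attention(scores, knowledge, rho=1.0):
        """Soft-logic sketch of rule R1: K_{i,j} -> A'_{i,j}.

        Where external knowledge marks tokens i and j as related
        (knowledge[i, j] = 1), the unnormalized attention score is
        raised by rho before normalization, so the augmented
        attention A' leans toward satisfying the rule.
        """
        boosted = scores + rho * knowledge  # additive boost is an assumption
        # row-wise softmax over the last axis
        e = np.exp(boosted - boosted.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    scores = np.array([[0.2, 0.1, 0.0],
                       [0.0, 0.3, 0.1]])
    # hypothetical binarized relatedness (e.g., from ConceptNet)
    knowledge = np.array([[0, 1, 0],
                          [0, 0, 1]])
    attn = augment_attention(scores, knowledge, rho=2.0)
    ```

    With `rho=0` this reduces to the ordinary softmax attention, so the strength of the rule can be tuned continuously.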

  • Output Decisions at Intermediate States

    Source: Author

    • As can be seen, this directly modifies the output decisions, and it can take the augmented attention as input.
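    The same soft-logic idea carries over to the output level: an implication "intermediate evidence for candidate y → output y" becomes an additive boost on that candidate's logit. This is a hedged sketch under assumed names (`constrain_output`, the `evidence` vector, the factor `rho`), not the paper's exact formulation.

    ```python
    import numpy as np

    def constrain_output(logits, evidence, rho=1.0):
        """Sketch of constraining output decisions with an intermediate state.

        Where the (possibly knowledge-augmented) attention supplies
        evidence for an output candidate, its logit is boosted by
        rho * evidence before the final softmax, softening the rule
        "evidence(y) -> output(y)".
        """
        boosted = logits + rho * evidence
        e = np.exp(boosted - boosted.max())
        return e / e.sum()

    logits = np.array([1.0, 0.5, 0.2])
    evidence = np.array([0.0, 2.0, 0.0])  # hypothetical per-candidate evidence
    probs = constrain_output(logits, evidence, rho=1.0)
    ```

    The evidence vector here stands in for whatever intermediate state the rule references, e.g., the augmented attention from the previous step.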