

Poster

Can Neural Networks Understand Logical Entailment?

Richard Evans · David Saxton · David Amos · Pushmeet Kohli · Edward Grefenstette

East Meeting level; 1,2,3 #28

Abstract:

We introduce a new dataset of logical entailments for measuring models' ability to capture and exploit the structure of logical expressions on an entailment prediction task. We use this task to compare a series of architectures that are ubiquitous in the sequence-processing literature, along with a new model class, PossibleWorldNets, which computes entailment as a "convolution over possible worlds". Results show that convolutional networks present the wrong inductive bias for this class of problems relative to LSTM RNNs, that tree-structured neural networks outperform LSTM RNNs due to their enhanced ability to exploit the syntax of logic, and that PossibleWorldNets outperform all benchmarks.
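The entailment task itself has a crisp symbolic semantics: A entails B iff B is true in every "possible world" (truth assignment) in which A is true. The sketch below illustrates that ground-truth definition by brute-force enumeration; it is an assumption-laden illustration of the task, not the PossibleWorldNets architecture or the paper's dataset format, and the tuple-based formula encoding is invented for this example.

```python
from itertools import product

def evaluate(formula, world):
    """Evaluate a propositional formula under a truth assignment (a 'world').

    Formulas are variables (strings) or nested tuples,
    e.g. ("and", "a", ("not", "b")). This encoding is illustrative only.
    """
    if isinstance(formula, str):
        return world[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], world)
    if op == "and":
        return evaluate(args[0], world) and evaluate(args[1], world)
    if op == "or":
        return evaluate(args[0], world) or evaluate(args[1], world)
    if op == "implies":
        return (not evaluate(args[0], world)) or evaluate(args[1], world)
    raise ValueError(f"unknown operator: {op}")

def variables(formula):
    """Collect the propositional variables occurring in a formula."""
    if isinstance(formula, str):
        return {formula}
    return set().union(*(variables(a) for a in formula[1:]))

def entails(premise, conclusion):
    """A |= B iff B holds in every world where A holds."""
    vs = sorted(variables(premise) | variables(conclusion))
    for values in product([False, True], repeat=len(vs)):
        world = dict(zip(vs, values))
        if evaluate(premise, world) and not evaluate(conclusion, world):
            return False
    return True

# a AND b entails a, but a OR b does not entail a
print(entails(("and", "a", "b"), "a"))  # True
print(entails(("or", "a", "b"), "a"))   # False
```

Exhaustive enumeration is exponential in the number of variables, which is exactly why learning to predict entailment from the syntax of the expressions is a non-trivial structural task.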
