

Poster

Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering

Victor Zhong · Caiming Xiong · Nitish Shirish Keskar · Richard Socher

Great Hall BC #5

Keywords: [ nlp ] [ reading comprehension ] [ attention ] [ representation learning ] [ natural language processing ] [ question answering ]


Abstract:

End-to-end neural models have made significant progress in question answering; however, recent studies show that these models implicitly assume that the answer and evidence appear close together in a single document. In this work, we propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents. The CFC consists of a coarse-grain module that interprets documents with respect to the query and then finds a relevant answer, and a fine-grain module that scores each candidate answer by comparing its occurrences across all of the documents with the query. We design these modules using hierarchies of coattention and self-attention, which learn to emphasize different parts of the input. On the Qangaroo WikiHop multi-evidence question answering task, the CFC obtains a new state-of-the-art result of 70.6% on the blind test set, outperforming the previous best by 3% accuracy despite not using pretrained contextual encoders.
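To give a rough sense of the coattention building block the abstract refers to, below is a minimal, unbatched PyTorch sketch. It is not the authors' implementation; the function name `coattention`, the single-level formulation, and the dot-product affinity are assumptions for illustration. It builds an affinity matrix between encoded document and query tokens, attends in both directions, and returns a query-aware document representation.

```python
import torch
import torch.nn.functional as F

def coattention(doc, query):
    """Minimal coattention sketch (illustrative, not the paper's code).

    doc:   [n_doc, d]   encoded support-document tokens
    query: [n_query, d] encoded query tokens
    Returns a query-aware document representation of shape [n_doc, 2 * d].
    """
    # Affinity between every document token and every query token.
    affinity = doc @ query.t()                      # [n_doc, n_query]

    # Attend over query tokens for each document token, and vice versa.
    attn_over_query = F.softmax(affinity, dim=1)    # [n_doc, n_query]
    attn_over_doc = F.softmax(affinity.t(), dim=1)  # [n_query, n_doc]

    # Summaries of one sequence conditioned on the other.
    query_summary = attn_over_query @ query         # [n_doc, d]
    doc_summary = attn_over_doc @ doc               # [n_query, d]

    # Second-level attention: align the document-conditioned summary
    # back to document positions (the "co" in coattention).
    coattn_context = attn_over_query @ doc_summary  # [n_doc, d]

    # Concatenate the two document-aligned views.
    return torch.cat([query_summary, coattn_context], dim=1)
```

In a multi-document setting such as WikiHop, a layer of this kind would be applied per support document, with self-attention used to pool token-level outputs into document- and candidate-level summaries; the hierarchical stacking described in the paper is not reproduced here.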
