Poster
Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering
Shuohang Wang · Mo Yu · Jing Jiang · Wei Zhang · Xiaoxiao Guo · Shiyu Chang · Zhiguo Wang · Tim Klinger · Gerald Tesauro · Murray Campbell
East Meeting level; 1,2,3 #35
A recently popular approach to open-domain question answering first retrieves question-related passages and then applies a reading-comprehension model to extract answers. Existing methods usually extract answers from each passage independently and therefore do not fully exploit the multiple retrieved passages, especially for questions whose answers require several pieces of evidence that may appear in different passages. These observations raise the problem of aggregating evidence from multiple passages. In this paper, we cast this problem as answer re-ranking. Specifically, given the answer candidates generated by an existing state-of-the-art QA model, we propose two re-ranking methods, a strength-based re-ranker and a coverage-based re-ranker, which use evidence aggregated across passages to help confirm the ground-truth answer to the question. Our model achieves state-of-the-art results on three public open-domain QA datasets, Quasar-T, SearchQA, and the open-domain version of TriviaQA, with about 8% improvement on the first two.
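The strength-based re-ranking idea can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: it assumes the base reader emits (answer, passage, score) triples and aggregates per-answer scores by summation across passages, so an answer supported by many passages can outrank one with a single high-scoring passage.

```python
from collections import defaultdict

def strength_rerank(candidates):
    """Re-rank answer candidates by aggregating evidence across passages.

    `candidates` is a list of (answer_text, passage_id, reader_score)
    triples; this interface is a hypothetical stand-in for the output of
    a base reading-comprehension model, not the paper's actual API.
    """
    strength = defaultdict(float)
    for answer, _passage, score in candidates:
        # Strength-based aggregation: an answer that appears in several
        # passages accumulates the sum of its per-passage scores.
        strength[answer.lower()] += score
    # Highest aggregated strength first.
    return sorted(strength.items(), key=lambda kv: kv[1], reverse=True)

ranked = strength_rerank([
    ("Paris", "p1", 0.6),
    ("Lyon", "p2", 0.7),
    ("Paris", "p3", 0.5),
])
# "Paris" (aggregated 1.1) now outranks "Lyon" (0.7), even though
# "Lyon" had the highest single-passage score.
```

The coverage-based re-ranker described in the abstract is different in spirit: rather than counting repeated support, it checks how well the union of passages covers the information the question asks for, which this sketch does not attempt to model.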