Poster

NovelQA: Benchmarking Question Answering on Documents Exceeding 200K Tokens

Cunxiang Wang · Ruoxi Ning · Boqi Pan · Tonghui Wu · Qipeng Guo · Cheng Deng · Guangsheng Bao · Xiangkun Hu · Zheng Zhang · Qian Wang · Yue Zhang

Hall 3 + Hall 2B #290
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Recent advancements in Large Language Models (LLMs) have pushed the boundaries of natural language processing, especially in long-context understanding. However, evaluating these models' long-context abilities remains a challenge due to the limitations of current benchmarks. To address this gap, we introduce NovelQA, a benchmark tailored for evaluating LLMs on complex, extended narratives. NovelQA, constructed from English novels, offers a unique blend of complexity, length, and narrative coherence, making it an ideal tool for assessing deep textual understanding in LLMs. This paper details the design and construction of NovelQA, focusing on its comprehensive manual annotation process and the variety of question types aimed at evaluating nuanced comprehension. Our evaluation of long-context LLMs on NovelQA reveals significant insights into their strengths and weaknesses. Notably, the models struggle with multi-hop reasoning, detail-oriented questions, and extremely long inputs averaging over 200,000 tokens. These results highlight the need for substantial advancements in LLMs before they can fully comprehend long contexts and contribute effectively to computational literary analysis.