Learning a Global Hypothesis Space for Enhancing Synergistic Reasoning Chains
Abstract
Chain-of-Thought (CoT) prompting has been shown to significantly improve the reasoning accuracy of large language models (LLMs) on complex tasks. However, because of the autoregressive, step-by-step generation paradigm, existing CoT methods suffer from two fundamental limitations. First, the reasoning process is highly susceptible to early-stage errors, which, in the absence of a global coordination and correction mechanism, tend to propagate and amplify, distorting the overall reasoning chain. Second, current CoT methods lack a structured analytical framework for pruning redundant reasoning steps and identifying critical reasoning features, resulting in instability and reduced interpretability. To address these issues, we propose Global Hypothesis Structure via Topological Data Analysis (GHS-TDA), which constructs a semantically enriched global hypothesis graph that integrates and coordinates multiple candidate reasoning paths, thereby supporting global consistency refinement and error mitigation. GHS-TDA applies persistent homology-based topological data analysis to capture stable multi-scale structures, remove redundancy and inconsistencies, and extract a more reliable reasoning skeleton. By jointly leveraging reasoning diversity and topological stability, GHS-TDA achieves self-adaptive convergence, produces high-confidence and interpretable reasoning paths, and consistently outperforms strong baselines in accuracy and robustness across multiple reasoning benchmarks.
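To make the persistent-homology idea concrete, the sketch below computes a 0-dimensional persistence barcode (connected-component lifetimes) over a filtration of a small weighted "hypothesis graph", where edge weights stand in for semantic distances between candidate reasoning steps. Components that persist over a long weight range can be read as topologically stable clusters of mutually consistent steps, while short-lived components are pruned as noise. This is a minimal illustrative sketch of H0 persistence only, with a toy graph and hypothetical names; it is not the paper's actual GHS-TDA algorithm.

```python
def h0_barcode(num_nodes, edges):
    """Compute H0 persistence pairs (birth, death) via a union-find
    sweep over edges sorted by weight (a Kruskal-style filtration).
    Every vertex is born at filtration value 0.0; a component dies at
    the weight of the edge that merges it into another component.
    The single surviving component never dies (death = inf)."""
    parent = list(range(num_nodes))

    def find(x):
        # Find the root of x with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    bars = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv           # merge: one component dies at w
            bars.append((0.0, w))
    bars.append((0.0, float("inf")))  # the component that survives
    return bars


# Toy hypothesis graph: 5 candidate reasoning steps; edge weights are
# hypothetical semantic distances between steps.
edges = [(0.2, 0, 1), (0.3, 1, 2), (0.9, 2, 3), (0.25, 3, 4)]
bars = h0_barcode(5, edges)

# Long bars (large death - birth) mark stable groups of consistent
# steps; short bars are treated as redundancy or noise and pruned.
stable = [b for b in bars if b[1] - b[0] > 0.5]
```

In practice one would use a dedicated TDA library (e.g. GUDHI or Ripser) to compute persistence in higher homology dimensions as well; the union-find sweep above is only the H0 special case.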