Efficient Submodular Maximization for Sums of Concave over Modular Functions
Yang Lv · Guihao Wang · Dachuan Xu · Ruiqi Yang
Abstract
Submodular maximization has broad applications in machine learning, network design, and data mining. However, classical algorithms often suffer from prohibitively high computational costs, which severely limit their scalability in practice. In this work, we focus on maximizing Sums of Concave over Modular functions (SCMs), an important subclass of submodular functions, under three fundamental constraints: cardinality, knapsack, and partition matroid. Our method efficiently computes near-optimal solutions by integrating three components: continuous relaxation, Accelerated Approximate Projected Gradient Ascent (AAPGA), and randomized rounding. We establish a $(1 - \varepsilon - \eta - e^{-\Omega(\eta^2)})$ approximation guarantee for both cardinality and partition matroid constraints, with query complexity $O\left(n^{1/2}\varepsilon^{-1/2} (T_1 + T_2)\right)$. For the knapsack constraint, the approximation ratio degrades by a factor of $1/2$, with query complexity $O\left(n T_1 + n^{1/2}\varepsilon^{-1/2} T_2\right)$, where $T_1$ denotes the cost of evaluating the concave extension and $T_2$ the cost of backpropagation. By leveraging efficient convex optimization techniques, our approach substantially accelerates convergence toward high-quality solutions. Empirically, AAPGA consistently outperforms standard Projected Gradient Ascent (PGA). In small-scale experiments, AAPGA attains superior results in significantly less time, running up to $32.3\times$ faster than traditional methods. In large-scale experiments, our parallel multi-GPU implementation further enhances performance, demonstrating the scalability of our approach.
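As a concrete illustration of the function class (the notation $\phi_j$, $w_{ij}$ here is chosen for exposition and is not necessarily the paper's), an SCM objective on a ground set of $n$ elements can be written as
\[
f(S) \;=\; \sum_{j=1}^{m} \phi_j\!\Big(\sum_{i \in S} w_{ij}\Big), \qquad w_{ij} \ge 0,\ \ \phi_j \text{ concave and nondecreasing},
\]
whose natural continuous relaxation replaces the indicator vector of $S$ with $x \in [0,1]^n$, yielding the concave extension $F(x) = \sum_{j=1}^{m} \phi_j\big(\langle w_j, x \rangle\big)$. In this sketch, projected gradient ascent maximizes $F$ over the relaxed constraint polytope, and randomized rounding then converts the fractional maximizer into a feasible set.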