A Generalized Weighted Optimization Method for Computational Learning and Inversion

Kui Ren · Yunan Yang · Björn Engquist


Keywords: [ machine learning ] [ generalization error ]

[ Abstract ]
Mon 25 Apr 10:30 a.m. PDT — 12:30 p.m. PDT


The generalization capacity of machine learning models exhibits different phenomena in the under- and over-parameterized regimes. In this paper, we focus on regression models such as feature regression and kernel regression, and analyze a generalized weighted least-squares optimization method for computational learning and inversion with noisy data. The highlight of the proposed framework is that it allows weighting in both the parameter space and the data space. The weighting scheme encodes both a priori knowledge of the object to be learned and a strategy to weight the contribution of different data points in the loss function. We characterize the impact of the weighting scheme on the generalization error of the learning method, deriving explicit generalization errors for the random Fourier feature model in both the under- and over-parameterized regimes. For more general feature maps, error bounds are provided based on the singular values of the feature matrix. We demonstrate that appropriate weighting from prior knowledge can improve the generalization capability of the learned model.
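To make the setup concrete, the following is a minimal sketch (not the paper's exact scheme) of a weighted least-squares fit over random Fourier features, with a diagonal weight `D` in the data space and a diagonal weight `P` in the parameter space. The specific choice of `P` below, which penalizes high-frequency features more strongly, is one hypothetical way to encode a smoothness prior on the target; the precise weighting analyzed in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data with additive noise.
n, p = 50, 200  # n data points, p random Fourier features (over-parameterized)
x = rng.uniform(-1.0, 1.0, size=n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

# Random Fourier feature matrix: A[i, j] = cos(w_j * x_i + b_j) / sqrt(p).
w = rng.standard_normal(p) * 5.0
b = rng.uniform(0.0, 2 * np.pi, size=p)
A = np.cos(np.outer(x, w) + b) / np.sqrt(p)

# Weighting in the data space (D) and in the parameter space (P).
# D is uniform here; P is a hypothetical smoothness prior that penalizes
# features with large frequencies |w_j| more strongly.
D = np.eye(n)
P = np.diag(1.0 + w**2)

# Closed-form minimizer of (A th - y)^T D (A th - y) + lam * th^T P th.
lam = 1e-3
theta = np.linalg.solve(A.T @ D @ A + lam * P, A.T @ D @ y)

y_fit = A @ theta
print("training RMSE:", np.sqrt(np.mean((y_fit - y) ** 2)))
```

Replacing `D` with non-uniform entries would down- or up-weight individual noisy data points, while different choices of `P` encode different priors on the learned coefficients; the paper's analysis quantifies how such choices affect the generalization error.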
