In-Person Poster presentation / top 25% paper

Data Continuity Matters: Improving Sequence Modeling with Lipschitz Regularizer

Eric Qu · Xufang Luo · Dongsheng Li

MH1-2-3-4 #66

Keywords: [ sequence modeling ] [ deep learning ] [ data continuity ] [ Deep Learning and representational learning ]


Abstract:

Sequence modeling is a core problem in machine learning, and various neural networks have been designed to process different types of sequence data. However, few attempts have been made to understand the inherent properties of sequence data, neglecting a critical factor that can significantly affect the performance of sequence modeling. In this paper, we theoretically and empirically analyze a generic property of sequence data, namely continuity, and connect it to the performance of deep models. First, we empirically observe that different kinds of sequence models prefer data with different degrees of continuity. Then, we theoretically analyze the continuity preferences of different models in both the time and frequency domains. To further exploit continuity for sequence modeling, we propose a simple yet effective Lipschitz Regularizer that flexibly adjusts data continuity according to model preferences at very little extra computational cost. Extensive experiments on various tasks demonstrate that altering data continuity via the Lipschitz Regularizer can substantially improve the performance of many deep sequence models.
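The abstract does not spell out the regularizer's exact form. As a rough illustration of the idea (penalizing the discrete Lipschitz constant of a sequence so its continuity can be nudged toward what a given model prefers), here is a minimal PyTorch sketch; the function name `lipschitz_regularizer`, the `target_lipschitz` parameter, and the squared-deviation formulation are hypothetical and may differ from the paper's actual method.

```python
import torch

def lipschitz_regularizer(x: torch.Tensor, target_lipschitz: float = 0.0) -> torch.Tensor:
    """Hypothetical sketch of a continuity-adjusting regularizer.

    x: a sequence tensor of shape (batch, time, features).
    target_lipschitz: desired average step-to-step change; smaller values
    push the sequence toward smoother (more continuous) behavior.
    """
    # Finite differences along the time axis approximate the local
    # Lipschitz constant of the sequence.
    diffs = torch.linalg.vector_norm(x[:, 1:] - x[:, :-1], dim=-1)
    # Penalize deviation of the mean step size from the target, so the
    # regularizer can either raise or lower data continuity.
    return (diffs.mean() - target_lipschitz).pow(2)

# Usage (illustrative): add to the task loss with a small weight, applied
# for example to a learned embedding of the input sequence.
# loss = task_loss + 0.1 * lipschitz_regularizer(embedded_seq, target_lipschitz=0.5)
```

Because the term only involves pairwise differences of adjacent steps, its cost is linear in sequence length, consistent with the abstract's claim of very little extra computation.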
