

Poster

Learning to Embed Time Series Patches Independently

Seunghan Lee · Taeyoung Park · Kibok Lee

Halle B #126
Fri 10 May 1:45 a.m. PDT — 3:45 a.m. PDT

Abstract:

Masked time series modeling has recently gained much attention as a self-supervised representation learning strategy for time series. Inspired by masked image modeling in computer vision, recent works first patchify and partially mask out time series, and then train Transformers to capture the dependencies between patches by predicting masked patches from unmasked ones. However, we argue that capturing such patch dependencies might not be an optimal strategy for time series representation learning; rather, learning to embed patches independently results in better time series representations. Specifically, we propose to use 1) the simple patch reconstruction task, which autoencodes each patch without looking at other patches, and 2) the simple patch-wise MLP that embeds each patch independently. In addition, we introduce complementary contrastive learning to hierarchically capture adjacent time series information efficiently. Our proposed method improves time series forecasting and classification performance compared with state-of-the-art Transformer-based models, while being more efficient in terms of the number of parameters and training time. Code is available at this repository: https://github.com/seunghan96/pits.
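For illustration only, below is a minimal PyTorch sketch of the patch-independent embedding idea the abstract describes: a shared MLP autoencodes each patch on its own, with no attention or any other cross-patch interaction. The class name, layer sizes, and dimensions are assumptions for the sketch, not the authors' implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn

class PatchwiseMLPAutoencoder(nn.Module):
    """Sketch: embed and reconstruct each time series patch independently
    with a shared patch-wise MLP (no cross-patch dependencies)."""

    def __init__(self, patch_len: int, d_model: int = 128):
        super().__init__()
        # Linear layers act on the last dimension only, so each patch
        # is encoded and decoded without seeing any other patch.
        self.encoder = nn.Sequential(
            nn.Linear(patch_len, d_model),
            nn.ReLU(),
            nn.Linear(d_model, d_model),
        )
        self.decoder = nn.Linear(d_model, patch_len)

    def forward(self, x: torch.Tensor):
        # x: (batch, num_patches, patch_len) -- a patchified series
        z = self.encoder(x)       # per-patch embeddings
        recon = self.decoder(z)   # per-patch reconstructions
        return z, recon

# Usage: the reconstruction loss trains each patch to autoencode itself.
model = PatchwiseMLPAutoencoder(patch_len=16)
series = torch.randn(8, 32, 16)  # (batch, num_patches, patch_len)
z, recon = model(series)
loss = nn.functional.mse_loss(recon, series)
```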
