

Poster

RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data

Maxwell Xu · Jaya Narain · Gregory Darnell · Haraldur Hallgrimsson · Hyewon Jeong · Darren Forde · Richard Fineman · Karthik Raghuram · James Rehg · Shirley Ren

Hall 3 + Hall 2B #17
Sat 26 Apr, midnight to 2:30 a.m. PDT

Abstract:

We present RelCon, a novel self-supervised Relative Contrastive learning approach for training a motion foundation model from wearable accelerometry sensors. First, a learnable distance measure is trained to capture motif similarity and domain-specific semantic information such as rotation invariance. The learned distance then provides a measure of semantic similarity between pairs of accelerometry time-series, which we use to train our foundation model to capture relative relationships across time and across subjects. The foundation model is trained on 1 billion segments from 87,376 participants, and achieves strong performance across multiple downstream tasks, including human activity recognition and gait metric regression. To our knowledge, we are the first to show the generalizability of a foundation model trained on wearable motion data across distinct evaluation tasks. Code: https://github.com/maxxu05/relcon
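The relative contrastive idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes candidates are ranked by some distance to the anchor (here a fixed Euclidean distance stands in for the paper's learnable measure), and each candidate in turn acts as the positive while all strictly farther candidates serve as negatives.

```python
import numpy as np

def relative_contrastive_loss(anchor, candidates, dist_fn, tau=0.1):
    """Sketch of a relative contrastive loss.

    Candidates are sorted by dist_fn to the anchor; for each rank, that
    candidate is the positive and every farther candidate is a negative,
    so the model is pushed to respect the full relative ordering rather
    than a single binary positive/negative split.
    """
    d = np.array([dist_fn(anchor, c) for c in candidates])
    order = np.argsort(d)            # nearest candidate first
    sims = -d                        # similarity = negative distance
    loss = 0.0
    for i in range(len(order) - 1):
        pos = order[i]
        negs = order[i + 1:]         # everything farther than the positive
        logits = sims[np.concatenate(([pos], negs))] / tau
        logits = logits - logits.max()   # numerical stability
        # -log softmax of the positive over {positive} U negatives
        loss += -logits[0] + np.log(np.exp(logits).sum())
    return loss / (len(order) - 1)
```

In the actual model, `dist_fn` would be the learned motif-aware distance and the similarities would come from embeddings of the time-series segments; the sketch only shows how a ranked candidate set induces one contrastive term per rank.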
