MotionBERT: A Unified Perspective on Learning Human Motion Representations

ICCV 2023

Wentao Zhu    Xiaoxuan Ma    Zhaoyang Liu    Libin Liu    Wayne Wu    Yizhou Wang

Peking University     Shanghai AI Laboratory

[Paper]   [Code]   [Video]



[Teaser figure]

Abstract

We present a unified perspective on tackling various human-centric video tasks by learning human motion representations from large-scale and heterogeneous data resources. Specifically, we propose a pretraining stage in which a motion encoder is trained to recover the underlying 3D motion from noisy, partial 2D observations. The motion representations acquired in this way incorporate geometric, kinematic, and physical knowledge about human motion, which can be easily transferred to multiple downstream tasks. We implement the motion encoder with a Dual-stream Spatio-temporal Transformer (DSTformer) neural network. It captures long-range spatio-temporal relationships among the skeletal joints comprehensively and adaptively, achieving the lowest 3D pose estimation error to date when trained from scratch. Furthermore, our proposed framework achieves state-of-the-art performance on all three downstream tasks (3D pose estimation, skeleton-based action recognition, and human mesh recovery) by finetuning the pretrained motion encoder with a simple regression head (1-2 layers), which demonstrates the versatility of the learned motion representations.
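To make the pretrain-then-finetune recipe above concrete, the sketch below shows the overall idea in PyTorch. It is a minimal illustration, not the actual implementation: the class names (ToyMotionEncoder, LiftingHead, corrupt_2d), hyperparameters, and corruption scheme are hypothetical stand-ins, and the plain transformer encoder here only approximates the role of the DSTformer.

```python
# Minimal, self-contained sketch of the pretrain-finetune recipe (illustrative only).
import torch
import torch.nn as nn

class ToyMotionEncoder(nn.Module):
    """Stand-in motion encoder: embeds per-joint 2D keypoints and runs a
    transformer over the flattened (time x joint) token sequence."""
    def __init__(self, num_joints=17, in_dim=3, feat_dim=128, depth=4, heads=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, feat_dim)  # (x, y, confidence) -> feature
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.feat_dim = feat_dim

    def forward(self, x2d):                        # x2d: (B, T, J, 3)
        B, T, J, _ = x2d.shape
        tokens = self.embed(x2d).reshape(B, T * J, self.feat_dim)
        feats = self.encoder(tokens)               # (B, T*J, feat_dim)
        return feats.reshape(B, T, J, self.feat_dim)

class LiftingHead(nn.Module):
    """Pretraining head: regress 3D joint positions from motion features."""
    def __init__(self, feat_dim=128, out_dim=3):
        super().__init__()
        self.fc = nn.Linear(feat_dim, out_dim)

    def forward(self, feats):                      # (B, T, J, feat_dim) -> (B, T, J, 3)
        return self.fc(feats)

def corrupt_2d(x2d, mask_prob=0.15, noise_std=0.02):
    """Simulate noisy, partial 2D observations by adding Gaussian noise and
    randomly masking joints (an illustrative scheme, not the paper's exact one)."""
    noisy = x2d + noise_std * torch.randn_like(x2d)
    keep = (torch.rand(x2d.shape[:3], device=x2d.device) > mask_prob).float()
    return noisy * keep.unsqueeze(-1)

# Pretraining step: recover 3D motion from corrupted 2D input.
encoder, lift_head = ToyMotionEncoder(), LiftingHead()
x2d = torch.randn(2, 16, 17, 3)    # dummy 2D keypoint clips (B, T, J, x/y/conf)
gt3d = torch.randn(2, 16, 17, 3)   # dummy 3D ground truth   (B, T, J, x/y/z)
pred3d = lift_head(encoder(corrupt_2d(x2d)))
pretrain_loss = nn.functional.mse_loss(pred3d, gt3d)

# Finetuning: reuse the pretrained encoder and swap in a shallow task head,
# e.g. a 1-2 layer classifier for skeleton-based action recognition.
num_actions = 60
action_head = nn.Sequential(nn.LayerNorm(128), nn.Linear(128, num_actions))
clip_feat = encoder(x2d).mean(dim=(1, 2))          # pool over time and joints
action_logits = action_head(clip_feat)             # (B, num_actions)
```

The key design point conveyed above is that only the task head changes between pretraining and each downstream task; the motion encoder and its learned representation are shared.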

Results

[Demo visualization]

Video Demo

BibTeX