SIGMA: Sinkhorn-Guided Masked Video Modeling

Abstract

Video-based pretraining offers immense potential for learning strong visual representations on an unprecedented scale. Recently, masked video modeling methods have shown promising scalability, yet fall short in capturing higher-level semantics because they reconstruct predefined low-level targets such as pixels. To tackle this, we present Sinkhorn-guided Masked Video Modeling (SIGMA), a novel video pretraining method that jointly learns the video model and a target feature space using a projection network. However, this simple modification means that the regular L2 reconstruction loss will lead to trivial solutions as both networks are jointly optimized. As a solution, we distribute features of space-time tubes evenly across a limited number of learnable clusters. By posing this as an optimal transport problem, we enforce high entropy in the generated features across the batch, infusing semantic and temporal meaning into the feature space. The resulting cluster assignments are used as targets for a symmetric prediction task where the video model predicts the cluster assignments of the projection network and vice versa. Experimental results on ten datasets across three benchmarks validate the effectiveness of SIGMA in learning more performant, temporally aware, and robust video representations, improving upon state-of-the-art methods.

Compared to VideoMAE, which uses RGB pixels as targets, we generate Sinkhorn-regularized features as reconstruction targets. This yields more semantic features and better pretraining performance.

Methodology

In this work, we propose a new framework wherein the typically predefined reconstruction target space is learned simultaneously alongside the video model. For this, a projection network is introduced that embeds both the visible and masked portions of the video, yielding deep feature reconstruction targets. However, naïvely employing the commonly used L2 reconstruction loss is ineffective: since both networks are jointly optimized, they collapse to the same output irrespective of the input, a trivial solution. To solve this, we introduce SIGMA: Sinkhorn-guided masked video modeling, where deep features of space-time tubes are regularized by optimal transport to be distributed uniformly across clusters. This effectively acts as a high-entropy regularization constraint and enforces similar space-time tube features to be assigned to the same centroid, infusing semantic meaning into the feature space. These cluster assignments and centroids are learned in an online manner using the fast Sinkhorn-Knopp algorithm, yielding feature pseudo-labels as targets. With these targets, we formulate our loss objective as a symmetric prediction task, where the features from each branch -- the video model and the projection network -- cross-predict the cluster assignment of the other. By doing so, we force the features of space-time tubes to be expressed by a limited number of clusters, encouraging semantically rich concepts, while eliminating the dependency on predefined targets such as masked pixel values, which are commonly used in prior works. Moreover, despite our cross-prediction task, we do not rely on any augmentations or crops, making our model stable and easy to train.
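To make the equipartition idea concrete, below is a minimal PyTorch sketch of a SwAV-style Sinkhorn-Knopp normalization of the kind described above. The function name, the entropy parameter `eps`, and the number of iterations are illustrative assumptions, not the exact settings used in the paper.

```python
import torch

@torch.no_grad()
def sinkhorn_knopp(scores, eps=0.05, n_iters=3):
    """Distribute tube features evenly over prototypes (SwAV-style Sinkhorn).

    scores: (N, K) similarity logits between N space-time tube features
            and K learnable prototypes.
    Returns an (N, K) soft assignment matrix whose rows sum to 1 and whose
    columns each receive roughly N / K of the batch mass (equipartition).
    """
    Q = torch.exp(scores / eps).t()          # (K, N)
    Q /= Q.sum()                             # normalize to a joint distribution
    K, N = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True)      # rows: equal mass per prototype
        Q /= K
        Q /= Q.sum(dim=0, keepdim=True)      # columns: distribution per tube
        Q /= N
    return (Q * N).t()                       # (N, K) soft assignments
```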


Overview of our proposed method SIGMA. A given video is embedded with the projection network \(\varphi\), yielding features \(\mathbf{x}^{\varphi}\). The video model \(\Psi\) predicts feature embeddings \(\mathbf{x}^\Psi\) of the masked space-time tubes. Both embeddings are projected onto the learnable prototypes \(\mathbf{C}\) representing cluster centroids. Cluster assignments are created with an adapted Sinkhorn algorithm enforcing equipartition across all prototypes. These pseudo-labels are then used as targets for the predictive task \(\mathcal{L}_{CE}\) with which the networks are optimized.
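As a rough illustration of this predictive task, the sketch below projects both branches onto the prototypes and combines the Sinkhorn targets from the helper above into a symmetric cross-entropy objective. The variable names, the temperature, and the 0.5 weighting are assumptions made for the example, not the paper's exact formulation.

```python
import torch.nn.functional as F

def sigma_loss(x_phi, x_psi, prototypes, temp=0.1):
    """Symmetric cross-prediction between the projection network and the
    video model, using Sinkhorn assignments as soft targets.

    x_phi, x_psi: (N, D) L2-normalized tube features from the projection
                  network and the video model, respectively.
    prototypes:   (K, D) L2-normalized learnable cluster centroids.
    """
    scores_phi = x_phi @ prototypes.t()      # (N, K) similarities
    scores_psi = x_psi @ prototypes.t()

    q_phi = sinkhorn_knopp(scores_phi)       # targets from the projection network
    q_psi = sinkhorn_knopp(scores_psi)       # targets from the video model

    # Each branch predicts the cluster assignments of the other branch.
    log_p_phi = F.log_softmax(scores_phi / temp, dim=1)
    log_p_psi = F.log_softmax(scores_psi / temp, dim=1)
    return -0.5 * ((q_phi * log_p_psi).sum(1).mean()
                   + (q_psi * log_p_phi).sum(1).mean())
```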


Results

We evaluate our method on a total of ten datasets across three common benchmark settings. In Benchmark I, we first compare our approach against state-of-the-art video models in a linear probing (frozen backbone) setting and the standard full finetuning setting. Then, in Benchmark II, we assess semantic spatial and temporal understanding by reporting unsupervised semantic segmentation performance and visualizing the segmentation masks. In Benchmark III, we evaluate our approach on the SEVERE benchmark, which is specifically designed to analyze the generalization performance of video models.



Benchmark I


Frozen evaluation of masked video modeling methods. A linear layer on top of the frozen ViT-B backbone is optimized. The ViT-B backbones are pretrained on Kinetics-400 (K400). SIGMA consistently and considerably outperforms previous masked video modeling works across all video and image datasets.
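For reference, a minimal sketch of such a frozen (linear probing) protocol is given below. It assumes a `backbone` that returns pooled clip features of dimension `feat_dim`; the training hyperparameters are illustrative, not those of our evaluation.

```python
import torch
import torch.nn as nn

def linear_probe(backbone, feat_dim, num_classes, loader, epochs=10, lr=1e-3):
    """Train a single linear layer on top of a frozen backbone."""
    backbone.eval()                                   # frozen ViT-B encoder
    for p in backbone.parameters():
        p.requires_grad = False

    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.AdamW(head.parameters(), lr=lr)

    for _ in range(epochs):
        for clips, labels in loader:
            with torch.no_grad():
                feats = backbone(clips)               # (B, feat_dim) pooled features
            loss = nn.functional.cross_entropy(head(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```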


Comparison for full finetuning on Something-Something V2 (SSv2). The top part comprises supervised methods, while the remaining methods are pretrained in a self-supervised manner. The middle section evaluates models that use Kinetics-400 (K400) data for pretraining, whereas the bottom part mainly uses SSv2 data. We compare against previous methods pretraining the ViT-Base backbone for 800 epochs. A full table including previous works with different pretraining setups is provided in the supplemental. M. Guid. denotes motion guidance, such as optical flow used e.g. for reconstruction targets or masking. Our method achieves state-of-the-art performance on SSv2 both with and without motion guidance.


Comparison for full finetuning on Kinetics-400 (K400). We compare against previous methods pretraining the ViT-Base backbone for 800 epochs on K400 and subsequently fully finetuning the backbone with the K400 labels. A full table including previous methods with different setups is provided in the supplemental. M. Guid. denotes motion guidance, such as optical flow used e.g. for reconstruction targets or masking. Our method achieves state-of-the-art performance on K400.



Benchmark II


Unsupervised video object segmentation results on DAVIS. We visualize the ability of masked video modeling methods to produce temporally consistent semantic segmentation masks. For this, K-means with K=6 is applied to the space-time features extracted from each input clip, assigning each tube feature to a cluster. The resulting cluster maps are then resized to the input resolution and overlaid on the input. SIGMA provides more coherent and consistent object cluster maps than other methods, showing that our learned features have a better temporal and spatial understanding.
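A minimal sketch of how such cluster maps could be produced is shown below, assuming the tube features of one clip are laid out on a (t, h, w) token grid; the helper name and arguments are illustrative.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def cluster_maps(tube_feats, t, h, w, frame_size, k=6):
    """Assign each space-time tube to one of k clusters and upsample the
    cluster map to the input resolution.

    tube_feats: (t * h * w, D) features of the space-time tubes of one clip.
    frame_size: (H, W) spatial resolution of the input frames.
    """
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(tube_feats.cpu().numpy())
    maps = torch.from_numpy(labels).float().reshape(1, 1, t, h, w)
    # Nearest-neighbour upsampling so cluster ids stay discrete.
    maps = F.interpolate(maps, size=(t, *frame_size), mode="nearest")
    return maps.reshape(t, *frame_size).long()        # per-frame cluster map
```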


Unsupervised video object segmentation. We follow the evaluation protocol of TimeTuning and report mIoU for clustering and overclustering. SIGMA consistently achieves better results than other methods across different backbones and datasets, showing that our method learns more semantically and temporally consistent features. The clip length is set to 16 for DAVIS and 4 for YTVOS. For clustering, the Hungarian algorithm matches the unsupervised segmentation clusters (K) with the ground truth (GT) per clip. For overclustering, we use K=10 with a greedy many-to-one matching protocol; see the supplemental for details.
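For the one-to-one (clustering) setting, a sketch of Hungarian matching followed by mIoU could look as follows; the overclustering protocol uses greedy many-to-one matching instead and is not covered here. Function and argument names are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_miou(pred, gt, num_clusters, num_classes):
    """Match predicted clusters to ground-truth classes one-to-one and report
    the mean IoU over the matched pairs.

    pred, gt: integer label maps of the same shape (per clip).
    """
    # Confusion matrix between clusters and ground-truth classes.
    conf = np.zeros((num_clusters, num_classes), dtype=np.int64)
    for c in range(num_clusters):
        for g in range(num_classes):
            conf[c, g] = np.logical_and(pred == c, gt == g).sum()

    # IoU for every (cluster, class) pair, then maximize the total IoU.
    union = conf.sum(0, keepdims=True) + conf.sum(1, keepdims=True) - conf
    iou = conf / np.maximum(union, 1)
    rows, cols = linear_sum_assignment(-iou)          # Hungarian matching
    return iou[rows, cols].mean()
```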



Benchmark III


SEVERE generalization. We evaluate masked video modeling methods for generalizability under domain shift, sample efficiency, action granularity, and task shift. SIGMA achieves strong generalization performance, outperforming prior works across all configurations. We use the original SEVERE codebase to evaluate the publicly available models of all methods.



Visualizations


Visualization of prototypes. We visualize the 25 space-time tubes with the highest similarity to a particular prototype inside a video. For simplicity, we visualize only the first patch of each space-time tube. We observe that different prototypes attend to particular semantic parts of the video; for example, prototype 1 corresponds to the blue parts of the car.
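Such visualizations can be obtained by ranking tubes by cosine similarity to a chosen prototype, as in the short sketch below; the helper name and the value of k are illustrative.

```python
import torch.nn.functional as F

def top_tubes_for_prototype(tube_feats, prototypes, proto_idx, k=25):
    """Return the indices of the k space-time tubes most similar to one prototype.

    tube_feats: (N, D) tube features of a single video.
    prototypes: (K, D) learned prototype vectors.
    """
    sims = F.normalize(tube_feats, dim=1) @ F.normalize(prototypes, dim=1).t()
    return sims[:, proto_idx].topk(k).indices          # tube indices to visualize
```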


Visualization of prototypes (2). We visualize the 25 space-time tubes with the highest similarity to a particular prototype inside a video. For simplicity, we visualize only the first patch of each space-time tube. We observe that different prototypes attend to particular semantic parts of the video; for example, prototype 1 corresponds to the person(s) in white.

Want to learn more about SIGMA?

Check out our paper and code!