FacET  

How Video Meetings Change Your Expression

Columbia University

Abstract

Do our facial expressions change when we speak over video calls? Given two unpaired sets of videos of people, we seek to automatically find spatio-temporal patterns that are distinctive of each set. Existing methods use discriminative approaches and perform post-hoc explainability analysis. Such methods are insufficient as they are unable to provide insights beyond obvious dataset biases, and the explanations are useful only if humans themselves are good at the task. Instead, we tackle the problem through the lens of generative domain translation: our method generates a detailed report of learned, input-dependent spatio-temporal features and the extent to which they vary between the domains. We demonstrate that our method, FacET (Facial Explanations through Translations), can discover behavioral differences between conversing face-to-face (F2F) and over video calls (VCs). We also show the applicability of our method to discovering differences in presidential communication styles. Additionally, we predict temporal change-points in videos that decouple expressions in an unsupervised way, increasing the interpretability and usefulness of our model. Finally, our method, being generative, can be used to transform a video call to appear as if it were recorded in an F2F setting. Experiments and visualizations show that our approach discovers a range of behaviors, taking a step toward a deeper understanding of human behavior.

Method

Given a sequence of facial keypoints, we learn interpretable translators which, when applied to the input, transform it into the translated domain.
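
To make this concrete, here is a minimal sketch of an input-dependent translator acting on β-VAE latents of facial keypoints. The class name `LatentTranslator`, the affine (scale-and-shift) form, and the network sizes are assumptions for illustration, not the authors' implementation:

```python
# A minimal sketch of an input-dependent affine translator over beta-VAE
# latents. All names and sizes here are illustrative, not the paper's code.
import torch
import torch.nn as nn

class LatentTranslator(nn.Module):
    """Predicts a per-dimension scale and shift conditioned on the input latent."""
    def __init__(self, latent_dim: int = 12, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # a scale and a shift per latent dim
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        scale, shift = self.net(z).chunk(2, dim=-1)
        return z * torch.exp(scale) + shift  # affine map keeps the translator interpretable

# z: (batch, 12) latents encoded from one domain's keypoint sequences;
# the translated latents can then be decoded into the other domain.
z = torch.randn(8, 12)
z_translated = LatentTranslator()(z)
```

Because the translator is a simple per-dimension affine map, its predicted scales and shifts can be read off directly, which is what makes the reports below possible.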

Discovering Differences in Facial Expressions

Our method can be used to discover fine-grained, interpretable differences between the facial expressions of the two domains in an unsupervised manner. For each cluster, FacET generates a report showing how each β-VAE latent varies across the domains.
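
A hypothetical sketch of how such a per-cluster report could be assembled, assuming per-sample β-VAE latents and cluster labels for each domain; the names `z_f2f`, `z_vc`, and `latent_report` are illustrative:

```python
# Sketch of a per-cluster latent report: for each cluster, compare the
# distribution of every beta-VAE latent across the two domains.
import numpy as np

def latent_report(z_f2f, z_vc, labels_f2f, labels_vc, n_clusters):
    """Mean per-dimension shift of each latent between domains, per cluster."""
    for c in range(n_clusters):
        a = z_f2f[labels_f2f == c]   # (n_a, 12) latents from F2F samples in cluster c
        b = z_vc[labels_vc == c]     # (n_b, 12) latents from VC samples in cluster c
        if len(a) == 0 or len(b) == 0:
            continue                 # skip clusters missing in one domain
        shift = b.mean(axis=0) - a.mean(axis=0)  # how each latent moves on VCs
        print(f"cluster {c}: per-latent shift = {np.round(shift, 3)}")
```

Large shifts along a disentangled latent dimension point to a specific, nameable behavior (e.g., a particular facial motion) that differs between the domains.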

De-zoomification

Samples to the right of the arrow are synthetic, generated by our method. We show how our method can be used to de-zoomify a video call so that it appears to have been recorded in a face-to-face setting. (Best viewed with audio 🎧🎵)
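
Under these assumptions, de-zoomification amounts to moving each frame's latent into the F2F domain and decoding back to keypoints. The following sketch assumes trained `encoder`, `translator`, and `decoder` models; none of these names come from the paper:

```python
# Sketch of de-zoomification: encode each frame's keypoints, apply the
# learned VC -> F2F translator in latent space, and decode back to keypoints
# that drive the rendered output video. Placeholder models throughout.
import torch

@torch.no_grad()
def dezoomify(keypoints_vc, encoder, translator, decoder):
    """keypoints_vc: (frames, n_keypoints, 2) sequence from a video call."""
    out = []
    for frame in keypoints_vc:
        z = encoder(frame.flatten())           # frame keypoints -> latent
        z_f2f = translator(z)                  # move the latent to the F2F domain
        out.append(decoder(z_f2f).view(frame.shape))
    return torch.stack(out)                    # F2F-style keypoint sequence
```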

Changepoint Detection

We perform k-means clustering on all predicted translators and then run change-point analysis. The matrix shows how frequently, over time, behavior switches from one cluster to another. Our method detects these changes in an unsupervised manner.
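
A minimal sketch of this step, assuming one predicted translator vector per timestamp; k-means comes from scikit-learn, and the function name `changepoint_matrix` is illustrative:

```python
# Sketch of change-point detection: cluster the predicted translators with
# k-means, then count how often the cluster label switches from i to j
# between consecutive timestamps.
import numpy as np
from sklearn.cluster import KMeans

def changepoint_matrix(translators, k=5):
    """translators: (timestamps, dim) array of predicted translator parameters."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(translators)
    counts = np.zeros((k, k), dtype=int)
    for t in range(len(labels) - 1):
        if labels[t] != labels[t + 1]:          # a change-point between clusters
            counts[labels[t], labels[t + 1]] += 1
    return counts  # counts[i, j]: how often behavior switches from cluster i to j
```

Frames where the label switches are the predicted change-points; the matrix summarizes which behavioral transitions are common.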

Visualizing Latent Reconstruction

We vary each dimension of the 12-dimensional latent obtained through β-VAE encoding, keeping the other dimensions fixed, to show disentanglement (on the ZoomIn and Presidents datasets).
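
The traversal itself can be sketched as follows, assuming a trained decoder; the function name `traverse_latents` and the sweep range are illustrative:

```python
# Sketch of a latent traversal: hold a 12-dim beta-VAE latent fixed and sweep
# one dimension at a time through a range of values, decoding each variant.
# `decoder` is a placeholder for the trained beta-VAE decoder.
import torch

@torch.no_grad()
def traverse_latents(decoder, z, low=-3.0, high=3.0, steps=7):
    """Decode variants of `z` (shape (12,)) with one dimension swept at a time."""
    frames = []
    for dim in range(z.shape[0]):
        for val in torch.linspace(low, high, steps):
            z_var = z.clone()
            z_var[dim] = val                 # vary one dimension, fix the rest
            frames.append(decoder(z_var))    # decoded output shows what this dim controls
    return frames
```

If the latents are disentangled, each sweep changes one visually coherent factor (e.g., head pose or mouth opening) while the rest of the face stays fixed.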

BibTeX

@misc{sarin2024videomeetingschangeexpression,
  title={How Video Meetings Change Your Expression}, 
  author={Sumit Sarin and Utkarsh Mall and Purva Tendulkar and Carl Vondrick},
  year={2024},
  eprint={2406.00955},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2406.00955}, 
}
The code for this website is adapted from Nerfies.