We are More than Our JOints: Predicting How 3D Bodies Move

Citation

This repo contains the official implementation of our paper MOJO:

@inproceedings{Zhang:CVPR:2021,
  title = {We are More than Our Joints: Predicting how {3D} Bodies Move},
  author = {Zhang, Yan and Black, Michael J. and Tang, Siyu},
  booktitle = {Proceedings IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2021},
  month_numeric = {6}
}

License

The MOJO code is released under the CC BY-NC-SA 4.0 license, which covers

models/fittingop.py
experiments/utils/batch_gen_amass.py
experiments/utils/utils_canonicalize_amass.py
experiments/utils/utils_fitting_jts2mesh.py
experiments/utils/vislib.py
experiments/vis_*_amass.py

The rest of the code is developed based on DLow and, accordingly, follows its CMU license.

Environment & code structure

  • Tested OS: Linux Ubuntu 18.04
  • Packages:
  • Note: All scripts should be run from the root of this repo to avoid path issues. Also, please update the path configurations in the code to match your local setup; otherwise errors will occur.

Training

Training is split into two steps. Given the config file experiments/cfg/amass_mojo_f9_nsamp50.yml, we can run

  • python experiments/train_MOJO_vae.py --cfg amass_mojo_f9_nsamp50 to train MOJO
  • python experiments/train_MOJO_dlow.py --cfg amass_mojo_f9_nsamp50 to train DLow (a small wrapper that chains both steps is sketched after this list)
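
The two stages above can also be chained with a small convenience wrapper. The sketch below is not part of the repo; it simply runs the two commands from the list in sequence via subprocess, assuming it is executed from the repository root.

# Minimal convenience wrapper (not part of the original repo): run both
# training stages for one config in sequence, from the repository root.
import subprocess
import sys

CFG = "amass_mojo_f9_nsamp50"

for script in ("experiments/train_MOJO_vae.py", "experiments/train_MOJO_dlow.py"):
    cmd = [sys.executable, script, "--cfg", CFG]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop immediately if a stage fails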

Evaluation

The experiments/eval_*.py files are for evaluation. The eval_*_pred.py scripts can either evaluate results while predicting, or save results to a file for further evaluation and visualization. For example, python experiments/eval_kps_pred.py --cfg amass_mojo_f9_nsamp50 --mode vis saves files to the folder results/amass_mojo_f9_nsamp50.
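
For a quick look at what such a --mode vis run produced, a small listing script like the sketch below can help. It only enumerates files under the results/amass_mojo_f9_nsamp50 folder named in the example above and makes no assumption about their format.

# Quick sanity check (not part of the repo): list whatever an
# eval_*_pred.py run with "--mode vis" wrote to results/<cfg>.
from pathlib import Path

results_dir = Path("results/amass_mojo_f9_nsamp50")  # folder from the example above
for path in sorted(results_dir.rglob("*")):
    if path.is_file():
        print(f"{path.relative_to(results_dir)}  ({path.stat().st_size} bytes)")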

Generation

In MOJO, the recursive projection scheme recovers valid 3D bodies from the predicted markers. The relevant implementation is mainly in models/fittingop.py and experiments/test_recursive_proj.py. An example run is

python experiments/test_recursive_proj.py --cfg amass_mojo_f9_nsamp50 --testdata ACCAD --gpu_index 0
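
To illustrate the underlying idea only (this is not the models/fittingop.py implementation, and it ignores the recursive, frame-by-frame aspect), the sketch below fits SMPL-X parameters to a single frame of markers by minimizing the distance between the selected mesh vertices and the predicted marker locations. The function name, regularizer weight, and optimizer settings are placeholders.

# Conceptual sketch only: fit SMPL-X parameters to one frame of markers.
# The actual recursive projection lives in models/fittingop.py.
import torch
import smplx

def fit_body_to_markers(markers, marker_vids, model_path, n_iter=100):
    """markers: (M, 3) tensor of predicted marker locations for one frame.
    marker_vids: list of M SMPL-X vertex indices defining the marker set."""
    body = smplx.create(model_path, model_type="smplx", gender="neutral")
    transl = torch.zeros(1, 3, requires_grad=True)
    global_orient = torch.zeros(1, 3, requires_grad=True)
    body_pose = torch.zeros(1, 63, requires_grad=True)   # 21 body joints, axis-angle
    betas = torch.zeros(1, 10, requires_grad=True)
    optim = torch.optim.Adam([transl, global_orient, body_pose, betas], lr=0.01)

    for _ in range(n_iter):
        optim.zero_grad()
        out = body(transl=transl, global_orient=global_orient,
                   body_pose=body_pose, betas=betas, return_verts=True)
        pred = out.vertices[0, marker_vids]               # model-side markers (M, 3)
        loss = ((pred - markers) ** 2).sum(dim=-1).mean() # marker distance
        loss = loss + 1e-3 * (body_pose ** 2).mean()      # simple pose regularizer
        loss.backward()
        optim.step()
    return transl, global_orient, body_pose, betas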

Datasets

In MOJO, we have used AMASS, Human3.6M, and HumanEva.

For Human3.6M and HumanEva, we follow the same pre-processing step as in DLow, VideoPose3D, and others. Please refer to their pages, e.g. this one, for details.

For AMASS, we canonicalize the motion sequences with our own procedure; the details are in experiments/utils_canonicalize_amass.py. We find this sequence canonicalization to be important. The canonicalized AMASS data used in our work can be downloaded here, which includes the names of the random ACCAD and BMLhandball samples used in our experiments on motion realism.
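
As a rough illustration of what sequence canonicalization means here (the actual procedure is in experiments/utils_canonicalize_amass.py and may differ), the sketch below translates a joint sequence so the first-frame pelvis sits at the origin and rotates it about the vertical axis so the body initially faces a fixed direction. The joint indices and the z-up convention are assumptions.

# Illustrative sketch, not the AMASS canonicalization used in the paper.
# Assumes joints has shape (T, J, 3), pelvis is joint 0, hips are joints 1/2,
# and z is the up axis.
import numpy as np

def canonicalize(joints, pelvis=0, l_hip=1, r_hip=2):
    joints = joints - joints[0, pelvis]                     # first-frame pelvis -> origin
    across = joints[0, l_hip] - joints[0, r_hip]            # left-right axis at frame 0
    forward = np.cross(np.array([0.0, 0.0, 1.0]), across)   # facing direction (z-up)
    forward /= np.linalg.norm(forward)
    yaw = np.arctan2(forward[1], forward[0])
    c, s = np.cos(-yaw), np.sin(-yaw)                       # rotate facing to +x
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return joints @ rot.T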

Models

For human body modeling, we employ the SMPL-X parametric body model. You need to follow their license and download the model. Based on SMPL-X, we can represent the body either by its joints or by a sparse set of body mesh vertices (the body markers).

  • CMU: 41 markers; the corresponding SMPL-X mesh vertex IDs can be downloaded here.
  • SSM2: 64 markers; the corresponding SMPL-X mesh vertex IDs can be downloaded here.
  • Joints: we use 22 joints. No download is needed; they are obtained directly from the SMPL-X body model (see the code for details and the sketch after this list).
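
The sketch below shows the general idea of extracting either representation from a posed SMPL-X body. The smplx calls follow the official smplx Python package; the marker-ID file name and its format (a flat JSON list of vertex indices) are assumptions, standing in for the downloaded marker files.

# Sketch of using SMPL-X output either as joints or as sparse markers.
# The JSON file name/format for the marker vertex IDs is an assumption.
import json
import torch
import smplx

body_model = smplx.create("/path/to/smplx/models", model_type="smplx",
                          gender="neutral")
output = body_model(return_verts=True)       # default (zero) pose and shape

joints = output.joints[:, :22]               # the 22 body joints used in MOJO

with open("CMU_markerset.json") as f:        # hypothetical marker-ID file
    marker_vids = json.load(f)               # e.g. a list of 41 vertex indices
markers = output.vertices[:, marker_vids]    # (1, 41, 3) marker locations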

Our CVAE model configurations are in experiments/cfg. The pre-trained checkpoints can be downloaded here.

Related projects

  • AMASS: It unifies diverse motion capture data with the SMPL-H model, and provides a large-scale high-quality dataset. Its official codebase and tutorials are in this github repo.

  • GRAB: Most mocap data contains only the body motion. GRAB, however, provides high-quality data of human-object interactions: besides the body motion, the object motion and the hand-object contacts are captured simultaneously. More demonstrations are in its github repo.

Acknowledgement & disclaimer

We thank Nima Ghorbani for the advice on the body marker setting and the AMASS dataset. We thank Yinghao Huang, Cornelia Köhler, Victoria Fernández Abrevaya, and Qianli Ma for proofreading. We thank Xinchen Yan and Ye Yuan for discussions on baseline methods. We thank Shaofei Wang and Siwei Zhang for their help with the user study and the presentation, respectively.

MJB has received research gift funds from Adobe, Intel, Nvidia, Facebook, and Amazon. While MJB is a part-time employee of Amazon, his research was performed solely at, and funded solely by, Max Planck. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH.
