MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images

Overview

This repository contains the implementation of our paper MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images.

You can find detailed usage instructions for training your own models and using pretrained models below.

If you find our code useful, please cite:

@InProceedings{MetaAvatar:NeurIPS:2021,
  title = {MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images},
  author = {Shaofei Wang and Marko Mihajlovic and Qianli Ma and Andreas Geiger and Siyu Tang},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2021}
}

Installation

This repository has been tested on the following platform:

  1. Python 3.7, PyTorch 1.7.1 with CUDA 10.2 and cuDNN 7.6.5, Ubuntu 20.04

To clone the repo, run either:

git clone --recursive https://github.com/taconite/MetaAvatar-release.git

or

git clone https://github.com/taconite/MetaAvatar-release.git
git submodule update --init --recursive

First you have to make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.

You can create an anaconda environment called meta-avatar using

conda env create -f environment.yml
conda activate meta-avatar
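
To verify the environment, a quick sanity check (assuming the meta-avatar environment is active) is:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

This should print 1.7.1 True on a machine with a working CUDA 10.2 setup.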

(Optional) If you want to use the evaluation code under evaluation/, you need to install kaolin. Download the code from the kaolin repository, check out commit e7e513173bd4159ae45be6b3e156a3ad156a3eb9, and install it according to its instructions.
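
For reference, the kaolin setup might look like the following sketch; kaolin's own installation instructions take precedence:

git clone https://github.com/NVIDIAGameWorks/kaolin.git
cd kaolin
git checkout e7e513173bd4159ae45be6b3e156a3ad156a3eb9
python setup.py develop  # or the install command given in kaolin's instructions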

Build the dataset

To prepare the dataset for training/fine-tuning/evaluation, first download the CAPE dataset from the CAPE website.

  1. Download SMPL v1.0, clean up the chumpy objects inside the models using this code, then rename the model files and move them to ./body_models/smpl/ (a sketch of these steps follows the directory listing below). Eventually, the ./body_models folder should have the following structure:
    body_models
     └-- smpl
     	├-- male
     	|   └-- model.pkl
     	└-- female
     	    └-- model.pkl
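
    For reference, the clean-up and rename steps might look like the sketch below. It assumes the clean_ch.py script from the SMPL-X repository for the chumpy clean-up and the standard SMPL v1.0 file names; adjust the paths to your download location:

    # remove chumpy objects from the SMPL pickles (clean_ch.py comes from the SMPL-X repository)
    python tools/clean_ch.py --input-models /path/to/smpl/models/*.pkl --output-folder /path/to/smpl/models_clean
    # rename the cleaned models and move them into place
    mkdir -p body_models/smpl/male body_models/smpl/female
    cp /path/to/smpl/models_clean/basicmodel_m_lbs_10_207_0_v1.0.0.pkl body_models/smpl/male/model.pkl
    cp /path/to/smpl/models_clean/basicModel_f_lbs_10_207_0_v1.0.0.pkl body_models/smpl/female/model.pkl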
    
    

(Optional) If you want to use the evaluation code under evaluation/, you need to download all the .pkl files from the IP-Net repository and put them under ./body_models/misc/.

Finally, run the following script to extract necessary SMPL parameters used in our code:

python extract_smpl_parameters.py

The extracted SMPL parameters will be saved to ./body_models/misc/.

  2. Extract the CAPE dataset to an arbitrary path, denoted as ${CAPE_ROOT}. The extracted dataset should have the following structure:
    ${CAPE_ROOT}
     ├-- 00032
     ├-- 00096
     |   ...
     ├-- 03394
     └-- cape_release
    
    
  3. Create a data directory under the project directory.
  4. Modify the parameters in preprocess/build_dataset.sh accordingly (i.e. set --dataset_path to ${CAPE_ROOT}) to extract training/fine-tuning/evaluation data; see the sketch after this list.
  5. Run preprocess/build_dataset.sh to preprocess the CAPE dataset.
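
As a sketch of steps 4 and 5, assuming --dataset_path appears literally in the script (you can equally edit the file by hand):

# point the script at your CAPE location, then run it
sed -i "s|--dataset_path [^ ]*|--dataset_path ${CAPE_ROOT}|g" preprocess/build_dataset.sh
bash preprocess/build_dataset.sh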

(Optional) If you want to evaluate performance on the interpolation task, you need to process the CAPE data again to generate processed data at the full frame rate. Simply comment out the first command and uncomment the second command in preprocess/build_dataset.sh, then run the script.

Pre-trained models

We provide pre-trained models, including 1) forward/backward skinning networks for full point clouds (stage 0), 2) forward/backward skinning networks for depth point clouds (stage 0), 3) the meta-learned static SDF (stage 1), and 4) the meta-learned hypernetwork (stage 2). After downloading them, please put them in their respective folders under ./out/metaavatar.

Fine-tuning from the pre-trained model

We provide a script to fine-tune subject/cloth-type specific avatars in batch. Simply run:

bash run_fine_tuning.sh

This conducts fine-tuning with the default setting (subject 00122 with the shortlong cloth type). You can comment/uncomment/add lines in jobs/splits to modify the data splits.

Training

To train new networks from scratch, run

python train.py --num-workers 8 configs/meta-avatar/${config}.yaml

You can train the two stage 0 models in parallel; the stage 1 model depends on the stage 0 models, and the stage 2 model depends on the stage 1 model. A sketch of the resulting schedule follows.
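
For illustration, a full from-scratch run could be scheduled as below; the config file names here are hypothetical, so substitute the actual YAML files under configs/meta-avatar/:

# stage 0: the two skinning networks can be trained in parallel (hypothetical config names)
python train.py --num-workers 8 configs/meta-avatar/stage0_skinning_full.yaml &
python train.py --num-workers 8 configs/meta-avatar/stage0_skinning_depth.yaml &
wait
# stage 1 requires the stage 0 models; stage 2 requires the stage 1 model
python train.py --num-workers 8 configs/meta-avatar/stage1_static_sdf.yaml
python train.py --num-workers 8 configs/meta-avatar/stage2_hypernetwork.yaml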

You can monitor the training process at http://localhost:6006 using TensorBoard:

tensorboard --logdir ${OUTPUT_DIR}/logs --port 6006

where you replace ${OUTPUT_DIR} with the respective output directory.

Evaluation

To evaluate the generated meshes, use the following script:

bash run_evaluation.sh

Again, this conducts evaluation with the default setting (subject 00122 with the shortlong cloth type). You can comment/uncomment/add lines in jobs/splits to modify the data splits.

License

We employ the MIT License for the MetaAvatar code, which covers:

extract_smpl_parameters.py
run_fine_tuning.py
train.py
configs
jobs/
depth2mesh/
preprocess/

The SIREN networks are borrowed from the official SIREN repository. Mesh extraction code is borrowed from the DeepSDF repository.

Modules not covered by our license are:

  1. Modified code from IP-Net (./evaluation);
  2. Modified code from SMPL-X (./human_body_prior).

For these parts, please consult their respective licenses and cite the respective papers.