Overview

Neural Deformation Graphs

Project Page | Paper | Video


Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction
Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Justus Thies, Angela Dai, Matthias Nießner
CVPR 2021 (Oral Presentation)

This repository contains the code for the CVPR 2021 paper Neural Deformation Graphs, a novel approach for globally-consistent deformation tracking and 3D reconstruction of non-rigid objects.

Specifically, we implicitly model a deformation graph via a deep neural network and impose per-frame viewpoint consistency as well as inter-frame graph and surface consistency constraints in a self-supervised fashion.

This results in a differentiable construction of a deformation graph that can handle the deformations present across the whole sequence.
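To make the deformation-graph idea more concrete, here is a minimal sketch of a generic embedded-deformation warp (in the spirit of classic embedded deformation graphs, not the exact parametrization optimized by the network): each graph node carries a rotation and a translation, and points are warped by a skinning-weighted blend of the per-node transformations. All names and the inverse-distance weighting below are illustrative.

import numpy as np

def warp_points(points, node_positions, node_rotations, node_translations, k=4):
    """Warp points with a generic embedded deformation graph.

    points:            (P, 3) surface points
    node_positions:    (N, 3) graph node positions g_i
    node_rotations:    (N, 3, 3) per-node rotation matrices R_i
    node_translations: (N, 3) per-node translations t_i
    """
    # Distances from every point to every node -> (P, N)
    d = np.linalg.norm(points[:, None, :] - node_positions[None, :, :], axis=-1)

    # Skinning weights from the k nearest nodes (inverse-distance, normalized)
    nearest = np.argsort(d, axis=1)[:, :k]
    w = np.zeros_like(d)
    rows = np.arange(points.shape[0])[:, None]
    w[rows, nearest] = 1.0 / (d[rows, nearest] + 1e-8)
    w /= w.sum(axis=1, keepdims=True)

    # x' = sum_i w_i(x) * (R_i (x - g_i) + g_i + t_i)
    local = points[:, None, :] - node_positions[None, :, :]             # (P, N, 3)
    rotated = np.einsum('nij,pnj->pni', node_rotations, local)          # (P, N, 3)
    per_node = rotated + node_positions[None, :, :] + node_translations[None, :, :]
    return (w[:, :, None] * per_node).sum(axis=1)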

Install all dependencies

  • Download the latest version of conda here.

  • To create a conda environment with all the required packages, run the following command:

conda env create -f resources/env.yml

The above command creates a conda environment with the name ndg.

  • Compile the external dependencies inside the external directory by executing:
conda activate ndg
./build_external.sh

The external dependencies are PyMarchingCubes, gaps and Eigen.
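As a quick sanity check of the setup (a convenience sketch, assuming the ndg environment provides PyTorch and Open3D, which the visualization relies on), you can verify that the core packages import:

# Run inside the activated `ndg` environment.
import torch
import open3d as o3d

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("Open3D:", o3d.__version__)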

Generate data for visualization & training

In our experiments we use depth inputs from 4 camera views. These depth maps were captured with 4 Kinect Azure sensors. For quantitative evaluation we also used synthetic data, where 4 depth views were rendered from ground-truth meshes. In both cases, screened Poisson reconstruction (implemented in MeshLab) was used to obtain meshes for data generation. An example set of meshes from the synthetic doozy sequence can be downloaded here.
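If MeshLab is not at hand, a comparable screened Poisson reconstruction can also be run with Open3D. The sketch below only illustrates that alternative, it is not the pipeline used to produce the released data; the input and output file names are placeholders.

import open3d as o3d

# Fused point cloud from the 4 depth views (placeholder path); Poisson
# reconstruction needs normals, so estimate them first.
pcd = o3d.io.read_point_cloud("fused_views.ply")
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Screened Poisson surface reconstruction (Open3D's implementation).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Placeholder output name; follow whatever naming the meshes in out/meshes/doozy use.
o3d.io.write_triangle_mesh("out/meshes/doozy/frame_000000.ply", mesh)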

To generate training data from these meshes, place them into the directory out/meshes/doozy. Then run the following script to execute data generation, producing data samples in out/dataset/doozy:

./generate_data.sh

Visualize neural deformation graphs using pre-trained models

After data generation you can already check out the neural deformation graph estimation using a pre-trained model checkpoint. Place it into the out/models directory and run the visualization:

./viz.sh

Reconstruction visualization can take longer; if you want to check out only the graphs, you can uncomment the --viz_only_graph argument in viz.sh.

Within the Open3D viewer, you can toggle different settings using the following keys (a sketch of how such key callbacks can be registered in Open3D follows the list):

  • N: toggle graph nodes and edges
  • G: toggle ground truth
  • D: show next
  • A: show previous
  • S: toggle smooth shading

Train a model from scratch

You can train a model from scratch using the train_graph.sh and train_shape.sh scripts, in that order. Model checkpoints and tensorboard stats will be stored in out/experiments.

Optimize graph

To estimate a neural deformation graph from input observations, you need to specify the dataset to be used (inside out/dataset; it should be generated beforehand), and then training can be started using the following script:

./train_graph.sh

We ran all our experiments on an NVIDIA 2080 Ti GPU for about 500k iterations. After the model has converged, you can visualize the optimized neural deformation graph using the viz.sh script.

To monitor convergence, you can visualize the loss curves with tensorboard by running the following inside the out/experiments directory:

tensorboard --logdir=.
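Besides the TensorBoard UI, you can also read the logged scalars programmatically with TensorBoard's event reader, for example to script a convergence check. The experiment path and scalar tag below are placeholders, since the actual tag names depend on the training code:

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Placeholder: point this at one run directory inside out/experiments.
acc = EventAccumulator("out/experiments/my_graph_run")
acc.Reload()

print("available scalar tags:", acc.Tags()["scalars"])

# "total_loss" is a placeholder tag; pick one of the tags printed above.
events = acc.Scalars("total_loss")
print("last 5 values:", [(e.step, e.value) for e in events[-5:]])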

Optimize shape

To optimize shape, you need to initialize the graph with a pre-trained graph model. That means that inside train_shape.sh you need to specify graph_model_path, which should point to the converged checkpoint of the graph model (the graph model usually converges at around 500k iterations). The multi-MLP model can then be optimized to reconstruct the shape geometry by running:

./train_shape.sh
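Before launching shape optimization, it can be worth verifying that the path you set as graph_model_path is a readable checkpoint. This is a minimal sketch assuming a standard PyTorch checkpoint file; the path is a placeholder:

import torch

# Placeholder path; use the value you set as graph_model_path in train_shape.sh.
ckpt_path = "out/experiments/my_graph_run/checkpoints/500000.pt"

checkpoint = torch.load(ckpt_path, map_location="cpu")
# The actual contents depend on the training code; just confirm it loads and inspect the keys.
print(list(checkpoint.keys()) if isinstance(checkpoint, dict) else type(checkpoint))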

Similar to graph optimization, shape optimization also converges in about 500k iterations.
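After convergence, the reconstructed surface can be extracted from the implicit shape representation with marching cubes (PyMarchingCubes, built earlier, serves this kind of step). Here is a generic, self-contained illustration using scikit-image's implementation on a toy SDF volume; it is only a stand-in, not the repository's extraction code:

import numpy as np
from skimage import measure

# Toy SDF volume: a sphere of radius 0.5 sampled on a 64^3 grid. In practice the
# volume would come from evaluating the optimized implicit shape model on a grid.
coords = np.linspace(-1.0, 1.0, 64)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Extract the zero level set as a triangle mesh.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print("vertices:", verts.shape, "faces:", faces.shape)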

Citation

If you find our work useful in your research, please consider citing:

@article{bozic2021neuraldeformationgraphs,
title={Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction},
author={Bo{\v{z}}i{\v{c}}, Alja{\v{z}} and Palafox, Pablo and Zollh{\"o}fer, Michael and Dai, Angela and Thies, Justus and Nie{\ss}ner, Matthias},
journal={CVPR},
year={2021}
}

Related work

Some other related works on non-rigid reconstruction by our group:

License

The code from this repository is released under the MIT license, except where otherwise stated (i.e., Eigen).
