Joint Learning of 3D Shape Retrieval and Deformation

Mikaela Angelina Uy, Vladimir G. Kim, Minhyuk Sung, Noam Aigerman, Siddhartha Chaudhuri and Leonidas Guibas

CVPR 2021

[Figure: network overview]

Introduction

We propose a novel technique for producing high-quality 3D models that match a given target object image or scan. Our method is based on retrieving an existing shape from a database of 3D models and then deforming its parts to match the target shape. Unlike previous approaches that independently focus on either shape retrieval or deformation, we propose a joint learning procedure that simultaneously trains the neural deformation module along with the embedding space used by the retrieval module. This enables our network to learn a deformation-aware embedding space, so that retrieved models are more amenable to matching the target after an appropriate deformation. In fact, we use the embedding space to guide the shape pairs used to train the deformation module, so that it invests its capacity in learning deformations between meaningful shape pairs. Furthermore, our novel part-aware deformation module can work with inconsistent and diverse part structures on the source shapes. We demonstrate the benefits of our joint training not only on our novel framework, but also on other state-of-the-art neural deformation modules proposed in recent years. Lastly, we show that our jointly trained method outperforms various non-joint baselines. Our project page can be found here, and the arXiv version of our paper can be found here.

@inproceedings{uy-joint-cvpr21,
      title = {Joint Learning of 3D Shape Retrieval and Deformation},
      author = {Mikaela Angelina Uy and Vladimir G. Kim and Minhyuk Sung and Noam Aigerman and Siddhartha Chaudhuri and Leonidas Guibas},
      booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      year = {2021}
  }

Data download and preprocessing details

Dataset downloads can be found in the links below. These should be extracted into the project home folder.

  1. Raw source shapes are here.

  2. Processed h5 and pickle files are here.

  3. Targets:

    • [Optional] Point clouds (these are already included in the processed h5 files)
    • Images: chair, table, cabinet. You also need to set IMAGE_BASE_DIR to the correct path in the image training and evaluation scripts, as sketched after this list.
  4. Automatic segmentation (ComplementMe)

    • Source shapes are here.
    • Processed h5 and pickle files are here.
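
For the image targets, the edit amounts to a one-line path change at the top of the image scripts. A minimal sketch, assuming the images were extracted into the project home folder (the path below is illustrative; point it at your actual image directory):

# In the image training and evaluation scripts (e.g. train_deformation_images.py, evaluate_images.py)
IMAGE_BASE_DIR = "/path/to/project_home/images/"  # illustrative path; set to where you extracted the image targets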

For more details on the pre-processing scripts, please take a look at run_preprocessing.py and generate_combined_h5.py. run_preprocessing.py includes the details on how the connectivity constraints and projection matrices are defined. We use the keypoint_based constraint to define our source model constraints in the paper.
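
A minimal invocation, assuming both scripts are run from the project home folder with their default arguments (check each script's argument parser for the exact flags):

# Compute the connectivity constraints and projection matrices for the source shapes
python run_preprocessing.py

# Merge the per-shape outputs into the combined h5 files used for training
python generate_combined_h5.py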

The renderer used throughout the project can be found here. Please modify the paths, including the input and output directories, accordingly in global_variables.py if you want to process your own data.
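
As a sketch, the edit repoints a few path variables (the variable names below are hypothetical, for illustration only; use the ones actually defined in global_variables.py):

# global_variables.py -- hypothetical variable names for illustration
g_input_dir = "/path/to/your/shapes/"    # directory of meshes to render
g_output_dir = "/path/to/renders/"       # where rendered images are written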

Pre-trained Models

The pretrained models for Ours and Ours w/ IDO, which use our joint training approach, can be found here. We also include the pretrained models of our structure-aware deformation-only network, which are trained on random source-target pairs and used to initialize our joint training.

Evaluation

Example commands to run the evaluation scripts are as follows. The flags can be changed as desired: --mesh_visu renders the output results into images; remove the flag to disable rendering. Note that --category specifies the object category; set it to "chair", "table", or "storagefurniture" for the chair, table, and cabinet classes, respectively.

For point clouds:

python evaluate.py --logdir=ours_ido_pc_chair/ --dump_dir=dump_ours_ido_pc_chair/ --joint_model=1 --use_connectivity=1 --use_src_encoder_retrieval=1 --category=chair --use_keypoint=1 --mesh_visu=1

python evaluate_recall.py --logdir=ours_ido_pc_chair/ --dump_dir=dump_ours_ido_pc_chair/ --category=chair

For images:

python evaluate_images.py --logdir=ours_ido_img_chair/ --dump_dir=dump_ours_ido_img_chair/ --joint_model=1 --use_connectivity=1 --category=chair --use_src_encoder_retrieval=1 --use_keypoint=1 --mesh_visu=1

python evaluate_images_recall.py --logdir=ours_ido_img_chair/ --dump_dir=dump_ours_ido_img_chair/ --category=chair

Training

  • To train deformation-only networks on random source-target pairs, example commands are as follows:
# For point clouds
python train_deformation_final.py --logdir=log/ --dump_dir=dump/ --to_train=1 --use_connectivity=1 --category=chair --use_keypoint=1 --use_symmetry=1

# For images
python train_deformation_images.py --logdir=log/ --dump_dir=dump/ --to_train=1 --use_connectivity=1 --category=storagefurniture --use_keypoint=1 --use_symmetry=1

  • To train our joint models without IDO (Ours), example commands are as follows:
# For point clouds
python train_region_final.py --logdir=log/ --dump_dir=dump/ --to_train=1 --init_deformation=1 --loss_function=regression --distance_function=mahalanobis --use_connectivity=1 --use_src_encoder_retrieval=1 --category=chair --model_init=df_chair_pc/ --selection=retrieval_candidates --use_keypoint=1 --use_symmetry=1

# For images
python train_region_images.py --logdir=log/ --dump_dir=dump/ --to_train=1 --use_connectivity=1 --selection=retrieval_candidates --use_src_encoder_retrieval=1 --category=chair --use_keypoint=1 --use_symmetry=1 --init_deformation=1 --model_init=df_chair_img/

  • To train our joint models with IDO (Ours w/ IDO), example commands are as follows:
# For point clouds
python joint_with_icp.py --logdir=log/ --dump_dir=dump/ --to_train=1 --loss_function=regression --distance_function=mahalanobis --use_connectivity=1 --use_src_encoder_retrieval=1 --category=chair --model_init=df_chair_pc/ --selection=retrieval_candidates --use_keypoint=1 --use_symmetry=1 --init_deformation=1 --use_icp_pp=1 --fitting_loss=l2

# For images
python joint_icp_images.py --logdir=log/ --dump_dir=dump/ --to_train=1 --init_joint=1 --loss_function=regression --distance_function=mahalanobis --use_connectivity=1 --use_src_encoder_retrieval=1 --category=chair --model_init=df_chair_img/ --selection=retrieval_candidates --use_keypoint=1 --use_symmetry=1 --init_deformation=1 --use_icp_pp=1 --fitting_loss=l2

Note that our joint training approach is enabled by setting the flag --selection=retrieval_candidates.

Related Work

This work and codebase are related to the following previous works:

License

This repository is released under the MIT License (see the LICENSE file for details).
