This is a PyTorch implementation of PIFuHD.

Overview

Open-PIFuhd

This is an unofficial implementation of PIFuHD.

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020)

Implementation

  • Training Coarse PIFuhd
  • Training Fine PIFuhd
  • Inference
  • Metrics (P2S, Normal, Chamfer)
  • GAN generation of front and back normal maps (under design)

Note that the pipeline I designed does not use the normal maps generated by pix2pixHD, because they are not the main difficulty in reimplementing PIFuhd. I will release GAN + PIFuhd soon.

Prerequisites

  • PyTorch>=1.6
  • json
  • PIL
  • skimage
  • tqdm
  • cv2
  • trimesh with pyembree
  • pyexr
  • PyOpenGL
  • freeglut (use sudo apt-get install freeglut3-dev for Ubuntu users)
  • (optional) EGL-related packages for rendering on headless machines (use apt install libgl1-mesa-dri libegl1-mesa libgbm1 for Ubuntu users)
  • face3d
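
A minimal sanity check for this environment, assuming the import names match the packages listed above (adjust the list if your installation exposes different module names):

# check_env.py -- quick check that the prerequisites above can be imported
import importlib

modules = ["torch", "PIL", "skimage", "tqdm", "cv2", "trimesh", "pyexr", "OpenGL", "face3d"]

for name in modules:
    try:
        importlib.import_module(name)
        print(f"[ok]      {name}")
    except ImportError as exc:
        print(f"[missing] {name}: {exc}")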

Data Processing

We use Render People as our dataset, but the data size is 296 (270 for training and 29 for testing), which is less than the 500 used in the paper.

Note that we are unable to release the full training data due to the restrictions of the commercial scans.

Initial data

I modified part of the code in PIFu (branch: PIFu-modify; download it into your project) so that it can process the directories where your models are saved.

bash ./scripts/process_obj.sh [--dir_models_path]
# e.g. bash ./scripts/process_obj.sh ../Garment/render_people_train/

Rendering data

I modified part of the code in PIFu so that it can process the directories where your models are saved.

python -m apps.render_data -i [--dir_models_path] -o [--save_processed_models_path] -s 1024 [Optional: -e]
# -e means use GPU rendering
# e.g. python -m apps.render_data -i ../Garment/render_people_train/ -o ../Garment/render_gen_1024_train/ -s 1024 -e

Render Normal Map

Render the front and back normal maps in the current project.

All config params are set in ./configs/PIFuhd_Render_People_HG_normal_map.py (see the notes below); then run:

bash ./scripts/generate.sh

# the params you can modify are in ./configs/PIFuhd_Render_People_HG_normal_map.py
# the important params here are, e.g.:
#   input_dir = '../Garment/render_gen_1024_train/' and cache = "../Garment/cache/render_gen_1024/rp_train/"
# input_dir is the output directory of the rendering step (render_gen_1024_train)
# cache is where intermediate results (e.g. points sampled from the mesh) are saved

After processing all datasets, the directory tree looks like the following:

render_gen_1024_train/
├── rp_aaron_posed_004_BLD
│   ├── GEO
│   ├── MASK
│   ├── PARAM
│   ├── RENDER
│   ├── RENDER_NORMAL
│   ├── UV_MASK
│   ├── UV_NORMAL
│   ├── UV_POS
│   ├── UV_RENDER
│   └── val.txt
├── rp_aaron_posed_005_BLD
	....
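
If you want to double-check the layout, here is a small hedged helper (not part of this repo) that verifies each subject folder contains the subdirectories shown above:

# verify_dataset.py -- check every subject folder for the expected subdirectories
import os
import sys

EXPECTED = ["GEO", "MASK", "PARAM", "RENDER", "RENDER_NORMAL",
            "UV_MASK", "UV_NORMAL", "UV_POS", "UV_RENDER"]

# default path follows the example above; pass another root as the first argument
root = sys.argv[1] if len(sys.argv) > 1 else "../Garment/render_gen_1024_train/"
for subject in sorted(os.listdir(root)):
    subject_dir = os.path.join(root, subject)
    if not os.path.isdir(subject_dir):
        continue
    missing = [d for d in EXPECTED if not os.path.isdir(os.path.join(subject_dir, d))]
    if missing:
        print(f"{subject}: missing {missing}")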

Training

Training Coarse-PIFuhd

All config params are set in ./configs/PIFuhd_Render_People_HG_coarse.py, where you can modify anything you want.

Note that this project is designed to be friendly: you can easily replace the original backbone or head with your own (a minimal sketch follows the command below) :)

bash ./scripts/train_pfhd_coarse.sh
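
For readers who want to plug in their own backbone or head, the core idea the coarse level relies on is a pixel-aligned implicit function: project each 3D sample point into the image, bilinearly sample the feature map there, and let an MLP predict occupancy. The sketch below illustrates that query; the class and tensor names are assumptions for illustration, not this repo's exact API.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedQuery(nn.Module):
    """Minimal pixel-aligned implicit function (illustrative sketch)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 1),
        )

    def forward(self, feat, xy, z):
        # feat: (B, C, H, W) features from any image backbone (e.g. a stacked hourglass)
        # xy:   (B, N, 2) projected point coordinates, normalized to [-1, 1]
        # z:    (B, N, 1) depth of each point in the camera frame
        sampled = F.grid_sample(feat, xy.unsqueeze(2), align_corners=True)  # (B, C, N, 1)
        sampled = sampled.squeeze(-1).permute(0, 2, 1)                      # (B, N, C)
        return torch.sigmoid(self.mlp(torch.cat([sampled, z], dim=-1)))     # (B, N, 1) occupancy

Training then amounts to comparing this per-point occupancy against the inside/outside labels of the sampled points.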

Training Fine-PIFuhd

The same as coarse PIFuhd: all config params are set in ./configs/PIFuhd_Render_People_HG_fine.py.

bash ./scripts/train_pfhd_fine.sh
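
The fine level follows the same query pattern but, as in the PIFuHD paper, conditions on high-resolution image features together with an intermediate embedding Omega(x) from the coarse MLP instead of the absolute depth. A hedged sketch of that wiring (again, illustrative names rather than this repo's exact classes):

import torch
import torch.nn as nn
import torch.nn.functional as F

class FineLevelQuery(nn.Module):
    """Fine-level query conditioned on coarse features (illustrative sketch)."""
    def __init__(self, fine_feat_dim=16, coarse_embed_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(fine_feat_dim + coarse_embed_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 1),
        )

    def forward(self, fine_feat, xy_fine, coarse_embedding):
        # fine_feat:        (B, C_f, H, W) features from the high-resolution branch
        # xy_fine:          (B, N, 2) projections into the 1024x1024 crop, in [-1, 1]
        # coarse_embedding: (B, N, C_c) intermediate features Omega(x) from the coarse MLP
        sampled = F.grid_sample(fine_feat, xy_fine.unsqueeze(2), align_corners=True)
        sampled = sampled.squeeze(-1).permute(0, 2, 1)
        return torch.sigmoid(self.mlp(torch.cat([sampled, coarse_embedding], dim=-1)))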

**If you run into GPU memory problems, please reduce batch_size in ./configs/*.py**

Inference

bash ./scripts/test_pfhd_coarse.sh
#or 
bash ./scripts/test_pfhd_fine.sh

The results will be saved to checkpoints/PIFuhd_Render_People_HG_[coarse/fine]/gallery/test/model_name/*.obj; you can then use MeshLab to view the generated models.
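
For reference, the saved .obj meshes are typically produced by running marching cubes over the predicted occupancy volume. A minimal sketch of that step using skimage and trimesh (grid resolution, bounding box, and threshold here are assumptions, not necessarily what the test scripts use):

import trimesh
from skimage import measure

def occupancy_to_obj(occupancy, out_path, threshold=0.5, bbox_min=-1.0, bbox_max=1.0):
    # occupancy: (R, R, R) numpy array of predicted occupancy values in [0, 1]
    verts, faces, normals, _ = measure.marching_cubes(occupancy, level=threshold)
    # map voxel coordinates back to the bounding box the points were sampled in
    scale = (bbox_max - bbox_min) / (occupancy.shape[0] - 1)
    verts = verts * scale + bbox_min
    trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals).export(out_path)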

Metrics

export MESA_GL_VERSION_OVERRIDE=3.3 
# eval coarse-pifuhd
python ./tools/eval_pifu.py  --config ./configs/PIFuhd_Render_People_HG_coarse.py
# eval fine-pifuhd
python ./tools/eval_pifu.py  --config ./configs/PIFuhd_Render_People_HG_fine.py
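
./tools/eval_pifu.py is the authoritative implementation; for orientation, here is a hedged sketch of how P2S and Chamfer are commonly computed with trimesh (the sample count and averaging are assumptions):

import trimesh

def p2s_and_chamfer(pred_path, gt_path, num_samples=10000):
    pred = trimesh.load(pred_path, force='mesh')
    gt = trimesh.load(gt_path, force='mesh')
    pred_pts = pred.sample(num_samples)   # points on the reconstructed surface
    gt_pts = gt.sample(num_samples)       # points on the ground-truth surface
    # point-to-surface: distance from predicted points to the ground-truth mesh
    _, p2s_dist, _ = trimesh.proximity.closest_point(gt, pred_pts)
    # reverse direction, used for the symmetric Chamfer distance
    _, s2p_dist, _ = trimesh.proximity.closest_point(pred, gt_pts)
    p2s = p2s_dist.mean()
    chamfer = 0.5 * (p2s_dist.mean() + s2p_dist.mean())
    return p2s, chamfer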

Demo

We provide rendering code using the free models from RenderPeople. This tutorial uses the rp_dennis_posed_004 model. Please download the model from this link and unzip the content, then use the following command to reconstruct the model:


Debug

I provide a boolean param (debug, in all of the config files) so you can check whether the points sampled from the mesh are correct. Here are some examples:
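
If you prefer a quick manual check outside the training pipeline, here is a hedged sketch that samples points near a mesh, labels them inside/outside, and writes a colored point cloud you can open in MeshLab (the mesh path is hypothetical; point it at one of your own scans):

import numpy as np
import trimesh

# hypothetical path -- replace with one of your processed scans
mesh = trimesh.load('../Garment/render_people_train/rp_aaron_posed_004_BLD/mesh.obj', force='mesh')
points = mesh.sample(5000) + np.random.normal(scale=0.01, size=(5000, 3))
inside = mesh.contains(points)  # inside/outside test (faster with pyembree installed)
colors = np.where(inside[:, None], [0, 255, 0, 255], [255, 0, 0, 255]).astype(np.uint8)
trimesh.PointCloud(points, colors=colors).export('debug_points.ply')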

Visualization

As shown below, the left is the input image, the middle is the result of coarse-PIFuhd, and the right is fine-PIFuhd.

Reconstruction on Render People Datasets

Note that our training dataset is smaller than the official one (270 scans for ours vs. 450 for the paper), which affects the performance to some degree.

| Method | IoU | ACC | Recall | P2S | Normal | Chamfer |
| --- | --- | --- | --- | --- | --- | --- |
| PIFu | 0.748 | 0.880 | 0.856 | 1.801 | 0.1446 | 2.00 |
| Coarse-PIFuhd (+ front and back normals) | 0.865 (5cm) | 0.931 (5cm) | 0.923 (5cm) | 1.242 | 0.1205 | 1.4015 |
| Fine-PIFuhd (+ front and back normals) | 0.813 (3cm) | 0.896 (3cm) | 0.904 (5cm) | - | 0.1138 | - |

There is a known issue: the P2S of fine-PIFuhd is a bit larger than that of coarse-PIFuhd. This is because I do not apply any post-processing to clean up floating artifacts in the reconstruction. However, the details of the human mesh produced by fine-PIFuhd are clearly better than coarse-PIFuhd.
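
One hedged way to add such post-processing is to keep only the largest connected component of the reconstruction before evaluating P2S; this is a common cleanup step, sketched below with trimesh, not something the current scripts do:

import trimesh

def keep_largest_component(mesh_path, out_path):
    mesh = trimesh.load(mesh_path, force='mesh')
    # split into connected components and keep the one with the largest surface area
    components = mesh.split(only_watertight=False)
    largest = max(components, key=lambda m: m.area)
    largest.export(out_path)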

About Me

I hope this project can contribute to the community, especially to work on implicit fields.

By the way, if you find this project helpful, please don't forget to star it :)

Related Research

Monocular Real-Time Volumetric Performance Capture (ECCV 2020) Ruilong Li*, Yuliang Xiu*, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li

PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020) Shunsuke Saito, Tomas Simon, Jason Saragih, Hanbyul Joo

ARCH: Animatable Reconstruction of Clothed Humans (CVPR 2020) Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung

Robust 3D Self-portraits in Seconds (CVPR 2020) Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu

Learning to Infer Implicit Surfaces without 3D Supervision (NeurIPS 2019) Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li

Owner
Lingteng Qiu