Code for the ICCV 2021 paper PARE: Part Attention Regressor for 3D Human Body Estimation

Overview

PARE: Part Attention Regressor for 3D Human Body Estimation [ICCV 2021]


PARE: Part Attention Regressor for 3D Human Body Estimation,
Muhammed Kocabas, Chun-Hao Paul Huang, Otmar Hilliges, Michael J. Black,
International Conference on Computer Vision (ICCV), 2021

Features

PARE is an occlusion-robust human pose and shape estimation method. This repository contains the demo and evaluation code for PARE, implemented in PyTorch.

Updates

  • 13/10/2021: Demo and evaluation code is released.

Getting Started

PARE has been implemented and tested on Ubuntu 18.04 with Python >= 3.7. If you don't have a suitable machine, try running our Colab demo.

Clone the repo:

git clone https://github.com/mkocabas/PARE.git

Install the requirements using virtualenv or conda:

# pip
source scripts/install_pip.sh

# conda
source scripts/install_conda.sh

Demo

First, you need to download the required data (i.e., our trained model and the SMPL model parameters), which is approximately 1.3GB in total. To do this, simply run:

source scripts/prepare_data.sh

Video Demo

Run the command below. See scripts/demo.py for more options.

python scripts/demo.py --vid_file data/sample_video.mp4 --output_folder logs/demo 

Sample demo output:

Image Folder Demo

python scripts/demo.py --image_folder <path to image folder> --output_folder logs/demo
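
The demo log shown later on this page lists further boolean options in scripts/demo.py (e.g. sideview, save_obj, draw_keypoints, no_render). Assuming these are exposed as store_true flags, as their False defaults suggest, a run with extra outputs might look like:

python scripts/demo.py --image_folder <path to image folder> --output_folder logs/demo --sideview --save_obj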

Output format

If the demo finishes successfully, it creates a file named pare_output.pkl in the --output_folder. We can inspect what this file contains:

>>> import joblib # you may also use native pickle here

>>> output = joblib.load('pare_output.pkl') 

>>> print(output.keys())
dict_keys([1, 2, 3, 4]) # these are the track ids for each subject appearing in the video

>>> for k,v in output[1].items(): print(k,v.shape) 

pred_cam (n_frames, 3)          # weak perspective camera parameters in cropped image space (s,tx,ty)
orig_cam (n_frames, 4)          # weak perspective camera parameters in original image space (sx,sy,tx,ty)
verts (n_frames, 6890, 3)       # SMPL mesh vertices
pose (n_frames, 72)             # SMPL pose parameters
betas (n_frames, 10)            # SMPL body shape parameters
joints3d (n_frames, 49, 3)      # SMPL 3D joints
joints2d (n_frames, 21, 3)      # 2D keypoint detections by STAF if pose tracking is enabled, otherwise None
bboxes (n_frames, 4)            # bbox detections (cx,cy,w,h)
frame_ids (n_frames,)           # frame ids in which subject with tracking id #1 appears
smpl_joints2d (n_frames, 49, 2) # SMPL 2D joints 
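
To consume these outputs programmatically, you can iterate over the tracks and their per-frame predictions. Below is a minimal sketch; it assumes the video demo above was run so that the pkl sits at logs/demo/sample_video_/pare_output.pkl (adjust the path to your run):

import joblib

# load the demo output; the path assumes the sample video demo above
output = joblib.load('logs/demo/sample_video_/pare_output.pkl')

for track_id, track in output.items():
    # all per-frame arrays are aligned with track['frame_ids']
    for frame_id, pose, betas in zip(track['frame_ids'],
                                     track['pose'],
                                     track['betas']):
        # pose is a (72,) SMPL axis-angle vector and betas a (10,) shape
        # vector; feed them to an SMPL layer to recover the mesh
        pass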

Google Colab

If you don't have a suitable machine to run PARE locally, you can try our Colab demo notebook in the browser.

Training

Training instructions will follow soon.

Evaluation

You need to download 3DPW and 3DOH datasets before running the evaluation script. After the download, the data folder should look like:

data/
├── body_models
│   └── smpl
├── dataset_extras
├── dataset_folders
│   ├── 3doh
│   └── 3dpw
└── pare
    └── checkpoints

Then, you can evaluate PARE by running the commands below; the first evaluates the standard model, and the second the model trained with 3DPW:

python scripts/eval.py \
  --cfg data/pare/checkpoints/pare_config.yaml \
  --opts DATASET.VAL_DS 3doh_3dpw-all
  
python scripts/eval.py \
  --cfg data/pare/checkpoints/pare_w_3dpw_config.yaml \
  --opts DATASET.VAL_DS 3doh_3dpw-all

You should obtain the results in this table on the 3DPW test set (all errors in mm):

                 MPJPE   PA-MPJPE   V2V
PARE              82.0     50.9     97.9
PARE (w. 3DPW)    74.5     46.5     88.6

Occlusion Sensitivity Analysis

We provide a script to run the occlusion sensitivity analysis proposed in our paper. Occlusion sensitivity analysis slides an occluding patch over the image and visualizes how the human pose and shape estimation results are affected.

python scripts/occlusion_analysis.py \
  --cfg data/pare/checkpoints/pare_config.yaml \
  --ckpt data/pare/checkpoints/pare_checkpoint.ckpt
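
For intuition, the core of such an analysis reduces to the loop sketched below. This is a simplified illustration, not the repository's implementation; model is a hypothetical callable mapping an image to predicted 3D joints, and gt_joints are the corresponding ground-truth joints:

import numpy as np

def occlusion_sensitivity(model, image, gt_joints, patch_size=40, stride=40):
    # slide a gray patch over the image and record the 3D joint error at
    # each patch location; high values mean the model is sensitive to
    # occlusion at that spot
    h, w = image.shape[:2]
    rows = (h - patch_size) // stride + 1
    cols = (w - patch_size) // stride + 1
    error_map = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            occluded = image.copy()
            occluded[y:y + patch_size, x:x + patch_size] = 127  # gray patch
            pred_joints = model(occluded)  # (n_joints, 3)
            error_map[i, j] = np.linalg.norm(pred_joints - gt_joints, axis=-1).mean()
    return error_map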

Sample occlusion test output:

Citation

@inproceedings{Kocabas_PARE_2021,
  title = {{PARE}: Part Attention Regressor for {3D} Human Body Estimation},
  author = {Kocabas, Muhammed and Huang, Chun-Hao P. and Hilliges, Otmar and Black, Michael J.},
  booktitle = {Proc. International Conference on Computer Vision (ICCV)},
  pages = {11127--11137},
  month = oct,
  year = {2021}
}

License

This code is available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using this code you agree to the terms in the LICENSE. Third-party datasets and software are subject to their respective licenses.

References

We indicate inside each file when a function or script is borrowed from an external source. Consider citing those works if you use them in your project.

Contact

For questions, please contact [email protected]

For commercial licensing (and all related questions for business applications), please contact [email protected].

Comments
  • cannot run the demo.py

    I cloned the repo and ran 'source scripts/install_pip.sh', 'source scripts/prepare_data.sh', and 'python scripts/demo.py --vid_file data/sample_video.mp4 --output_folder logs/demo'. Then I got this error:

    "(PARE) [email protected]:~/Desktop/Paper/PARE$ python scripts/demo.py --vid_file data/sample_video.mp4 --output_folder logs/demo

    2021-11-01 21:05:45.270 | INFO | __main__:main:65 - Frames are already extracted in "logs/demo/sample_video_/tmp_images"
    2021-11-01 21:05:45.389 | INFO | __main__:main:97 - Demo options: Namespace(batch_size=16, beta=1.0, cfg='data/pare/checkpoints/pare_w_3dpw_config.yaml', ckpt='data/pare/checkpoints/pare_w_3dpw_checkpoint.ckpt', detector='yolo', display=False, draw_keypoints=False, exp='', image_folder=None, min_cutoff=0.004, mode='video', no_render=False, no_save=False, output_folder='logs/demo', save_obj=False, sideview=False, smooth=False, staf_dir='/home/mkocabas/developments/openposetrack', tracker_batch_size=12, tracking_method='bbox', vid_file='data/sample_video.mp4', wireframe=False, yolo_img_size=416)
    2021-11-01 21:05:46.038 | INFO | pare.models.backbone.hrnet:init_weights:530 - => init weights from normal distribution
    2021-11-01 21:05:46.231 | WARNING | pare.models.backbone.hrnet:init_weights:558 - IMPORTANT WARNING!! Please download pre-trained models if you are in TRAINING mode!
    2021-11-01 21:05:46.231 | INFO | pare.models.head.pare_head:__init__:125 - "Keypoint Attention" should be activated to be able to use part segmentation
    2021-11-01 21:05:46.231 | INFO | pare.models.head.pare_head:__init__:126 - Overriding use_keypoint_attention
    2021-11-01 21:05:46.253 | INFO | pare.models.head.pare_head:__init__:327 - Keypoint attention is active
    WARNING: You are using a SMPL model, with only 10 shape coefficients.
    2021-11-01 21:05:58.125 | INFO | pare.core.tester:_load_pretrained_model:113 - Loading pretrained model from data/pare/checkpoints/pare_w_3dpw_checkpoint.ckpt
    2021-11-01 21:05:58.365 | WARNING | pare.utils.train_utils:load_pretrained_model:45 - Removing "model." keyword from state_dict keys..
    2021-11-01 21:05:58.749 | INFO | pare.core.tester:_load_pretrained_model:116 - Loaded pretrained weights from "data/pare/checkpoints/pare_w_3dpw_checkpoint.ckpt"
    2021-11-01 21:05:58.753 | INFO | __main__:main:103 - Input video number of frames 3080
    Downloading files from https://raw.githubusercontent.com/mkocabas/yolov3-pytorch/master/yolov3/config/yolov3.cfg
    --2021-11-01 21:05:58-- https://raw.githubusercontent.com/mkocabas/yolov3-pytorch/master/yolov3/config/yolov3.cfg
    Connecting to 127.0.0.1:8889... connected.
    Proxy request sent, awaiting response... 200 OK
    Length: 8338 (8.1K) [text/plain]
    Saving to: "/home/ywk/.torch/config/yolov3.cfg"

    yolov3.cfg 100%[===================>] 8.14K --.-KB/s in 0s

    2021-11-01 21:05:59 (32.7 MB/s) - "/home/ywk/.torch/config/yolov3.cfg" saved [8338/8338]

    Running Multi-Person-Tracker
    100%|█████████████████████████████████████████| 257/257 [01:23<00:00, 3.09it/s]
    Finished. Detection + Tracking FPS 37.06
    2021-11-01 14:54:27.210 | INFO | pare.core.tester:run_on_video:287 - Running PARE on each tracklet...
    0%| | 0/278 [00:00<?, ?it/s]
    2021-11-07 14:54:28.564 | INFO | pare.core.tester:run_on_video:362 - Converting smpl keypoints 2d to original image coordinate
    ................
    100%|█████████████████████████████████████████| 278/278 [03:13<00:00, 1.44it/s]
    2021-11-01 21:10:36.733 | INFO | __main__:main:115 - PARE FPS: 15.92
    2021-11-01 21:10:36.733 | INFO | __main__:main:117 - Total time spent: 277.98 seconds (including model loading time).
    2021-11-01 21:10:36.733 | INFO | __main__:main:118 - Total FPS (including model loading time): 11.08.
    2021-11-01 21:10:36.734 | INFO | __main__:main:121 - Saving output results to "logs/demo/sample_video_/pare_output.pkl".
    WARNING: You are using a SMPL model, with only 10 shape coefficients.
    libEGL warning: DRI2: failed to create dri screen
    libEGL warning: DRI2: failed to create dri screen
    Traceback (most recent call last):
      File "scripts/demo.py", line 238, in <module>
        main(args)
      File "scripts/demo.py", line 126, in main
        orig_width, orig_height, num_frames)
      File "./pare/core/tester.py", line 392, in render_results
        wireframe=self.args.wireframe
      File "./pare/utils/vibe_renderer.py", line 66, in __init__
        point_size=1.0
      File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/pyrender/offscreen.py", line 31, in __init__
        self._create()
      File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/pyrender/offscreen.py", line 134, in _create
        self._platform.init_context()
      File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/pyrender/platforms/egl.py", line 177, in init_context
        assert eglInitialize(self._egl_display, major, minor)
      File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 415, in __call__
        return self( *args, **named )
      File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/OpenGL/error.py", line 234, in glCheckError
        baseOperation = baseOperation,
    OpenGL.raw.EGL._errors.EGLError: EGLError(
        err = EGL_NOT_INITIALIZED,
        baseOperation = eglInitialize,
        cArguments = (
            <OpenGL._opaque.EGLDisplay_pointer object at 0x7f9a2e003710>,
            c_long(0),
            c_long(0),
        ),
        result = 0
    )"

    Can you tell me how to solve this error?

    opened by yiweike 5
  • Could you provide the 3DPW-OCC dataset?

    Thanks for the exciting work.

    Can you provide the 3DPW-OCC dataset mentioned in the paper?

    It would be appreciated if you could provide a 3DPW-OCC annotation file or the video sequence names.

    Thank you.

    opened by hygenie1228 3
  • pare-github-data

    Hi

    I'm trying to run the demo, but the file pare-github-data.zip at https://www.dropbox.com/s/aeulffqzb3zmh8x/pare-github-data.zip cannot be downloaded. Is there any other way I can get it?

    Thank you so much.

    opened by cytcyt1111 3
  • Output for images folder

    Hi,

    Congratulations on such a great work!

    When I run PARE on an image folder, I get an output pkl file that doesn't match what you specify in the README.md.

    For example, you specify an output key pose (n_frames, 72) # SMPL pose parameters, but in the output from an image folder I get pred_pose (1, 24, 3, 3).

    I guess that you are transforming the 72 parameters into 24 joint rotation matrices, but I can't tell the exact format of these rotations. Would it be possible to also get the original pose parameters?

    Another question is about the joints output, which includes 49 elements, but the SMPL skeleton has only 24 joints. How do I relate these 49 positions to the original 24 joints?
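
    (If pred_pose indeed holds standard rotation matrices, as guessed above, they can be converted back to the 72-dim axis-angle pose with scipy; a minimal sketch under that assumption, not the repository's own code:)

    from scipy.spatial.transform import Rotation as R

    # pred_pose: the (1, 24, 3, 3) array loaded from the output pkl
    rotmats = pred_pose[0]                           # (24, 3, 3)
    axis_angle = R.from_matrix(rotmats).as_rotvec()  # (24, 3), one axis-angle per joint
    pose = axis_angle.reshape(-1)                    # (72,) SMPL pose vector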

    Thank you in advance.

    Sincerely,

    Alejandro Beacco

    opened by abeacco 2
  • Any improvement for multiview SMPL fitting?

    I'm trying to fit SMPL to scans. Currently, I just render the scan from different views and choose the PARE prediction with the lowest Chamfer distance. Is there any feasible improvement for view consistency?

    opened by Charlulote 1
  • Why doesn't PARE have a fitting part?

    In VIBE, there are operations that fit SMPL within the training loop, but PARE doesn't seem to have this. Why does it still perform better than VIBE?

    opened by jinfagang 1
  • How to get the heatmap results like Figure 1 in your paper?

    Hi @mkocabas, PARE is an interesting work. The analyses on the influence of occlusions are meaningful. Could you please tell me how to get the heatmap results for deeper analysis?

    opened by syguan96 1
  • Issues on evaluation process

    Thanks for the great work!

    I wanted to note some issues I encountered while running the evaluation code.

    https://github.com/mkocabas/PARE/blob/fa90affb6f8fc266d84a91b53b7f5c4a803fb759/scripts/eval.py#L39
    num_workers=-1 raises an error on my machine: ValueError: num_workers option should be non-negative; use num_workers=0 to disable multiprocessing.

    Setting num_workers=0 resolves the issue.

    In order to use the jpeg4py module smoothly, I had to install libturbojpeg with the following command: sudo apt-get install libturbojpeg. You might not have noticed this dependency yet, since it's not a Python module.

    opened by uyoung-jeong 1
  • Where is the Sup. Mat.?

    Hello, thanks for your excellent work! You mention in the paper that more details are provided in the Sup. Mat., but I haven't been able to find it. Where is it?

    opened by Fmin-Zou 0
  • Part Features

    Hi, great work!

    I wanted to ask: I need the part features before the heatmaps. From the code, I see that the final step for the part features is the _get_part_attention_map function. Afterwards, there is an if statement that says: elif self.use_heatmaps == 'part_segm', then output['pred_segm_mask'] = heatmaps. In this case, is pred_segm_mask the body part segmentation as shown in the appendix of the paper? (see attached picture :-) )

    opened by asafjo23 0
  • How to pass shape data for the "humanoids" outlook

    Hi, hope you are fine.

    I'm looking to use PARE for a project.

    The whole idea is to pass in a multi-person video dataset, have the humanoids replicate the gestures, and save just the humanoids in a separate video.

    I want to pass some parameters to the result, like age, gender, race, ... so that in the next step, when humanizing the "humanoid", I can "dress" it with the provided images (body and faces).

    Please let me know how to separate the results into different video datasets of the "humanoids" replicating the movements.

    I'd appreciate it!

    opened by venturaEffect 0
  • joints2d not found

    Hi, after I run the image folder demo, I can't find joints2d in the output file. Would it be possible to share the STAF dir? The path in demo.py is '/home/mkocabas/developments/openposetrack', which I can't find. Thank you very much.

    opened by Oliver-ny 1