FADNet++: Real-Time and Accurate Disparity Estimation with Configurable Networks

This repository contains the code (in PyTorch) for the "FADNet++" paper.

Contents

  1. Introduction
  2. Usage
  3. Results
  4. Citation
  5. Acknowledgement
  6. Contacts

Introduction

We propose an efficient and accurate deep network for disparity estimation, named FADNet, with three main features:

  • It exploits efficient 2D-based correlation layers with stacked blocks to keep computation fast.
  • It adopts residual structures to make the deeper model easier to learn.
  • It produces multi-scale predictions, enabling a multi-scale weight-scheduling training technique that improves accuracy (see the sketch below).
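
To make the weight-scheduling idea concrete, below is a minimal PyTorch sketch of a multi-scale disparity loss whose per-scale weights change across training rounds. The weight values and scale count are illustrative assumptions, not the actual schedule from loss_configs/fadnet_sceneflow.json.

import torch
import torch.nn.functional as F

# Hypothetical per-round weights for four prediction scales (coarse to fine);
# the real schedule is defined in a loss_configs/*.json file.
ROUND_WEIGHTS = [
    [0.2, 0.2, 0.2, 0.4],  # early round: spread supervision over scales
    [0.0, 0.1, 0.3, 0.6],  # later round: emphasize the finest scale
]

def multiscale_loss(preds, gt_disp, round_idx):
    """preds: list of disparity maps at increasing resolutions;
    gt_disp: full-resolution ground truth of shape (N, 1, H, W)."""
    weights = ROUND_WEIGHTS[min(round_idx, len(ROUND_WEIGHTS) - 1)]
    total = gt_disp.new_zeros(())
    for w, pred in zip(weights, preds):
        if w == 0.0:
            continue
        # Downsample the ground truth to this prediction's resolution
        # (strictly, disparity values should also be rescaled by the
        # downsampling factor; omitted here for brevity).
        gt = F.interpolate(gt_disp, size=pred.shape[-2:], mode="bilinear",
                           align_corners=False)
        total = total + w * F.l1_loss(pred, gt)
    return total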

Usage

Dependencies

Package Installation

  • Execute `sh compile.sh` to compile the libraries needed by GANet.
  • Enter `layers_package` and execute `sh install.sh` to install the customized layers, including the Channel Normalization layer and the Resample layer.

We also release a Docker version of this project, which is fully configured and can be used directly. Please refer to this website for the image.

Usage of Scene Flow dataset
Download the RGB cleanpass images and their disparity maps for the three subsets: FlyingThings3D, Driving, and Monkaa. Organize them as follows:
- FlyingThings3D_release/frames_cleanpass
- FlyingThings3D_release/disparity
- driving_release/frames_cleanpass
- driving_release/disparity
- monkaa_release/frames_cleanpass
- monkaa_release/disparity
Put them in the data/ folder (or soft-link them there). By default, *train.sh* uses data/ as the data root path; the snippet below can verify the layout.
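
As a quick sanity check before training, the following snippet (an optional helper, not part of the repository) verifies that every subset provides both images and disparity maps:

import os

DATA_ROOT = "data"
SUBSETS = ["FlyingThings3D_release", "driving_release", "monkaa_release"]

# Each subset must contain both RGB cleanpass images and disparity maps.
for subset in SUBSETS:
    for part in ("frames_cleanpass", "disparity"):
        path = os.path.join(DATA_ROOT, subset, part)
        print(path, "ok" if os.path.isdir(path) else "MISSING")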

Train

We use template scripts, stored in exp_configs, to configure the training task. One sample, fadnet.conf, is as follows:

net=fadnet
loss=loss_configs/fadnet_sceneflow.json
outf_model=models/${net}-sceneflow
logf=logs/${net}-sceneflow.log

lr=1e-4
devices=0,1,2,3
dataset=sceneflow
trainlist=lists/SceneFlow.list
vallist=lists/FlyingThings3D_release_TEST.list
startR=0
startE=0
batchSize=16
maxdisp=-1
model=none
#model=fadnet_sceneflow.pth
| Parameter | Description | Options |
|---|---|---|
| net | network architecture name | dispnets, dispnetc, dispnetcss, fadnet, psmnet, ganet |
| loss | loss weight scheduling configuration file | depends on the training scheme |
| outf_model | folder name to store the model files | \ |
| logf | log file name | \ |
| lr | initial learning rate | \ |
| devices | GPU device IDs to use | depends on the hardware system |
| dataset | dataset name to train on | sceneflow |
| train(val)list | sample lists for training/validation | \ |
| startR | the round index from which to start training (for restarting from a checkpoint) | \ |
| startE | the epoch index from which to start training (for restarting from a checkpoint) | \ |
| batchSize | the number of samples per batch | \ |
| maxdisp | the maximum disparity the model tries to predict | \ |
| model | the model file path of the checkpoint | \ |

We have integrated PSMNet and GANet for comparison; sample configuration files for them are also provided.

To start training, run dnn=CONFIG_FILE sh train.sh, for example:

dnn=fadnet sh train.sh

Note that CONFIG_FILE is given without the .conf suffix.

Evaluation

We provide two modes for performance evaluation: test and detect. The test mode requires that the test samples have ground-truth disparity, and it reports the average end-point error (EPE). The detect mode does not require ground truth and therefore does not compute the EPE; instead, it stores the predicted disparity map for each sample in the given list.
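
For reference, the EPE is simply the mean absolute difference between predicted and ground-truth disparities over valid pixels; a minimal PyTorch sketch (the mask argument is an assumption for datasets with sparse ground truth):

import torch

def end_point_error(pred, gt, valid=None):
    """Mean absolute disparity error (EPE) in pixels."""
    err = (pred - gt).abs()
    if valid is not None:  # optional mask of pixels with ground truth
        err = err[valid]
    return err.mean()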

For the test mode, one can revise test.sh and run sh test.sh. The contents of test.sh are as follows:

net=fadnet
maxdisp=-1
dataset=sceneflow
trainlist=lists/SceneFlow.list
vallist=lists/FlyingThings3D_release_TEST.list

loss=loss_configs/test.json
outf_model=models/test/
logf=logs/${net}_test_on_${dataset}.log

lr=1e-4
devices=0,1,2,3
startR=0
startE=0
batchSize=8
model=models/fadnet.pth
python main.py --cuda --net $net --loss $loss --lr $lr \
               --outf $outf_model --logFile $logf \
               --devices $devices --batch_size $batchSize \
               --trainlist $trainlist --vallist $vallist \
               --dataset $dataset --maxdisp $maxdisp \
               --startRound $startR --startEpoch $startE \
               --model $model 

Most of the parameters in test.sh are the same as those used for training. You can simply ignore trainlist, loss, and outf_model, since they are not used in the test mode.

For the detect mode, one can revise detect.sh and run sh detect.sh. The contents of detect.sh are as follows:

net=fadnet
dataset=sceneflow

model=models/fadnet.pth
outf=detect_results/${net}-${dataset}/

filelist=lists/FlyingThings3D_release_TEST.list
filepath=data

CUDA_VISIBLE_DEVICES=0 python detecter.py --model $model --rp $outf --filelist $filelist --filepath $filepath --devices 0 --net ${net} 

You can revise the value of outf to change the folder that stores the predicted disparity maps. A sketch for loading and inspecting one of them follows.
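
The on-disk format of the saved maps depends on detecter.py, so the following loader is only a hedged illustration: it assumes a 16-bit PNG scaled by 256 (the KITTI submission convention), and the file name is hypothetical.

import cv2
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical output file; adjust the path and the decoding to match
# whatever format detecter.py actually writes (e.g. .pfm for Scene Flow).
disp = cv2.imread("detect_results/fadnet-sceneflow/000000.png",
                  cv2.IMREAD_UNCHANGED).astype(np.float32) / 256.0

plt.imshow(disp, cmap="plasma")
plt.colorbar(label="disparity (pixels)")
plt.show()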

Finetuning on KITTI datasets and result submission

We reuse code from PSMNet to finetune the pretrained models on the KITTI datasets and to generate disparity maps for submission. Use finetune.sh and submission.sh for these two steps, respectively.

Pretrained Model

Update (2020/2/6): we have released the pre-trained Scene Flow model.

| KITTI 2015 | Scene Flow | KITTI 2012 |
|---|---|---|
| / | Google Drive | / |

Results

Results on Scene Flow dataset

| Model | EPE | GPU memory during inference (GB) | Runtime (ms) on a Tesla V100 |
|---|---|---|---|
| FADNet | 0.83 | 3.87 | 48.1 |
| DispNetC | 1.68 | 1.62 | 18.7 |
| PSMNet | 1.09 | 13.99 | 399.3 |
| GANet | 0.84 | 29.1 | 2251.1 |

Citation

If you find the code and paper useful in your work, please cite our conference paper:

@inproceedings{wang2020fadnet,
  title={{FADNet}: A Fast and Accurate Network for Disparity Estimation},
  author={Wang, Qiang and Shi, Shaohuai and Zheng, Shizhen and Zhao, Kaiyong and Chu, Xiaowen},
  booktitle={2020 {IEEE} International Conference on Robotics and Automation ({ICRA} 2020)},
  pages={101--107},
  year={2020}
}

Acknowledgement

We acknowledge the following repositories and papers, as our project uses some of their code.

Contacts

[email protected]

Any discussions or concerns are welcome!
