[NeurIPS-2020] Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID.


Python >= 3.5 | PyTorch >= 1.0

Self-paced Contrastive Learning (SpCL)

The official repository for Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID, accepted to NeurIPS 2020. SpCL achieves state-of-the-art performance on both unsupervised domain adaptation and unsupervised learning tasks for object re-ID, including person re-ID and vehicle re-ID.

[framework overview figure]

Updates

[2020-10-13] All trained models for the camera-ready version have been updated; see Trained Models for details.

[2020-09-25] SpCL has been accepted by NeurIPS on the condition that experiments on the DukeMTMC-reID dataset be removed, since the dataset has been taken down and should no longer be used.

[2020-07-01] We refactored the code to support distributed training, stronger performance, and more features. Please see OpenUnReID.

Requirements

Installation

git clone https://github.com/yxgeee/SpCL.git
cd SpCL
python setup.py develop

Prepare Datasets

cd examples && mkdir data

Download the person datasets Market-1501, MSMT17, and PersonX, and the vehicle datasets VehicleID, VeRi-776, and VehicleX. Then unzip them so that the directory structure looks like:

SpCL/examples/data
├── market1501
│   └── Market-1501-v15.09.15
├── msmt17
│   └── MSMT17_V1
├── personx
│   └── PersonX
├── vehicleid
│   └── VehicleID -> VehicleID_V1.0
├── vehiclex
│   └── AIC20_ReID_Simulation -> AIC20_track2/AIC20_ReID_Simulation
└── veri
    └── VeRi -> VeRi_with_plate
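
The entries marked with -> above are symbolic links. A minimal sketch for creating them, assuming each archive was extracted inside the corresponding sub-directory under the name shown on the right of the arrow (adjust the paths to wherever you actually extracted the data):

# assumed extracted folder names: VehicleID_V1.0, AIC20_track2, VeRi_with_plate
cd examples/data
ln -s VehicleID_V1.0 vehicleid/VehicleID
ln -s AIC20_track2/AIC20_ReID_Simulation vehiclex/AIC20_ReID_Simulation
ln -s VeRi_with_plate veri/VeRi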

Prepare ImageNet Pre-trained Models for IBN-Net

When training with the IBN-ResNet backbone, you need to download the ImageNet-pretrained model from this link and save it under logs/pretrained/.

mkdir logs && cd logs
mkdir pretrained

The file tree should be

SpCL/logs
└── pretrained
    └── resnet50_ibn_a.pth.tar
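
After downloading, move the checkpoint into place. A minimal sketch, assuming the file kept its original name and was saved to ~/Downloads (a hypothetical location):

# ~/Downloads is a hypothetical download location; adjust to where the file was actually saved
mv ~/Downloads/resnet50_ibn_a.pth.tar logs/pretrained/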

ImageNet-pretrained weights for ResNet-50 are downloaded automatically by the Python script.

Training

We use 4 GTX-1080Ti GPUs for training. Note that:

  • The training for SpCL is end-to-end, which means that no source-domain pre-training is required.
  • use --iters 400 (default) for Market-1501 and PersonX datasets, and --iters 800 for MSMT17, VeRi-776, VehicleID and VehicleX datasets;
  • use --width 128 --height 256 (default) for person datasets, and --height 224 --width 224 for vehicle datasets;
  • use -a resnet50 (default) for the backbone of ResNet-50, and -a resnet_ibn50a for the backbone of IBN-ResNet.

Unsupervised Domain Adaptation

To train the model(s) in the paper, run this command:

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py \
  -ds $SOURCE_DATASET -dt $TARGET_DATASET --logs-dir $PATH_OF_LOGS

Some examples:

### PersonX -> Market-1501 ###
# all default settings are fine
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py \
  -ds personx -dt market1501 --logs-dir logs/spcl_uda/personx2market_resnet50

### Market-1501 -> MSMT17 ###
# use all default settings except for iters=800
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py --iters 800 \
  -ds market1501 -dt msmt17 --logs-dir logs/spcl_uda/market2msmt_resnet50

### VehicleID -> VeRi-776 ###
# use all default settings except for iters=800, height=224 and width=224
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py --iters 800 --height 224 --width 224 \
  -ds vehicleid -dt veri --logs-dir logs/spcl_uda/vehicleid2veri_resnet50
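
The IBN-ResNet backbone mentioned in the notes above can be selected in the same way; a hedged sketch (the log directory name here is our own choice, not from the repository):

### Market-1501 -> MSMT17 with the IBN-ResNet backbone ###
# same settings as the Market-1501 -> MSMT17 example above, plus -a resnet_ibn50a
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py --iters 800 -a resnet_ibn50a \
  -ds market1501 -dt msmt17 --logs-dir logs/spcl_uda/market2msmt_resnet_ibn50a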

Unsupervised Learning

To train the model(s) in the paper, run this command:

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py \
  -d $DATASET --logs-dir $PATH_OF_LOGS

Some examples:

### Market-1501 ###
# all default settings are fine
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py \
  -d market1501 --logs-dir logs/spcl_usl/market_resnet50

### MSMT17 ###
# use all default settings except for iters=800
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py --iters 800 \
  -d msmt17 --logs-dir logs/spcl_usl/msmt_resnet50

### VeRi-776 ###
# use all default settings except for iters=800, height=224 and width=224
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py --iters 800 --height 224 --width 224 \
  -d veri --logs-dir logs/spcl_usl/veri_resnet50

Evaluation

We use 1 GTX-1080Ti GPU for testing. Note that:

  • use --width 128 --height 256 (default) for person datasets, and --height 224 --width 224 for vehicle datasets;
  • use --dsbn for domain adaptive models, and add --test-source if you want to test on the source domain;
  • use -a resnet50 (default) for the backbone of ResNet-50, and -a resnet_ibn50a for the backbone of IBN-ResNet.

Unsupervised Domain Adaptation

To evaluate the domain adaptive model on the target-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn \
  -d $DATASET --resume $PATH_OF_MODEL

To evaluate the domain adaptive model on the source-domain dataset, run:

CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn --test-source \
  -d $DATASET --resume $PATH_OF_MODEL

Some examples:

### Market-1501 -> MSMT17 ###
# test on the target domain
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn \
  -d msmt17 --resume logs/spcl_uda/market2msmt_resnet50/model_best.pth.tar
# test on the source domain
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn --test-source \
  -d market1501 --resume logs/spcl_uda/market2msmt_resnet50/model_best.pth.tar
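
For vehicle datasets, remember the input-size note above; a sketch assuming the VehicleID -> VeRi-776 model was trained with the command from the training section:

### VehicleID -> VeRi-776 ###
# test on the target domain with the vehicle input size
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn --height 224 --width 224 \
  -d veri --resume logs/spcl_uda/vehicleid2veri_resnet50/model_best.pth.tar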

Unsupervised Learning

To evaluate the model, run:

CUDA_VISIBLE_DEVICES=0 \
python examples/test.py \
  -d $DATASET --resume $PATH

Some examples:

### Market-1501 ###
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py \
  -d market1501 --resume logs/spcl_usl/market_resnet50/model_best.pth.tar
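
Likewise for a vehicle model; a sketch assuming the VeRi-776 model was trained with the unsupervised-learning command shown earlier:

### VeRi-776 ###
# vehicle input size, as noted above
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --height 224 --width 224 \
  -d veri --resume logs/spcl_usl/veri_resnet50/model_best.pth.tar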

Trained Models

[results figure for the trained models]

You can download the above models reported in the paper from [Google Drive] or [Baidu Yun] (password: w3l9).

Citation

If you find this code useful for your research, please cite our paper:

@inproceedings{ge2020selfpaced,
    title={Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID},
    author={Yixiao Ge and Feng Zhu and Dapeng Chen and Rui Zhao and Hongsheng Li},
    booktitle={Advances in Neural Information Processing Systems},
    year={2020}
}