[ECCVW2020] Robust Long-Term Object Tracking via Improved Discriminative Model Prediction (RLT-DiMP)

Overview

Feel free to visit my homepage

Robust Long-Term Object Tracking via Improved Discriminative Model Prediction (RLT-DiMP) [ECCVW2020 paper]


Presentation video

1-minute version (ENG)

12-minute version (ENG)


Summary

Abstract

We propose an improved discriminative model prediction method for robust long-term tracking based on a pre-trained short-term tracker. The baseline pre-trained short-term tracker is SuperDiMP, which combines the bounding-box regressor of PrDiMP with the standard DiMP classifier. Our tracker, RLT-DiMP, improves SuperDiMP in the following three aspects: (1) Uncertainty reduction using random erasing: to make our model robust, we erase random small rectangular areas from multiple copies of the search image and use the agreement among the resulting predictions as a certainty measure, then correct the tracking state of our model accordingly. (2) Random search with spatio-temporal constraints: we propose a robust random search method with a score penalty that prevents sudden re-detections far from the previous target position. (3) Background augmentation for more discriminative feature learning: we augment the training samples with various backgrounds that are not included in the search area to train a model that is more robust to background clutter. In experiments on the VOT-LT2020 benchmark dataset, the proposed method achieves comparable performance to the state-of-the-art long-term trackers.


Framework


Baseline

  • We adopt a pre-trained short-term tracker that combines the bounding-box regressor of PrDiMP with the standard DiMP classifier
  • This tracker is called SuperDiMP and can be downloaded from the DiMP family's GitHub page [link]

Contribution 1: Uncertainty reduction using random erasing
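As described in the abstract, the idea is to erase small random rectangles from several copies of the search image, run the classifier on each copy, and treat the agreement among the resulting predictions as a certainty measure that is then used to correct the tracking state. The sketch below only illustrates that idea and is not the authors' implementation; classify_fn (a stand-in for the DiMP classifier returning a 2D score map), the erase sizes, and the agreement threshold are assumed placeholders.

    import torch

    def erase_random_patch(img, max_frac=0.2):
        # Return a copy of img (C, H, W) with one random rectangle zeroed out.
        c, h, w = img.shape
        eh = torch.randint(1, max(2, int(h * max_frac)), (1,)).item()
        ew = torch.randint(1, max(2, int(w * max_frac)), (1,)).item()
        y = torch.randint(0, h - eh + 1, (1,)).item()
        x = torch.randint(0, w - ew + 1, (1,)).item()
        out = img.clone()
        out[:, y:y + eh, x:x + ew] = 0.0
        return out

    def certainty_from_agreement(img, classify_fn, n_views=5, dist_thresh=2.0):
        # Erase random patches n_views times, classify each erased view, and count
        # how often the score-map peak agrees with the peak of the untouched image.
        ref_map = classify_fn(img)
        ref_peak = torch.nonzero(ref_map == ref_map.max())[0].float()
        agree = 0
        for _ in range(n_views):
            score_map = classify_fn(erase_random_patch(img))
            peak = torch.nonzero(score_map == score_map.max())[0].float()
            if torch.norm(peak - ref_peak) <= dist_thresh:
                agree += 1
        return agree / n_views  # fraction of agreeing views, used as the certainty

A high agreement ratio can be taken as high certainty and the predicted state kept, while a low ratio can trigger the correction step (for example, falling back to the random search of Contribution 2).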


Contribution 2: Random search with spatio-temporal constraints
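Conceptually, candidate detections found far from the last confident position are penalized unless the target has already been lost for a while, which suppresses sudden re-detections at a distance. The snippet below is a hedged sketch of one possible penalty of that form; the Gaussian shape and the sigma_per_frame growth rate are illustrative assumptions, not the exact function used in the paper.

    import math

    def penalized_score(raw_score, candidate_xy, last_xy, frames_since_confident,
                        sigma_per_frame=30.0):
        # Distance between the candidate detection and the last confident position.
        dist = math.hypot(candidate_xy[0] - last_xy[0], candidate_xy[1] - last_xy[1])
        # The tolerated radius grows with the time since the target was last seen,
        # so a far-away re-detection is only accepted after a sufficiently long loss.
        sigma = sigma_per_frame * max(1, frames_since_confident)
        penalty = math.exp(-0.5 * (dist / sigma) ** 2)
        return raw_score * penalty

During re-detection, the candidate with the highest penalized score among the randomly sampled search windows would then be selected.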


Contribution 3: Background augmentation for more discriminative learning
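The goal is to expose the online classifier to background content that never appears inside the current search area, so it stays discriminative under background clutter. The following sketch is an illustrative stand-in, not the paper's exact augmentation; the crop size, the number of pasted regions, and the corner placement are all assumptions.

    import torch

    def augment_with_background(train_patch, frame, search_box, num_bg=2):
        # train_patch: (C, H, W) training sample centered on the target.
        # frame:       (C, Hf, Wf) full image the background crops are taken from.
        # search_box:  (x, y, w, h) of the current search area in frame coordinates.
        c, h, w = train_patch.shape
        _, hf, wf = frame.shape
        bg_h, bg_w = h // 4, w // 4
        sx, sy, sw, sh = search_box
        corners = [(0, 0), (0, w - bg_w), (h - bg_h, 0), (h - bg_h, w - bg_w)]
        out = train_patch.clone()
        pasted, attempts = 0, 0
        while pasted < num_bg and attempts < 50:
            attempts += 1
            y = torch.randint(0, hf - bg_h + 1, (1,)).item()
            x = torch.randint(0, wf - bg_w + 1, (1,)).item()
            # Keep only crops that lie completely outside the search area.
            if x + bg_w <= sx or x >= sx + sw or y + bg_h <= sy or y >= sy + sh:
                cy, cx = corners[pasted % len(corners)]
                out[:, cy:cy + bg_h, cx:cx + bg_w] = frame[:, y:y + bg_h, x:x + bg_w]
                pasted += 1
        return out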


Prerequisites

  • Ubuntu 18.04 / Python 3.6 / CUDA 10.0 / gcc 7.5.0
  • Need Anaconda
  • Need a GPU (more than 2 GB of memory; a little more may be necessary depending on the situation)
  • Unfortunately, the "Precise RoI Pooling" module included in the DiMP tracker only provides a GPU (CUDA) implementation.
  • Need root permission
  • All libraries listed in the "install.sh" file (please check "How to install")

How to install

  • Unzip files in $(tracker-path)
  • cd $(tracker-path)
  • bash install.sh $(anaconda-path) $(env-name) (this automatically creates a conda environment; if you do not want to create another conda environment, run "bash install_in_conda.sh" after activating conda instead)
  • check that the pretrained model "super_dimp.pth.tar" is in $(tracker-path)/pytracking/networks/ (it should be downloaded by install.sh)
  • conda activate $(env-name)
  • create a VOT-LT2020 workspace (vot workspace votlt2020 --workspace $(workspace-path))
  • move trackers.ini to $(workspace-path)
  • move (or download) the votlt2020 dataset to $(workspace-path)/sequences
  • set the VOT dataset directory in $(tracker-path)/pytracking/evaluation/local.py; vot_path should include the 'sequences' directory (e.g., $(vot-dataset-path)/sequences/) and must be an absolute path, not a relative one (see the sketch after this list)
  • modify the paths in the trackers.ini file; they should include the 'pytracking' directory (e.g., $(tracker-path)/pytracking) and must be absolute paths, not relative ones
  • cd $(workspace-path)
  • vot evaluate RLT_DiMP --workspace $(workspace-path)
  • The first run will fail because the "Precise RoI Pooling" module has to be compiled with ninja; please check the "Handling errors" section below.
  • vot analysis --workspace $(workspace-path) RLT_DiMP --output json
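For reference, the vot_path edit mentioned above typically looks like the following in $(tracker-path)/pytracking/evaluation/local.py; the directory shown is a placeholder for your own absolute path, and the local.py generated by install.sh may contain additional dataset paths that can stay untouched.

    from pytracking.evaluation.environment import EnvironmentSettings

    def local_env_settings():
        settings = EnvironmentSettings()
        # Must be an absolute path that includes the 'sequences' directory.
        settings.vot_path = '/home/user/votlt2020-workspace/sequences/'
        return settings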

Handling errors

  • "Process did not finish yet" or "Error during tracker execution: Exception when waiting for response: Unknown" -> retry, or run sudo rm -rf /tmp/torch_extensions/_prroi_pooling/
  • Errors about "groundtruth.txt" -> check vot_path in the $(tracker-path)/pytracking/evaluation/local.py file
  • Errors about "pytracking/evaluation/local.py" -> check and run install.sh
  • "Permission denied: /tmp/torch_extensions/_prroi_pooling/" -> sudo chmod -R 777 /tmp/torch_extensions/_prroi_pooling/
  • "No module named 'ltr.external.PreciseRoiPooling'" or "cannot compile Precise RoI Pooling library" -> cd $(tracker-path) -> rm -rf ltr/external/PreciseRoIPooling -> git clone https://github.com/vacancy/PreciseRoIPooling.git ltr/external/PreciseRoIPooling
  • If nothing happens and the code just stops -> sudo rm -rf /tmp/torch_extensions/_prroi_pooling/

Contact

If you have any questions, please feel free to contact [email protected]


Acknowledgments

  • The code is based on the PyTorch implementation of the DiMP-family.
  • This work was done while the first author was a visiting researcher at CMU.
  • This work was supported in part through NSF grant IIS-1650994, the financial assistance award 60NANB17D156 from the U.S. Department of Commerce, National Institute of Standards and Technology (NIST), and by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DOI/IBC) contract number D17PC0034. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements, either expressed or implied, of NIST, IARPA, NSF, DOI/IBC, or the U.S. Government.

Citation

@InProceedings{Choi2020,
  author    = {Choi, Seokeon and Lee, Junhyun and Lee, Yunsung and Hauptmann, Alexander},
  title     = {Robust Long-Term Object Tracking via Improved Discriminative Model Prediction},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  pages     = {0--0},
  year      = {2020}
}

Reference

  • [PrDiMP] Danelljan, Martin, Luc Van Gool, and Radu Timofte. "Probabilistic Regression for Visual Tracking." arXiv preprint arXiv:2003.12565 (2020).
  • [DiMP] Bhat, Goutam, et al. "Learning discriminative model prediction for tracking." Proceedings of the IEEE International Conference on Computer Vision. 2019.
  • [ATOM] Danelljan, Martin, et al. "ATOM: Accurate tracking by overlap maximization." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.