Code for "Domain Adaptive Video Segmentation via Temporal Consistency Regularization" (ICCV 2021)

Overview

Domain Adaptive Video Segmentation via Temporal Consistency Regularization

Updates

Paper

Domain Adaptive Video Segmentation via Temporal Consistency Regularization

Dayan Guan, Jiaxing Huang, Aoran Xiao, Shijian Lu
School of Computer Science and Engineering, Nanyang Technological University, Singapore
International Conference on Computer Vision, 2021.

If you find this code useful for your research, please cite our paper:

@inproceedings{guan2021domain,
  title={Domain adaptive video segmentation via temporal consistency regularization},
  author={Guan, Dayan and Huang, Jiaxing and Xiao, Aoran and Lu, Shijian},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={8053--8064},
  year={2021}
}

Abstract

Video semantic segmentation is an essential task for the analysis and understanding of videos. Recent efforts largely focus on supervised video segmentation by learning from fully annotated data, but the learnt models often experience a clear performance drop when applied to videos of a different domain. This paper presents DA-VSN, a domain adaptive video segmentation network that addresses domain gaps in videos via temporal consistency regularization (TCR) over consecutive frames of target-domain videos. DA-VSN consists of two novel and complementary designs. The first is cross-domain TCR, which guides the predictions of target frames to have temporal consistency similar to that of source frames (learnt from annotated source data) via adversarial learning. The second is intra-domain TCR, which guides unconfident predictions of target frames to have temporal consistency similar to that of confident predictions of target frames. Extensive experiments demonstrate the superiority of our domain adaptive video segmentation network, which outperforms multiple baselines consistently by large margins.
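
The two regularizers can be summarized in a short PyTorch sketch (an illustrative reading of the abstract, not the repository's actual code; all function and tensor names are hypothetical):

import torch
import torch.nn.functional as F

# Temporal consistency: per-pixel discrepancy between the current-frame
# prediction and the previous-frame prediction warped to the current frame.
def temporal_consistency(pred_cur, pred_prev_warped):
    return (pred_cur - pred_prev_warped).abs().mean(dim=1, keepdim=True)

# Cross-domain TCR: a discriminator D is trained to tell source from target
# consistency maps; the segmenter is updated to fool D so that target
# consistency mimics source consistency (adversarial learning).
def cross_domain_tcr_loss(D, tc_target):
    logits = D(tc_target)
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

# Intra-domain TCR: pull the temporal consistency of unconfident target
# pixels toward that of confident target pixels.
def intra_domain_tcr_loss(tc_target, probs, thresh=0.9):
    conf_mask = probs.max(dim=1, keepdim=True).values > thresh
    confident_level = tc_target[conf_mask].mean().detach()
    return ((tc_target[~conf_mask] - confident_level) ** 2).mean()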

Installation

  1. Conda environment:
conda create -n DA-VSN python=3.6
conda activate DA-VSN
conda install -c menpo opencv
pip install torch==1.2.0 torchvision==0.4.0
  2. Clone ADVENT:
git clone https://github.com/valeoai/ADVENT.git
pip install -e ./ADVENT
  3. Clone this repo:
git clone https://github.com/Dayan-Guan/DA-VSN.git
pip install -e ./DA-VSN

Preparation

  1. Dataset:
DA-VSN/data/Cityscapes/                       % Cityscapes dataset root
DA-VSN/data/Cityscapes/leftImg8bit_sequence   % leftImg8bit_sequence_trainvaltest
DA-VSN/data/Cityscapes/gtFine                 % gtFine_trainvaltest
DA-VSN/data/Viper/                            % VIPER dataset root
DA-VSN/data/Viper/train/img                   % Modality: Images; Frames: *[0-9]; Sequences: 00-77; Format: jpg
DA-VSN/data/Viper/train/cls                   % Modality: Semantic class labels; Frames: *0; Sequences: 00-77; Format: png
DA-VSN/data/SynthiaSeq/                       % SYNTHIA-Seq dataset root
DA-VSN/data/SynthiaSeq/SEQS-04-DAWN           % SYNTHIA-SEQS-04-DAWN
  2. Pre-trained models: Download the pre-trained models and put them in DA-VSN/pretrained_models

Optical Flow Estimation

  • For quick preparation: Download the optical flow estimated on the Cityscapes-Seq validation set here and unzip it in DA-VSN/data:
DA-VSN/data/Cityscapes_val_optical_flow_scale512/  % unzip Cityscapes_val_optical_flow_scale512.zip
  • To estimate optical flow on your own:
  1. Clone flownet2-pytorch:
git clone https://github.com/NVIDIA/flownet2-pytorch.git
  2. Download the pre-trained FlowNet2 weights and put them in flownet2-pytorch/pretrained_models
  3. Use flownet2-pytorch to estimate optical flow on consecutive frames, as sketched below
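
A minimal sketch of estimating flow for one frame pair with flownet2-pytorch, following that repository's published usage (the checkpoint filename and the dummy frames are placeholders; frame height and width must be multiples of 64):

import argparse
import numpy as np
import torch
from models import FlowNet2  # from the flownet2-pytorch repo

args = argparse.Namespace(fp16=False, rgb_max=255.0)
net = FlowNet2(args).cuda().eval()
ckpt = torch.load('pretrained_models/FlowNet2_checkpoint.pth.tar')
net.load_state_dict(ckpt['state_dict'])

# Two consecutive frames as float RGB arrays.
frame1 = np.zeros((512, 1024, 3), dtype=np.float32)  # replace with real frames
frame2 = np.zeros((512, 1024, 3), dtype=np.float32)
pair = np.stack([frame1, frame2]).transpose(3, 0, 1, 2)  # (3, 2, H, W)
inp = torch.from_numpy(pair).unsqueeze(0).cuda()         # (1, 3, 2, H, W)

with torch.no_grad():
    flow = net(inp)[0].cpu().numpy()  # (2, H, W) flow from frame1 to frame2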

Evaluation on Pretrained Models

  • VIPER → Cityscapes-Seq:
cd DA-VSN/davsn/scripts
python test.py --cfg configs/davsn_viper2city_pretrained.yml
  • SYNTHIA-Seq → Cityscapes-Seq:
python test.py --cfg configs/davsn_syn2city_pretrained.yml

Training and Testing

  • VIPER → Cityscapes-Seq:
cd DA-VSN/davsn/scripts
python train.py --cfg configs/davsn_viper2city.yml
python test.py --cfg configs/davsn_viper2city.yml
  • SYNTHIA-Seq → Cityscapes-Seq:
python train.py --cfg configs/davsn_syn2city.yml
python test.py --cfg configs/davsn_syn2city.yml

Acknowledgements

This codebase borrows heavily from ADVENT and flownet2-pytorch.

Contact

If you have any questions, please contact: [email protected]

Comments
  • Optical flow is not used for propagating

    Hi, author. I have two questions. The first is that I find you don't use the flow to propagate the previous frame to the current frame; you only use it as a constraint so that pixels appearing in both the current frame (cf) and the key frame (kf) are retained, which seems unreasonable. I refined the code using Resample2d to warp kf to cf (a sketch of such warping follows below), but the result only improved a little.

    The second question is that I tried to train DA-VSN 3 times on a 1080Ti and a 2080Ti following the settings you gave, but I only get 46 mIoU, which is 2 points lower than yours.
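
    For reference, flow-based warping of a key frame to the current frame can be sketched with PyTorch's grid_sample (an illustrative stand-in for Resample2d, not the repository's code; tensor names are hypothetical):

    import torch
    import torch.nn.functional as F

    def warp_with_flow(kf, flow):
        # Warp key-frame tensor kf (B, C, H, W) to the current frame using
        # a flow field (B, 2, H, W) given in pixels, channel 0 = x.
        b, _, h, w = kf.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w))
        base = torch.stack((xs, ys)).float().to(kf.device)  # (2, H, W)
        coords = base.unsqueeze(0) + flow                   # shift by flow
        gx = 2.0 * coords[:, 0] / (w - 1) - 1.0             # normalize to [-1, 1]
        gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
        grid = torch.stack((gx, gy), dim=-1)                # (B, H, W, 2)
        # torch 1.2-style call; newer versions take align_corners=True
        return F.grid_sample(kf, grid)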

    opened by EDENpraseHAZARD 5
  • Question on Synthia-seq dataset

    Dear authors,

    Thank you for your great work. I have several questions about the SYNTHIA-Seq -> Cityscapes-Seq adaptation. The first is about the scale of the training data: it seems that, compared with the VIPER dataset, SYNTHIA-Seq contains only one labeled video with 850 frames in total. Is that true? The second question is that 11 classes are reported in Table 4, but the dataloader of SYNTHIA-Seq uses 12 classes, so I'm not sure whether the fence class is considered during adaptation or not. https://github.com/Dayan-Guan/DA-VSN/blob/d110ff70dacec4156a3787eb49e7f2448dfb91a5/davsn/dataset/SynthiaSeq.py#L11

    Thanks in advance for your help!

    opened by xyIsHere 3
  • Details of SYNTHIA-Seq dataset

    Hi author, I have downloaded SYNTHIA-Seq, but I found there are 'Stereo_Left' and 'Stereo_Right' folders, each containing 'Omni_B', 'Omni_F', 'Omni_L' and 'Omni_R'. I wonder which one is used for training.

    opened by EDENpraseHAZARD 2
  • Could you please provide 'estimated_optical_flow' for training DA-VSN

    Hi @Dayan-Guan, thank you for open-sourcing your work!

    I am trying to follow this work. To train DA-VSN from scratch, the optical flows (for the 3 datasets used in your paper) estimated by FlowNet2 are needed. However, the instructions in your README only cover the evaluation part. I also see from recent issues that you have provided code and more instructions for the training part, but the code does not seem to be complete, so I cannot generate the optical flows with it.

    Could you please provide your generated optical flows for all 3 datasets used in your paper? It would save us time. Or could you please have another look at the provided 'Code_for_optical_flow_estimation' so that it is runnable for generating optical flows on our own?

    Thanks in advance!

    Regards

    opened by ldkong1205 1
  • In train_video_UDA.py, line 251, trg_prob_warp = warp_bilinear(trg_prob, trg_flow_warp): if the image is flipped, the optical flow is not flipped

    Hello! I really enjoy reading your work!! At the same time, I encountered a problem when running train_video_UDA.py.

    In line 251, trg_prob_warp = warp_bilinear(trg_prob, trg_flow_warp), the variable trg_prob is the prediction for trg_img_b_wk, and trg_img_b_wk is obtained from trg_img_b by flipping with a certain probability, but trg_flow_warp does not seem to be flipped. Consider the case where trg_img_b_wk is flipped while trg_flow_warp is not: then trg_prob_warp and trg_img_d_st do not seem to be semantically consistent, because the image is flipped but the optical flow is not, even though trg_pl in lines 256-258 is flipped.
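
    For reference, keeping the flow consistent with a horizontal image flip requires both mirroring the flow field along the width axis and negating its x-component (a minimal sketch, not the repository's code; the shape is assumed to be (B, 2, H, W) with channel 0 the horizontal displacement):

    import torch

    def flip_flow_horizontally(flow):
        # Mirror the flow field spatially, then reverse the sign of the
        # horizontal displacement so it matches the flipped image.
        flow = torch.flip(flow, dims=[3])
        flow[:, 0] = -flow[:, 0]
        return flow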

    opened by zhe-juanz 0
  • Some questions about data loading

    Hi, this is a very enlightening work!!! @xing0047 @Dayan-Guan I want to ask a question~

    When I use ./TPS/tps/scripts/train.py to read SynthiaSeq or ViperSeq data and debug the code, I find the following phenomena:

    I tried to print some variables of __getitem__().

    When shuffle of source_loader = data.DataLoader() is set to False and batch_size=cfg.TRAIN.BATCH_SIZE_SOURCE is set to 1:

    1. Although batch_size=1, 4 images and their corresponding key frames are loaded at one time, instead of 1 image and its previous frame.

    2. At the same time, the 4 loaded images are out of order, such as 2-1-3-4 rather than 1-2-3-4, which seems to violate the shuffle setting.

    Could you please kindly clarify? Thank you very much!!

    The print results are as follows (the order differs on each run):

    ---index--- 1
    ---index--- 0
    ---index--- 2
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000002.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000002.png
    ---index--- 3
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000001.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000001.png
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000003.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000003.png
    img_file tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000004.png
    label_file tps/data/SynthiaSeq/SEQS-04-DAWN/label/000004.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000003.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000002.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000001.png
    image_kf tps/data/SynthiaSeq/SEQS-04-DAWN/rgb/000000.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000003.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000002.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000001.png
    label_kf tps/data/SynthiaSeq/SEQS-04-DAWN/label/000000.png
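
    This matches the normal behavior of PyTorch's DataLoader when num_workers > 0: several worker processes prefetch indices in parallel, so prints inside __getitem__ interleave even with shuffle=False, while the batches themselves are still delivered in index order. A minimal reproduction (assuming num_workers=4, as the 4-at-a-time loading suggests):

    import torch
    from torch.utils.data import Dataset, DataLoader

    class Toy(Dataset):
        def __len__(self):
            return 5

        def __getitem__(self, index):
            print('---index---', index)  # interleaves across workers
            return torch.tensor(index)

    # Batches still arrive as 0, 1, 2, ... even though the prints above
    # appear out of order.
    for batch in DataLoader(Toy(), batch_size=1, shuffle=False, num_workers=4):
        print('batch:', batch)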

    opened by zhe-juanz 0
  • Regarding Synthia-Seq Dataset

    I really enjoyed reading your work. I have a question regarding the SYNTHIA-Seq dataset. In the paper you mention that you used 8,000 synthesized video frames, but on GitHub the SYNTHIA-Seq DAWN sequence contains only 850 images. Can you please clarify this ambiguity? Thank you.

    opened by Ihsan149 0
  • Optical flow for training

    Thanks for your great work! I want to train DA-VSN, but I don't know how to get Estimated_optical_flow_Viper_train or Estimated_optical_flow_Cityscapes-Seq_train. I didn't find details about the optical flow in the readme or the paper.

    opened by EDENpraseHAZARD 11