Semi-supervised Semantic Segmentation with High- and Low-level Consistency

Overview


This PyTorch repository contains the code for our work Semi-supervised Semantic Segmentation with High- and Low-level Consistency. The approach uses two network branches that link semi-supervised classification with semi-supervised segmentation, including self-training. It yields significant improvements over existing methods, especially when trained with very few labeled samples. On several standard benchmarks - PASCAL VOC 2012, PASCAL-Context, and Cityscapes - the approach achieves a new state of the art in semi-supervised learning.

We propose a two-branch approach to the task of semi-supervised semantic segmentation. The lower branch predicts pixel-wise class labels and is referred to as the Semi-Supervised Semantic Segmentation GAN (s4GAN). The upper branch performs image-level classification and is denoted as the Multi-Label Mean Teacher (MLMT).

This repository contains the source code for the s4GAN branch. The MLMT branch is adapted from the Mean Teacher work for semi-supervised classification; instructions for setting it up are given below.
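For intuition, at test time the image-level predictions of the MLMT branch are used to filter the pixel-wise output of the s4GAN branch (see the final evaluation step below). The snippet is a minimal sketch of that fusion, not the repository code; the function name, tensor shapes, and the 0.5 threshold are assumptions:

import torch

def fuse_branches(seg_logits, cls_probs, cls_threshold=0.5):
    # seg_logits: (C, H, W) per-pixel class scores from the s4GAN branch
    # cls_probs:  (C,)      image-level class probabilities from the MLMT branch
    absent = cls_probs < cls_threshold      # classes the classifier believes are not in the image
    absent[0] = False                       # never suppress the background class
    seg_logits = seg_logits.clone()
    seg_logits[absent] = float("-inf")      # suppressed classes can never win the argmax
    return seg_logits.argmax(dim=0)         # (H, W) predicted label map

# toy usage with random tensors (21 PASCAL VOC classes incl. background)
pred = fuse_branches(torch.randn(21, 321, 321), torch.rand(21))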

Package pre-requisites

The code runs on Python 3 and PyTorch 0.4. The following packages are required:

pip install scipy tqdm matplotlib numpy opencv-python

Dataset preparation

Download the ImageNet-pretrained ResNet-101 (Link) and place it in ./pretrained_models/

PASCAL VOC

Download the dataset (Link) and extract it into ./data/voc_dataset/

PASCAL Context

Download the annotations (Link) and extract them into ./data/pcontext_dataset/

Cityscapes

Download the dataset from the Cityscapes dataset server (Link). Download the files 'gtFine_trainvaltest.zip' and 'leftImg8bit_trainvaltest.zip' and extract them into ./data/city_dataset/

Training and Validation on PASCAL-VOC Dataset

Results in the paper are averaged over 3 random splits. The same splits are used for reporting the baseline performance to ensure a fair comparison.
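For reference, a random split at a given --labeled-ratio can be generated along the following lines; this is only a sketch, and the exact contents of the .pkl split files written by the training scripts may differ:

import pickle
import random

def make_split(num_images, labeled_ratio, seed=0, out_file="split_0.pkl"):
    # randomly partition image indices into a labeled and an unlabeled subset
    indices = list(range(num_images))
    random.Random(seed).shuffle(indices)
    num_labeled = int(labeled_ratio * num_images)
    split = {"labeled": indices[:num_labeled], "unlabeled": indices[num_labeled:]}
    with open(out_file, "wb") as f:
        pickle.dump(split, f)
    return split

# e.g. 1/8 of the training images treated as labeled
split = make_split(num_images=10582, labeled_ratio=0.125)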

Training fully-supervised Baseline (FSL)

python train_full.py    --dataset pascal_voc  \
                        --checkpoint-dir ./checkpoints/voc_full \
                        --ignore-label 255 \
                        --num-classes 21 

Training semi-supervised s4GAN (SSL)

python train_s4GAN.py   --dataset pascal_voc  \
                        --checkpoint-dir ./checkpoints/voc_semi_0_125 \
                        --labeled-ratio 0.125 \
                        --ignore-label 255 \
                        --num-classes 21
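
Conceptually, the s4GAN discriminator scores the concatenation of an input image with a segmentation map, and its features also provide a feature-matching signal to the segmentation network. The sketch below illustrates that idea; the layer configuration and loss form are illustrative assumptions, not the repository's exact architecture:

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    # scores (image, segmentation) pairs; input channels = 3 image + num_classes mask channels
    def __init__(self, num_classes=21, ndf=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3 + num_classes, ndf, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ndf * 2, ndf * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),             # global pooling -> one feature vector per image
        )
        self.classifier = nn.Linear(ndf * 4, 1)  # real/fake score

    def forward(self, image, seg_probs):
        x = torch.cat([image, seg_probs], dim=1)
        feat = self.features(x).flatten(1)
        return self.classifier(feat), feat

def feature_matching_loss(feat_fake, feat_real):
    # pull the mean features of predicted pairs towards those of ground-truth pairs
    return torch.mean(torch.abs(feat_fake.mean(dim=0) - feat_real.mean(dim=0)))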

Validation

python evaluate.py --dataset pascal_voc  \
                   --num-classes 21 \
                   --restore-from ./checkpoints/voc_semi_0_125/VOC_30000.pth 
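
evaluate.py reports the segmentation quality. For orientation, mean IoU can be computed from an accumulated confusion matrix roughly as follows; this is a generic sketch, not the script's exact implementation:

import numpy as np

def mean_iou(preds, labels, num_classes=21, ignore_label=255):
    # accumulate a confusion matrix over (prediction, ground truth) pairs, then average per-class IoU
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, label in zip(preds, labels):
        valid = label != ignore_label
        conf += np.bincount(num_classes * label[valid].astype(int) + pred[valid].astype(int),
                            minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(conf)
    iou = tp / np.maximum(conf.sum(axis=0) + conf.sum(axis=1) - tp, 1)
    return iou.mean()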

Training MLMT Branch

python train_mlmt.py \
        --batch-size-lab 16 \
        --batch-size-unlab 80 \
        --labeled-ratio 0.125 \
        --exp-name voc_semi_0_125_MLMT \
        --pkl-file ./checkpoints/voc_semi_0_125/train_voc_split.pkl
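
The MLMT branch follows the Mean Teacher scheme: the teacher is an exponential moving average of the student, and a consistency loss ties their multi-label predictions on unlabeled images. A minimal sketch of those two pieces, with an illustrative decay value and loss choice:

import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(teacher, student, ema_decay=0.999):
    # teacher weights track an exponential moving average of the student weights
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(ema_decay).add_(s_param, alpha=1.0 - ema_decay)

def consistency_loss(student_logits, teacher_logits):
    # multi-label consistency: the student's sigmoid outputs should match the teacher's
    return F.mse_loss(torch.sigmoid(student_logits), torch.sigmoid(teacher_logits).detach())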

Final Evaluation: s4GAN + MLMT

python evaluate.py --dataset pascal_voc  \
                   --num-classes 21 \
                   --restore-from ./checkpoints/voc_semi_0_125/VOC_30000.pth \
                   --with-mlmt \
                   --mlmt-file ./mlmt_output/voc_semi_0_125_MLMT/output_ema_raw_100.txt
    

Training and Validation on PASCAL-Context Dataset

python train_full.py    --dataset pascal_context  \
                        --checkpoint-dir ./checkpoints/pc_full \
                        --ignore-label -1 \
                        --num-classes 60

python train_s4GAN.py  --dataset pascal_context  \
                       --checkpoint-dir ./checkpoints/pc_semi_0_125 \
                       --labeled-ratio 0.125 \
                       --ignore-label -1 \
                       --num-classes 60 \
                       --split-id ./splits/pc/split_0.pkl \
                       --num-steps 60000

python evaluate.py     --dataset pascal_context  \
                       --num-classes 60 \
                       --restore-from ./checkpoints/pc_semi_0_125/VOC_40000.pth

Training and Validation on Cityscapes Dataset

python train_full.py    --dataset cityscapes \
                        --checkpoint-dir ./checkpoints/city_full_0_125 \
                        --ignore-label 250 \
                        --num-classes 19 \
                        --input-size '256,512'  

python train_s4GAN.py   --dataset cityscapes \
                        --checkpoint-dir ./checkpoints/city_semi_0_125 \
                        --labeled-ratio 0.125 \
                        --ignore-label 250 \
                        --num-classes 19 \
                        --split-id ./splits/city/split_0.pkl \
                        --input-size '256,512' \
                        --threshold-st 0.7 \
                        --learning-rate-D 1e-5 
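
--threshold-st controls the self-training part of the s4GAN branch: segmentation predictions on unlabeled images that the discriminator scores above the threshold are reused as pseudo ground truth. A rough sketch of that gating, with assumed tensor shapes and a plain cross-entropy formulation:

import torch
import torch.nn.functional as F

def self_training_loss(seg_logits, disc_scores, threshold_st=0.7):
    # seg_logits:  (N, C, H, W) segmentation outputs on unlabeled images
    # disc_scores: (N,)         discriminator confidence in [0, 1] for each prediction
    confident = disc_scores > threshold_st
    if not confident.any():
        return seg_logits.new_zeros(())                    # nothing selected this step
    pseudo = seg_logits[confident].argmax(dim=1).detach()  # hard pseudo labels from confident maps
    return F.cross_entropy(seg_logits[confident], pseudo)

# toy usage: 4 unlabeled crops, 19 Cityscapes classes
loss = self_training_loss(torch.randn(4, 19, 64, 128), torch.rand(4))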

python evaluate.py      --dataset cityscapes \
                        --num-classes 19 \
                        --restore-from ./checkpoints/city_semi_0_125/VOC_30000.pth 

Acknowledgement

Parts of the code have been adapted from: DeepLab-Resnet-Pytorch, AdvSemiSeg, PyTorch-Encoding

Citation

@ARTICLE{8935407,
  author={S. {Mittal} and M. {Tatarchenko} and T. {Brox}},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, 
  title={Semi-Supervised Semantic Segmentation With High- and Low-Level Consistency}, 
  year={2021},
  volume={43},
  number={4},
  pages={1369-1379},
  doi={10.1109/TPAMI.2019.2960224}}