FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation

[Project] [Paper] [arXiv] [Home]


Official implementation of FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation.
A faster, stronger, and lighter framework for semantic segmentation, achieving state-of-the-art performance with more than 3x acceleration.

@inproceedings{wu2019fastfcn,
  title     = {FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation},
  author    = {Wu, Huikai and Zhang, Junge and Huang, Kaiqi and Liang, Kongming and Yu, Yizhou},
  booktitle = {arXiv preprint arXiv:1903.11816},
  year      = {2019}
}

Contact: Hui-Kai Wu ([email protected])

Update

2020-04-15: Inference on a single image is now supported!

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m experiments.segmentation.test_single_image --dataset [pcontext|ade20k] \
    --model [encnet|deeplab|psp] --jpu [JPU|JPU_X] \
    --backbone [resnet50|resnet101] [--ms] --resume {MODEL} --input-path {INPUT} --save-path {OUTPUT}
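
For illustration, a single-GPU invocation with the placeholders filled in might look like the line below; the checkpoint, input, and output paths are hypothetical examples, not files shipped with the repo:

CUDA_VISIBLE_DEVICES=0 python -m experiments.segmentation.test_single_image --dataset pcontext \
    --model encnet --jpu JPU \
    --backbone resnet50 --resume encnet_jpu_res50_pcontext.pth.tar --input-path input.jpg --save-path output.png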

2020-04-15: A new joint upsampling module is now available!

  • --jpu [JPU|JPU_X]: JPU is the original module in the arXiv paper; JPU_X is a pyramid version of JPU.

2020-02-20: FastFCN now runs on any OS with PyTorch >= 1.1.0 and Python 3.

  • All C/C++ extensions have been replaced with pure Python extensions.

Version

  1. Original code, producing the results reported in the arXiv paper. [branch:v1.0.0]
  2. Pure PyTorch code, with torch.nn.DistributedDataParallel and torch.nn.SyncBatchNorm. [branch:latest]
  3. Pure Python code. [branch:master]

Overview

Framework

Joint Pyramid Upsampling (JPU)
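
At a high level, JPU reduces the last three backbone feature maps to a common width, upsamples them to the largest (stride-8) resolution, and runs the concatenation through parallel separable convolutions with different dilation rates before fusing the branches. The snippet below is a minimal PyTorch sketch of that idea only; the channel widths, dilation rates (1, 2, 4, 8), and class names are illustrative and are not claimed to match the module shipped in this repo.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SeparableConv2d(nn.Module):
        """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""
        def __init__(self, in_ch, out_ch, dilation=1):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                       dilation=dilation, groups=in_ch, bias=False)
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
            self.bn = nn.BatchNorm2d(out_ch)

        def forward(self, x):
            return F.relu(self.bn(self.pointwise(self.depthwise(x))))

    class JPUSketch(nn.Module):
        """Fuse conv3/conv4/conv5 features, then apply parallel dilated convolutions."""
        def __init__(self, in_channels=(512, 1024, 2048), width=512):
            super().__init__()
            # reduce each input feature map to `width` channels
            self.reduce = nn.ModuleList(
                nn.Sequential(nn.Conv2d(c, width, 3, padding=1, bias=False),
                              nn.BatchNorm2d(width), nn.ReLU(inplace=True))
                for c in in_channels)
            # parallel separable convolutions with increasing dilation rates
            self.dilated = nn.ModuleList(
                SeparableConv2d(3 * width, width, dilation=d) for d in (1, 2, 4, 8))

        def forward(self, c3, c4, c5):
            feats = [r(c) for r, c in zip(self.reduce, (c3, c4, c5))]
            size = feats[0].shape[2:]  # stride-8 resolution of conv3
            feats = [feats[0]] + [F.interpolate(f, size=size, mode='bilinear',
                                                align_corners=True) for f in feats[1:]]
            x = torch.cat(feats, dim=1)
            # concatenating the four dilation branches yields a high-resolution feature map
            return torch.cat([branch(x) for branch in self.dilated], dim=1)

A segmentation head then consumes this fused stride-8 output in place of the feature map a dilated backbone would have produced.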

Install

  1. PyTorch >= 1.1.0 (Note: the code is tested with python=3.6 and cuda=9.0)
  2. Download FastFCN
    git clone https://github.com/wuhuikai/FastFCN.git
    cd FastFCN
    
  3. Install Requirements
    nose
    tqdm
    scipy
    cython
    requests
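
    Assuming a standard pip setup, these can be installed in one step, for example:

    pip install nose tqdm scipy cython requests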
    

Train and Test

PContext

python -m scripts.prepare_pcontext
Method                Backbone    mIoU        FPS    Model        Scripts
EncNet                ResNet-50   49.91       18.77
EncNet+JPU (ours)     ResNet-50   51.05       37.56  GoogleDrive  bash
PSP                   ResNet-50   50.58       18.08
PSP+JPU (ours)        ResNet-50   50.89       28.48  GoogleDrive  bash
DeepLabV3             ResNet-50   49.19       15.99
DeepLabV3+JPU (ours)  ResNet-50   50.07       20.67  GoogleDrive  bash
EncNet                ResNet-101  52.60 (MS)  10.51
EncNet+JPU (ours)     ResNet-101  54.03 (MS)  32.02  GoogleDrive  bash
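
The bash entries above refer to the training/testing scripts under experiments/segmentation/scripts (for example encnet_res50_pcontext.sh). A typical training invocation, as also quoted in the issue threads below, looks like:

CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --dataset pcontext --model encnet --jpu --aux --se-loss --backbone resnet101 --checkname encnet_res101_pcontext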

ADE20K

python -m scripts.prepare_ade20k

Training Set

Method             Backbone    mIoU (MS)  Model        Scripts
EncNet             ResNet-50   41.11
EncNet+JPU (ours)  ResNet-50   42.75      GoogleDrive  bash
EncNet             ResNet-101  44.65
EncNet+JPU (ours)  ResNet-101  44.34      GoogleDrive  bash

Training Set + Val Set

Method             Backbone    FinalScore (MS)  Model        Scripts
EncNet+JPU (ours)  ResNet-50                    GoogleDrive  bash
EncNet             ResNet-101  55.67
EncNet+JPU (ours)  ResNet-101  55.84            GoogleDrive  bash

Note: EncNet (ResNet-101) is trained with crop_size=576, while EncNet+JPU (ResNet-101) is trained with crop_size=480 so that 4 images fit on a 12GB GPU.

Visual Results

(Figure: qualitative comparisons of Input, GT, EncNet, and Ours on PContext and ADE20K.)

More Visual Results

Acknowledgement

Code borrows heavily from PyTorch-Encoding.

Comments
  • Some problems when running test.py and train.py

    Hi, I am a beginner in deep learning and ran into some problems when running the code. First, I used the command `tar -xvf encnet_jpu_res50_pcontext.pth.tar` to extract the tar file, but it fails. Second, once I do manage to extract the file and get the checkpoint, which directory should I put it in, i.e. where should I extract the checkpoint file to? Thank you!

    opened by pp00704831 18
  • Why can I still train the model after removing JPU?

    Why does the code still execute without error when I delete the JPU module (/FastFCN/encoding/nn/customize.py)? I can still train a model. This is my command (I did load the JPU module): CUDA_VISIBLE_DEVICES=4,5,6,7 python train.py --dataset pcontext --model encnet --jpu --aux --se-loss --backbone resnet101 --checkname encnet_res101_pcontext

    opened by E18301194 17
  • Segmentation fault


    I think this problem is caused by my earlier PyTorch problem, so maybe I have to fix PyTorch first. Could you give me some help? gcc: 4.8, pytorch: 1.1.0, python: 3.5. Also, how can I change the PyTorch version to 1.0.0? pip install torch==1.0?

    opened by Anikily 12
  • Performance Issue


    Thanks for your work. I have tried this script: https://github.com/wuhuikai/FastFCN/blob/master/experiments/segmentation/scripts/encnet_res50_pcontext.sh with the following hardware and software: 4x Titan Xp, Ubuntu 16.04, CUDA 9.0, PyTorch 1.0.

    But I can't reproduce the performance reported in your paper. I got pixAcc: 0.7747, mIoU: 0.4785 for single-scale, and pixAcc: 0.7833, mIoU: 0.4898 for multi-scale.

    I would appreciate your help. Thanks for your consideration.

    bug 
    opened by tonysy 12
  • FastFCN is now supported by MMSegmentation

    Hi, FastFCN is now supported by MMSegmentation. We find that using JPU with smaller feature maps from the backbone can achieve similar or higher performance than the original models with larger feature maps.

    There is still something for us to do; for example, we do not see an obvious FPS improvement in our implementation, so we will try to figure that out in the future.

    Anyway, thanks for your work, and we hope more people from the community will use FastFCN.

    Best,

    opened by MengzhangLI 9
  • RuntimeError: Failed downloading


    Hi, thanks for your work. I tried to run your code to train a model on the PASCAL Context dataset, but I got the following error: RuntimeError: Failed downloading url https://hangzh.s3.amazonaws.com/encoding/models/resnet50-ebb6acbb.zip. The problem is that I cannot download the pretrained model, and the author no longer provides the pretrained ResNet models: https://github.com/zhanghang1989/PyTorch-Encoding/issues/273

    How can I solve this problem? Thanks for your consideration.

    opened by bufferXia 9
  • How could I set "resume" while running test_single_image?

    Hello!

    When running test_single_image.py, I tried to set resume to the path of resnet101-2a57e44d.pth and encountered an error:

    File "G:/gitfolder/FastFCN/experiments/segmentation/test_single_image.py", line 43, in test model.load_state_dict(checkpoint['state_dict'], strict=False) KeyError: 'state_dict

    I suspect there is a problem with how I set "resume". Waiting for your reply.

    Thank you!

    opened by CN-HaoJiang 8
  • Questions about the SE-loss and Aux-loss

    Hi, first, thank you for the great work. I have checked the code and run some scripts. I am confused by the final loss, which is composed of three individual losses. Could you tell me what the SE-loss and the aux-loss are used for?

    opened by meanmee 7
  • Backbone weights download links not working anymore


    Download links for the backbone do not seem to work anymore.

    I've tested with ResNet-50 (https://hangzh.s3.amazonaws.com/encoding/models/resnet50-ebb6acbb.zip) and ResNet-101 (https://hangzh.s3.amazonaws.com/encoding/models/resnet101-2a57e44d.zip).

    I also tried to use torchvision weights instead, but I got matching errors when trying to load them.

    Could you consider reuploading the weights? That would be very helpful!

    opened by Khroto 6
  • Segmentation Fault


    I ran the following command to train the model, but a segmentation fault occurred. Has anyone else hit this problem? Thanks for the help!

    run : CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --dataset pcontext --model encnet --jpu --aux --se-loss --backbone resnet101 --checkname encnet_res101_pcontext

    crashed:
        Using poly LR Scheduler! Starting Epoch: 0 Total Epoches: 80
        0%| | 0/312 [00:00<?, ?it/s]
        =>Epoches 0, learning rate = 0.0010, previous best = 0.0000
        Segmentation fault

    Hardware: Nvidia GPU: Tesla P100-PCIE 16G x 4; CPU: GenuineIntel x 18; memory: 140G in total.

    opened by SimonTsungHanKuo 6
  • Need your suggestions


    Hi, I have designed this SPP module for my network, but I am also interested in your work and would like to replace my module with JPU. Would you give me any suggestions? Here is my implementation:

    import torch.nn as nn
    import torch.nn.functional as F

    class SPP(nn.Module):
        def __init__(self, pool_sizes):
            super(SPP, self).__init__()
            self.pool_sizes = pool_sizes

        def forward(self, x):
            h, w = x.shape[2:]
            k_sizes = []
            strides = []
            for pool_size in self.pool_sizes:
                # each level pools the input down to roughly pool_size x pool_size
                k_sizes.append((int(h / pool_size), int(w / pool_size)))
                strides.append((int(h / pool_size), int(w / pool_size)))

            spp_sum = x

            for i in range(len(self.pool_sizes)):
                out = F.avg_pool2d(x, k_sizes[i], stride=strides[i], padding=0)
                # F.upsample is deprecated in recent PyTorch; F.interpolate is the drop-in replacement
                out = F.upsample(out, size=(h, w), mode="bilinear")
                spp_sum = spp_sum + out

            return spp_sum
    
    opened by haideralimughal 5
  • add resnest and xception65


    ResNeSt and Xception65 are copied from PyTorch-Encoding; Xception65 can only be used without pretrained models.

    Please be careful, as there are many changes!

    I tested it on my own server and everything seems OK. As a precaution, maybe you could test it yourself first: My FastFCN

    I didn't change the README.md and the *.sh scripts. Maybe you can update them if you accept this request.

    If server resources are not tight, I will run encnet+jpu+resnest101+pcontext and encnet+jpu_x+resnest101+pcontext, and I will share the results in the issues or open another pull request updating the README.md with my pth.tar.

    Thanks for your work again.

    opened by tjj1998 1