Densely Connected Convolutional Networks (DenseNets)

This repository contains the code for DenseNet introduced in the following paper

Densely Connected Convolutional Networks (CVPR 2017, Best Paper Award)

Gao Huang*, Zhuang Liu*, Laurens van der Maaten and Kilian Weinberger (* Authors contributed equally).

and its journal version

Convolutional Networks with Dense Connectivity (TPAMI 2019)

Gao Huang, Zhuang Liu, Geoff Pleiss, Laurens van der Maaten and Kilian Weinberger.

Now with a memory-efficient implementation! Please check the technical report and code for more information.

The code is built on fb.resnet.torch.

Citation

If you find DenseNet useful in your research, please consider citing:

@article{huang2019convolutional,
  title={Convolutional Networks with Dense Connectivity},
  author={Huang, Gao and Liu, Zhuang and Pleiss, Geoff and Van Der Maaten, Laurens and Weinberger, Kilian},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2019}
}

@inproceedings{huang2017densely,
  title={Densely Connected Convolutional Networks},
  author={Huang, Gao and Liu, Zhuang and van der Maaten, Laurens and Weinberger, Kilian Q.},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2017}
}

Other Implementations

  1. Our [Caffe]
  2. Our memory-efficient [Caffe]
  3. Our memory-efficient [PyTorch]
  4. [PyTorch] by Andreas Veit
  5. [PyTorch] by Brandon Amos
  6. [PyTorch] by Federico Baldassarre
  7. [MXNet] by Nicatio
  8. [MXNet] by Xiong Lin
  9. [MXNet] by miraclewkf
  10. [Tensorflow] by Yixuan Li
  11. [Tensorflow] by Laurent Mazare
  12. [Tensorflow] by Illarion Khlestov
  13. [Lasagne] by Jan Schlüter
  14. [Keras] by tdeboissiere
  15. [Keras] by Roberto de Moura Estevão Filho
  16. [Keras] by Somshubra Majumdar
  17. [Chainer] by Toshinori Hanya
  18. [Chainer] by Yasunori Kudo
  19. [Torch 3D-DenseNet] by Barry Kui
  20. [Keras] by Christopher Masch
  21. [Tensorflow2] by Gaston Rios and Ulises Jeremias Cornejo Fandos

Note that we only listed some early implementations here. If you would like to add yours, please submit a pull request.

Some Follow-up Projects

  1. Multi-Scale Dense Convolutional Networks for Efficient Prediction
  2. DSOD: Learning Deeply Supervised Object Detectors from Scratch
  3. CondenseNet: An Efficient DenseNet using Learned Group Convolutions
  4. Fully Convolutional DenseNets for Semantic Segmentation
  5. Pelee: A Real-Time Object Detection System on Mobile Devices

Contents

  1. Introduction
  2. Usage
  3. Results on CIFAR
  4. Results on ImageNet and Pretrained Models
  5. Updates

Introduction

DenseNet is a network architecture in which each layer is directly connected to every other layer within the same dense block, in a feed-forward fashion. For each layer, the feature maps of all preceding layers are treated as separate inputs, whereas its own feature maps are passed on as inputs to all subsequent layers. This connectivity pattern yields state-of-the-art accuracies on CIFAR-10/100 (with or without data augmentation) and SVHN. On the large-scale ILSVRC 2012 (ImageNet) dataset, DenseNet achieves accuracy similar to ResNet while using fewer than half the parameters and roughly half the FLOPs.

Figure 1: A dense block with 5 layers and growth rate 4.

Figure 2: A deep DenseNet with three dense blocks.
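The core idea behind this connectivity pattern is simple: each layer produces k (the growth rate) new feature maps and concatenates them with its input along the channel dimension. As a minimal sketch only (this is not the repository's densenet.lua, which additionally handles bottleneck layers, transition layers and memory optimizations; the helper names below are made up for illustration), a basic dense layer and dense block could be written in Torch as:

require 'nn'

-- One basic dense layer: BN-ReLU-Conv(3x3) producing growthRate new feature
-- maps, concatenated with the input along the channel dimension
-- (dimension 2 for batched NCHW tensors).
local function denseLayer(nChannels, growthRate)
   local branch = nn.Sequential()
      :add(nn.SpatialBatchNormalization(nChannels))
      :add(nn.ReLU(true))
      :add(nn.SpatialConvolution(nChannels, growthRate, 3, 3, 1, 1, 1, 1))
   return nn.Concat(2)
      :add(nn.Identity())   -- pass the input feature maps through unchanged
      :add(branch)          -- append the newly produced feature maps
end

-- A dense block stacks such layers; the channel count grows by growthRate
-- after every layer, so each layer receives the feature maps of all its
-- predecessors.
local function denseBlock(nChannels, growthRate, nLayers)
   local block = nn.Sequential()
   for _ = 1, nLayers do
      block:add(denseLayer(nChannels, growthRate))
      nChannels = nChannels + growthRate
   end
   return block, nChannels
end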

Usage

  1. Install Torch and required dependencies like cuDNN. See the instructions here for a step-by-step guide.
  2. Clone this repo: git clone https://github.com/liuzhuang13/DenseNet.git

As an example, the following command trains a DenseNet-BC with depth L=100 and growth rate k=12 on CIFAR-10:

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 100 -growthRate 12

As another example, the following command trains a DenseNet-BC with depth L=121 and growth rate k=32 on ImageNet:

th main.lua -netType densenet -dataset imagenet -data [dataFolder] -batchSize 256 -nEpochs 90 -depth 121 -growthRate 32 -nGPU 4 -nThreads 16 -optMemory 3

Please refer to fb.resnet.torch for data preparation.

DenseNet and DenseNet-BC

By default, the code runs with the DenseNet-BC architecture, which has 1x1 convolutional bottleneck layers and compresses the number of feature maps at each transition layer by a factor of 0.5. To run the original DenseNet instead, simply add the options -bottleneck false and -reduction 1, as in the example below.
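For instance, the CIFAR-10 command above would become (the remaining hyperparameters are simply carried over from that example):

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 100 -growthRate 12 -bottleneck false -reduction 1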

Memory-efficient implementation (newly added feature on June 6, 2017)

The option -optMemory is very useful for reducing the GPU memory footprint when training a DenseNet. By default, it is set to 2, which activates the shareGradInput function (with small modifications from here). There are two even more memory-efficient modes (-optMemory 3 and -optMemory 4) which use a customized densely connected layer. With -optMemory 4, the largest 190-layer DenseNet-BC on CIFAR can be trained on a single NVIDIA TITAN X GPU (using 8.3 GB of its 12 GB memory), whereas the standard (recursive concatenation) implementation requires four such GPUs.
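For example, the following command (a sketch assembled from the CIFAR-10 example above and the depth/growth rate of the largest model in the results table below) trains the 190-layer DenseNet-BC in the most memory-efficient mode:

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 190 -growthRate 40 -optMemory 4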

More details about the memory efficient implementation are discussed here.

Results on CIFAR

The table below shows the results of DenseNets on the CIFAR datasets. A "+" suffix denotes standard data augmentation (random crop after zero-padding, and horizontal flip). For a DenseNet model, L denotes its depth and k its growth rate. On CIFAR-10 and CIFAR-100 without data augmentation, a Dropout layer with drop rate 0.2 is introduced after each convolutional layer except the very first one.

Model Parameters CIFAR-10 CIFAR-10+ CIFAR-100 CIFAR-100+
DenseNet (L=40, k=12) 1.0M 7.00 5.24 27.55 24.42
DenseNet (L=100, k=12) 7.0M 5.77 4.10 23.79 20.20
DenseNet (L=100, k=24) 27.2M 5.83 3.74 23.42 19.25
DenseNet-BC (L=100, k=12) 0.8M 5.92 4.51 24.15 22.27
DenseNet-BC (L=250, k=24) 15.3M 5.19 3.62 19.64 17.60
DenseNet-BC (L=190, k=40) 25.6M - 3.46 - 17.18

Results on ImageNet and Pretrained Models

Torch

Models in the original paper

The Torch models are trained under the same setting as in fb.resnet.torch. The error rates shown are 224x224 1-crop test errors.

Network Top-1 error Torch Model
DenseNet-121 (k=32) 25.0 Download (64.5MB)
DenseNet-169 (k=32) 23.6 Download (114.4MB)
DenseNet-201 (k=32) 22.5 Download (161.8MB)
DenseNet-161 (k=48) 22.2 Download (230.8MB)

Models in the tech report

These models are more accurate and were trained with the memory-efficient implementation, as described in the technical report.

Network Top-1 error Torch Model
DenseNet-264 (k=32) 22.1 Download (256MB)
DenseNet-232 (k=48) 21.2 Download (426MB)
DenseNet-cosine-264 (k=32) 21.6 Download (256MB)
DenseNet-cosine-264 (k=48) 20.4 Download (557MB)
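The downloaded files are standard Torch checkpoints. As a rough sketch (the filename below is hypothetical, and the cunn/cudnn requires are only needed if the checkpoint contains CUDA/cuDNN modules), a pretrained model can be loaded for inference as:

require 'torch'
require 'nn'
require 'cunn'    -- only needed for checkpoints containing CUDA modules
require 'cudnn'   -- only needed for checkpoints containing cuDNN modules

local model = torch.load('densenet-121.t7')  -- hypothetical filename
model:evaluate()                             -- put BN/Dropout into test mode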

Caffe

https://github.com/shicai/DenseNet-Caffe.

PyTorch

See the PyTorch documentation on models. We would like to thank @gpleiss for this nice work in PyTorch.

Keras, Tensorflow and Theano

https://github.com/flyyufelix/DenseNet-Keras.

MXNet

https://github.com/miraclewkf/DenseNet.

Wide-DenseNet for better Time/Accuracy and Memory/Accuracy Tradeoff

If you use DenseNet as a model for your learning task, we recommend a wide and shallow DenseNet to reduce memory and time consumption, following the strategy of Wide Residual Networks. To obtain a wide DenseNet, set the depth smaller (e.g., L=40) and the growth rate larger (e.g., k=48), as in the command below.
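For instance, adapting the CIFAR-10 command from the Usage section (the other hyperparameters are simply carried over from that example):

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 40 -growthRate 48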

We tested a set of Wide-DenseNet-BCs and compared their memory and time consumption with the DenseNet-BC (L=100, k=12) shown above. The statistics were obtained on a single TITAN X card, with batch size 64, and without any memory optimization.

Model Parameters CIFAR-10+ CIFAR-100+ Time per Iteration Memory
DenseNet-BC (L=100, k=12) 0.8M 4.51 22.27 0.156s 5452MB
Wide-DenseNet-BC (L=40, k=36) 1.5M 4.58 22.30 0.130s 4008MB
Wide-DenseNet-BC (L=40, k=48) 2.7M 3.99 20.29 0.165s 5245MB
Wide-DenseNet-BC (L=40, k=60) 4.3M 4.01 19.99 0.223s 6508MB

Observations:

  1. Wide-DenseNet-BC (L=40, k=36) uses less memory/time while achieving about the same accuracy as DenseNet-BC (L=100, k=12).
  2. Wide-DenseNet-BC (L=40, k=48) uses about the same memory/time as DenseNet-BC (L=100, k=12), while being much more accurate.

Thus, for practical use, we suggest picking one of the Wide-DenseNet-BC models above.

Updates

12/10/2019:

  1. Journal version (accepted by IEEE TPAMI) released.

08/23/2017:

  1. Add supporting code, so one can simply git clone and run.

06/06/2017:

  1. Support ultra memory efficient training of DenseNet with customized densely connected layer.

  2. Support memory efficient training of DenseNet with standard densely connected layer (recursive concatenation) by fixing the shareGradInput function.

05/17/2017:

  1. Add Wide-DenseNet.
  2. Add Keras, TensorFlow and Theano links for pretrained models.

04/20/2017:

  1. Add usage of models in PyTorch.

03/29/2017:

  1. Add the code for ImageNet training.

12/03/2016:

  1. Add ImageNet results and pretrained models.
  2. Add DenseNet-BC structures.

Contact

liuzhuangthu at gmail.com
gaohuang at tsinghua.edu.cn
Any discussions, suggestions and questions are welcome!
