Deep Networks with Recurrent Layer Aggregation


RLA-Net: Recurrent Layer Aggregation

Recurrence along Depth: Deep Networks with Recurrent Layer Aggregation

This is an implementation of RLA-Net, accepted at NeurIPS 2021 (paper: arXiv:2110.11852).

(Figure: RLA-Net)

Introduction

This paper introduces a concept of layer aggregation to describe how information from previous layers can be reused to better extract features at the current layer. DenseNet is a typical example of this mechanism, but its redundancy has been commonly criticized in the literature. This motivates us to propose a very lightweight module, called recurrent layer aggregation (RLA), that makes use of the sequential structure of layers in a deep CNN. Our RLA module is compatible with many mainstream deep CNNs, including ResNets, Xception and MobileNetV2, and its effectiveness is verified by extensive experiments on image classification, object detection and instance segmentation tasks. Specifically, improvements can be uniformly observed on the CIFAR, ImageNet and MS COCO datasets, and the corresponding RLA-Nets can surprisingly boost performance by 2-3% on the object detection task. This evidences the power of our RLA module in helping main CNNs better learn structural information in images.

RLA module

(Figure: the RLA module)
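
The gist of the module can be sketched in a few lines of PyTorch. The following is an illustrative sketch only, not the implementation in this repo: a small hidden state h is carried along the stack of blocks, concatenated to each block's input, and updated recurrently with weights shared across layers (all names and sizes below are assumptions).

import torch
import torch.nn as nn

class RLASketch(nn.Module):
    def __init__(self, blocks, channels, k=32):
        super().__init__()
        # main CNN blocks; each is assumed to map (channels + k) input
        # channels to `channels` output channels at the same resolution
        self.blocks = nn.ModuleList(blocks)
        # shared (recurrent) hidden-state update, analogous to an RNN cell
        self.update = nn.Conv2d(channels + k, k, kernel_size=3, padding=1)
        self.k = k

    def forward(self, x):
        b, _, hgt, wid = x.shape
        h = x.new_zeros(b, self.k, hgt, wid)                    # initial hidden state
        for block in self.blocks:
            x = block(torch.cat([x, h], dim=1))                 # block reuses aggregated info
            h = torch.tanh(self.update(torch.cat([x, h], dim=1)))  # recurrent state update
        return x, h

# toy usage with resolution-preserving blocks
blocks = [nn.Conv2d(64 + 32, 64, 3, padding=1) for _ in range(3)]
net = RLASketch(blocks, channels=64, k=32)
out, state = net(torch.randn(2, 64, 56, 56))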

Changelog

  • 2021/04/06 Upload RLA-ResNet model.
  • 2021/04/16 Upload RLA-MobileNetV2 (depthwise separable conv version) model.
  • 2021/09/29 Upload all ablation studies on ImageNet.
  • 2021/09/30 Upload mmdetection files.
  • 2021/10/01 Upload pretrained weights.

Installation

Requirements

Our environments

  • OS: Linux Red Hat 4.8.5
  • CUDA: 10.2
  • Toolkit: Python 3.8.5, PyTorch 1.7.0, torchvision 0.8.1
  • GPU: Tesla V100

Please refer to get_started.md for more details about installation.

Quick Start

Train with ResNet

- Use single node or multi node with multiple GPUs

Use multi-processing distributed training to launch N processes per node, where N is the number of GPUs on that node. This is the fastest way to use PyTorch for either single-node or multi-node data-parallel training.

python train.py -a {model_name} --b {batch_size} --multiprocessing-distributed --world-size 1 --rank 0 {imagenet-folder with train and val folders}
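
For example, to train RLA-ResNet50 on one node with all visible GPUs (the model name resnet50_rla is an assumption based on the pretrained-weight names below; adjust it to the names registered in this repo):

python train.py -a resnet50_rla --b 256 --multiprocessing-distributed --world-size 1 --rank 0 /path/to/imagenet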

- Specify single GPU or multiple GPUs

CUDA_VISIBLE_DEVICES={device_ids} python train.py -a {model_name} --b {batch_size} --multiprocessing-distributed --world-size 1 --rank 0 {imagenet-folder with train and val folders}

Testing

To evaluate the best model

python train.py -a {model_name} --b {batch_size} --multiprocessing-distributed --world-size 1 --rank 0 --resume {path to the best model} -e {imagenet-folder with train and val folders}
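
For example, assuming the best checkpoint was saved as model_best.pth.tar (checkpoint file name and model name are placeholders):

python train.py -a resnet50_rla --b 256 --multiprocessing-distributed --world-size 1 --rank 0 --resume model_best.pth.tar -e /path/to/imagenet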

Visualizing the training result

To generate the accuracy and loss plots (acc_plot, loss_plot):

python eval_visual.py --log-dir {log_folder}

Train with MobileNet_v2

The commands are the same as for ResNet above; just replace train.py with train_light.py.
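
For example, using the model name dsrla_mobilenetv2_k32 (an assumption based on the pretrained-weight names below):

python train_light.py -a dsrla_mobilenetv2_k32 --b 256 --multiprocessing-distributed --world-size 1 --rank 0 /path/to/imagenet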

Compute the parameters and FLOPs

If you have installed thop, you can run paras_flops.py to compute the parameters and FLOPs of our models. The usage is as follows:

python paras_flops.py -a {model_name}
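
Alternatively, the following is a minimal sketch of such a computation using thop's standard profile API; the import path of the model constructor is an assumption, so use the model definitions in this repo:

import torch
from thop import profile
from models import resnet50_rla  # hypothetical import path; check this repo's model files

model = resnet50_rla()
x = torch.randn(1, 3, 224, 224)              # one ImageNet-sized input
macs, params = profile(model, inputs=(x,))   # thop reports multiply-accumulates and parameter count
print('FLOPs: %.2fG, Params: %.2fM' % (macs / 1e9, params / 1e6))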

More examples are shown in examples.md.

MMDetection

After installing MMDetection (see get_started.md), do the following:

  • put the file resnet_rla.py in the folder './mmdetection/mmdet/models/backbones/', and do not forget to import the model in the __init__.py file (see the sketch after this list)
  • put the config files (e.g. faster_rcnn_r50rla_fpn.py) in the folder './mmdetection/configs/_base_/models/'
  • put the config files (e.g. faster_rcnn_r50rla_fpn_1x_coco.py) in the folder './mmdetection/configs/faster_rcnn/'
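
For the first step, the addition to './mmdetection/mmdet/models/backbones/__init__.py' looks roughly like this; the class name RLA_ResNet is an assumption, so use whatever name is defined in resnet_rla.py:

# in ./mmdetection/mmdet/models/backbones/__init__.py
from .resnet_rla import RLA_ResNet  # class name assumed; check resnet_rla.py

# and add it to the existing __all__ list, e.g.
__all__ = ['ResNet', 'RLA_ResNet']  # keep all other existing backbones listed here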

Note that the config files of the latest version of MMDetection differ slightly; please modify the config files according to the latest format.

Experiments

ImageNet

Model | Param. | FLOPs | Top-1 err. (%) | Top-5 err. (%) | BaiduDrive (models) | Extract code | GoogleDrive
RLA-ResNet50 | 24.67M | 4.17G | 22.83 | 6.58 | resnet50_rla_2283 | 5lf1 | resnet50_rla_2283
RLA-ECANet50 | 24.67M | 4.18G | 22.15 | 6.11 | ecanet50_rla_2215 | xrfo | ecanet50_rla_2215
RLA-ResNet101 | 42.92M | 7.79G | 21.48 | 5.80 | resnet101_rla_2148 | zrv5 | resnet101_rla_2148
RLA-ECANet101 | 42.92M | 7.80G | 21.00 | 5.51 | ecanet101_rla_2100 | vhpy | ecanet101_rla_2100
RLA-MobileNetV2 | 3.46M | 351.8M | 27.62 | 9.18 | dsrla_mobilenetv2_k32_2762 | g1pm | dsrla_mobilenetv2_k32_2762
RLA-ECA-MobileNetV2 | 3.46M | 352.4M | 27.07 | 8.89 | dsrla_mobilenetv2_k32_eca_2707 | 9orl | dsrla_mobilenetv2_k32_eca_2707

COCO 2017

Model | AP | AP_50 | AP_75 | BaiduDrive (models) | Extract code | GoogleDrive
Faster_R-CNN_resnet50_rla | 38.8 | 59.6 | 42.0 | faster_rcnn_r50rla_fpn_1x_coco_388 | q5c8 | faster_rcnn_r50rla_fpn_1x_coco_388
Faster_R-CNN_ecanet50_rla | 39.8 | 61.2 | 43.2 | faster_rcnn_r50rlaeca_fpn_1x_coco_398 | f5xs | faster_rcnn_r50rlaeca_fpn_1x_coco_398
Faster_R-CNN_resnet101_rla | 41.2 | 61.8 | 44.9 | faster_rcnn_r101rla_fpn_1x_coco_412 | 0ri3 | faster_rcnn_r101rla_fpn_1x_coco_412
Faster_R-CNN_ecanet101_rla | 42.1 | 63.3 | 46.1 | faster_rcnn_r101rlaeca_fpn_1x_coco_421 | cpug | faster_rcnn_r101rlaeca_fpn_1x_coco_421
RetinaNet_resnet50_rla | 37.9 | 57.0 | 40.8 | retinanet_r50rla_fpn_1x_coco_379 | lahj | retinanet_r50rla_fpn_1x_coco_379
RetinaNet_ecanet50_rla | 39.0 | 58.7 | 41.7 | retinanet_r50rlaeca_fpn_1x_coco_390 | adyd | retinanet_r50rlaeca_fpn_1x_coco_390
RetinaNet_resnet101_rla | 40.3 | 59.8 | 43.5 | retinanet_r101rla_fpn_1x_coco_403 | p8y0 | retinanet_r101rla_fpn_1x_coco_403
RetinaNet_ecanet101_rla | 41.5 | 61.6 | 44.4 | retinanet_r101rlaeca_fpn_1x_coco_415 | hdqx | retinanet_r101rlaeca_fpn_1x_coco_415
Mask_R-CNN_resnet50_rla | 39.5 | 60.1 | 43.3 | mask_rcnn_r50rla_fpn_1x_coco_395 | j1x6 | mask_rcnn_r50rla_fpn_1x_coco_395
Mask_R-CNN_ecanet50_rla | 40.6 | 61.8 | 44.0 | mask_rcnn_r50rlaeca_fpn_1x_coco_406 | c08r | mask_rcnn_r50rlaeca_fpn_1x_coco_406
Mask_R-CNN_resnet101_rla | 41.8 | 62.3 | 46.2 | mask_rcnn_r101rla_fpn_1x_coco_418 | 8bsn | mask_rcnn_r101rla_fpn_1x_coco_418
Mask_R-CNN_ecanet101_rla | 42.9 | 63.6 | 46.9 | mask_rcnn_r101rlaeca_fpn_1x_coco_429 | 3kmz | mask_rcnn_r101rlaeca_fpn_1x_coco_429

Citation

@misc{zhao2021recurrence,
      title={Recurrence along Depth: Deep Convolutional Neural Networks with Recurrent Layer Aggregation}, 
      author={Jingyu Zhao and Yanwen Fang and Guodong Li},
      year={2021},
      eprint={2110.11852},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Questions

Please contact '[email protected]' or '[email protected]'.
