Oriented Response Networks, in CVPR 2017

Overview

Oriented Response Networks

[Home] [Project] [Paper] [Supp] [Poster]

[illustration]

Torch Implementation

The torch branch contains:

  • the official torch implementation of ORN.
  • the MNIST-Variants demo.

Please follow the instructions below to install it and run the experiment demo.

Prerequisites

  • Linux (tested on Ubuntu 14.04 LTS)
  • NVIDIA GPU + CUDA + cuDNN (CPU mode and CUDA-without-cuDNN mode are also available, but significantly slower)
  • Torch7

Getting started

You can set up everything via a single command, wget -O - https://git.io/vHCMI | bash, or do it manually in case something goes wrong:

  1. install the dependencies (required by the demo code):

  2. clone the torch branch:

    # git version must be greater than 1.9.10
    git clone https://github.com/ZhouYanzhao/ORN.git -b torch --single-branch ORN.torch
    cd ORN.torch
    export DIR=$(pwd)
  3. install ORN:

    cd $DIR/install
    # install the CPU/GPU/cuDNN version of ORN.
    bash install.sh
  4. unzip the MNIST dataset:

    cd $DIR/demo/datasets
    unzip MNIST
  5. run the MNIST-Variants demo:

    cd $DIR/demo
    # you can modify the script to test different hyper-parameters
    bash ./scripts/Train_MNIST.sh

Troubleshooting

If you run into "'cudnn.find' not found", update Torch7 to the latest version via cd <TORCH_DIR> && bash ./update.sh, then re-install everything.

More experiments

CIFAR 10/100

You can train the OR-WideResNet model (converted from WideResNet by simply replacing its Conv layers with ORConv layers) on the CIFAR datasets with WRN:

dataset=cifar10_original.t7 model=or-wrn widen_factor=4 depth=40 ./scripts/train_cifar.sh

With exactly the same training settings, the ORN-augmented WideResNets achieve state-of-the-art results while using roughly half the parameters of their WRN counterparts (see Table 1 and the quick check below it).

Network | Params | CIFAR-10 (ZCA) | CIFAR-10 (mean/std) | CIFAR-100 (ZCA) | CIFAR-100 (mean/std)
DenseNet-100-12-dropout | 7.0M | - | 4.10 | - | 20.20
DenseNet-190-40-dropout | 25.6M | - | 3.46 | - | 17.18
WRN-40-4 | 8.9M | 4.97 | 4.53 | 22.89 | 21.18
WRN-28-10-dropout | 36.5M | 4.17 | 3.89 | 20.50 | 18.85
WRN-40-10-dropout | 55.8M | - | 3.80 | - | 18.3
ORN-40-4(1/2) | 4.5M | 4.13 | 3.43 | 21.24 | 18.82
ORN-28-10(1/2)-dropout | 18.2M | 3.52 | 2.98 | 19.22 | 16.15

Table 1. Test error (%) on the CIFAR-10/100 datasets with flip/translation augmentation.
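
For a quick sense of these savings, the ratios can be read directly off Table 1; a small Python check (using the CIFAR-10 mean/std column):

# parameter/error pairs taken from Table 1 (CIFAR-10, mean/std preprocessing)
pairs = {
    "WRN-40-4 vs ORN-40-4(1/2)":                 {"wrn": (8.9e6, 4.53),  "orn": (4.5e6, 3.43)},
    "WRN-28-10-dropout vs ORN-28-10(1/2)-dropout": {"wrn": (36.5e6, 3.89), "orn": (18.2e6, 2.98)},
}
for name, m in pairs.items():
    (wrn_params, wrn_err), (orn_params, orn_err) = m["wrn"], m["orn"]
    print(f"{name}: params x{orn_params / wrn_params:.2f}, error {wrn_err}% -> {orn_err}%")
# WRN-40-4 vs ORN-40-4(1/2):                    params x0.51, error 4.53% -> 3.43%
# WRN-28-10-dropout vs ORN-28-10(1/2)-dropout:  params x0.50, error 3.89% -> 2.98%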

ImageNet

ILSVRC2012

The effectiveness of ORN is further verified on large-scale data. The OR-ResNet-18 model, upgraded from ResNet-18, yields significantly better performance with a similar number of parameters.

Network | Params | Top-1 Error | Top-5 Error
ResNet-18 | 11.7M | 30.614 | 10.98
OR-ResNet-18 | 11.4M | 28.916 | 9.88

Table 2. Validation error (%) on the ILSVRC-2012 dataset.

You can use facebook.resnet.torch to train the OR-ResNet-18 model from scratch, or fine-tune it on your own data using the pre-trained weights.

-- To fill the model with the pre-trained weights:
model = require('or-resnet.lua')({tensorType='torch.CudaTensor', pretrained='or-resnet18_weights.t7'})
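-- usage sketch (assumes `input` is a preprocessed 1x3x224x224 CudaTensor):
-- local output = model:forward(input)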

A more detailed demo notebook showing how to use the pre-trained OR-ResNet to classify images can be found here.

PyTorch Implementation

The pytorch branch contains:

  • the official pytorch implementation of ORN (alpha version supports 1x1/3x3 ARFs with 4/8 orientation channels only).
  • the MNIST-Variants demo.

Please follow the instructions below to install it and run the experiment demo.

Prerequisites

  • Linux (tested on Ubuntu 14.04 LTS)
  • NVIDIA GPU + CUDA + cuDNN (CPU mode and CUDA-without-cuDNN mode are also available, but significantly slower)
  • PyTorch

Getting started

  1. install the dependencies (required by the demo code):

    • tqdm: pip install tqdm
    • pillow: pip install Pillow
  2. clone the pytorch branch:

    # git version must be greater than 1.9.10
    git clone https://github.com/ZhouYanzhao/ORN.git -b pytorch --single-branch ORN.pytorch
    cd ORN.pytorch
    export DIR=$(pwd)
  3. install ORN:

    cd $DIR/install
    bash install.sh
  4. run the MNIST-Variants demo:

    cd $DIR/demo
    # train ORN on MNIST-rot
    python main.py --use-arf
    # train baseline CNN
    python main.py
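
For intuition about what the --use-arf switch enables, below is a minimal, self-contained sketch of the basic idea behind Active Rotating Filters: a canonical filter is rotated, and each rotated copy is convolved with the input, so every filter produces several orientation channels. This is only an illustration written with plain PyTorch ops (torch.rot90 handles the 4-orientation, 90-degree case); it is not the package's ORConv implementation, which per the paper also circularly shifts the filter's orientation channels and supports 8 orientations via interpolation.

import torch
import torch.nn.functional as F

def arf_conv4(x, weight):
    # x: (N, C_in, H, W); weight: (C_out, C_in, 3, 3) canonical filters.
    # Returns (N, C_out * 4, H, W): one orientation channel per 90-degree rotation.
    responses = []
    for k in range(4):
        w_k = torch.rot90(weight, k, dims=(2, 3))  # rotate the spatial kernel
        responses.append(F.conv2d(x, w_k, padding=1))
    return torch.cat(responses, dim=1)

x = torch.randn(2, 1, 32, 32)  # toy input batch
w = torch.randn(8, 1, 3, 3)    # 8 canonical 3x3 filters
print(arf_conv4(x, w).shape)   # torch.Size([2, 32, 32, 32])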

Caffe Implementation

The caffe branch contains:

  • the official caffe implementation of ORN (alpha version supports 1x1/3x3 ARFs with 4/8 orientation channels only).
  • the MNIST-Variants demo.

Please follow the instructions below to install it and run the experiment demo.

Prerequisites

  • Linux (tested on Ubuntu 14.04 LTS)
  • NVIDIA GPU + CUDA + cuDNN (CPU mode and CUDA-without-cuDNN mode are also available, but significantly slower)
  • Caffe

Getting started

  1. install the dependency (required by the demo code):

  2. clone the caffe branch:

    # git version must be greater than 1.9.10
    git clone https://github.com/ZhouYanzhao/ORN.git -b caffe --single-branch ORN.caffe
    cd ORN.caffe
    export DIR=$(pwd)
  3. install ORN:

    cd $DIR
    # modify Makefile.config first
    # compile ORN.caffe
    make clean && make -j"$(nproc)" all
  4. run the MNIST-Variants demo:

    cd $DIR/examples/mnist
    bash get_mnist.sh
    # train ORN & CNN on MNIST-rot
    bash train.sh

Note

Due to implementation differences,

  • upgrading a Conv layer to an ORConv layer is done by adding an orn_param
  • the num_output of an ORConv layer should be multiplied by the nOrientation of its ARFs

Example:

layer {
  type: "Convolution"
  name: "ORConv" bottom: "Data" top: "ORConv"
  # add this line to replace regular filters with ARFs
  orn_param {orientations: 8}
  param { lr_mult: 1 decay_mult: 2}
  convolution_param {
    # this means 10 ARF feature maps
    num_output: 80
    kernel_size: 3
    stride: 1
    pad: 0
    weight_filler { type: "msra"}
    bias_filler { type: "constant" value: 0}
  }
}

Check the MNIST demo prototxt (and its visualization) for more details.
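
As a concrete restatement of that bookkeeping rule, here is a small hypothetical Python helper (not part of the repository) for computing the num_output value of an ORConv layer:

def orconv_num_output(n_arf_maps, n_orientations=8):
    # num_output = number of ARF feature maps x number of orientation channels;
    # the alpha version supports 4 or 8 orientation channels only.
    assert n_orientations in (4, 8)
    return n_arf_maps * n_orientations

print(orconv_num_output(10, 8))  # 80, matching the example prototxt above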

Citation

If you use the code in your research, please cite:

@INPROCEEDINGS{Zhou2017ORN,
    author = {Zhou, Yanzhao and Ye, Qixiang and Qiu, Qiang and Jiao, Jianbin},
    title = {Oriented Response Networks},
    booktitle = {CVPR},
    year = {2017}
}