Overview

Automatic Augmentation Zoo

An integration of several popular automatic augmentation methods, including OHL (Online Hyper-parameter Learning for Auto-Augmentation Strategy) and AWS (Improving Auto-Augment via Augmentation-Wise Weight Sharing) by SenseTime Research.

We post updates regularly; star 🌟 or watch 👓 this repository to stay up to date.

Introduction

This repository provides the official implementations of OHL and AWS, and will also integrate other popular auto-augmentation methods (such as AutoAugment, Fast AutoAugment, and Adversarial AutoAugment) in pure PyTorch. We use torch.distributed for distributed training. Model checkpoints will be uploaded to Google Drive or OneDrive soon.
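For orientation, here is a minimal, assumed sketch of the per-process torch.distributed setup that such a launcher performs (init_distributed is a hypothetical helper, not this repo's API):

import torch
import torch.distributed as dist

def init_distributed(backend='nccl'):
    # env:// reads MASTER_ADDR, MASTER_PORT, RANK and WORLD_SIZE from the
    # environment; the config's dist.kwargs (node0_addr / node0_port) are
    # presumably resolved into these by the launch script.
    dist.init_process_group(backend=backend, init_method='env://')
    rank, world_size = dist.get_rank(), dist.get_world_size()
    torch.cuda.set_device(rank % torch.cuda.device_count())
    return rank, world_size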

Dependencies

We recommend running experiments with:

  • python 3.6.3
  • pytorch 1.1.0, torchvision 0.2.1

All dependencies are listed in requirements.txt; install them with pip install -r requirements.txt.
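For example, to pin the recommended versions explicitly:

pip install torch==1.1.0 torchvision==0.2.1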

Running

  1. Create a directory for your experiment.
cd /path/to/this/repo
mkdir -p exp/aws_search1
  2. Copy the configurations into your workspace.
cp scripts/search.sh configs/aws.yaml exp/aws_search1
cd exp/aws_search1
  3. Start searching.
# usage: sh ./search.sh <job_name> <num_gpus>
sh ./search.sh Test 8

An example YAML config (aws.yaml):

version: 0.1.0

dist:
    type: torch
    kwargs:
        node0_addr: auto
        node0_port: auto
        mp_start_method: fork   # fork or spawn; spawn would be too slow for the DataLoader

pipeline:
    type: aws
    common_kwargs:
        dist_training: &dist_training False
#        job_name:         [will be assigned at runtime]
#        exp_root:         [will be assigned at runtime]
#        meta_tb_lg_root:  [will be assigned at runtime]

        data:
            type: cifar100               # case-insensitive (converted to lower case at runtime)
#            dataset_root: /path/to/dataset/root   # default: ~/datasets/[type]
            train_set_size: 40000
            val_set_size: 10000
            batch_size: 256
            dist_training: *dist_training
            num_workers: 3
            cutout: True
            cutlen: 16

        model_grad_clip: 3.0
        model:
            type: WRN
            kwargs:
#                num_classes: [will be assigned at runtime]
                bn_mom: 0.5

        agent:
            type: ppo           # ppo or REINFORCE
            kwargs:
                initial_baseline_ratio: 0
                baseline_mom: 0.9
                clip_epsilon: 0.2
                max_training_times: 5
                early_stopping_kl: 0.002
                entropy_bonus: 0
                op_cfg:
                    type: Adam         # any type in torch.optim
                    kwargs:
#                        lr: [will be assigned at runtime] (=sc.kwargs.base_lr)
                        betas: !!python/tuple [0.5, 0.999]
                        weight_decay: 0
                sc_cfg:
                    type: Constant
                    kwargs:
                        base_lr_divisor: 8      # base_lr = warmup_lr / base_lr_divisor
                        warmup_lr: 0.1          # lr at the end of warming up
                        warmup_iters: 10      # number of warming-up iterations
                        iters: &finetune_lp 350
        
        criterion:
            type: LSCE
            kwargs:
                smooth_ratio: 0.05


    special_kwargs:
        pretrained_ckpt_path: ~ # /path/to/pretrained_ckpt.pth.tar
        pretrain_ep: &pretrain_ep 200
        pretrain_op: &sgd
            type: SGD       # any type in torch.optim
            kwargs:
#                lr: [will be assigned at runtime] (=sc.kwargs.base_lr)
                nesterov: True
                momentum: 0.9
                weight_decay: 0.0001
        pretrain_sc:
            type: Cosine
            kwargs:
                base_lr_divisor: 4      # base_lr = warmup_lr / base_lr_divisor
                warmup_lr: 0.2          # lr at the end of warming up
                warmup_divisor: 200     # warmup_epochs = epochs / warmup_divisor
                epochs: *pretrain_ep
                min_lr: &finetune_lr 0.001

        finetuned_ckpt_path: ~  # /path/to/finetuned_ckpt.pth.tar
        finetune_lp: *finetune_lp
        finetune_ep: &finetune_ep 10
        rewarded_ep: 2
        finetune_op: *sgd
        finetune_sc:
            type: Constant
            kwargs:
                base_lr: *finetune_lr
                warmup_lr: *finetune_lr
                warmup_iters: 0
                epochs: *finetune_ep

        retrain_ep: &retrain_ep 300
        retrain_op: *sgd
        retrain_sc:
            type: Cosine
            kwargs:
                base_lr_divisor: 4      # base_lr = warmup_lr / base_lr_divisor
                warmup_lr: 0.4          # lr at the end of warming up
                warmup_divisor: 200     # warmup_epochs = epochs / warmup_divisor
                epochs: *retrain_ep
                min_lr: 0
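For orientation, a minimal sketch of loading such a config (a hypothetical snippet, not the repo's entry point). Note that the !!python/tuple tag under agent.op_cfg is a Python-specific YAML extension, so yaml.safe_load rejects it; PyYAML's unsafe loader (5.1+) handles it:

import yaml

with open('aws.yaml') as f:
    cfg = yaml.load(f, Loader=yaml.UnsafeLoader)

# Anchors and aliases (&sgd / *sgd, &finetune_lp / *finetune_lp) are
# resolved at load time; aliased nodes refer to the same Python object.
sk = cfg['pipeline']['special_kwargs']
assert sk['retrain_op'] is sk['pretrain_op']
print(cfg['pipeline']['common_kwargs']['data']['type'])  # cifar100

The criterion type LSCE is presumably label-smoothed cross-entropy, with smooth_ratio as the smoothing mass. A minimal sketch of that standard loss, assuming a uniform smoothing distribution (the repo's exact variant may differ):

import torch.nn as nn
import torch.nn.functional as F

class LSCE(nn.Module):
    def __init__(self, smooth_ratio=0.05):
        super().__init__()
        self.smooth_ratio = smooth_ratio

    def forward(self, logits, target):
        log_probs = F.log_softmax(logits, dim=-1)
        # Negative log-likelihood of the true class ...
        nll = -log_probs.gather(-1, target.unsqueeze(1)).squeeze(1)
        # ... mixed with the uniform distribution over all classes.
        uniform = -log_probs.mean(dim=-1)
        return ((1 - self.smooth_ratio) * nll + self.smooth_ratio * uniform).mean()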

Citation

If you use this code in your research, please consider citing our papers (OHL and AWS).

@inproceedings{lin2019online,
  title={Online Hyper-parameter Learning for Auto-Augmentation Strategy},
  author={Lin, Chen and Guo, Minghao and Li, Chuming and Yuan, Xin and Wu, Wei and Yan, Junjie and Lin, Dahua and Ouyang, Wanli},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={6579--6588},
  year={2019}
}

@article{tian2020improving,
  title={Improving Auto-Augment via Augmentation-Wise Weight Sharing},
  author={Tian, Keyu and Lin, Chen and Sun, Ming and Zhou, Luping and Yan, Junjie and Ouyang, Wanli},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

Contact for Issues

References & Opensources
