This is the official implementation of TrivialAugment and a mini-library for the application of multiple image augmentation strategies including RandAugment and TrivialAugment.

Overview

Trivial Augment

This is the official implementation of TrivialAugment (https://arxiv.org/abs/2103.10158), as was used for the paper. TrivialAugment is a super simple, but state-of-the-art performing, augmentation algorithm.

We distribute this implementation with two main use cases in mind: either you only use our (re-)implementations of practical augmentation methods, or you start off with our full codebase.

Use TrivialAugment and Other Methods in Your Own Codebase

In this case we recommend simply copying the file aug_lib.py into your codebase. You can then instantiate the augmenters TrivialAugment, RandAugment and UniAugment like this:

augmenter = aug_lib.TrivialAugment()

And simply use them on a PIL image img:

aug_img = augmenter(img)

This format also happens to be compatible with torchvision.transforms. If you do not have Pillow or numpy installed, install them with pip install Pillow numpy. Generally, a good place to apply the augmenter is right after you retrieve an image from the dataset, before any custom augmentations.
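
If your input pipeline is built on torchvision, one way to slot the augmenter in is as a callable inside transforms.Compose, placed before tensor conversion. The snippet below is a minimal sketch; the CIFAR-10 dataset and the surrounding pipeline are illustrative choices, not part of this repository:

import aug_lib
from torchvision import transforms, datasets

transform = transforms.Compose([
    aug_lib.TrivialAugment(),  # augments the PIL image before it is turned into a tensor
    transforms.ToTensor(),
])
train_set = datasets.CIFAR10(root='data', train=True, download=True, transform=transform)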

The default augmentation space is fixed_standard, that is, without AutoAugment's posterization bug and using the set of augmentations from RandAugment. This is the search space we used for all experiments that do not mention another augmentation space. You can change the augmentation space, though, with aug_lib.set_augmentation_space. For example, this call

aug_lib.set_augmentation_space('fixed_custom',2,['cutout'])

will change the augmentation space to only ever apply cutout with a large width, or nothing. The 2 specifies into how many strength levels the strength ranges of the augmentation space are divided. If an augmentation space includes sample_pairing, you need to specify a set of images with which to pair before each step: aug_lib.blend_images = [LIST OF PIL IMAGES].
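
For example, a custom space that combines cutout and sample_pairing could be configured as below; the augmentation name strings and the image paths are assumptions for illustration, based on the call shown above:

import aug_lib
from PIL import Image

aug_lib.set_augmentation_space('fixed_custom', 2, ['cutout', 'sample_pairing'])  # name strings assumed
aug_lib.blend_images = [Image.open(p).convert('RGB') for p in ('a.png', 'b.png')]  # refresh this list before each step
augmenter = aug_lib.TrivialAugment()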

Our recommendation is to use the default fixed_standard search space for very cheap setups, like Wide-ResNet-40-2, and to use wide_standard for all other setups by calling aug_lib.set_augmentation_space('wide_standard',31) before the start of training.

Use Our Full Codebase

Clone this directory and cd into it.

git clone https://github.com/automl/trivialaugment.git
cd trivialaugment

Install a PyTorch version with GPU support that fits your setup, as our implementation only supports setups with at least one CUDA device, and install our requirements:

pip install -r requirements.txt
# Install a pytorch version, in many setups this has to be done manually, see pytorch.org

Now you should be ready to go. Start a training like so:

python -m TrivialAugment.train -c confs/wresnet40x2_cifar100_b128_maxlr.1_ta_fixedsesp_nowarmup_200epochs.yaml --dataroot data --tag EXPERIMENT_NAME

For the concrete configs of experiments from the paper, see the comments in the paper's LaTeX code around the number you want to reproduce. For logs and metrics, use TensorBoard with the logs directory, or use our aggregate_results.py script to view data from the TensorBoard logs on the command line.
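
For example, assuming the training run wrote its event files under a logs directory (the exact path is an assumption), the curves can be inspected with:

tensorboard --logdir logs
# alternatively, view the same logs on the command line with aggregate_results.py (see the script for its arguments)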

Confidence Intervals

Since confidence intervals are rarely reported in the current literature, we share our implementation in evaluation_tools.py.
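
As a generic illustration of the underlying idea (this is not the implementation in evaluation_tools.py), a Student-t confidence interval over final accuracies from repeated runs can be computed like this:

import numpy as np
from scipy import stats

accs = np.array([76.1, 75.8, 76.4, 75.9])  # placeholder accuracies from repeated runs with different seeds
mean = accs.mean()
half_width = stats.t.ppf(0.975, df=len(accs) - 1) * accs.std(ddof=1) / np.sqrt(len(accs))
print(f"{mean:.2f} +/- {half_width:.2f}")  # 95% confidence interval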

This repository uses code from https://github.com/ildoonet/pytorch-randaugment and from https://github.com/tensorflow/models/tree/master/research/autoaugment.
