Uncertainty Baselines

High-quality implementations of standard and SOTA methods on a variety of tasks.

Overview

The goal of Uncertainty Baselines is to provide a template for researchers to build on. The baselines can be a starting point for new ideas and applications, and a common reference when communicating with other uncertainty and robustness researchers. This is done in three ways:

  1. Provide high-quality implementations of standard and state-of-the-art methods on standard tasks.
  2. Have minimal dependencies on other files in the codebase. Baselines should be easily forkable without relying on other baselines and generic modules.
  3. Prescribe best practices for uncertainty and robustness benchmarking.

Motivation. There are many uncertainty and robustness implementations across GitHub. However, they are typically one-off experiments for a specific paper (many papers don't even have code). There are no clear examples that uncertainty researchers can build on to quickly prototype their work, so everyone must implement their own baseline. Even on standard tasks, every project differs slightly in its experiment setup, whether in architectures, hyperparameters, or data preprocessing. This makes it difficult to compare new methods properly against baselines.

Installation

To install the latest development version, run

pip install "git+https://github.com/google/uncertainty-baselines.git#egg=uncertainty_baselines"

There is not yet a stable version (nor an official release of this library). All APIs are subject to change. Installing uncertainty_baselines does not automatically install any backend. For TensorFlow, you will need to install TensorFlow (tensorflow or tf-nightly), TensorFlow Addons (tensorflow-addons or tfa-nightly), and TensorBoard (tensorboard or tb-nightly). See setup.py for the extra dependencies one can install.

Usage

Baselines

The baselines/ directory includes all the baselines, organized by their training dataset. For example, baselines/cifar/deterministic.py is a Wide ResNet 28-10 obtaining 96.0% test accuracy on CIFAR-10.

Launching with TPUs. You often need TPUs to reproduce baselines. There are three options:

  1. Colab. Colab offers free TPUs. This is the most convenient and budget-friendly option. You can experiment with a baseline by copying its script and running it from scratch. This works well for simple experimentation. However, be careful relying on Colab long-term: TPU access isn't guaranteed, and Colab can only go so far for managing multiple long experiments.

  2. Google Cloud. This is the most flexible option. First, you'll need to create a virtual machine instance (details here).

    Here's an example to launch the BatchEnsemble baseline on CIFAR-10. We assume a few environment variables which are set up with the cloud TPU (details here).

    export BUCKET=gs://bucket-name
    export TPU_NAME=ub-cifar-batchensemble
    export DATA_DIR=$BUCKET/tensorflow_datasets
    export OUTPUT_DIR=$BUCKET/model
    
    python baselines/cifar/batchensemble.py \
        --tpu=$TPU_NAME \
        --data_dir=$DATA_DIR \
        --output_dir=$OUTPUT_DIR

    Note that the TPU's accelerator type must match the number of cores used by the baseline (the num_cores flag). In this example, BatchEnsemble defaults to num_cores=8, so the TPU must be set up with accelerator_type=v3-8.

  3. Change the flags. If you don't have TPUs, you can adjust the flags instead: for example, switch from 8 TPU cores to 8 GPUs, or reduce the number of cores used to train the baseline.

    python baselines/cifar/batchensemble.py \
        --data_dir=/tmp/tensorflow_datasets \
        --output_dir=/tmp/model \
        --use_gpu=True \
        --num_cores=8

    Results may be similar, but ultimately all bets are off. GPU vs. TPU may not make much of a difference in practice, especially if you use the same numerical precision. However, changing the number of cores matters a lot: the total batch size during each training step is often determined by num_cores, so be careful! The sketch below illustrates this.
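    To make the batch-size point concrete, here is a back-of-the-envelope sketch. The per_core_batch_size name and its default value are assumptions for illustration; check the baseline's own flags.

    # Hypothetical sketch: how the effective batch size typically scales with
    # the number of cores. Flag names/defaults here are assumptions.
    per_core_batch_size = 64     # assumed per-core batch size
    num_cores = 8                # value passed via --num_cores
    effective_batch_size = per_core_batch_size * num_cores
    print(effective_batch_size)  # 512 examples consumed per training step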

Datasets

The ub.datasets module consists of datasets following the TensorFlow Datasets API. They add minimal logic such as default data preprocessing. Note: in an IPython/Colab notebook, you may need to enable TF eager execution via tf.compat.v1.enable_eager_execution().

import uncertainty_baselines as ub

# Load CIFAR-10, holding out 10% for validation.
dataset_builder = ub.datasets.Cifar10Dataset(split='train',
                                             validation_percent=0.1)
train_dataset = dataset_builder.load(batch_size=FLAGS.batch_size)
for batch in train_dataset:
  # Apply code over batches of the data.
  pass
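For reference, here is a minimal sketch of inspecting one batch; the 'features' and 'labels' keys are assumptions and may differ across datasets.

# Peek at a single batch (the dict keys below are assumptions).
for batch in train_dataset.take(1):
  print(batch['features'].shape)  # e.g. (batch_size, 32, 32, 3) for CIFAR-10
  print(batch['labels'].shape)    # e.g. (batch_size,)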

You can also use get to instantiate datasets from strings (e.g., command-line flags).

dataset_builder = ub.datasets.get(dataset_name, split=split, **dataset_kwargs)

To use the datasets in Jax and PyTorch:

for batch in tfds.as_numpy(ds):
  train_step(batch)

Note that tfds.as_numpy calls tensor.numpy(), which makes an extra copy of the data. Calling tensor._numpy() avoids the copy:

for batch in iter(ds):
  train_step(jax.tree_map(lambda y: y._numpy(), batch))
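For concreteness, here is a minimal placeholder for train_step on the JAX side; the body is purely illustrative and the 'features' key is an assumption.

import jax.numpy as jnp

def train_step(batch):
  # Placeholder: a real step would compute gradients and update parameters.
  features = jnp.asarray(batch['features'])  # key is an assumption
  return features.mean()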

Models

The ub.models module consists of models following the tf.keras.Model API.

import uncertainty_baselines as ub

model = ub.models.wide_resnet(input_shape=(32, 32, 3),
                              depth=28,
                              width_multiplier=10,
                              num_classes=10,
                              l2=1e-4)
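As a quick sanity check (a sketch, not documented library usage), the returned tf.keras.Model can be called on a dummy batch:

import tensorflow as tf

dummy_images = tf.zeros([8, 32, 32, 3])       # CIFAR-sized inputs
logits = model(dummy_images, training=False)  # expected shape: (8, 10)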

You can also use get to instantiate models from strings (e.g., command-line flags).

model = ub.models.get(model_name, batch_size=FLAGS.batch_size)

Metrics

We define the metrics used across datasets below. All results are reported to roughly 3 significant digits and averaged over 10 runs. A short code sketch of the predictive metrics follows the list.

  1. # Parameters. The number of parameters in the model used to make predictions after training.

  2. Test Accuracy. Accuracy over the test set. For a dataset of N input-output pairs (x_n, y_n), where the label y_n takes on 1 of K values, the accuracy is

    \frac{1}{N} \sum_{n=1}^N \mathbb{1}[\operatorname{argmax}_y \, p(y | x_n) = y_n],

    where \mathbb{1} is the indicator function that is 1 when the model's predicted class is equal to the label and 0 otherwise.

  3. Test Cal. Error. Expected calibration error (ECE) over the test set (Naeini et al., 2015). ECE discretizes the probability interval [0, 1] under equally spaced bins and assigns each predicted probability to the bin that encompasses it. The calibration error is the difference between the fraction of predictions in the bin that are correct (accuracy) and the mean of the probabilities in the bin (confidence). The expected calibration error averages across bins.

    For a dataset of N input-output pairs (x_n, y_n), where the label y_n takes on 1 of K values, ECE computes a weighted average

    \sum_{b=1}^B \frac{n_b}{N} \, | \text{acc}(b) - \text{conf}(b) |,

    where B is the number of bins, n_b is the number of predictions in bin b, and acc(b) and conf(b) are the accuracy and confidence of bin b, respectively.

  4. Test NLL. Negative log-likelihood over the test set (measured in nats). For a dataset of N input-output pairs (x_n, y_n), the negative log-likelihood is

    -\frac{1}{N} \sum_{n=1}^N \log p(y_n | x_n).

    It is equivalent, up to a constant, to the KL divergence from the true data distribution to the model, and therefore captures the overall goodness of fit to the true distribution (Murphy, 2012). It can also be interpreted as the number of bits (here, nats) needed to describe the data (Grunwald, 2004).

  5. Train/Test Runtime. Training runtime is the total wall-clock time to train the model, including any intermediate test set evaluations. Test runtime refers to the time it takes to run a forward pass on the GPU/TPU, i.e., the duration for which the device is not idle. Note that test runtime does not include time on the coordinator: this makes comparisons across baselines more precise, because including the coordinator adds overhead in GPU/TPU scheduling and data fetching, producing high-variance results.
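As referenced above, here is a minimal NumPy sketch of the accuracy, ECE, and NLL formulas. It is illustrative only and is not the code used to produce the reported numbers; probs is assumed to hold predicted class probabilities with shape [N, K] and labels to hold integer labels with shape [N].

import numpy as np

def accuracy(probs, labels):
  # Fraction of examples whose argmax prediction matches the label.
  return np.mean(np.argmax(probs, axis=-1) == labels)

def expected_calibration_error(probs, labels, num_bins=15):
  # ECE with equally spaced bins over [0, 1]; num_bins is an assumption.
  confidences = np.max(probs, axis=-1)
  correct = (np.argmax(probs, axis=-1) == labels).astype(np.float64)
  edges = np.linspace(0.0, 1.0, num_bins + 1)
  ece = 0.0
  for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (confidences > lo) & (confidences <= hi)
    if in_bin.any():
      # Weight |acc(b) - conf(b)| by the fraction of predictions in bin b.
      ece += in_bin.mean() * abs(correct[in_bin].mean()
                                 - confidences[in_bin].mean())
  return ece

def negative_log_likelihood(probs, labels):
  # Average negative log-probability of the true label, in nats.
  eps = 1e-12  # guard against log(0)
  return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))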

Viewing metrics. Uncertainty Baselines writes TensorFlow summaries to the model_dir which can be consumed by TensorBoard. This includes the TensorBoard hyperparameters plugin, which can be used to analyze hyperparameter tuning sweeps.

If you wish to upload to the PUBLICLY READABLE tensorboard.dev, use:

tensorboard dev upload --logdir MODEL_DIR --plugins "scalars,graphs,hparams" --name "My experiment" --description "My experiment details"

References

If you'd like to cite Uncertainty Baselines, use the following BibTeX entry.

Z. Nado, N. Band, M. Collier, J. Djolonga, M. Dusenberry, S. Farquhar, A. Filos, M. Havasi, R. Jenatton, G. Jerfel, J. Liu, Z. Mariet, J. Nixon, S. Padhy, J. Ren, T. Rudner, Y. Wen, F. Wenzel, K. Murphy, D. Sculley, B. Lakshminarayanan, J. Snoek, Y. Gal, and D. Tran. Uncertainty Baselines: Benchmarks for uncertainty & robustness in deep learning, arXiv preprint arXiv:2106.04015, 2021.

@article{nado2021uncertainty,
  author = {Zachary Nado and Neil Band and Mark Collier and Josip Djolonga and Michael Dusenberry and Sebastian Farquhar and Angelos Filos and Marton Havasi and Rodolphe Jenatton and Ghassen Jerfel and Jeremiah Liu and Zelda Mariet and Jeremy Nixon and Shreyas Padhy and Jie Ren and Tim Rudner and Yeming Wen and Florian Wenzel and Kevin Murphy and D. Sculley and Balaji Lakshminarayanan and Jasper Snoek and Yarin Gal and Dustin Tran},
  title = {{Uncertainty Baselines}:  Benchmarks for Uncertainty \& Robustness in Deep Learning},
  journal = {arXiv preprint arXiv:2106.04015},
  year = {2021},
}

Papers using Uncertainty Baselines

The following papers have used code from Uncertainty Baselines:

  1. A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection
  2. BatchEnsemble: An Alternative Approach to Efficient Ensembles and Lifelong Learning
  3. DEUP: Direct Epistemic Uncertainty Prediction
  4. Distilling Ensembles Improves Uncertainty Estimates
  5. Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors
  6. Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit
  7. Hyperparameter Ensembles for Robustness and Uncertainty Quantification
  8. Measuring Calibration in Deep Learning
  9. Measuring and Improving Model-Moderator Collaboration using Uncertainty Estimation
  10. Neural networks with late-phase weights
  11. On the Practicality of Deterministic Epistemic Uncertainty
  12. Prediction-Time Batch Normalization for Robustness under Covariate Shift
  13. Refining the variational posterior through iterative optimization
  14. Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks
  15. Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness
  16. Training independent subnetworks for robust prediction

Contributing

Adding a Baseline

  1. Write a script that loads the fixed training dataset and model. Typically, this is forked from other baselines.
  2. After tuning, set the default flag values to the best hyperparameters.
  3. Add the baseline's performance to the table of results in the corresponding README.md.

Adding a Dataset

  1. Add the bibtex reference to references.md.
  2. Add the dataset definition to the datasets/ dir. Every file should have a subclass of datasets.base.BaseDataset, which at a minimum requires implementing a constructor, a tfds.core.DatasetBuilder, and _create_process_example_fn (see the sketch after this list).
  3. Add a test that at a minimum constructs the dataset and checks the shapes of elements.
  4. Add the dataset to datasets/datasets.py for easy access.
  5. Add the dataset class to datasets/__init__.py.
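For orientation, here is a hypothetical skeleton of step 2. The exact constructor arguments of datasets.base.BaseDataset are assumptions and should be checked against the existing dataset files; the dataset name and feature keys are placeholders.

import tensorflow_datasets as tfds
from uncertainty_baselines.datasets import base

class MyDataset(base.BaseDataset):
  """Hypothetical dataset; constructor argument names are assumptions."""

  def __init__(self, split, **kwargs):
    super().__init__(
        name='my_dataset',                           # assumed argument
        dataset_builder=tfds.builder('my_dataset'),  # the tfds.core.DatasetBuilder
        split=split,
        **kwargs)

  def _create_process_example_fn(self):
    def _example_parser(example):
      # Map raw TFDS features to the dict the baselines consume.
      return {'features': example['image'], 'labels': example['label']}
    return _example_parser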

For an example of adding a dataset, see this pull request.

Adding a Model

  1. Add the bibtex reference to references.md.

  2. Add the model definition to the models/ dir. Every file should have a create_model function with the following signature (an illustrative example implementation follows this list):

    def create_model(
        batch_size: int,
        ...
        **unused_kwargs: Dict[str, Any]) -> tf.keras.models.Model:
  3. Add a test that at a minimum constructs the model and does a forward pass.

  4. Add the model to models/models.py for easy access.

  5. Add the create_model function to models/__init__.py.
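For orientation, here is an illustrative implementation of the signature above; the body and the num_classes argument are placeholders, not a prescribed pattern.

from typing import Any, Dict

import tensorflow as tf

def create_model(
    batch_size: int,
    num_classes: int = 10,  # illustrative extra argument
    **unused_kwargs: Dict[str, Any]) -> tf.keras.models.Model:
  """A minimal, hypothetical model to illustrate the expected interface."""
  inputs = tf.keras.layers.Input(shape=(32, 32, 3), batch_size=batch_size)
  x = tf.keras.layers.Flatten()(inputs)
  logits = tf.keras.layers.Dense(num_classes)(x)
  return tf.keras.Model(inputs=inputs, outputs=logits, name='my_model')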
