Coresets via Bilevel Optimization

This is the reference implementation for "Coresets via Bilevel Optimization for Continual Learning and Streaming" https://arxiv.org/pdf/2006.03875.pdf.

This repository also contains the implementation of the selection via Nyström proxy used for selecting batches in "Semi-supervised Batch Active Learning via Bilevel Optimization" https://arxiv.org/pdf/2010.09654. Selection via the Nyström proxy supports data augmentation and is faster for larger coresets; it hence supersedes the representer proxy in data summarization scenarios.

Overview

To get started with the library, check out demo.ipynb (also runnable in Colab), which shows how to build coresets for a toy regression problem and for MNIST classification. The following snippet outlines the general usage:

import bilevel_coreset
import loss_utils
import numpy as np

x, y = load_data()

# define proxy kernel function
linear_kernel_fn = lambda x1, x2: np.dot(x1, x2.T)

coreset_size = 10

coreset_constructor = bilevel_coreset.BilevelCoreset(outer_loss_fn=loss_utils.cross_entropy,
                                                    inner_loss_fn=loss_utils.cross_entropy,
                                                    out_dim=y.shape[1])
coreset_inds, coreset_weights = coreset_constructor.build_with_representer_proxy_batch(x, y, 
                                                    coreset_size, linear_kernel_fn, inner_reg=1e-3)
x_coreset, y_coreset = x[coreset_inds], y[coreset_inds]

Note: if you are planning to use the library on your problem, the most important hyperparameter to tune is inner_reg, the regularizer of the inner objective in the representer proxy - try the grid [10^-2, 10^-3, 10^-4, 10^-5, 10^-6].
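For instance, a minimal sketch of that grid search (reusing x, y, coreset_size and linear_kernel_fn from the snippet above, and assuming a hypothetical evaluate_coreset helper that trains a model on the weighted coreset and returns accuracy on a held-out x_val, y_val) could look like:

import numpy as np
import bilevel_coreset
import loss_utils

best_acc, best_inner_reg = -np.inf, None
for inner_reg in [1e-2, 1e-3, 1e-4, 1e-5, 1e-6]:
    constructor = bilevel_coreset.BilevelCoreset(outer_loss_fn=loss_utils.cross_entropy,
                                                 inner_loss_fn=loss_utils.cross_entropy,
                                                 out_dim=y.shape[1])
    inds, weights = constructor.build_with_representer_proxy_batch(x, y, coreset_size,
                                                 linear_kernel_fn, inner_reg=inner_reg)
    # evaluate_coreset is hypothetical: train on the weighted coreset, report validation accuracy
    acc = evaluate_coreset(x[inds], y[inds], weights, x_val, y_val)
    if acc > best_acc:
        best_acc, best_inner_reg = acc, inner_reg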

Requirements

Python 3 is required. To install the required dependencies, run:

pip install -r requirements.txt

If you are planning to use the NTK proxy, consider installing the GPU version of JAX (see the JAX installation instructions). If you would like to run the experiments, add the project root to your PYTHONPATH environment variable.
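As a hedged sketch (assuming the neural-tangents package is installed; the repository's own kernel construction may differ), a fully-connected NTK kernel function can be defined and passed in place of linear_kernel_fn in the snippet above:

import numpy as np
from neural_tangents import stax

# Infinite-width fully-connected network; kernel_fn computes its NTK in closed form.
init_fn, apply_fn, kernel_fn = stax.serial(stax.Dense(512), stax.Relu(), stax.Dense(10))

# Drop-in replacement for the proxy kernel (inputs must be flattened to 2D float arrays).
ntk_fn = lambda x1, x2: np.array(kernel_fn(x1, x2, 'ntk'))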

Data Summarization

Change dir to data_summarization. For running and plotting the MNIST summarization experiment, adjust the globals in runner.py to your setup and run:

python runner.py --exp cnn_mnist
python plotter.py --exp cnn_mnist

Similarly, for the CIFAR-10 summarization experiment with a version of ResNet-18, run:

python runner.py --exp resnet_cifar
python plotter.py --exp resnet_cifar

For running the Kernel Ridge Regression experiment, you first need to generate the kernel with python generate_cntk.py. Note: this implementation differs from the paper in the kernel choice in generate_kernel(); for details on the original kernel, please refer to the paper. Once you have generated the kernel, produce the results by running:

python runner.py --exp krr_cifar
python plotter.py --exp krr_cifar 
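As a reminder of what this experiment computes, kernel ridge regression with a precomputed kernel reduces to solving a single regularized linear system; a minimal sketch (purely illustrative, not the repository's implementation, assuming a precomputed train kernel K, one-hot labels y and a test-train kernel K_test) is:

import numpy as np

def krr_predict(K, y, K_test, lam=1e-3):
    # Solve (K + lam * I) alpha = y, then predict with K_test @ alpha.
    n = K.shape[0]
    alpha = np.linalg.solve(K + lam * np.eye(n), y)
    return K_test @ alpha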

Continual Learning and Streaming

We showcase the usage of our coreset construction in continual learning and streaming with memory replay. The buffer regularizer beta is tuned individually for each method; we provide the best betas from [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0] for each method in cl_results/ and streaming_results/.
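As a rough, hedged sketch of where beta enters (the actual training loop lives in cl_streaming/ and may differ in details), beta weights the loss on the replay buffer relative to the loss on the current batch:

import torch.nn.functional as F

def replay_step(model, optimizer, x_batch, y_batch, x_buf, y_buf, w_buf, beta):
    # Illustrative only: loss on the current stream batch plus a beta-weighted
    # loss on the (weighted) replay buffer.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_batch), y_batch)
    buf_losses = F.cross_entropy(model(x_buf), y_buf, reduction='none')
    loss = loss + beta * (w_buf * buf_losses).mean()
    loss.backward()
    optimizer.step()
    return loss.item()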

Running the Experiments

Change dir to cl_streaming. After this, you can run individual experiments, e.g.:

python cl.py --buffer_size 100 --dataset splitmnist --seed 0 --method coreset --beta 100.0

You can also run the continual learning and streaming experiments with grid search over beta on datasets derived from MNIST by adjusting the globals in runner.py to your setup and running:

python runner.py --exp cl
python runner.py --exp streaming
python runner.py --exp imbalanced_streaming

The table of results can be displayed by running python process_results.py with the corresponding --exp argument. For example, python process_results.py --exp imbalanced_streaming produces:

Method \ Dataset    splitmnistimbalanced
reservoir           80.60 +- 4.36
cbrs                89.71 +- 1.31
coreset             92.30 +- 0.23

The experiments derived from CIFAR-10 can be similarly run by:

python cifar_runner.py --exp cl
python process_results.py --exp splitcifar
python cifar_runner.py --exp imbalanced_streaming
python process_results.py --exp imbalanced_streaming_cifar

Selection via the Nyström proxy

The Nyström proxy was proposed to support data augmentations. It is also faster for larger coresets than the representer proxy. An example of running the selection on CIFAR-10 can be found in batch_active_learning/nystrom_example.py.
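For intuition, the Nyström method approximates a full kernel matrix from a small set of landmark points; a minimal sketch of that approximation (illustrating the idea behind the proxy, not the repository's selection code) is:

import numpy as np

def nystrom_approx(kernel_fn, x, landmark_inds, jitter=1e-6):
    # Low-rank Nystrom approximation: K ~= K_nm @ pinv(K_mm) @ K_nm.T,
    # built from m landmark points instead of all n points.
    x_m = x[landmark_inds]
    K_nm = kernel_fn(x, x_m)                                        # (n, m) cross-kernel
    K_mm = kernel_fn(x_m, x_m) + jitter * np.eye(len(landmark_inds))
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T                     # (n, n) approximation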

Citation

If you use the code in a publication, please cite the paper:

@article{borsos2020coresets,
      title={Coresets via Bilevel Optimization for Continual Learning and Streaming}, 
      author={Zalán Borsos and Mojmír Mutný and Andreas Krause},
      year={2020},
      journal={arXiv preprint arXiv:2006.03875}
}