PyContinual (An Easy and Extendible Framework for Continual Learning)

Overview


Easy to Use

You can simply change the baseline, backbone and task, and you are ready to go. Here is an example:

	python run.py \  
	--bert_model 'bert-base-uncased' \  
	--backbone bert_adapter \ #or other backbones (bert, w2v...)
	--baseline ctr \ #or other available baselines (classic, ewc...)
	--task asc \ #or other available tasks/datasets (dsc, newsgroup...)
	--eval_batch_size 128 \
	--train_batch_size 32 \
	--scenario til_classification \ #or other available scenarios (dil_classification...)
	--idrandom 0 \ #which random sequence to use
	--use_predefine_args #use pre-defined arguments

Easy to Extend

You only need to write your own ./dataloader, ./networks and ./approaches. You are ready to go!
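For example, a new approach usually exposes a train/eval interface that the pipeline calls once per task. The following is a minimal, hypothetical sketch; the class and method names are assumptions for illustration, not the actual base class in ./approaches:

	# Hypothetical sketch of a custom approach; the real interface in
	# ./approaches may use different class/method names.
	import torch

	class MyApproach:
	    def __init__(self, model, lr=3e-5):
	        self.model = model
	        self.optimizer = torch.optim.Adam(model.parameters(), lr=lr)
	        self.criterion = torch.nn.CrossEntropyLoss()

	    def train(self, task_id, train_loader, epochs=1):
	        # Called once per task in the continual-learning sequence.
	        self.model.train()
	        for _ in range(epochs):
	            for inputs, targets in train_loader:
	                self.optimizer.zero_grad()
	                logits = self.model(inputs, task_id)  # assumed task-aware forward pass
	                loss = self.criterion(logits, targets)
	                loss.backward()
	                self.optimizer.step()

	    def eval(self, task_id, test_loader):
	        # Accuracy on an earlier task after training on later ones is
	        # how forgetting is usually measured.
	        self.model.eval()
	        correct, total = 0, 0
	        with torch.no_grad():
	            for inputs, targets in test_loader:
	                logits = self.model(inputs, task_id)
	                correct += (logits.argmax(dim=-1) == targets).sum().item()
	                total += targets.size(0)
	        return correct / total

A dataloader added under ./dataloader and a backbone added under ./networks would then be wired into the same per-task loop.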

Introduction

Continual learning has drawn increasing attention in recent years. This repo contains PyTorch implementations of a set of (improved) SoTA methods that share the same training and evaluation pipeline.

This repository contains the code for the papers listed in the Reference section below.

Features

  • Datasets: It currently supports Language Datasets (Document/Sentence/Aspect Sentiment Classification, Natural Language Inference, Topic Classification) and Image Datasets (CelebA, CIFAR10, CIFAR100, FashionMNIST, F-EMNIST, MNIST, VLCS)
  • Scenarios: It currently supports Task Incremental Learning and Domain Incremental Learning
  • Training Modes: It currently supports single-GPU training. It can also be adapted to multi-node distributed training and mixed-precision training (see the sketch after this list).
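A standard PyTorch mixed-precision loop is the kind of change the last item refers to. The sketch below is a generic torch.cuda.amp pattern, not code taken from this repo:

	# Minimal sketch of mixed-precision training with torch.cuda.amp;
	# a generic PyTorch pattern, not this repository's training code.
	import torch

	def train_step_amp(model, batch, optimizer, scaler, device="cuda"):
	    inputs, targets = (t.to(device) for t in batch)
	    optimizer.zero_grad()
	    with torch.cuda.amp.autocast():  # run the forward pass in mixed precision
	        logits = model(inputs)
	        loss = torch.nn.functional.cross_entropy(logits, targets)
	    scaler.scale(loss).backward()    # scale the loss to avoid fp16 underflow
	    scaler.step(optimizer)
	    scaler.update()
	    return loss.item()

	# usage:
	# scaler = torch.cuda.amp.GradScaler()
	# for batch in train_loader:
	#     train_step_amp(model, batch, optimizer, scaler)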

Architecture

./res: all results are saved in this folder
./dat: processed data
./data: raw data
./dataloader: dataloaders for the different datasets
./approaches: code for training
./networks: code for the network architectures
./data_seq: some reference sequences (e.g. asc_random)
./tools: code for preparing the data

Setup

  • If you want to run the existing systems, please see run_exist.md
  • If you want to expand the framework with your own model, please see run_own.md
  • If you want to see the full list of baselines and variants, please see baselines.md

Reference

If using this code, parts of it, or developments from it, please consider citing the references below.

@inproceedings{ke2021achieve,
  title={Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning},
  author={Ke, Zixuan and Liu, Bing and Ma, Nianzu and Xu, Hu and Shu, Lei},
  booktitle={NeurIPS},
  year={2021}
}

@inproceedings{ke2021contrast,
  title={CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks},
  author={Ke, Zixuan and Liu, Bing and Xu, Hu and Shu, Lei},
  booktitle={EMNLP},
  year={2021}
}

@inproceedings{ke2021adapting,
  title={Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks},
  author={Ke, Zixuan and Xu, Hu and Liu, Bing},
  booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
  pages={4746--4755},
  year={2021}
}

@inproceedings{ke2020continualmixed,
  title={Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks},
  author={Ke, Zixuan and Liu, Bing and Huang, Xingchang},
  booktitle={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

@inproceedings{ke2020continual,
  title={Continual Learning with Knowledge Transfer for Sentiment Classification},
  author={Ke, Zixuan and Liu, Bing and Wang, Hao and Shu, Lei},
  booktitle={ECML-PKDD},
  year={2020}
}

Contact

Please drop an email to Zixuan Ke, Xingchang Huang or Nianzu Ma if you have any questions regarding the code. We thank Bing Liu, Hu Xu and Lei Shu for their valuable comments and opinions.
