LightningFSL: PyTorch Lightning implementations of Few-Shot Learning models.

Overview



This repo provides a number of PyTorch Lightning implementations of FSL algorithms, including the official code of two papers:

Boosting Few-Shot Classification with View-Learnable Contrastive Learning (ICME 2021)

Rectifying the Shortcut Learning of Background for Few-Shot Learning (NeurIPS 2021)

Contents

  1. Advantages
  2. Few-shot classification Results
  3. General Guide

Advantages:

This repository is built on top of LightningCLI, which is very convenient to use once you are familiar with the tool.

  1. Enabling multi-GPU training
    • Our implementation of the FSL framework allows DistributedDataParallel (DDP) to be used for training few-shot learning models, which, to the best of our knowledge, was not available before. Previous works use DataParallel (DP) instead, which is less efficient and consumes more memory. We achieve this by modifying the DDP sampler of PyTorch, making it possible to sample few-shot learning tasks across devices. See dataset_and_process/samplers.py for details.
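The idea behind a DDP-aware episode sampler can be sketched in plain Python. This is a hypothetical simplification, not the repo's actual code (which lives in dataset_and_process/samplers.py); the class and argument names here are illustrative:

```python
import random

class DistributedEpisodeSampler:
    """Sketch of a DDP-aware episode sampler: each rank draws its own
    share of N-way K-shot tasks from a per-rank random stream."""

    def __init__(self, labels, n_way, k_shot, episodes_per_epoch,
                 num_replicas, rank, seed=0):
        self.classes = sorted(set(labels))
        # indices of the samples belonging to each class
        self.index_per_class = {
            c: [i for i, l in enumerate(labels) if l == c] for c in self.classes
        }
        self.n_way, self.k_shot = n_way, k_shot
        # split the global episode budget evenly across devices
        self.local_episodes = episodes_per_epoch // num_replicas
        self.rank, self.seed = rank, seed

    def __iter__(self):
        rng = random.Random(self.seed + self.rank)  # per-rank random stream
        for _ in range(self.local_episodes):
            episode = []
            for c in rng.sample(self.classes, self.n_way):
                episode += rng.sample(self.index_per_class[c], self.k_shot)
            yield episode  # one task: n_way * k_shot sample indices

    def __len__(self):
        return self.local_episodes
```

Passed as a batch_sampler, such an object lets each GPU build complete episodes locally, instead of splitting one episode's batch across devices as a naive DistributedSampler would.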
  2. High reimplemented accuracy
    • Our reimplementations of several FSL algorithms achieve strong performance. For example, our ResNet12 implementations of ProtoNet and the Cosine Classifier achieve 76%+ and 80%+ accuracy on the 5-way 5-shot task of miniImageNet, respectively. All results can be reproduced using the pre-defined configuration files in config/.
  3. Quick and convenient creation of new algorithms
    • Pytorch-lightning provides our codebase with a clean and modular structure. Built on top of LightningCLI, our codebase unifies the necessary basic components of FSL, making it easy to implement a brand-new algorithm. Implementing an algorithm usually requires only three short additional files: one specifying the LightningModule, one specifying the classifier head, and one specifying all configurations. For example, see the code of ProtoNet (modules/PN.py, architectures/classifier/proto_head.py) and the cosine classifier (modules/cosine_classifier.py, architectures/classifier/CC_head.py).
  4. Easy reproducibility
    • Every run saves a full copy of the YAML configuration file to the logging directory, enabling exact reproduction (reuse the saved YAML file directly instead of creating a new one).
  5. Enabling both episodic/non-episodic algorithms
    • Switch between them with a single parameter, is_meta, in the configuration file.
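For illustration, the toggle might look like this inside one of the set_config_*.py scripts. This is a hypothetical fragment: apart from is_meta, the key names are illustrative rather than the repo's exact schema:

```python
# Hypothetical configuration fragment; only is_meta is named in this README,
# the surrounding keys are illustrative.
config = {
    "is_meta": True,  # True: episodic (meta) sampling, e.g. ProtoNet;
                      # False: standard mini-batches, e.g. Cosine Classifier
    "way": 5,         # episodic parameters, only used when is_meta is True
    "shot": 5,
}
```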

Few-shot classification Results

Results on few-shot classification datasets. We report average accuracy over 2,000 randomly sampled episodes, repeated 5 times, for 1-shot and 5-shot evaluation, with 95% confidence intervals.
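The reported numbers follow the usual FSL convention of a mean over episode accuracies plus a normal-approximation 95% confidence interval, which can be sketched as below (the function name is ours, not the repo's):

```python
import numpy as np

def ci95(acc_per_episode):
    """Mean accuracy and 95% confidence half-width over episode accuracies,
    as in the tables below: mean +- 1.96 * std / sqrt(n_episodes)."""
    acc = np.asarray(acc_per_episode, dtype=float)
    half_width = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))
    return acc.mean(), half_width
```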

miniImageNet Dataset

| Model | Backbone | 5-way 1-shot | 5-way 5-shot | Pretrained model |
| --- | --- | --- | --- | --- |
| Prototypical Networks | ResNet12 | 61.19±0.40 | 76.50±0.45 | link |
| Cosine Classifier | ResNet12 | 63.89±0.44 | 80.94±0.05 | link |
| Meta-Baseline | ResNet12 | 62.65±0.65 | 79.10±0.29 | link |
| S2M2 | WRN-28-10 | 58.85±0.20 | 81.83±0.15 | link |
| S2M2+Logistic_Regression | WRN-28-10 | 62.36±0.42 | 82.01±0.24 | |
| MoCo-v2 (unsupervised) | ResNet12 | 52.03±0.33 | 72.94±0.29 | link |
| Exemplar-v2 | ResNet12 | 59.02±0.24 | 77.23±0.16 | link |
| PN+CL | ResNet12 | 63.44±0.44 | 79.42±0.06 | link |
| COSOC | ResNet12 | 69.28±0.49 | 85.16±0.42 | link |

General Guide

To understand the code correctly, it is highly recommended to first quickly go through the pytorch-lightning documentation, especially LightningCLI. It won't be a long journey, since pytorch-lightning is built on top of PyTorch.

Installation

Just run the command:

pip install -r requirements.txt

Running an implemented few-shot model

  1. Downloading Datasets:
  2. Training (Except for Meta-baseline and COSOC):
    • Choose the corresponding configuration file in config (e.g., set_config_PN.py for the PN model), set the parameter is_test to False, and set the GPU ids (multi-GPU or not), dataset directory, logging directory, and any other parameters you would like to change.
    • Modify the first line in run.sh (e.g., python config/set_config_PN.py).
    • To begin training, run the command bash run.sh
  3. Training Meta-baseline:
    • This is a two-stage algorithm: the first stage is cross-entropy (CE) loss pre-training, followed by ProtoNet fine-tuning, so two training runs are needed. The first run uses the configuration file config/set_config_meta_baseline_pretrain.py. The second uses config/set_config_meta_baseline_finetune.py, with the pre-trained model path from the first stage specified by the parameter pre_trained_path in the configuration file.
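The hand-off between the two stages might look like this in the fine-tuning configuration script. This is a hypothetical fragment: only pre_trained_path and is_test are named in this README, and the checkpoint path is a placeholder:

```python
# Hypothetical fragment of config/set_config_meta_baseline_finetune.py.
# The checkpoint path below is a placeholder for the stage-1 output.
config = {
    "pre_trained_path": "logs/meta_baseline_pretrain/checkpoint.ckpt",  # from stage 1
    "is_test": False,  # this run is still training (fine-tuning), not testing
}
```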
  4. Training COSOC:
    • For pre-training Exemplar, choose the configuration file config/set_config_MoCo.py and set the parameter is_exampler to True.
    • For running the COS algorithm, run the command python COS.py --save_dir [save_dir] --pretrained_Exemplar_path [model_path] --dataset_path [data_path]. [save_dir] specifies the saving directory for all foreground objects; [model_path] and [data_path] specify the paths of the pre-trained model and the dataset, respectively.
    • For running an FSL algorithm with COS, choose the configuration file config/set_config_COSOC.py, set the parameter data["train_dataset_params"] to the directory of the data saved by the COS algorithm, and set pre_trained_path to the directory of the pre-trained Exemplar.
  5. Testing:
    • Choose the same configuration file as for training, set the parameter is_test to True, set pre_trained_path to the path of the model checkpoint (with suffix '.ckpt'), and set other parameters (e.g., shot, batch size) as you desire.
    • Modify the first line in run.sh (e.g., python config/set_config_PN.py).
    • To begin testing, run the command bash run.sh

Creating a new few-shot algorithm

It is quite simple to implement your own algorithm. Most algorithms only require creating a new LightningModule and a classifier head. We give a brief description of the code structure here.

run.py

It is usually not needed to modify this file. The file run.py wraps the whole training and testing procedure of an FSL algorithm, for which all configurations are specified by an individual YAML file contained in the config folder; see config/set_config_PN.py for an example. The file run.py contains a Python class Few_Shot_CLI, inherited from LightningCLI. It adds new hyperparameters (also specified in the configuration file) as well as a testing process for FSL.

FewShotModule

Need modification. The folder modules contains LightningModules for FSL models, specifying model components, optimizers, logging metrics and train/val/test processes. Notably, modules/base_module.py contains the template module for all FSL models. All other modules inherit the base module; see modules/PN.py and modules/cosine_classifier.py for how episodic/non-episodic models inherit from the base module.

architectures

Need modification. We divide general FSL architectures into a feature extractor and a classification head, specified respectively in architectures/feature_extractor and architectures/classifier. These are just common nn.Modules in PyTorch, which are embedded in the LightningModules mentioned above. The recommended feature extractor is ResNet12, which is popular and shows promising performance. The classification head, however, varies with the algorithm and needs a specific design.
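The split can be illustrated with a ProtoNet-style head. This is a sketch in NumPy rather than the repo's actual proto_head.py (which operates on PyTorch tensors); the function name and signature are ours:

```python
import numpy as np

def proto_head(support, support_labels, query, n_way):
    """Sketch of a ProtoNet-style classification head: prototypes are
    class means of support features, and queries are scored by negative
    squared Euclidean distance to each prototype."""
    prototypes = np.stack([
        support[support_labels == c].mean(axis=0) for c in range(n_way)
    ])                                                   # (n_way, dim)
    # pairwise squared distances -> one row of logits per query
    d2 = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return -d2                                           # (n_query, n_way)
```

The feature extractor only has to map images to feature vectors; swapping in a different head (e.g., a cosine-similarity classifier) leaves the extractor untouched, which is why the two live in separate folders.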

Datasets and DataModule

It is usually not necessary to modify these files. Pytorch-lightning unifies data processing across training, validation and testing into a single LightningDataModule. We design such a datamodule in dataset_and_process/datamodules/few_shot_datamodule.py for FSL, enabling episodic/non-episodic sampling and DDP for fast multi-GPU training. The datasets themselves are defined in dataset_and_process/datasets as common PyTorch dataset classes. There is no need to modify the dataset module unless new datasets are involved.

Callbacks and Plugins

It is usually not necessary to modify these. See the pytorch-lightning documentation for a detailed introduction to callbacks and plugins. They are additional functionalities added to the system in a modular fashion.

Configuration

Need modification. See LightningCLI for how a YAML configuration file works. Each algorithm needs its own configuration file, though most of the configurations are the same across algorithms. Thus it is convenient to copy an existing configuration file and modify it for a new algorithm.

Owner
Xu Luo
M.S. student of SMILE Lab, UESTC