PAWS 🐾 Predicting View-Assignments with Support Samples

Overview

This repo provides a PyTorch implementation of PAWS (predicting view assignments with support samples), as described in the paper Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples.

[Figure: PAWS method flowchart]

PAWS is a method for semi-supervised learning that builds on the principles of self-supervised distance-metric learning. PAWS pre-trains a model to minimize a consistency loss, which ensures that different views of the same unlabeled image are assigned similar pseudo-labels. The pseudo-labels are generated non-parametrically, by comparing the representations of the image views to those of a set of randomly sampled labeled images. The distance between the view representations and labeled representations is used to provide a weighting over class labels, which we interpret as a soft pseudo-label. By non-parametrically incorporating labeled samples in this way, PAWS extends the distance-metric loss used in self-supervised methods such as BYOL and SwAV to the semi-supervised setting.
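
As a rough illustration of that pseudo-labeling step, the soft label for a view can be computed as a similarity-weighted average of the support samples' one-hot labels. The sketch below is illustrative only; the function name, temperature value, and tensor shapes are assumptions, not the repo's exact API.

import torch.nn.functional as F

def soft_pseudo_labels(view_embeds, support_embeds, support_labels, tau=0.1):
    """Sketch: assign soft pseudo-labels to unlabeled views.

    view_embeds:    (B, D) representations of unlabeled image views
    support_embeds: (S, D) representations of randomly sampled labeled images
    support_labels: (S, C) one-hot labels of the support images
    """
    view_embeds = F.normalize(view_embeds, dim=1)
    support_embeds = F.normalize(support_embeds, dim=1)
    sims = view_embeds @ support_embeds.t() / tau   # (B, S) scaled cosine similarities
    weights = F.softmax(sims, dim=1)                # soft nearest-neighbour weights over the support set
    return weights @ support_labels                 # (B, C) soft pseudo-labels

The consistency loss then encourages the soft pseudo-labels assigned to two different views of the same image to agree.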

Also provided in this repo is a PyTorch implementation of the semi-supervised SimCLR+CT method described in the paper Supervision Accelerates Pretraining in Contrastive Semi-Supervised Learning of Visual Representations. SimCLR+CT combines the SimCLR self-supervised loss with the SuNCEt (supervised noise contrastive estimation) loss for semi-supervised learning.
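
For intuition, the SuNCEt term can be read as a supervised contrastive objective in which, for each labeled anchor, the positives are the other samples of the same class. The following is a rough sketch under that reading; the function name and temperature are illustrative, not the repo's implementation.

import torch
import torch.nn.functional as F

def suncet_loss_sketch(embeds, labels, tau=0.1):
    """Sketch: supervised noise contrastive estimation over a labeled batch.

    embeds: (N, D) embeddings of labeled samples (assumes every class appears at least twice)
    labels: (N,)   integer class labels
    """
    embeds = F.normalize(embeds, dim=1)
    sims = embeds @ embeds.t() / tau                                   # (N, N) similarity logits
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=embeds.device)
    sims = sims.masked_fill(self_mask, float('-inf'))                  # exclude self-similarity
    log_probs = F.log_softmax(sims, dim=1)
    same_class = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # log-probability of drawing any same-class sample for each anchor
    pos_log_prob = torch.logsumexp(log_probs.masked_fill(~same_class, float('-inf')), dim=1)
    return -pos_log_prob.mean()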

Pretrained models

We provide the full checkpoints for the PAWS pre-trained models, both with and without fine-tuning. The full checkpoints for the pretrained models contain the backbone, projection head, and prediction head weights. The finetuned model checkpoints, on the other hand, only include the backbone and linear classifier head weights. Top-1 classification accuracy for the pretrained models is reported using a nearest neighbour classifier. Top-1 classification accuracy for the finetuned models is reported using the class labels predicted by the network's last linear layer.

epochs   network   1% labels                        10% labels
                   pretrained (NN)   finetuned      pretrained (NN)   finetuned
300      RN50      65.4%             66.5%          73.1%             75.5%
200      RN50      64.6%             66.1%          71.9%             75.0%
100      RN50      62.6%             63.8%          71.0%             73.9%

Running PAWS semi-supervised pre-training and fine-tuning

Config files

All experiment parameters are specified in config files (as opposed to command-line arguments). Config files make it easier to keep track of different experiments, as well as to launch batches of jobs at a time. See the configs/ directory for example config files.

Requirements

  • Python 3.8
  • PyTorch 1.7.1
  • torchvision
  • CUDA 11.0
  • Apex with CUDA extension
  • Other dependencies: PyYaml, numpy, opencv, submitit

Labeled Training Splits

For reproducibility, we have pre-specified the labeled training images as .txt files in the imagenet_subsets/ and cifar10_subsets/ directories. Based on the specifications in your experiment's config file, our implementation will automatically use the images listed in one of these .txt files as the set of labeled images. On ImageNet, if you request a split of the data that is not contained in imagenet_subsets/ (for example, if you set unlabeled_frac != 0.9 and unlabeled_frac != 0.99, i.e., neither the 10% labeled nor the 1% labeled setting), then at the start of training the code will independently flip a coin for each training image with probability 1-unlabeled_frac to determine whether or not to keep the image's label.
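
For illustration, that coin flip amounts to keeping each image's label independently with probability 1 - unlabeled_frac, roughly as in this sketch (the function name is illustrative, not the repo's):

import random

def sample_labeled_subset(image_indices, unlabeled_frac, seed=0):
    """Sketch: keep each image's label independently with probability 1 - unlabeled_frac."""
    rng = random.Random(seed)
    return [i for i in image_indices if rng.random() < (1.0 - unlabeled_frac)]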

Single-GPU training

PAWS is very simple to implement and experiment with. Our implementation starts from main.py, which parses the experiment config file and runs the desired script (e.g., PAWS pre-training or fine-tuning) locally on a single GPU.
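
Conceptually, the entry point parses the two flags and hands the loaded config to the selected script, roughly like the following simplified sketch (the import paths are illustrative, not the repo's exact module layout):

import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument('--sel', type=str, help='which script to run (e.g., paws_train, fine_tune, snn_fine_tune)')
parser.add_argument('--fname', type=str, help='path to the experiment config file')
args = parser.parse_args()

with open(args.fname, 'r') as f:
    params = yaml.load(f, Loader=yaml.FullLoader)  # config file specifies all experiment parameters

if args.sel == 'paws_train':
    from paws_train import main as run   # illustrative module name
else:
    from fine_tune import main as run    # illustrative module name
run(params)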

CIFAR10 pre-training

For example, to pre-train with PAWS on CIFAR10 locally on a single GPU, using the pre-training experiment configs specified inside configs/paws/cifar10_train.yaml, run:

python main.py \
  --sel paws_train \
  --fname configs/paws/cifar10_train.yaml

CIFAR10 evaluation

To fine-tune the pre-trained model for a few optimization steps with the SuNCEt (supervised noise contrastive estimation) loss on a single GPU, using the pre-training experiment configs specified inside configs/paws/cifar10_snn.yaml, run:

python main.py \
  --sel snn_fine_tune \
  --fname configs/paws/cifar10_snn.yaml

To then evaluate the nearest-neighbours performance of the model, locally, on a single GPU, run:

python snn_eval.py \
  --model-name wide_resnet28w2 --use-pred \
  --pretrained $path_to_pretrained_model \
  --unlabeled_frac $1.-fraction_of_labeled_train_data_to_support_nearest_neighbour_classification \
  --root-path $path_to_root_datasets_directory \
  --image-folder $image_directory_inside_root_path \
  --dataset-name cifar10_fine_tune \
  --split-seed $which_prespecified_seed_to_split_labeled_data

Multi-GPU training

Running PAWS across multiple GPUs on a cluster is also very simple. In the multi-GPU setting, the implementation starts from main_distributed.py, which, in addition to parsing the config file and launching the desired script, also allows for specifying details about distributed training. For distributed training, we use the popular open-source submitit tool and provide examples for a SLURM cluster, but feel free to edit main_distributed.py for your purposes to specify a different approach to launching a multi-GPU job on a cluster.
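
For reference, launching a job through submitit on SLURM looks roughly like the following sketch (the log-folder and partition names are illustrative, and this is not the exact logic in main_distributed.py):

import submitit

def train():
    # each SLURM task would enter the distributed training loop here
    ...

executor = submitit.AutoExecutor(folder='submitit_logs')  # illustrative log folder
executor.update_parameters(
    slurm_partition='my_partition',  # illustrative partition name
    nodes=8,
    tasks_per_node=8,
    gpus_per_node=8,
    timeout_min=1000)
job = executor.submit(train)
print(job.job_id)

main_distributed.py wires the parsed config and the command-line distributed settings into this kind of submission.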

ImageNet pre-training

For example, to pre-train with PAWS on 64 GPUs using the pre-training experiment configs specified inside configs/paws/imgnt_train.yaml, run:

python main_distributed.py \
  --sel paws_train \
  --fname configs/paws/imgnt_train.yaml \
  --partition $slurm_partition \
  --nodes 8 --tasks-per-node 8 \
  --time 1000 \
  --device volta16gb

ImageNet fine-tuning

To fine-tune a pre-trained model on 4 GPUs using the fine-tuning experiment configs specified inside configs/paws/fine_tune.yaml, run:

python main_distributed.py \
  --sel fine_tune \
  --fname configs/paws/fine_tune.yaml \
  --partition $slurm_partition \
  --nodes 1 --tasks-per-node 4 \
  --time 1000 \
  --device volta16gb

To evaluate the fine-tuned model locally on a single GPU, use the same config file, configs/paws/fine_tune.yaml, but change training: true to training: false. Then run:

python main.py \
  --sel fine_tune \
  --fname configs/paws/fine_tune.yaml

Soft Nearest Neighbours evaluation

To evaluate the nearest-neighbours performance of a pre-trained ResNet50 model on a single GPU, run:

python snn_eval.py \
  --model-name resnet50 --use-pred \
  --pretrained $path_to_pretrained_model \
  --unlabeled_frac $1.-fraction_of_labeled_train_data_to_support_nearest_neighbour_classification \
  --root-path $path_to_root_datasets_directory \
  --image-folder $image_directory_inside_root_path \
  --dataset-name $one_of:[imagenet_fine_tune, cifar10_fine_tune]

License

See the LICENSE file for details about the license under which this code is made available.

Citation

If you find this repository useful in your research, please consider giving a star and a citation 🐾

@article{assran2021semisupervised,
  title={Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples},
  author={Assran, Mahmoud and Caron, Mathilde and Misra, Ishan and Bojanowski, Piotr and Joulin, Armand and Ballas, Nicolas and Rabbat, Michael},
  journal={arXiv preprint arXiv:2104.13963},
  year={2021}
}
@article{assran2020supervision,
  title={Supervision Accelerates Pretraining in Contrastive Semi-Supervised Learning of Visual Representations},
  author={Assran, Mahmoud and Ballas, Nicolas and Castrejon, Lluis and Rabbat, Michael},
  journal={arXiv preprint arXiv:2006.10803},
  year={2020}
}