Code implementation of "Sparsity Probe: Analysis tool for Deep Learning Models"

Overview


This repository is a limited implementation of "Sparsity Probe: Analysis tool for Deep Learning Models" by I. Ben-Shaul and S. Dekel (2021).

Figure: Folded Ball example.

Downloading the Repo

git clone https://github.com/idobenshaul10/SparsityProbe.git
cd SparsityProbe
pip install -r requirements.txt

Requirements

torch==1.7.0
umap_learn==0.4.6
matplotlib==3.3.2
tqdm==4.49.0
seaborn==0.11.0
torchvision==0.8.1
numpy==1.19.2
scikit_learn==0.24.2
umap==0.1.1

Usage

The first step in using this repo should be to look at the CIFAR10 Example. In this example, we demonstrate running the Sparsity Probe on a trained ResNet18 on the CIFAR10 dataset, at selected layers.
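As a taste of what the example involves, here is a minimal PyTorch sketch of loading a trained ResNet18 and picking the layers to probe. The probe call itself is omitted, and torchvision's ImageNet weights stand in for the CIFAR10 checkpoint used in the example:

import torchvision

# Load a trained ResNet18; torchvision's ImageNet weights stand in here
# for the CIFAR10 checkpoint used by the example notebook.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# Select the layers whose outputs will be scored, e.g. the four residual
# stages and the final pooling layer.
layers = [model.layer1, model.layer2, model.layer3, model.layer4, model.avgpool]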

Creating a new environment:

Create a new environment in the environments directory, inheriting from BaseEnviorment. This environment should include the train and test datasets (including the matching transforms), the model layers on which to test the alpha-scores (see the cifar10_env example), and the trained model.
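A hypothetical sketch of such an environment follows; BaseEnviorment is the repo's base class, but the import path, method names, dataset choice, and checkpoint path here are illustrative assumptions, so check the cifar10_env example for the real interface:

import torch
import torchvision
import torchvision.transforms as transforms

# The import path and method names below are assumptions made for this
# sketch; see the cifar10_env example for the repo's actual interface.
from environments.base_environment import BaseEnviorment

class MyCIFAR10Env(BaseEnviorment):
    def get_train_dataset(self):
        return torchvision.datasets.CIFAR10(
            root='./data', train=True, download=True,
            transform=transforms.ToTensor())

    def get_test_dataset(self):
        return torchvision.datasets.CIFAR10(
            root='./data', train=False, download=True,
            transform=transforms.ToTensor())

    def get_model(self):
        # Load the trained model whose layers will be probed
        # (checkpoint path is illustrative).
        model = torchvision.models.resnet18(num_classes=10)
        model.load_state_dict(torch.load('results/resnet18_cifar10.pt'))
        return model

    def get_layers(self, model):
        # The layers on which alpha-scores are computed.
        return [model.layer1, model.layer2, model.layer3, model.layer4]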

Training a model:

A basic model can be trained with the train.py script, which uses an environment to load the model and the datasets. Example usage:

python train/train_mnist.py --output_path "results" --batch_size 32 --epochs 100

Running the Sparsity Probe

This is done using the DL_smoothness.py script. Arguments:
trees - number of trees in the forest.
depth - maximum depth of each tree.
batch_size - batch size used in the forward pass (when computing the layer outputs).
env_name - environment that is loaded to measure alpha-scores on.
epsilon_1 - the epsilon_low used for the numerical approximation; by default, epsilon_high is initialized as 4*epsilon_low.
only_umap - only create UMAP plots of the intermediate layers (without computing alpha-scores).
use_clustering - run KMeans on the intermediate layers.
calc_test - calculate test accuracy (more metrics coming soon).
output_folder - location where all outputs are saved.
feature_dimension - to reduce computation cost, the alpha-scores are computed on the features after a dimensionality-reduction technique has been applied. Currently, if dim(layer_outputs) > feature_dimension, TruncatedSVD is used to reduce dim(layer_outputs) to feature_dimension. The default feature_dimension is 2500.
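Combining these arguments, a run might look like the following; the flag spellings are assumed to match the argument names listed above, and the environment name and values are illustrative:

python DL_smoothness.py --env_name cifar10_env --trees 5 --depth 15 --batch_size 1024 --output_folder results --calc_test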

Plotting Results

Result plots can be created using this script.
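Since the link to the plotting script is not reproduced here, the following is only a minimal matplotlib sketch of the kind of plot the probe lends itself to: alpha-scores as a function of layer depth. The layer names and score values are illustrative placeholders, not real results:

import matplotlib.pyplot as plt

# Illustrative placeholder values, not real results.
layer_names = ['layer1', 'layer2', 'layer3', 'layer4', 'avgpool']
alpha_scores = [0.31, 0.42, 0.55, 0.71, 0.83]

plt.plot(layer_names, alpha_scores, marker='o')
plt.xlabel('layer')
plt.ylabel('alpha-score')
plt.title('Alpha-scores across layers')
plt.savefig('alpha_scores.png')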

Figure: UMAP example.

Acknowledgements

The pretrained CIFAR10 ResNet18 network used in the example is taken from This Repo.

License

This repository is MIT licensed, as found in the LICENSE file.
