
Overview

Sylvester normalizing flows for variational inference

PyTorch implementation of Sylvester normalizing flows, based on our paper:

Sylvester normalizing flows for variational inference (UAI 2018)
Rianne van den Berg*, Leonard Hasenclever*, Jakub Tomczak, Max Welling

*Equal contribution

Requirements

The latest release of the code is compatible with:

  • PyTorch 1.0.0

  • Python 3.7

Thanks to Martin Engelcke for adapting the code to provide this compatibility.

Version v0.3.0_2.7 is compatible with:

  • PyTorch 0.3.0. WARNING: more recent versions of PyTorch use different default flags for the binary cross-entropy loss module nn.BCELoss(). You have to adapt the appropriate flags if you want to port this code to a later version (see the sketch after this list).

  • python 2.7
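
For reference, the flag adaptation is small. A minimal sketch, assuming the original code requests a summed binary cross entropy (the exact flags used in this repo may differ):

```python
import torch.nn as nn

# PyTorch 0.3.0: reduction was controlled by per-loss flags;
# size_average=False yielded a summed loss.
bce_old = nn.BCELoss(size_average=False)

# PyTorch >= 0.4.1: size_average/reduce were folded into a single
# `reduction` argument; 'sum' is the equivalent of size_average=False.
bce_new = nn.BCELoss(reduction='sum')
```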

Data

The experiments can be run on the following datasets:

  • static MNIST: the dataset is in the data folder;
  • OMNIGLOT: the dataset can be downloaded from link;
  • Caltech 101 Silhouettes: the dataset can be downloaded from link;
  • Frey Faces: the dataset can be downloaded from link.

Usage

Below, example commands are given for running experiments on static MNIST with different types of Sylvester normalizing flows, each using 4 flow steps:

Orthogonal Sylvester flows
This example uses a bottleneck of size 8 (Q has 8 columns containing orthonormal vectors).

python main_experiment.py -d mnist -nf 4 --flow orthogonal --num_ortho_vecs 8 
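
The command does not show how Q is kept orthogonal; the paper orthonormalizes its columns with an iterative procedure. A minimal sketch of such a Björck-style iteration (the function name and step count are illustrative, not this repo's API):

```python
import torch

def orthogonalize(q, steps=30):
    """Iteratively orthonormalize the columns of q.

    Bjorck-style update q <- q (I + 0.5 (I - q^T q)), which converges when
    the spectral norm of (q^T q - I) is below 1, hence the crude pre-scaling.
    """
    q = q / q.norm()
    eye = torch.eye(q.shape[1])
    for _ in range(steps):
        q = q @ (eye + 0.5 * (eye - q.t() @ q))
    return q

q = orthogonalize(torch.randn(64, 8))  # 8 orthonormal columns, as above
print(q.t() @ q)  # approximately the 8x8 identity
```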

Householder Sylvester flows
This example uses 8 Householder reflections per orthogonal matrix Q.

python main_experiment.py -d mnist -nf 4 --flow householder --num_householder 8
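
Here each Q is parameterized as a product of Householder reflections, which is orthogonal by construction. A minimal sketch of that construction (names are illustrative):

```python
import torch

def householder_q(vs):
    """Build an orthogonal Q as a product of Householder reflections.

    vs: (num_reflections, z_size); each row v defines the reflection
    H = I - 2 v v^T / ||v||^2, and the product of the H's is orthogonal.
    """
    z_size = vs.shape[1]
    q = torch.eye(z_size)
    for v in vs:
        v = v / v.norm()
        h = torch.eye(z_size) - 2.0 * (v.unsqueeze(1) @ v.unsqueeze(0))
        q = h @ q
    return q

q = householder_q(torch.randn(8, 64))  # 8 reflections, as in the command above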

Triangular Sylvester flows

python main_experiment.py -d mnist -nf 4 --flow triangular 
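
The appeal of all Sylvester variants is the cheap log-determinant: for z' = z + Q R1 h(R2 Q^T z + b), Sylvester's determinant identity det(I + AB) = det(I + BA) collapses the Jacobian determinant to a product over the diagonals of the two triangular matrices. A minimal sketch of that computation (argument names are illustrative, not this repo's API):

```python
import torch

def sylvester_logdet(diag_r1, diag_r2, h_prime):
    """Log |det J| for one Sylvester step z' = z + Q R1 h(R2 Q^T z + b).

    With Q^T Q = I, det(I + Q R1 diag(h') R2 Q^T) = det(I + diag(h') R2 R1),
    which is triangular, so:
        log|det J| = sum_i log|1 + h'_i * r1_ii * r2_ii|

    diag_r1, diag_r2: triangular diagonals, shape (..., m)
    h_prime: activation derivative at R2 Q^T z + b, shape (..., m)
    """
    return torch.log(torch.abs(1.0 + h_prime * diag_r1 * diag_r2)).sum(-1)
```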

To run an experiment with other types of normalizing flows or just with a factorized Gaussian posterior, see below.


Factorized Gaussian posterior

python main_experiment.py -d mnist --flow no_flow
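
The no_flow baseline is a plain VAE whose posterior is a diagonal Gaussian sampled via the reparameterization trick; a minimal sketch:

```python
import torch

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), so gradients
    flow through mu and logvar (the reparameterization trick)."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + std * eps
```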

Planar flows

python main_experiment.py -d mnist -nf 4 --flow planar

Inverse Autoregressive flows
This example uses MADEs with 320 hidden units.

python main_experiment.py -d mnist -nf 4 --flow iaf --made_h_size 320
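
Each IAF step transforms z with the outputs of an autoregressive network (the MADE), so the Jacobian is triangular and its log-determinant is a simple sum. A minimal sketch of the numerically stable gated update from Kingma et al. (2016); whether this repo uses exactly this gating is an assumption:

```python
import torch

def iaf_step(z, m, s):
    """One IAF update, given MADE outputs m, s evaluated on z.

    The autoregressive masks guarantee dz'_i/dz_j = 0 for j > i, so the
    Jacobian is triangular. Gated form: z' = sigma * z + (1 - sigma) * m.
    """
    gate = torch.sigmoid(s)
    z_new = gate * z + (1.0 - gate) * m
    log_det = torch.log(gate + 1e-8).sum(-1)  # log|det J| = sum_i log sigma_i
    return z_new, log_det
```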

More information about additional argument options can be found by running `python main_experiment.py -h`.

Cite

Please cite our paper if you use this code in your own work:

@inproceedings{vdberg2018sylvester,
  title={Sylvester normalizing flows for variational inference},
  author={van den Berg, Rianne and Hasenclever, Leonard and Tomczak, Jakub and Welling, Max},
  booktitle={Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI)},
  year={2018}
}
Comments
  • about log_p_zk

    Hi Rianne, this is great code, and I have a small question about log p(zk): we hope p(zk) in a VAE can be a distribution whose form is not fixed, but the calculation of log p(zk) on line 81 of loss.py seems to imply that p(zk) is a standard Gaussian. Is there a mistake in my understanding?
    Thank you for this code.

    opened by Archer666 10
  • loss = bce + beta * kl

    Hello Rianne: Thanks very much. I am a bit confused by line 44 in loss.py: loss = bce + beta * kl. Based on equation 3 in Tomczak's paper (Improving Variational Auto-Encoders using Householder Flow), shouldn't it be "loss = bce - beta * kl"? Also, why report -ELBO instead of ELBO in your metrics? Thanks

    opened by tumis1946 4
  • PyTorch_v1 and Python3 compatibility

    Hi Rianne,

    This PR contains a 'minimal' set of changes to run the code with the latest PyTorch versions and Python 3 ( #1 #2 )

    It is 'minimal' in the sense that I only made changes that affect functionality. There are additional cosmetic changes that could be made; e.g. Variable(), the volatile flag, and F.sigmoid() have been deprecated but they should not affect functionality.

    I tested the changes with PyTorch 1.0.0 and Python 3.7 on MNIST and Freyfaces, giving me similar results for the baseline VAE without any flows.

    I am not sure if more rigorous tests should be done, and whether you want to merge this into master or keep a separate branch.

    Best, Martin

    opened by martinengelcke 1
  • PR for PyTorch 1.+ and Python 3 support

    Hi Rianne,

    Thank you for this really nice code release :)

    I cloned the repo and made some changes so that it runs with PyTorch 1.+ and Python 3. I also solved the issue mentioned in #1. I tested the changes on MNIST (binary input) and Freyfaces (multinomial input), giving similar results to the original code.

    If you are interested in reviewing and potentially adding this to the repo, I would be happy to clean things up and make a PR.

    Best, Martin

    opened by martinengelcke 1
  • RuntimeError in default main experiment

    Hi Rianne,

    I'm trying to run the default experiment on cpu with a small latent space dimension (z=5):

    python main_experiment.py -d mnist --flow no_flow -nc --z_size 5

    Which unfortunately gives the following error:

    Traceback (most recent call last):
      File "main_experiment.py", line 278, in <module>
        run(args, kwargs)
      File "main_experiment.py", line 189, in run
        tr_loss = train(epoch, train_loader, model, optimizer, args)
      File ".../sylvester-flows/optimization/training.py", line 39, in train
        loss.backward()
      File "//anaconda/envs/dl/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "//anaconda/envs/dl/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
    

    I am using PyTorch version 1.0.0 and did not modify the code.

    opened by trdavidson 1
  • How to sample from latent distribution

    Hello,

    I was wondering how I can generate samples using the decoder network after training. In a VAE, I would just sample from the prior distribution z~N(0,1) and generate a data point using the decoder. In TriangularSylvesterVAE, however, I also have to provide hyperparameters lambda(x) that depend on the input. How can I sample from my latent distribution and generate samples from it?

    I am new to normalizing flows in general and would appreciate any help.

    opened by crlz182 2
Releases (v1.0.0_3.7)
  • v1.0.0_3.7(Jul 5, 2019)

    Sylvester Normalizing Flow repository compatible with PyTorch 1.0.0 and Python 3.7. Thanks to martinengelcke for taking care of this compatibility.

    Source code(tar.gz)
    Source code(zip)
  • v0.3.0_2.7(Jul 5, 2019)

Owner
Rianne van den Berg
Senior researcher at Microsoft Research Amsterdam; formerly at Google Brain and the University of Amsterdam.