Modular Probabilistic Programming on MXNet

Overview

MXFusion

Tutorials | Documentation | Contribution Guide

MXFusion is a modular deep probabilistic programming library.

With MXFusion Modules you can use state-of-the-art inference techniques for specialized probabilistic models without needing to implement those techniques yourself. MXFusion helps you rapidly build and test new methods at scale by focusing on the modularity of probabilistic models and their integration with modern deep learning techniques.

MXFusion uses MXNet as its computational platform to bring the power of distributed, heterogeneous computation to probabilistic modeling.
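
As a quick taste, here is a minimal sketch in the spirit of the getting-started tutorial (module paths and signatures follow the tutorial, but may differ across versions): define a Normal model and fit its parameters by MAP.

    import mxnet as mx
    from mxfusion import Model, Variable
    from mxfusion.components.variables import PositiveTransformation
    from mxfusion.components.distributions import Normal
    from mxfusion.inference import GradBasedInference, MAP

    # A toy model: 100 observations from a Normal with unknown mean and variance.
    m = Model()
    m.mu = Variable()
    m.s = Variable(transformation=PositiveTransformation())  # keep variance positive
    m.Y = Normal.define_variable(mean=m.mu, variance=m.s, shape=(100,))

    # MAP estimation of mu and s from observed data.
    infr = GradBasedInference(inference_algorithm=MAP(model=m, observed=[m.Y]))
    infr.run(Y=mx.nd.random.normal(loc=3.0, scale=1.0, shape=(100,)), max_iter=200)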

Installation

Dependencies / Prerequisites

MXFusion's primary dependencies are MXNet >= 1.3 and NetworkX >= 2.1. See requirements.

Supported Architectures / Versions

MXFusion is tested on Python 3.4+ on macOS and Linux.

Installation of MXNet

There are multiple PyPI packages of MXNet. A straightforward CPU-only installation can be done with:

pip install mxnet

For an installation with GPU or MKL support, detailed instructions can be found on the MXNet site.
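
For example, at the time of writing a CPU build with MKL-DNN acceleration was published under a separate package name (package names may change, so check the MXNet site first):

pip install mxnet-mkl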

pip

If you just want to use MXFusion and not modify the source, you can install it through pip:

pip install mxfusion

From source

To install MXFusion from source, after cloning the repository run the following from the top-level directory:

pip install .

Where to go from here?

Tutorials

Documentation

Contributions

Community

We welcome your contributions and questions and are working to build a responsive community around MXFusion. Feel free to file a GitHub issue if you find a bug or want to request a new feature.

Contributing

Have a look at our contributing guide. Thanks for your interest!

Points of contact for MXFusion are:

  • Eric Meissner (@meissnereric)
  • Zhenwen Dai (@zhenwendai)

License

MXFusion is licensed under Apache 2.0. See LICENSE.

Comments
  • LBFGS optimizer

    Issue #, if available: #75

    Description of changes: This is a draft pull request to add LBFGS optimizer for optimization of deterministic loss functions. I found this works much better than the current default optimizer for vanilla GPs.

If you don't object to the approach I've used here, I will take the time to add tests and tidy up the code.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 5
  • Copy over module algorithms correctly

The module algorithms dictionary is expected to hold lists of tuples, but after cloning a module this is not the case. This PR fixes that.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 5
  • Changed transpose property to function.

This stops the debugger from accidentally creating new variables on access.

Description of changes: When using an interactive debugger (e.g. PyCharm), inspecting any Variable object invokes the property T, which has the side effect of creating new variables that are not part of the graph. By changing this to a function, .transpose(), the functionality is maintained (at a slight cost in ease of use) while preventing this from happening.
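
A minimal illustration of the pitfall (the class and helper below are illustrative, not MXFusion's actual code):

    class Variable:
        @property
        def T(self):
            # A debugger that inspects this object evaluates the property,
            # silently creating and registering a new variable as a side effect.
            return self._make_transposed_variable()

        def transpose(self):
            # As a plain method, nothing runs until it is explicitly called.
            return self._make_transposed_variable()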

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by tdiethe 5
  • GP replicate_self fixes

This fixes #183 (but maybe you want to copy kernels in a different way, as discussed yesterday). It also fixes a few other issues with cloning this specific GP module and adds tests.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 4
  • DGP implementation

    This PR adds an implementation of DGPs.

    I have added a marginalise_conditional_distribution method to the conditional GP class which is used both by the deep GP and the SVI GP.

I chose not to explicitly represent the conditional p(f|u), as I couldn't work out how to represent it while keeping the marginalization as efficient.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 4
  • Move constants in InferenceParameters into ParameterDict

Description of changes: Shift MXNet-based constants from the InferenceParameters._constants dictionary (which still exists and keeps scalar/native constants) into the ParameterDict where other Parameters are kept.

This also removes the need for a separate MXNet constants file at serialization time!

(Feel free to ignore the documentation change here; it's a remnant of the larger docs change I made in another PR, and I'll try to make sure the other one is the one that gets kept.)

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by meissnereric 4
  • Implement common MXNet operators (dot product, diag, etc.) in MXFusion

    These should be implemented as stateless functions.

    Create a class that takes in variables and returns a function evaluation.

Basic - addition, subtraction, multiplication, division of variables.

Elementwise - square, exponentiation, log.

Aggregation - sum, mean, prod.

Matrix ops - dot product, diag.

Matrix manipulation - reshape, transpose.
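
A hypothetical sketch of the stateless-operator idea (the Operator class and dot function are illustrative, not the final MXFusion API):

    import mxnet as mx

    class Operator:
        """Wraps an MXNet op as a stateless function over variables."""
        def __init__(self, func, inputs):
            self.func = func      # underlying MXNet operator, e.g. mx.nd.dot
            self.inputs = inputs  # the variables this operator consumes

        def eval(self, *arrays):
            # Evaluate the wrapped op on concrete MXNet arrays at runtime.
            return self.func(*arrays)

    def dot(a, b):
        # A stateless function: takes variables, returns a function evaluation.
        return Operator(mx.nd.dot, [a, b])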

    enhancement 
    opened by meissnereric 4
  • Folder structure changes and renaming

This PR contains the following changes:

    • Folder structure changes and renaming according to our discussion.
    • Extend Mark's multivariate normal distribution implementation to allow general broadcasting.

    @marpulli I changed the behavior of your log_pdf and kl_divergence implementation to return an array of the size of mean without the last dimension. This is consistent with our other distributions.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by zhenwendai 3
  • The monthly cron build of Travis-CI on master fails.

    The monthly cron build of Travis-CI on master branch fails, because it tries to deploy again. Here is the link to a build: https://travis-ci.org/amzn/MXFusion/jobs/453598240.

    bug 
    opened by zhenwendai 3
  • Uniform and Laplace distributions

    Issue #, if available:

    Description of changes:

    Uniform and Laplace distributions + tests

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by tdiethe 3
  • Added Bernoulli distribution + tests

    Issue #, if available: #21

    Description of changes: Added Bernoulli distribution + tests

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by tdiethe 3
  • Switch to deep numpy as backend.

    Hi,

I'm a member of the Deep Numpy dev team (Deep Numpy is a new MXNet frontend API with a NumPy-like interface: https://numpy.mxnet.io/index.html), and I'm currently working on its random module, which, as the name suggests, behaves exactly like NumPy, including broadcastable parameters and output shapes; see for example: https://github.com/apache/incubator-mxnet/pull/15858

Also, the sampling backend for rejection sampling has been entirely rewritten to remove the while loop from the GPU kernel. According to my profiling, the new version performs ten times faster than nd.random on GPU at the cost of a tiny increase in memory usage (not merged yet): https://github.com/apache/incubator-mxnet/issues/15928

We could have some further discussion if you are interested in switching to Deep Numpy's random sampling backend, or in migrating MXFusion to the Deep Numpy API :-)

    opened by xidulu 0
  • GP Likelihood classes

    Is your feature request related to a problem? Please describe. The SVGP and DGP modules should be able to work with alternative likelihoods. They are both currently implemented for Gaussian likelihoods only. The ability to switch out likelihoods would make things like #126 much simpler with less code duplication.

Describe the solution you'd like. Currently:

    m = Model()
    m.X = Variable()
    m.noise_var = Variable(transformation=PositiveTransformation())
    m.Y = SVGP.define_variable(m.X, kern, m.noise_var...)
    

    I'd like something more like

    m = Model()
    m.X = Variable()
    m.noise_var = Variable(transformation=PositiveTransformation())
    m.likelihood = NormalGPLikelihood(variance)
    # This could also be: 
    # m.likelihood = BernoulliGPLikelihood()
    m.Y = SVGP.define_variable(m.X, kern, m.likelihood...)
    

These likelihoods would need to be able to compute expectations of the form E_{p(f)}[log p(y|f)], where p(f) is Gaussian.

class BernoulliGPLikelihood(GPLikelihood):
    # This class would also contain the transformation of the
    # latent function to the interval [0, 1] I think
    def expectation(self, mean, variance, data):
        # This method would implement the quadrature rule to do this
        pass

class NormalGPLikelihood(GPLikelihood):
    def expectation(self, mean, variance, data):  # think of a better name!
        # This method could use the analytic expression
        pass
    

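For the Bernoulli case, the quadrature could be Gauss-Hermite. A minimal sketch in plain NumPy (independent of MXFusion; names are illustrative):

    import numpy as np

    def gauss_hermite_expectation(log_pdf, mean, variance, n_points=20):
        # E_{N(f | mean, variance)}[log_pdf(f)] via Gauss-Hermite quadrature.
        x, w = np.polynomial.hermite.hermgauss(n_points)
        f = mean + np.sqrt(2.0 * variance) * x  # change of variables
        return np.dot(w, log_pdf(f)) / np.sqrt(np.pi)

    # Example: expected Bernoulli log-likelihood (y=1) under a Gaussian latent f.
    sigmoid = lambda f: 1.0 / (1.0 + np.exp(-f))
    expected_ll = gauss_hermite_expectation(
        lambda f: np.log(sigmoid(f)), mean=0.5, variance=1.0)
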
Describe alternatives you've considered. Creating a new class might not be necessary; we might be able to reuse the current distribution objects directly.

I would be happy to implement this, if we first agree on a design that people are happy with.

    opened by marpulli 0
  • Inference.run throws when passing Variables of type PARAMETER as 'constants' argument

    Describe the bug

When passing Variables of type PARAMETER as the 'constants' argument to the constructor of Inference (or child classes), this line throws during the execution of Inference's run method, since the Variables that were passed as 'constants' are in self._var_trans but not in the 'kw' input argument.

    Expected behavior

Execution does not throw, and Variables passed as the 'constants' argument to the Inference constructor, even if they are of type PARAMETER, are treated as constants and not optimized during training.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug 
    opened by pabfer 2
  • Execution throws if a Kernel Variable is set to CONSTANT

    Describe the bug

If a kernel Variable is of CONSTANT type, fetch_parameters throws here, since CONSTANT Variables do not make it into the params input argument.

    Expected behavior

Execution does not throw, and kernel Variables that are of type CONSTANT are kept constant and not optimized during training.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug 
    opened by pabfer 1
  • Uninformative error in MinibatchInferenceLoop if batch_size is larger than the dataset size

    Describe the bug

    If batch_size for MinibatchInferenceLoop is larger than the dataset size, this line throws due to division by zero.

    Expected behavior

    Possible mitigations are:

    • Option No. 1: Default batch size to min("requested batch size", "dataset size") and issue a warning if "requested batch size" > "dataset size".
    • Option No. 2: Throw with a message stating why it's throwing and how to fix the issue.
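
A minimal sketch of Option No. 1 (illustrative; not the actual MXFusion code):

    import warnings

    def effective_batch_size(requested, data_size):
        # Clamp the batch size to the dataset size and warn, instead of
        # letting the number of batches underflow to zero.
        if requested > data_size:
            warnings.warn("batch_size (%d) exceeds dataset size (%d); "
                          "using the dataset size instead." % (requested, data_size))
        return min(requested, data_size)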

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug Easy 
    opened by pabfer 1
Releases (v0.3.1)
  • v0.3.1(May 30, 2019)

    • Added SVGP regression notebook.
    • Updated the VAE and GP regression notebook.
    • Removed the dependency on scikit-learn.
    • Moved the parameters of Gluon block to be controlled by MXFusion.
    • Fixed a bug in mini-batch learning.
    • Extended the SVGPRegression module to take samples as input variables.
    • Documentation and stylistic edits.
    • Merged in the PILCO Changes
  • v0.3.0(Feb 20, 2019)

    The bigger changes are:

    • #133 Changing variable broadcasting for factors
    • #153 Simplify serialization to use a simple zip file

Other than that, it's mostly bug fixes and documentation changes.

    • #144 Add details to inference and serialization documentation
    • #148 Fix a bug for SVGP regression with minibatch training
    • #81 Reduce num_samples in uniform non-mock test to 1000
    • #137 Changed transpose property to function.
    • #135 Bug fix in function evaluation
  • v0.2.1(Nov 9, 2018)

    • Add the tutorial for Gaussian process regression.
    • Fix empty operator bug.
    • Fix bug to allow the same variable for multiple inputs to a factor.
    • Add module serialization.
    • Fix the bug that causes the failure of documentation compilation.
    • Fix the bug: the inference methods of GP modules do not handle samples.
    • Update issue templates.
    • Add license headers to all files.
    • Add the getting started notebook.
    • Remove the dependency on future.
    • Update the Inference documentation.
    • Implement Dirichlet distribution.
    • Add logistic variable transformation.
    • Implement Expectation inference.
    • Fix the bugs related to dtype.
    • Validate shape of array variables.
    • Fix divide by zero error if max_iter < n_prints.
    • Add multiply kernel.
  • v0.2.0(Oct 19, 2018)

    • Improve the printing messages of the gradient loop and fix a bug in GluonFunctionEvaluation
    • Add Gamma distribution
    • GP Modules enhancement
    • Implement Module, GPRegression, SparseGPRegression and SVGPRegression…
    • Update the README
    • Add score function for variational inference
    • Refactor the interface of inference for modules
    • Add score function inference
    • Implement common MXNet operators (dot product, diag, etc.) in MXFusion
    • Add support for MXNet operators.
    • Clean up kernel functions and function wrappers for copying
    • Implement the base class MXFusionFunction
Owner

Amazon