
pmapper

pmapper is a super-resolution and deconvolution toolkit for Python 3.6+. PMAP stands for Poisson Maximum A-Posteriori, a highly flexible and adaptable algorithm for these problems. An implementation of the contemporary Richardson-Lucy algorithm is included for comparison.

The name of this repository is an homage to MTF-Mapper, a slanted edge MTF measurement program written by Frans van den Bergh.

The implementations of all algorithms in this repository are CPU/GPU agnostic and performant, able to perform 4K restoration at hundreds of iterations per second.

Usage

Basic PMAP, Multi-frame PMAP

import pmapper

img = ... # load an image somehow
psf = ... # acquire the PSF associated with the img
pmp = pmapper.PMAP(img, psf)  # "PMAP problem"
while pmp.iter < 100:  # number of iterations
    fHat = pmp.step()  # fHat is the object estimate

In simulation studies, the true object can be compared to fHat (for example, via mean square error) to track convergence, as sketched below. If the PSF is "larger" than the image, for example a 1024x1024 image and a 2048x2048 PSF, the output will be super-resolved at the 2048x2048 resolution.
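A minimal sketch of such convergence tracking, assuming a ground-truth array truth is available from the simulation (the name is illustrative, not part of the pmapper API):

import numpy as np

import pmapper

img = ...    # simulated blurry, noisy image
psf = ...    # PSF used to simulate img
truth = ...  # ground-truth object, same shape as fHat

pmp = pmapper.PMAP(img, psf)
mses = []
while pmp.iter < 100:
    fHat = pmp.step()
    # mean square error between the current estimate and the truth
    mses.append(float(np.mean((fHat - truth)**2)))

A still-decreasing mses at the final iteration suggests the run has not yet converged.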

PMAP is able to combine multiple images of the same object taken with different PSFs into one via its multi-frame variant. This can be used to combat dynamic atmospheric seeing conditions and line-of-sight jitter, or even to perform incoherent aperture synthesis, rendering images from sparse-aperture systems that mimic or exceed a system with a fully filled aperture.

import pmapper

# load a sequence of images; could be any iterable,
# or e.g. a kxmxn ndarray, with k = num frames
# psfs must have the same length (k) and correspond
# to the images at the same indices
imgs = ...
psfs = ...
pmp = pmapper.MFPMAP(imgs, psfs)  # "PMAP problem"
while pmp.iter < len(imgs)*100:  # number of iterations
    fHat = pmp.step()  # fHat is the object estimate

Multi-frame PMAP cycles through the images and PSFs, so the total number of iterations "should" be an integer multiple of the number of source images. In this way, each image is "visited" an equal number of times.
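As the comment in the snippet above notes, the frames can also be packed into a single kxmxn ndarray. A minimal sketch with numpy, where frame0..frame2 and psf0..psf2 are illustrative placeholders for registered frames of the same scene and their matching PSFs:

import numpy as np

import pmapper

# frame0..frame2 are m x n arrays of the same scene;
# psf0..psf2 are the PSFs associated with each frame
imgs = np.stack([frame0, frame1, frame2])  # shape (k, m, n)
psfs = np.stack([psf0, psf1, psf2])        # same order as imgs

pmp = pmapper.MFPMAP(imgs, psfs)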

GPU computing

As mentioned previously, pmapper can be used trivially on a GPU. To do so, simply make the following modification:

import pmapper
from pmapper import backend

import cupy as cp
from cupyx.scipy import (
    ndimage as cpndimage,
    fft as cpfft
)

backend.np._srcmodule = cp
backend.fft.fft = cpfft
backend.ndimage._srcmodule = cpndimage

# if your data is not on the GPU already
img = cp.array(img)
psf = cp.array(psf)

# ... do PMAP, it will run on a GPU without changing anything about your code

# transfer the result back to the CPU as a numpy array
fHatCPU = fHat.get()

cupy is not the only way to do so; anything that quacks like numpy, scipy's fft, and scipy's ndimage can be substituted into the backend. This can be done dynamically, at runtime. You will likely want to cast your imagery from fp64 to fp32 for better performance on the GPU, as sketched below.
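A minimal sketch of that cast, and of restoring the default CPU backend afterward. The assignments simply reverse the pattern shown above; treat them as an assumption about pmapper's backend internals rather than a documented API:

import numpy as np
from scipy import fft, ndimage

import cupy as cp

from pmapper import backend

# cast to fp32 before moving to the GPU for better throughput
img = cp.array(img, dtype=cp.float32)
psf = cp.array(psf, dtype=cp.float32)

# ... do PMAP on the GPU ...

# swap the backend back to numpy/scipy on the CPU
backend.np._srcmodule = np
backend.fft.fft = fft
backend.ndimage._srcmodule = ndimage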
