A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising (CVPR 2020 Oral & TPAMI 2021)


ELD

The implementation of the CVPR 2020 (Oral) paper "A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising" and its journal (TPAMI) version "Physics-based Noise Modeling for Extreme Low-light Photography". Interested readers may also refer to an insightful note about this work on Zhihu (in Chinese).

News

  • 2022/01/08: Major update: release the training code and other related items (including synthetic datasets, customized rawpy, calibrated camera noise parameters, baseline noise models, the calibrated SonyA7S2 camera response function (CRF), and a modern implementation of the EMoR radiometric calibration method) to accelerate further research!
  • 2022/01/05: Replace the released ELD dataset with my local version of the dataset. We thank @fenghansen for pointing this out; please refer to this issue for more details.
  • 2021/08/05: The comprehensive version of this work was accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
  • 2020/07/16: Release the ELD dataset and our pretrained models on GoogleDrive and Baidudisk (code: 0lby).

Highlights

  • We present a highly accurate noise formation model based on the characteristics of CMOS photosensors, thereby enabling us to synthesize realistic samples that better match the physics of the image formation process.

  • To study the generalizability of a neural network trained with existing schemes, we introduce a new Extreme Low-light Denoising (ELD) dataset that covers four representative modern camera devices for evaluation purposes only. The image capture setup and example images are shown below:

  • By training only with our synthetic data, we demonstrate that a convolutional neural network can compete with or sometimes even outperform a network trained with paired real data under extreme low-light settings. The denoising results of networks trained with multiple schemes, i.e. 1) synthetic data generated by the Poissonian-Gaussian noise model, 2) paired real data from the SID dataset, and 3) synthetic data generated by our proposed noise model, are displayed as follows:

Prerequisites

  • Python >= 3.6, PyTorch >= 1.6
  • Requirements: opencv-python, tensorboardX, lmdb, rawpy, torchinterp1d
  • Platforms: Ubuntu 16.04, cuda-10.1

Note that this codebase relies on my own customized rawpy, which provides more functionality than the official one. It is released together with our datasets and the pretrained models. To build rawpy from source, first compile and install the LibRaw library following the official instructions, then run pip install -e . in the rawpy directory.
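After installation, a quick sanity check can confirm that the build is usable. The following is a minimal sketch assuming a placeholder raw file path; it only exercises the standard rawpy API, not the extra functionality of the customized build:

```python
import rawpy

# Placeholder path: any raw file captured by one of the supported cameras.
raw_path = 'example.ARW'

with rawpy.imread(raw_path) as raw:
    # Access the visible Bayer data and basic metadata.
    bayer = raw.raw_image_visible
    print('Bayer shape:', bayer.shape)
    print('Black level per channel:', raw.black_level_per_channel)
    print('White level:', raw.white_level)

    # Run LibRaw's standard post-processing pipeline as a smoke test.
    rgb = raw.postprocess(use_camera_wb=True, no_auto_bright=True, output_bps=16)
    print('RGB shape:', rgb.shape)
```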

Quick Start

Due to the business license, we are unable to provide the noise model or the calibration method. Instead, we release our collected ELD dataset and our pretrained models to facilitate future research.

To reproduce our results presented in the paper (Tables 1 and 2), please take a look at scripts/test_SID.sh and scripts/test_ELD.sh.

Update (2022-01-08): We release the training code and the synthetic datasets per users' requests. The training scripts and user instructions can be found in scripts/train.sh. Additionally, we provide the baseline noise models (G/G+P/G+P*) and the calibrated noise parameters of all ELD cameras for training (see noise.py and train_syn.py), which could serve as a starting point for developing your own noise model.
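For reference, the G+P baseline mentioned above amounts to signal-dependent shot (Poisson) noise plus signal-independent read (Gaussian) noise. Below is a minimal, illustrative sketch of synthesizing such a noisy/clean training pair from a clean linear raw image; the function name and parameter values are placeholders, not the calibrated parameters shipped in noise.py:

```python
import numpy as np

def synthesize_gp_pair(clean, K=0.5, sigma_read=2.0, ratio=100):
    """Toy Poisson-Gaussian (G+P) synthesis on a linear raw image.

    clean      -- clean raw image in digital numbers (DN), numpy array
    K          -- overall system gain (DN per electron); illustrative value
    sigma_read -- std of Gaussian read noise in DN; illustrative value
    ratio      -- low-light factor: darken the clean image by 1/ratio
                  to mimic a short-exposure capture
    """
    dark = clean / ratio
    # Shot noise: photon/electron counts follow a Poisson distribution.
    shot = np.random.poisson(dark / K).astype(np.float32) * K
    # Read noise: signal-independent Gaussian.
    read = np.random.normal(0.0, sigma_read, size=clean.shape).astype(np.float32)
    noisy = shot + read
    return noisy, dark.astype(np.float32)
```

The full physics-based model in the paper goes beyond this baseline (e.g. it also models row/banding noise and quantization effects), which is what the released calibrated parameters describe.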

We use lmdb to prepare datasets; please refer to util/lmdb_data.py to see how we generate datasets from SID. We also provide a new implementation of the classic radiometric calibration method EMoR and use it to calibrate the CRF of the SonyA7S2, which can be further used to simulate a realistic on-board ISP like that of the commercial SonyA7S2 camera.
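As a rough illustration of what an lmdb-backed dataset looks like (a simplified sketch only; the actual keys and serialization used by util/lmdb_data.py may differ):

```python
import lmdb
import numpy as np
import pickle

def write_lmdb(db_path, arrays, map_size=1 << 40):
    """Write a list of numpy arrays into an lmdb database, one array per key."""
    env = lmdb.open(db_path, map_size=map_size)
    with env.begin(write=True) as txn:
        for i, arr in enumerate(arrays):
            key = '{:08d}'.format(i).encode('ascii')
            # Store raw bytes plus shape/dtype so the array can be restored later.
            txn.put(key, pickle.dumps({'data': arr.tobytes(),
                                       'shape': arr.shape,
                                       'dtype': str(arr.dtype)}))
    env.close()

def read_lmdb(db_path, index):
    """Read the array stored under a given index back from the database."""
    env = lmdb.open(db_path, readonly=True, lock=False)
    with env.begin(write=False) as txn:
        meta = pickle.loads(txn.get('{:08d}'.format(index).encode('ascii')))
    env.close()
    return np.frombuffer(meta['data'], dtype=meta['dtype']).reshape(meta['shape'])
```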

ELD Dataset

The dataset capture protocol is shown as follows:

We choose three ISO settings (800, 1600, 3200) and four low-light factors (x1, x10, x100, x200) to capture the dataset (x1/x10 is not used in our paper). Image ids 1, 6, 11, and 16 represent the long-exposure reference images. Please refer to the ELDEvalDataset class in data/sid_dataset.py for more details.
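For orientation, here is a minimal sketch of pairing a noisy image id with its long-exposure reference under this protocol; the nearest-reference rule is an assumption for illustration, and the authoritative pairing and file naming live in the ELDEvalDataset class:

```python
def reference_id(img_id, ref_ids=(1, 6, 11, 16)):
    """Return the long-exposure reference id assumed to pair with img_id."""
    # Assumption: each noisy capture pairs with the nearest reference id.
    return min(ref_ids, key=lambda r: abs(r - img_id))

# Example: image 4 is assumed to pair with reference 1, image 9 with reference 6.
print(reference_id(4), reference_id(9))
```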

Citation

If you find our code helpful in your research or work, please cite our paper.

@inproceedings{wei2020physics,
  title={A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising},
  author={Wei, Kaixuan and Fu, Ying and Yang, Jiaolong and Huang, Hua},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020},
}

@article{wei2021physics,
  title={Physics-based Noise Modeling for Extreme Low-light Photography},
  author={Wei, Kaixuan and Fu, Ying and Zheng, Yinqiang and Yang, Jiaolong},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  publisher={IEEE}
}

Contact

If you find any problem, please feel free to contact me (kxwei at princeton.edu, kaixuan_wei at bit.edu.cn). A brief self-introduction (including your name, affiliation, and position) is required if you would like to get in-depth help from me. I'd be glad to talk with you if more information (e.g. a link to your personal website) is attached. Note that I will not reply to any impolite/aggressive email that violates the above criteria.
