Official repository for "Restormer: Efficient Transformer for High-Resolution Image Restoration". SOTA for motion deblurring, image deraining, denoising (Gaussian/real data), and defocus deblurring.

Overview


Restormer: Efficient Transformer for High-Resolution Image Restoration

Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang

Paper: https://arxiv.org/abs/2111.09881

News

  • Testing codes and pre-trained models are released!

Abstract: Since convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data, these models have been extensively applied to image restoration and related tasks. Recently, another class of neural architectures, Transformers, have shown significant performance gains on natural language and high-level vision tasks. While the Transformer model mitigates the shortcomings of CNNs (i.e., limited receptive field and inadaptability to input content), its computational complexity grows quadratically with the spatial resolution, therefore making it infeasible to apply to most image restoration tasks involving high-resolution images. In this work, we propose an efficient Transformer model by making several key designs in the building blocks (multi-head attention and feed-forward network) such that it can capture long-range pixel interactions, while still remaining applicable to large images. Our model, named Restoration Transformer (Restormer), achieves state-of-the-art results on several image restoration tasks, including image deraining, single-image motion deblurring, defocus deblurring (single-image and dual-pixel data), and image denoising (Gaussian grayscale/color denoising, and real image denoising).


Network Architecture
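
The full architecture diagram is shown in the paper. As a rough, unofficial illustration of the channel-wise ("transposed") self-attention the abstract refers to, the sketch below computes attention across feature channels rather than across spatial positions, so the attention map is C x C instead of (HW) x (HW) and the cost stays manageable for large images. All class and variable names here are placeholders for explanation only, not the repository's code; see the architecture files in this repository for the official implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TransposedAttentionSketch(nn.Module):
    # Unofficial sketch: Q, K, V come from a 1x1 conv followed by a 3x3 depth-wise conv,
    # and attention is applied over the channel dimension (a C/heads x C/heads map).
    def __init__(self, dim=48, num_heads=1):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.qkv_dwconv = nn.Conv2d(dim * 3, dim * 3, kernel_size=3, padding=1, groups=dim * 3)
        self.project_out = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv_dwconv(self.qkv(x)).chunk(3, dim=1)
        # reshape to (batch, heads, channels per head, pixels)
        q = q.reshape(b, self.num_heads, c // self.num_heads, h * w)
        k = k.reshape(b, self.num_heads, c // self.num_heads, h * w)
        v = v.reshape(b, self.num_heads, c // self.num_heads, h * w)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature  # (b, heads, c/heads, c/heads)
        out = attn.softmax(dim=-1) @ v                       # (b, heads, c/heads, h*w)
        return self.project_out(out.reshape(b, c, h, w))

x = torch.randn(1, 48, 64, 64)
print(TransposedAttentionSketch()(x).shape)  # torch.Size([1, 48, 64, 64])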

Installation

The model is built in PyTorch 1.8.1 and tested in an Ubuntu 16.04 environment (Python 3.7, CUDA 10.2, cuDNN 7.6).

To install, follow these instructions:

conda create -n pytorch181 python=3.7
conda activate pytorch181
conda install pytorch=1.8 torchvision cudatoolkit=10.2 -c pytorch
pip install matplotlib scikit-learn scikit-image opencv-python yacs joblib natsort h5py tqdm
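
To quickly check that the environment works, the following snippet (not part of the repository) should run without errors and report PyTorch 1.8.x, with CUDA available if a compatible GPU is present:

import torch, torchvision, cv2, skimage
# Versions of the main packages installed above, plus CUDA visibility.
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision:", torchvision.__version__, "| OpenCV:", cv2.__version__, "| scikit-image:", skimage.__version__)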

Results

Image Deraining comparisons on the Test100, Rain100H, Rain100L, Test1200, and Test2800 testsets. You can download Restormer's predictions from this Google Drive link


Single-Image Motion Deblurring results. Our Restormer is trained only on the GoPro dataset and directly applied to the HIDE and RealBlur benchmark datasets. You can download Restormer's predictions from this Google Drive link


Defocus Deblurring comparisons on the DPDD testset (containing 37 indoor and 39 outdoor scenes). S: single-image defocus deblurring. D: dual-pixel defocus deblurring. You can download Restormer's predictions from this Google Drive link


Gaussian Image Denoising comparisons for two categories of methods. Top super row: learning a single model to handle various noise levels. Bottom super row: training a separate model for each noise level. You can download Restormer's predictions from this Google Drive link

Grayscale

Color

Real Image Denoising on SIDD and DND datasets. ∗ denotes methods using additional training data. Our Restormer is trained only on the SIDD images and directly tested on DND. You can download Restormer's predictions from this Google Drive link
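
To compare any of the downloaded predictions against the corresponding ground-truth images yourself, a minimal PSNR check with scikit-image (installed above) looks like the sketch below; the file names are placeholders, not files shipped with this repository:

import cv2
from skimage.metrics import peak_signal_noise_ratio

# Placeholder paths: one restored image downloaded from the links above and its ground truth.
restored = cv2.imread("restored.png")
target = cv2.imread("ground_truth.png")
print("PSNR: %.2f dB" % peak_signal_noise_ratio(target, restored, data_range=255))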

Citation

If you use Restormer, please consider citing:

@article{Zamir2021Restormer,
    title={Restormer: Efficient Transformer for High-Resolution Image Restoration}, 
    author={Syed Waqas Zamir and Aditya Arora and Salman Khan and Munawar Hayat 
            and Fahad Shahbaz Khan and Ming-Hsuan Yang},
    journal={ArXiv 2111.09881},
    year={2021}
}

Contact

Should you have any questions, please contact [email protected]

Comments
  • Problems about training Deraining

    Hi, congratulations on the great work! I changed the number of GPUs in train.sh and Deraining_Restormer.yml to 4 since I only have 4 GPUs, but I still can't train the Deraining code because of GPU memory limitations. The program runs if I make batch_size_per_gpu smaller, but then the batch size no longer matches the experimental settings. What can I do to reproduce the settings from your experiments (i.e., for progressive learning, training starts with patch size 128×128 and batch size 64, and the patch-size/batch-size pairs are updated to [(160^2,40), (192^2,32), (256^2,16), (320^2,8), (384^2,8)] at iterations [92K, 156K, 204K, 240K, 276K])? (A small sketch of this schedule appears after the comments list below.)

    opened by Lucky0775 5
  • colab?

    I am pleased with your work; the level of completeness is really professional! Do you have any plans to release the code for Google Colab? Unfortunately, I can't run the code on my local machine because of hardware limitations.

    opened by osushilover 5
  • Questions about the quantitative results of other methods?

    Hi, how are the quantitative results for the other methods in Table 1 of the Restormer paper calculated? Are you quoting their reported results directly, or did you retrain them?

    Looking forward to your reply. Thank you!

    opened by C-water 3
  • Typical GPU memory requirements for training?

    I was trying to train Restormer and succeeded in running it with 128x128 patches.

    However, my GPU memory runs out when training the network with 256x256 patches and a batch size larger than 2. My GPU is an RTX 3080 with 10 GB of memory.

    Do you know how much memory is needed to train on 256x256 patches with a batch size >= 8?

    opened by wonwoolee 3
  • Motion Deblurring Training

    Hi, thank you so much for your open-source work. When I trained motion_deblur, I could not reproduce the results reported in the paper.

    1. I followed the dependency instructions in the repository, downloaded the GoPro dataset, and used the provided crop method to prepare the training and validation sets.
    2. I used the Deblurring_Restormer.yml configuration file for training, modified to use a single GPU.
    3. In another experiment, I fixed the crop size to 128. The results of both experiments were below 31 dB, much lower than the results in the paper. I wonder whether some details are missing and why the results differ so much.
    opened by niehen6174 3
  • About the training

    How can I solve the error with create_dataloader and create_dataset (imported from __init__.py) in train.py? Also, what is the difference between training with the basicsr scripts and training on a specific task (e.g., Deraining)?

    opened by SunYJLU 3
  • Problem with the step "Install gdrive using"

    Dear author, I ran into a problem when running "go get github.com/prasmussen/gdrive":

    package golang.org/x/oauth2/google: unrecognized import path "golang.org/x/oauth2/google" (https fetch: Get https://golang.org/x/oauth2/google?go-get=1: dial tcp 172.217.163.49:443: i/o timeout)

    I would like to know how to solve this. Thanks!

    opened by ZYQii 3
  • add model to Huggingface

    Hi, would you be interested in adding Restormer to the Hugging Face Hub? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community. We can set up an organization or a user account under which Restormer can be added, similar to GitHub.

    Examples from other organizations:
    Keras: https://huggingface.co/keras-io
    Microsoft: https://huggingface.co/microsoft
    Facebook: https://huggingface.co/facebook

    Example Spaces with repos:
    GitHub: https://github.com/salesforce/BLIP, Space: https://huggingface.co/spaces/akhaliq/BLIP
    GitHub: https://github.com/facebookresearch/omnivore, Space: https://huggingface.co/spaces/akhaliq/omnivore

    Here are guides for adding Spaces/models/datasets to your org:
    How to add a Space: https://huggingface.co/blog/gradio-spaces
    How to add models: https://huggingface.co/docs/hub/adding-a-model
    Uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

    Please let us know if you would be interested. If you have any questions, we can also help with the technical implementation.

    opened by AK391 3
  • denoising training dataset

    Well done! Could you tell me which datasets you used for denoising training, both for real-noise denoising and for Gaussian denoising? Thank you very much!

    opened by 17346604401 3
  • some problems

    Since no training code is given, I wrote my own training program for Restormer. At the beginning of training I could only set the batch size to 48 due to GPU memory limitations. I found that the loss hardly decreased during the first 10,000 to 20,000 iterations, and the PSNR stayed at about 26.2. Is training just relatively slow at this stage, or is something wrong? If possible, I would also like to know how the validation PSNR rose and the loss fell during your training.

    opened by jiaaihhy 3
  • Would you inform about the wide-shallow network?

    Hello,

    In the ablation study you compared a deeper versus a wider Restormer. Could you share the details of the wider Restormer you mentioned?

    opened by amoeba04 2
  • About lr_scheduler.py

    Hi! In lr_scheduler.py, the line "from torch.optim.lr_scheduler import _LRScheduler" gives the message "Cannot find reference '_LRScheduler' in 'lr_scheduler.pyi'". How can I solve this problem?

    opened by Spacei567 3
  • Question about training denoising model

    I followed the instructions and ran two Gaussian color image denoising experiments, with sigma = 15 and sigma = 50, but I can't reproduce the PSNR values reported in the paper. Here are my results:

    sigma = 15: CBSD68 34.398237, Kodak 35.439437, McMaster 35.556497, Urban100 35.058984
    sigma = 50: CBSD68 28.586302, Kodak 29.967525, McMaster 30.237451, Urban100 29.891585

    Did I miss some important details?

    opened by Andrew0613 0
  • About PSNR of IFAN in defocus deblurring tasks (DPDD datasets).

    Hi, did you retrain IFAN on DPDD? IFAN only provides results on 8-bit images, which is inconsistent with the results in this paper. I guess you retrained IFAN; if convenient, could you please provide the test images?

    Thank you very much!

    opened by C-water 0
  • About training, NCCL

    RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1616554786529/work/torch/lib/c10d/ProcessGroupNCCL.cpp:33, unhandled cuda error, NCCL version 2.7.8 ncclUnhandledCudaError: Call to CUDA function failed.

    How can I fix it? Please help!

    opened by jjjjzyyyyyy 1
  • About training

    Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.

    opened by jjjjzyyyyyy 0
  • About the Gaussian color image denoising results.

    Hi, I have a question about the Gaussian color image denoising results on the Kodak24 dataset. I downloaded the provided pre-trained models and used them for testing with the provided code base and environment, but I cannot reproduce the Kodak24 results reported in Table 5 of the main paper. In fact, I get lower PSNR values on Kodak24 (e.g., -0.12 dB for sigma 15, -0.11 dB for sigma 25, -0.14 dB for sigma 50). Can you give some explanation or suggestions? Thanks very much.

    opened by gladzhang 0
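
For reference, the progressive-learning schedule quoted in the first comment above can be written as a small lookup helper. This is only an illustration of the patch-size/batch-size schedule from the paper's experimental settings, not the repository's training code (the repository configures progressive learning through its YAML files, e.g. Deraining_Restormer.yml):

def progressive_schedule(iteration):
    # Illustrative only: map a training iteration to the (patch size, total batch size)
    # pair from the schedule quoted above; training starts at (128, 64).
    stages = [(0, 128, 64), (92000, 160, 40), (156000, 192, 32),
              (204000, 256, 16), (240000, 320, 8), (276000, 384, 8)]
    patch, batch = 128, 64
    for start, p, b in stages:
        if iteration >= start:
            patch, batch = p, b
    return patch, batch

print(progressive_schedule(100000))  # -> (160, 40)
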
Owner
Syed Waqas Zamir
Research Scientist