RIFE: Real-Time Intermediate Flow Estimation for Video Frame Interpolation

Overview

Ported from https://github.com/hzwer/arXiv2020-RIFE

Dependencies

  • NumPy
  • PyTorch, preferably with CUDA. Note that torchvision and torchaudio are not required and hence can be omitted from the PyTorch installation command.
  • VapourSynth

Installation

pip install --upgrade vsrife

Usage

from vsrife import RIFE

ret = RIFE(clip)

See __init__.py for the description of the parameters.
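
For a fuller picture, here is a minimal sketch of a complete script. The L-SMASH source filter and the BT.709 matrix are assumptions about the input; multi, device_type and device_index are the parameters used throughout the issue threads below. RIFE expects an RGBS clip, so the script converts before and after:

import vapoursynth as vs
from vsrife import RIFE

core = vs.core
clip = core.lsmas.LWLibavSource(source='input.mkv')
# RIFE operates on RGBS; tell the resizer the source matrix (assumed BT.709).
clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s='709')
# Double the frame rate on the first CUDA device.
clip = RIFE(clip, multi=2, device_type='cuda', device_index=0)
clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s='709')
clip.set_output()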

Comments
  • Getting Error when interpolating

        model.load_model(os.path.join(os.path.dirname(__file__), model_dir), -1)
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\site-packages\vsrife\RIFE_HDv2.py", line 164, in load_model
        convert(torch.load('{}/flownet.pkl'.format(path), map_location=self.torch_device)))
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py", line 608, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "C:\Users\\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\serialization.py", line 777, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    EOFError: Ran out of input
    
    The source file is a 720p 30 fps MP4, loaded into VapourSynth through L-SMASH Source, with the format set to RGBS. Nothing else.
    System specs: Ryzen 7 3700X, 32 GB of RAM, and an RTX 3060.
    
    
    opened by banjaminicc · 4 comments
  • Small feature request for RIFEv4: target fps as alternative to multiplier

    Would it be possible to allow setting a target fps instead of a multiplier when using RIFEv4? When going, for example, from 23.976 (24000/1001) to 60 fps, having to use (60 * 1001 / 24000 =) 2.5025 is kind of annoying. ;) I know I could write a wrapper around rife.RIFE, but I suspect, depending on the resulting float, it would be more accurate if this was done inside the filter.
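
    A minimal sketch of such a wrapper, assuming a build where multi accepts a non-integer factor (which is what this request asks for); the helper name rife_target_fps is hypothetical:

    from fractions import Fraction

    import vapoursynth as vs
    from vsrife import RIFE

    def rife_target_fps(clip: vs.VideoNode, fpsnum: int, fpsden: int = 1, **kwargs) -> vs.VideoNode:
        # clip.fps is 0/1 for variable-frame-rate clips; require a fixed rate.
        if clip.fps == 0:
            raise ValueError('clip has a variable frame rate; use AssumeFPS first')
        # e.g. 24000/1001 -> 60 fps gives exactly 2.5025
        factor = Fraction(fpsnum, fpsden) / clip.fps
        return RIFE(clip, multi=float(factor), **kwargs)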

    opened by Selur · 3 comments
  • vs-rife + latest vs-dpir don't work

    When using just vs-rife:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_01_1.VOB'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_01_1.VOB using D2VSource
    clip = core.d2v.Source(input="E:/Temp/vob_941fdaaeda22090766694391cc4281d5_853323747.d2v")
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 29.970
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip, mode=7, rate=10, dupThresh=0.04, vidThresh=3.50, sceneThresh=15.00)# new fps: 10
    # make sure content is perceived as frame-based
    clip = core.std.SetFieldBased(clip, 0)
    clip = core.misc.SCDetect(clip=clip,threshold=0.150)
    from vsrife import RIFE
    # adjusting color space from YUV420P8 to RGBS for VsTorchRIFE
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # adjusting frame count&rate with RIFE (torch)
    clip = RIFE(clip, multi=3, device_type='cuda', device_index=0) # new fps: 20
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 30.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30, fpsden=1)
    # Output
    clip.set_output()
    

    everything works. But when I add latest vs-dpir:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    import os
    import site
    # Import libraries for onnxruntime
    from ctypes import WinDLL
    path = site.getsitepackages()[0]+'/onnxruntime_dlls/'
    WinDLL(path+'cublas64_11.dll')
    WinDLL(path+'cudart64_110.dll')
    WinDLL(path+'cudnn64_8.dll')
    WinDLL(path+'cudnn_cnn_infer64_8.dll')
    WinDLL(path+'cudnn_ops_infer64_8.dll')
    WinDLL(path+'cufft64_10.dll')
    WinDLL(path+'cufftw64_10.dll')
    WinDLL(path+'nvinfer.dll')
    WinDLL(path+'nvinfer_plugin.dll')
    WinDLL(path+'nvparsers.dll')
    WinDLL(path+'nvonnxparser.dll')
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_01_1.VOB'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_01_1.VOB using D2VSource
    clip = core.d2v.Source(input="E:/Temp/vob_941fdaaeda22090766694391cc4281d5_853323747.d2v")
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 29.970
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip, mode=7, rate=10, dupThresh=0.04, vidThresh=3.50, sceneThresh=15.00)# new fps: 10
    # make sure content is perceived as frame-based
    clip = core.std.SetFieldBased(clip, 0)
    from vsdpir import DPIR
    # adjusting color space from YUV420P8 to RGBS for vsDPIRDenoise
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # denoising using DPIRDenoise
    clip = DPIR(clip=clip, strength=15.000, task="denoise", provider=1, device_id=0)
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV444P16, matrix_s="470bg", range_s="limited")
    clip = core.misc.SCDetect(clip=clip,threshold=0.150)
    from vsrife import RIFE
    # adjusting color space from YUV444P16 to RGBS for VsTorchRIFE
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # adjusting frame count&rate with RIFE (torch)
    clip = RIFE(clip, multi=3, device_type='cuda', device_index=0) # new fps: 20
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 30.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30, fpsden=1)
    # Output
    clip.set_output()
    

    I get:

    Python exception: [WinError 127] The specified procedure could not be found. Error loading "I:\Hybrid\64bit\Vapoursynth\Lib/site-packages\torch\lib\cudnn_cnn_train64_8.dll" or one of its dependencies.
    

    Using just vs-dpir:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    import os
    import site
    # Import libraries for onnxruntime
    from ctypes import WinDLL
    path = site.getsitepackages()[0]+'/onnxruntime_dlls/'
    WinDLL(path+'cublas64_11.dll')
    WinDLL(path+'cudart64_110.dll')
    WinDLL(path+'cudnn64_8.dll')
    WinDLL(path+'cudnn_cnn_infer64_8.dll')
    WinDLL(path+'cudnn_ops_infer64_8.dll')
    WinDLL(path+'cufft64_10.dll')
    WinDLL(path+'cufftw64_10.dll')
    WinDLL(path+'nvinfer.dll')
    WinDLL(path+'nvinfer_plugin.dll')
    WinDLL(path+'nvparsers.dll')
    WinDLL(path+'nvonnxparser.dll')
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/DeinterlaceFilter/TIVTC/libtivtc.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/d2vSource/d2vsource.dll")
    # source: 'C:\Users\Selur\Desktop\VTS_01_1.VOB'
    # current color space: YUV420P8, bit depth: 8, resolution: 720x480, fps: 29.97, color matrix: 470bg, yuv luminance scale: limited, scanorder: telecine
    # Loading C:\Users\Selur\Desktop\VTS_01_1.VOB using D2VSource
    clip = core.d2v.Source(input="E:/Temp/vob_941fdaaeda22090766694391cc4281d5_853323747.d2v")
    # Setting color matrix to 470bg.
    clip = core.std.SetFrameProps(clip, _Matrix=5)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=5)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=5)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 29.970
    clip = core.std.AssumeFPS(clip=clip, fpsnum=30000, fpsden=1001)
    # Deinterlacing using TIVTC
    clip = core.tivtc.TFM(clip=clip)
    clip = core.tivtc.TDecimate(clip=clip, mode=7, rate=10, dupThresh=0.04, vidThresh=3.50, sceneThresh=15.00)# new fps: 10
    # make sure content is perceived as frame-based
    clip = core.std.SetFieldBased(clip, 0)
    from vsdpir import DPIR
    # adjusting color space from YUV420P8 to RGBS for vsDPIRDenoise
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
    # denoising using DPIRDenoise
    clip = DPIR(clip=clip, strength=15.000, task="denoise", provider=1, device_id=0)
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="470bg", range_s="limited")
    # set output frame rate to 10.000fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=10, fpsden=1)
    # Output
    clip.set_output()
    

    works fine.

    Do you have an idea how I could fix this?
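
    Not a confirmed fix, just a hedged sketch of a common workaround for this class of DLL clash: import torch before preloading the onnxruntime DLLs, so PyTorch binds its own cuDNN copies first.

    import torch  # load torch's bundled cuDNN DLLs before anything else

    import site
    from ctypes import WinDLL

    # Preload the onnxruntime DLLs only after torch has resolved its own copies.
    path = site.getsitepackages()[0] + '/onnxruntime_dlls/'
    for name in ('cublas64_11.dll', 'cudart64_110.dll', 'cudnn64_8.dll',
                 'cudnn_cnn_infer64_8.dll', 'cudnn_ops_infer64_8.dll'):
        WinDLL(path + name)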

    opened by Selur · 3 comments
  • Half the image is broken when using 4K content

    I get a broken output (see attachment), when using:

    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    core = vs.core
    # Loading Plugins
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
    core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
    # source: 'G:\TestClips&Co\files\MPEG-4 H.264\4k\Back to the Future (1985) 4k 10bit - 0.10.35-0.11.35.mkv'
    # current color space: YUV420P10, bit depth: 10, resolution: 3840x2076, fps: 23.976, color matrix: 2020ncl, yuv luminance scale: limited, scanorder: progressive
    # Loading G:\TestClips&Co\files\MPEG-4 H.264\4k\Back to the Future (1985) 4k 10bit - 0.10.35-0.11.35.mkv using LWLibavSource
    clip = core.lsmas.LWLibavSource(source="G:/TestClips&Co/files/MPEG-4 H.264/4k/Back to the Future (1985) 4k 10bit - 0.10.35-0.11.35.mkv", format="YUV420P10", cache=0, fpsnum=24000, fpsden=1001, prefer_hw=1)
    # Setting color matrix to 2020ncl.
    clip = core.std.SetFrameProps(clip, _Matrix=9)
    clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=9)
    clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=9)
    # Setting color range to TV (limited) range.
    clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
    # making sure frame rate is set to 23.976
    clip = core.std.AssumeFPS(clip=clip, fpsnum=24000, fpsden=1001)
    clip = core.misc.SCDetect(clip=clip,threshold=0.150)
    from vsrife import RIFE
    # adjusting color space from YUV420P10 to RGBS for VsTorchRIFE
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="2020ncl", range_s="limited")
    # adjusting frame count&rate with RIFE (torch)
    clip = RIFE(clip, scale=0.5, multi=3, device_type='cuda', device_index=0, fp16=True) # new fps: 71.928
    # adjusting output color from: RGBS to YUV420P8 for x264Model
    clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P8, matrix_s="2020ncl", range_s="limited", dither_type="error_diffusion")
    # set output frame rate to 71.928fps
    clip = core.std.AssumeFPS(clip=clip, fpsnum=8991, fpsden=125)
    # Output
    clip.set_output()
    

    I tried different scale values, disabled fp16, ran without scene-change detection, and tried other values for multi; nothing helped. https://github.com/HomeOfVapourSynthEvolution/VapourSynth-RIFE-ncnn-Vulkan works fine, and 2K content also works fine. I tried different source filters and different files. It would be nice if this could be fixed.

    attachment was too large: https://ibb.co/WGT9pvL

    opened by Selur · 2 comments
  • VapourSynth R58 and Python 3.10 compatibility

    Trying to install vs-rife on VapourSynth R58, I get:

    I:\Hybrid\64bit\Vapoursynth>python -m pip install --upgrade vsrife
    Collecting vsrife
      Using cached vsrife-2.0.0-py3-none-any.whl (32.5 MB)
    Requirement already satisfied: torch>=1.9.0 in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsrife) (1.11.0+cu113)
    Requirement already satisfied: numpy in i:\hybrid\64bit\vapoursynth\lib\site-packages (from vsrife) (1.22.3)
    Collecting VapourSynth>=55
      Using cached VapourSynth-57.zip (567 kB)
      Preparing metadata (setup.py) ... error
      error: subprocess-exited-with-error
    
      × python setup.py egg_info did not run successfully.
      │ exit code: 1
      ╰─> [15 lines of output]
          Traceback (most recent call last):
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-s7976394\vapoursynth_701a37362cd045f58da4818d07217c99\setup.py", line 64, in <module>
              dll_path = query(winreg.HKEY_LOCAL_MACHINE, REGISTRY_PATH, REGISTRY_KEY)
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-s7976394\vapoursynth_701a37362cd045f58da4818d07217c99\setup.py", line 38, in query
              reg_key = winreg.OpenKey(hkey, path, 0, winreg.KEY_READ)
          FileNotFoundError: [WinError 2] The system cannot find the file specified
    
          During handling of the above exception, another exception occurred:
    
          Traceback (most recent call last):
            File "<string>", line 2, in <module>
            File "<pip-setuptools-caller>", line 34, in <module>
            File "C:\Users\Selur\AppData\Local\Temp\pip-install-s7976394\vapoursynth_701a37362cd045f58da4818d07217c99\setup.py", line 67, in <module>
              raise OSError("Couldn't detect vapoursynth installation path")
          OSError: Couldn't detect vapoursynth installation path
          [end of output]
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
    error: metadata-generation-failed
    
    × Encountered error while generating package metadata.
    ╰─> See above for output.
    
    note: This is an issue with the package mentioned above, not pip.
    hint: See above for details.
    

    Any idea how to fix it?
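
    Since torch and numpy are already satisfied, one possible workaround (an assumption on my part; the proper fix is a VapourSynth wheel that supports Python 3.10) is to skip dependency resolution entirely:

    python -m pip install --upgrade --no-deps vsrife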

    opened by Selur · 2 comments
  • How to set clip.num_frames

    How do I set the number of frames? I only found "multi: int" in __init__.py. Can I set a total number of frames, or a target frame rate like 60 fps? Thanks!

    opened by feaonal · 2 comments
  • Requesting example vapoursynth script

    I tried for a while to create a valid script, but I can't make it run.

    from vsrife import RIFE
    import vapoursynth as vs
    core = vs.core
    core.std.LoadPlugin(path='/usr/lib/x86_64-linux-gnu/libffms2.so')
    clip = core.ffms2.Source(source='test.webm')
    print(clip) # YUV420P8
    clip = vs.core.resize.Bicubic(clip, format=vs.RGBS)
    print(clip) # RGBS
    clip = RIFE(clip)
    clip.set_output()
    
    vspipe --y4m inference.py - | x264 - --demuxer y4m -o example.mkv
    
    Error: Failed to retrieve frame 0 with error: Resize error: Resize error 3074: no path between colorspaces (2/2/2 => 0/2/2). May need to specify additional colorspace parameters.
    

    Can I get an example that should actually work?
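
    A sketch that should get past that error, assuming BT.709 content: matrix_in_s='709' is an assumption about the source and the key addition, since the resizer cannot choose a YUV-to-RGB path without knowing the input matrix. Converting back to YUV is also needed because --y4m cannot emit RGBS.

    import vapoursynth as vs
    from vsrife import RIFE

    core = vs.core
    core.std.LoadPlugin(path='/usr/lib/x86_64-linux-gnu/libffms2.so')
    clip = core.ffms2.Source(source='test.webm')  # YUV420P8
    # Specify the input matrix so a YUV -> RGB path exists.
    clip = core.resize.Bicubic(clip, format=vs.RGBS, matrix_in_s='709')
    clip = RIFE(clip)
    # y4m output needs YUV, so convert back from RGBS.
    clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s='709')
    clip.set_output()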

    opened by styler00dollar · 2 comments
  • [Q] 0bit models in the repo

    Hi,

    I see that the model folders contain files (models?) of zero size. I presume that when the plugin "learns", the models are filled with data.

    Is this correct?

    Then, on a system where this plugin is installed system-wide, should these models have write permissions (in the case of Linux)?

    Greetings

    opened by sl1pkn07 · 2 comments
  • Wrong output framerate

    That - https://github.com/HolyWu/vs-rife/blob/91e894f41cbdfb458ef8f776c47c7f652158bc6f/vsrife/__init__.py#L280 - doesn't work as expected, for two reasons:

    1. clip.fps.numerator / denominator can be 0 / 1 (from the docs: "It is 0/1 when the clip has a variable framerate")
    2. there's a frame duration attached to each frame, and it seems like FrameEval(frame_adjuster) returns frames with the original durations, not the ones from format_clip

    A quick fix that works:

    # Interleave factor_num copies of each frame, then keep every factor_den-th
    # frame, scaling the frame count by exactly factor_num / factor_den.
    clip0 = vs.core.std.Interleave([clip] * factor_num)
    if factor_den > 1:
        clip0 = clip0.std.SelectEvery(cycle=factor_den, offsets=0)
    # clip1 is clip shifted forward by one frame (the "next frame" companion).
    clip1 = clip.std.DuplicateFrames(frames=clip.num_frames - 1).std.DeleteFrames(frames=0)
    clip1 = vs.core.std.Interleave([clip1] * factor_num)
    if factor_den > 1:
        clip1 = clip1.std.SelectEvery(cycle=factor_den, offsets=0)
    
    opened by chainikdn · 1 comment
  • How to set clip.num_frames

    How do I set the number of frames? I only found "multi: int" in __init__.py. Can I set a total number of frames, or a target frame rate like 60 fps? Thanks!

    opened by feaonal · 0 comments
Releases (v3.1.0)