Fast Neural Representations for Direct Volume Rendering


Sebastian Weiss, Philipp Hermüller, Rüdiger Westermann

This repository contains the code and settings to reproduce all figures (and more) from the paper: https://arxiv.org/abs/2112.01579

Jump to

  • How to train a new network
  • How to reproduce the figures

Video

Watch the video

Requirements

  • NVIDIA RTX GPU, e.g. RTX 20xx or RTX 30xx (we used an RTX 2070)
  • CUDA 11
  • OpenGL with GLFW and GLM
  • Python 3.8 or higher, see applications/env.txt for the required packages

Tested systems:

  • Windows 10, Visual Studio 2019, CUDA 11.1, Python 3.9, PyTorch 1.9
  • Ubuntu 20.04, gcc 9.3.0, CUDA 11.1, Python 3.8, PyTorch 1.8

Installation / Project structure

The project consists of a C++/CUDA part that has to be compiled first:

  • renderer: the renderer static library, see below for noteworthy files. Files ending in .cuh and .cu are CUDA kernel files.
  • bindings: entry point to the Python bindings; after compilation this produces the Python extension module pyrenderer, placed in bin
  • gui: the interactive GUI to design the config files, explore the reference datasets and the trained networks. Requires OpenGL

For compilation, we recommend CMake. For running on a headless server, specify -DRENDERER_BUILD_OPENGL_SUPPORT=Off -DRENDERER_BUILD_GUI=Off. Alternatively, compile-library-server.sh is provided for compilation with the built-in extension compiler of PyTorch. We use this for compilation on our headless GPU server, as it avoids mismatched dependencies on different CUDA, Python, or PyTorch versions across virtualenvs or conda environments.

After compiling the C++ library, network training and evaluation are performed in Python. The Python files are all found in applications:

  • applications/volumes: the volumes used in the ablation studies
  • applications/config-files: the config files
  • applications/common: common utilities, especially utils.py for loading the pyrenderer library and other helpers (a minimal loading sketch follows this list)
  • applications/losses: the loss functions, including SSIM and LPIPS
  • applications/volnet: the main network code for training and inference, see below.
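
As a minimal illustration of how the compiled extension can be picked up from Python (the actual logic lives in applications/common/utils.py), assuming the pyrenderer module was placed in bin as described above:

    import os
    import sys

    def load_pyrenderer(bin_dir="bin"):
        """Append the build output directory to sys.path and import the
        compiled C++/CUDA extension module. The path is an assumption;
        adapt it to where your build actually places pyrenderer."""
        bin_dir = os.path.abspath(bin_dir)
        if bin_dir not in sys.path:
            sys.path.append(bin_dir)
        import pyrenderer
        return pyrenderer

    if __name__ == "__main__":
        pyrenderer = load_pyrenderer()
        print("Loaded pyrenderer from", pyrenderer.__file__)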

Noteworthy Files

Here we list and explain the noteworthy files that contain important aspects of the presented method.

On the C++/CUDA side, the following files in renderer/ are important. Note that multiple implementations exist for the various modules, e.g. for the transfer function (TF). Therefore, the CUDA kernels are assembled on demand using NVRTC runtime compilation.

  • Image evaluators (iimage_evaluator.h), the entry point to the renderer. Only one implementation:

    • image_evaluator_simple.h, renderer_image_evaluator_simple.cuh: Contains the loop over the pixels and generates the rays -- possibly multisampled for Monte Carlo -- from the camera
  • Ray evaluators (iray_evaluation.h): called per ray, they return the final color. They call the volume implementation to fetch the density

    • ray_evaluation_stepping.h, renderer_ray_evaluation_stepping_iso.cuh, renderer_ray_evaluation_stepping_dvr.cuh: constant stepping for isosurfaces and DVR.
    • ray_evaluation_monte_carlo.h Monte Carlo path tracing with multiple bounces, delta tracking and various phase functions
  • Volume interpolations (volume_interpolation.h). On the CUDA side, implementations provide a functor that evaluates a position and returns the density or color at that point

    • Grid interpolation (volume_interpolation_grid.h): trilinear interpolation into a voxel grid stored in volume.h (see the NumPy reference after this list)
    • Scene Reconstruction Networks (volume_interpolation_network.h): the SRNs as presented in the paper. See the header for the binary format of the .volnet file. The proposed tensor core implementation (Sec. 4.1) can be found in renderer_volume_tensorcores.cuh
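
For illustration, the trilinear lookup that volume_interpolation_grid.h implements on the CUDA side can be written as the following NumPy reference; this is a conceptual sketch, not the actual kernel:

    import numpy as np

    def trilinear_interpolate(volume, positions):
        """volume: (X, Y, Z) array of densities; positions: (N, 3) in [0, 1]^3."""
        dims = np.array(volume.shape)
        p = positions * (dims - 1)                     # continuous voxel coordinates
        p0 = np.clip(np.floor(p).astype(int), 0, dims - 2)
        f = p - p0                                     # fractional offset inside the cell
        x0, y0, z0 = p0[:, 0], p0[:, 1], p0[:, 2]
        x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
        fx, fy, fz = f[:, 0], f[:, 1], f[:, 2]
        # interpolate along x, then y, then z
        c00 = volume[x0, y0, z0] * (1 - fx) + volume[x1, y0, z0] * fx
        c10 = volume[x0, y1, z0] * (1 - fx) + volume[x1, y1, z0] * fx
        c01 = volume[x0, y0, z1] * (1 - fx) + volume[x1, y0, z1] * fx
        c11 = volume[x0, y1, z1] * (1 - fx) + volume[x1, y1, z1] * fx
        c0 = c00 * (1 - fy) + c10 * fy
        c1 = c01 * (1 - fy) + c11 * fy
        return c0 * (1 - fz) + c1 * fz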

On the Python side in applications/volnet/, the following files are important:

  • train_volnet.py: the entry point for training
  • inference.py: the entry point for inference, used in the scripts for evaluation. Also converts trained models into the binary format for the GUI
  • network.py: the SRN network specification
  • input_data.py: the loader of the input grids, possibly time-dependent
  • training_data.py: world- and screen-space data loaders, contains routines for importance sampling / adaptive resampling. The rejection sampling is implemented in CUDA for performance and called from here (a conceptual sketch follows this list)
  • raytracing.py: differentiable raytracing in PyTorch, including the memory optimization from Weiss & Westermann 2021, DiffDVR
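
The density-based importance sampling mentioned for training_data.py can be illustrated with simple rejection sampling of world-space positions. This is a conceptual NumPy stand-in (the function and the volume_sampler callback are hypothetical); the real implementation runs in CUDA for performance:

    import numpy as np

    def importance_sample_positions(volume_sampler, num_samples, floor=0.01, batch=65536):
        """Draw world-space positions biased towards high-density regions.
        volume_sampler: maps (N, 3) positions in [0, 1]^3 to densities in [0, 1].
        floor: uniform acceptance floor so that empty space is still covered
        (mirrors the idea behind --train:sampler_importance)."""
        rng = np.random.default_rng()
        accepted = []
        count = 0
        while count < num_samples:
            pos = rng.random((batch, 3))                   # uniform proposals in the unit cube
            density = volume_sampler(pos)                  # density at the proposal positions
            acceptance = floor + (1.0 - floor) * density   # acceptance probability per proposal
            keep = rng.random(batch) < acceptance
            accepted.append(pos[keep])
            count += keep.sum()
        return np.concatenate(accepted)[:num_samples]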

How to train

The training is launched via applications/volnet/train_volnet.py. Have a look at python train_volnet.py --help for the available command line parameters.

A typical invocation looks like this (this is how fV-SRN with Ejecta from Fig. 1 was trained):

python train_volnet.py
   config-files/ejecta70-v6-dvr.json
   --train:mode world  # instead of 'screen', Sec. 5.4
   --train:samples 256**3
   --train:sampler_importance 0.01   # importance sampling based on the density, optional, see Section 5.3
   --train:batchsize 64*64*128
   --rebuild_dataset 51   # adaptive resampling after 51 epochs, see Section 5.3
   --val:copy_and_split  # for validation, use 20% of training samples
   --outputmode density:direct  # instead of e.g. 'color', Sec. 5.3
   --lossmode density
   --layers 32:32:32  # number of hidden feature layers; the number of linear layers / weight matrices is this plus one
   --activation SnakeAlt:2
   --fouriercount 14
   --fourierstd -1  # -1 indicates NeRF-construction, positive value indicate sigma for random Fourier Features, see Sec. 5.5
   --volumetric_features_resolution 32  # the grid specification, see Sec. 5.2
   --volumetric_features_channels 16
   -l1 1  #use L1-loss with weight 1
   --lr 0.01
   --lr_step 100  #lr reduction after 100 epochs, default lr is used 
   -i 200  # number of epochs
   --save_frequency 20  # checkpoints + test visualization
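
The options above roughly describe the following architecture: a small latent feature grid (--volumetric_features_resolution/channels) whose trilinearly interpolated features are concatenated with the position and its Fourier encoding (--fouriercount, --fourierstd) and fed through a narrow MLP (--layers) that predicts density. The PyTorch sketch below illustrates this structure conceptually only; it is not the actual network.py (in particular, Snake here is a stand-in for the SnakeAlt activation, and the Fourier encoding is the standard NeRF powers-of-two construction):

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Snake(nn.Module):
        # Stand-in activation: snake(x) = x + sin^2(a*x) / a
        def __init__(self, a=2.0):
            super().__init__()
            self.a = a

        def forward(self, x):
            return x + torch.sin(self.a * x) ** 2 / self.a

    class FVSRNSketch(nn.Module):
        """Conceptual fV-SRN-style network: latent grid + Fourier features + MLP."""
        def __init__(self, grid_resolution=32, grid_channels=16,
                     fourier_count=14, hidden=(32, 32, 32)):
            super().__init__()
            # latent feature grid, trilinearly interpolated at the query positions
            self.grid = nn.Parameter(0.01 * torch.randn(
                1, grid_channels, grid_resolution, grid_resolution, grid_resolution))
            # NeRF-style frequencies: powers of two times pi
            self.register_buffer(
                "freqs", (2.0 ** torch.arange(fourier_count).float()) * math.pi)
            in_channels = grid_channels + 3 + 2 * 3 * fourier_count
            layers = []
            for h in hidden:                       # '32:32:32' -> three hidden layers
                layers += [nn.Linear(in_channels, h), Snake()]
                in_channels = h
            layers += [nn.Linear(in_channels, 1)]  # density output ('density:direct')
            self.mlp = nn.Sequential(*layers)

        def forward(self, positions):  # positions: (N, 3) in [0, 1]^3
            grid_coords = positions.view(1, -1, 1, 1, 3) * 2 - 1
            feats = F.grid_sample(self.grid, grid_coords, align_corners=True)
            feats = feats.view(self.grid.shape[1], -1).t()           # (N, grid_channels)
            angles = positions.unsqueeze(-1) * self.freqs            # (N, 3, fourier_count)
            fourier = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1).flatten(1)
            return self.mlp(torch.cat([feats, positions, fourier], dim=1))

    if __name__ == "__main__":
        net = FVSRNSketch()
        print(net(torch.rand(8, 3)).shape)  # torch.Size([8, 1])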

After training, the resulting .hdf5 file contains the network weights + latent grid and can be compiled to our binary format via inference.py. The resulting .volnet file can then be loaded in the GUI.

How to reproduce the figures

Each figure is associated with a respective script in applications/volnet. Those scripts include the training of the networks, evaluation, and plot generation. They have to be launched with the working directory set to applications/. Note that some of those scripts take multiple hours due to the network training.

  • Figure 1, teaser: applications/volnet/eval_CompressionTeaser.py
  • Table 1, possible architectures: applications/volnet/collect_possible_layers.py
  • Section 4.2, change in performance due to grid compression: applications/volnet/eval_VolumetricFeatures_GridEncoding
  • Figure 3, performance of the networks: applications/volnet/eval_NetworkConfigsGrid.py
  • Section 5, study on the activation functions: applications/volnet/eval_ActivationFunctions.py
  • Figure 4+5, latent grid, also includes other datasets: applications/volnet/eval_VolumetricFeatures.py
  • Figure 6, density-vs-color:

    • applications/volnet/eval_world_DensityVsColorGrid_NoImportance.py: without initial importance sampling and adaptive resampling (Fig. 6)
    • applications/volnet/eval_world_DensityVsColorGrid.py: with initial importance sampling, not shown
    • applications/volnet/eval_world_DensityVsColorGrid_WithResampling.py: with initial importance sampling and adaptive resampling, improvement reported in Section 5.3
  • Table 2, Figure 7, screen-vs-world: applications/volnet/eval_ScreenVsWorld_GridNeRF.py
  • Figure 8, Fourier features: applications/volnet/eval_Fourier_Grid.py , includes the datasets not shown in the paper for space reasons
  • Figure 9, 10, time-dependent fields:

    • applications/volnet/eval_TimeVolumetricFeatures.py: train on every fifth timestep
    • applications/volnet/eval_TimeVolumetricFeatures2.py: train on every second timestep
    • applications/volnet/eval_TimeVolumetricFeatures_plotPaper.py: assembles the plot for Figure 9

The other eval_*.py scripts were cut from the paper due to space limitations. They mirror the tests above, except that no latent grid was used; instead, the largest possible networks fitting into the tensor core architecture were employed.
