Rendering Point Clouds with Compute Shaders

Overview

Compute Shader Based Point Cloud Rendering

This repository contains the source code for our tech report:
Rendering Point Clouds with Compute Shaders and Vertex Order Optimization
Markus Schütz, Bernhard Kerbl, Michael Wimmer (not peer-reviewed, currently in submission)

  • Compute shaders can render point clouds up to an order of magnitude faster than GL_POINTS.
  • With a combination of warp-wide deduplication and early-z, compute shaders are able to render 796 million points (12.7GB) at a stable 62 to 64 frames per second from various viewpoints on an RTX 3090. This corresponds to a memory bandwidth utilization of about 802GB/s, or a throughput of about 50 billion points per second.
  • The vertex order also strongly affects performance. Some locality among points that are consecutive in memory is beneficial, but excessive locality can cause drastic slowdowns if it leads to thousands of GPU threads attempting to update a single pixel. As such, neither Morton-ordered nor shuffled buffers are optimal. Combining both, however, by first sorting by Morton code and then shuffling batches of 128 points while leaving the points within a batch in order, results in an improved ordering that ensures high performance with our compute approaches, and it also increases the performance of GL_POINTS by up to 5 times. A sketch of this batched shuffle follows below.
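
As a rough illustration of that batched shuffle, the following is a hedged sketch, not code from the repository: it assumes the point array is already sorted by Morton code, then permutes whole batches of 128 consecutive points while keeping the order inside each batch.

    // Minimal sketch (not from the repository): shuffle batches of 128 points
    // that are already sorted by Morton code, keeping the order within each batch.
    function shuffleBatches(points, batchSize = 128) {
        // split the Morton-ordered array into consecutive batches
        let batches = [];
        for (let i = 0; i < points.length; i += batchSize) {
            batches.push(points.slice(i, i + batchSize));
        }
        // Fisher-Yates shuffle of the batches themselves
        for (let i = batches.length - 1; i > 0; i--) {
            let j = Math.floor(Math.random() * (i + 1));
            [batches[i], batches[j]] = [batches[j], batches[i]];
        }
        // concatenate back into one batch-shuffled array
        return batches.flat();
    }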

About the Framework

This framework is written in C++ and JavaScript (using V8). Most of the rendering is done in JavaScript, with bindings to OpenGL 4.5 functions. It is written with live-coding in mind, so many JavaScript files are executed immediately at runtime as soon as they are saved in any text editor. As such, code has to be written with repeated execution in mind, as sketched below.
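
For example, a file that is re-run on every save should guard one-time work and keep per-save work re-applicable. The following is a minimal hedged sketch of that pattern; the identifiers (initialized, camera, and the gl.createBuffer binding) are hypothetical and not necessarily the framework's actual API.

    // Hedged sketch of the live-coding pattern; "initialized", "camera" and the
    // gl.createBuffer binding are hypothetical names, not the framework's actual API.
    if (typeof globalThis.initialized === "undefined") {
        globalThis.initialized = true;
        globalThis.pointBuffer = gl.createBuffer();  // one-time resource creation
    }
    // lightweight settings can simply be re-applied on every save
    camera.fov = 60;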

Getting Started

  • Compile the Skye.sln project with Visual Studio.
  • Open the workspace in vscode.
  • Open "load_pointcloud.js" (quick search files via ctrl + e).
    • Adapt the path to the correct location of the las file.
    • Adapt position and lookAt to a viewpoint that fits your point cloud.
    • Change window.x to something that fits your monitor setup, e.g., 0 if you've got a single monitor, or 2540 if you've got two monitors and your first one has a width of 2540 pixels. (A hedged sketch of these edits follows after this list.)
  • Press "Ctrl + Shift + B" to start the app. You should be seeing an empty green window. (window.x is not yet applied.)
  • Once you save "load_pointcloud.js" via ctrl+s, it will be executed, the window will be repositioned, and the point cloud will be loaded.
  • You can change position and lookAt at runtime and apply them by simply saving load_pointcloud.js again. The point cloud will not be loaded again; to reload it, you'll need to restart the app.
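
A hedged sketch of what the edits to load_pointcloud.js described above might look like. Only position, lookAt, window.x, and the las path are mentioned in this README; the remaining object names and the loader call below are hypothetical placeholders, not the file's actual API.

    // Hypothetical sketch of the edits to load_pointcloud.js; only position, lookAt,
    // window.x and the las path are taken from this README, everything else is a
    // placeholder.
    let path = "D:/pointclouds/my_scan.las"; // adapt to the location of your las file

    window.x = 0;                            // 0 for a single monitor, or e.g. 2540 if
                                             // your first monitor is 2540 pixels wide

    camera.position = [100, 100, 50];        // a viewpoint that fits your point cloud
    camera.lookAt   = [0, 0, 0];

    loadPointCloud(path);                    // hypothetical loader call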

After loading the point cloud, you should be seeing something like the screenshot below. The framework includes an IMGUI window with frame times, and another window that lets you switch between the various rendering methods. It is best to try data sets with tens of millions or hundreds of millions of points!

[Screenshot: a loaded point cloud with the IMGUI frame-time window and the rendering-method selector]

Code Sections

Code for the individual rendering methods is primarily found in the modules/compute_<methods> folders, as listed below; a hedged sketch of the per-frame structure most of them share follows the table.

Method              Location
atomicMin           ./modules/compute
reduce              ./modules/compute_ballot
early-z             ./modules/compute_earlyDepth
reduce & early-z    ./modules/compute_ballot_earlyDepth
dedup               ./modules/compute_ballot_earlyDepth_dedup
HQS                 ./modules/compute_hqs
HQS1R               ./modules/compute_hqs_1x64bit_fast
busy-loop           ./modules/compute_guenther
just-set            ./modules/compute_just_set
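
Most of these methods share a similar per-frame structure: a compute pass that projects every point and writes its sample into a storage-buffer framebuffer, followed by a resolve pass that converts that buffer into a displayable texture (compare the render and resolve passes mentioned in the compute_hqs comment further below). The following is a hedged JavaScript sketch of that structure, not the repository's actual code.

    // Hedged sketch of one frame; program handles, binding indices and workgroup
    // sizes are placeholders, and the gl.* names assume the V8 bindings mirror the
    // OpenGL function names.

    // pass 1: project all points and keep the closest sample per pixel in an SSBO
    gl.useProgram(renderProgram);
    gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 0, ssPoints);       // input point data
    gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 1, ssFramebuffer);  // depth + color per pixel
    gl.dispatchCompute(Math.ceil(numPoints / 128), 1, 1);
    gl.memoryBarrier(gl.SHADER_STORAGE_BARRIER_BIT);

    // pass 2: resolve the SSBO framebuffer into a displayable texture
    gl.useProgram(resolveProgram);
    gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 1, ssFramebuffer);
    gl.dispatchCompute(Math.ceil(width / 16), Math.ceil(height / 16), 1);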

Results

Frame times when rendering 796 million points on an RTX 3090 in a close-up viewpoint. Times are in milliseconds; lower is better. The compute methods reduce (with early-z) and dedup (with early-z) yield the best results with Morton order (<16.6ms, >60fps). The shuffled Morton order greatly improves the performance of GL_POINTS and some compute methods, and it is usually either the fastest or within a close margin of the fastest combination of rendering method and ordering.

Not depicted is that the dedup method is the most stable approach, continuously maintaining >60fps in all viewpoints, while the performance of the reduce method varies and may drop to 50fps in some viewpoints. As such, we would recommend using dedup in conjunction with Morton order if the necessary compute features are available, and reduce (with early-z) for wider support.

Comments
  • Can you provide some dataset to test the demo (like the default Candi Banyunibo data set)?

    Hi, I just found this paper & project via graphics weekly news. I just compiled the demo, and it seems to use that dataset by default (banyunibo_inside_morton). Is there any way to obtain that dataset? If not, can you provide a download link to some equally huge data set used by the demo, like retz or eclepens? Are these datasets under non-"open" licenses? I just want to test performance on my Titan V compared to a 3090. Thanks.

    opened by oscarbg 11
  • invisible window

    If I compile in DEBUG (VS2019), it crashes in void V8Helper::setupGL at this line: setupV8GLExtBindings(tpl);

    If I compile in RELEASE, it compiles and the console shows no error, but the window is invisible; I can only see the console (and if I press keys, I see them in the log).

    I have a 1070

    opened by jagenjo 3
  • A question of render.cs

    In render.cs there is a vec2 variable called imgPos. I don't know why it is multiplied by 0.5 and then has 0.5 added to it; what does that mean? Thank you very much if you can answer my question. :)
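
    For context: multiplying by 0.5 and adding 0.5 is presumably the standard remapping from normalized device coordinates in [-1, 1] to image/texture coordinates in [0, 1], which would be what imgPos is used for in render.cs. A minimal sketch of the mapping:

        // remap normalized device coordinates [-1, 1] to image coordinates [0, 1]
        let ndcToImage = v => v * 0.5 + 0.5;
        ndcToImage(-1.0);  // 0.0 -> left/bottom edge
        ndcToImage( 0.0);  // 0.5 -> screen center
        ndcToImage( 1.0);  // 1.0 -> right/top edge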

    opened by UMR19 2
  • Questions about point cloud display when zooming in

    Hi, thanks for your excellent work. I compiled the demo and modified the settings to load my point cloud; it loaded successfully and performs well. But when I roll the mouse wheel to zoom in and look at the detail, lots of points go missing, whereas when I use Potree to display the same point cloud, it displays perfectly. I am a beginner in computer graphics; maybe the point size is too small?

    Thanks for your reply :)

    opened by UMR19 0
  • ssRG and ssBA not bound to resolve?

    In https://github.com/m-schuetz/compute_rasterizer/blob/master/compute_hqs/render.js

    Both buffers are bound in the render attribute pass, but neither is explicitly bound in the resolve pass. I always assumed that bindBufferBase affects the currently bound shader, but apparently the binding carries over to the next shader? Just a reminder to look this up to clarify my understanding of how these bindings work.

    gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 3, ssRG);
    gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 4, ssBA);
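
    For context: glBindBufferBase attaches a buffer to an indexed binding point of the GL context rather than of the currently bound program, so the bindings made during the attribute pass remain in place when the resolve program is made current. A hedged sketch of making this explicit in the resolve pass (resolveProgram, the dispatch size, and the assumption that the bindings mirror the GL names are placeholders, not the repository's actual code):

        // indexed SSBO binding points are context state, so they persist across
        // gl.useProgram calls; rebinding here only makes the dependency explicit
        gl.useProgram(resolveProgram);
        gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 3, ssRG);
        gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 4, ssBA);
        gl.dispatchCompute(Math.ceil(width / 16), Math.ceil(height / 16), 1);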
    
    opened by m-schuetz 0