An Efficient Implementation of Analytic Mesh Algorithm for 3D Iso-surface Extraction from Neural Networks

Overview

AnalyticMesh

Analytic Marching is an exact meshing solution from neural networks. Compared to standard sampling-based methods, it completely avoids the geometric and topological errors that result from insufficient sampling, by means of a mathematically guaranteed analysis.

This repository provides an implementation of the Analytic Marching algorithm. The algorithm was initially proposed in our conference paper Analytic Marching: An Analytic Meshing Solution from Deep Implicit Surface Networks, and further improved in our journal paper Learning and Meshing from Deep Implicit Surface Networks Using an Efficient Implementation of Analytic Marching.

Our code provides web pages for manipulating your models through a graphical interface, and a backend that gives full control of the algorithm from Python code.

Installation

First, download our code:

git clone https://github.com/Karbo123/AnalyticMesh.git --depth=1
cd AnalyticMesh
export AMROOT=`pwd`

Backend

The backend provides a Python binding of analytic marching. After compiling it, you can call the algorithm from Python code in your own project.

Our implementation supports PyTorch, and possibly other deep learning frameworks (e.g. TensorFlow), but we have not tested other frameworks yet.

Requirements:

Compilation:

cd $AMROOT/backend
mkdir build && cd build
cmake ..
make -j8
cd ..

If your PyTorch version is below 1.5.1, you may need to fix a C++ extension compilation failure on some environments.

Make sure the compiled library passes the tests. Run:

CUDA_VISIBLE_DEVICES=0 PYTHONDONTWRITEBYTECODE=1 pytest -s -p no:warnings -p no:cacheprovider

This generates some files under the folder $AMROOT/backend/tmp. The generated meshes (.ply) should generally be watertight; you can check this with MeshLab.
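
If you prefer a scripted check instead of MeshLab, here is a minimal sketch using the third-party trimesh package (not a dependency of this repository; an assumption is that you install it via pip install trimesh):

import os
import glob
import trimesh  # assumption: installed separately with pip install trimesh

# inspect every mesh generated by the tests under $AMROOT/backend/tmp
for path in glob.glob(os.path.join(os.environ["AMROOT"], "backend", "tmp", "*.ply")):
    mesh = trimesh.load(path)
    print(path, "watertight:", mesh.is_watertight)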

If all tests pass, link the package to your site-packages directory so that Python can find it:

ln -s $AMROOT `python -c 'import site; print(site.getsitepackages()[0])'`
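
As a quick sanity check (assuming the link succeeded), you can verify that the package is importable:

python -c "from AnalyticMesh import AnalyticMarching, load_model, save_model"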

Frontend

We also provide an easy-to-use interactive interface that applies analytic marching to your input network model with just a few mouse clicks. To use the web interface, follow the steps below to install it.

Requirement:

Before compiling, you may need to modify the server information in the file frontend/pages/src/assets/index.js. Then compile the pages by running:

cd $AMROOT/frontend/pages
npm install
npm run build

The $AMROOT/frontend/pages/dist directory is then ready to be deployed. If you want to deploy the web pages to a server, please additionally follow these instructions.

To start the server, simply run:

cd $AMROOT/frontend && python server.py

You can open the interface either by opening the file $AMROOT/frontend/pages/dist/index.html on your local machine or by visiting the URL to which the page is deployed.

Demo

We provide some samples in $AMROOT/examples that you can try.

Here we show a simple example (taken from $AMROOT/examples/2_polytope.py):

import os
import torch
from AnalyticMesh import save_model, load_model, AnalyticMarching

class MLPPolytope(torch.nn.Module):
    def __init__(self):
        super(MLPPolytope, self).__init__()
        self.linear0 = torch.nn.Linear(3, 14)
        self.linear1 = torch.nn.Linear(14, 1)
        with torch.no_grad(): # here we give the weights explicitly since training takes time
            weight0 = torch.tensor([[ 1,  1,  1],
                                    [-1, -1, -1],
                                    [ 0,  1,  1],
                                    [ 0, -1, -1],
                                    [ 1,  0,  1],
                                    [-1,  0, -1],
                                    [ 1,  1,  0],
                                    [-1, -1,  0],
                                    [ 1,  0,  0],
                                    [-1,  0,  0],
                                    [ 0,  1,  0],
                                    [ 0, -1,  0],
                                    [ 0,  0,  1],
                                    [ 0,  0, -1]], dtype=torch.float32)
            bias0 = torch.zeros(14)
            weight1 = torch.ones([14], dtype=torch.float32).unsqueeze(0)
            bias1 = torch.tensor([-2], dtype=torch.float32)

            add_noise = lambda x: x + torch.randn_like(x) * (1e-7)
            self.linear0.weight.copy_(add_noise(weight0))
            self.linear0.bias.copy_(add_noise(bias0))
            self.linear1.weight.copy_(add_noise(weight1))
            self.linear1.bias.copy_(add_noise(bias1))

    def forward(self, x):
        return self.linear1(torch.relu(self.linear0(x)))


if __name__ == "__main__":
    #### save onnx
    DIR = os.path.dirname(os.path.abspath(__file__)) # the directory to save files
    onnx_path = os.path.join(DIR, "polytope.onnx")
    save_model(MLPPolytope(), onnx_path) # we save the model as onnx format
    print(f"we save onnx to: {onnx_path}")

    #### save ply
    ply_path = os.path.join(DIR, "polytope.ply")
    model = load_model(onnx_path) # load as a specific model
    AnalyticMarching(model, ply_path) # do analytic marching
    print(f"we save ply to: {ply_path}")

API

We mainly provide the following two ways to use analytic marching:

  • Web interface (provides an easy-to-use graphic interface)
  • Python API (gives more detailed control)

  1. Web interface

    You should compile both the backend and frontend to use this web interface. Its usage is detailed in the user guide on the web page.

  2. Python API

    It's very simple to use, just three lines of code.

    from AnalyticMesh import load_model, AnalyticMarching 
    model = load_model(load_onnx_path) 
    AnalyticMarching(model, save_ply_path)

    If the results are not satisfactory, you may need to change the default parameters of the AnalyticMarching function.

    To obtain an onnx model file, you can use the save_model function we provide.

    from AnalyticMesh import save_model
    save_model(your_custom_nn_module, save_onnx_path)

Some tips:

  • It is highly recommended that you first try dichotomy as the initialization method.
  • If CUDA runs out of memory, try setting voxel_configs. This partitions the space into voxels and solves them serially.
  • More details are documented in the comments of our source code.

Use Analytic Marching in your own project

There are generally three ways to use Analytic Marching.

  1. Directly representing a single shape by a multi-layer perceptron. For a single object, you can simply represent the shape as a single network. For example, you can directly fit a point cloud with a multi-layer perceptron. In this way, the weights of the network uniquely determine the shape.
  2. Generating the weights of the multi-layer perceptron from a hyper-network. To learn from multiple shapes, one can use a hyper-network to generate the weights of the multi-layer perceptron in a learnable manner.
  3. Re-parameterizing the latent code into the bias of the first layer. To learn from multiple shapes, we can condition the network on a latent code concatenated to the input coordinate at the first layer (e.g. 3+256 -> 512 -> 512 -> 1). Note that the concatenated latent code can be re-parameterized and folded into the bias of the first layer. More specifically, the computation of the first layer can be re-written as W [x; z] + b = W_x x + W_z z + b = W_x x + b', where the newly computed bias is b' = W_z z + b (see the sketch below).
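
Below is a minimal PyTorch sketch of this re-parameterization (with hypothetical sizes: a 3-D coordinate, a 256-D latent code, and a first layer of width 512). It folds a fixed latent code into the bias so that the first layer takes only the coordinate as input:

import torch

coord_dim, latent_dim, hidden_dim = 3, 256, 512

layer = torch.nn.Linear(coord_dim + latent_dim, hidden_dim)  # original first layer: W [x; z] + b
z = torch.randn(latent_dim)                                  # a fixed latent code for one shape

with torch.no_grad():
    W_x = layer.weight[:, :coord_dim]  # columns acting on the coordinate x
    W_z = layer.weight[:, coord_dim:]  # columns acting on the latent code z
    new_bias = W_z @ z + layer.bias    # b' = W_z z + b

    # the re-parameterized first layer takes only the 3-D coordinate as input
    folded = torch.nn.Linear(coord_dim, hidden_dim)
    folded.weight.copy_(W_x)
    folded.bias.copy_(new_bias)

# both layers produce the same output for any coordinate x
x = torch.randn(5, coord_dim)
original = layer(torch.cat([x, z.expand(5, latent_dim)], dim=1))
assert torch.allclose(original, folded(x), atol=1e-5)

After folding, the network (e.g. 3 -> 512 -> 512 -> 1) is a plain multi-layer perceptron, so it can be meshed by AnalyticMarching in the same way as the single-shape case.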

About

This repository is mainly maintained by Jiabao Lei (backend) and Yongyi Su (frontend). If you have any questions, feel free to create an issue on GitHub.

If you find our work useful, please consider citing our papers.

@inproceedings{
    Lei2020,
    title = {Analytic Marching: An Analytic Meshing Solution from Deep Implicit Surface Networks},
    author = {Jiabao Lei and Kui Jia},
    booktitle = {International Conference on Machine Learning 2020 {ICML-20}},
    year = {2020},
    month = {7}
}

@misc{
    Lei2021,
    title={Learning and Meshing from Deep Implicit Surface Networks Using an Efficient Implementation of Analytic Marching}, 
    author={Jiabao Lei and Kui Jia and Yi Ma},
    year={2021},
    eprint={2106.10031},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Contact: [email protected]
