AI pipelines for Nvidia Jetson Platform

Overview

Jetson Multicamera Pipelines

Easy-to-use real-time CV/AI pipelines for the Nvidia Jetson platform. This project:

  • Builds a typical multi-camera pipeline, i.e. N×(capture) -> preprocess -> batch -> DNN -> <<your application logic here>> -> encode -> file I/O + display. Uses GStreamer and DeepStream under the hood (a rough sketch of such a pipeline follows this list).
  • Gives programmatic access to configure the pipeline in Python via the jetmulticam package.
  • Utilizes Nvidia HW acceleration for minimal CPU usage. For example, you can perform object detection in real time on 6 camera streams using as little as 16.5% CPU. See benchmarks below for details.
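For context, the sketch below shows roughly what a single-camera DeepStream pipeline of this shape looks like when assembled by hand with GStreamer's Python bindings. It is illustrative only: jetmulticam builds its pipelines programmatically and its actual graph differs in detail, and the nvinfer config path below is a placeholder, not something shipped by this project.

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# One camera: capture -> batch of 1 -> DNN inference -> on-screen display -> EGL sink.
# config-file-path must point at a real nvinfer model config on your system (placeholder here).
PIPELINE = (
    "nvstreammux name=mux batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=/path/to/your/nvinfer_config.txt ! "
    "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink "
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! mux.sink_0"
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)
try:
    GLib.MainLoop().run()  # run until interrupted (Ctrl+C)
finally:
    pipeline.set_state(Gst.State.NULL)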

Demos

You can easily build your custom logic in Python by accessing image data (as np.array) as well as object detection results; a minimal sketch of such logic is shown right below, followed by examples of person following:
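The following is a minimal, illustrative sketch of such custom logic (a naive person-following step). The detection keys ("class", "confidence", "position"), the assumed (left, width, top, height) box layout, and the robot.drive() call are assumptions for illustration, not this package's documented API; adapt them to what your version of jetmulticam actually returns.

import time

from jetmulticam import CameraPipelineDNN
from jetmulticam.models import DashCamNet, PeopleNet

IMAGE_WIDTH = 1920  # frames are (1080, 1920, 3) RGB arrays


def box_center_x(det):
    # Hypothetical helper: horizontal center of a detection's bounding box.
    # The (left, width, top, height) layout is an assumption; adjust as needed.
    left, width, _top, _height = det["position"]
    return left + width / 2.0


pipeline = CameraPipelineDNN(
    cameras=[2, 5, 8],
    models=[PeopleNet.DLA1, DashCamNet.DLA0],
    save_video=True,
    save_video_folder="/home/nx/logs/videos",
    display=True,
)

while pipeline.running():
    # Keep only "person" detections from the forward-facing camera (index 0 here).
    people = [d for d in pipeline.detections[0] if d.get("class") == "person"]
    if people:
        target = max(people, key=lambda d: d.get("confidence", 0.0))
        # Steering in [-1, 1]: negative steers left, positive steers right.
        steering = (box_center_x(target) - IMAGE_WIDTH / 2) / (IMAGE_WIDTH / 2)
        # robot.drive(steering)  # hypothetical robot interface, not part of jetmulticam
    time.sleep(1 / 30)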

DashCamNet (DLA0) + PeopleNet (DLA1) on 3 camera streams.

We have 3 independent cameras with a ~270° field of view. Red boxes correspond to DashCamNet detections, green ones to PeopleNet. The PeopleNet detections are used to perform the person-following logic.

demo_8_follow_me.mp4

PeopleNet (GPU) on 3 camera streams.

Robot is operated in manual mode.

demo_9_security_nvidia.mp4

DashCamNet (GPU) on 3 camera streams.

Robot is operated in manual mode.

demo_1_fedex_driver.mp4

(All demos are performed in real time onboard an Nvidia Jetson Xavier NX.)

Quickstart

Install:

git clone https://github.com/NVIDIA-AI-IOT/jetson-multicamera-pipelines.git
cd jetson-multicamera-pipelines
bash scripts/install-dependencies.sh
pip3 install .

Run example with your cameras:

source scripts/env_vars.sh 
cd examples
python3 example.py

Usage example

import time
from jetmulticam import CameraPipelineDNN
from jetmulticam.models import PeopleNet, DashCamNet

if __name__ == "__main__":

    pipeline = CameraPipelineDNN(
        cameras=[2, 5, 8],
        models=[
            PeopleNet.DLA1,
            DashCamNet.DLA0,
            # PeopleNet.GPU
        ],
        save_video=True,
        save_video_folder="/home/nx/logs/videos",
        display=True,
    )

    while pipeline.running():
        arr = pipeline.images[0]      # np.ndarray with shape (1080, 1920, 3), i.e. a 1080p RGB image
        dets = pipeline.detections[0] # detections from the DNNs for camera 0
        time.sleep(1 / 30)
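To dump an individual frame for inspection, one option is to treat it as a plain NumPy array and write it out with OpenCV (OpenCV is not a jetmulticam dependency, and the output path below is arbitrary):

import cv2

frame_rgb = pipeline.images[0]                          # (1080, 1920, 3) RGB frame
frame_bgr = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2BGR)  # cv2.imwrite expects BGR channel order
cv2.imwrite("/home/nx/logs/frame0.png", frame_bgr)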

Benchmarks

# | Scenario | # cams | CPU util. (jetmulticam) | CPU util. (nvargus-daemon) | CPU total | GPU util. | EMC util. | Power draw | Inference hardware
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
1 | 1xGMSL -> 2xDNNs + disp + encode | 1 | 5.3% | 4% | 9.3% | <3% | 57% | 8.5W | DLA0: PeopleNet, DLA1: DashCamNet
2 | 2xGMSL -> 2xDNNs + disp + encode | 2 | 7.2% | 7.7% | 14.9% | <3% | 62% | 9.4W | DLA0: PeopleNet, DLA1: DashCamNet
3 | 3xGMSL -> 2xDNNs + disp + encode | 3 | 9.2% | 11.3% | 20.5% | <3% | 68% | 10.1W | DLA0: PeopleNet, DLA1: DashCamNet
4 | Same as #3 with CPU @ 1.9GHz | 3 | 7.5% | 9.0% | 16.5% | <3% | 68% | 10.4W | DLA0: PeopleNet, DLA1: DashCamNet
5 | 3xGMSL + 2xV4L -> 2xDNNs + disp + encode | 5 | 9.5% | 11.3% | 20.8% | <3% | 45% | 9.1W | DLA0: PeopleNet (interval=1), DLA1: DashCamNet (interval=1)
6 | 3xGMSL + 2xV4L -> 2xDNNs + disp + encode | 5 | 8.3% | 11.3% | 19.6% | <3% | 25% | 7.5W | DLA0: PeopleNet (interval=6), DLA1: DashCamNet (interval=6)
7 | 3xGMSL -> DNN + disp + encode | 5 | 10.3% | 12.8% | 23.1% | 99% | 25% | 15W | GPU: PeopleNet

Notes:

  • All figures were measured in 15W 6-core mode. To reproduce, run: sudo nvpmodel -m 2; sudo jetson_clocks
  • Test platform: Jetson Xavier NX and XNX Box running JetPack v4.5.1
  • The residual GPU usage for the DLA-accelerated nets is caused by the Sigmoid activations being computed with the CUDA backend; the remaining layers run on the DLA.
  • CPU usage will vary depending on factors such as camera resolution, framerate, available video formats and driver implementation.

More

Supported models / accelerators

from jetmulticam import CameraPipelineDNN
from jetmulticam import models

pipeline = CameraPipelineDNN(
    cam_ids=[0, 1, 2],
    models=[
        models.PeopleNet.DLA0,
        models.PeopleNet.DLA1,
        models.PeopleNet.GPU,
        models.DashCamNet.DLA0,
        models.DashCamNet.DLA1,
        models.DashCamNet.GPU,
    ],
    # ...
)
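Each entry selects both the network and the accelerator it runs on: the .DLA0 / .DLA1 variants target the module's two deep learning accelerators, while the .GPU variants run inference on the integrated GPU (compare the DLA and GPU rows in the benchmarks above).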