Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation

Overview

NVIDIA Source Code License | Python 3.7

OSCAR

Project Page | Paper

This repository contains the codebase used in OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation.

More generally, this codebase is a modular framework built upon IsaacGym, and intended to support future robotics research leveraging large-scale training.

Of note, this repo contains:

  • High-quality implementations of OSC, IK, and joint-space controllers, fully parallelized in PyTorch (a minimal sketch of the OSC control law follows this list)
  • Complex Robot Manipulation tasks for benchmarking learning algorithms
  • Modular structure enabling rapid prototyping of additional robots, controllers, and environments
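
For reference, the core of an operational space controller is a batched torque computation of the form tau = J^T Lambda (kp * pose_err - kd * ee_vel). Below is a minimal PyTorch sketch of that law, not the repo's actual implementation (which additionally handles nullspace posture control, gravity compensation, and singularity-robust inversion); the function and argument names are illustrative assumptions.

import torch

def osc_torques(q_dot, mass_matrix, jacobian, pose_err, kp=150.0, kd=None):
    """Minimal batched OSC torque sketch (illustrative, not the repo's API).

    q_dot:       (B, n)    joint velocities
    mass_matrix: (B, n, n) joint-space inertia matrices
    jacobian:    (B, 6, n) end-effector Jacobians
    pose_err:    (B, 6)    task-space pose error (position + orientation)
    returns:     (B, n)    joint torques
    """
    if kd is None:
        kd = 2.0 * (kp ** 0.5)  # critically damped gains by default

    m_inv = torch.inverse(mass_matrix)                                    # (B, n, n)
    # Task-space inertia: Lambda = (J M^-1 J^T)^-1
    lambda_ = torch.inverse(jacobian @ m_inv @ jacobian.transpose(1, 2))  # (B, 6, 6)

    ee_vel = (jacobian @ q_dot.unsqueeze(-1)).squeeze(-1)                 # (B, 6)
    # PD law in task space, scaled by the task-space inertia
    wrench = lambda_ @ (kp * pose_err - kd * ee_vel).unsqueeze(-1)        # (B, 6, 1)

    # Map the desired wrench back to joint torques: tau = J^T F
    return (jacobian.transpose(1, 2) @ wrench).squeeze(-1)                # (B, n)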

Requirements

  • Linux machine
  • Conda
  • NVIDIA GPU + CUDA

Getting Started

First, clone this repo and initialize the submodules:

git clone https://github.com/NVlabs/oscar.git
cd oscar
git submodule update --init --recursive

Next, create a new conda environment for this repo and activate it:

bash create_conda_env_oscar.sh
conda activate oscar

This will create a new conda environment named oscar and additionally install some dependencies. Next, we need IsaacGym. This repo does not bundle IsaacGym itself, but it is compatible with any version >= preview 3.0.

Install and build IsaacGym HERE.

Once IsaacGym is installed and built, navigate to its python directory and install the package into this conda environment:

(oscar) cd <ISAACGYM_REPO_PATH>/python
(oscar) pip install -e .

Now with IsaacGym installed, we can finally install this repo as a package:

(oscar) cd <OSCAR_REPO_PATH>
(oscar) pip install -e .

That's it!

Training

Helpful scripts for training, evaluation, and finetuning are provided in the examples directory. You can set the task, controller, and other parameters directly at the top of each script. They should run out of the box, like so:

cd examples
bash train.sh

For evaluation (including zero-shot), you can modify and run:

bash eval.sh

For finetuning on the published out-of-distribution task settings using a pretrained model, you can modify and run:

bash finetune.sh

To pretrain the initial OSCAR base network, you can modify and run:

bash pretrain_oscar.sh

Reproducing Paper Results

We provide all of the final trained models used in our published results; they can be found in the trained_models section.

Adding Custom Modules

This repo is designed to be built upon and to enable future large-scale robot learning simulation research. To add your own modules, follow the existing examples:

  • Custom controller: see an example controller such as the OSC controller (a hypothetical skeleton follows this list)
  • Custom robot agent: see an example agent such as the Franka agent
  • Custom task: see an example task such as the Push task
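
As a starting point, a custom controller typically only needs to map batched commands and joint states to joint torques each step. The skeleton below is a hypothetical outline under that assumption; the class name, method names, and registration mechanism are illustrative, so consult the OSC controller implementation for the repo's actual base class and API.

class MyCustomController:
    """Hypothetical custom controller outline (names are illustrative)."""

    def __init__(self, n_dof, kp=100.0, kd=20.0):
        self.n_dof = n_dof
        self.kp = kp
        self.kd = kd

    def reset(self, env_ids):
        # Clear any per-environment controller state for the given env indices
        pass

    def compute_control(self, command, q, q_dot):
        # Map a batched command (B, n_dof) plus joint states (B, n_dof)
        # to joint torques (B, n_dof); here, a simple joint-space PD law.
        return self.kp * (command - q) - self.kd * q_dot

Custom robot agents and tasks follow the same pattern: mirror the structure of the Franka agent or the Push task and register the new module alongside those examples.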

License

Please check the LICENSE file. OSCAR may be used non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact [email protected].

Citation

Please cite OSCAR if you use this framework in your publications:

@inproceedings{wong2021oscar,
  title={OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation},
  author={Josiah Wong and Viktor Makoviychuk and Anima Anandkumar and Yuke Zhu},
  booktitle={arXiv preprint arXiv:2110.00704},
  year={2021}
}