PyTorch Code for "Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning"

Overview

Generalization in Dexterous Manipulation via
Geometry-Aware Multi-Task Learning

[Project Page] [Paper]

Wenlong Huang1, Igor Mordatch2, Pieter Abbeel1, Deepak Pathak3

1University of California, Berkeley, 2Google Brain, 3Carnegie Mellon University

This is a PyTorch implementation of our Geometry-Aware Multi-Task Policy. The codebase also includes a suite of dexterous manipulation environments with 114 diverse real-world objects built upon Gym and MuJoCo.

We show that a single generalist policy can perform in-hand manipulation of over 100 geometrically-diverse real-world objects and generalize to new objects with unseen shape or size. Interestingly, we find that multi-task learning with object point cloud representations not only generalizes better but even outperforms the single-object specialist policies on both training as well as held-out test objects.
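The geometry-aware policy conditions the actor on a learned embedding of the object's point cloud. As a rough sketch of the idea (class names, layer sizes, and structure here are illustrative, not the codebase's actual modules), a PointNet-style encoder max-pools per-point features into a permutation-invariant object embedding, which is concatenated with the proprioceptive state before the policy MLP:

```python
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    """Shared MLP applied per point, followed by max-pooling (PointNet-style)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, out_dim), nn.ReLU(),
        )

    def forward(self, pts):              # pts: (B, N, 3)
        feats = self.mlp(pts)            # (B, N, out_dim), per-point features
        return feats.max(dim=1).values   # (B, out_dim), permutation-invariant

class GeometryAwarePolicy(nn.Module):
    """Actor conditioned on state plus the point-cloud embedding."""
    def __init__(self, state_dim, act_dim, pc_dim=128, width=256, n_layers=2):
        super().__init__()
        self.encoder = PointNetEncoder(pc_dim)
        layers, in_dim = [], state_dim + pc_dim
        for _ in range(n_layers):
            layers += [nn.Linear(in_dim, width), nn.ReLU()]
            in_dim = width
        layers += [nn.Linear(in_dim, act_dim), nn.Tanh()]  # actions in [-1, 1]
        self.pi = nn.Sequential(*layers)

    def forward(self, state, pts):
        return self.pi(torch.cat([state, self.encoder(pts)], dim=-1))
```

Because the encoder is shared across all objects, a single policy can consume clouds sampled from any mesh, which is what enables zero-shot transfer to unseen shapes.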

If you find this work useful in your research, please cite using the following BibTeX:

@article{huang2021geometry,
  title={Generalization in Dexterous Manipulation via Geometry-Aware Multi-Task Learning},
  author={Huang, Wenlong and Mordatch, Igor and Abbeel, Pieter and Pathak, Deepak},
  journal={arXiv preprint arXiv:2111.03062},
  year={2021}
}

Setup

Requirements

Setup Instructions

git clone https://github.com/huangwl18/geometry-dex.git
cd geometry-dex/
conda create --name geometry-dex-env python=3.6.9
conda activate geometry-dex-env
pip install --upgrade pip
pip install -r requirements.txt
bash install-baselines.sh

Running Code

Below are some flags and parameters for run_ddpg.py that you may find useful for reference:

Flags and Parameters | Description
--- | ---
`--expID <INT>` | Experiment ID
`--train_names <List of STRING>` | List of environments for training; separated by space
`--test_names <List of STRING>` | List of environments for zero-shot testing; separated by space
`--point_cloud` | Use the geometry-aware policy
`--pointnet_load_path <INT>` | Experiment ID from which to load the pre-trained PointNet; required for `--point_cloud`
`--video_count <INT>` | Number of videos to generate for each env per cycle; only up to 1 is currently supported; 0 to disable
`--n_test_rollouts <INT>` | Total number of rollouts collected across all train + test envs in each evaluation run; should be a multiple of len(train_names) + len(test_names)
`--num_rollouts <INT>` | Total number of rollouts collected across all train envs in one training cycle; should be a multiple of len(train_names)
`--num_parallel_envs <INT>` | Number of parallel envs to create for vec_env; should be a multiple of len(train_names)
`--chunk_size <INT>` | Number of parallel envs assigned to each worker in SubprocChunkVecEnv; 0 to disable and use SubprocVecEnv
`--num_layers <INT>` | Number of layers in the MLP for all policies
`--width <INT>` | Width of each MLP layer for all policies
`--seed <INT>` | Seed for Gym, PyTorch, and NumPy
`--eval` | Perform only evaluation using the latest checkpoint
`--load_path <INT>` | Experiment ID from which to load the DDPG checkpoint; required for `--eval`
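The divisibility constraints on the rollout flags can be checked before launching a run. A small helper (hypothetical, not part of the repo) encodes them; the example numbers use the 85 training and 29 test environments that ship with the suite:

```python
def check_rollout_args(num_rollouts, n_test_rollouts, num_parallel_envs,
                       n_train, n_test):
    """Validate the divisibility constraints on the rollout flags."""
    assert num_rollouts % n_train == 0, \
        "--num_rollouts must be a multiple of len(train_names)"
    assert n_test_rollouts % (n_train + n_test) == 0, \
        "--n_test_rollouts must be a multiple of len(train_names) + len(test_names)"
    assert num_parallel_envs % n_train == 0, \
        "--num_parallel_envs must be a multiple of len(train_names)"

# 85 training envs + 29 test envs = 114 objects total
check_rollout_args(num_rollouts=170, n_test_rollouts=228,
                   num_parallel_envs=85, n_train=85, n_test=29)
```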

The code also uses WandB for logging. You may wish to run `wandb login` in the terminal to record runs to your account, or choose to run anonymously.

WARNING: Due to the large number of environments, generating videos during training can be slow and memory-intensive. You may wish to train the policy without generating videos by passing --video_count 0. After training completes, simply run run_ddpg.py with the flags --eval and --video_count 1 to visualize the policy. See the example below.

Training

To train Vanilla Multi-Task DDPG policy:

python run_ddpg.py --expID 1 --video_count 0 --n_cycles 40000 --chunk 10

To train Geometry-Aware Multi-Task DDPG policy, first pretrain PointNet encoder:

python train_pointnet.py --expID 2

Then train the policy:

python run_ddpg.py --expID 3 --video_count 0 --n_cycles 40000 --chunk 10 --point_cloud --pointnet_load_path 2 --no_save_buffer

Note that we do not save the replay buffer here because it contains sampled point clouds, which makes saving slow. If you wish to resume training later, do not pass --no_save_buffer above.

Evaluation / Visualization

To evaluate a trained policy and generate video visualizations, run the same command used to train the policy, with the additional flags --eval --video_count=<VIDEO_COUNT> --load_path=<LOAD_EXPID>. Replace <VIDEO_COUNT> with 1 to enable visualization, or 0 otherwise. Replace <LOAD_EXPID> with the Experiment ID of the trained policy. For a Geometry-Aware Multi-Task DDPG policy trained with the above command, run the following for evaluation and visualization:

python run_ddpg.py --expID 4 --video_count 1 --n_cycles 40000 --chunk 10 --point_cloud --pointnet_load_path 2 --no_save_buffer --eval --load_path 3

Trained Models

We will be releasing trained model files for our Geometry-Aware Policy and single-task oracle policies for each individual object. Stay tuned! Early access can be requested via email.

Provided Environments

Training Envs

e_toy_airplane

knife

flat_screwdriver

elephant

apple

scissors

i_cups

cup

foam_brick

pudding_box

wristwatch

padlock

power_drill

binoculars

b_lego_duplo

ps_controller

mouse

hammer

f_lego_duplo

piggy_bank

can

extra_large_clamp

peach

a_lego_duplo

racquetball

tuna_fish_can

a_cups

pan

strawberry

d_toy_airplane

wood_block

small_marker

sugar_box

ball

torus

i_toy_airplane

chain

j_cups

c_toy_airplane

airplane

nine_hole_peg_test

water_bottle

c_cups

medium_clamp

large_marker

h_cups

b_colored_wood_blocks

j_lego_duplo

f_toy_airplane

toothbrush

tennis_ball

mug

sponge

k_lego_duplo

phillips_screwdriver

f_cups

c_lego_duplo

d_marbles

d_cups

camera

d_lego_duplo

golf_ball

k_toy_airplane

b_cups

softball

wine_glass

chips_can

cube

master_chef_can

alarm_clock

gelatin_box

h_lego_duplo

baseball

light_bulb

banana

rubber_duck

headphones

i_lego_duplo

b_toy_airplane

pitcher_base

j_toy_airplane

g_lego_duplo

cracker_box

orange

e_cups

Test Envs

rubiks_cube

dice

bleach_cleanser

pear

e_lego_duplo

pyramid

stapler

flashlight

large_clamp

a_toy_airplane

tomato_soup_can

fork

cell_phone

m_lego_duplo

toothpaste

flute

stanford_bunny

a_marbles

potted_meat_can

timer

lemon

utah_teapot

train

g_cups

l_lego_duplo

bowl

door_knob

mustard_bottle

plum

Acknowledgement

The code is adapted from this open-sourced implementation of DDPG + HER. The object meshes are from the YCB Dataset and the ContactDB Dataset. We use SubprocChunkVecEnv from this pull request of OpenAI Baselines to speed up vectorized environments.
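SubprocChunkVecEnv reduces process overhead by having each worker step several environments serially instead of spawning one process per environment. A minimal sketch of the chunking idea (illustrative only, not Baselines' actual implementation):

```python
def chunk_envs(env_fns, chunk_size):
    """Split a flat list of env constructors into per-worker chunks:
    each worker owns up to `chunk_size` envs and steps them serially,
    so len(env_fns) envs need only ceil(len(env_fns) / chunk_size)
    worker processes instead of one process per env."""
    if chunk_size <= 0:  # chunking disabled: one env per worker
        return [[fn] for fn in env_fns]
    return [env_fns[i:i + chunk_size] for i in range(0, len(env_fns), chunk_size)]

# e.g. 170 parallel envs with --chunk_size 10 -> 17 worker processes
workers = chunk_envs([object] * 170, 10)
```

With many lightweight environments, the saved inter-process communication usually outweighs the lost per-env parallelism, which is why chunking helps here.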
