

Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning


NVIDIA’s physics simulation environment for reinforcement learning research.

Official Materials


This repository contains example RL environments for Isaac Gym, NVIDIA's high-performance GPU-based physics simulator described in the NeurIPS 2021 Datasets and Benchmarks paper.
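The key idea behind these environments is vectorized simulation: thousands of environment instances are stepped together as one batched tensor operation, so the whole rollout stays on the GPU. The sketch below illustrates that pattern with a toy point-mass task in NumPy on the CPU; the class and names are ours for illustration, not the IsaacGymEnvs API.

```python
import numpy as np

class BatchedPointEnv:
    """Toy vectorized env: N point masses, each pushed by a 2-D action.

    Illustrates the batched-stepping pattern Isaac Gym environments use
    (one array op updates every instance); Isaac Gym itself keeps these
    buffers on the GPU as torch tensors.
    """

    def __init__(self, num_envs, dt=0.05):
        self.num_envs, self.dt = num_envs, dt
        self.pos = np.random.uniform(-1, 1, size=(num_envs, 2))

    def step(self, actions):
        self.pos += self.dt * actions                 # one batched update for all envs
        dist = np.linalg.norm(self.pos, axis=1)       # shape (num_envs,)
        rewards = -dist                               # closer to origin = better
        dones = dist < 0.05
        # Reset only the finished environments, in place, without a Python loop.
        self.pos[dones] = np.random.uniform(-1, 1, size=(int(dones.sum()), 2))
        return self.pos.copy(), rewards, dones

env = BatchedPointEnv(num_envs=4096)
obs, rew, done = env.step(-env.pos)  # act toward the origin in every env at once
```

Because there is no per-environment Python loop, throughput scales with the batch size rather than with interpreter overhead, which is what makes training on thousands of parallel environments practical.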


Bi-DexHands provides a collection of bimanual dexterous manipulation tasks and reinforcement learning algorithms. Reaching human-level hand dexterity and bimanual coordination remains an open challenge for modern robotics research.


DexPBT implements challenging tasks for one- or two-armed robots equipped with multi-fingered hand end-effectors, including regrasping, grasp-and-throw, and object reorientation. It also introduces a decentralized Population-Based Training (PBT) algorithm that massively amplifies the exploration capabilities of deep reinforcement learning.
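At its core, a Population-Based Training step ranks the population by score, then has underperforming workers copy the weights of a top performer and perturb its hyperparameters ("exploit and explore"). The sketch below shows one such step with illustrative names; it is a generic PBT step under our own assumptions, not DexPBT's decentralized implementation.

```python
import random

def pbt_step(population, bottom_frac=0.25, top_frac=0.25, perturb=(0.8, 1.2)):
    """One PBT exploit/explore step over a list of worker dicts.

    Each worker dict holds "score", "weights", and a hyperparameter "lr"
    (names are illustrative). Workers in the bottom fraction copy weights
    from a random top performer (exploit) and multiply their learning rate
    by a random factor (explore).
    """
    ranked = sorted(population, key=lambda w: w["score"])
    n = len(ranked)
    bottom = ranked[: max(1, int(n * bottom_frac))]
    top = ranked[-max(1, int(n * top_frac)):]
    for worker in bottom:
        donor = random.choice(top)
        worker["weights"] = dict(donor["weights"])              # exploit: copy policy
        worker["lr"] = donor["lr"] * random.uniform(*perturb)   # explore: perturb hyperparams
    return population
```

In a decentralized variant like the one DexPBT describes, there is no central controller: each worker performs this comparison against shared checkpoints on its own schedule, which scales naturally across many GPUs.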


TimeChamber is a large-scale self-play framework built on parallel simulation. Self-play algorithms typically demand substantial hardware, especially in 3D physically simulated environments; TimeChamber enables fast training and evaluation on a single GPU.
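A standard ingredient of self-play evaluation in frameworks like this is rating policies in a pool against each other, commonly with Elo updates after each match. The snippet below is a minimal, generic Elo update written for illustration; the function names are ours, not TimeChamber's API.

```python
def expected(ra, rb):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400))

def update_elo(ratings, a, b, score_a, k=32):
    """Update ratings in place after a match; score_a is 1 win, 0.5 draw, 0 loss."""
    ea = expected(ratings[a], ratings[b])
    eb = 1.0 - ea                      # compute both expectations before mutating
    ratings[a] += k * (score_a - ea)
    ratings[b] += k * ((1.0 - score_a) - eb)

ratings = {"policy_v1": 1000.0, "policy_v2": 1000.0}
update_elo(ratings, "policy_v1", "policy_v2", score_a=1.0)  # v1 beats v2
```

With many matches simulated in parallel, such ratings let the framework pick well-matched opponents from the historical policy pool, which keeps the learning signal from collapsing when the latest policy dominates its own past.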