Object Shapes

  • ShapeNet: ShapeNet is an ongoing effort to establish a richly-annotated, large-scale dataset of 3D shapes, provided to researchers around the world to enable research in computer graphics, computer vision, robotics, and other related disciplines (see the mesh-loading sketch after this list). [Download] [Paper]

  • FineSeg: The Shape2Motion paper releases the FineSeg dataset, which contains about 3000 3D shapes over six shape categories: chair (1000), table (500), airplane (600), sofa (600), helicopter (100), and bike (140). The models are collected from a subset of ShapeNet. [Download] [Paper] [Code]

  • PartNet: PartNet is a consistent, large-scale dataset of 3D objects annotated with fine-grained, instance-level, and hierarchical 3D part information. [Download] [Code] [Paper]

  • YCB: The YCB Object and Model Set is designed to facilitate benchmarking in robotic manipulation. The set consists of everyday objects with different shapes, sizes, textures, weights, and rigidity, as well as some widely used manipulation tests. [Download] [Paper]

  • OmniObject3D: OmniObject3D is a large-vocabulary 3D object dataset of massive, high-quality real-scanned 3D objects, built to facilitate the development of 3D perception, reconstruction, and generation in the real world. [Download] [Paper] [Code]

  • Objaverse-XL: A Universe of 10M+ 3D Objects. Objaverse-XL is 12x larger than Objaverse 1.0 and 100x larger than all other 3D datasets combined (see the download sketch after this list). [Download] [Paper] [Code]
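
Most of the shape collections above (ShapeNet, FineSeg, PartNet, OmniObject3D) are distributed as mesh files, so a generic mesh loader is usually the first step. Below is a minimal sketch using the trimesh library; the directory layout and model ID are illustrative and should be adjusted to the actual download.

```python
# Minimal sketch: load a ShapeNet-style mesh with trimesh and normalize it.
# The path below is illustrative (ShapeNetCore v2-style layout); substitute a real model ID.
import trimesh

path = "ShapeNetCore.v2/02691156/<model_id>/models/model_normalized.obj"
mesh = trimesh.load(path, force="mesh")        # collapse multi-part scenes into one mesh

print(mesh.vertices.shape, mesh.faces.shape)   # (V, 3) vertices, (F, 3) triangle indices

# Center the bounding box at the origin and scale the longest side to 1,
# a common preprocessing step before feeding shapes to a network.
mesh.apply_translation(-mesh.bounds.mean(axis=0))
mesh.apply_scale(1.0 / mesh.extents.max())

points = mesh.sample(2048)                     # uniform surface point sample, shape (2048, 3)
```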
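
For Objaverse, the authors also ship a Python package (`objaverse` on PyPI) for fetching assets. The sketch below uses the Objaverse 1.0 helpers; Objaverse-XL adds an `objaverse.xl` module with similar download utilities, and exact signatures may differ across package versions.

```python
# Sketch: fetch a few Objaverse 1.0 assets with the `objaverse` package
# (pip install objaverse). Signatures may differ slightly between versions.
import objaverse

uids = objaverse.load_uids()                    # list of all object UIDs in Objaverse 1.0
sample = uids[:8]

annotations = objaverse.load_annotations(sample)         # {uid: metadata dict}
objects = objaverse.load_objects(uids=sample,
                                 download_processes=4)   # {uid: local .glb path}

for uid, local_path in objects.items():
    print(uid, "->", local_path)
```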

Articulated Objects

  • PartNet-Mobility: SAPIEN releases the PartNet-Mobility dataset, a collection of 2K articulated objects with motion annotations and rendering materials. The dataset powers research on generalizable computer vision and manipulation (see the SAPIEN loading sketch after this list). [Download] [Paper]

  • AKB48: This is a realistic and physics-rich object repository for articulation analysis. It enables various robotic vision and interaction tasks that require detailed part-level understanding. More categories and download links will be released soon. [Download] [Paper]

  • GAPartNet (merges PartNet-Mobility and AKB48 and provides additional annotations): By identifying and defining 9 GAPart classes (lids, handles, etc.) across 27 object categories, the authors construct GAPartNet, a large-scale part-centric interactive dataset with rich part-level annotations (semantics, poses). [Download] [Paper] [Code]

  • UniDoorManip: This environment provides diverse door assets and a door manipulation simulation with realistic mechanisms. The dataset contains door bodies and handles covering 6 categories (Interior, Window, Car, Safe, StorageFurniture, Refrigerator), built from PartNet-Mobility and 3D Warehouse assets. The simulation models the mechanisms (such as locking and latching) of different doors. [Download] [Paper] [Code]
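
Each PartNet-Mobility entry ships a `mobility.urdf` describing its links and joints, which SAPIEN can load directly into a physics scene. The sketch below follows the SAPIEN 2.x-style Python API; names (e.g. `SapienRenderer`) and defaults vary across versions, and the asset path is illustrative.

```python
# Sketch: load a PartNet-Mobility articulation into SAPIEN (2.x-style API).
# The dataset path/ID below is illustrative; point it at an extracted entry.
import sapien.core as sapien

engine = sapien.Engine()
renderer = sapien.SapienRenderer()             # so visual meshes can be created
engine.set_renderer(renderer)

scene = engine.create_scene()
scene.set_timestep(1 / 240.0)

loader = scene.create_urdf_loader()
loader.fix_root_link = True                    # keep the object base static
articulation = loader.load("partnet_mobility/<model_id>/mobility.urdf")

# One degree of freedom per movable joint (drawers, lids, doors, ...).
print("dof:", articulation.dof)
print("joints:", [j.name for j in articulation.get_active_joints()])

for _ in range(240):                           # step the simulation for one second
    scene.step()
```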

Deformable Objects

  • ClothesNet: This dataset consists of around 4400 models covering 11 categories, annotated with clothing features, boundary lines, and keypoints. The authors further establish benchmark tasks for clothes perception, including classification and boundary line segmentation. [Download] [Paper]

  • GarmentLab: This environment provides physical simulation of the garments in ClothesNet. It further provides benchmark tasks for garment manipulation, along with models of real-world garments that are easy to obtain worldwide for evaluation. [Download] [Paper] [Code]

Multi-modal

  • ObjectFolder: This dataset models the multisensory behaviors of real objects with 1) ObjectFolder 2.0, a dataset of 1,000 neural objects in the form of implicit neural representations with simulated multisensory data, and 2) ObjectFolder Real, which contains multisensory measurements for 100 real-world household objects, built on a newly designed pipeline for collecting the 3D meshes, videos, impact sounds, and tactile readings of real-world objects. It also includes a standard benchmark suite of 10 tasks for multisensory object-centric learning, centered around object recognition, reconstruction, and manipulation with sight, sound, and touch. [Download] [Paper]