Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
Authors: Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Aparicio, Ken Goldberg

To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds and grasps, each labeled with analytic robustness metrics computed from thousands of 3D models in Dex-Net 1.0. The resulting dataset, Dex-Net 2.0, is used to train a Grasp Quality Convolutional Neural Network (GQ-CNN) model that rapidly classifies the robustness of candidate grasps from point clouds, where each grasp is specified as a position, angle, and height above a planar work surface. Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that Dex-Net 2.0 can plan grasps in 0.8 s with a success rate of 93% on known objects with challenging geometry, 3x faster than registering point clouds to a precomputed dataset of objects and grasps. The GQ-CNN also succeeds on 80% of attempts on a dataset of ten household objects not seen during training, with zero false positives over 29 grasps classified as robust.
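To make the grasp parameterization concrete, the sketch below shows one plausible encoding of a planar grasp (position, angle, height) and the kind of rotation-aligned depth patch a GQ-CNN-style classifier would consume. The names (`PlanarGrasp`, `aligned_patch`, `plan_grasp`, `gqcnn_score`) are illustrative assumptions, not the released Dex-Net 2.0 API.

```python
"""Minimal sketch of the grasp parameterization described in the abstract.
All names here are hypothetical; this is not the Dex-Net 2.0 codebase."""
from dataclasses import dataclass

import numpy as np
from scipy import ndimage


@dataclass(frozen=True)
class PlanarGrasp:
    x: float       # grasp center, pixels (image column)
    y: float       # grasp center, pixels (image row)
    angle: float   # gripper axis angle in the image plane, radians
    height: float  # gripper height above the planar work surface, meters


def aligned_patch(depth: np.ndarray, g: PlanarGrasp, size: int = 32) -> np.ndarray:
    """Extract a depth patch centered on the grasp and rotated so the
    gripper axis is horizontal. Assumes the grasp center lies far enough
    from the image border that the over-cropped window fits."""
    big = size  # over-crop so rotation does not clip the patch corners
    r, c = int(round(g.y)), int(round(g.x))
    window = depth[r - big:r + big, c - big:c + big]
    rotated = ndimage.rotate(window, np.degrees(g.angle),
                             reshape=False, mode="nearest")
    h, ctr = size // 2, rotated.shape[0] // 2
    return rotated[ctr - h:ctr + h, ctr - h:ctr + h]


def plan_grasp(depth, candidates, gqcnn_score):
    """Rank sampled candidate grasps by predicted robustness and return the
    best. `gqcnn_score` stands in for a trained network's forward pass on
    the aligned patch plus the gripper height."""
    return max(candidates,
               key=lambda g: gqcnn_score(aligned_patch(depth, g), g.height))
```

A planner in this style samples many candidates from the point cloud, scores each with the learned classifier, and executes the most robust one; the heavy lifting in the paper is learning `gqcnn_score` entirely from the synthetic Dex-Net 2.0 dataset.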