Improved PointNet++ for Segmentation and Localization of Leather Grasp Points


Guang Jin
Gongchang Ren
Yuan Huan
Jiangong Sun

Abstract

To achieve accurate identification and positioning of leather grasp points while a robot grasps and spreads leather, this paper proposes a leather grasp point segmentation and positioning method based on an improved PointNet++ (IPointNet++). Taking leather in its natural falling state as the research object, a depth camera is used to collect point cloud data of the leather. First, the leather point clouds are preprocessed: background points are removed with a pass-through filter, and noise is eliminated with a statistical outlier filter. Second, octree sampling replaces the farthest point sampling of the original PointNet++, adapting the network to the non-rigid deformation of the leather; the leather is thereby segmented into two parts, the main body and the grasp area. Finally, the three-dimensional coordinates of the leather grasp point are obtained as the centroid of the point cloud in the grasp area. In segmentation experiments, the improved PointNet++ raises mIoU by 11.8% and 2.5% over PointNet and PointNet++ respectively, and OA by 6.1% and 1.1%. In grasping experiments, the grasp point identification success rate is 93.33%, and the grasp success rate is 82.14%. The experimental results show that the proposed method achieves higher segmentation accuracy and good applicability.
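The preprocessing and localization steps summarized in the abstract can be sketched roughly as follows. This is a minimal NumPy-only illustration, not the authors' implementation: a uniform voxel-grid downsample stands in for the paper's octree sampling, and all function names, axis choices, and parameter values are illustrative assumptions.

```python
import numpy as np

def pass_through_filter(points, axis=2, bounds=(0.3, 1.2)):
    """Keep points whose coordinate along `axis` lies within `bounds`
    (removes background, e.g. by depth). Bounds are illustrative."""
    mask = (points[:, axis] >= bounds[0]) & (points[:, axis] <= bounds[1])
    return points[mask]

def statistical_outlier_removal(points, k=8, std_ratio=1.0):
    """Drop points whose mean k-nearest-neighbor distance exceeds
    mean + std_ratio * std over the whole cloud (noise removal)."""
    # Brute-force pairwise distances; fine for small demo clouds.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self (0)
    thresh = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= thresh]

def voxel_downsample(points, voxel=0.05):
    """Uniform voxel-grid downsample (one mean point per occupied cell),
    a simple stand-in for the octree sampling used in the paper."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(idx)
    out = np.empty((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(idx, weights=points[:, dim]) / counts
    return out

def grasp_point(region_points):
    """Grasp point = centroid of the segmented grasp-area cloud."""
    return region_points.mean(axis=0)
```

In the actual pipeline, the segmentation network (not shown) would first label each point as main body or grasp area; `grasp_point` is then applied only to the grasp-area subset.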
