Abstract
Estimating the position and orientation of objects is a crucial step in robotic bin-picking tasks. The challenge lies in the fact that, in real-world scenarios, diverse objects are often randomly stacked, resulting in significant occlusion. This study introduces an approach that predicts 6D poses by processing point clouds with a two-stage neural network. In the first stage, a network designed for low-textured scenes performs instance segmentation and provides an initial pose estimate. In the second stage, a pose refinement network improves the accuracy of the pose predicted in the first stage. To address the high cost of manual annotation, a simulation technique is employed to generate a synthetic dataset, and a dedicated software tool was developed to annotate real point cloud datasets. In experiments, the proposed method outperformed baseline methods such as PointGroup and Iterative Closest Point in both segmentation accuracy and pose refinement. Grasping experiments further demonstrated the method's effectiveness in real-world industrial robot bin-picking, confirming its ability to handle the challenges posed by occluded, randomly stacked objects.