IMPROVED 3D SEMANTIC SEGMENTATION MODEL BASED ON
RGB IMAGE AND LIDAR POINT CLOUD FUSION FOR
AUTONOMOUS DRIVING
Jiahao Du, Xiaoci Huang, Mengyang Xing, Tao Zhang
School of Mechanical and Automotive Engineering, Shanghai University of Engineering Science
ABSTRACT
LiDAR point cloud semantic segmentation is crucial to environmental understanding in unmanned vehicles. At this
stage, effectively integrating the complementary information of LiDAR and cameras has become a focus of autonomous
driving research. In this work, a network framework (called PI-Seg) for LiDAR point cloud semantic segmentation that
fuses appearance features from RGB images is proposed. A perspective projection module is introduced to align and
synchronize point clouds with images, reducing the loss of appearance information. An efficient and concise dual-flow
feature extraction network is designed, and a fusion module based on a continuous convolution structure performs feature
fusion, which effectively reduces the parameter count and runtime and makes the model more suitable for autonomous
driving scenarios. Finally, the fused features are added to the LiDAR point cloud features as the final output features, and
point cloud category labels are predicted through an MLP network. Experimental results demonstrate that PI-Seg achieves
a 5.3% higher mIoU score than SalsaNext, which is also a projection-based method, and a 1.4% improvement over the
recent Cylinder3D algorithm; in quantitative analyses its mAP value is also the best, showing that PI-Seg outperforms
other existing methods.
Key Words:
Point cloud semantic segmentation, Multi-sensor fusion, Perspective projection, Dual-flow network