International Journal of Automotive Technology 2023;24(3): 787-797.
doi: https://doi.org/10.1007/s12239-023-0065-y
IMPROVED 3D SEMANTIC SEGMENTATION MODEL BASED ON RGB IMAGE AND LIDAR POINT CLOUD FUSION FOR AUTONOMOUS DRIVING
Jiahao Du , Xiaoci Huang , Mengyang Xing , Tao Zhang
School of Mechanical and Automotive Engineering, Shanghai University of Engineering Science
Corresponding Author: Xiaoci Huang, Email: 06060005@sues.edu.cn
ABSTRACT
Semantic segmentation of LiDAR point clouds is crucial to environmental understanding in unmanned vehicles, and effectively integrating the complementary information of LiDAR and cameras has become a focus of autonomous driving research. In this work, a network framework (called PI-Seg) for LiDAR point cloud semantic segmentation that fuses appearance features from RGB images is proposed. A perspective projection module is introduced to align and synchronize point clouds with images, reducing the loss of appearance information. An efficient and concise dual-flow feature extraction network is designed, and a fusion module based on a continuous convolution structure is used for feature fusion; this effectively reduces the parameter count and runtime, making the model more suitable for autonomous driving scenarios. Finally, the fused features are added to the LiDAR point cloud features as the final output features, and point cloud category labels are predicted through an MLP network. The experimental results demonstrate that PI-Seg achieves a 5.3% higher mIoU score than SalsaNext, another projection-based method, and a 1.4% improvement over the recent Cylinder3D algorithm; in quantitative analyses its mAP value is also the best, showing that PI-Seg outperforms existing methods.
Key Words: Point cloud semantic segmentation, Multi-sensor fusion, Perspective projection, Dual-flow network
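The perspective projection step described in the abstract, aligning LiDAR points with the RGB image so that each point can pick up an appearance feature, can be sketched with standard pinhole-camera geometry. This is a minimal illustration, not the paper's implementation: the function names, the 4×4 extrinsic `T_cam_lidar`, and the intrinsic matrix `K` are assumptions, and raw RGB values stand in for the learned image features the paper fuses.

```python
import numpy as np

def project_points_to_image(points, T_cam_lidar, K, img_hw):
    """Project LiDAR points (N, 3) into the image plane.

    points: xyz coordinates in the LiDAR frame.
    T_cam_lidar: 4x4 extrinsic transform (LiDAR frame -> camera frame).
    K: 3x3 camera intrinsic matrix.
    img_hw: (height, width) of the RGB image.
    Returns integer pixel coords for the valid points and a boolean
    mask (N,) of points that land inside the image with positive depth.
    """
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])    # homogeneous (N, 4)
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]      # camera frame (N, 3)
    in_front = pts_cam[:, 2] > 1e-6                 # keep points ahead of the camera
    uvw = (K @ pts_cam.T).T                         # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3].clip(min=1e-6)    # perspective divide
    h, w = img_hw
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    mask = in_front & inside
    return uv[mask].astype(int), mask

def gather_rgb_features(points, image, T_cam_lidar, K):
    """Attach the RGB value under each projected point; zeros where no pixel hits."""
    uv, mask = project_points_to_image(points, T_cam_lidar, K, image.shape[:2])
    feats = np.zeros((points.shape[0], 3), dtype=image.dtype)
    feats[mask] = image[uv[:, 1], uv[:, 0]]         # row index = v, column index = u
    return feats
```

In the full network, the gathered per-point image features would come from a CNN feature map rather than raw pixels, and would then be concatenated with the LiDAR branch features inside the continuous-convolution fusion module before the MLP prediction head.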