Abstract: To overcome the inherent limitations of a single-sensor environmental perception system, a LiDAR and a camera were fused so that the complementary strengths of the two sensors improve the environmental perception capability of unmanned vehicles. LiDAR-camera fusion technology was investigated and applied to target recognition at urban intersections. A LiDAR target detection method was developed by combining the region-growing search of the flood fill algorithm with the graph-cut theory of the spectral clustering algorithm, taking into account both the Euclidean distances between point clouds and their spatial distribution characteristics. A target recognition method based on the fusion of LiDAR and camera was proposed, and the traditional PnP solving principle was analyzed. The pose transformation between the two sensors was solved by plane normal alignment, and a genetic algorithm was introduced to refine the solution. The fusion results of LiDAR and camera were simulated and verified in autonomous driving simulation software. The results show that the proposed LiDAR-camera fusion method accurately recognizes vehicle targets at urban intersections and enables unmanned vehicles to perceive targets over a 360° range, which ensures the safety of unmanned vehicles and improves their environmental understanding ability.
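The abstract describes a point-cloud detection step that combines a flood-fill-style search with Euclidean distance between points. As a minimal illustrative sketch (not the paper's implementation; the function names, the toy coordinates, and the `eps` threshold are all assumptions), such region-growing clustering can be written as a breadth-first flood fill that merges any points within a distance threshold into one cluster:

```python
from collections import deque


def euclidean(p, q):
    """Euclidean distance between two points of equal dimension."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5


def flood_fill_cluster(points, eps=0.5):
    """Flood-fill clustering: starting from an unlabeled seed point,
    repeatedly absorb every point within `eps` of any cluster member.
    Returns a cluster label for each input point."""
    n = len(points)
    labels = [-1] * n  # -1 means not yet assigned to a cluster
    cluster_id = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = cluster_id
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if labels[j] == -1 and euclidean(points[i], points[j]) <= eps:
                    labels[j] = cluster_id
                    queue.append(j)
        cluster_id += 1
    return labels


# Two well-separated groups, standing in for two vehicles' point clouds.
pts = [(0.0, 0.0), (0.3, 0.1), (0.5, 0.4), (5.0, 5.0), (5.2, 5.1)]
print(flood_fill_cluster(pts, eps=0.6))  # → [0, 0, 0, 1, 1]
```

Real LiDAR pipelines typically accelerate the neighbor search with a k-d tree and add the spatial-distribution checks the paper mentions; this brute-force version only shows the flood-fill principle.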
LI Shengqin, SUN Xin, ZHANG Min′an. Vehicle recognition technology at urban intersection based on fusion of LiDAR and camera[J]. Journal of Jiangsu University (Natural Science Edition), 2024, 45(6): 621-628.
XU W X, LI W. Environmental perception and location technology of driverless vehicles[J]. Automobile Science & Technology, 2021(6): 53-60, 52. (in Chinese)
YANG X, LIU W, LIN H. Research of radar and vision sensors data fusion algorithm applied in advanced driver assistance systems[J]. Automobile Applied Technology, 2018(1): 37-40. (in Chinese)
ZHENG S W, LI W H, HU J Y. Vehicle detection in the traffic environment based on the fusion of laser point cloud and image information[J]. Chinese Journal of Scientific Instrument, 2019, 40(12): 143-151. (in Chinese)
LIANG C C, TIAN J P, SONG C L. Aided driving target detection algorithm based on radar and camera sensor fusion[J]. Information Technology & Informatization, 2021(12): 5-9. (in Chinese)
YANG D, CAI Y R, WANG P, et al. Traffic regional division method based on improved spectral clustering algorithm[J]. Computer Engineering and Design, 2021, 42(9): 2478-2484. (in Chinese)
[11] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2017: 7263-7271.
[12] MOUSAVIAN A, ANGUELOV D, FLYNN J, et al. 3D bounding box estimation using deep learning and geometry[C]∥Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2017: 7074-7082.
[13] PAN H R. Face detection based on YOLOv3 with improved loss function[D]. Nanchang: Nanchang University, 2020. (in Chinese)
[14] GONG M Q. Research on vehicle recognition and tracking based on the fusion of LiDAR and camera information[D]. Chongqing: Southwest University, 2021. (in Chinese)
[15] MOHAMMADI A, ASADI H, MOHAMED S, et al. OpenGA, a C++ genetic algorithm library[C]∥2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC). Piscataway, USA: IEEE, 2017: 2051-2056.