KANG Jun-min, ZHAO Xiang-mo, YANG Di. Corner feature extraction of 2D lidar data[J]. Journal of Traffic and Transportation Engineering, 2018, 18(3): 228-238. doi: 10.19818/j.cnki.1671-1637.2018.03.023

Corner feature extraction of 2D lidar data

doi: 10.19818/j.cnki.1671-1637.2018.03.023
More Information
  • Author Bio:

    KANG Jun-min(1978-), male, lecturer, PhD, 9578577@qq.com

    ZHAO Xiang-mo(1966-), male, professor, PhD, xmzhao@chd.edu.cn

  • Received Date: 2018-01-05
  • Publish Date: 2018-06-25
  • Abstract: In order to enhance the robustness of corner feature recognition by unmanned vehicles in the driving environment and to improve the recognition speed, a corner feature extraction method based on the relative differences between the bivariate normal probability density values of the observation points was proposed. The observation data set was mapped into the bivariate normal probability density space, and the mapping value of each observation point was obtained. The mapping results were normalized to eliminate the numerical differences caused by the covariances. The positions of the peaks and troughs were then located on the mapped numerical curve: the observation point corresponding to a peak is closest to the mean point, and the observation point corresponding to a trough is closest to an inflection point. The relative heights of the peaks and troughs were used to judge whether the observation data set meets the edge length requirement of a corner feature, and the coordinates of the original observation data points corresponding to the troughs were taken as corner features to construct the environment feature map. Test results show that the extraction method can process sparse observation data with more than 63 observation points and an angular resolution of greater than 1°, so it can stably identify large corner points in large-scale outdoor environments and in indoor environments. When the number of observation data points is less than 180, the maximum processing time is less than 5 ms and the average processing time is less than 1.9 ms; the method therefore has good real-time performance, which helps reduce the time required for constructing the environment feature map. Because the method extracts corner features according to the bivariate normal probability density of the observation data, it is insensitive to observation errors and to the scale and shape of corner features, and it can effectively improve the robustness of corner feature recognition. 14 figs, 25 refs.
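To make the pipeline in the abstract concrete, the sketch below follows its steps for a single scan segment supplied as an (N, 2) array of Cartesian points ordered by bearing: estimate the mean μ and covariance Σ of the segment, map each point through the bivariate normal density f(x) = exp(-(x-μ)ᵀΣ⁻¹(x-μ)/2) / (2π|Σ|^(1/2)), normalize the values, and read corners off the troughs of the resulting curve. This is a minimal illustration, not the authors' implementation: the function name corner_candidates, the min-max normalization, and the min_rel_height threshold standing in for the edge-length test are all assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import multivariate_normal

def corner_candidates(points, min_rel_height=0.2):
    """Return indices of density troughs (corner candidates) in one scan segment."""
    mu = points.mean(axis=0)               # sample mean of the segment
    sigma = np.cov(points.T)               # 2 x 2 sample covariance
    # Map every observation point to its bivariate normal density value;
    # allow_singular guards against nearly collinear (wall-only) segments.
    density = multivariate_normal(mean=mu, cov=sigma,
                                  allow_singular=True).pdf(points)
    # Normalize to [0, 1] to remove the value differences caused by the covariance.
    d = (density - density.min()) / (density.max() - density.min() + 1e-12)
    peaks, _ = find_peaks(d)               # points closest to the mean point
    troughs, _ = find_peaks(-d)            # points closest to the inflection points
    # Keep troughs that sit far enough below the highest peak; this relative
    # height test stands in for the paper's edge-length requirement.
    corners = [t for t in troughs
               if peaks.size and d[peaks].max() - d[t] >= min_rel_height]
    return np.asarray(corners, dtype=int)
```

Given polar measurements, xy = np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles))) produces the Cartesian input, and xy[corner_candidates(xy)] yields the coordinates that would be inserted into the feature map as corner features.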

     

  • [1]
    HESS W, KOHLER D, RAPP H, et al. Real-time loop closure in 2D LIDAR SLAM[C]∥IEEE. Proceedings of the 2016 IEEE International Conference on Robotics and Automation. New York: IEEE, 2016: 1271-1278.
    [2]
    HIMSTEDT M, FROST J, HELLBACH S, et al. Large scale place recognition in 2D LIDAR scans using geometrical landmark relations[C]∥IEEE. 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE, 2014: 5030-5035.
    [3]
    TIPALDI G D, BRAUN M, ARRAS K O. FLIRT: interest regions for 2D range data with applications to robot navigation[C]∥KHATIB O, KUMAR V, SUKHATME G. Experimental Robotics. Berlin: Springer, 2014: 695-710.
    [4]
    WANG Yun-feng, WENG Xiu-ling, WU Wei, et al. Loop closure detection algorithm based on greedy strategy for visual SLAM[J]. Journal of Tianjin University: Science and Technology, 2017, 50 (12): 1262-1270. (in Chinese). https://www.cnki.com.cn/Article/CJFDTOTAL-TJDX201712007.htm
    [5]
    ZHU A Z, THAKUR D, ÖZASLAN T, et al. The multivehicle stereo event camera dataset: an event camera dataset for 3D perception[J]. IEEE Robotics and Automation Letters, 2018, 3 (3): 2032-2039. doi: 10.1109/LRA.2018.2800793
    [6]
    TAKETOMI T, UCHIYAMA H, IKEDA S. Visual SLAM algorithms: a survey from 2010 to 2016[J]. IPSJ Transactions on Computer Vision and Applications, 2017, 9 (1): 16-26. doi: 10.1186/s41074-017-0027-2
    [7]
    RUECKAUER B, DELBRUCK T. Evaluation of event-based algorithms for optical flow with ground-truth from inertial measurement sensor[J]. Frontiers in Neuroscience, 2016, 10 (137): 1-17.
    [8]
    GAO Xiang, ZHANG Tao. Unsupervised learning to detect loops using deep neural networks for visual SLAM system[J]. Autonomous Robots, 2017, 41: 1-18. doi: 10.1007/s10514-015-9516-2
    [9]
    TORRES-GONZÁLEZ A, DIOS J R M, OLLERO A. Range-only SLAM for robot-sensor network cooperation[J]. Autonomous Robots, 2018, 42: 649-663. doi: 10.1007/s10514-017-9663-8
    [10]
    LENAC K, KITANOV A, CUPEC R, et al. Fast planar surface 3D SLAM using LIDAR[J]. Robotics and Autonomous Systems, 2017, 92: 197-220. doi: 10.1016/j.robot.2017.03.013
    [11]
    ANDERT F, AMMANN N, KRAUSE S, et al. Optical-aided aircraft navigation using decoupled visual SLAM with range sensor augmentation[J]. Journal of Intelligent and Robotic Systems, 2017, 88 (2-4): 547-565. doi: 10.1007/s10846-016-0457-6
    [12]
    KANG Jun-min, ZHAO Xiang-mo, XU Zhi-gang. Classification method of running environment features for unmanned vehicle[J]. Journal of Traffic and Transportation Engineering, 2016, 16 (6): 140-148. (in Chinese). http://transport.chd.edu.cn/article/id/201606017
    [13]
    ADAMS M, ZHANG Sen, XIE Li-hua. Particle filter based outdoor robot localization using natural features extracted from laser scanners[C]∥IEEE. Proceedings of the 2004 IEEE International Conference on Robotics and Automation. New York: IEEE, 2004: 1493-1498.
    [14]
    VANDORPE J, VAN BRUSSEL H, XU H. Exact dynamic map building for a mobile robot using geometrical primitives produced by a 2D range finder[C]∥IEEE. Proceedings of the 1996 IEEE International Conference on Robotics and Automation. New York: IEEE, 1996: 901-908.
    [15]
    TAYLOR R M, PROBERT P J. Range finding and feature extraction by segmentation of images for mobile robot navigation[C]∥IEEE. Proceedings of the 1996 IEEE International Conference on Robotics and Automation. New York: IEEE, 1996: 95-100.
    [16]
    ADAMS M D, KERSTENS A. Tracking naturally occurring indoor features in 2D and 3D with LIDAR range/amplitude data[J]. International Journal of Robotics Research, 1998, 17 (9): 907-923. doi: 10.1177/027836499801700901
    [17]
    GUIVANT J, NEBOT E, BAIKER S. Autonomous navigation and map building using laser range sensors in outdoor applications[J]. Journal of Robotic Systems, 2000, 17 (10): 565-583. doi: 10.1002/1097-4563(200010)17:10<565::AID-ROB4>3.0.CO;2-6
    [18]
    GUIVANT J, MASSON F, NEBOT E. Simultaneous localization and map building using natural features and absolute information[J]. Robotics and Autonomous Systems, 2002, 40 (2/3): 79-90.
    [19]
    MAN Zeng-guang, YE Wen-hua, XIAO Hai-ning, et al. Method for corner feature extraction from laser scan data[J]. Journal of Nanjing University of Aeronautics and Astronautics, 2012, 44 (3): 379-383. (in Chinese). doi: 10.3969/j.issn.1005-2615.2012.03.017
    [20]
    FABRESSE F R, CABALLERO F, MAZA I, et al. Undelayed 3D RO-SLAM based on Gaussian-mixture and reduced spherical parametrization[C]∥IEEE. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE, 2013: 1555-1561.
    [21]
    EROL B A, VAISHNAV S, LABRADO J D, et al. Cloud-based control and vSLAM through cooperative mapping and localization[C]∥IEEE. 2016 World Automation Congress. New York: IEEE, 2016: 1-6.
    [22]
    ZHANG Sen, XIE Li-hua, ADAMS M. An efficient data association approach to simultaneous localization and map building[J]. International Journal of Robotics Research, 2005, 24 (1): 49-60. doi: 10.1177/0278364904049251
    [23]
    KANG Jun-min, ZHAO Xiang-mo, XU Zhi-gang. Loop closure detection of unmanned vehicle trajectory based on geometric relationship between features[J]. China Journal of Highway and Transport, 2017, 30 (1): 121-128, 135. (in Chinese). https://www.cnki.com.cn/Article/CJFDTOTAL-ZGGL201701015.htm
    [24]
    LI Yang-ming, SONG Quan-jun, LIU Hai, et al. General purpose LIDAR feature extractor for mobile robot navigation[J]. Journal of Huazhong University of Science and Technology: Natural Science Edition, 2013, 41 (S1): 280-283. (in Chinese). https://www.cnki.com.cn/Article/CJFDTOTAL-HZLG2013S1071.htm
    [25]
    LI Yang-ming, OLSON E B. Extracting general-purpose features from LIDAR data[C]∥IEEE. Proceedings of the 2010 IEEE International Conference on Robotics and Automation. New York: IEEE, 2010.