
U-shape defogging lane detection network for expressway based on fog-degraded images

SUI Sheng-chun, HE Yong-ming, PEI Yu-long, ZHANG Long-long, JIN Yu-feng

SUI Sheng-chun, HE Yong-ming, PEI Yu-long, ZHANG Long-long, JIN Yu-feng. U-shape defogging lane detection network for expressway based on fog-degraded images[J]. Journal of Traffic and Transportation Engineering, 2025, 25(6): 169-185. doi: 10.19818/j.cnki.1671-1637.2025.06.015


doi: 10.19818/j.cnki.1671-1637.2025.06.015

Details
    Author profile:

    SUI Sheng-chun (1995-), male, from Chaoyang, Liaoning, PhD candidate in engineering at Northeast Forestry University; research interests: environment perception for intelligent vehicles on expressways and vehicle control on super-high-speed expressways

    Corresponding author:

    HE Yong-ming (1979-), male, from Guangshui, Hubei, associate professor at Northeast Forestry University, PhD in engineering

  • CLC number: U491.52

U-shape defogging lane detection network for expressway based on fog-degraded images

Funds: 

Natural Science Foundation of Heilongjiang LH2023E011

National Natural Science Foundation of China 52572369

Open Fund of National Key Laboratory of Green and Long-life Road Engineering in Extreme Environment of Changsha University of Science & Technology kfj230105

More Information
    Corresponding author: HE Yong-ming (1979-), male, associate professor, PhD, hymjob@nefu.edu.cn
  • Abstract: A U-shape defogging lane detection network for expressways based on fog-degraded images (UDLD-Net) was proposed, integrating two core modules: image defogging enhancement and lightweight lane detection. The defogging enhancement module UD-Net was built on the U-Net architecture, introducing a multi-scale adaptive boosting decoder and a back-projection feature fusion module, and achieving accurate estimation of the atmospheric light through a hierarchical feature extraction mechanism. A lightweight lane detection network (LD-Net) was constructed by defining retrieval regions from the spatial priors of lane lines, reducing the computation to 1/2 of that of full-image detection. Feature-flip fusion was adopted to strengthen the robustness of symmetric features, a second-order difference loss function was introduced to constrain the curvature smoothness of lane lines, together with one-dimensional feature processing and a dual-loss design. Research results show that on the SF-Highway dataset, images processed by the UD-Net defogging enhancement module reach a structural similarity of 0.867 and a peak signal-to-noise ratio of 21.527 dB, significantly improving the contrast and detail clarity of foggy images. The lightweight detection network LD-Net achieves a detection speed of 262 frames·s-1 and an F1 score of 96.52% on the TuSimple dataset, effectively balancing detection speed and accuracy. UDLD-Net achieves a detection speed of 269 frames·s-1 and an F1 score of 91.16% on the real foggy Haze-Highway dataset, improving accuracy by 5.8% and speed by 3.7 times over traditional semantic segmentation methods. The network stably maintains high accuracy and real-time performance under different fog concentrations, and the coordinated design of defogging enhancement and lightweight detection effectively balances detection performance and computational efficiency. Verified by extensive experiments, UDLD-Net reaches an advanced performance level in both accuracy and speed for lane detection in fog, and can provide an efficient and reliable lane detection solution for the environment perception of intelligent vehicles on foggy expressways.
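The defogging described in the abstract rests on the standard atmospheric scattering (Koschmieder) model, I(x) = J(x)t(x) + A(1 − t(x)), where I is the foggy image, J the clear scene radiance, t the transmission map, and A the atmospheric light that UD-Net estimates. A minimal sketch of the model's inversion step, assuming t and A are already given (this is the generic model inversion, not UD-Net's actual network layers):

```python
import numpy as np

def recover_scene(I, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    I: foggy image, float array in [0, 1], shape (H, W, 3)
    t: transmission map, shape (H, W)
    A: atmospheric light, shape (3,)
    t_min: lower bound on t, avoids amplifying noise where fog is densest
    """
    t = np.clip(t, t_min, 1.0)[..., None]  # broadcast over color channels
    J = (I - A) / t + A                    # solve the model for scene radiance J
    return np.clip(J, 0.0, 1.0)
```

Clipping the transmission from below is the usual guard: as t approaches 0 in dense fog, the division would otherwise blow up sensor noise.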

     

  • Figure  1.  Degradation process of foggy image

    Figure  2.  Lane detection in fog-degraded image

    Figure  3.  UD-Net image defogging enhancement module

    Figure  4.  Estimation of A

    Figure  5.  Lane information retrieval

    Figure  6.  General architecture of lane detection network

    Figure  7.  Visualization of visible edge information for images before and after defogging

    Figure  8.  Visualization of defogging effects of different defogging methods

    Figure  9.  Comparison of dehazing evaluation metrics between SF-Highway and Haze-Highway datasets

    Figure  10.  Visualization of UDLD-Net lane detection

    Figure  11.  Visualization comparison of lane detection before and after dehazing enhancement

    Table  1.   Meanings of evaluation parameters

    Symbol Definition
    Ps Peak signal-to-noise ratio, reflecting how much the image is distorted by processing; within 10-30 dB, larger values indicate better image quality
    S(I, J) Structural similarity between two images, in [0, 1]; the closer to 1, the more similar the two images
    e Ratio of newly visible edges after defogging; larger values indicate that more edge information hidden by fog has been recovered
    σ Black/white (saturated) pixel ratio; the smaller its share of the whole image, the more colorful the visual effect
    γ Gradient ratio on visible edges before and after defogging; larger values indicate higher contrast and better visual quality
    Pr Precision: the proportion of true positives among all samples predicted as positive
    Re Recall: the proportion of actual positives correctly predicted as positive
    F1 F1 score: the harmonic mean of precision and recall
    Acc Detection accuracy: the proportion of all samples predicted correctly; higher is better
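The detection metrics defined in Table 1 follow directly from the confusion counts. A minimal sketch (the count arguments are hypothetical inputs, not values from any table here):

```python
def detection_metrics(tp, fp, tn, fn):
    """Precision, recall, F1 and accuracy as defined in Table 1.

    tp/fp/tn/fn: true/false positive and true/false negative counts.
    """
    pr = tp / (tp + fp)                    # precision: correct among predicted positives
    re = tp / (tp + fn)                    # recall: correct among actual positives
    f1 = 2 * pr * re / (pr + re)           # harmonic mean of precision and recall
    acc = (tp + tn) / (tp + fp + tn + fn)  # fraction of all samples predicted correctly
    return pr, re, f1, acc
```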

    Table  2.   Classification of fog concentration

    Fog grade: no fog | mist | fog | dense fog | extremely dense fog
    Horizontal visibility/m: >1 000 | (500, 1 000] | (200, 500] | (50, 200] | [0, 50]
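The visibility thresholds of Table 2 amount to a simple classifier. A sketch assuming visibility is given in metres (the English grade names mirror the table above):

```python
def fog_level(visibility_m):
    """Map horizontal visibility (m) to the fog grade of Table 2."""
    if visibility_m > 1000:
        return "no fog"
    if visibility_m > 500:
        return "mist"                 # (500, 1 000]
    if visibility_m > 200:
        return "fog"                  # (200, 500]
    if visibility_m > 50:
        return "dense fog"            # (50, 200]
    return "extremely dense fog"      # [0, 50]
```

Note the half-open intervals: each upper bound belongs to the lighter grade, so a visibility of exactly 500 m counts as fog, not mist.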

    Table  3.   Outdoor training set (OTS) test results

    Method DCP DehazeNet MSCNN AOD-Net GCANet UD-Net
    Ps/dB 25.479 24.358 24.546 26.235 22.177 28.731
    S(I, J) 0.796 0.816 0.887 0.694 0.851 0.827
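The Ps values reported in Tables 3-5 are peak signal-to-noise ratios (Table 1). A minimal sketch for float images, assuming they are normalised to [0, peak]:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(peak ** 2 / mse)
```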

    Table  4.   SF-Highway dataset defogging effect test results

    Method DCP DehazeNet MSCNN AOD-Net GCANet UD-Net
    Ps/dB 17.043 19.632 17.465 19.286 21.779 21.527
    S(I, J) 0.767 0.706 0.835 0.629 0.727 0.867
    e 0.44 0.38 0.61 0.51 0.57 0.63
    σ/% 1.563 1.221 0.046 0.269 0.831 1.097
    γ 1.46 2.33 0.97 1.78 1.06 1.83
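The no-reference indicators σ and e used in Tables 4-5 can be sketched as follows, assuming images normalised to [0, 1] and that visible-edge counts come from an external edge detector (the paper's exact visible-edge extraction method is not reproduced here):

```python
import numpy as np

def saturated_ratio(img, eps=1 / 255):
    """Sigma of Table 1: percentage of pure black or pure white pixels."""
    gray = img.mean(axis=-1) if img.ndim == 3 else img
    sat = (gray <= eps) | (gray >= 1 - eps)   # pixels clipped to black/white
    return 100.0 * sat.mean()                 # percentage, as in Tables 4-5

def new_edge_ratio(n_fog, n_defog):
    """e of Table 1: relative growth in visible-edge count after defogging."""
    return (n_defog - n_fog) / n_fog
```

For example, UD-Net's e = 0.63 on SF-Highway means the defogged images show 63% more visible edges than the foggy inputs.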

    Table  5.   Defogging effect test results of Haze-Highway dataset

    Method DCP DehazeNet MSCNN AOD-Net GCANet UD-Net
    Ps/dB 18.591 15.252 17.314 16.935 20.358 20.077
    S(I, J) 0.717 0.679 0.876 0.538 0.694 0.796
    e 0.51 0.44 0.63 0.59 0.61 0.67
    σ/% 1.103 1.098 0.034 0.255 0.430 1.222
    γ 1.190 1.976 0.980 1.521 0.879 1.567

    Table  6.   Comparison results of tests on TuSimple dataset

    Method Acc/% F1/% FPR FNR Fr/(frame·s-1)
    SCNN 96.53 95.97 6.17 1.80 7
    RESA 96.82 96.93 3.63 2.48 45
    LSPT 96.18 96.85 2.91 3.38 47
    FastDraw 93.92 95.20 7.60 5.40 90
    UFLD-ResNet18 95.65 96.16 3.06 4.61 250
    UFLD-ResNet34 95.56 96.22 3.18 4.37 171
    CLRNet 96.83 97.62 2.37 2.38 151
    UDLD-Net 96.61 96.52 2.23 2.36 262

    Table  7.   Comparison results of SF-Highway dataset tests

    Method Acc/% F1/% Pr/% Re/% Fr/(frame·s-1)
    SCNN 90.14 88.55 89.73 87.42 5
    RESA 91.76 92.75 91.09 94.47 37
    UFLD-ResNet18 93.39 93.64 94.44 92.85 223
    UFLD-ResNet34 94.55 94.86 95.86 93.88 157
    CLRNet 96.18 94.36 93.98 94.76 139
    PointLaneNet 86.62 85.28 86.04 84.53 101
    UDLD-Net 95.97 94.93 95.66 94.21 240

    Table  8.   Comparison results of Haze-Highway dataset tests

    Method Acc/% F1/% Pr/% Re/% Fr/(frame·s-1)
    SCNN 92.08 87.31 90.44 88.53 27
    RESA 89.35 90.85 90.03 89.76 46
    UFLD-ResNet18 90.63 89.89 92.85 90.52 239
    UFLD-ResNet34 89.74 90.86 91.20 87.90 182
    CLRNet 92.63 89.44 90.69 90.99 177
    PointLaneNet 84.65 84.71 82.33 80.54 141
    UDLD-Net 92.55 91.16 91.22 91.17 269
Figures (11) / Tables (8)
Publication history
  • Received: 2024-09-12
  • Revised: 2025-05-19
  • Accepted: 2025-07-07
  • Published: 2025-12-28
