U-shape defogging lane detection network for expressway based on fog-degraded images
Abstract: A U-shape defogging lane detection network (UDLD-Net) for expressways based on fog-degraded images was proposed, integrating two core modules: image defogging enhancement and lightweight lane detection. The defogging enhancement module UD-Net was built upon the U-Net architecture, innovatively incorporating a multi-scale adaptive boosting decoder and a back-projection feature fusion module; through a hierarchical feature extraction mechanism, precise estimation of the atmospheric light was achieved. Search regions were defined by leveraging the spatial priors of lane lines, and a lightweight lane detection network LD-Net was constructed, reducing the computational load to half that of full-image detection. Feature flipping fusion was adopted to enhance the robustness of symmetric features, and a second-order difference loss function was introduced to constrain the smoothness of lane curvature, combined with one-dimensional feature processing and a dual-loss design. Research results show that the defogging enhancement module UD-Net, tested on the SF-Highway dataset, achieves a structural similarity of 0.867 and raises the peak signal-to-noise ratio (PSNR) of processed images to 21.527 dB, significantly improving the contrast and detail clarity of foggy images. The lightweight detection network LD-Net realizes a detection speed of 262 frames per second (FPS) and an F1 score of 96.52% on the TuSimple dataset, effectively balancing detection speed and accuracy. UDLD-Net achieves a detection speed of 269 FPS and an F1 score of 91.16% on the real-world Haze-Highway fog dataset, a 5.8% accuracy improvement and a 3.7-fold speed increase over traditional semantic segmentation methods. The network maintains high accuracy and real-time performance across scenarios with different fog concentrations, and its synergistic design of defogging enhancement and lightweight detection effectively balances detection performance and computational efficiency. Verified through extensive experiments, UDLD-Net attains advanced performance in both accuracy and speed for foggy-day lane detection, providing an efficient and reliable lane detection solution for intelligent vehicle environment perception on foggy expressways.
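For context, single-image dehazing of this kind is conventionally grounded in the atmospheric scattering model (McCartney [23]; see also Koschmieder's model in [22] and the dark channel prior in [47]), which is where the global atmospheric light that UD-Net estimates appears. The abstract does not restate the formulation, so the standard form is given here:

    I(x) = J(x) t(x) + A [1 - t(x)],    t(x) = exp[-β d(x)]

where I(x) is the observed foggy image, J(x) is the fog-free scene radiance to be recovered, A is the global atmospheric light, t(x) is the medium transmission, β is the scattering coefficient, and d(x) is the scene depth.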
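Two of the lane-side mechanisms named in the abstract, feature flipping fusion and the second-order difference smoothness constraint, can be made concrete with a minimal PyTorch sketch. The function names, the tensor shapes, and the averaging fusion operator below are illustrative assumptions, not the paper's exact implementation:

import torch

def flip_fusion(feat):
    # Hypothetical fusion of a feature map (N, C, H, W) with its
    # left-right mirror; a simple average is assumed here, since the
    # abstract only states that flipped features are fused to
    # strengthen symmetric lane structure.
    return 0.5 * (feat + torch.flip(feat, dims=[3]))

def second_order_diff_loss(x):
    # x: (N, R) predicted lane x-coordinates at R row anchors.
    # The discrete second difference x[r+1] - 2*x[r] + x[r-1]
    # approximates curvature; penalizing its magnitude keeps the
    # detected lane smooth, as the abstract describes.
    d2 = x[:, 2:] - 2.0 * x[:, 1:-1] + x[:, :-2]
    return d2.abs().mean()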
Table 1. Meaning of evaluation parameters
Symbol   Definition
Ps       Peak signal-to-noise ratio (dB), reflecting how much distortion processing introduces; within 10-30 dB, the larger the value, the better the image quality
S(I,J)   Structural similarity, measuring how alike two images are; valued in [0, 1], and the closer to 1, the more similar the two images
e        Ratio of newly visible edges before and after defogging; a larger value indicates that edge information increases after defogging and edges hidden by fog become visible
σ        Proportion of saturated black or white pixels in the whole image; the smaller the proportion, the more colorful the visual result
γ        Gradient ratio of visible edges before and after defogging; a larger value means stronger color contrast and a better visual result
Pr       Precision: the proportion of true positives among all samples predicted as positive by the model
Re       Recall: the proportion of actual positives correctly predicted as positive by the model
F1       F1 score: the harmonic mean of precision and recall
Acc      Detection accuracy: the proportion of all samples predicted correctly; the higher the score, the higher the detection accuracy

Table 2. Classification of fog concentration

Fog grade                  No fog   Mist           Heavy fog    Dense fog    Extremely dense fog
Horizontal visibility/m    >1 000   (500, 1 000]   (200, 500]   (50, 200]    [0, 50]
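As a concrete reading of Tables 1 and 2, the sketch below implements the standard formulas behind Ps, F1, and the fog grading in Python; the function names and the 255 peak value (8-bit images) are assumptions for illustration, not code from the paper:

import numpy as np

def ps(clean, processed, peak=255.0):
    # Peak signal-to-noise ratio in dB (Ps in Table 1): within
    # 10-30 dB, larger values mean less distortion.
    mse = np.mean((clean.astype(np.float64) - processed.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def f1(tp, fp, fn):
    # F1 (Table 1) is the harmonic mean of precision Pr = TP/(TP+FP)
    # and recall Re = TP/(TP+FN).
    pr = tp / (tp + fp)
    re = tp / (tp + fn)
    return 2.0 * pr * re / (pr + re)

def fog_grade(visibility_m):
    # Map horizontal visibility in metres to the grades of Table 2.
    if visibility_m > 1000:
        return "no fog"
    if visibility_m > 500:
        return "mist"
    if visibility_m > 200:
        return "heavy fog"
    if visibility_m > 50:
        return "dense fog"
    return "extremely dense fog"

Applied to a ground-truth clear frame and its defogged counterpart, ps corresponds to the Ps/dB metric reported in Tables 3-5.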
Table 3. Outdoor training set (OTS) test results
Method   DCP      DehazeNet   MSCNN    AOD-Net   GCANet   UD-Net
Ps/dB    25.479   24.358      24.546   26.235    22.177   28.731
S(I,J)   0.796    0.816       0.887    0.694     0.851    0.827
Table 4. SF-Highway dataset defogging effect test results
Method   DCP      DehazeNet   MSCNN    AOD-Net   GCANet   UD-Net
Ps/dB    17.043   19.632      17.465   19.286    21.779   21.527
S(I,J)   0.767    0.706       0.835    0.629     0.727    0.867
e        0.44     0.38        0.61     0.51      0.57     0.63
σ/%      1.563    1.221       0.046    0.269     0.831    1.097
γ        1.46     2.33        0.97     1.78      1.06     1.83
Table 5. Defogging effect test results of Haze-Highway dataset
Method   DCP      DehazeNet   MSCNN    AOD-Net   GCANet   UD-Net
Ps/dB    18.591   15.252      17.314   16.935    20.358   20.077
S(I,J)   0.717    0.679       0.876    0.538     0.694    0.796
e        0.51     0.44        0.63     0.59      0.61     0.67
σ/%      1.103    1.098       0.034    0.255     0.430    1.222
γ        1.190    1.976       0.980    1.521     0.879    1.567
Table 6. Comparison results of tests on TuSimple dataset
Method          Acc/%   F1/%    FPR    FNR    Fr/(frame·s⁻¹)
SCNN            96.53   95.97   6.17   1.80   7
RESA            96.82   96.93   3.63   2.48   45
LSPT            96.18   96.85   2.91   3.38   47
FastDraw        93.92   95.20   7.60   5.40   90
UFLD-ResNet18   95.65   96.16   3.06   4.61   250
UFLD-ResNet34   95.56   96.22   3.18   4.37   171
CLRNet          96.83   97.62   2.37   2.38   151
UDLD-Net        96.61   96.52   2.23   2.36   262
Table 7. Comparison results of SF-Highway dataset tests
Method          Acc/%   F1/%    Pr/%    Re/%    Fr/(frame·s⁻¹)
SCNN            90.14   88.55   89.73   87.42   5
RESA            91.76   92.75   91.09   94.47   37
UFLD-ResNet18   93.39   93.64   94.44   92.85   223
UFLD-ResNet34   94.55   94.86   95.86   93.88   157
CLRNet          96.18   94.36   93.98   94.76   139
PointLaneNet    86.62   85.28   86.04   84.53   101
UDLD-Net        95.97   94.93   95.66   94.21   240
Table 8. Comparison results of Haze-Highway dataset tests
Method          Acc/%   F1/%    Pr/%    Re/%    Fr/(frame·s⁻¹)
SCNN            92.08   87.31   90.44   88.53   27
RESA            89.35   90.85   90.03   89.76   46
UFLD-ResNet18   90.63   89.89   92.85   90.52   239
UFLD-ResNet34   89.74   90.86   91.20   87.90   182
CLRNet          92.63   89.44   90.69   90.99   177
PointLaneNet    84.65   84.71   82.33   80.54   141
UDLD-Net        92.55   91.16   91.22   91.17   269
[1] SHA H, SINGH M K, HAOUARI R, et al. Network-wide safety impacts of dedicated lanes for connected and autonomous vehicles[J]. Accident Analysis & Prevention, 2024, 195: 107424.
[2] DAI Zhe, WU Yu-xuan, DONG Shi, et al. Global vehicle trajectories and traffic parameters detecting method in expressway based on radar and vision sensor fusion[J]. Journal of Traffic and Transportation Engineering, 2025, 25(1): 197-210. doi: 10.19818/j.cnki.1671-1637.2025.01.014
[3] XIE Xian-yi, ZHAO Xin, JIN Li-sheng, et al. Trajectory tracking control of intelligent vehicles based on deep reinforcement learning and rolling horizon optimization[J]. Journal of Traffic and Transportation Engineering, 2024, 24(6): 259-272. doi: 10.19818/j.cnki.1671-1637.2024.06.018
[4] NISHINO K, KRATZ L, LOMBARDI S. Bayesian defogging[J]. International Journal of Computer Vision, 2012, 98(3): 263-278. doi: 10.1007/s11263-011-0508-1
[5] BORKAR A, HAYES M, SMITH M T. A novel lane detection system with efficient ground truth generation[J]. IEEE Transactions on Intelligent Transportation Systems, 2012, 13(1): 365-374. doi: 10.1109/TITS.2011.2173196
[6] HOQUE S, XU S X, MAITI A, et al. Deep learning for 6D pose estimation of objects—A case study for autonomous driving[J]. Expert Systems with Applications, 2023, 223: 119838. doi: 10.1016/j.eswa.2023.119838
[7] ZHENG T, HUANG Y F, LIU Y, et al. CLRNet: Cross layer refinement network for lane detection[C]//IEEE. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2022: 888-897.
[8] CHEN Z P, LIU Q F, LIAN C F. PointLaneNet: Efficient end-to-end CNNs for accurate real-time lane detection[C]//IEEE. 2019 IEEE Intelligent Vehicles Symposium (IV). New York: IEEE, 2019: 2563-2568.
[9] WANG Chang, LI Yong-hang, ZHANG Kai-chao, et al. Lane line distance detection based on fusion segmentation and a variable-scale window[J]. China Journal of Highway and Transport, 2023, 36(7): 212-222.
[10] ZHOU Z, HU Z Z, LI N, et al. Enhancing vehicle localization by matching HD map with road marking detection[J]. Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, 2024, 238(13): 4129-4141. doi: 10.1177/09544070231191156
[11] WU Hua-yue, ZHAO Xiang-mo. Multi-interference lane recognition based on IPM and edge image filtering[J]. China Journal of Highway and Transport, 2020, 33(5): 153-164.
[12] QU Z, JIN H, ZHOU Y, et al. Focus on local: Detecting lane marker from bottom up via key point[C]//IEEE. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2021: 14117-14125.
[13] ZHENG T, FANG H, ZHANG Y, et al. RESA: Recurrent feature-shift aggregator for lane detection[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(4): 3547-3554. doi: 10.1609/aaai.v35i4.16469
[14] LEE S, KIM J, YOON J S, et al. VPGNet: Vanishing point guided network for lane and road marking detection and recognition[C]//IEEE. 2017 IEEE International Conference on Computer Vision (ICCV). New York: IEEE, 2017: 1965-1973.
[15] LI X, LI J, HU X L, et al. Line-CNN: End-to-end traffic line detection with line proposal unit[J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 21(1): 248-258. doi: 10.1109/TITS.2019.2890870
[16] LIU L Z, CHEN X H, ZHU S Y, et al. CondLaneNet: A top-to-down lane detection framework based on conditional convolution[C]//IEEE. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). New York: IEEE, 2021: 3753-3762.
[17] ZHANG Y J, ZHU L, FENG W, et al. VIL-100: A new dataset and a baseline model for video instance lane detection[C]//IEEE. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). New York: IEEE, 2021: 15661-15670.
[18] RAN H, YIN Y F, HUANG F L, et al. FLAMNet: A flexible line anchor mechanism network for lane detection[J]. IEEE Transactions on Intelligent Transportation Systems, 2023, 24(11): 12767-12778. doi: 10.1109/TITS.2023.3290991
[19] ZHANG R H, PENG J T, GOU W T, et al. A robust and real-time lane detection method in low-light scenarios to advanced driver assistance systems[J]. Expert Systems with Applications, 2024, 256: 124923. doi: 10.1016/j.eswa.2024.124923
[20] NAROTE S P, BHUJBAL P N, NAROTE A S, et al. A review of recent advances in lane detection and departure warning system[J]. Pattern Recognition, 2018, 73: 216-234. doi: 10.1016/j.patcog.2017.08.014
[21] QIU Y S, LU Y Y, WANG Y T, et al. IDOD-YOLOV7: Image-dehazing YOLOV7 for object detection in low-light foggy traffic environments[J]. Sensors, 2023, 23(3): 1347. doi: 10.3390/s23031347
[22] KAR A, DHARA S K, SEN D, et al. Zero-shot single image restoration through controlled perturbation of Koschmieder's model[C]//IEEE. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2021: 16205-16215.
[23] MCCARTNEY E J, HALL F F. Optics of the atmosphere: Scattering by molecules and particles[J]. Physics Today, 1977, 30(5): 76-77. doi: 10.1063/1.3037551
[24] KIM S E, PARK T H, EOM I K. Fast single image dehazing using saturation based transmission map estimation[J]. IEEE Transactions on Image Processing, 2019, 29: 1985-1998.
[25] ZHU Q S, MAI J M, SHAO L. A fast single image haze removal algorithm using color attenuation prior[J]. IEEE Transactions on Image Processing, 2015, 24(11): 3522-3533. doi: 10.1109/TIP.2015.2446191
[26] CAI B L, XU X M, JIA K, et al. DehazeNet: An end-to-end system for single image haze removal[J]. IEEE Transactions on Image Processing, 2016, 25(11): 5187-5198. doi: 10.1109/TIP.2016.2598681
[27] REN W Q, PAN J S, ZHANG H, et al. Single image dehazing via multi-scale convolutional neural networks with holistic edges[J]. International Journal of Computer Vision, 2020, 128(1): 240-259. doi: 10.1007/s11263-019-01235-8
[28] LI B Y, PENG X L, WANG Z Y, et al. AOD-Net: All-in-one dehazing network[C]//IEEE. 2017 IEEE International Conference on Computer Vision (ICCV). New York: IEEE, 2017: 4780-4788.
[29] QIN M J, XIE F Y, LI W, et al. Dehazing for multispectral remote sensing images based on a convolutional neural network with the residual architecture[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(5): 1645-1655. doi: 10.1109/JSTARS.2018.2812726
[30] LIU X H, MA Y R, SHI Z H, et al. GridDehazeNet: Attention-based multi-scale network for image dehazing[C]//IEEE. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). New York: IEEE, 2019: 7313-7322.
[31] CHEN D D, HE M M, FAN Q N, et al. Gated context aggregation network for image dehazing and deraining[C]//IEEE. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). New York: IEEE, 2019: 1375-1383.
[32] DONG Y, LIU Y H, ZHANG H, et al. FD-GAN: Generative adversarial networks with fusion-discriminator for single image dehazing[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 10729-10736. doi: 10.1609/aaai.v34i07.6701
[33] HUANG P C, ZHAO L, JIANG R H, et al. Self-filtering image dehazing with self-supporting module[J]. Neurocomputing, 2021, 432: 57-69.
[34] RONNEBERGER O, FISCHER P, BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]//Springer. Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015. Berlin: Springer, 2015: 234-241.
[35] MA J Y, PENG C L, TIAN X, et al. DBDnet: A deep boosting strategy for image denoising[J]. IEEE Transactions on Multimedia, 2021, 24: 3157-3168.
[36] RASTIVEIS H, SHAMS A, SARASUA W A, et al. Automated extraction of lane markings from mobile LiDAR point clouds based on fuzzy inference[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 160: 149-166. doi: 10.1016/j.isprsjprs.2019.12.009
[37] TIAN W, YU X W, HU H H, et al. Interactive attention learning on detection of lane and lane marking on the road by monocular camera image[J]. Sensors, 2023, 23(14): 6545. doi: 10.3390/s23146545
[38] PAN X G, SHI J P, LUO P, et al. Spatial as deep: Spatial CNN for traffic scene understanding[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2018, 32(1): 7276-7283.
[39] ZOU Q, JIANG H W, DAI Q Y, et al. Robust lane detection from continuous driving scenes using deep neural networks[J]. IEEE Transactions on Vehicular Technology, 2020, 69(1): 41-54. doi: 10.1109/TVT.2019.2949603
[40] GHAFOORIAN M, NUGTEREN C, BAKA N, et al. EL-GAN: Embedding loss driven generative adversarial networks for lane detection[C]//Springer. Computer Vision—ECCV 2018 Workshops. Berlin: Springer, 2019: 256-272.
[41] YE Y Y, HAO X L, CHEN H J. Lane detection method based on lane structural analysis and CNNs[J]. IET Intelligent Transport Systems, 2018, 12(6): 513-520. doi: 10.1049/iet-its.2017.0143
[42] SHUNMUGA PERUMAL P, WANG Y, SUJASREE M, et al. LaneScanNET: A deep-learning approach for simultaneous detection of obstacle-lane states for autonomous driving systems[J]. Expert Systems with Applications, 2023, 233: 120970. doi: 10.1016/j.eswa.2023.120970
[43] MUNIR F, AZAM S, JEON M, et al. LDNet: End-to-end lane marking detection approach using a dynamic vision sensor[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(7): 9318-9334. doi: 10.1109/TITS.2021.3102479
[44] QIN Z Q, ZHANG P Y, LI X. Ultra fast deep lane detection with hybrid anchor driven ordinal classification[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(5): 2555-2568. doi: 10.1109/TPAMI.2022.3182097
[45] AL MAMUN A, EM P P, HOSSEN M J, et al. A deep learning approach for lane marking detection applying encode-decode instant segmentation network[J]. Heliyon, 2023, 9(3): e14212. doi: 10.1016/j.heliyon.2023.e14212
[46] XIAO D G, YANG X F, LI J F, et al. Attention deep neural network for lane marking detection[J]. Knowledge-Based Systems, 2020, 194: 105584. doi: 10.1016/j.knosys.2020.105584
[47] HE K M, SUN J, TANG X O. Single image haze removal using dark channel prior[C]//IEEE. 2009 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2009: 1956-1963.
[48] LIU R J, YUAN Z J, LIU T, et al. End-to-end lane shape prediction with transformers[C]//IEEE. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). New York: IEEE, 2021: 3693-3701.
[49] PHILION J. FastDraw: Addressing the long tail of lane detection by adapting a sequential prediction network[C]//IEEE. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New York: IEEE, 2019: 11574-11583.