A journal of the IEEE and the CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 9, Issue 7, Jul. 2022

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: H. Tian, T. Deng, and H. M. Yan, “Driving as well as on a sunny day? Predicting driver’s fixation in rainy weather conditions via a dual-branch visual model,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 7, pp. 1335–1338, Jul. 2022. doi: 10.1109/JAS.2022.105716

Driving as well as on a Sunny Day? Predicting Driver’s Fixation in Rainy Weather Conditions via a Dual-Branch Visual Model

doi: 10.1109/JAS.2022.105716
