A journal of the IEEE and the Chinese Association of Automation (CAA), publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation.

IEEE/CAA Journal of Automatica Sinica

Z. Li, J. Li, Q. Wang, and Y. Yin, “Counterfactual-guided implicit correspondence prompting for visible-infrared person re-identification,” IEEE/CAA J. Autom. Sinica, vol. 13, no. 2, pp. 477–479, Feb. 2026. doi: 10.1109/JAS.2025.125432

Counterfactual-Guided Implicit Correspondence Prompting for Visible-Infrared Person Re-Identification

doi: 10.1109/JAS.2025.125432

    Figures(1)  / Tables(3)
