IEEE/CAA Journal of Automatica Sinica
Citation: S. S. Mei, Y. Ma, X. G. Mei, J. Huang, and F. Fan, “S2-Net: Self-supervision guided feature representation learning for cross-modality images,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 10, pp. 1883–1885, Oct. 2022. doi: 10.1109/JAS.2022.105884
[1] J. Ma, X. Jiang, A. Fan, J. Jiang, and J. Yan, “Image matching from handcrafted to deep features: A survey,” Int. J. Comput. Vis., vol. 129, no. 1, pp. 23–79, 2021. doi: 10.1007/s11263-020-01359-2
[2] J. Ma, Z. Li, K. Zhang, Z. Shao, and G. Xiao, “Robust feature matching via neighborhood manifold representation consensus,” ISPRS J. Photogramm. Remote Sens., vol. 183, pp. 196–209, 2022. doi: 10.1016/j.isprsjprs.2021.11.004
[3] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004. doi: 10.1023/B:VISI.0000029664.99615.94
[4] J. Li, Q. Hu, and M. Ai, “RIFT: Multi-modal image matching based on radiation-variation insensitive feature transform,” IEEE Trans. Image Process., vol. 29, pp. 3296–3310, 2020.
[5] X. Jiang, J. Ma, G. Xiao, Z. Shao, and X. Guo, “A review of multimodal image matching: Methods and applications,” Inf. Fusion, vol. 73, pp. 22–71, 2021. doi: 10.1016/j.inffus.2021.02.012
[6] C. A. Aguilera, A. D. Sappa, C. Aguilera, and R. Toledo, “Cross-spectral local descriptors via quadruplet network,” Sensors, vol. 17, no. 4, p. 873, 2017.
[7] H. Zhang, W. Ni, W. Yan, D. Xiang, J. Wu, X. Yang, and H. Bian, “Registration of multimodal remote sensing image based on deep fully convolutional neural network,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 12, no. 8, pp. 3028–3042, 2019. doi: 10.1109/JSTARS.2019.2916560
[8] M. Dusmanu, I. Rocco, T. Pajdla, M. Pollefeys, J. Sivic, A. Torii, and T. Sattler, “D2-Net: A trainable CNN for joint description and detection of local features,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 8092–8101.
[9] J. Revaud, C. De Souza, M. Humenberger, and P. Weinzaepfel, “R2D2: Reliable and repeatable detector and descriptor,” Adv. Neural Inf. Process. Syst., vol. 32, pp. 12405–12415, 2019.
[10] D. DeTone, T. Malisiewicz, and A. Rabinovich, “SuperPoint: Self-supervised interest point detection and description,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, 2018, pp. 224–236.
[11] S. Cui, A. Ma, Y. Wan, Y. Zhong, B. Luo, and M. Xu, “Cross-modality image matching network with modality-invariant feature representation for airborne-ground thermal infrared and visible datasets,” IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1–14, 2022. doi: 10.1109/TGRS.2021.3099506
[12] H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling, “U2Fusion: A unified unsupervised image fusion network,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 1, pp. 502–518, 2022. doi: 10.1109/TPAMI.2020.3012548
[13] M. Brown and S. Süsstrunk, “Multi-spectral SIFT for scene category recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2011, pp. 177–184.
[14] Y. Xiang, R. Tao, F. Wang, H. You, and B. Han, “Automatic registration of optical and SAR images via improved phase congruency model,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 13, pp. 5847–5861, 2020. doi: 10.1109/JSTARS.2020.3026162