Volume 13
Issue 1
IEEE/CAA Journal of Automatica Sinica
Citation: Y. He and X. Luo, “Tensor low-rank orthogonal compression for convolutional neural networks,” IEEE/CAA J. Autom. Sinica, vol. 13, no. 1, pp. 227–229, Jan. 2026. doi: 10.1109/JAS.2025.125213
[1] L. Fan, S. Li, Y. Li, et al., “Pavement cracks coupled with shadows: A new shadow-crack dataset and a shadow-removal-oriented crack detection approach,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 7, pp. 1593–1607, Jul. 2023. doi: 10.1109/JAS.2023.123447
[2] Y. Shi, N. Wang, and X. Guo, “YOLOV: Making still image object detectors great at video object detection,” in Proc. 37th AAAI Conf. Artif. Intell., 2023, pp. 2254–2262.
[3] S. Gao, Z. Peng, H. Wang, et al., “Safety-critical model-free control for multi-target tracking of USV with collision avoidance,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 7, pp. 1323–1326, Jul. 2022.
[4] C. Zhu, J. Yang, Z. Shao, et al., “Vision based hand gesture recognition using 3D shape context,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 9, pp. 1600–1613, Sep. 2021. doi: 10.1109/JAS.2019.1911534
[5] Y. Sui, M. Yin, Y. Gong, et al., “ELRT: Efficient low-rank training for compact convolutional neural networks,” arXiv preprint arXiv:2401.10341, 2024.
[6] C. Leng, H. Zhang, G. Cai, et al., “Total variation constrained non-negative matrix factorization for medical image registration,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 5, pp. 1025–1037, May 2021. doi: 10.1109/JAS.2021.1003979
[7] L. Chen and X. Luo, “Tensor distribution regression based on the 3D conventional neural networks,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 7, pp. 1628–1630, Jul. 2023. doi: 10.1109/JAS.2023.123591
[8] N. Zeng, X. Li, P. Wu, et al., “A novel tensor decomposition-based efficient detector for low-altitude aerial objects with knowledge distillation scheme,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 2, pp. 487–501, Feb. 2024. doi: 10.1109/JAS.2023.124029
[9] J. Xiao, C. Zhang, Y. Gong, et al., “Low-rank lottery tickets: Finding efficient low-rank neural networks via matrix differential equations,” in Proc. 35th Adv. Neural Inf. Process. Syst., 2024, pp. 20051–20063.
[10] W. Wang, Y. Sun, B. Eriksson, et al., “Wide compression: Tensor ring nets,” in Proc. IEEE/CVF 31st Conf. Comput. Vis. Pattern Recognit., 2018, pp. 9329–9338.
[11] H. Yang, M. Tang, W. Wen, et al., “Learning low-rank deep neural networks via singular vector orthogonality regularization and singular value sparsification,” in Proc. IEEE/CVF 32nd Conf. Comput. Vis. Pattern Recognit., 2020, pp. 678–679.
[12] F. Bi, X. Luo, B. Shen, et al., “Proximal alternating-direction-method-of-multipliers-incorporated nonnegative latent factor analysis,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 6, pp. 1388–1406, Jun. 2023. doi: 10.1109/JAS.2023.123474
[13] D. Savostianova, E. Zangrando, G. Ceruti, et al., “Robust low-rank training via approximate orthonormal constraints,” in Proc. 38th Adv. Neural Inf. Process. Syst., 2024, pp. 1–20.
[14] B. Shi and K. Liu, “Regularization by multiple dual frames for compressed sensing magnetic resonance imaging with convergence analysis,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 11, pp. 2136–2153, Nov. 2023. doi: 10.1109/JAS.2023.123543
[15] X. Ruan, Y. Liu, C. Yuan, et al., “EDP: An efficient decomposition and pruning scheme for convolutional neural network compression,” IEEE Trans. Neural Netw. Learn. Syst., vol. 32, no. 10, pp. 4499–4513, Oct. 2021. doi: 10.1109/TNNLS.2020.3018177
[16] G. Fang, X. Ma, M. Song, et al., “DepGraph: Towards any structural pruning,” in Proc. IEEE/CVF 34th Conf. Comput. Vis. Pattern Recognit., 2023, pp. 16091–16101.
[17] L. Binh and S. Woo, “ADD: Frequency attention and multi-view based knowledge distillation to detect low-quality compressed deepfake images,” in Proc. 36th AAAI Conf. Artif. Intell., 2022, pp. 122–130.
[18] L. Deng, G. Li, S. Han, et al., “Model compression and hardware acceleration for neural networks: A comprehensive survey,” Proc. IEEE, vol. 108, no. 4, pp. 485–532, Apr. 2020. doi: 10.1109/JPROC.2020.2976475
[19] Y. He and L. Xiao, “Structured pruning for deep convolutional neural networks: A survey,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 46, no. 5, pp. 2900–2919, May 2024. doi: 10.1109/TPAMI.2023.3334614
[20] W. Dai, J. Fan, Y. Miao, et al., “Deep learning model compression with rank reduction in tensor decomposition,” IEEE Trans. Neural Netw. Learn. Syst., vol. 36, no. 1, pp. 1315–1328, Jan. 2025.
[21] J. Chee, M. Flynn, A. Damle, et al., “Model preserving compression for neural networks,” in Proc. 36th Adv. Neural Inf. Process. Syst., 2022, pp. 38060–38074.
[22] K. Zhu, C. Yu, and Y. Wan, “Recursive least squares identification with variable-direction forgetting via oblique projection decomposition,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 3, pp. 547–555, Mar. 2022.
[23] N. Bansal, X. Chen, and Z. Wang, “Can we gain more from orthogonality regularizations in training deep networks?” in Proc. 32nd Adv. Neural Inf. Process. Syst., 2018, pp. 1–11.