IEEE/CAA Journal of Automatica Sinica
Citation: F.-Y. Wang and Y. Shen, “Parallel light fields: A perspective and a framework,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 2, pp. 542–544, Feb. 2024. doi: 10.1109/JAS.2023.123174
[1] F.-Y. Wang, “Parallel light field and parallel optics, from optical computing experiment to optical guided intelligence,” 2018. [Online]. Available: http://www.sklmccs.ia.ac.cn/2018reports.html
[2] F.-Y. Wang, X. Meng, S. Du, et al., “Parallel light field: The framework and processes,” Chinese J. Intelligent Science and Technology, vol. 3, no. 1, pp. 110–122, 2021.
[3] Y. Lu, C. Guo, X. Dai, et al., “Data-efficient image captioning of fine art paintings via virtual-real semantic alignment training,” Neurocomputing, vol. 490, pp. 163–180, 2022. doi: 10.1016/j.neucom.2022.01.068
[4] K. Wang, C. Gou, N. Zheng, et al., “Parallel vision for perception and understanding of complex scenes: Methods, framework, and perspectives,” Artificial Intelligence Review, vol. 48, no. 3, pp. 299–329, 2017. doi: 10.1007/s10462-017-9569-z
[5] Y. Tian, X. Wang, Y. Shen, et al., “Parallel point clouds: Hybrid point cloud generation and 3D model enhancement via virtual-real integration,” Remote Sensing, vol. 13, no. 15, p. 2868, 2021. doi: 10.3390/rs13152868
[6] F.-Y. Wang, “Parallel control and management for intelligent transportation systems: Concepts, architectures, and applications,” IEEE Trans. Intelligent Transportation Systems, vol. 11, no. 3, pp. 630–638, 2010. doi: 10.1109/TITS.2010.2060218
[7] T. G. Georgiev, K. C. Zheng, B. Curless, et al., “Spatio-angular resolution tradeoffs in integral photography,” Rendering Techniques, pp. 263–272, 2006.
[8] B. Wilburn, N. Joshi, V. Vaish, et al., “High performance imaging using large camera arrays,” in Proc. ACM SIGGRAPH, 2005, pp. 765–776.
[9] K. Marwah, G. Wetzstein, Y. Bando, et al., “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graphics, vol. 32, no. 4, pp. 1–12, 2013.
[10] Y. Ma, X. Wang, W. Gao, et al., “Progressive fusion network based on infrared light field equipment for infrared image enhancement,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 9, pp. 1687–1690, 2022. doi: 10.1109/JAS.2022.105812
[11] K. Honauer, O. Johannsen, D. Kondermann, et al., “A dataset and evaluation methodology for depth estimation on 4D light fields,” in Proc. Asian Conf. Computer Vision, 2016, pp. 19–34.
[12] G. Wu, B. Masia, A. Jarabo, et al., “Light field image processing: An overview,” IEEE J. Selected Topics Signal Processing, vol. 11, no. 7, pp. 926–954, 2017. doi: 10.1109/JSTSP.2017.2747126
[13] D. Deng and A. Zakhor, “Temporal LiDAR frame prediction for autonomous driving,” in Proc. Int. Conf. 3D Vision, 2020, pp. 829–837.
[14] X. Liang, L. Lee, W. Dai, et al., “Dual motion GAN for future-flow embedded video prediction,” in Proc. IEEE Int. Conf. Computer Vision, 2017, pp. 1744–1752.
[15] S. Ma, B. M. Smith, and M. Gupta, “3D scene flow from 4D light field gradients,” in Proc. European Conf. Computer Vision, 2018, pp. 666–681.
[16] L. Wan and X. Zhang, “Scene flow estimation based on light field,” in Proc. 7th Int. Conf. Intelligent Computing and Signal Processing, 2022, pp. 171–176.