IEEE/CAA Journal of Automatica Sinica
Citation: Y. Zhao, X. He, M. Zhou, and T. Huang, “Accelerated primal-dual projection neurodynamic approach with time scaling for linear and set constrained convex optimization problems,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 6, pp. 1485–1498, Jun. 2024. doi: 10.1109/JAS.2024.124380
The Nesterov accelerated dynamical approach is an essential tool for solving convex optimization problems with accelerated convergence rates. Most previous studies in this field have concentrated on unconstrained smooth convex optimization problems. In this paper, building on the primal-dual dynamical approach, the Nesterov accelerated dynamical approach, projection operators, and directional gradients, we present two accelerated primal-dual projection neurodynamic approaches with time scaling for convex optimization problems with smooth and nonsmooth objective functions subject to linear and set constraints. Each approach consists of a second-order ODE (ordinary differential equation) or differential inclusion system for the primal variables and a first-order ODE for the dual variables. Under specific conditions on the time scaling, we show that the proposed approaches achieve a faster convergence rate while requiring only convexity of the objective function. We validate the effectiveness of the two proposed accelerated primal-dual projection neurodynamic approaches through numerical experiments.
[1] 
Y. E. Nesterov, “A method of solving a convex programming problem with convergence rate O(1/k^{2}),” Sov. Math. Dokl., vol. 27, no. 2, pp. 372–376, 1983.

[2] 
W. Su, S. Boyd, and E. Candès, “A differential equation for modeling Nesterov’s accelerated gradient method: Theory and insights,” in Proc. 27th Int. Conf. Neural Information Processing Systems, Montreal, Canada, 2014, pp. 2510–2518.

[3] 
H. Attouch and Z. Chbani, “Fast inertial dynamics and FISTA algorithms in convex optimization. Perturbation aspects,” arXiv preprint arXiv:1507.01367, 2015.

[4] 
H. Attouch, J. Peypouquet, and P. Redont, “Fast convex optimization via inertial dynamics with Hessian driven damping,” J. Differ. Equations, vol. 261, no. 10, pp. 5734–5783, Nov. 2016. doi: 10.1016/j.jde.2016.08.020

[5] 
H. Attouch, Z. Chbani, and H. Riahi, “Combining fast inertial dynamics for convex optimization with Tikhonov regularization,” J. Math. Anal. Appl., vol. 457, no. 2, pp. 1065–1094, Jan. 2018. doi: 10.1016/j.jmaa.2016.12.017

[6] 
H. Attouch and A. Cabot, “Convergence of a relaxed inertial proximal algorithm for maximally monotone operators,” Math. Program., vol. 184, no. 1–2, pp. 243–287, Nov. 2020. doi: 10.1007/s10107-019-01412-0

[7] 
A. Wibisono, A. C. Wilson, and M. I. Jordan, “A variational perspective on accelerated methods in optimization,” Proc. Natl. Acad. Sci. USA, vol. 113, no. 47, pp. E7351–E7358, Nov. 2016.

[8] 
A. C. Wilson, B. Recht, and M. I. Jordan, “A Lyapunov analysis of accelerated methods in optimization,” J. Mach. Learn. Res., vol. 22, no. 1, p. 113, Jan. 2021.

[9] 
F. Alimisis, A. Orvieto, G. Bécigneul, and A. Lucchi, “A continuous-time perspective for modeling acceleration in Riemannian optimization,” in Proc. 23rd Int. Conf. Artificial Intelligence and Statistics, Palermo, Italy, 2020, pp. 1297–1307.

[10] 
A. Vassilis, A. Jean-François, and D. Charles, “The differential inclusion modeling FISTA algorithm and optimality of convergence rate in the case b ≤ 3,” SIAM J. Optim., vol. 28, no. 1, pp. 551–574, Jan. 2018. doi: 10.1137/17M1128642

[11] 
H. Attouch, Z. Chbani, and H. Riahi, “Fast proximal methods via time scaling of damped inertial dynamics,” SIAM J. Optim., vol. 29, no. 3, pp. 2227–2256, Jan. 2019. doi: 10.1137/18M1230207

[12] 
D. Tank and J. Hopfield, “Simple ‘neural’ optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit,” IEEE Trans. Circuits Syst., vol. 33, no. 5, pp. 533–541, May 1986. doi: 10.1109/TCS.1986.1085953

[13] 
M. P. Kennedy and L. O. Chua, “Neural networks for nonlinear programming,” IEEE Trans. Circuits Syst., vol. 35, no. 5, pp. 554–562, May 1988. doi: 10.1109/31.1783

[14] 
Y. Xia and G. Feng, “On convergence rate of projection neural networks,” IEEE Trans. Autom. Control, vol. 49, no. 1, pp. 91–96, Jan. 2004. doi: 10.1109/TAC.2003.821413

[15] 
Y. Xia and J. Wang, “A recurrent neural network for solving nonlinear convex programs subject to linear constraints,” IEEE Trans. Neural Netw., vol. 16, no. 2, pp. 379–386, Mar. 2005. doi: 10.1109/TNN.2004.841779

[16] 
X. Hu and J. Wang, “Design of general projection neural networks for solving monotone linear variational inequalities and linear and quadratic optimization problems,” IEEE Trans. Syst., Man, Cybern. Part B (Cybern.), vol. 37, no. 5, pp. 1414–1421, Oct. 2007. doi: 10.1109/TSMCB.2007.903706

[17] 
X. He, T. Huang, J. Yu, C. Li, and C. Li, “An inertial projection neural network for solving variational inequalities,” IEEE Trans. Cybern., vol. 47, no. 3, pp. 809–814, Mar. 2017. doi: 10.1109/TCYB.2016.2523541

[18] 
S. Zhang and A. G. Constantinides, “Lagrange programming neural networks,” IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process., vol. 39, no. 7, pp. 441–452, Jul. 1992. doi: 10.1109/82.160169

[19] 
W. Bian and X. Chen, “Smoothing neural network for constrained non-Lipschitz optimization with applications,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 3, pp. 399–411, Mar. 2012. doi: 10.1109/TNNLS.2011.2181867

[20] 
Q. Liu and J. Wang, “A projection neural network for constrained quadratic minimax optimization,” IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 11, pp. 2891–2900, Nov. 2015. doi: 10.1109/TNNLS.2015.2425301

[21] 
Q. Liu and J. Wang, “L_{1}-minimization algorithms for sparse signal reconstruction based on a projection neural network,” IEEE Trans. Neural Netw. Learn. Syst., vol. 27, no. 3, pp. 698–707, Mar. 2016. doi: 10.1109/TNNLS.2015.2481006

[22] 
Z. Guo, S. Yang, and J. Wang, “Global synchronization of stochastically disturbed memristive neurodynamics via discontinuous control laws,” IEEE/CAA J. Autom. Sinica, vol. 3, no. 2, pp. 121–131, Apr. 2016. doi: 10.1109/JAS.2016.7451099

[23] 
R. Feng, C. S. Leung, A. G. Constantinides, and W. J. Zeng, “Lagrange programming neural network for nondifferentiable optimization problems in sparse approximation,” IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 10, pp. 2395–2407, Oct. 2017. doi: 10.1109/TNNLS.2016.2575860

[24] 
C. Xu and X. He, “A fully distributed approach to optimal energy scheduling of users and generators considering a novel combined neurodynamic algorithm in smart grid,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 7, pp. 1325–1335, Jul. 2021. doi: 10.1109/JAS.2021.1004048

[25] 
Y. Zhao, X. Liao, X. He, and R. Tang, “Centralized and collective neurodynamic optimization approaches for sparse signal reconstruction via L_{1}-minimization,” IEEE Trans. Neural Netw. Learn. Syst., vol. 33, no. 12, pp. 7488–7501, Dec. 2022. doi: 10.1109/TNNLS.2021.3085314

[26] 
J. Wang, J. Wang, and Q.-L. Han, “Receding-horizon trajectory planning for underactuated autonomous vehicles based on collaborative neurodynamic optimization,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 11, pp. 1909–1923, Nov. 2022. doi: 10.1109/JAS.2022.105524

[27] 
X. Wen, S. Qin, and J. Feng, “A novel projection neural network for solving a class of monotone variational inequalities,” IEEE Trans. Syst., Man, Cybern.: Syst., vol. 53, no. 9, pp. 5580–5590, Sept. 2023. doi: 10.1109/TSMC.2023.3274222

[28] 
W. Krichene, A. M. Bayen, and P. L. Bartlett, “Accelerated mirror descent in continuous and discrete time,” in Proc. 28th Int. Conf. Neural Information Processing Systems, Montreal, Canada, 2015, pp. 2845–2853.

[29] 
Y. Zhao, X. Liao, and X. He, “Novel projection neurodynamic approaches for constrained convex optimization,” Neural Netw., vol. 150, pp. 336–349, Jun. 2022. doi: 10.1016/j.neunet.2022.03.011

[30] 
X. Zeng, J. Lei, and J. Chen, “Dynamical primal-dual Nesterov accelerated method and its application to network optimization,” IEEE Trans. Autom. Control, vol. 68, no. 3, pp. 1760–1767, Mar. 2023. doi: 10.1109/TAC.2022.3152720

[31] 
X. He, R. Hu, and Y. P. Fang, “Convergence rates of inertial primal-dual dynamical methods for separable convex optimization problems,” SIAM J. Control Optim., vol. 59, no. 5, pp. 3278–3301, Jan. 2021. doi: 10.1137/20M1355379

[32] 
R. I. Boţ and D. K. Nguyen, “Improved convergence rates and trajectory convergence for primal-dual dynamical systems with vanishing damping,” J. Differ. Equations, vol. 303, pp. 369–406, Dec. 2021. doi: 10.1016/j.jde.2021.09.021

[33] 
H. Attouch, Z. Chbani, J. Fadili, and H. Riahi, “Fast convergence of dynamical ADMM via time scaling of damped inertial dynamics,” J. Optim. Theory Appl., vol. 193, no. 1–3, pp. 704–736, Jun. 2022. doi: 10.1007/s10957-021-01859-2

[34] 
X. He, R. Hu, and Y. P. Fang, “Fast primal-dual algorithm via dynamical system for a linearly constrained convex optimization problem,” Automatica, vol. 146, p. 110547, Dec. 2022. doi: 10.1016/j.automatica.2022.110547

[35] 
Y. Zhao, X. Liao, X. He, M. Zhou, and C. Li, “Accelerated primal-dual mirror dynamics for centralized and distributed constrained convex optimization problems,” J. Mach. Learn. Res., vol. 24, no. 1, p. 343, Jan. 2023.

[36] 
R. T. Rockafellar, Convex Analysis. Princeton, USA: Princeton University Press, 1970.

[37] 
N. Parikh and S. Boyd, “Proximal algorithms,” Found. Trends Optim., vol. 1, no. 3, pp. 127–239, Jan. 2014. doi: 10.1561/2400000003

[38] 
A. Chambolle and T. Pock, “An introduction to continuous optimization for imaging,” Acta Numer., vol. 25, pp. 161–319, May 2016. doi: 10.1017/S096249291600009X

[39] 
D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006. doi: 10.1109/TIT.2006.871582

[40] 
C. Li, X. Yu, W. Yu, T. Huang, and Z. W. Liu, “Distributed event-triggered scheme for economic dispatch in smart grids,” IEEE Trans. Ind. Inf., vol. 12, no. 5, pp. 1775–1785, Oct. 2016. doi: 10.1109/TII.2015.2479558

[41] 
W. T. Lin, Y. W. Wang, C. Li, and X. Yu, “Distributed resource allocation via accelerated saddle point dynamics,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 9, pp. 1588–1599, Sept. 2021. doi: 10.1109/JAS.2021.1004114

[42] 
C. Li, X. Yu, T. Huang, and X. He, “Distributed optimal consensus over resource allocation network and its application to dynamical economic dispatch,” IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 6, pp. 2407–2418, Jun. 2018. doi: 10.1109/TNNLS.2017.2691760

[43] 
Z. Lin, H. Li, and C. Fang, Accelerated Optimization for Machine Learning: First-Order Algorithms. Singapore: Springer, 2020.

[44] 
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., vol. 3, no. 1, pp. 1–122, Jul. 2011.

[45] 
R. I. Boţ and E. R. Csetnek, “A forward-backward dynamical approach to the minimization of the sum of a nonsmooth convex with a smooth nonconvex function,” ESAIM: COCV, vol. 24, no. 2, pp. 463–477, 2018. doi: 10.1051/cocv/2017020

[46] 
B. Abbas, H. Attouch, and B. F. Svaiter, “Newton-like dynamics and forward-backward methods for structured monotone inclusions in Hilbert spaces,” J. Optim. Theory Appl., vol. 161, no. 2, pp. 331–360, May 2014. doi: 10.1007/s10957-013-0414-5

[47] 
X. Chen and N. Li, “Exponential stability of primal-dual gradient dynamics with non-strong convexity,” in Proc. American Control Conf., Denver, USA, 2020, pp. 1612–1618.
