A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation
Volume 9, Issue 1, Jan. 2022

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: Y. Yu, Z. Y. Lei, Y. R. Wang, T. F. Zhang, C. Peng, and S. C. Gao, “Improving dendritic neuron model with dynamic scale-free network-based differential evolution,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 1, pp. 99–110, Jan. 2022. doi: 10.1109/JAS.2021.1004284

Improving Dendritic Neuron Model With Dynamic Scale-Free Network-Based Differential Evolution

doi: 10.1109/JAS.2021.1004284
Funds:  This work was partially supported by the National Natural Science Foundation of China (62073173, 61833011), the Natural Science Foundation of Jiangsu Province, China (BK20191376), and the Nanjing University of Posts and Telecommunications (NY220193, NY220145)
Abstract: Recent research reports that a dendritic neuron model (DNM) can achieve better performance than traditional artificial neural networks (ANNs) on classification, prediction, and other problems when its parameters are well tuned by a learning algorithm. However, the back-propagation (BP) algorithm, the most widely used learning algorithm, intrinsically suffers from slow convergence and a tendency to become trapped in local minima. Therefore, more and more research adopts non-BP learning algorithms to train ANNs. In this paper, a dynamic scale-free network-based differential evolution (DSNDE) is developed to meet the demands of fast convergence and the ability to escape local minima. The performance of a DSNDE-trained DNM is tested on 14 benchmark datasets and a photovoltaic power forecasting problem. Nine meta-heuristic algorithms are included in the comparison, among them the effective butterfly optimizer with covariance matrix adapted retreat phase (EBOwithCMAR), the champion of the 2017 IEEE Congress on Evolutionary Computation (CEC2017) benchmark competition. The experimental results reveal that DSNDE achieves better performance than its peers.
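
For readers unfamiliar with the model, the sketch below shows the forward pass of a DNM in the standard four-layer form reported in the DNM literature: a sigmoid synaptic layer, a multiplicative dendritic layer, a summing membrane layer, and a sigmoid soma. This is a minimal illustration only; the function name dnm_forward and the parameter values (k, k_soma, theta_soma) are our assumptions, not the paper's specification.

```python
import numpy as np

def dnm_forward(x, w, theta, k=5.0, k_soma=5.0, theta_soma=0.5):
    """Forward pass of a dendritic neuron model (minimal sketch).

    x     : (n_features,) input sample, scaled to [0, 1]
    w     : (n_dendrites, n_features) synaptic weights
    theta : (n_dendrites, n_features) synaptic thresholds
    """
    # Synaptic layer: a sigmoid connection on every (dendrite, feature) pair.
    y = 1.0 / (1.0 + np.exp(-k * (w * x - theta)))
    # Dendritic layer: multiplicative interaction of synapses on each branch.
    z = np.prod(y, axis=1)
    # Membrane layer: sum the dendritic branch outputs.
    v = np.sum(z)
    # Soma layer: final sigmoid firing decision in (0, 1).
    return 1.0 / (1.0 + np.exp(-k_soma * (v - theta_soma)))

# Example: one 4-feature sample through a DNM with 3 dendritic branches.
rng = np.random.default_rng(0)
out = dnm_forward(rng.random(4), rng.standard_normal((3, 4)),
                  rng.standard_normal((3, 4)))
```

Training then amounts to finding the weight/threshold pair (w, theta) that minimizes a loss over the training data, which is why a population-based optimizer such as DSNDE can replace BP: each individual in the population simply encodes one flattened (w, theta) candidate.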






    Highlights

    • The optimization performance of DE is enhanced by a dynamic scale-free network (see the sketch below)
    • DSNDE is developed as a learning algorithm to improve the performance of the DNM
    • The DSNDE-trained DNM achieves high accuracy on prediction, classification, and other problems
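
The paper's exact DSNDE operators are not reproduced on this page, so the following Python sketch is only a hypothetical illustration of the idea the first highlight names: restricting differential evolution's donor selection to neighborhoods of a Barabási-Albert scale-free graph that is regenerated every generation. The function name dsnde_generation, the current-to-neighborhood-best mutation, and the settings F = 0.5 and CR = 0.9 are our assumptions, not the authors' algorithm.

```python
import numpy as np
import networkx as nx  # provides the Barabasi-Albert scale-free generator

def dsnde_generation(pop, fit, fobj, F=0.5, CR=0.9, rng=None):
    """One generation of a hypothetical scale-free network-guided DE.

    pop  : (N, D) population (modified in place)
    fit  : (N,) fitness values, lower is better
    fobj : objective function mapping a D-vector to a scalar
    """
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    # Rebuild the interaction topology each generation ("dynamic"): a
    # Barabasi-Albert graph, whose degree distribution is scale-free.
    g = nx.barabasi_albert_graph(n, m=2, seed=int(rng.integers(2**31)))
    for i in range(n):
        nbrs = np.array(list(g.neighbors(i)))
        best = nbrs[np.argmin(fit[nbrs])]  # fittest network neighbor
        r1, r2 = rng.choice(nbrs, size=2, replace=len(nbrs) < 2)
        # Current-to-neighborhood-best mutation; donor vectors come only
        # from the individual's neighbors in the scale-free graph.
        mutant = pop[i] + F * (pop[best] - pop[i]) + F * (pop[r1] - pop[r2])
        cross = rng.random(d) < CR          # binomial crossover mask
        cross[rng.integers(d)] = True       # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = fobj(trial)
        if f_trial <= fit[i]:               # greedy one-to-one survival
            pop[i], fit[i] = trial, f_trial
    return pop, fit

# Usage: minimize the sphere function with 30 individuals in 10 dimensions.
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(30, 10))
sphere = lambda v: float(np.sum(v * v))
fit = np.array([sphere(p) for p in pop])
for _ in range(200):
    pop, fit = dsnde_generation(pop, fit, sphere, rng=rng)
```

For brevity the sketch updates individuals in place (steady-state style); a canonical DE would evaluate all trial vectors against a frozen snapshot of the generation before selection.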
