IEEE/CAA Journal of Automatica Sinica
Citation: Lei Xu, "An overview and perspectives on bidirectional intelligence: Lmser duality, double IA harmony, and causal computation," IEEE/CAA J. Autom. Sinica, vol. 6, no. 4, pp. 865–893, July 2019. doi: 10.1109/JAS.2019.1911603
[1] 
H. Bourlard and Y. Kamp, "Auto-association by multilayer perceptrons and singular value decomposition," Biol. Cybern., vol. 59, no. 4-5, pp. 291–294, Sep. 1988. doi: 10.1007/BF00332918

[2] 
L. Xu, "Least MSE reconstruction by self-organization. I. Multi-layer neural nets," in Proc. Int. Joint Conf. Neural Networks, Singapore, 1991, pp. 2362–2367.

[3] 
L. Xu, "Least mean square error reconstruction principle for self-organizing neural-nets," Neural Networks, vol. 6, no. 5, pp. 627–648, Oct. 1993. doi: 10.1016/S0893-6080(05)80107-8

[4] 
D. H. Ballard, "Modular learning in neural networks," in Proc. 6th National Conf. Artificial Intelligence, Seattle, USA, 1987, pp. 279–284.

[5] 
J. L. Elman and D. Zipser, "Learning the hidden structure of speech," J. Acoust. Soc. Am., vol. 83, no. 4, pp. 1615–1626, Apr. 1988. doi: 10.1121/1.395916

[6] 
G. W. Cottrell, P. Munro, and D. Zipser, "Image compression by back propagation: An example of extensional programming," in Models of Cognition: A Review of Cognition Science, N. E. Sharkey, Ed. Norwood, USA: Ablex, 1989, pp. 208–240.

[7] 
P. Baldi and K. Hornik, "Neural networks and principal component analysis: Learning from examples without local minima," Neural Networks, vol. 2, no. 1, pp. 53–58, Dec. 1989. doi: 10.1016/0893-6080(89)90014-2

[8] 
M. Kawato, H. Hayakawa, and T. Inui, "A forward-inverse optics model of reciprocal connections between visual cortical areas," Network: Comput. Neural Syst., vol. 4, no. 4, pp. 415–422, Oct. 1993. doi: 10.1088/0954-898X_4_4_001

[9] 
W. E. A. Huang, "Deep LMSER learning with symmetric weights and neuron sharing," 2018.

[10] 
G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal, "The 'wake-sleep' algorithm for unsupervised neural networks," Science, vol. 268, no. 5214, pp. 1158–1161, May 1995. doi: 10.1126/science.7761831

[11] 
P. Dayan, G. E. Hinton, R. M. Neal, and R. S. Zemel, "The Helmholtz machine," Neural Comput., vol. 7, no. 5, pp. 889–904, Sep. 1995. doi: 10.1162/neco.1995.7.5.889

[12] 
L. Xu, "Bayesian-Kullback coupled Ying-Yang machines: Unified learnings and new results on vector quantization," in Proc. Int. Conf. Neural Information Processing, Beijing, China, 1995, pp. 977–988.

[13] 
L. Xu, "A unified learning scheme: Bayesian-Kullback Ying-Yang machine," in Proc. 8th Int. Conf. Neural Information Processing Systems, Denver, USA, 1996, pp. 444–450.

[14] 
G. E. Hinton, S. Osindero, and Y. W. Teh, "A fast learning algorithm for deep belief nets," Neural Comput., vol. 18, no. 7, pp. 1527–1554, Jul. 2006. doi: 10.1162/neco.2006.18.7.1527

[15] 
G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504–507, Jul. 2006. doi: 10.1126/science.1127647

[16] 
X. J. Mao, C. H. Shen, and Y. B. Yang, "Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections," in Proc. 30th Int. Conf. Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 2802–2810.

[17] 
O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Int. Conf. Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015, pp. 234–241.

[18] 
K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. 2016 IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, USA, 2016, pp. 770–778.

[19] 
G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proc. 2017 IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, USA, 2017, pp. 4700–4708.

[20] 
D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv:1312.6114, 2013.

[21] 
J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, vol. 61, pp. 85–117, Jan. 2015. doi: 10.1016/j.neunet.2014.09.003

[22] 
C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther, "Ladder variational autoencoders," in Proc. 30th Conf. Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 3738–3746.

[23] 
L. Xu, "Bayesian Ying-Yang system, best harmony learning, and five action circling," Front. Electr. Electron. Eng. China, vol. 5, no. 3, pp. 281–328, Sep. 2010. doi: 10.1007/s11460-010-0108-9

[24] 
L. Xu, "New advances on the Ying-Yang machine," in Proc. 1995 Int. Symp. Artificial Neural Networks, Taiwan, China, 1995, pp. 7–12.

[25] 
L. Xu, "Codimensional matrix pairing perspective of BYY harmony learning: hierarchy of bilinear systems, joint decomposition of data-covariance, and applications of network biology," Front. Electr. Electron. Eng. China, vol. 6, no. 1, pp. 86–119, Mar. 2011. doi: 10.1007/s11460-011-0135-1

[26] 
L. Xu, "On essential topics of BYY harmony learning: Current status, challenging issues, and gene analysis applications," Front. Electr. Electron. Eng., vol. 7, no. 1, pp. 147–196, Mar. 2012.

[27] 
L. Xu, "Further advances on Bayesian Ying-Yang harmony learning," Appl. Inform., vol. 2, p. 5, Dec. 2015. doi: 10.1186/s40535-015-0008-4

[28] 
D. J. Rezende, S. Mohamed, I. Danihelka, K. Gregor, and D. Wierstra, "One-shot generalization in deep generative models," in Proc. 33rd Int. Conf. Machine Learning, New York, USA, 2016.

[29] 
S. J. Zhao, J. M. Song, and S. Ermon, "Learning hierarchical features from deep generative models," in Proc. 34th Int. Conf. Machine Learning, Sydney, Australia, 2017, pp. 4091–4099.

[30] 
I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Proc. 27th Int. Conf. Neural Information Processing Systems, Montreal, Canada, 2014, pp. 2672–2680.

[31] 
S. Gurumurthy, R. K. Sarvadevabhatla, and R. V. Babu, "DeLiGAN: Generative adversarial networks for diverse and limited data," in Proc. 2017 IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, USA, 2017.

[32] 
L. Mescheder, S. Nowozin, and A. Geiger, "Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks," in Proc. 34th Int. Conf. Machine Learning, Sydney, Australia, 2017.

[33] 
I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau, "Building end-to-end dialogue systems using generative hierarchical neural network models," in Proc. 30th AAAI Conf. Artificial Intelligence, Phoenix, Arizona, 2016, pp. 3776–3783.

[34] 
I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio, "A hierarchical latent variable encoder-decoder model for generating dialogues," in Proc. 31st AAAI Conf. Artificial Intelligence, San Francisco, USA, 2017, pp. 3295–3301.

[35] 
P. Ballester and R. Matsumura Araujo, "On the performance of GoogLeNet and AlexNet applied to sketches," in Proc. 30th AAAI Conf. Artificial Intelligence, Phoenix, Arizona, 2016, pp. 1124–1128.

[36] 
S. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee, "Learning what and where to draw," in Proc. 29th Conf. Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 217–225.

[37] 
Y. J. Chen, S. K. Tu, Y. Q. Yi, and L. Xu, "Sketch-pix2seq: a model to generate sketches of multiple categories," arXiv preprint arXiv:1709.04121, 2017.

[38] 
T. Nakamura and R. Goto, "Outfit generation and style extraction via bidirectional LSTM and autoencoder," arXiv preprint arXiv:1807.03133, 2018.

[39] 
A. Augello, E. Cipolla, I. Infantino, A. Manfre, G. Pilato, and F. Vella, "Creative robot dance with variational encoder," arXiv preprint arXiv:1707.01489, 2017.

[40] 
E. Denton, S. Chintala, A. Szlam, and R. Fergus, "Deep generative image models using a Laplacian pyramid of adversarial networks," in Proc. 28th Int. Conf. Neural Information Processing Systems, Montreal, Canada, 2015, pp. 1486–1494.

[41] 
P. Isola, J. Y. Zhu, T. H. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proc. 2017 IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, USA, 2017, pp. 1125–1134.

[42] 
J. J. Wu, C. K. Zhang, T. F. Xue, B. Freeman, and J. Tenenbaum, "Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling," in Proc. 29th Conf. Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 82–90.

[43] 
G. L. Liu, F. A. Reda, K. J. Shih, T. C. Wang, A. Tao, and B. Catanzaro, "Image inpainting for irregular holes using partial convolutions," in European Conf. Computer Vision, Munich, Germany, 2018.

[44] 
F. L. Ma, R. Chitta, J. Zhou, Q. Z. You, T. Sun, and J. Gao, "Dipole: Diagnosis prediction in healthcare via attention-based bidirectional recurrent neural networks," in Proc. 23rd ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, Halifax, Canada, 2017, pp. 1903–1911.

[45] 
C. Vondrick and A. Torralba, "Generating the future with adversarial transformers," in Proc. 2017 IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, USA, 2017.

[46] 
Z. F. Zhang, Y. Song, and H. R. Qi, "Age progression/regression by conditional adversarial autoencoder," in Proc. 2017 IEEE Conf. Computer Vision and Pattern Recognition, Honolulu, USA, 2017.

[47] 
L. Xu, A. Krzyzak, and E. Oja, "Rival penalized competitive learning for clustering analysis, RBF net, and curve detection," IEEE Trans. Neural Networks, vol. 4, no. 4, pp. 636–649, Jul. 1993. doi: 10.1109/72.238318

[48] 
T. Kohonen, "Learning vector quantization," in Self-organizing Maps, T. Kohonen, Ed. Berlin, Heidelberg, Germany: Springer, 1995, pp. 175–189.

[49] 
L. Xu, "Deep bidirectional intelligence: AlphaZero, deep IA-search, deep IA-infer, and TPC causal learning," Appl. Inform., vol. 5, no. 1, p. 5, Dec. 2018. doi: 10.1186/s40535-018-0052-y

[50] 
T. Kohonen, "The self-organizing map," Proc. IEEE, vol. 78, no. 9, pp. 1464–1480, Sep. 1990. doi: 10.1109/5.58325

[51] 
L. Xu, "Adding learned expectation into the learning procedure of self-organizing maps," Int. J. Neural Syst., vol. 1, no. 3, pp. 269–283, Apr. 1990. doi: 10.1142/S0129065790000175

[52] 
M. A. F. Figueiredo and A. K. Jain, "Unsupervised learning of finite mixture models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 3, pp. 381–396, Mar. 2002. doi: 10.1109/34.990138

[53] 
L. Xu, M. I. Jordan, and G. E. Hinton, "An alternative model for mixtures of experts," in Advances in Neural Information Processing Systems, J. D. Cowan, G. Tesauro, and J. Alspector, Eds. Cambridge, USA: MIT Press, 1995, pp. 903–912.

[54] 
G. Schwarz, "Estimating the dimension of a model," Ann. Stat., vol. 6, no. 2, pp. 461–464, Mar. 1978. doi: 10.1214/aos/1176344136

[55] 
J. Rissanen, "Modeling by shortest data description," Automatica, vol. 14, no. 5, pp. 465–471, Sep. 1978. doi: 10.1016/0005-1098(78)90005-5

[56] 
J. Rissanen, Information and Complexity in Statistical Modeling. New York, USA: Springer, 2007.

[57] 
D. J. MacKay, "A practical Bayesian framework for backpropagation networks," Neural Comput., vol. 4, no. 3, pp. 448–472, May 1992. doi: 10.1162/neco.1992.4.3.448

[58] 
L. Xu, "Bayesian Ying Yang system and theory as a unified statistical learning approach: (I) unsupervised and semi-unsupervised learning," in Brain-like Computing and Intelligent Information Systems, S. Amari and N. Kasabov, Eds. Berlin, Germany: Springer-Verlag, 1997, pp. 241–274.

[59] 
L. Xu, "Data smoothing regularization, multi-sets-learning, and problem solving strategies," Neural Networks, vol. 16, no. 5-6, pp. 817–825, Jun.-Jul. 2003. doi: 10.1016/S0893-6080(03)00119-9

[60] 
A. J. Bell and T. J. Sejnowski, "An information-maximization approach to blind separation and blind deconvolution," Neural Comput., vol. 7, no. 6, pp. 1129–1159, Nov. 1995. doi: 10.1162/neco.1995.7.6.1129

[61] 
S. Amari, A. Cichocki, and H. H. Yang, "A new learning algorithm for blind signal separation," in Proc. 8th Int. Conf. Neural Information Processing Systems, Denver, USA, 1995, pp. 757–763.

[62] 
L. Xu, "Independent subspaces," in Encyclopedia of Artificial Intelligence, J. Ramón, R. Dopico, J. Dorado, and A. P. Sierra, Eds. Hershey, USA: IGI Global, 2009, pp. 892–901.

[63] 
L. Xu, "Independent component analysis and extensions with noise and time: a Bayesian Ying-Yang learning perspective," Neural Inf. Process. Lett. Rev., vol. 1, no. 1, pp. 1–52, Oct. 2003.

[64] 
A. L. Yuille, S. M. Smirnakis, and L. Xu, "Bayesian self-organization," in Proc. 6th Int. Conf. Neural Information Processing Systems, Denver, USA, 1993, pp. 1001–1008.

[65] 
L. Xu, "Ying-Yang learning," in The Handbook of Brain Theory and Neural Networks, M. A. Arbib, Ed. Cambridge, USA: MIT Press, 2002, pp. 1231–1237.

[66] 
L. Xu, "BYY Σ-Π factor systems and harmony learning," in Proc. Int. Conf. Neural Information Processing, Taejon, Korea, 2000, pp. 548–558.

[67] 
L. Xu, "Best harmony, unified RPCL and automated model selection for unsupervised and supervised learning on Gaussian mixtures, three-layer nets and ME-RBF-SVM models," Int. J. Neural Syst., vol. 11, no. 1, pp. 43–69, Feb. 2001. doi: 10.1142/S0129065701000497

[68] 
L. Xu, "BYY harmony learning, independent state space, and generalized APT financial analyses," IEEE Trans. Neural Networks, vol. 12, no. 4, pp. 822–849, Jul. 2001. doi: 10.1109/72.935094

[69] 
E. T. Jaynes, Probability Theory: The Logic of Science. New York, USA: Cambridge University Press, 2003.

[70] 
L. Xu and M. I. Jordan, "On convergence properties of the EM algorithm for Gaussian mixtures," Neural Comput., vol. 8, no. 1, pp. 129–151, Jan. 1996. doi: 10.1162/neco.1996.8.1.129

[71] 
L. Xu, "RBF nets, mixture experts, and Bayesian Ying-Yang learning," Neurocomputing, vol. 19, no. 1-3, pp. 223–257, Apr. 1998. doi: 10.1016/S0925-2312(97)00091-X

[72] 
L. Xu and S. I. Amari, "Combining classifiers and learning mixture-of-experts," in Encyclopedia of Artificial Intelligence, J. Ramón, R. Dopico, J. Dorado, and A. P. Sierra, Eds. Hershey, USA: IGI Global, 2008, pp. 318–326.

[73] 
L. Xu, "Learning algorithms for RBF functions and subspace based functions," in Handbook of Research on Machine Learning, Applications and Trends: Algorithms, Methods, and Techniques, E. S. Olivas, J. D. M. Guerrero, M. Martinez-Sober, J. R. Magdalena-Benedito, and A. J. S. López, Eds. Hershey, USA: IGI Global, 2009, pp. 60–94.

[74] 
X. S. Qian, "On thinking sciences," Chin. J. Nat., no. 8, pp. 563–567, 572640, 1983.

[75] 
Y. H. Pan, "The synthesis reasoning," Pattern Recognition and Artificial Intelligence, vol. 9, no. 3, pp. 201–208, 1996.

[76] 
D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, "Mastering the game of Go with deep neural networks and tree search," Nature, vol. 529, no. 7587, pp. 484–489, Jan. 2016. doi: 10.1038/nature16961

[77] 
D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. T. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, "Mastering the game of Go without human knowledge," Nature, vol. 550, no. 7676, pp. 354–359, Oct. 2017. doi: 10.1038/nature24270

[78] 
M. I. Jordan and L. Xu, "Convergence results for the EM approach to mixtures of experts architectures," Neural Networks, vol. 8, no. 9, pp. 1409–1431, 1995. doi: 10.1016/0893-6080(95)00014-3

[79] 
M. Jünger, G. Reinelt, and G. Rinaldi, "The traveling salesman problem," in Handbooks in Operations Research and Management Science, Amsterdam, Netherlands: Elsevier, 1995, pp. 225–330.

[80] 
C. Y. Dang and L. Xu, "A globally convergent Lagrange and barrier function iterative algorithm for the traveling salesman problem," Neural Networks, vol. 14, no. 2, pp. 217–230, Mar. 2001. doi: 10.1016/S0893-6080(00)00092-7

[81] 
W. H. Tsai and K. S. Fu, "Error-correcting isomorphisms of attributed relational graphs for pattern analysis," IEEE Trans. Syst., Man, Cybern., vol. 9, no. 12, pp. 757–768, Dec. 1979. doi: 10.1109/TSMC.1979.4310127

[82] 
L. Xu and E. Oja, "Improved simulated annealing, Boltzmann machine, and attributed graph matching," in European Association for Signal Processing Workshop, Sesimbra, Portugal, 1990, pp. 151–160.

[83] 
L. Xu and S. Klasa, "A PCA-like rule for pattern classification based on attributed graph," in Proc. 1993 Int. Conf. Neural Networks, Nagoya, Japan, 1993, pp. 1281–1284.

[84] 
J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, USA: Morgan Kaufmann, 1988.

[85] 
J. Pearl, "Fusion, propagation, and structuring in belief networks," Artif. Intell., vol. 29, no. 3, pp. 241–288, Sep. 1986. doi: 10.1016/0004-3702(86)90072-X

[86] 
J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79, no. 8, pp. 2554–2558, Apr. 1982. doi: 10.1073/pnas.79.8.2554

[87] 
G. E. Hinton and T. J. Sejnowski, "Learning and relearning in Boltzmann machines," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Cambridge, USA: MIT Press, 1986, pp. 282–317.

[88] 
L. Xu, "Machine learning and causal analyses for modeling financial and economic data," Appl. Inform., vol. 5, no. 1, p. 11, Dec. 2018. doi: 10.1186/s40535-018-0058-5

[89] 
P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search. New York, USA: Springer, 1993.

[90] 
P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search. 2nd ed. Cambridge, USA: MIT Press, 2000.

[91] 
J. Pearl, "An introduction to causal inference," Int. J. Biostat., vol. 6, no. 2, p. 7, Feb. 2010.

[92] 
P. Spirtes and K. Zhang, "Causal discovery and inference: concepts and recent methodological advances," Appl. Inform., vol. 3, p. 3, Dec. 2016. doi: 10.1186/s40535-016-0018-x

[93] 
S. Wright, "The method of path coefficients," Ann. Math. Stat., vol. 5, no. 3, pp. 161–215, Sep. 1934. doi: 10.1214/aoms/1177732676

[94] 
G. W. Imbens and D. B. Rubin, Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction. New York, USA: Cambridge University Press, 2015.

[95] 
R. B. Kline, Principles and Practice of Structural Equation Modeling. New York, USA: Guilford Publications, 2016.

[96] 
J. Peters, D. Janzing, and B. Schölkopf, "Causal inference on discrete data using additive noise models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2436–2450, Dec. 2011. doi: 10.1109/TPAMI.2011.71

[97] 
S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen, "A linear non-Gaussian acyclic model for causal discovery," J. Mach. Learn. Res., vol. 7, pp. 2003–2030, Oct. 2006.

[98] 
K. Zhang and A. Hyvärinen, "On the identifiability of the post-nonlinear causal model," in Proc. 25th Conf. Uncertainty in Artificial Intelligence, Montreal, Canada, 2009, pp. 647–655.

[99] 
O. Goudet, D. Kalainathan, P. Caillou, I. Guyon, D. Lopez-Paz, and M. Sebag, "Causal generative neural networks," arXiv preprint arXiv:1711.08936, 2017.

[100] 
L. Xu and J. Pearl, "Structuring causal tree models with continuous variables," in Proc. 3rd Annu. Conf. Uncertainty in Artificial Intelligence, Seattle, USA, 1987, pp. 170–179.

[101] 
B. Efron, The Jackknife, the Bootstrap and Other Resampling Plans. Philadelphia, USA: SIAM, 1982.

[102] 
A. N. Gomez, M. Y. Ren, R. Urtasun, and R. B. Grosse, "The reversible residual network: Backpropagation without storing activations," in Proc. 31st Conf. Neural Information Processing Systems, Long Beach, USA, 2017, pp. 2214–2224.

[103] 
J. H. Jacobsen, A. Smeulders, and E. Oyallon, "i-RevNet: Deep invertible networks," in Proc. 2018 Int. Conf. Learning Representations, Vancouver, Canada, 2018.

[104] 
P. E. Hart, N. J. Nilsson, and B. Raphael, "A formal basis for the heuristic determination of minimum cost paths," IEEE Trans. Syst. Sci. Cybern., vol. 4, no. 2, pp. 100–107, Jul. 1968. doi: 10.1109/TSSC.1968.300136

[105] 
J. Pearl, Heuristics: Intelligent Search Strategies for Computer Problem Solving. Reading, USA: Addison-Wesley Pub. Co., Inc., 1984.
