A journal of the IEEE and the CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation.
Volume 8 Issue 9
Sep.  2021

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: S. Harford, F. Karim, and H. Darabi, "Generating Adversarial Samples on Multivariate Time Series using Variational Autoencoders," IEEE/CAA J. Autom. Sinica, vol. 8, no. 9, pp. 1523-1538, Sep. 2021. doi: 10.1109/JAS.2021.1004108

Generating Adversarial Samples on Multivariate Time Series using Variational Autoencoders

doi: 10.1109/JAS.2021.1004108
  • Classification models for multivariate time series have drawn the interest of many researchers to the field with the objective of developing accurate and efficient models. However, limited research has been conducted on generating adversarial samples for multivariate time series classification models. Adversarial samples could become a security concern in systems with complex sets of sensors. This study proposes extending the existing gradient adversarial transformation network (GATN) in combination with adversarial autoencoders to attack multivariate time series classification models. The proposed model attacks classification models by utilizing a distilled model to imitate the output of the multivariate time series classification model. In addition, the adversarial generator function is replaced with a variational autoencoder to enhance the adversarial samples. The developed methodology is tested on two multivariate time series classification models: 1-nearest neighbor dynamic time warping (1-NN DTW) and a fully convolutional network (FCN). This study utilizes 30 multivariate time series benchmarks provided by the University of East Anglia (UEA) and University of California Riverside (UCR). The use of adversarial autoencoders shows an increase in the fraction of successful adversaries generated on multivariate time series. To the best of our knowledge, this is the first study to explore adversarial attacks on multivariate time series. Additionally, we recommend future research utilizing the generated latent space from the variational autoencoders.
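One of the two target classifiers named in the abstract, 1-NN DTW, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes the "dependent" multivariate DTW formulation (squared-Euclidean local cost summed across all variables), and the function names `dtw_distance` and `knn1_dtw` are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Dependent multivariate DTW distance.

    a, b: arrays of shape (time, variables). The local cost at each pair of
    time steps is the squared Euclidean distance across all variables, and
    the classic O(n*m) dynamic program accumulates the best warping path.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sum((a[i - 1] - b[j - 1]) ** 2)
            # Extend the cheapest of the three admissible predecessor cells.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn1_dtw(train_X, train_y, query):
    """Label a multivariate series with its nearest training series under DTW."""
    dists = [dtw_distance(x, query) for x in train_X]
    return train_y[int(np.argmin(dists))]
```

Because 1-NN DTW has no gradients, it is exactly the kind of black-box target the abstract's distilled student model is meant to stand in for: the attacker queries the classifier, trains a differentiable imitation, and perturbs inputs through that surrogate instead.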

     



    Figures(7)  / Tables(10)

    Article Metrics

    Article views: 1101. PDF downloads: 95.

    Highlights

    • Extends GATN by incorporating a VAE, yielding MGATN, to attack multivariate time series classifiers
    • MGATN generates black-box and white-box adversaries on two multivariate time series classifiers
    • MGATN outperforms prior adversarial generators for time series classifiers
    • MGATN can generate adversaries on unseen data without requiring retraining
    • A simple defense procedure makes the classifier less vulnerable to attacks while retaining accuracy
