A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 9, Issue 7
Jul. 2022

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 7.847, Top 10% (SCI Q1)
  • CiteScore: 13.0, Top 5% (Q1)
  • Google Scholar h5-index: 64, Top 7
Y. M. Lei, H. P. Zhu, J. P. Zhang, and H. M. Shan, “Meta ordinal regression forest for medical image classification with ordinal labels,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 7, pp. 1233–1247, Jul. 2022. doi: 10.1109/JAS.2022.105668

Meta Ordinal Regression Forest for Medical Image Classification With Ordinal Labels

doi: 10.1109/JAS.2022.105668
Funds:  This work was supported in part by the Natural Science Foundation of Shanghai (21ZR1403600), the National Natural Science Foundation of China (62176059), the Shanghai Municipal Science and Technology Major Project (2018SHZDZX01), Zhangjiang Laboratory, the Shanghai Sailing Program (21YF1402800), the Shanghai Municipal Science and Technology Project (20JC1419500), and the Shanghai Center for Brain Science and Brain-inspired Technology
  • The performance of medical image classification has been enhanced by deep convolutional neural networks (CNNs), which are typically trained with cross-entropy (CE) loss. However, when the label has an intrinsic ordinal property, e.g., the development from benign to malignant tumor, CE loss cannot exploit such ordinal information for better generalization. To improve model generalization with ordinal information, we propose a novel meta ordinal regression forest (MORF) method for medical image classification with ordinal labels, which learns the ordinal relationship by combining a convolutional neural network and a differentiable forest in a meta-learning framework. The merits of the proposed MORF come from two components: a tree-wise weighting net (TWW-Net) and a grouped feature selection (GFS) module. First, the TWW-Net assigns each tree in the forest a specific weight that is mapped from the classification loss of the corresponding tree. Hence, the trees possess varying weights, which helps alleviate the tree-wise prediction variance. Second, the GFS module enables a dynamic forest rather than the fixed one used previously, allowing for random feature perturbation. During training, we alternately optimize the parameters of the CNN backbone and the TWW-Net in the meta-learning framework, which involves computing a Hessian matrix. Experimental results on two medical image classification datasets with ordinal labels, i.e., the LIDC-IDRI and Breast Ultrasound datasets, demonstrate the superior performance of our MORF over existing state-of-the-art methods.
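The tree-wise weighting idea described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a PyTorch-style setup; the class and function names (`TWWNet`, `weighted_forest_loss`) and the small-MLP architecture are hypothetical, not the authors' implementation, and the meta-learning (Hessian-based) outer loop is omitted.

```python
import torch
import torch.nn as nn


class TWWNet(nn.Module):
    """Hypothetical sketch of a tree-wise weighting net: maps each tree's
    scalar classification loss to a weight in (0, 1) via a small MLP."""

    def __init__(self, hidden: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, tree_losses: torch.Tensor) -> torch.Tensor:
        # tree_losses: shape (num_trees,) -> weights: shape (num_trees,)
        return self.mlp(tree_losses.unsqueeze(-1)).squeeze(-1)


def weighted_forest_loss(tree_losses: torch.Tensor, tww: TWWNet) -> torch.Tensor:
    """Combine per-tree losses with learned, normalized weights.

    Normalizing keeps the loss scale comparable across forests of
    different sizes; each tree gets its own weight, which is the
    mechanism the abstract credits with reducing tree-wise variance.
    """
    w = tww(tree_losses)
    w = w / (w.sum() + 1e-8)  # non-negative weights summing to 1
    return (w * tree_losses).sum()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy example: per-tree losses for a forest of 4 trees.
    losses = torch.tensor([0.2, 1.5, 0.7, 0.9])
    tww = TWWNet()
    total = weighted_forest_loss(losses, tww)
    # A convex combination of the per-tree losses, so it lies
    # between their min and max.
    print(float(total))
```

Because the weights are produced by a differentiable network, gradients flow back through both the forest and the weighting net, which is what permits the alternating (bilevel) optimization the abstract describes.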

     


    Figures(6)  / Tables(5)


    Highlights

    • A meta ordinal regression forest (MORF) is proposed for medical image classification with ordinal labels
    • MORF alleviates tree-wise variance and incorporates feature random perturbation for better generalization
    • The effectiveness of MORF has been evaluated on lung nodule classification and breast cancer classification tasks
