IEEE/CAA Journal of Automatica Sinica
Volume 9, Issue 10, Oct. 2022

A journal of IEEE and CAA that publishes high-quality papers in English on original theoretical and experimental research and development in all areas of automation.

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
  • CiteScore: 17.6, Top 3% (Q1)
  • Google Scholar h5-index: 77, Top 5
N. Yang, B. J. Xia, Z. Han, and T. R. Wang, “A domain-guided model for facial cartoonlization,” IEEE/CAA J. Autom. Sinica, vol. 9, no. 10, pp. 1886–1888, Oct. 2022. doi: 10.1109/JAS.2022.105887

A Domain-Guided Model for Facial Cartoonlization

doi: 10.1109/JAS.2022.105887
Nan Yang and Bingjie Xia contributed equally to this work.

Figures (2) / Tables (3)

Article Metrics: article views 233, PDF downloads 32.
