Fangda Han

Fangda Han is a Ph.D. student in the Department of Computer Science at Rutgers, the State University of New Jersey. He has been working with Professor Vladimir Pavlovic in the SeqAM Lab since 2016. He received a B.S. degree (2013) and an M.S. degree (2016) in Engineering Physics from Tsinghua University, China. His research interests are machine learning and computer vision.

He is currently exploring the relationship between natural language and images. You can find more about him on his personal website.

Vitae

Publications

2019

  • L. Zhao, F. Han, X. Peng, X. Zhang, M. Kapadia, V. Pavlovic, and D. N. Metaxas, “Cartoonish sketch-based face editing in videos using identity deformation transfer,” Computers & Graphics, vol. 79, pp. 58–68, 2019. doi:10.1016/j.cag.2019.01.004
    [BibTeX]
    @Article{ZhaoH0ZKPM19,
    author = {Long Zhao and Fangda Han and Xi Peng and Xun Zhang and Mubbasir Kapadia and Vladimir Pavlovic and Dimitris N. Metaxas},
    journal = {Computers {\&} Graphics},
    title = {Cartoonish sketch-based face editing in videos using identity deformation transfer},
    year = {2019},
    pages = {58--68},
    volume = {79},
    doi = {10.1016/j.cag.2019.01.004},
    }
  • F. Han, R. Guerrero, and V. Pavlovic, “VirtualCook: Cross-modal Synthesis of Food Images from Ingredients,” in Int’l Joint Conference on Artificial Intelligence (IJCAI), Workshop on AI and Food, Macao, China, 2019.
    [BibTeX]
    @inproceedings{han19art_ijcai,
    Address = {Macao, China},
    Author = {Fangda Han and Ricardo Guerrero and Vladimir Pavlovic},
    Booktitle = {Int'l Joint Conference on Artificial Intelligence {IJCAI}, Workshop on AI and Food},
    Month = aug,
    Title = {VirtualCook: Cross-modal Synthesis of Food Images from Ingredients},
    Year = {2019}}
  • F. Han, R. Guerrero, and V. Pavlovic, “The Art of Food: Meal Image Synthesis from Ingredients,” CoRR, vol. abs/1905.13149, 2019.
    [BibTeX]
    @Article{han19art_arxiv,
    author = {Fangda Han and Ricardo Guerrero and Vladimir Pavlovic},
    journal = {CoRR},
    title = {The Art of Food: Meal Image Synthesis from Ingredients},
    year = {2019},
    volume = {abs/1905.13149},
    eprint = {1905.13149v1},
    }

2017

  • L. Zhao, F. Han, M. Kapadia, V. Pavlovic, and D. Metaxas, “Sketch-based Face Editing in Video Using Identity Deformation Transfer,” CoRR, vol. abs/1703.08738, 2017.
    [BibTeX] [Abstract]
    We address the problem of using hand-drawn sketch to edit facial identity, such as enlarging the shape or modifying the position of eyes or mouth, in the whole video. This task is formulated as a 3D face model reconstruction and deformation problem. We first introduce a two-stage real-time 3D face model fitting schema to recover facial identity and expressions from the video. We recognize the user’s editing intention from the input sketch as a set of facial modifications. A novel identity deformation algorithm is then proposed to transfer these deformations from 2D space to 3D facial identity directly, while preserving the facial expressions. Finally, these changes are propagated to the whole video with the modified identity. Experimental results demonstrate that our method can effectively edit facial identity in video based on the input sketch with high consistency and fidelity.
    @Electronic{zhao17arx,
    author = {Long Zhao and Fangda Han and Mubbasir Kapadia and Vladimir Pavlovic and Dimitris Metaxas},
    title = {Sketch-based Face Editing in Video Using Identity Deformation Transfer},
    year = {2017},
    abstract = {We address the problem of using hand-drawn sketch to edit facial identity, such as enlarging the shape or modifying the position of eyes or mouth, in the whole video. This task is formulated as a 3D face model reconstruction and deformation problem. We first introduce a two-stage real-time 3D face model fitting schema to recover facial identity and expressions from the video. We recognize the user's editing intention from the input sketch as a set of facial modifications. A novel identity deformation algorithm is then proposed to transfer these deformations from 2D space to 3D facial identity directly, while preserving the facial expressions. Finally, these changes are propagated to the whole video with the modified identity. Experimental results demonstrate that our method can effectively edit facial identity in video based on the input sketch with high consistency and fidelity.},
    date = {2017-03-25},
    eprint = {1703.08738v1},
    eprintclass = {cs.CV},
    eprinttype = {arXiv},
    keywords = {cs.CV, face animation},
    }