
Generator From Edges: Reconstruction of Facial Images

EasyChair Preprint 4400

14 pages
Date: October 14, 2020

Abstract

Applications that involve supervised training require paired images. Researchers of single image super-resolution (SISR) create such pairs by artificially generating blurry input images from the corresponding ground truth. Similarly, we can create paired images with the Canny edge detector. We propose Generator From Edges (GFE) [Figure 1]. Our aim is to determine the best architecture for GFE, along with a review of perceptual loss [1, 2]. To this end, we conducted three experiments. First, we explored the effects of the adversarial loss often used in SISR. In particular, we found that it is not an essential component of a perceptual loss. Eliminating the adversarial loss leads to a more efficient architecture in terms of hardware resources. It also means that the problems associated with generative adversarial networks (GAN) [3], such as mode collapse, need not be considered. Second, we reexamined VGG loss and found that the mid-layers yield the best results. By extracting the full potential of VGG loss, the overall performance of the perceptual loss improves significantly. Third, based on the findings of the first two experiments, we reevaluated the dense network to construct GFE. Using GFE as an intermediate process, reconstructing a facial image from a pencil sketch becomes an easy task.
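As an illustration of the pipeline the abstract describes, the sketch below shows (a) how a paired (edge, ground-truth) sample can be produced with the Canny detector, and (b) a perceptual loss built from a mid-layer of VGG-19 with no adversarial term. This is a minimal sketch, not code from the paper: it assumes PyTorch, torchvision, and OpenCV, and the function names, Canny thresholds, and VGG layer index (relu4_4) are illustrative choices rather than the authors' exact settings.

  # Minimal sketch (assumptions: PyTorch, torchvision, OpenCV; thresholds and
  # layer index are illustrative, not the paper's reported configuration).
  import cv2
  import torch
  import torch.nn as nn
  from torchvision import models

  def make_edge_pair(image_path, low=100, high=200):
      """Create a paired (edge map, ground truth) sample via Canny edge detection."""
      gt = cv2.imread(image_path)                      # ground-truth facial image (BGR)
      gray = cv2.cvtColor(gt, cv2.COLOR_BGR2GRAY)
      edges = cv2.Canny(gray, low, high)               # binary edge map used as generator input
      return edges, gt

  class VGGMidLayerLoss(nn.Module):
      """Perceptual loss from a mid-layer of VGG-19; no adversarial term is used."""
      def __init__(self, layer_index=27):              # features[:27] ends at relu4_4 (illustrative)
          super().__init__()
          vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
          self.features = nn.Sequential(*list(vgg.children())[:layer_index]).eval()
          for p in self.features.parameters():         # VGG is a fixed feature extractor
              p.requires_grad = False
          self.criterion = nn.MSELoss()

      def forward(self, generated, target):
          # Compare feature maps of generated and ground-truth images
          return self.criterion(self.features(generated), self.features(target))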

Keyphrases: Canny edges, generative adversarial network, facial image

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:4400,
  author       = {Nao Takano and Gita Alaghband},
  title        = {Generator From Edges: Reconstruction of Facial Images},
  howpublished = {EasyChair Preprint 4400},
  year         = {EasyChair, 2020}}