SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis

Authors: Wengling Chen, James Hays

Abstract: Synthesizing realistic images from human-drawn sketches is a challenging problem in computer graphics and vision. Existing approaches either need exact edge maps or require a database of images to retrieve from. In this work, we propose a novel Generative Adversarial Network (GAN) approach that synthesizes realistic-looking images of objects from 50 categories, including motorcycles, horses, and couches. We demonstrate a fully automatic data augmentation technique for sketches and show that the augmented data improves our results. We introduce a new network building block, suitable for both the generator and the discriminator, which improves information flow and uses the input image at multiple scales. Compared to state-of-the-art image translation methods, our approach generates more realistic images and achieves significantly higher Inception Scores.

Submitted to arXiv on 09 Jan. 2018
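The abstract does not spell out how the multi-scale building block is implemented; the snippet below is only an illustrative sketch of one plausible design, in PyTorch, for a block that re-injects a resized copy of the input sketch at every feature scale through a learned gate. The class name MultiScaleBlock and all layer choices are our assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleBlock(nn.Module):
    """Hypothetical block: merges the input sketch into intermediate
    features at the current resolution, gated by a learned mask
    (our reading of 'utilizes input images at multiple scales')."""

    def __init__(self, in_channels, out_channels, image_channels=1):
        super().__init__()
        # Predicts a per-pixel mask deciding how much of the incoming
        # features to keep when combining them with the sketch.
        self.mask_conv = nn.Conv2d(in_channels + image_channels, in_channels, 3, padding=1)
        # Main convolution over the gated features plus the sketch.
        self.feat_conv = nn.Conv2d(in_channels + image_channels, out_channels, 3, padding=1)

    def forward(self, features, sketch):
        # Resize the sketch to the current feature-map resolution.
        sketch = F.interpolate(sketch, size=features.shape[2:],
                               mode="bilinear", align_corners=False)
        stacked = torch.cat([features, sketch], dim=1)
        mask = torch.sigmoid(self.mask_conv(stacked))
        gated = torch.cat([features * mask, sketch], dim=1)
        return F.relu(self.feat_conv(gated))


# Usage: one block inside a generator conditioned on 64x64 grayscale sketches.
block = MultiScaleBlock(in_channels=32, out_channels=64)
feats = torch.randn(4, 32, 16, 16)   # intermediate feature maps
sketch = torch.randn(4, 1, 64, 64)   # input sketches
out = block(feats, sketch)           # -> shape (4, 64, 16, 16)
```

Because the block only needs the sketch and the current feature map, the same module could be stacked in both the generator and the discriminator, which is consistent with the abstract's claim that the block serves both networks.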
