Supervisor: Dr. Radu Timofte
How awesome would it be if your smartphone took photos whose quality matched that of a DSLR camera? Limited by small sensor sizes and compact lenses, smartphone cameras suffer from low resolution, dull colors, blurry textures, and intense noise compared to DSLR cameras. Luckily, today's photo-enhancement techniques can narrow the gap. This project presents an adversarial-learning-based, weakly supervised deep neural network that automatically enhances low-quality photos. Specifically, we focus on the texture-learning aspect of photo enhancement. By encouraging rich texture and natural smoothness, the proposed framework produces better results than previous work while remaining elegant and effective. In this project, we experiment with adversarial grayscale, gradient, and Laplace losses, and study their effects on texture learning. We show that the commonly used total variation loss is a problematic way of enforcing smoothness and can be replaced by a gradient loss to obtain better results. To evaluate performance, we measure the similarity between enhanced and target patches with both traditional and deep-architecture-based metrics; the latter should correspond better to human visual perception. Both the performance metrics and the visual results indicate that the gradient loss is superior to the total variation loss: it preserves sharpness and detail while producing even smoother outputs.
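The contrast between the two smoothness terms can be sketched in a few lines. Below is a minimal NumPy illustration (our own simplification, not the project's actual implementation: the function names and the L1 formulation are assumptions, and the real framework operates on batched tensors inside an adversarial training loop). Total variation loss penalizes any gradient in the enhanced image, including real edges, whereas a gradient loss only penalizes gradients that differ from the target's, so genuine edges go unpunished.

```python
import numpy as np

def tv_loss(img):
    # Total variation: mean absolute difference between adjacent pixels
    # of the enhanced image alone. It penalizes ALL gradients, including
    # legitimate edges and fine texture, which encourages over-smoothing.
    dh = np.abs(np.diff(img, axis=0))  # vertical neighbor differences
    dw = np.abs(np.diff(img, axis=1))  # horizontal neighbor differences
    return dh.mean() + dw.mean()

def gradient_loss(enhanced, target):
    # Gradient loss: L1 distance between the gradient maps of the
    # enhanced image and the target. Noise is suppressed (the target has
    # no such gradients), but edges present in the target cost nothing.
    gh = np.abs(np.diff(enhanced, axis=0) - np.diff(target, axis=0))
    gw = np.abs(np.diff(enhanced, axis=1) - np.diff(target, axis=1))
    return gh.mean() + gw.mean()

# A sharp target with one hard vertical edge; a perfect enhancement.
target = np.zeros((4, 4))
target[:, 2:] = 1.0
enhanced = target.copy()

print(tv_loss(enhanced))                # > 0: the real edge is penalized
print(gradient_loss(enhanced, target))  # 0.0: matching gradients cost nothing
```

Even a perfect output pays a TV penalty for its real edges, which is why minimizing TV pushes results toward blur, while the gradient loss leaves a perfect reconstruction at zero cost.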