Supervisors: Dr. Shuhang Gu, Dr. Radu Timofte
Supervised super-resolution models have become a preferred solution for recovering high-resolution images from low-resolution inputs by restoring high-frequency components and removing undesired blur, noise, and resolution degradation. In recent years, the availability of high-performance computing has led to complex convolutional neural networks with astonishing performance improvements. However, this increased complexity has also become the biggest obstacle to deploying such models on end devices such as mobile phones. Moreover, a growing number of practical applications require models that are flexible at inference time, so that they can adapt to deployment constraints or so that a single trained model can be deployed on many devices that are similar in architecture but differ in computational capability. Under constrained memory, we are the first to present two concrete models for anytime super-resolution that achieve state-of-the-art image enhancement. We introduce an adapted training method that increases PSNR at early exits. Furthermore, we present a detailed analysis of different network structures for image enhancement with respect to their parameter counts and floating-point operations. Based on this in-depth analysis, we refine two models that, depending on the deployment constraints, are extremely memory-efficient and achieve excellent image enhancement under tight time constraints, while still reaching state-of-the-art performance when sufficient computational resources are available.
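To illustrate the anytime-inference idea summarised above, the following is a minimal schematic sketch (not the thesis code): the model is a chain of refinement stages, each followed by an exit point, and inference stops as soon as the compute budget is spent. The toy `stage` function and the image sizes are hypothetical stand-ins; a real anytime super-resolution network would use learned convolutional blocks with reconstruction heads at each exit.

```python
import numpy as np

def psnr(pred, ref, peak=1.0):
    # Peak signal-to-noise ratio in dB between a prediction and a reference.
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def anytime_forward(lr, stages, budget):
    # Run refinement stages until the budget (number of stages) is spent,
    # returning the estimate from the last completed exit.
    est = lr
    for i, stage in enumerate(stages):
        if i >= budget:
            break
        est = stage(est)
    return est

# Toy setup: each "stage" halves the residual toward a clean target image.
# This oracle refiner only mimics the monotone quality gain of deeper exits.
rng = np.random.default_rng(0)
hr = rng.random((8, 8))                       # clean high-resolution patch
lr = hr + 0.2 * rng.standard_normal((8, 8))   # degraded observation
stages = [lambda x: x + 0.5 * (hr - x)] * 4

scores = [psnr(anytime_forward(lr, stages, b), hr) for b in range(1, 5)]
# PSNR rises with the budget: each additional stage refines the output,
# so every exit trades compute for reconstruction quality.
assert all(later > earlier for earlier, later in zip(scores, scores[1:]))
```

The multi-exit training idea mentioned in the abstract can then be mimicked by a weighted sum of per-exit losses, where larger weights on early exits correspond to the adapted training method that boosts early-exit PSNR.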