June 18, Salt Lake City, Utah

NTIRE 2018

New Trends in Image Restoration and Enhancement workshop

and challenges on super-resolution, dehazing, and spectral reconstruction

in conjunction with CVPR 2018

Sponsors







Call for papers

Image restoration and image enhancement are key computer vision tasks, aiming at the restoration of degraded image content or the filling in of missing information. Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but also substantial progress has been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, with image restoration or enhancement serving as an important frontend. Not surprisingly, then, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, or medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

Papers addressing topics related to image restoration and enhancement are invited. The topics include, but are not limited to:

  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video dehazing
  • Demosaicing
  • Image/video compression
  • Artifact removal
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Hyperspectral imaging
  • Underwater imaging
  • Aerial and satellite imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Studies and applications of the above.

NTIRE 2018 has the following associated challenges:

  • the example-based single image super-resolution challenge
  • the image dehazing challenge
  • the spectral reconstruction from RGB images challenge

The authors of the top methods in each category will be invited to submit papers to the NTIRE 2018 workshop.

The authors of the top methods will co-author the challenge reports.

The accepted NTIRE workshop papers will be published under the book title "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.

Contact:

Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory

ETH Zurich, Switzerland

NTIRE 2018 challenge on image super-resolution

In order to gauge the current state-of-the-art in (example-based) single-image super-resolution under realistic conditions, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference.

The challenge has 4 tracks as follows:

  1. Track 1: classic bicubic uses bicubic downscaling (MATLAB imresize), the most common setting in the recent single-image super-resolution literature (a minimal downscaling sketch follows this list).
  2. Track 2: realistic mild adverse conditions assumes that the degradation operators (emulating the image acquisition process of a digital camera) can be estimated through training pairs of low- and high-resolution images. The degradation operators are the same within an image space and for all the images.
  3. Track 3: realistic difficult adverse conditions uses the same setting as Track 2, but with more severe degradations.
  4. Track 4: realistic wild conditions assumes that the degradation operators (emulating the image acquisition process of a digital camera) can be estimated through training pairs of low- and high-resolution images. The degradation operators are the same within an image space but DIFFERENT from one image to another. This setting is the closest to real "wild" conditions.
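For Track 1, the low-resolution inputs come from plain bicubic downscaling of the high-resolution images. Below is a minimal Python sketch of such pair generation; note that the challenge uses MATLAB's imresize, whose antialiased bicubic kernel is close to, but not bit-exact with, Pillow's resampling, and the scale factor and file names here are purely illustrative.

```python
# Sketch: generating a bicubic low-resolution input from an HR image,
# as in Track 1. Illustrative only: the challenge data was produced with
# MATLAB's imresize, which is not bit-exact with Pillow's bicubic filter.
from PIL import Image

def make_lr(hr_path, lr_path, scale=4):
    hr = Image.open(hr_path)
    # Crop so both dimensions are divisible by the scale factor.
    w, h = hr.size
    hr = hr.crop((0, 0, w - w % scale, h - h % scale))
    lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
    lr.save(lr_path)

make_lr("0001.png", "0001_lr.png", scale=4)  # hypothetical file names
```

Tracks 2-4 replace this fixed, known operator with degradation operators that must themselves be estimated from the provided LR/HR training pairs.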

To learn more about the challenge, to participate in it, and to access the validation and test images, everyone is invited to register at the above links!

The (train) data is made available to the registered participants on the CodaLab platform.

The training data consists of 800 HR images and corresponding LR images generated to match the conditions of each track.

The top ranked participants will be invited to co-author the challenge paper report.



NTIRE 2018 challenge on image dehazing

In order to gauge the current state-of-the-art in image dehazing, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference. Novel datasets of real hazy images, captured in indoor and outdoor environments together with their ground truth, are introduced with the challenge. It is the first online image dehazing challenge.

The challenge has 2 tracks:

  1. Track 1: Indoor - the goal is to restore the visibility in images with haze generated in a controlled indoor environment.
  2. Track 2: Outdoor - the goal is to restore the visibility in outdoor images with haze generated using a professional haze/fog generator.

To learn more about the challenge, to participate in it, and to access the train, validation, and test images, everyone is invited to register at the above links!

The (training) data is already made available to the registered participants.

The indoor training data consists of 25 hazy images (with haze generated in a controlled indoor environment) and their corresponding ground truth (haze-free) images of the same scenes.

The outdoor training data consists of 35 hazy images and corresponding ground truth (haze-free) images. The haze has been produced using a professional haze/fog generator that imitates the real conditions of hazy scenes.
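For context, most dehazing methods (including several of the accepted papers listed below) assume the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)), where J is the haze-free scene, t the transmission, and A the airlight. The challenge images were captured with a real fog generator rather than synthesized with this model, but a minimal synthesis sketch (with purely illustrative constants) clarifies what "restoring visibility" means:

```python
# Sketch: the atmospheric scattering model commonly assumed in dehazing,
#   I(x) = J(x) * t(x) + A * (1 - t(x)),
# where J is the haze-free scene, t the transmission, and A the airlight.
# The challenge data comes from a real fog generator; this synthetic model
# is shown only to clarify the restoration target. Constants are illustrative.
import numpy as np

def add_haze(clean, depth, beta=1.0, airlight=0.8):
    """clean: HxWx3 float image in [0, 1]; depth: HxW depth map in meters."""
    t = np.exp(-beta * depth)   # transmission decays with scene depth
    t = t[..., None]            # broadcast over the color channels
    return clean * t + airlight * (1.0 - t)

# Illustrative usage with random data:
clean = np.random.rand(480, 640, 3)
depth = np.random.rand(480, 640) * 5.0
hazy = add_haze(clean, depth)
```

A dehazing method has to invert this kind of degradation, i.e., estimate a plausible J from the observed hazy image alone.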

The top ranked participants will be invited to co-author the challenge paper report.



NTIRE 2018 challenge on spectral reconstruction from RGB images

In order to gauge the current state-of-the-art in spectral reconstruction from RGB images, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference. The largest dataset to date will be introduced with the challenge. It is the first online challenge on spectral reconstruction from RGB images.

The challenge has 2 tracks:

  1. Track 1: “Clean” – recovering hyperspectral data from uncompressed 8-bit RGB images created by applying a known response function to ground-truth hyperspectral information (an illustrative projection sketch follows this list).
  2. Track 2: “Real World” – recovering hyperspectral data from JPEG-compressed 8-bit RGB images created by applying an unknown response function to ground-truth hyperspectral information.
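In the simplest (illustrative) linear form, creating the Track 1 "Clean" RGB images amounts to projecting each pixel's spectrum onto three response curves and quantizing to 8 bits; Track 2 additionally applies JPEG compression under an unknown response. A minimal sketch is below; the response matrix is a random placeholder, not the challenge's actual response function.

```python
# Sketch: projecting a hyperspectral cube to 8-bit RGB with a camera
# response matrix, in the spirit of Track 1 ("Clean"). The random response
# matrix is purely illustrative; the challenge uses its own known function.
import numpy as np

def spectral_to_rgb(cube, response):
    """cube: HxWxB hyperspectral image; response: Bx3 response matrix."""
    h, w, b = cube.shape
    rgb = cube.reshape(-1, b) @ response       # integrate over the B bands
    rgb = rgb.reshape(h, w, 3)
    rgb = np.clip(rgb / rgb.max(), 0.0, 1.0)   # normalize to [0, 1]
    return np.uint8(np.round(rgb * 255))       # quantize to 8 bits

cube = np.random.rand(128, 128, 31)            # e.g., 31 spectral bands
response = np.random.rand(31, 3)               # hypothetical sensitivities
rgb = spectral_to_rgb(cube, response)
```

The challenge task is the inverse problem: recovering the B-band spectrum at each pixel from only these three values.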

To learn more about the challenge, to participate in it, and to access the train, validation, and test images, everyone is invited to register at the above links!

The training data consists of 254 spectral images (Train1 with 201 images from the ICVL dataset and Train2 with 53 newly collected images) and corresponding RGB images generated to match the conditions of each track.

The top ranked participants will be invited to co-author the challenge paper report.

Important dates



Challenges (all deadlines at 5PM Pacific Time)

  • Site online: December 31, 2017
  • Release of train data and validation data (only low-res/RGB/hazy images): January 10, 2018
  • Validation server online: January 15, 2018
  • Final test data release (only low-res/RGB/hazy images); validation data (high-res/spectral/clean images) released; validation server closed: March 15, 2018
  • Test restoration results submission deadline: March 22, 2018
  • Fact sheets submission deadline: March 24, 2018
  • Code/executable submission deadline: March 24, 2018
  • Final test results release to the participants: March 28, 2018
  • Paper submission deadline for entries from the challenges: April 12, 2018 (extended!)

Workshop (all deadlines at 5PM Pacific Time)

  • Paper submission server online: February 1, 2018
  • Paper submission deadline: March 22, 2018 (extended!)
  • Paper submission deadline (only for methods from the challenges!): April 12, 2018 (extended!)
  • Decision notification: April 14, 2018
  • Camera-ready deadline: April 19, 2018
  • Workshop day: June 18, 2018

Submit



Instructions and Policies
Format and paper length

A paper submission has to be in English, in PDF format, and at most 8 pages (excluding references) in double-column format. The paper format must follow the same guidelines as all CVPR 2018 submissions:
http://cvpr2018.thecvf.com/submission/main_conference/author_guidelines

Double-blind review policy

The review process is double blind. Authors do not know the names of the chair/reviewers of their papers. Reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the CVPR 2018 main conference only. If a paper is also submitted to CVPR and accepted there, it cannot be published at both CVPR and the workshop.

Submission site

https://cmt3.research.microsoft.com/NTIRE2018

Proceedings

Accepted and presented papers will be published after the conference in the CVPR Workshops proceedings, together with the CVPR 2018 main conference papers.

Author Kit

http://cvpr2018.thecvf.com/files/cvpr2018AuthorKit.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the example egpaper_for_review.pdf for detailed formatting instructions.

People



Organizers

Radu Timofte

Radu Timofte is a research group leader (lecturer) in the Computer Vision Laboratory at ETH Zurich, Switzerland. He obtained a PhD degree in Electrical Engineering at KU Leuven, Belgium, in 2013, an MSc at the University of Eastern Finland in 2007, and a Dipl. Eng. at the Technical University of Iasi, Romania, in 2006. He serves as a reviewer for top journals (such as TPAMI, TIP, IJCV, TNNLS, TCSVT, CVIU, PR) and conferences (ICCV, CVPR, ECCV, NIPS, ICML), and is an area editor for Elsevier's CVIU journal. He serves as an area chair for ACCV 2018 and received a NIPS 2017 best reviewer award. His work received the best scientific paper award at ICPR 2012, the best paper award at the CVVT workshop (ECCV 2012), the best paper award at the ChaLearn LAP workshop (ICCV 2015), the best scientific poster award at EOS 2017, and an honorable mention award at FG 2017, and his team won a number of challenges, including traffic sign detection (IJCNN 2013) and apparent age estimation (ICCV 2015). He is a co-founder of Merantix and an organizer of the NTIRE events. His current research interests include sparse and collaborative representations, deep learning, implicit models, optical flow, compression, and image restoration and enhancement.

Shuhang Gu

Shuhang Gu received the B.E. degree from the School of Astronautics, Beijing University of Aeronautics and Astronautics, China, in 2010, the M.E. degree from the Institute of Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, China, in 2013, and the Ph.D. degree from the Department of Computing, The Hong Kong Polytechnic University, in 2017. He currently holds a post-doctoral position at ETH Zurich, Switzerland. His research interests include image restoration, enhancement, and compression.

Jiqing Wu

Jiqing Wu received the B.Sc. degree in mechanical engineering from Shanghai Maritime University, China, in 2006, the B.Sc. degree in mathematics from TU Darmstadt, Germany, in 2012, and the M.Sc. degree in mathematics from ETH Zurich, Switzerland, in 2015. He is currently pursuing a PhD degree under the supervision of Prof. Luc Van Gool in his lab at ETH Zurich. His research interests mainly concern image demosaicing, image restoration, and image generation.

Ming-Hsuan Yang

Ming-Hsuan Yang received the PhD degree in Computer Science from the University of Illinois at Urbana-Champaign. He is a full professor of Electrical Engineering and Computer Science at the University of California at Merced. He has published more than 120 papers in the field of computer vision. He served as a program co-chair of ACCV 2014 and general co-chair of ACCV 2016, and serves as a program co-chair of ICCV 2019. He is an editor for PAMI, IJCV, CVIU, IVC, and JAIR. His research interests include object detection, tracking, recognition, image deblurring, super-resolution, saliency detection, and image/video segmentation.

Lei Zhang

Lei Zhang (M’04, SM’14) received his B.Sc. degree in 1995 from Shenyang Institute of Aeronautical Engineering, Shenyang, P.R. China, and his M.Sc. and Ph.D. degrees in Control Theory and Engineering from Northwestern Polytechnical University, Xi’an, P.R. China, in 1998 and 2001, respectively. From 2001 to 2002, he was a research associate in the Department of Computing, The Hong Kong Polytechnic University. From January 2003 to January 2006, he worked as a Postdoctoral Fellow in the Department of Electrical and Computer Engineering, McMaster University, Canada. In 2006, he joined the Department of Computing, The Hong Kong Polytechnic University, as an Assistant Professor. Since July 2017, he has been a Chair Professor in the same department. His research interests include computer vision, pattern recognition, image and video analysis, and biometrics. Prof. Zhang has published more than 200 papers in those areas. As of 2017, his publications have been cited more than 28,000 times in the literature. Prof. Zhang is an Associate Editor of IEEE Trans. on Image Processing, SIAM Journal on Imaging Sciences, and Image and Vision Computing. He has been a "Clarivate Analytics Highly Cited Researcher" from 2015 to 2017.

Luc Van Gool

Luc Van Gool received a degree in electro-mechanical engineering from the Katholieke Universiteit Leuven in 1981. Currently, he is a full professor of Computer Vision at ETH Zurich and the Katholieke Universiteit Leuven in Belgium, leading research and teaching at both places. He has authored over 300 papers. Luc Van Gool has been a program committee member of several major computer vision conferences (e.g., Program Chair of ICCV'05, Beijing; General Chair of ICCV'11, Barcelona, and of ECCV'14, Zurich). His main interests include 3D reconstruction and modeling, object recognition, and tracking and gesture analysis. He received several best paper awards (e.g., David Marr Prize '98, Best Paper CVPR'07, Tsuji Outstanding Paper Award ACCV'09, Best Vision Paper ICRA'09). In 2015 he received the five-yearly Excellence Award in Applied Sciences of the Flemish Fund for Scientific Research, in 2016 a Koenderink Prize, and in 2017 a PAMI Distinguished Researcher award. He is a co-founder of more than 10 spin-off companies and was the holder of an ERC Advanced Grant (VarCity). Currently, he leads computer vision research for autonomous driving in the context of the Toyota TRACE labs in Leuven and at ETH.

Cosmin Ancuti

Cosmin Ancuti received the PhD degree from Hasselt University, Belgium (2009). He was a post-doctoral fellow at iMinds and the Intel Exascience Lab (IMEC), Leuven, Belgium (2010-2012), and a research fellow at Université catholique de Louvain, Belgium (2015-2017). Currently, he is a senior researcher/lecturer at University Politehnica Timisoara. He is the author of more than 50 papers published in international conference proceedings and journals. His areas of interest include image and video enhancement techniques, computational photography, and low-level computer vision.

Codruta O. Ancuti

Codruta O. Ancuti is a senior researcher/lecturer at University Politehnica Timisoara, Faculty of Electrical and Telecommunication Engineering. She obtained the PhD degree from Hasselt University, Belgium (2011), and between 2015 and 2017 she was a research fellow at the University of Girona, Spain (ViCOROB group). Her work received the best paper award at NTIRE 2017 (CVPR workshop). Her main research interests include image understanding and visual perception. She was the first to introduce several single-image enhancement techniques built on multi-scale fusion (e.g., color-to-grayscale conversion, image dehazing, and underwater image and video restoration).

Boaz Arad

Boaz Arad is a Ph.D. student in the Interdisciplinary Computational Vision Laboratory at Ben-Gurion University of the Negev, Israel. Alongside Prof. Ben-Shahar, Boaz collected and curates the largest natural hyperspectral image database published to date. For his work on hyperspectral data recovery he was awarded the EMVA “Young Professional Award 2017” as well as the Zlotowski Center for Neuroscience “Best Research Project of 2016” award. Technologies developed during his Ph.D. studies are currently being commercialized by the BGU-based startup HC Vision.

Ohad Ben-Shahar

Prof. Ohad Ben-Shahar serves as head of the Computer Science Department at Ben-Gurion University of the Negev. Since founding the BGU Interdisciplinary Computational Vision Lab in 2006, Ohad has led a research group focused on advancing the state of knowledge about biological vision while at the same time contributing to various aspects of machine vision, both theoretical and applied.

Program committee

Invited Talks



Ming-Yu Liu

NVIDIA

Title: High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

Abstract: We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low resolution and are still far from realistic. In this work, we generate 2048x1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing/adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
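The multi-scale discriminator idea mentioned in the abstract can be sketched compactly: apply identically structured discriminators to progressively downsampled copies of the image, so that coarse scales judge global structure and fine scales judge detail. A minimal PyTorch-style sketch of that general technique follows; it is not the authors' implementation, and all layer sizes are illustrative.

```python
# Sketch of a multi-scale discriminator: identical patch discriminators
# applied to progressively downsampled copies of the input image.
# Illustrative only; not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(ch * 2, 1, 4, stride=1, padding=1),  # per-patch scores
        )

    def forward(self, x):
        return self.net(x)

class MultiScaleDiscriminator(nn.Module):
    def __init__(self, num_scales=3):
        super().__init__()
        self.discs = nn.ModuleList(PatchDiscriminator() for _ in range(num_scales))

    def forward(self, x):
        outputs = []
        for disc in self.discs:
            outputs.append(disc(x))
            x = F.avg_pool2d(x, 3, stride=2, padding=1)  # next, coarser scale
        return outputs
```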

Bio: Ming-Yu Liu is a senior research scientist at NVIDIA Research. Before joining NVIDIA Research, he was a principal research scientist at Mitsubishi Electric Research Labs (MERL) from 2012 to 2016. He received his Ph.D. from the Department of Electrical and Computer Engineering at the University of Maryland, College Park, in 2012. The object pose estimation algorithm he developed was a major component of a commercial vision-based robotic bin-picking system, which was named one of the 100 most innovative technology products of the year by R&D Magazine in 2014. His street scene understanding paper was selected as a best paper finalist at the Robotics: Science and Systems (RSS) conference in 2015. Recently, his research focus has shifted to generative models for image understanding and generation. His goal is to give machines superhuman-like imagination capabilities.

Liang Lin

SenseTime, Sun Yat-sen Univ.

Title: When Depth Estimation Meets Deep Learning

Abstract: Depth data is indispensable for reconstructing or understanding 3D scenes. It serves as a key ingredient for applications such as synthetic defocus, autonomous driving, and augmented reality. Although active 3D sensors (e.g., LiDAR, ToF, and structured-light 3D scanners) can be employed, retrieving depth from monocular/stereo cameras is typically a more cost-effective approach. However, estimating depth from images is inherently under-determined; to regularize the problem, one typically needs handcrafted models characterizing the properties of depth data or scene geometry. With the recent advances in deep learning, depth estimation has been cast as a learning task, leading to state-of-the-art performance. In this talk, I will present our new progress on depth estimation with convolutional neural networks (CNNs). In particular, I will first introduce cascade residual learning (CRL), our two-stage deep architecture for stereo matching that produces high-quality disparity estimates. Observations with CRL inspired us to propose a domain-adaptation approach, zoom and learn (ZOLE), for training a deep stereo matching algorithm without ground-truth data from the target domain. By combining a view synthesis network with the first stage of CRL, we propose single view stereo matching (SVS) for single-image depth estimation, with performance superior to the classic stereo block matching method that takes two images as input.
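As a small aside for readers outside stereo vision: once a network such as CRL predicts disparity for a rectified stereo pair, depth follows from the standard pinhole relation depth = focal_length × baseline / disparity. The focal length and baseline below are illustrative:

```python
# Disparity-to-depth conversion for a rectified stereo pair, using the
# standard pinhole relation. Focal length and baseline are illustrative.
def disparity_to_depth(disparity_px, focal_px=1000.0, baseline_m=0.54):
    return focal_px * baseline_m / disparity_px

print(disparity_to_depth(27.0))  # -> 20.0 meters
```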

Bio: Liang Lin is the Executive R&D Director of SenseTime Group Limited and a full Professor at Sun Yat-sen University. He has published more than 100 papers in top-tier academic journals and conferences. He currently leads the SenseTime R&D teams in developing cutting-edge, deliverable solutions in computer vision, data analysis and mining, and intelligent robotic systems. He has been an Associate Editor of IEEE Trans. on Human-Machine Systems and has served as area/session chair for numerous conferences, such as CVPR, ICME, ACCV, and ICMR. He was the recipient of the Best Paper Runner-Up Award at ACM NPAR 2010, a Google Faculty Award in 2012, the Best Paper Diamond Award at IEEE ICME 2017, and the Hong Kong Scholars Award in 2014. He is a Fellow of IET.

William T. Freeman

MIT / Google

Title: Copying and editing images

Abstract: TBA

Bio: William T. Freeman is the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) there. He was the Associate Department Head from 2011 - 2014. His current research interests include machine learning applied to computer vision, Bayesian models of visual perception, and computational photography. He received outstanding paper awards at computer vision or machine learning conferences in 1997, 2006, 2009 and 2012, and test-of-time retrospective awards for papers from 1990, 1995, and 2005. Previous research topics include steerable filters and pyramids, orientation histograms, the generic viewpoint assumption, color constancy, computer vision for computer games, and belief propagation in networks with loops. He is active in the program or organizing committees of computer vision, graphics, and machine learning conferences. He was the program co-chair for ICCV 2005, and for CVPR 2013.

Xian-Sheng Hua

Alibaba

Title: Computer Vision Technologies in City Brain

Abstract: Massive data is accumulating in a city every day, and the problem is how to explore the potential value of this data. City Brain is a city-level system including, but not limited to, data perception, policy & optimization, search & mining, and prediction & intervention. Computer vision technologies are widely applied in City Brain, such as image classification, object detection, multi-object tracking, semantic segmentation, and image retrieval, as well as low-level image processing such as super-resolution and image denoising. In this talk, we will show how these technologies are applied in City Brain.

Bio: Xian-Sheng Hua received a BS in Information Science from Peking University in 1996 and a PhD in Applied Mathematics from Peking University in 2001. He is a Distinguished Engineer / Deputy Managing Director of the Machine Intelligence Technology Lab, DAMO Academy, Alibaba Group. Before that, he was a Senior Researcher at Microsoft Research. He is an IEEE Fellow and a recipient of the MIT TR35 Young Innovator Award. His current research focuses on large-scale video/image content analysis, understanding, retrieval, and search, image search engines, video/image advertising, machine learning, and pattern recognition.

Graham Finlayson

Univ. of East Anglia, Spectral Edge Ltd, Simon Fraser Univ.

Title: Metamer Sets

Abstract: In this talk I review the ideas behind metamer sets. Given knowledge of the spectral sensitivities of the camera and the spectral power distribution of the illuminant, we show how to recover the metamer set for an RGB: the set of all plausible reflectances which might have induced that RGB.
Applications of metamer set theory range from providing valuable insights into how we see colour, to improved colour correction, to ‘pass/fail’ vision (knowing that an ROI is definitely not a particular object).
Calculating metamer sets is not easy, so I will present new work that reformulates the problem to allow faster computation.
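Under a linear camera model, the RGB of a reflectance r is rgb = Sᵀ diag(e) r, with S the sensor sensitivities and e the illuminant, so the metamer set is the intersection of an affine family (one particular solution plus the null space of the 3×B system matrix) with physical constraints such as 0 ≤ r ≤ 1. A minimal sketch of the affine part, with made-up sensitivities and illuminant:

```python
# Sketch: the unconstrained affine family of reflectances mapping to one
# RGB under a linear camera model rgb = S.T @ diag(e) @ r. Sensitivities
# and illuminant are random placeholders; the true metamer set further
# intersects this family with constraints such as 0 <= r <= 1.
import numpy as np

n_bands = 31
S = np.random.rand(n_bands, 3)        # hypothetical sensor sensitivities
e = np.random.rand(n_bands)           # hypothetical illuminant spectrum
M = S.T * e                           # 3 x n_bands system matrix

rgb = np.array([0.4, 0.5, 0.3])

r0, *_ = np.linalg.lstsq(M, rgb, rcond=None)  # one particular solution
_, _, Vt = np.linalg.svd(M)
null_basis = Vt[3:]                           # n_bands - 3 null directions

# Every r0 + null_basis.T @ alpha produces exactly the same RGB:
alpha = np.random.randn(n_bands - 3)
r = r0 + null_basis.T @ alpha
assert np.allclose(M @ r, rgb)
```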

Bio: Graham Finlayson is a Professor of Computer Science at the University of East Anglia. He joined UEA in 1999 when he was awarded a full professorship at the age of 30. He was, and remains, the youngest-ever professorial appointment at that institution. Graham trained in computer science first at the University of Strathclyde and then, for his masters and doctoral degrees, at Simon Fraser University, where he was awarded a ‘Dean’s medal’ for the best PhD dissertation in his faculty. Prior to joining UEA, Graham was a lecturer at the University of York and then a founder and Reader at the Colour and Imaging Institute at the University of Derby. Professor Finlayson is interested in ‘computing how we see’, and his research spans computer science (algorithms), engineering (embedded systems), and psychophysics (visual perception). He has published over 50 journal papers and over 200 refereed conference papers, and holds 25+ patents. He has won best paper prizes at several conferences, including “The 5th IS&T conference on Colour in Graphics, Imaging and Vision” (2010) and “the IEE conference on Visual Information Engineering” (1995). Many of Graham’s patents are implemented and used in commercial products, including photo processing software, dedicated image processing hardware (ASICs), and embedded camera software. Graham’s research is funded from a number of sources, including government, industry, and investment in spin-out companies. Industrial partners include Apple, Hewlett Packard, Sony, Xerox, Unilever, and Buhler-Sortex. Significantly, Graham was the first academic at UEA (in its 50-year history) either to raise venture capital investment for a spin-out company (Imsense Ltd, which developed technology to make pictures look better) or to make money for the university when this company was subsequently sold to a blue-chip industry major in 2010. He is interested in taking the creative spark of an idea, developing the underlying theory and algorithms, and then implementing and commercialising the technology. Graham's IP ships in hundreds of millions of products. In 2002, Graham was awarded the Philip Leverhulme prize for science, and in 2008 a Royal Society-Wolfson Merit award. In 2009, the Royal Photographic Society presented Graham with the Davies medal in recognition of his contributions to the photographic industry; the RPS made him a fellow in 2012. In recognition of distinguished service to the Society for Imaging Science and Technology, Graham was elected a fellow of that society in 2010. In January 2013, Graham was also elected to a fellowship of the Institution of Engineering and Technology.

Schedule


The 29 accepted NTIRE workshop papers will be published under the book title "2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.



08:10
Poster setup (Hall A; Halls 1-4) (all papers have poster panels for the whole day)

DPW-SDNet: Dual Pixel-Wavelet Domain Deep CNNs for Soft Decoding of JPEG-Compressed Images
Honggang Chen, Xiaohai He, Linbo Qing, Shuhua Xiong, Truong Nguyen
NTIRE 2018 Challenge on Single Image Super-Resolution: Methods and Results
Radu Timofte, Shuhang Gu, Jiqing Wu, Luc Van Gool, Lei Zhang, Ming-Hsuan Yang, et al.
Attribute Augmented Convolutional Neural Network for Face Hallucination
Cheng-Han Lee, Kaipeng Zhang, Hu Cheng Lee, Chia-Wen Cheng, Winston Hsu
Recursive Deep Residual Learning for Single Image Dehazing
Yixin Du, Xin Li
Unsupervised Image Super-Resolution using Cycle-in-Cycle Generative Adversarial Networks
Yuan Yuan, Siyuan Liu, Jiawei Zhang, Yongbing Zhang, Chao Dong, Liang Lin
Synthesized Texture Quality Assessment via Multi-scale Spatial and Statistical Texture Attributes of Image and Gradient Magnitude Coefficients
Alireza Golestaneh, Lina Karam
Learning Face Deblurring Fast and Wide
Meiguang Jin, Michael Hirsch, Paolo Favaro
O-HAZE: a dehazing benchmark with real hazy and haze-free outdoor images
Codruta Ancuti, Cosmin Ancuti, Radu Timofte, Christophe De Vleeschouwer
Large Receptive Field Networks for High-Scale Image Super-Resolution
George Seif, Dimitrios Androutsos
Multi-level Wavelet-CNN for Image Restoration
Pengju Liu, Hongzhi Zhang, Kai Zhang, Liang Lin, Wangmeng Zuo
ComboGAN: Unrestrained Scalability for Image Domain Translation
Asha Anoosheh, Eirikur Agustsson, Radu Timofte, Luc Van Gool
NTIRE 2018 Challenge on Image Dehazing: Methods and Results
Cosmin Ancuti, Codruta O. Ancuti, Radu Timofte, Luc Van Gool, Lei Zhang, Ming-Hsuan Yang, et al.
A Fully Progressive Approach to Single-Image Super-Resolution
Yifan Wang, Federico Perazzi, Brian McWilliams, Alexander Sorkine-Hornung, Olga Sorkine-Hornung, Christopher Schroers
Image Dehazing by Joint Estimation of Transmittance and Airlight Using Bi-directional Consistency Loss Minimized FCN
Ranjan Mondal, Sanchayan Santra, Bhabatosh Chanda
WESPE: Weakly Supervised Photo Enhancer for Digital Cameras
Andrey Ignatov, Nikolay Kobyshev, Radu Timofte, Kenneth Vanhoey, Luc Van Gool
Image Super-resolution via Progressive Cascading Residual Network
Namhyuk Ahn, Byungkon Kang, Kyung-Ah Sohn
HSCNN+: Advanced CNN-Based Hyperspectral Recovery from RGB Images
Zhan Shi, Chang Chen, Zhiwei Xiong, Dong Liu, Feng Wu
Multi-scale Single Image Dehazing using Perceptual Pyramid Deep Network
He Zhang, Vishwanath Sindagi, Vishal M. Patel
Efficient Module Based Single Image Super Resolution for Multiple Problems
Dongwon Park, Kwanyoung Kim, Se Young Chun
Deep Residual Network with Enhanced Upscaling Module for Super-Resolution
Jun-Hyuk Kim, Jong-Seok Lee
Persistent Memory Residual Network for Single Image Super Resolution
Rong Chen, Yanyun Qu, Kun Zeng, Jinkang Guo, Li Cui-hua, Yuan Xie
NTIRE 2018 Challenge on Spectral Reconstruction from RGB Images
Boaz Arad, Ohad Ben-Shahar, Radu Timofte, Luc Van Gool, Lei Zhang, Ming-Hsuan Yang, et al.
Fully End-to-End learning based Conditional Boundary Equilibrium GAN with Receptive Field Sizes Enlarged for Single Ultra-High Resolution Image Dehazing
Sehwan Ki, Hyeonjun Sim, Soo Ye Kim, Jae-Seok Choi, Saehun Kim, Munchurl Kim
Reconstructing Spectral Images from RGB-Images using a Convolutional Neural Network
Tarek Stiebel, Simon Koppers, Philipp Seltsam, Dorit Merhof
New Techniques for Preserving Global Structure and Denoising with Low Information Loss in Single-Image Super-Resolution
Yijie Bei, Shijia Hu, Alexandru Damian, Nikhil Ravi, Cynthia Rudin, Sachit Menon
Cycle-Dehaze: Enhanced CycleGAN for Single Image Dehazing
Deniz Engin, Anil Genc, Hazim Ekenel
IRGUN : Improved Residue based Gradual Up-Scaling Network for Single Image Super Resolution
Manoj Sharma, Rudrabha Mukhopadhyay, Avinash Upadhyay, Sriharsha Koundinya, Ankit Shukla, Santanu Chaudhury
High-Resolution Image Dehazing with respect to Training Losses and Receptive Field Sizes
Hyeonjun Sim, Sehwan Ki, Soo Ye Kim, Jae-Seok Choi, Saehun Kim, Munchurl Kim
2D-3D CNN based architectures for spectral reconstruction from RGB images
Sriharsha Koundinya, Manoj Sharma, Himanshu Sharma, Avinash Upadhyay, Rudrabha Mukhopadhyay, Raunak Manekar, Abhijit Karmakar, Santanu Chaudhury