June 18, Salt Lake City, Utah

NTIRE 2018

New Trends in Image Restoration and Enhancement workshop

and challenges on super-resolution, dehazing, and spectral reconstruction

in conjunction with CVPR 2018

Sponsors - to be updated




Call for papers

Image restoration and image enhancement are key computer vision tasks, aiming at the restoration of degraded image content or the filling in of missing information. Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but also substantial progress has been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, with image restoration or enhancement serving as an important frontend. Not surprisingly then, there is an ever growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, or medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

Papers addressing topics related to image restoration and enhancement are invited. The topics include, but are not limited to:

  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video dehazing
  • Demosaicing
  • Image/video compression
  • Artifact removal
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Hyperspectral imaging
  • Underwater imaging
  • Aerial and satellite imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Studies and applications of the above.

NTIRE 2018 has the following associated challenges:

  • the example-based single image super-resolution challenge
  • the image dehazing challenge
  • the spectral reconstruction from RGB images challenge

The authors of the top methods in each category will be invited to submit papers to the NTIRE 2018 workshop.

The authors of the top methods will co-author the challenge reports.

The accepted NTIRE workshop papers will be published under the book title "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.

Contact:

Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory

ETH Zurich, Switzerland

NTIRE 2018 challenge on image super-resolution (started!)

In order to gauge the current state-of-the-art in (example-based) single-image super-resolution under realistic conditions, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference.

The challenge has 4 tracks as follows:

  1. Track 1: classic bicubic uses bicubic downscaling (Matlab imresize), the most common setting in the recent single-image super-resolution literature.
  2. Track 2: realistic mild adverse conditions assumes that the degradation operators (emulating the image acquisition process of a digital camera) can be estimated through training pairs of low- and high-resolution images. The degradation operators are the same within an image space and for all the images.
  3. Track 3: realistic difficult adverse conditions assumes that the degradation operators (emulating the image acquisition process of a digital camera) can be estimated through training pairs of low- and high-resolution images. The degradation operators are the same within an image space and for all the images.
  4. Track 4: realistic wild conditions assumes that the degradation operators (emulating the image acquisition process of a digital camera) can be estimated through training pairs of low- and high-resolution images. The degradation operators are the same within an image space but DIFFERENT from one image to another. This setting is the closest to real "wild" conditions.

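The Track 1 setting above can be sketched as follows. This is a minimal illustration using Pillow, with a hypothetical helper name (`bicubic_downscale`); note that Matlab's imresize applies an antialiasing prefilter by default, so other libraries' bicubic resampling only approximates the official degradation.

```python
from PIL import Image

def bicubic_downscale(path, scale=4):
    """Downscale an image by an integer factor using bicubic interpolation.

    Approximates the classic bicubic setting (Matlab imresize); results
    may differ slightly from Matlab, which prefilters for antialiasing.
    """
    hr = Image.open(path)
    w, h = hr.size
    # Integer division drops any remainder pixels at the border.
    lr = hr.resize((w // scale, h // scale), resample=Image.BICUBIC)
    return lr
```

In practice the challenge provides the low-resolution images directly, so a sketch like this is only useful for generating extra training pairs from one's own high-resolution data.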
To learn more about the challenge, to participate in the challenge, and to access the validation and test images, everybody is invited to register at the above links!

The (training) data is made available to registered participants on the CodaLab platform.

The training data consists of 800 HR images and corresponding LR images generated to match the conditions of each track.

The top ranked participants will be invited to co-author the challenge paper report.



NTIRE 2018 challenge on image dehazing (started!)

In order to gauge the current state-of-the-art in single-image dehazing, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference. A novel dataset of real hazy images, captured in indoor and outdoor environments together with ground truth, is introduced with the challenge. It is the first online image dehazing challenge.

The challenge has 2 tracks:

  1. Track 1: Indoor - the goal is to restore the visibility in images with haze generated in a controlled indoor environment.
  2. Track 2: Outdoor - the goal is to restore the visibility in outdoor images with haze generated using a professional haze/fog generator.
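For intuition on what these tracks ask methods to undo, the dehazing literature commonly describes haze formation with the atmospheric scattering model I = J·t + A·(1 − t), where J is the clean scene, t the transmission map, and A the atmospheric light. This model is not part of the challenge data (which uses real haze from a generator); the sketch below only illustrates the standard forward model, with a hypothetical helper name.

```python
import numpy as np

def apply_haze(J, t, A):
    """Synthesize a hazy image with the atmospheric scattering model
    I = J * t + A * (1 - t).

    J : clean image, float array in [0, 1], shape (H, W, 3)
    t : transmission map in [0, 1], shape (H, W); small t = dense haze
    A : global atmospheric light, scalar or length-3 array
    """
    t = t[..., None]  # add a channel axis so t broadcasts over RGB
    return J * t + np.asarray(A) * (1.0 - t)
```

Dehazing methods effectively invert this mapping by estimating t and A from the hazy input alone, which is why paired hazy/haze-free data such as the challenge's is valuable for training and evaluation.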

To learn more about the challenge, to participate in the challenge, and to access the train, validation and test images, everybody is invited to register at the above links!

The (training) data is already made available to the registered participants.

The indoor training data consists of 25 hazy images (with haze generated in a controlled indoor environment) and the corresponding ground-truth (haze-free) images of the same scenes.

The outdoor training data consists of 35 hazy images and corresponding ground-truth (haze-free) images. The haze was produced using a professional haze/fog generator that imitates real hazy conditions.

The top ranked participants will be invited to co-author the challenge paper report.



NTIRE 2018 challenge on spectral reconstruction from RGB images (started!)

In order to gauge the current state-of-the-art in spectral reconstruction from RGB images, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference. The largest dataset to date will be introduced with the challenge. It is the first online challenge on spectral reconstruction from RGB images.

The challenge has 2 tracks:

  1. Track 1 “Clean”: recovering hyperspectral data from uncompressed 8-bit RGB images created by applying a known response function to ground-truth hyperspectral information.
  2. Track 2 “Real World”: recovering hyperspectral data from JPEG-compressed 8-bit RGB images created by applying an unknown response function to ground-truth hyperspectral information.
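The forward direction of both tracks (hyperspectral to RGB via a response function) can be sketched as a per-pixel linear projection. This is a minimal illustration with hypothetical names and shapes (the challenge's actual response functions and quantization pipeline may differ); the challenge task is the inverse problem, recovering the spectral cube from the RGB result.

```python
import numpy as np

def project_to_rgb(cube, response):
    """Project a hyperspectral cube to an 8-bit RGB image.

    cube     : (H, W, B) radiance/reflectance over B spectral bands
    response : (B, 3) per-band sensitivities of the R, G, B channels
    """
    # Weighted sum over the spectral axis -> (H, W, 3) float image.
    rgb = np.tensordot(cube, response, axes=([2], [0]))
    rgb /= rgb.max() + 1e-8          # normalize to [0, 1]
    return np.round(rgb * 255).astype(np.uint8)  # quantize to 8 bits
```

The 8-bit quantization (and, in Track 2, JPEG compression) discards information, which is precisely what makes the reconstruction ill-posed and learning-based priors necessary.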

To learn more about the challenge, to participate in the challenge, and to access the train, validation and test images, everybody is invited to register at the above links!

The training data consists of 254 spectral images (Train1 with 201 images from the ICVL dataset and Train2 with 53 newly collected images) and corresponding RGB images generated to match the conditions of each track.

The top ranked participants will be invited to co-author the challenge paper report.

Important dates



Challenges (all deadlines at 5PM Pacific Time)

  • Site online: December 31, 2017
  • Release of train data and validation data (only low-res/rgb/hazy images): January 10, 2018
  • Validation server online: January 15, 2018
  • Final test data release (only low-res/rgb/hazy images), validation data (high-res/spectral/clean images) released, validation server closed: February 20, 2018
  • Test restoration results submission deadline: February 27, 2018
  • Fact sheets submission deadline: March 1, 2018
  • Code/executable submission deadline: March 1, 2018
  • Final test results release to the participants: March 3, 2018
  • Paper submission deadline for entries from the challenges: March 12, 2018

Workshop (all deadlines at 5PM Pacific Time)

  • Paper submission server online: February 1, 2018
  • Paper submission deadline: March 1, 2018
  • Paper submission deadline (only for methods from challenges!): March 12, 2018
  • Decision notification: March 29, 2018
  • Camera ready deadline: April 5, 2018
  • Workshop day: June 18, 2018

Submit



Instructions and Policies
Format and paper length

A paper submission has to be in English, in pdf format, and at most 8 pages (excluding references) in double column. The paper format must follow the same guidelines as for all CVPR 2018 submissions.
http://cvpr2018.thecvf.com/submission/main_conference/author_guidelines

Double-blind review policy

The review process is double blind. Authors do not know the names of the chair/reviewers of their papers. Reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the CVPR 2018 main conference only. If a paper is also submitted to CVPR and accepted there, it cannot be published at both CVPR and the workshop.

Submission site

TBA

Proceedings

Accepted and presented papers will be published after the conference in CVPR Workshops proceedings together with the CVPR2018 main conference papers.

Author Kit

http://cvpr2018.thecvf.com/files/cvpr2018AuthorKit.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the example egpaper_for_review.pdf for detailed formatting instructions.

People



Organizers

Radu Timofte

Radu Timofte is research group leader (lecturer) in the Computer Vision Laboratory, at ETH Zurich, Switzerland. He obtained a PhD degree in Electrical Engineering at the KU Leuven, Belgium in 2013, the MSc at the Univ. of Eastern Finland in 2007, and the Dipl. Eng. at the Technical Univ. of Iasi, Romania in 2006. He serves as a reviewer for top journals (such as TPAMI, TIP, IJCV, TNNLS, TCSVT, CVIU, PR) and conferences (ICCV, CVPR, ECCV, NIPS, ICML) and is area editor for Elsevier’s CVIU journal. He serves as area chair for ACCV 2018. He received a NIPS 2017 best reviewer award. His work received a best scientific paper award at ICPR 2012, the best paper award at CVVT workshop (ECCV 2012), the best paper award at ChaLearn LAP workshop (ICCV 2015), the best scientific poster award at EOS 2017, the honorable mention award at FG 2017, and his team won a number of challenges including traffic sign detection (IJCNN 2013) and apparent age estimation (ICCV 2015). He is co-founder of Merantix and organizer of NTIRE events. His current research interests include sparse and collaborative representations, deep learning, implicit models, optical flow, compression, image restoration and enhancement.

Jiqing Wu

Jiqing Wu received the B.Sc. degree in mechanical engineering from Shanghai Maritime University, China, in 2006, the B.Sc. degree from TU Darmstadt, Germany, in 2012, and the M.Sc. degree from ETH Zurich, Switzerland, in 2015, the latter two in mathematics. He is currently pursuing a PhD degree under the supervision of Prof. Luc Van Gool, in his lab at ETH Zurich. His research interests mainly concern image demosaicing, image restoration, and image generation.

Ming-Hsuan Yang

Ming-Hsuan Yang received the PhD degree in Computer Science from the University of Illinois at Urbana-Champaign. He is a full professor in Electrical Engineering and Computer Science at the University of California, Merced. He has published more than 120 papers in the field of computer vision. Yang served as a program co-chair of ACCV 2014 and general co-chair of ACCV 2016, and serves as program co-chair of ICCV 2019. He serves as an editor for PAMI, IJCV, CVIU, IVC and JAIR. His research interests include object detection, tracking, recognition, image deblurring, super-resolution, saliency detection, and image/video segmentation.

Lei Zhang

Lei Zhang (M’04, SM’14) received his B.Sc. degree in 1995 from Shenyang Institute of Aeronautical Engineering, Shenyang, P.R. China, and M.Sc. and Ph.D. degrees in Control Theory and Engineering from Northwestern Polytechnical University, Xi’an, P.R. China, in 1998 and 2001, respectively. From 2001 to 2002, he was a research associate in the Department of Computing, The Hong Kong Polytechnic University. From January 2003 to January 2006 he worked as a Postdoctoral Fellow in the Department of Electrical and Computer Engineering, McMaster University, Canada. In 2006, he joined the Department of Computing, The Hong Kong Polytechnic University, as an Assistant Professor. Since July 2017, he has been a Chair Professor in the same department. His research interests include Computer Vision, Pattern Recognition, Image and Video Analysis, and Biometrics. Prof. Zhang has published more than 200 papers in those areas. As of 2017, his publications have been cited more than 28,000 times in the literature. Prof. Zhang is an Associate Editor of IEEE Trans. on Image Processing, SIAM Journal on Imaging Sciences, and Image and Vision Computing. He has been a "Clarivate Analytics Highly Cited Researcher" from 2015 to 2017.

Luc Van Gool

Luc Van Gool received a degree in electro-mechanical engineering at the Katholieke Universiteit Leuven in 1981. Currently, he is a full professor for Computer Vision at ETH Zurich and the Katholieke Universiteit Leuven in Belgium. He leads research and teaches at both places. He has authored over 300 papers. Luc Van Gool has been a program committee member of several major computer vision conferences (e.g. Program Chair of ICCV'05, Beijing; General Chair of ICCV'11, Barcelona, and of ECCV'14, Zurich). His main interests include 3D reconstruction and modeling, object recognition, tracking, and gesture analysis. He received several best paper awards (e.g. David Marr Prize '98, Best Paper CVPR'07, Tsuji Outstanding Paper Award ACCV'09, Best Vision Paper ICRA'09). In 2015 he received the five-yearly Excellence Award in Applied Sciences from the Flemish Fund for Scientific Research, in 2016 a Koenderink Prize, and in 2017 a PAMI Distinguished Researcher award. He is a co-founder of more than 10 spin-off companies and was the holder of an ERC Advanced Grant (VarCity). Currently, he leads computer vision research for autonomous driving in the context of the Toyota TRACE labs in Leuven and at ETH.

Cosmin Ancuti

Cosmin Ancuti received the PhD degree at Hasselt University, Belgium (2009). He was a post-doctoral fellow at iMinds and the Intel Exascience Lab (IMEC), Leuven, Belgium (2010-2012) and a research fellow at Université catholique de Louvain, Belgium (2015-2017). Currently, he is a senior researcher/lecturer at University Politehnica Timisoara. He is the author of more than 50 papers published in international conference proceedings and journals. His areas of interest include image and video enhancement techniques, computational photography, and low-level computer vision.

Codruta O. Ancuti

Codruta O. Ancuti is a senior researcher/lecturer at University Politehnica Timisoara, Faculty of Electrical and Telecommunication Engineering. She obtained the PhD degree at Hasselt University, Belgium (2011), and between 2015 and 2017 she was a research fellow at the University of Girona, Spain (ViCOROB group). Her work received the best paper award at NTIRE 2017 (CVPR workshop). Her main research interests include image understanding and visual perception. She was the first to introduce several single-image enhancing techniques built on multi-scale fusion (e.g. color-to-grayscale conversion, image dehazing, underwater image and video restoration).

Boaz Arad

Boaz Arad is a Ph.D. student in the Interdisciplinary Computational Vision Laboratory at Ben-Gurion University of the Negev, Israel. Alongside Prof. Ben-Shahar, Boaz collected and curates the largest natural hyperspectral image database published to date. For his work on hyperspectral data recovery he was awarded the EMVA “Young Professional Award 2017” as well as the Zlotowski Center for Neuroscience “Best Research Project of 2016” award. Technologies developed during his Ph.D. studies are currently being commercialized by the BGU-based startup HC Vision.

Ohad Ben-Shahar

Prof. Ohad Ben-Shahar serves as head of the Computer Science Department at Ben-Gurion University of the Negev. Since founding the BGU Interdisciplinary Computational Vision Lab in 2006, Ohad has led a research group focused on advancing the state of knowledge about biological vision while at the same time contributing to various aspects of machine vision, both theoretical and applied.

Program committee - to be updated

Invited Talks


TBA

Schedule


TBA


NTIRE 2018 Awards


TBA
Best Paper Awards
Challenge Winners