June 18, Salt Lake City, Utah

NTIRE 2018

New Trends in Image Restoration and Enhancement workshop

and challenges on super-resolution, dehazing, and spectral reconstruction

in conjunction with CVPR 2018

Sponsors - to be updated







Call for papers

Image restoration and image enhancement are key computer vision tasks, aiming at the restoration of degraded image content or the filling in of missing information. Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but also substantial progress has been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, with image restoration or enhancement serving as an important frontend. Not surprisingly then, there is an ever growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, or medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

Papers addressing topics related to image restoration and enhancement are invited. The topics include, but are not limited to:

  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video dehazing
  • Demosaicing
  • Image/video compression
  • Artifact removal
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Hyperspectral imaging
  • Underwater imaging
  • Aerial and satellite imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Studies and applications of the above.

NTIRE 2018 has the following associated challenges:

  • the example-based single image super-resolution challenge
  • the image dehazing challenge
  • the spectral reconstruction from RGB images challenge

The authors of the top methods in each category will be invited to submit papers to the NTIRE 2018 workshop.

The authors of the top methods will co-author the challenge reports.

The accepted NTIRE workshop papers will be published under the book title "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops" by the Computer Vision Foundation Open Access and the IEEE Xplore Digital Library.

Contact:

Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory

ETH Zurich, Switzerland

NTIRE 2018 challenge on image super-resolution (ongoing!)

In order to gauge the current state of the art in (example-based) single-image super-resolution under realistic conditions, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference.

The challenge has 4 tracks as follows:

  1. Track 1: classic bicubic uses bicubic downscaling (Matlab imresize), the most common setting in the recent single-image super-resolution literature.
  2. Track 2: realistic mild adverse conditions assumes that the degradation operators (emulating the image acquisition process of a digital camera) can be estimated through training pairs of low- and high-resolution images. The degradation operators are the same within an image space and for all the images.
  3. Track 3: realistic difficult adverse conditions follows the same setup as Track 2, but with more severe degradations.
  4. Track 4: realistic wild conditions assumes that the degradation operators (emulating the image acquisition process of a digital camera) can be estimated through training pairs of low- and high-resolution images. The degradation operators are the same within an image space but DIFFERENT from one image to another. This setting is the closest to real "wild" conditions.

To learn more about the challenge, to participate in the challenge, and to access the validation and test images everybody is invited to register at the above links!

The (train) data is made available to registered participants on the CodaLab platform.

The training data consists of 800 HR images and corresponding LR images generated to match the conditions of each track.
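The Track 1 degradation can be reproduced outside Matlab. The sketch below is a simplified NumPy implementation of bicubic downscaling using the Keys kernel (a = -0.5, the kernel imresize's 'bicubic' mode uses) with kernel widening for antialiasing; it approximates, but does not bit-match, Matlab's imresize output (boundary handling in particular differs).

```python
import numpy as np

def cubic(x, a=-0.5):
    # Keys cubic convolution kernel; a = -0.5 matches Matlab imresize's 'bicubic'.
    x = np.abs(x)
    return np.where(x <= 1, (a + 2) * x**3 - (a + 3) * x**2 + 1,
                    np.where(x < 2, a * (x**3 - 5 * x**2 + 8 * x - 4), 0.0))

def downscale_1d(img, scale):
    # Resample axis 0 by an integer factor; the kernel is widened by `scale`
    # for antialiasing, as imresize does when downscaling.
    n_in = img.shape[0]
    n_out = n_in // scale
    support = 2 * scale                      # widened kernel radius, in input pixels
    out = np.zeros((n_out,) + img.shape[1:])
    for i in range(n_out):
        center = (i + 0.5) * scale - 0.5     # output pixel center, input coordinates
        pos = np.arange(int(np.floor(center)) - support + 1,
                        int(np.floor(center)) + support + 1)
        weights = cubic((pos - center) / scale)
        weights /= weights.sum()
        idx = np.clip(pos, 0, n_in - 1)      # replicate-pad at the borders
        out[i] = np.tensordot(weights, img[idx], axes=1)
    return out

def bicubic_downscale(img, scale):
    # Separable filtering: rows first, then columns.
    return downscale_1d(downscale_1d(img, scale).swapaxes(0, 1), scale).swapaxes(0, 1)
```

For example, `bicubic_downscale(hr, 4)` turns an (H, W) or (H, W, 3) array into its x4 LR counterpart, mirroring how the Track 1 LR images are derived from the HR images.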

The top ranked participants will be invited to co-author the challenge paper report.



NTIRE 2018 challenge on image dehazing (ongoing!)

In order to gauge the current state of the art in image dehazing, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference. A novel dataset of real hazy images, captured in indoor and outdoor environments and paired with ground truth, is introduced with the challenge. It is the first online image dehazing challenge.

The challenge has 2 tracks:

  1. Track 1: Indoor - the goal is to restore the visibility in images with haze generated in a controlled indoor environment.
  2. Track 2: Outdoor - the goal is to restore the visibility in outdoor images with haze generated using a professional haze/fog generator.
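For context, most methods in the dehazing literature build on the standard haze formation model I(x) = J(x) t(x) + A (1 - t(x)), where J is the haze-free scene, t the transmission map, and A the global airlight. The challenge does not prescribe any particular model; the sketch below only illustrates how a method that has already estimated t and A would invert it.

```python
import numpy as np

def dehaze(I, t, A, t_min=0.1):
    """Invert I = J*t + A*(1-t) given estimates of transmission t and airlight A.

    I: hazy image (H, W, 3); t: transmission map (H, W); A: airlight (3,).
    Transmission is clamped to t_min to avoid amplifying noise in dense haze.
    """
    t = np.maximum(t, t_min)[..., None]
    return (I - A) / t + A

# Toy round trip: synthesize haze from a known scene, then invert it.
J = np.array([[[0.2, 0.4, 0.6]]])         # 1x1 haze-free "scene"
A = np.array([0.9, 0.9, 0.9])             # airlight
t = np.array([[0.5]])                     # transmission
I = J * t[..., None] + A * (1.0 - t[..., None])
restored = dehaze(I, t, A)                # recovers J (t > t_min, so no clamping)
```

In practice the hard part, and the subject of the challenge, is estimating t and A (or bypassing the model entirely with a learned mapping) from the hazy image alone.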

To learn more about the challenge, to participate in the challenge, and to access the train, validation, and test images, everybody is invited to register at the above links!

The (training) data is already made available for the registered participants.

The indoor training data consists of 25 hazy images (with haze generated in a controlled indoor environment) and their corresponding ground-truth (haze-free) images of the same scene.

The outdoor training data consists of 35 hazy images and corresponding ground-truth (haze-free) images. The haze has been produced using a professional haze/fog generator that imitates real hazy scene conditions.

The top ranked participants will be invited to co-author the challenge paper report.



NTIRE 2018 challenge on spectral reconstruction from RGB images (ongoing!)

In order to gauge the current state of the art in spectral reconstruction from RGB images, and to compare and promote different solutions, we are organizing an NTIRE challenge in conjunction with the CVPR 2018 conference. The largest dataset to date will be introduced with the challenge. It is the first online challenge on spectral reconstruction from RGB images.

The challenge has 2 tracks:

  1. Track 1 (“Clean”): recovering hyperspectral data from uncompressed 8-bit RGB images created by applying a known response function to ground-truth hyperspectral information.
  2. Track 2 (“Real World”): recovering hyperspectral data from JPEG-compressed 8-bit RGB images created by applying an unknown response function to ground-truth hyperspectral information.

To learn more about the challenge, to participate in the challenge, and to access the train, validation and test images everybody is invited to register at the above links!

The training data consists of 254 spectral images (Train1 with 201 images from the ICVL dataset and Train2 with 53 newly collected images) and corresponding RGB images generated to match the conditions of each track.
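The “Clean” track's forward process is a linear projection per pixel. The sketch below uses hypothetical shapes and a random response matrix purely for illustration; the challenge's actual response function and band count may differ (31 bands is a common choice for 400-700 nm data).

```python
import numpy as np

# Hypothetical setup: a hyperspectral cube of shape (H, W, B) and a known
# camera response matrix of shape (B, 3) mapping spectral bands to R, G, B.
rng = np.random.default_rng(0)
H, W, B = 4, 4, 31
cube = rng.random((H, W, B))                        # ground-truth spectra in [0, 1]
response = rng.random((B, 3))
response /= response.sum(axis=0, keepdims=True)     # each channel integrates to 1

# Project every pixel's spectrum through the response function, then
# quantize to 8 bits as in the "Clean" track (no JPEG compression).
rgb = cube @ response                               # (H, W, 3), values in [0, 1]
rgb8 = np.clip(np.round(rgb * 255.0), 0, 255).astype(np.uint8)
```

Spectral reconstruction is the inverse problem: recovering `cube` from `rgb8`, which is ill-posed because the projection collapses B bands into 3 channels.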

The top ranked participants will be invited to co-author the challenge paper report.

Important dates



Challenges (all deadlines at 5PM Pacific Time)

  • Site online: December 31, 2017
  • Release of train data and validation data (only low-res/RGB/hazy images): January 10, 2018
  • Validation server online: January 15, 2018
  • Final test data release (only low-res/RGB/hazy images), validation data (high-res/spectral/clean images) released, validation server closed: March 15, 2018
  • Test restoration results submission deadline: March 22, 2018
  • Fact sheets submission deadline: March 24, 2018
  • Code/executable submission deadline: March 24, 2018
  • Final test results release to the participants: March 28, 2018
  • Paper submission deadline for entries from the challenges: April 12, 2018 (extended!)

Workshop (all deadlines at 5PM Pacific Time)

  • Paper submission server online: February 1, 2018
  • Paper submission deadline: March 22, 2018 (extended!)
  • Paper submission deadline (only for methods from challenges!): April 12, 2018 (extended!)
  • Decision notification: April 14, 2018
  • Camera ready deadline: April 19, 2018
  • Workshop day: June 18, 2018

Submit



Instructions and Policies
Format and paper length

A paper submission has to be in English, in PDF format, and at most 8 pages (excluding references) in double-column format. The paper must follow the same guidelines as all CVPR 2018 submissions.
http://cvpr2018.thecvf.com/submission/main_conference/author_guidelines

Double-blind review policy

The review process is double blind. Authors do not know the names of the chair/reviewers of their papers. Reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the CVPR 2018 main conference only. If a paper is also submitted to CVPR and accepted, it cannot be published at both CVPR and the workshop.

Submission site

https://cmt3.research.microsoft.com/NTIRE2018

Proceedings

Accepted and presented papers will be published after the conference in the CVPR Workshops proceedings, together with the CVPR 2018 main conference papers.

Author Kit

http://cvpr2018.thecvf.com/files/cvpr2018AuthorKit.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the example egpaper_for_review.pdf for detailed formatting instructions.

People



Organizers

Radu Timofte

Radu Timofte is research group leader (lecturer) in the Computer Vision Laboratory, at ETH Zurich, Switzerland. He obtained a PhD degree in Electrical Engineering at the KU Leuven, Belgium in 2013, the MSc at the Univ. of Eastern Finland in 2007, and the Dipl. Eng. at the Technical Univ. of Iasi, Romania in 2006. He serves as a reviewer for top journals (such as TPAMI, TIP, IJCV, TNNLS, TCSVT, CVIU, PR) and conferences (ICCV, CVPR, ECCV, NIPS, ICML) and is area editor for Elsevier’s CVIU journal. He serves as area chair for ACCV 2018. He received a NIPS 2017 best reviewer award. His work received a best scientific paper award at ICPR 2012, the best paper award at CVVT workshop (ECCV 2012), the best paper award at ChaLearn LAP workshop (ICCV 2015), the best scientific poster award at EOS 2017, the honorable mention award at FG 2017, and his team won a number of challenges including traffic sign detection (IJCNN 2013) and apparent age estimation (ICCV 2015). He is co-founder of Merantix and organizer of NTIRE events. His current research interests include sparse and collaborative representations, deep learning, implicit models, optical flow, compression, image restoration and enhancement.

Shuhang Gu

Shuhang Gu received the B.E. degree from the School of Astronautics, Beijing University of Aeronautics and Astronautics, China, in 2010, the M.E. degree from the Institute of Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, China, in 2013, and Ph.D. degree from the Department of Computing, The Hong Kong Polytechnic University, in 2017. He currently holds a post-doctoral position at ETH Zurich, Switzerland. His research interests include image restoration, enhancement and compression.

Jiqing Wu

Jiqing Wu received the B.Sc. degree in mechanical engineering from Shanghai Maritime University, China, in 2006, and the B.Sc. and M.Sc. degrees in mathematics from TU Darmstadt, Germany, in 2012, and ETH Zurich, Switzerland, in 2015, respectively. He is currently pursuing a PhD degree under the supervision of Prof. Luc Van Gool, in his lab at ETH Zurich. His research interests mainly concern image demosaicing, image restoration, and image generation.

Ming-Hsuan Yang

Ming-Hsuan Yang received the PhD degree in Computer Science from University of Illinois at Urbana-Champaign. He is a full professor in Electrical Engineering and Computer Science at University of California at Merced. He has published more than 120 papers in the field of computer vision. Yang serves as a program co-chair of ACCV 2014, general co-chair of ACCV 2016, and program co-chair of ICCV 2019. He serves as an editor for PAMI, IJCV, CVIU, IVC and JAIR. His research interests include object detection, tracking, recognition, image deblurring, super resolution, saliency detection, and image/video segmentation.

Lei Zhang

Lei Zhang (M’04, SM’14) received his B.Sc. degree in 1995 from Shenyang Institute of Aeronautical Engineering, Shenyang, P.R. China, and his M.Sc. and Ph.D. degrees in Control Theory and Engineering from Northwestern Polytechnical University, Xi’an, P.R. China, in 1998 and 2001, respectively. From 2001 to 2002, he was a research associate in the Department of Computing, The Hong Kong Polytechnic University. From January 2003 to January 2006 he worked as a Postdoctoral Fellow in the Department of Electrical and Computer Engineering, McMaster University, Canada. In 2006, he joined the Department of Computing, The Hong Kong Polytechnic University, as an Assistant Professor. Since July 2017, he has been a Chair Professor in the same department. His research interests include computer vision, pattern recognition, image and video analysis, and biometrics. Prof. Zhang has published more than 200 papers in those areas. As of 2017, his publications have been cited more than 28,000 times in the literature. Prof. Zhang is an Associate Editor of IEEE Trans. on Image Processing, SIAM Journal on Imaging Sciences, and Image and Vision Computing. He has been a "Clarivate Analytics Highly Cited Researcher" from 2015 to 2017.

Luc Van Gool

Luc Van Gool received a degree in electro-mechanical engineering at the Katholieke Universiteit Leuven in 1981. Currently, he is a full professor for Computer Vision at ETH Zurich and the Katholieke Universiteit Leuven in Belgium. He leads research and teaches at both places. He has authored over 300 papers. Luc Van Gool has been a program committee member of several major computer vision conferences (e.g. Program Chair of ICCV'05, Beijing; General Chair of ICCV'11, Barcelona, and of ECCV'14, Zurich). His main interests include 3D reconstruction and modeling, object recognition, tracking, and gesture analysis. He has received several best paper awards (e.g. David Marr Prize '98, Best Paper CVPR'07, Tsuji Outstanding Paper Award ACCV'09, Best Vision Paper ICRA'09). In 2015 he received the five-yearly Excellence Award in Applied Sciences from the Flemish Fund for Scientific Research, in 2016 a Koenderink Prize, and in 2017 a PAMI Distinguished Researcher award. He is a co-founder of more than 10 spin-off companies and was the holder of an ERC Advanced Grant (VarCity). Currently, he leads computer vision research for autonomous driving in the context of the Toyota TRACE labs in Leuven and at ETH.

Cosmin Ancuti

Cosmin Ancuti received the PhD degree at Hasselt University, Belgium (2009). He was a post-doctoral fellow at IMINDS and the Intel Exascience Lab (IMEC), Leuven, Belgium (2010-2012) and a research fellow at Université catholique de Louvain, Belgium (2015-2017). Currently, he is a senior researcher/lecturer at University Politehnica Timisoara. He is the author of more than 50 papers published in international conference proceedings and journals. His areas of interest include image and video enhancement techniques, computational photography, and low-level computer vision.

Codruta O. Ancuti

Codruta O. Ancuti is a senior researcher/lecturer at University Politehnica Timisoara, Faculty of Electrical and Telecommunication Engineering. She obtained the PhD degree at Hasselt University, Belgium (2011), and between 2015 and 2017 she was a research fellow at the University of Girona, Spain (ViCOROB group). Her work received the best paper award at NTIRE 2017 (CVPR workshop). Her main research interests include image understanding and visual perception. She was the first to introduce several single-image enhancement techniques built on multi-scale fusion (e.g., color-to-grayscale conversion, image dehazing, and underwater image and video restoration).

Boaz Arad

Boaz Arad is a Ph.D. student in the Interdisciplinary Computational Vision Laboratory, at Ben-Gurion University of the Negev, Israel. Alongside Prof. Ben-Shahar, Boaz collected and curates the largest natural hyperspectral image database published to date. For his work on hyperspectral data recovery he was awarded the EMVA “Young Professional Award 2017” as well as the Zlotowski Center for Neuroscience “Best Research Project of 2016” award. Technologies developed during his Ph.D. studies are currently being commercialized by the BGU based startup HC Vision.

Ohad Ben-Shahar

Prof. Ohad Ben-Shahar serves as head of the Computer Science Department at Ben-Gurion University of the Negev. Since founding the BGU Interdisciplinary Computational Vision Lab in 2006, Ohad has led a research group focused on advancing the state of knowledge about biological vision while at the same time contributing to various aspects of machine vision, both theoretical and applied.

Program committee - to be updated

Invited Talks (to be updated)



William T. Freeman

Title: TBA

Abstract: TBA

Bio: William T. Freeman is the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) there. He was the Associate Department Head from 2011 to 2014. His current research interests include machine learning applied to computer vision, Bayesian models of visual perception, and computational photography. He received outstanding paper awards at computer vision or machine learning conferences in 1997, 2006, 2009, and 2012, and test-of-time retrospective awards for papers from 1990, 1995, and 2005. Previous research topics include steerable filters and pyramids, orientation histograms, the generic viewpoint assumption, color constancy, computer vision for computer games, and belief propagation in networks with loops. He is active in the program and organizing committees of computer vision, graphics, and machine learning conferences. He was the program co-chair for ICCV 2005 and for CVPR 2013.

Ming-Yu Liu

Title: TBA

Abstract: TBA

Bio: Ming-Yu Liu is a senior research scientist at NVIDIA Research. Before joining NVIDIA Research, he was a principal research scientist at Mitsubishi Electric Research Labs (MERL) from 2012 to 2016. He received his Ph.D. from the Department of Electrical and Computer Engineering at the University of Maryland, College Park, in 2012. The object pose estimation algorithm he developed was a major component of a commercial vision-based robotic bin-picking system, which was named one of the 100 most innovative technology products of the year by R&D Magazine in 2014. His street scene understanding paper was a best paper finalist at the Robotics: Science and Systems (RSS) conference in 2015. Recently, his research focus has shifted to generative models for image understanding and generation. His goal is to enable machines to have superhuman-like imagination capabilities.

Graham Finlayson

Title: TBA

Abstract: TBA

Bio: Graham Finlayson is a Professor of Computer Science at the University of East Anglia. He joined UEA in 1999 when he was awarded a full professorship at the age of 30. He was and remains the youngest ever professorial appointment at that institution. Graham trained in computer science first at the University of Strathclyde and then for his masters and doctoral degrees at Simon Fraser University, where he was awarded a ‘Dean’s medal’ for the best PhD dissertation in his faculty. Prior to joining UEA, Graham was a lecturer at the University of York and then a founder and Reader at the Colour and Imaging Institute at the University of Derby. Professor Finlayson is interested in ‘computing how we see’ and his research spans computer science (algorithms), engineering (embedded systems) and psychophysics (visual perception). He has published over 50 journal papers, over 200 refereed conference papers, and 25+ patents. He has won best paper prizes at several conferences, including “The 5th IS&T conference on Colour in Graphics, Imaging and Vision” (2010) and “the IEE conference on Visual Information Engineering” (1995). Many of Graham’s patents are implemented and used in commercial products, including photo processing software, dedicated image processing hardware (ASICs), and embedded camera software. Graham’s research is funded from a number of sources including government, industry, and investment in spin-out companies. Industrial partners include Apple, Hewlett Packard, Sony, Xerox, Unilever and Buhler-Sortex. Significantly, Graham was the first academic at UEA (in its 50-year history) either to raise venture capital investment for a spin-out company - Imsense Ltd, which developed technology to make pictures look better - or to make money for the university when this company was subsequently sold to a blue chip industry major in 2010. In 2002, Graham was awarded the Philip Leverhulme prize for science and in 2008 a Royal Society-Wolfson Merit award.
In 2009 the Royal Photographic Society presented Graham with the Davies medal in recognition of his contributions to the photographic industry. The RPS made Graham a fellow in 2012. In recognition of distinguished service to the Society for Imaging Science and Technology, Graham was elected a fellow of that society in 2010. In January 2013 Graham was also elected a fellow of the Institution of Engineering and Technology.

Liang Lin

Title: TBA

Abstract: TBA

Bio: Liang Lin is the Executive R&D Director of SenseTime Group Limited and a full Professor at Sun Yat-sen University. He is an Excellent Young Scientist of the National Natural Science Foundation of China. He received his B.S. and Ph.D. degrees from the Beijing Institute of Technology (BIT), Beijing, China, in 2003 and 2008, respectively, and he was a joint Ph.D. student with the Department of Statistics, University of California, Los Angeles (UCLA). From 2008 to 2010, he was a Post-Doctoral Fellow at UCLA. From 2014 to 2015, he was a senior visiting scholar at The Hong Kong Polytechnic University and The Chinese University of Hong Kong. He currently leads the SenseTime R&D teams in developing cutting-edge and deliverable solutions for computer vision, data analysis and mining, and intelligent robotic systems. He has authored and co-authored more than 100 papers in top-tier academic journals and conferences (e.g., 10 papers in TPAMI/IJCV and 40+ papers in CVPR/ICCV/NIPS/IJCAI). He has been serving as an associate editor of IEEE Trans. on Human-Machine Systems, The Visual Computer, and Neurocomputing. He has served as Area/Session Chair for numerous conferences such as ICME, ACCV, and ICMR. He was the recipient of the Best Paper Runner-Up Award at ACM NPAR 2010, a Google Faculty Award in 2012, the Best Student Paper Award at IEEE ICME 2014, and the Hong Kong Scholars Award in 2014.

Xian-Sheng Hua

Title: Computer Vision Technologies in City Brain

Abstract: Massive data is accumulating in a city every day, and the problem is how to explore the potential value of the data. City Brain is a city-level system including but not limited to data perception, policy & optimization, search & mining, and prediction & intervention. Computer vision technologies are widely applied in City Brain, such as image classification, object detection, multi-object tracking, semantic segmentation, and image retrieval, as well as low-level image processing such as super-resolution and image denoising. In this talk, we will show how these technologies are applied to City Brain.

Bio: Xian-Sheng Hua received a BS in Information Science from Peking University in 1996 and a PhD in Applied Mathematics from Peking University in 2001. He is Distinguished Engineer / Deputy Managing Director of the Machine Intelligence Technology Lab, DAMO Academy, Alibaba Group. Before that, he was a Senior Researcher at Microsoft Research. He is an IEEE Fellow and a recipient of the MIT TR35 Young Innovator Award. His current research focuses on large-scale video/image content analysis, understanding, retrieval, and search, image search engines, video/image advertising, machine learning, and pattern recognition.

Schedule


TBA


NTIRE 2018 Awards


TBA

  • Best Paper Awards
  • Challenge Winners
  • Challenge Awards