November 20, 2016, Taipei, Taiwan

NTIRE 2016

New Trends in Image Restoration and Enhancement

in conjunction with ACCV 2016

Call for papers

Image restoration and image enhancement are key computer vision tasks that aim to recover degraded image content or to fill in missing content from corrupted or incomplete images. Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has the flow of related papers grown steadily, but substantial progress has also been achieved.

Each step forward makes images easier for people or computers to use in further tasks, with image restoration or enhancement serving as an important front end. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous adoption of mobile and wearable devices offer further fertile ground for new applications and create demand for efficient methods.

This workshop aims to provide an overview of the new trends and advances in the intriguing areas of image restoration and enhancement research. Moreover, it will be a welcome meeting place and an opportunity for academic and industrial attendees to interact and explore further collaborations.

Papers addressing image restoration and enhancement and related topics are invited. The topics include, but are not limited to:

  • Image inpainting
  • Image deblurring
  • Image denoising
  • Image upsampling and super-resolution
  • Image filtering
  • Image dehazing
  • Demosaicing
  • Image enhancement: brightening, color adjustment, sharpening, etc.
  • Image-quality assessment
  • Video processing
  • Hyperspectral imaging
  • Studies and applications of the above.

The NTIRE organizers are also preparing a Special Issue on "Vision and Computational Photography and Graphics" for Elsevier's CVIU journal (submission deadline: February 14, 2017).


Contact:

Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory

ETH Zurich, Switzerland

Important dates



Paper submission deadline: August 25, 2016, 5:00 PM Pacific Time
Decision notification: September 17, 2016
Camera-ready deadline: September 26, 2016, 5:00 PM Pacific Time
Workshop day: November 20, 2016
CVIU Special Issue deadline: February 14, 2017

Submit



Instructions and Policies
Format and paper length

A paper submission must be in English, in PDF format, and at most 14 pages (excluding references) in LNCS style. The paper format must follow the same guidelines as for all ACCV submissions.

Double-blind review policy

The review process is double blind. Authors do not know the names of the chair/reviewers of their papers. Reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the ACCV main conference only. If a paper is also submitted to ACCV and accepted there, it cannot be published at both ACCV and the workshop.

Submission site

For paper submission, please go to the online submission site.

Proceedings

Accepted and presented papers will be published after the conference in Springer's Lecture Notes in Computer Science.

Author Kit

The author kit provides a LaTeX2e template for paper submissions. Please refer to the included example for detailed formatting instructions. If you use a document processing system other than LaTeX, see the LNCS author instructions page.

People



Organizers

Radu Timofte

Radu Timofte obtained his PhD degree in Electrical Engineering from KU Leuven, Belgium in 2013, his MSc from the University of Eastern Finland in 2007, and his Dipl. Eng. from the Technical University of Iasi, Romania in 2006. Currently, he is a postdoctoral researcher in the Computer Vision Lab at ETH Zurich, Switzerland. He serves as a reviewer for top journals (such as TPAMI, TIP, IJCV, TNNLS, TCSVT, CVIU, PRL) and conferences (ICCV, CVPR, ECCV). His work received a best scientific paper award at ICPR 2012, the best paper award at the CVVT workshop (ECCV 2012), and the best paper award at the ChaLearn LAP workshop (ICCV 2015), and his team won a number of challenges, including traffic sign detection (IJCNN 2013) and apparent age estimation (ICCV 2015). He is a co-founder of Merantix. His current research interests include sparse and collaborative representations, classification, deep learning, optical flow, and image restoration and enhancement.

Luc Van Gool

Luc Van Gool received a degree in electro-mechanical engineering from the Katholieke Universiteit Leuven in 1981. Currently, he is a full professor of Computer Vision at ETH Zurich and at the Katholieke Universiteit Leuven in Belgium, leading research and teaching at both places. He has authored over 200 papers in his field. Luc Van Gool has been a program committee member of several major computer vision conferences (e.g., Program Chair of ICCV'05 in Beijing, and General Chair of ICCV'11 in Barcelona and of ECCV'14 in Zurich). His main interests include 3D reconstruction and modeling, object recognition, tracking, and gesture analysis. He has received several best paper awards (e.g., the David Marr Prize '98, Best Paper at CVPR'07, the Tsuji Outstanding Paper Award at ACCV'09, and Best Vision Paper at ICRA'09). He is a co-founder of 10 spin-off companies. In 2015 he received the five-yearly Excellence Award in Applied Sciences from the Flemish Fund for Scientific Research. He is the holder of an ERC Advanced Grant (VarCity).

Ming-Hsuan Yang

Ming-Hsuan Yang received his PhD degree in Computer Science from the University of Illinois at Urbana-Champaign. He is an associate professor of Electrical Engineering and Computer Science at the University of California, Merced. He has published more than 120 papers in the field of computer vision. Yang served as a program co-chair of ACCV 2014, serves as a general co-chair of ACCV 2016, and will serve as a program co-chair of ICCV 2019. He serves as an editor for PAMI, IJCV, CVIU, IVC, and JAIR. His research interests include object detection, tracking, recognition, image deblurring, super-resolution, saliency detection, and image/video segmentation.

Program committee

Invited Talks



Michael S. Brown

York University

Title: Low-level Vision and Curse of the In-Camera Image Processing Pipeline

Abstract: This talk discusses the in-camera processing pipeline that converts the RAW sensor response to the final sRGB output. Specifically, the talk will overview the steps of this camera processing pipeline and describe how these can often be detrimental to a number of low-level vision algorithms. Afterwards, a number of ways to overcome the issues with the in-camera image processing pipeline will be discussed.
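
To make the "curse" concrete, below is a minimal, self-contained sketch of three typical in-camera steps (white balance, a color correction matrix, and sRGB gamma encoding) applied to an already demosaiced RAW image; the gains and matrix are illustrative placeholders, not values from any real camera or from the talk.

```python
import numpy as np

def raw_to_srgb(raw_rgb, wb_gains=(2.0, 1.0, 1.5)):
    """Toy in-camera pipeline: demosaiced, linear RAW in [0, 1] -> sRGB.

    The white-balance gains and the color correction matrix below are
    illustrative placeholders, not calibrated values for a real camera.
    """
    # 1. White balance: per-channel gains.
    img = raw_rgb * np.asarray(wb_gains, dtype=np.float64)

    # 2. Color correction matrix: map camera RGB to a standard RGB space.
    ccm = np.array([[ 1.6, -0.4, -0.2],
                    [-0.3,  1.5, -0.2],
                    [-0.1, -0.5,  1.6]])
    img = img @ ccm.T

    # 3. sRGB gamma encoding (the standard piecewise curve).
    img = np.clip(img, 0.0, 1.0)
    return np.where(img <= 0.0031308,
                    12.92 * img,
                    1.055 * np.power(img, 1.0 / 2.4) - 0.055)
```

After step 3 the pixel values are no longer linear in scene radiance, which is one reason such a pipeline can be detrimental to low-level algorithms that assume a linear image formation model.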

Bio: Michael S. Brown obtained his BS and PhD in Computer Science from the University of Kentucky in 1995 and 2001, respectively. He is currently a professor at York University in Canada. Dr. Brown has served as an area chair multiple times for CVPR, ICCV, ECCV, and ACCV and is currently an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) and the International Journal of Computer Vision (IJCV). His research interests include computer vision, image processing, and computer graphics.

Kyoung Mu Lee

Dept. of ECE, Seoul National University

Title: Single-Image Super-Resolution using Very Deep Convolutional Networks

Abstract: In this talk, highly accurate single-image super-resolution methods based on very deep convolutional networks will be presented. Increasing the network depth enlarges the receptive field, resulting in a significant improvement in accuracy. By cascading small filters many times in deep network structures, our CNN models exploit contextual information over large image regions in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure: we learn residuals only and use extremely high learning rates, enabled by adjustable gradient clipping. We show that our proposed methods outperform existing deconvolution methods quantitatively as well as qualitatively.
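
As a deliberately tiny sketch of the two training ideas mentioned above (learning only the residual between the upscaled input and the target, and clipping gradients to [-θ/γ, θ/γ] for learning rate γ so that a very high rate can be used safely), here a single scalar weight stands in for the deep network; this illustrates the idea only and is not the speaker's model or code.

```python
import numpy as np

def adjustable_clip(grad, theta, lr):
    # Adjustable gradient clipping: clip to [-theta/lr, theta/lr] so that the
    # actual update lr * grad stays within [-theta, theta] even when a very
    # high learning rate is used.
    bound = theta / lr
    return np.clip(grad, -bound, bound)

# Toy residual learning: a single scalar "filter" w acting on a bicubic-
# upscaled input; a real model would be a deep CNN.
rng = np.random.default_rng(0)
upscaled = rng.random((8, 8))                 # stand-in for an upscaled LR patch
hr = upscaled + 0.05 * rng.random((8, 8))     # stand-in for the HR ground truth
target_residual = hr - upscaled               # the network learns only this part

w, lr, theta = 0.0, 10.0, 0.01                # deliberately high learning rate
for _ in range(100):
    pred_residual = w * upscaled
    grad = np.mean((pred_residual - target_residual) * upscaled)  # dLoss/dw
    w -= lr * adjustable_clip(grad, theta, lr)

reconstruction = upscaled + w * upscaled      # HR estimate = input + residual
```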

Bio: Kyoung Mu Lee received the B.S. and M.S. degrees from Seoul National University, Seoul, Korea, and the Ph.D. degree in Electrical Engineering from the University of Southern California in 1993. Currently he is a full professor in the Dept. of ECE at Seoul National University. His primary research interests include scene understanding, object recognition, low-level vision, visual tracking, and visual navigation. He is currently serving as an Associate Editor in Chief (AEIC) of the IEEE TPAMI and an Area Editor of Computer Vision and Image Understanding (CVIU), and has served as an Associate Editor of the IEEE TPAMI, the Machine Vision and Applications (MVA) journal, the IPSJ Transactions on Computer Vision and Applications (CVA), and the IEEE Signal Processing Letters. He has also served as an Area Chair of CVPR, ICCV, ECCV, and ACCV many times, and will serve as a general co-chair of ACM MM 2018 and ICCV 2019. He was a Distinguished Lecturer of the Asia-Pacific Signal and Information Processing Association (APSIPA) for 2012-2013. More information can be found on his homepage.

Yasuyuki Matsushita

Graduate School of Information Science and Technology, Osaka University

Title: Efficient robust estimation for 3D shape recovery

Abstract: Robust estimation has always been of interest in image restoration and enhancement tasks as a way of effectively disregarding outliers. A similar problem arises in 3D shape recovery from images. This talk illustrates how robust estimation methods can be used for the task of photometric stereo. Specifically, it presents photometric stereo methods based on L1 regression and robust principal component analysis. Further, it introduces an efficient R-PCA computation technique based on randomization.
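
As a hedged illustration of how robust estimation enters photometric stereo, the sketch below fits a per-pixel Lambertian model with an L1-style cost via iteratively reweighted least squares, which downweights outlier observations such as shadows or specular highlights; the function and variable names are invented for this example, and the talk's actual methods may differ.

```python
import numpy as np

def photometric_stereo_l1(L, I, iters=30, eps=1e-6):
    """Per-pixel Lambertian photometric stereo with a robust L1-style fit.

    L : (m, 3) light directions, I : (m,) intensities observed at one pixel.
    Solves I ~= L @ b (b = albedo * normal) by iteratively reweighted least
    squares, approximating L1 regression and thus downweighting outliers.
    """
    w = np.ones(len(I))
    b = np.zeros(3)
    for _ in range(iters):
        W = np.diag(w)
        b, *_ = np.linalg.lstsq(W @ L, W @ I, rcond=None)  # weighted LS step
        r = I - L @ b
        w = 1.0 / np.sqrt(np.abs(r) + eps)                 # IRLS weights for L1
    albedo = np.linalg.norm(b)
    return b / (albedo + eps), albedo

# Example: 10 lights, one grossly corrupted observation (e.g. a highlight).
rng = np.random.default_rng(1)
L = rng.normal(size=(10, 3)); L /= np.linalg.norm(L, axis=1, keepdims=True)
I = L @ np.array([0.2, 0.3, 0.9])
I[3] += 1.0                                   # outlier
normal, rho = photometric_stereo_l1(L, I)
```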

Bio: Yasuyuki Matsushita received his Ph.D. degree in EECS from the University of Tokyo in 2003. From April 2003 to March 2015, he was with the Visual Computing group at Microsoft Research Asia. In April 2015, he joined Osaka University as a professor. His research areas include computer vision, machine learning, and optimization.

Lei Zhang

Hong Kong Polytechnic University

Title: Image restoration: From sparse prior, low-rank prior to deep priors

Abstract: Image restoration is a fundamental problem in low-level vision, and prior modeling and learning are key to image restoration performance. By assuming that images can be sparsely represented over pre-defined or learned dictionaries, sparse representation and dictionary learning methods have become very popular in image restoration. By stacking nonlocal similar image patches into a matrix and imposing a low-rank prior on it, rank minimization methods have also achieved great success in image restoration. Recently, it has been found that more effective prior models can be learned by training a deep convolutional neural network. In this talk, we will introduce our work and findings on sparse representation, low-rank analysis, and deep learning, and their applications to image restoration.
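
The low-rank prior mentioned above can be illustrated in a few lines: stack vectorized nonlocal similar patches into a matrix and soft-threshold its singular values, which is the proximal operator of the nuclear norm. The sketch below is a generic illustration of this idea, not a specific published method.

```python
import numpy as np

def svt_denoise(patch_matrix, tau):
    """Low-rank prior via singular value soft-thresholding.

    `patch_matrix` stacks vectorized nonlocal similar patches as columns;
    shrinking the singular values by `tau` gives the minimizer of
    0.5 * ||X - Y||_F^2 + tau * ||X||_* (a nuclear-norm-regularized fit).
    """
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Example: a noisy matrix of "similar patches" whose clean version is rank 2.
rng = np.random.default_rng(2)
clean = rng.normal(size=(64, 2)) @ rng.normal(size=(2, 20))  # 64-dim patches
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = svt_denoise(noisy, tau=1.0)
```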

Bio: Lei Zhang (M'04, SM'14) received his B.Sc. degree in 1995 from the Shenyang Institute of Aeronautical Engineering, Shenyang, P.R. China, and his M.Sc. and Ph.D. degrees in Control Theory and Engineering from Northwestern Polytechnical University, Xi'an, P.R. China, in 1998 and 2001, respectively. From 2001 to 2002, he was a research associate in the Department of Computing, The Hong Kong Polytechnic University. From January 2003 to January 2006 he worked as a Postdoctoral Fellow in the Department of Electrical and Computer Engineering, McMaster University, Canada. In 2006, he joined the Department of Computing, The Hong Kong Polytechnic University, as an Assistant Professor, and since July 2015 he has been a Full Professor in the same department. His research interests include computer vision, pattern recognition, image and video processing, and biometrics. Prof. Zhang has published more than 200 papers in those areas. As of 2016, his publications have been cited more than 20,000 times in the literature. Prof. Zhang is an Associate Editor of IEEE Transactions on Image Processing, the SIAM Journal on Imaging Sciences, and Image and Vision Computing. He is a "Highly Cited Researcher" selected by Thomson Reuters. More information can be found on his homepage.

Schedule



09:00
Invited Talk: Image restoration: From sparse prior, low-rank prior to deep priors.
Lei Zhang (Hong Kong Polytechnic Univ.)


10:50
Invited Talk: Single-Image Super-Resolution using Very Deep Convolutional Networks
Kyoung Mu Lee (Seoul National University)


13:30
Invited Talk: Efficient robust estimation for 3D shape recovery
Yasuyuki Matsushita (Osaka University)


14:10
Invited Talk: Low-level Vision and Curse of the In-Camera Image Processing Pipeline
Michael S. Brown (York University)
