August 28, 2020, Glasgow, UK

AIM 2020

Advances in Image Manipulation workshop

and challenges on image and video manipulation

in conjunction with ECCV 2020

Check the ECCV 2020 online AIM workshop landing page for the LIVE sessions, Q&A, recordings, and interaction.


Sponsors






Call for papers

Image manipulation is a key computer vision task, aiming at the restoration of degraded image content, the filling in of missing information, or the transformation and/or manipulation needed to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, as image manipulation serves as an important frontend. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer further fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the Advances in Image Manipulation (AIM) workshop at ICCV 2019, the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018, the Workshop and Challenge on Learned Image Compression (CLIC) editions at CVPR 2018, 2019, and 2020, and the New Trends in Image Restoration and Enhancement (NTIRE) editions at CVPR 2017, 2018, 2019, and 2020 and at ACCV 2016. Moreover, it relies on the people associated with the PIRM, CLIC, and NTIRE events, such as organizers, PC members, distinguished speakers, authors of published papers, challenge participants, and winning teams.

Papers addressing topics related to image/video manipulation, restoration and enhancement are invited. The topics include, but are not limited to:

  • Image-to-image translation
  • Video-to-video translation
  • Image/video manipulation
  • Perceptual manipulation
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Image/video semantic segmentation
  • Perceptual enhancement
  • Multimodal translation
  • Depth estimation
  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video de-hazing, de-raining, de-snowing, etc.
  • Demosaicing
  • Image/video compression
  • Removal of artifacts, shadows, glare and reflections, etc.
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Hyperspectral imaging
  • Underwater imaging
  • Aerial and satellite imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Image/video manipulation on mobile devices
  • Image/video restoration and enhancement on mobile devices
  • Studies and applications of the above.

AIM 2020 has the following associated groups of challenges:

  • image challenges
  • video challenges

The authors of the top methods in each category will be invited to submit papers to the AIM 2020 workshop.

The authors of the top methods will co-author the challenge reports.

The accepted AIM workshop papers will be published under the book title "ECCV 2020 Workshops" by the European Computer Vision Association (ECVA) and Springer.

Contact:

Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory

ETH Zurich, Switzerland



AIM 2020 video challenges

Important dates



Challenges (all deadlines at 5PM Pacific Time)
  • Site online: February 15, 2020
  • Release of train and validation data: May 5, 2020
  • Validation server online: May 15, 2020
  • Final test data release, validation server closed: July 10, 2020
  • Test restoration results submission deadline: July 17, 2020
  • Fact sheets submission deadline: July 17, 2020
  • Code/executable submission deadline: July 17, 2020
  • Preliminary test results released to the participants: July 21, 2020
  • Paper submission deadline for entries from the challenges: August 1, 2020

Workshop (all deadlines at 5PM Pacific Time)
  • Paper submission server online: May 8, 2020
  • Paper submission deadline: July 17, 2020
  • Paper submission deadline (only for methods from the AIM 2020 challenges or BMVC 2020 and ECCV 2020 rejected papers): August 1, 2020
  • Paper decision notification: August 5, 2020
  • Camera-ready deadline: September 13, 2020
  • Workshop day: August 28, 2020

Submit



Instructions and Policies
Format and paper length

A paper submission has to be in English, in PDF format, and at most 14 pages (excluding references) in single-column format. The paper format must follow the same guidelines as for all ECCV 2020 submissions:
https://eccv2020.eu/author-instructions/

Double-blind review policy

The review process is double blind: authors do not know the names of the chairs/reviewers of their papers, and reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the ECCV 2020 main conference only. If a paper is also submitted to ECCV and accepted there, it cannot be published at both ECCV and the workshop.

Submission site (online!)

https://cmt3.research.microsoft.com/AIMWC2020

Proceedings

Accepted and presented papers will be published after the conference in the ECCV Workshops proceedings, together with the ECCV 2020 main conference papers.

Author Kit

https://eccv2020.eu/wp-content/uploads/2020/01/eccv2020kit-1.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the example eccv2020submission.pdf for detailed formatting instructions.

People



Organizers

  • Radu Timofte, ETH Zurich
  • Andrey Ignatov, ETH Zurich
  • Luc Van Gool, KU Leuven & ETH Zurich
  • Wangmeng Zuo, Harbin Institute of Technology
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Kyoung Mu Lee, Seoul National University
  • Liang Lin, Sun Yat-Sen University
  • Eli Shechtman, Creative Intelligence Lab at Adobe Research
  • Kai Zhang, ETH Zurich
  • Dario Fuoli, ETH Zurich
  • Zhiwu Huang, ETH Zurich
  • Martin Danelljan, ETH Zurich
  • Shuhang Gu, ETH Zurich & University of Sydney
  • Ming-Yu Liu, NVIDIA Research
  • Seungjun Nah, Seoul National University
  • Sanghyun Son, Seoul National University
  • Jaerin Lee, Seoul National University
  • Andres Romero, ETH Zurich
  • Hannan Lu, Harbin Institute of Technology
  • Ruofan Zhou, EPFL
  • Majed El Helou, EPFL
  • Sabine Süsstrunk, EPFL
  • Roey Mechrez, BeyondMinds & Technion
  • Pengxu Wei, Sun Yat-Sen University
  • Evangelos Ntavelis, CSEM & ETH Zurich
  • Siavash Bigdeli, CSEM


PC Members

  • Cosmin Ancuti, UPT
  • Siavash Bigdeli, CSEM
  • Michael S. Brown, York University
  • Jianrui Cai, Hong Kong Polytechnic University
  • Chia-Ming Cheng, MediaTek
  • Cheng-Ming Chiang, MediaTek
  • Sunghyun Cho, POSTECH
  • Martin Danelljan, ETH Zurich
  • Chao Dong, SIAT
  • Weisheng Dong, Xidian University
  • Touradj Ebrahimi, EPFL
  • Majed El Helou, EPFL
  • Corneliu Florea, University Politehnica of Bucharest
  • Dario Fuoli, ETH Zurich
  • Peter Gehler, Amazon
  • Bastian Goldluecke, University of Konstanz
  • Shuhang Gu, ETH Zurich & University of Sydney
  • Zhe Hu, Hikvision Research
  • Zhiwu Huang, ETH Zurich
  • Andrey Ignatov, ETH Zurich
  • Phillip Isola, MIT
  • In So Kweon, KAIST
  • Christian Ledig, VideaHealth
  • Jaerin Lee, Seoul National University
  • Kyoung Mu Lee, Seoul National University
  • Seungyong Lee, POSTECH
  • Victor Lempitsky, Skoltech & Samsung
  • Liang Lin, Sun Yat-Sen University & DarkMatter AI
  • Ming-Yu Liu, NVIDIA Research
  • Hannan Lu, Harbin Institute of Technology
  • Vladimir Lukin, National Aerospace University
  • Roey Mechrez, BeyondMinds & Technion
  • Zibo Meng, OPPO
  • Yusuke Monno, Tokyo Institute of Technology
  • Seungjun Nah, Seoul National University
  • Hajime Nagahara, Osaka University
  • Vinay P. Namboodiri, IIT Kanpur
  • Evangelos Ntavelis, CSEM & ETH Zurich
  • Federico Perazzi, Adobe
  • Wenqi Ren, Chinese Academy of Sciences
  • Andres Romero, ETH Zurich
  • Aline Roumy, INRIA
  • Yoichi Sato, University of Tokyo
  • Nicu Sebe, University of Trento
  • Eli Shechtman, Creative Intelligence Lab at Adobe Research
  • Gregory Slabaugh, Huawei Noah's Ark Lab
  • Sanghyun Son, Seoul National University
  • Sabine Süsstrunk, EPFL
  • Yu-Wing Tai, HKUST
  • Hugues Talbot, Université Paris Est
  • Masayuki Tanaka, Tokyo Institute of Technology
  • Jean-Philippe Tarel, IFSTTAR
  • Radu Timofte, ETH Zurich
  • George Toderici, Google
  • Luc Van Gool, ETH Zurich & KU Leuven
  • Ting-Chun Wang, NVIDIA
  • Xintao Wang, The Chinese University of Hong Kong
  • Pengxu Wei, Sun Yat-Sen University
  • Feng Yang, Google
  • Ming-Hsuan Yang, University of California at Merced & Google
  • Ren Yang, ETH Zurich
  • Wenjun Zeng, Microsoft Research
  • Kai Zhang, ETH Zurich
  • Richard Zhang, UC Berkeley & Adobe Research
  • Ruofan Zhou, EPFL
  • Jun-Yan Zhu, Adobe Research & CMU
  • Wangmeng Zuo, Harbin Institute of Technology

Invited Talks



David Bau

MIT

Title: Reflected Light and Doors in the Sky: Rewriting a GAN's Rules [slides, video]

Abstract: Modern GANs learn remarkable representations that encode knowledge about relationships in the visual world. What is the shape of a child's eyebrow? Where do trees grow? In this talk we will dissect the internals of a GAN in order to ask how it represents such rules. Then we will ask: can we build a new world by rewriting those rules? If we edit a few weights of the GAN, how can we create a new model that goes beyond the training data, to synthesize a modified world that follows new rules that we design?
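To make the notion of "editing a few weights" concrete, here is a minimal, hypothetical sketch: one linear layer is treated as a map from an internal key activation to a value, and we compute the smallest (rank-one, least-squares) change to its weights that forces a chosen key to produce a new target value. The variable names and the update rule are illustrative only, not the rewriting procedure presented in the talk.

    # Hypothetical illustration (NumPy): smallest rank-one change to a layer's
    # weight matrix W so that a chosen internal "key" maps to a new target.
    import numpy as np

    def rank_one_edit(W, key, target):
        """Return W' minimizing ||W' - W||_F subject to W' @ key == target."""
        key = np.asarray(key, dtype=float)
        residual = target - W @ key                        # change required at the key
        return W + np.outer(residual, key) / (key @ key)   # rank-one correction

    # Toy usage: rewrite a random 4x8 layer so that `key` now produces `target`.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 8))
    key, target = rng.standard_normal(8), rng.standard_normal(4)
    W_edited = rank_one_edit(W, key, target)
    assert np.allclose(W_edited @ key, target)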

Bio: David Bau is a PhD student at MIT studying Computer Vision with Antonio Torralba. His research focuses on the dissection, visualization, and interactive manipulation of deep networks in vision. David was previously an engineer at Google, and he is also coauthor of a widely-used textbook on numerical linear algebra. David believes that, to expand human agency, machine learning should be more transparent.

Richard Zhang

Adobe Research

Title: Style and Structure Disentanglement for Image Manipulation [slides, video]

Abstract: A fundamental challenge of image manipulation problems is separating “style” from “structure”. We explore this in two settings: (a) unpaired image translation, where two image collections are given and (b) the purely unsupervised setting, where an unlabeled image collection is given. In unpaired image translation, the domain labels (for example, horses and zebras) are provided and indicate “style”. A successful model must then discover the structural correspondence between the two domains. In other words, each patch in the output should faithfully reflect the content of the corresponding patch in the input, independent of domain. We propose a straightforward method for doing so -- maximizing mutual information between the two, using a framework based on contrastive learning. Our framework, Contrastive Unpaired Translation, does not rely on any prespecified similarity function (such as L1 or perceptual loss) and enables one-sided translation, improving quality and reducing training time.
Next, we explore the purely unsupervised setting, where an unlabeled image collection is given, and structure and style must both be discovered. While generative models have become increasingly effective at producing realistic images from such a collection, adapting such models for controllable manipulation of existing images remains challenging. We propose the Swapping Autoencoder, which is designed specifically for image manipulation, rather than random sampling. The key idea is to encode an image with two independent components and enforce that any swapped combination maps to a realistic image. In particular, we enforce one component to encode co-occurrent patch statistics across different parts of an image, corresponding to its “style”. Such a disentangled representation allows us to flexibly manipulate real images in various ways, including texture swapping, local and global editing, and latent code vector arithmetic.
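As a rough illustration of the contrastive idea in the first part of the talk, the sketch below computes a patch-wise InfoNCE-style loss: each output-patch feature should match the input-patch feature from the same location (positive) rather than features from other locations (negatives). The function and variable names are illustrative, not the Contrastive Unpaired Translation implementation.

    # Illustrative patch-wise InfoNCE loss (PyTorch). For each output patch,
    # the positive key is the input-patch feature at the same location; all
    # other locations act as negatives.
    import torch
    import torch.nn.functional as F

    def patch_contrastive_loss(feat_out, feat_in, temperature=0.07):
        """feat_out, feat_in: (num_patches, dim) features from matching locations."""
        q = F.normalize(feat_out, dim=1)                   # queries (output patches)
        k = F.normalize(feat_in, dim=1)                    # keys (input patches)
        logits = q @ k.t() / temperature                   # (N, N) cosine similarities
        targets = torch.arange(q.size(0))                  # positive = same index
        return F.cross_entropy(logits, targets)

    # Toy usage with random features for 256 patch locations.
    loss = patch_contrastive_loss(torch.randn(256, 128), torch.randn(256, 128))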

Bio: Richard Zhang is a Research Scientist at Adobe Research, with interests in computer vision, deep learning, machine learning, and graphics. He obtained his PhD in EECS, advised by Professor Alexei A. Efros, at UC Berkeley in 2018. He graduated summa cum laude with BS and MEng degrees from Cornell University in ECE. He is a recipient of the 2017 Adobe Research Fellowship.

Ravi Ramamoorthi

University of California, San Diego

Title: Light Fields and View Synthesis from Sparse Images: Revisiting Image-Based Rendering [slides, video]

Abstract: Light fields and sparse view synthesis have the potential to enable easy acquisition of objects and scenes for virtual reality, display, and combination with computer graphics imagery. However, light fields are expensive to acquire and often involve a tradeoff between spatial and angular resolution. Moreover, view synthesis reprises the classic problem of image-based rendering, which is challenging in situations involving occlusions or specularity.
In this talk, we present our work over the last 4 years on addressing light field reconstruction and view synthesis from sparse images using a physically-motivated deep learning approach. We start by describing local light field fusion, a method to achieve guaranteed sampling bounds for light fields from sparse images. We then discuss our efforts chronologically, starting with light fields from only the four corner views, light fields from a single legacy photograph and light field video. We close by briefly reviewing two of our ECCV contributions involving volumetric representations for view synthesis and relighting: including neural radiance fields and deep reflectance volumes.
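For readers unfamiliar with the volumetric representations mentioned above, the snippet below shows the standard emission-absorption compositing used to render one pixel from per-sample densities and colors along a ray, as popularized by neural radiance fields. It is a generic textbook formulation, not the specific models covered in the talk.

    # Generic emission-absorption volume rendering for a single ray (PyTorch).
    import torch

    def composite_along_ray(sigmas, colors, deltas):
        """sigmas: (S,) densities; colors: (S, 3) RGB; deltas: (S,) sample spacings."""
        alpha = 1.0 - torch.exp(-sigmas * deltas)                   # per-sample opacity
        trans = torch.cumprod(
            torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
        )                                                           # transmittance up to each sample
        weights = trans * alpha                                     # contribution of each sample
        return (weights.unsqueeze(-1) * colors).sum(dim=0)          # rendered pixel color

    # Toy usage: 64 samples along one ray.
    pixel = composite_along_ray(torch.rand(64), torch.rand(64, 3), torch.full((64,), 0.05))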

Bio: Ravi Ramamoorthi is the Ronald L. Graham professor of Computer Science at the University of California, San Diego, and founding Director of the UC San Diego Center for Visual Computing. Prof. Ramamoorthi is an author of more than 150 refereed publications in computer vision and computer graphics, and has played a key role in building multi-faculty research groups that have been recognized as leaders in computer graphics and computer vision at Columbia, Berkeley and UCSD. His research has been recognized with a half-dozen early career awards, including the ACM SIGGRAPH Significant New Researcher Award in computer graphics in 2007, and the Presidential Early Career Award for Scientists and Engineers (PECASE) for his work in physics-based computer vision in 2008. He was elevated to IEEE and ACM Fellow in 2017, and inducted into the SIGGRAPH Academy in 2019.

Peyman Milanfar

Google

Title: Modern Computational Photography: Burst Imaging and Handheld Multi-frame Super-Resolution [slides, video]

Abstract: Modern algorithmic and computing advances, including machine learning, have changed the rules of photography, bringing to it new modes of capture, post-processing, storage, and sharing. In this talk, I’ll give a brief overview of recent progress in computational photography and describe some of the key advances of this technology, including burst photography and multi-frame super-resolution.
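To give a flavor of the multi-frame idea, the toy sketch below performs classical shift-and-add super-resolution: each aligned low-resolution frame is placed onto a finer grid at its sub-pixel offset, and overlapping contributions are averaged. It is a simplified textbook illustration that assumes the offsets are already known; it is not the Super Res Zoom pipeline.

    # Toy shift-and-add multi-frame super-resolution (NumPy). Assumes the
    # sub-pixel offsets of the burst frames are already known/estimated.
    import numpy as np

    def shift_and_add_sr(frames, shifts, scale=2):
        """frames: list of (H, W) images; shifts: list of (dy, dx) offsets in LR pixels."""
        H, W = frames[0].shape
        acc = np.zeros((H * scale, W * scale))
        wgt = np.zeros_like(acc)
        ys, xs = np.mgrid[0:H, 0:W]
        for img, (dy, dx) in zip(frames, shifts):
            # nearest-neighbour placement of each LR sample on the HR grid
            hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, H * scale - 1)
            hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, W * scale - 1)
            np.add.at(acc, (hy, hx), img)
            np.add.at(wgt, (hy, hx), 1.0)
        return acc / np.maximum(wgt, 1e-8)

    # Toy usage: four shifted copies of the same 32x32 image merged at 2x scale.
    base = np.random.rand(32, 32)
    sr = shift_and_add_sr([base] * 4, [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)])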

Bio: Dr. Peyman Milanfar is a Principal Scientist / Director at Google Research, where he leads the Computational Imaging team. Prior to this, he was a Professor of Electrical Engineering at UC Santa Cruz from 1999 to 2014, and Associate Dean for Research at the School of Engineering from 2010 to 2012. From 2012 to 2014 he was on leave at Google[x], where he helped develop the imaging pipeline for Google Glass. Most recently, Peyman's team at Google developed the digital zoom pipeline for the Pixel phones, which includes the multi-frame super-resolution (Super Res Zoom) pipeline and the RAISR upscaling algorithm. In addition, the Night Sight mode on Pixel 3 uses the Super Res Zoom technology to merge images (whether you zoom or not) for vivid shots in low light, including astrophotography. Peyman received his undergraduate education in electrical engineering and mathematics from the University of California, Berkeley, and his MS and PhD degrees in electrical engineering from the Massachusetts Institute of Technology. He founded MotionDSP, which was acquired by Cubic Inc. (NYSE:CUB). Peyman has been a keynote speaker at numerous technical conferences, including the Picture Coding Symposium (PCS), SIAM Imaging Sciences, SPIE, and the International Conference on Multimedia and Expo (ICME). Along with his students, he has won several best paper awards from the IEEE Signal Processing Society.
He is a Distinguished Lecturer of the IEEE Signal Processing Society, and a Fellow of the IEEE “for contributions to inverse problems and super-resolution in imaging.”

Schedule, check LIVE, Q&A, recordings, interaction

A subset of the accepted AIM workshop papers is presented orally (LIVE).
The accepted AIM workshop papers will be published under the book title "ECCV 2020 Workshops" by the European Computer Vision Association (ECVA) and Springer.



List of AIM 2020 publications and links to online materials:



A Benchmark for Burst Color Constancy
Qian, Yanlin*; Käpylä, Jani M; Kamarainen, Joni-Kristian; Koskinen, Samu; Matas, Jiri
A Benchmark for Inpainting of Clothing Images with Irregular Holes
Kınlı, Furkan Osman*; Özcan, Barış; Kirac, Furkan
Adaptive Hybrid Composition based Super-Resolution Network via Fine-grained Channel Pruning
Chen, Siang*; Huang, Kai; Claesen, Luc; Li, Bowen; Xiong, Dongliang; Jiang, Haitian
AgingMapGAN (AMGAN): High-Resolution Controllable Face Aging with Spatially-Aware Conditional GANs
Despois, Julien*; Flament, Frederic; Perrot, Matthieu
AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results
Zhang, Kai*; Danelljan, Martin; Li, Yawei; Timofte, Radu
AIM 2020 Challenge on Image Extreme Inpainting
Ntavelis, Evangelos*; Romero, Andrés; Arjomand Bigdeli, Siavash; Timofte, Radu
AIM 2020 Challenge on Learned Image Signal Processing Pipeline
Ignatov, Andrey; Timofte, Radu*
AIM 2020 Challenge on Real Image Super-Resolution
Wei, Pengxu*; Lu, Hannan; Timofte, Radu; Lin, Liang; Zuo, Wangmeng
AIM 2020 Challenge on Rendering Realistic Bokeh
Ignatov, Andrey; Timofte, Radu*
AIM 2020 Challenge on Video Extreme Super-Resolution: Methods and Results
Fuoli, Dario*; Huang, Zhiwu; Gu, Shuhang; Timofte, Radu
AIM 2020 Challenge on Video Temporal Super-Resolution
Son, Sanghyun*; Lee, Jaerin; Nah, Seungjun; Timofte, Radu; Lee, Kyoung Mu
AIM 2020: Scene Relighting and Illumination Estimation Challenge
El Helou, Majed*; Zhou, Ruofan; Süsstrunk, Sabine; Timofte, Radu
An Ensemble Neural Network for Scene Relighting with Light Classification
Dong, Liping*; Jiang, Zhuolong; Li, Chenghua
AWNet: Attentive Wavelet Network for Image ISP
Dai, Linhui*; Liu, Xiaohong; Li, Chengqi; Chen, Jun
BGGAN: Bokeh-Glass Generative Adversarial Network for Rendering Realistic Bokeh
Qian, Ming*; Qiao, Congyu; Lin, Jiamin; Guo, Zhenyu; Li, Chenghua; Leng, Cong
Bokeh Rendering from Defocus Estimation
Luo, Xianrui*; Peng, Juewen; Xian, Ke; Wu, Zijin; Cao, Zhiguo
CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer
KIPS, Robin*; PERROT, Matthieu; Bloch, Isabelle; Gori, Pietro
Conditional Adversarial Camera Model Anonymisation
Andrews, Jerone T A*; Zhang, Yidan; Griffin, Lewis
Deep Adaptive Inference Networks for Single Image Super-Resolution
Liu, Ming; Zhang, Zhilu; Hou, Liya; Zuo, Wangmeng*; Zhang, Lei
Deep Cyclic Generative Adversarial Residual Convolutional Networks for Real Image Super-Resolution
Muhammad Umer, Rao*; Micheloni, Christian
Deep Plug-and-Play Video Super-Resolution
Lu, Hannan*; Tong, Chaoyu; Lian, Wei; Ren, Dongwei; Zuo, Wangmeng
Deep Relighting Networks for Image Light Source Manipulation
Wang, Li-Wen; Siu, Wan-Chi*; Liu, Zhisong; Li, Chu-Tak; Lun, Daniel P.K.
DeepGIN: Deep Generative Inpainting Network for Extreme Image Inpainting
Li, Chu-Tak; Siu, Wan-Chi*; Liu, Zhisong; Wang, Li-Wen; Lun, Daniel P.K.
Deformable Kernel Convolutional Network for Video Extreme Super-Resolution
Xu, Xuan*; Li, Xin; Wang, Jinge; Xiong, Xin
Densely Connecting Depth Maps for Monocular Depth Estimation
Zhang, Jinqing*; Yue, Haosong; Wu, Xingming; Chen, Weihai; Wen, Changyun
Disrupting Deepfakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems
Ruiz, Nataniel*; Bargal, Sarah; Sclaroff, Stan
EEDNet: Enhanced Encoder Decoder Network for AutoISP
Zhu, Yu*; Tian, Liang; Guo, Zhenyu; He, Xiangyu; Li, Chenghua; Leng, Cong; Cheng, Jian; Zhang, Yifan
Efficient Image Super-Resolution using Pixel Attention
Zhao, Hengyuan*; Kong, Xiangtao; He, Jingwen; Qiao, Yu; Dong, Chao
Efficient Super-Resolution using MobileNetV3
Wang, Haicheng*; Bhaskara, Vineeth S.; Levinshtein, Alex; Tsogkas, Stavros; Jepson, Allan D
Efficiently Detecting Plausible Locations for Object Placement using Masked Convolutions
Susmelj, Igor; Volokitin, Anna*; Agustsson, Eirikur; Van Gool, Luc; Timofte, Radu
Enhanced Adaptive Dense Connection Single Image Super-Resolution
Xie, Tangxin*; Li, Jing; Shen, Yi; Jia, Yu; Zhang, Jialiang; Zeng, Bing
Enhanced Quadratic Video Interpolation
Liu, Yihao*; Xie, Liangbin; Si-Yao, Li; Sun, Wenxiu; Qiao, Yu; Dong, Chao
FamilyGAN: Generating Kin Face Images using Generative Adversarial Networks
Sinha, Rauank; Vatsa, Mayank; Singh, Richa*
FAN: Frequency Aggregation Network for Real Image Super-resolution
Pang, Yingxue; Li, Xin; Jin, Xin; Wu, Yaojun; Liu, Jianzhao; Liu, Sen; Chen, Zhibo*
Flexible Example-based Image Enhancement with Task Adaptive Global Feature Self-Guided Network
Kneubuehler, Dario*; Gu, Shuhang; Van Gool, Luc; Timofte, Radu
Gated Texture CNN for Efficient and Configurable Image Denoising
Imai, Kaito*; Miyata, Takamichi
Genetic-GAN: Synthesizing images between two domains by genetic crossover
Zaman, Ishtiak*; Crandall, David
GIA-Net: Global Information Aware Network for Low-light Imaging
Meng, Zibo*; Xu, Runsheng; Ho, Chiu Man
Human Motion Transfer from Poses in the Wild
Ren, Jian*; Chai, Menglei; Tulyakov, Sergey; Fang, Chen; Shen, Xiaohui; Yang, Jianchao
IdleSR: Efficient Super-Resolution Network with Multi-Scale IdleBlocks
Xiong, Dongliang*; Huang, Kai; Jiang, Haitian; Li, Bowen; Chen, Siang; Jiang, Xiaowen
Joint Demosaicking and Denoising for CFA and MSFA Images using a Mosaic-Adaptive Dense Residual Network
Pan, Zhihong*; Li, Baopu; Cheng, Hsuchun; Bao, Yingze
L2-Constrained RemNet for Camera Model Identification and Image Manipulation Detection
Rafi, Abdul Muntakim*; Wu, Jonathan; Hasan, Md. Kamrul
LarvaNet: Hierarchical Super-Resolution via Multi-exit Architecture
Jeon, Geun-Woo; Choi, Jun-Ho; Kim, Jun-Hyuk; Lee, Jong-Seok*
Learning to improve image compression without changing the standard decoder
Strümpler, Yannick*; Yang, Ren; Timofte, Radu
LightNet: Deep Learning based Illumination Estimation from Virtual Images
Nathan, Sabari*; M, Parisa Beham
Long-Term Human Video Generation of Multiple Futures Using Poses
Fushishita, Naoya*; Tejero-de-Pablos, Antonio; Mukuta, Yusuke; Harada, Tatsuya
MSEM: Multi-Scale Semantic-Edge Merged Model for Image Inpainting
Ni, Haopeng*; Zeng, Weijian; Cai, Yiyang; Li, Chenghua
Multi-Objective Reinforced Evolution in Mobile Neural Architecture Search
Chu, Xiangxiang; Zhang, Bo*; Xu, Ruijun
Noise-Aware Merging of High Dynamic Range Image Stacks without Camera Calibration
Hanji, Param*; Zhong, Fangcheng; Mantiuk, Rafal
PyNET-CA: Enhanced PyNET with Channel Attention for End-to-end Mobile Image Signal Processing
Kim, Byung-Hoon; Song, Joonyoung; Ye, Jong Chul*; Baek, JaeHyun
Pyramidal Edge-maps and Attention based Guided Thermal Super-resolution
Gupta, Honey*; Mitra, Kaushik
Quantized Warping and Residual Temporal Integration for Video Super-Resolution on Fast Motions
Karageorgos, Konstantinos*; Zafeirouli, Kassiani; Konstantoudakis, Konstantinos; Dimou, Anastasios; Daras, Petros
Real Image Super Resolution via Heterogeneous Model Ensemble using GP-NAS
Pan, Zhihong*; Li, Baopu; Xi, Teng; Fan, Yanwen; Zhang, Gang; Liu, Jingtuo; Han, Junyu; Ding, Errui
Residual Feature Distillation Network for Lightweight Image Super-Resolution
Liu, Jie*; Tang, Jie; Wu, Gangshan
SA-AE for Any-to-any Relighting
Hu, Zhongyun; Huang, Xin; Li, Yaning; Wang, Qing*
Self-Calibrated Attention Neural Network for Real-World Super Resolution
Cheng, Kaihua*; Wu, Chenhuan
Single image dehazing for a variety of haze scenarios using back projected pyramid network
Singh, Ayush; Bhave, Ajay; Prasad, Dilip K*
Ultra Lightweight Image Super-Resolution with Multi-Attention
Muqeet, Abdul*; Hwang, Jiwon; Yang, Subin; Kang, Jung Heum; Kim, Yongwoo; Bae, Sung-Ho
Unconstrained Text Detection in Manga: a New Dataset and Baseline
Del Gobbo, Julian*; Matuk Herrera, Rosana
WDRN: A Wavelet Decomposed RelighNet for Image Relighting
Puthussery, Densen; P.S., Hrishikesh*; Kuriakose, Melvin; C.V., Jiji
Fast Light-Weight Network for Face Image Inpainting
Bai, Mengmeng*; Li, Shuchen




Invited Talk 1: "Reflected Light and Doors in the Sky: Rewriting a GAN's Rules"
David Bau (MIT)
Invited Talk 2: "Style and Structure Disentanglement for Image Manipulation"
Richard Zhang (Adobe)
Invited Talk 3: "Light Fields and View Synthesis from Sparse Images: Revisiting Image-Based Rendering"
Ravi Ramamoorthi (University of California, San Diego)
Invited Talk 4: "Modern Computational Photography: Burst Imaging and Handheld Multi-frame Super-Resolution"
Peyman Milanfar (Google)