November 2, Seoul, Korea

AIM 2019

Advances in Image Manipulation workshop

and challenges on image and video manipulation

in conjunction with ICCV 2019

Sponsors (TBD)




Call for papers

Image manipulation is a key computer vision task, aiming at the restoration of degraded image content, the filling in of missing information, or the transformation and/or manipulation needed to achieve a desired target (with respect to perceptual quality, contents, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, as image manipulation serves as an important frontend. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

This workshop builds upon the success of the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018, the Workshop and Challenge on Learned Image Compression (CLIC) editions at CVPR 2018 and CVPR 2019, and the New Trends in Image Restoration and Enhancement (NTIRE) editions at CVPR 2017, 2018 and 2019 and at ACCV 2016. Moreover, it relies on the people associated with the PIRM, CLIC and NTIRE events: organizers, PC members, distinguished speakers, authors of published papers, challenge participants and winning teams.

Papers addressing topics related to image/video manipulation, restoration and enhancement are invited. The topics include, but are not limited to:

  • Image-to-image translation
  • Video-to-video translation
  • Image/video manipulation
  • Perceptual manipulation
  • Image/video generation and hallucination
  • Image/video quality assessment
  • Image/video semantic segmentation
  • Perceptual enhancement
  • Multimodal translation
  • Depth estimation
  • Image/video inpainting
  • Image/video deblurring
  • Image/video denoising
  • Image/video upsampling and super-resolution
  • Image/video filtering
  • Image/video de-hazing, de-raining, de-snowing, etc.
  • Demosaicing
  • Image/video compression
  • Removal of artifacts, shadows, glare and reflections, etc.
  • Image/video enhancement: brightening, color adjustment, sharpening, etc.
  • Style transfer
  • Hyperspectral imaging
  • Underwater imaging
  • Aerial and satellite imaging
  • Methods robust to changing weather conditions / adverse outdoor conditions
  • Image/video manipulation on mobile devices
  • Image/video restoration and enhancement on mobile devices
  • Studies and applications of the above.

AIM 2019 has the following associated groups of challenges:

  • image manipulation, restoration and enhancement challenges
  • video manipulation, restoration and enhancement challenges

The authors of the top methods in each category will be invited to submit papers to the AIM 2019 workshop.

The authors of the top methods will co-author the challenge reports.

The accepted AIM workshop papers will be published under the book title "ICCV Workshops" by Computer Vision Foundation Open Access and IEEE Xplore Digital Library.

Contact:

Radu Timofte, radu.timofte@vision.ee.ethz.ch

Computer Vision Laboratory

ETH Zurich, Switzerland

Important dates



Challenges (deadlines at 5PM Pacific Time unless noted otherwise)

  • Site online: July 1, 2019
  • Release of train data and validation data: July 15, 2019
  • Validation server online: July 20, 2019
  • Final test data release, validation server closed: September 2, 2019
  • Test restoration results submission deadline: September 11, 2019
  • Fact sheets submission deadline: September 11, 2019
  • Code/executable submission deadline: September 11, 2019
  • Preliminary test results release to the participants: September 15, 2019
  • Paper submission deadline for entries from the challenges: September 22, 2019, 23:59 Pacific Standard Time (EXTENDED!)

Workshop (deadlines at 5PM Pacific Time unless noted otherwise)

  • Paper submission server online: July 20, 2019
  • Paper submission deadline: August 11, 2019
  • Paper submission deadline (late/challenge papers): September 22, 2019, 23:59 Pacific Standard Time (EXTENDED)
  • Regular papers decision notification: August 23, 2019
  • Challenge papers decision notification: September 22, 2019
  • Regular paper camera-ready deadline: August 30, 2019
  • Challenge paper camera-ready deadline: September 27, 2019
  • Workshop day: November 2, 2019

Submit



Instructions and Policies
Format and paper length

A paper submission must be written in English, submitted in PDF format, and be at most 8 pages (excluding references) in double-column format. The paper format must follow the same guidelines as all ICCV 2019 main conference submissions:
http://iccv2019.thecvf.com/submission/main_conference/author_guidelines

Double-blind review policy

The review process is double-blind: authors do not know the names of the chairs/reviewers of their papers, and reviewers do not know the names of the authors.

Dual submission policy

Dual submission is allowed with the ICCV 2019 main conference only. If a paper is also submitted to ICCV and accepted there, it cannot be published at both ICCV and the workshop.

Submission site

https://cmt3.research.microsoft.com/AIMW2019

Proceedings

Accepted and presented papers will be published after the conference in the ICCV Workshops proceedings, together with the ICCV 2019 main conference papers.

Author Kit

http://iccv2019.thecvf.com/files/iccv2019AuthorKit.zip
The author kit provides a LaTeX2e template for paper submissions. Please refer to the example egpaper_for_review.pdf for detailed formatting instructions.

People



Organizers

Radu Timofte

Radu Timofte is a lecturer and research group leader in the Computer Vision Laboratory at ETH Zurich, Switzerland. He obtained his PhD degree in Electrical Engineering at KU Leuven, Belgium in 2013, his MSc at the Univ. of Eastern Finland in 2007, and his Dipl. Eng. at the Technical Univ. of Iasi, Romania in 2006. He serves as a reviewer for top journals (such as TPAMI, TIP, IJCV, TNNLS, TCSVT, CVIU, PR) and conferences (ICCV, CVPR, ECCV, NIPS) and is an area editor for Elsevier's CVIU journal. He served as an area chair for ACCV 2018 and serves as an area chair for ICCV 2019. He received a NIPS 2017 best reviewer award. His work received the best scientific paper award at ICPR 2012, the best paper award at the CVVT workshop (ECCV 2012), the best paper award at the ChaLearn LAP workshop (ICCV 2015), the best scientific poster award at EOS 2017, and the honorable mention award at FG 2017, and his team won a number of challenges, including traffic sign detection (IJCNN 2013) and apparent age estimation (ICCV 2015). He is a co-founder of Merantix and a co-organizer of the NTIRE, CLIC and PIRM events. His current research interests include sparse and collaborative representations, deep learning, optical flow, image/video compression, restoration and enhancement.

Shuhang Gu

Shuhang Gu received the B.E. degree from the School of Astronautics, Beijing University of Aeronautics and Astronautics, China, in 2010, the M.E. degree from the Institute of Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, China, in 2013, and the Ph.D. degree from the Department of Computing, The Hong Kong Polytechnic University, in 2017. He currently holds a post-doctoral position at ETH Zurich, Switzerland. His research interests include image restoration, enhancement and compression.

Martin Danelljan

Martin Danelljan received his Ph.D. degree from Linköping University, Sweden in 2018. He is currently a postdoctoral researcher at ETH Zurich, Switzerland. His main research interests are online machine learning methods for visual tracking and video object segmentation, probabilistic models for point cloud registration, and machine learning with no or limited supervision. His research in the field of visual tracking in particular has attracted much attention. In 2014, he won the Visual Object Tracking (VOT) Challenge and the OpenCV State-of-the-Art Vision Challenge. Furthermore, he achieved top ranks in the VOT2016 and VOT2017 challenges. He received the best paper award in the computer vision track at ICPR 2016.

Ming-Hsuan Yang

Ming-Hsuan Yang received his PhD degree in Computer Science from the University of Illinois at Urbana-Champaign. He is a full professor of Electrical Engineering and Computer Science at the University of California at Merced. He has published more than 120 papers in the field of computer vision. He served as a program co-chair of ACCV 2014 and general co-chair of ACCV 2016, and serves as a program co-chair of ICCV 2019. He serves as an editor for PAMI, IJCV, CVIU, IVC and JAIR. His research interests include object detection, tracking, recognition, image deblurring, super-resolution, saliency detection, and image/video segmentation.

Luc Van Gool

Luc Van Gool received a degree in electro-mechanical engineering from the Katholieke Universiteit Leuven in 1981. Currently, he is a full professor of Computer Vision at ETH Zurich and the Katholieke Universiteit Leuven in Belgium, and he leads research and teaches at both places. He has authored over 300 papers. Luc Van Gool has been a program committee member of several major computer vision conferences (e.g., Program Chair of ICCV'05 in Beijing, General Chair of ICCV'11 in Barcelona and of ECCV'14 in Zurich). His main interests include 3D reconstruction and modeling, object recognition, tracking, and gesture analysis. He has received several best paper awards (e.g., the David Marr Prize '98, Best Paper at CVPR'07, the Tsuji Outstanding Paper Award at ACCV'09, and Best Vision Paper at ICRA'09). In 2015 he received the five-yearly Excellence Award in Applied Sciences from the Flemish Fund for Scientific Research, in 2016 the Koenderink Prize, and in 2017 a PAMI Distinguished Researcher award. He is a co-founder of more than 10 spin-off companies and was the holder of an ERC Advanced Grant (VarCity). Currently, he leads computer vision research for autonomous driving in the context of the Toyota TRACE labs in Leuven and at ETH, as well as image and video enhancement research for Huawei.

Kyoung Mu Lee

Kyoung Mu Lee received his B.S. and M.S. degrees from Seoul National University, Seoul, Korea, and his Ph.D. degree in Electrical Engineering from the University of Southern California in 1993. Currently he is a full professor in the Dept. of ECE at Seoul National University. His primary research interests include scene understanding, object recognition, low-level vision, visual tracking, and visual navigation. He is currently serving as an Associate Editor in Chief (AEIC) of the IEEE TPAMI and an Area Editor of Computer Vision and Image Understanding (CVIU), and has served as an Associate Editor of the IEEE TPAMI, the Machine Vision and Applications (MVA) journal, the IPSJ Transactions on Computer Vision and Applications (CVA), and the IEEE Signal Processing Letters. He is an Advisory Board Member of the Computer Vision Foundation (CVF) and an Editorial Advisory Board Member for Academic Press/Elsevier. He has also served as an Area Chair of CVPR, ICCV, ECCV, and ACCV many times, and serves as a general co-chair of ACM MM 2018, ACCV 2018 and ICCV 2019. He was a Distinguished Lecturer of the Asia-Pacific Signal and Information Processing Association (APSIPA) for 2012-2013.

Eli Shechtman

Eli Shechtman is a Principal Scientist at the Creative Intelligence Lab at Adobe Research. He received the B.Sc. degree in Electrical Engineering (magna cum laude) from Tel-Aviv University in 1996. Between 2001 and 2007 he attended the Weizmann Institute of Science, where he received his M.Sc. and Ph.D. degrees with honors in Applied Mathematics and Computer Science. In 2007 he joined Adobe and started sharing his time as a post-doc with the University of Washington in Seattle. He has published over 60 academic publications and holds over 20 issued patents. He served as a Technical Papers Committee member at SIGGRAPH 2013 and 2014 and as an Area Chair at CVPR'15, ICCV'15 and CVPR'17, and serves as an Associate Editor at TPAMI. He has received several honors and awards, including the Best Paper prize at ECCV 2002, a Best Poster Award at CVPR 2004, and a Best Reviewer Award at ECCV 2014, and has published two Research Highlights papers in the Communications of the ACM journal.

Ming-Yu Liu

Ming-Yu Liu is a principal research scientist at NVIDIA Research. Before joining NVIDIA in 2016, he was a principal research scientist at Mitsubishi Electric Research Labs (MERL). He received his Ph.D. from the Department of Electrical and Computer Engineering at the University of Maryland, College Park in 2012. His object pose estimation system was named one of the hundred most innovative technology products by R&D Magazine in 2014. His street scene understanding paper was a best paper finalist at the 2015 Robotics: Science and Systems (RSS) conference. In CVPR 2018, he won 1st place in both the Domain Adaptation for Semantic Segmentation Competition in the WAD challenge and the Optical Flow Competition in the Robust Vision Challenge. His research focus is on generative models for image generation and understanding. His goal is to enable superhuman-like imagination capabilities in machines.

Zhiwu Huang

Zhiwu Huang is currently a postdoctoral researcher in the Computer Vision Lab, ETH Zurich, Switzerland. He received his PhD degree from the Institute of Computing Technology, Chinese Academy of Sciences in 2015. His main research interest is human-focused video analysis with Riemannian manifold networks and Wasserstein generative models.

Seungjun Nah

Seungjun Nah is a Ph.D. student at Seoul National University, advised by Prof. Kyoung Mu Lee. He received his BS degree from Seoul National University in 2014. He has worked on computer vision research topics including super-resolution, deblurring, and neural network acceleration. He won the 1st place award in the NTIRE 2017 super-resolution challenge and co-organized the NTIRE 2019 workshop. He has reviewed papers for conferences (ICCV 2019, CVPR 2018, SIGGRAPH Asia 2018) and journals (IJCV, TNNLS, TMM, TIP). He was nominated as one of the best reviewers at ICCV 2019. His research interests include visual quality enhancement, low-level computer vision, and deep learning.

Richard Zhang

Richard Zhang is a Research Scientist at Adobe Research, San Francisco. He received his Ph.D. in 2018 in Electrical Engineering and Computer Sciences at the University of California, Berkeley, advised by Professor Alexei A. Efros. Before that, he obtained B.S. and M.Eng. degrees in 2010 from Cornell University in Electrical and Computer Engineering, graduating summa cum laude. He is a recipient of the 2017 Adobe Research Fellowship. His research interests include generative modeling, unsupervised learning, and deep learning.

Andrey Ignatov

Andrey Ignatov is a PhD student at ETH Zurich supervised by Prof. Luc Van Gool and Dr. Radu Timofte. He obtained his master's degree from ETH Zurich in 2017. His current research interests include computational imaging, deep learning, wearable devices, and benchmarking.

Invited Talks (TBD)



Victor Lempitsky

Samsung AI Center, Skoltech

Title: Few-shot learning of realistic neural talking head models

Abstract: Several recent works have shown how highly realistic human head images can be obtained by training convolutional neural networks to generate them. In order to create a personalized talking head model, these works require training on a large dataset of images of a single person. However, in many practical scenarios, such personalized talking head models need to be learned from a few image views of a person, potentially even a single image. I will present a system with such few-shot capability. It performs lengthy meta-learning on a large dataset of videos, and after that is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters. I will show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings. Joint work with Egor Zakharov, Aliaksandra Shysheya, Egor Burkov.

Bio: Victor Lempitsky leads the Samsung AI Center in Moscow as well as the Vision, Learning, Telepresence (VIOLET) Lab at this center. He is also an associate professor at Skolkovo Institute of Science and Technology (Skoltech). In the past, Victor was a researcher at Yandex, at the Visual Geometry Group (VGG) of Oxford University, and at Microsoft Research Cambridge. He has a PhD ("kandidat nauk") degree from Moscow State University (2007). Victor's research interests are in various aspects of computer vision and deep learning, in particular, generative deep learning. He has published extensively in top computer vision and machine learning venues and has served as an area chair for top computer vision conferences (CVPR, ICCV, ECCV, ICLR) on multiple occasions. He has received a Scopus Russia Award for being a highly cited researcher in 2018.

Michal Irani

Weizmann Institute

Title: "Deep Internal Learning" -- Deep Learning with Zero Examples

Abstract: In this talk I will show how complex visual inference tasks can be performed using deep learning, in a totally unsupervised way, by exploiting the internal redundancy inside a single image. The strong recurrence of information inside a single natural image provides powerful internal examples which suffice for self-supervision of deep networks, without any prior examples or training data. This new paradigm gives rise to true "Zero-Shot Learning". I will show the power of this approach on a variety of image enhancement/manipulation problems, including blind super-resolution, blind image dehazing, image segmentation, transparent layer separation, image retargeting, and more. In some of these problems, this approach yields state-of-the-art results.

Bio: Michal Irani is a Professor at the Weizmann Institute of Science, Israel, in the Department of Computer Science and Applied Mathematics. She received a B.Sc. degree in Mathematics and Computer Science from the Hebrew University of Jerusalem, and M.Sc. and Ph.D. degrees in Computer Science from the same institution. During 1993-1996 she was a member of the Vision Technologies Laboratory at the Sarnoff Research Center (Princeton). She joined the Weizmann Institute in 1997. Michal's research interests center around computer vision, image processing, and video information analysis. Michal's prizes and honors include the David Sarnoff Research Center Technical Achievement Award (1994), the Yigal Allon three-year Fellowship for Outstanding Young Scientists (1998), the Morris L. Levinson Prize in Mathematics (2003), and the Maria Petrou Prize (awarded by the IAPR) for outstanding contributions to the fields of Computer Vision and Pattern Recognition (2016). She received the ECCV Best Paper Award in 2000 and in 2002, and was awarded the Honorable Mention for the Marr Prize in 2001 and in 2005. In ICCV 2017 Michal received the Helmholtz “Test of Time Award”.

Vladlen Koltun

Intel

Title: Deep Learning for Imaging

Abstract: Deep learning initially appeared relevant primarily for higher-level signal analysis, such as image recognition or object detection. But recent work has clarified that image processing is not immune and may benefit substantially from the ability to reliably optimize multi-layer function approximators. I will review a line of work that investigates applications of deep networks to image processing. First, I will discuss the remarkable ability of convolutional networks to fit a variety of image processing operators. Next, I will present approaches that replace much of the traditional image processing pipeline with a deep network, with substantial benefits for applications such as low-light imaging and computational zoom. One take-away is that deep learning is a surprisingly exciting and consequential development for image processing.

Bio: Vladlen Koltun is the Chief Scientist for Intelligent Systems at Intel. He directs the Intelligent Systems Lab, which conducts high-impact basic research in computer vision, machine learning, robotics, and related areas. He has mentored more than 50 PhD students, postdocs, research scientists, and PhD student interns, many of whom are now successful research leaders.

Ting-Chun Wang

Nvidia

Title: Few-Shot Video-to-Video Synthesis

Abstract: In this talk, I will summarize our recent work on few-shot adaptive video translation. I'll start by briefly reviewing the original video-to-video synthesis (vid2vid) work. Then I'll show how we extend it to the few-shot case, where we can synthesize unseen subjects or scenes given only a few example images of the target at test time. Our model achieves this generalization capability via a novel network weight generation module utilizing an attention mechanism. We conduct extensive experimental validations with comparisons to strong baselines on different datasets.

Bio: Ting-Chun Wang is a senior research scientist at NVIDIA in Santa Clara, USA. He obtained his Ph.D. in EECS from UC Berkeley, advised by Professors Ravi Ramamoorthi and Alexei A. Efros. He won 1st place in the Domain Adaptation for Semantic Segmentation Competition at CVPR 2018. His semantic image synthesis paper was a best paper finalist at CVPR 2019, and the corresponding GauGAN demo won the Best in Show Award and Audience Choice Award at SIGGRAPH Real-Time Live 2019. He served as an area chair for WACV 2020. His research interests include computer vision, machine learning and computer graphics, particularly the intersections of all three. His recent research focus is on using generative adversarial models to synthesize realistic images and videos, with applications to rendering, visual manipulation and beyond.

Eli Shechtman

Adobe

Title: Text-based Editing of Talking-head Video

Abstract: Editing talking-head video to change the speech content or to remove filler words is challenging. We propose a novel method to edit talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified, while maintaining a seamless audio-visual flow (i.e., no jump cuts). Our method automatically annotates an input talking-head video with phonemes, visemes, 3D face pose and geometry, reflectance, expression and scene illumination per frame. To edit a video, the user only has to edit the transcript, and an optimization strategy then chooses segments of the input corpus as base material. The annotated parameters corresponding to the selected segments are seamlessly stitched together and used to produce an intermediate video representation in which the lower half of the face is rendered with a parametric face model. Finally, a recurrent video generation network transforms this representation into a photorealistic video that matches the edited transcript. We demonstrate a large variety of edits, such as the addition, removal, and alteration of words, as well as convincing language translation and full sentence synthesis.

Bio: Eli Shechtman is a Principal Scientist at the Creative Intelligence Lab at Adobe Research, one of the leading non-academic research labs in the world in the areas of computer graphics, computer vision, audio, machine learning and HCI. He received his B.Sc. in Electrical Engineering (cum laude) from Tel-Aviv University in 1996 and his M.Sc. and Ph.D. with honors in Applied Mathematics and Computer Science from the Weizmann Institute of Science in 2003 and 2007. He then joined Adobe and also shared his time as a post-doc at the University of Washington from 2007 to 2010. He has published over 90 academic publications. Two of his papers were chosen to be published as "Research Highlights" papers in the Communications of the ACM (CACM) journal. He served as a Technical Papers Committee member at SIGGRAPH 2013 and 2014, was an Area Chair at CVPR 2015, 2017 and 2020 (future) and at ICCV 2015 and 2019, and has been an Associate Editor for TPAMI since 2016. He has received several honors and awards, including the Best Paper prize at ECCV 2002, a Best Paper award at WACV 2018 and the Helmholtz "Test of Time Award" at ICCV 2017. His research is at the intersection of computer vision, computer graphics and machine learning. In particular, he focuses on generative and example-based modeling and editing of visual data. Some of his research can be found in Adobe's products, such as Photoshop's Content Aware Fill, Content Aware Fill for Video in After Effects, Upright in Lightroom and Characterizer in Character Animator.

Schedule


The accepted AIM workshop papers will be presented as posters. The dimensions of the poster panels are 195 cm (width) x 95 cm (height).
The accepted AIM workshop papers will be published under the book title "ICCV 2019 Workshops" by Computer Vision Foundation Open Access and IEEE Xplore Digital Library.



List of AIM 2019 papers (final panel allocation)

(Morning Poster #94) NoUCSR: Efficient Super-Resolution Network without Upsampling Convolution
Xiong, Dongliang; Huang, Kai; Chen, Siang; Li, Bowen; Jiang, Haitian; Xu, Wenyuan
(Morning Poster #95) HighEr-Resolution Network for Image Demosaicing and Enhancing
Mei, Kangfu; Li, Juncheng; Zhang, Jiajie; Wu, Haoyu; Li, Jie; Huang, Rui
(Morning Poster #96) Edge-Informed Single Image Super-Resolution
Nazeri, Kamyar; Thasarathan, Harrish P; Ebrahimi, Mehran
(Morning Poster #97) Multi-scale Dynamic Feature Encoding Network for Image Demoireing
Cheng, Xi; Yang, Jian; Fu, Zhenyong
(Morning Poster #98) DIV8K: DIVerse 8K Resolution Image Dataset
Gu, Shuhang; Lugmayr, Andreas; Danelljan, Martin; Fritsche, Manuel; Lamour, Julien; Timofte, Radu
(Morning Poster #99) AIM 2019 Challenge on Image Extreme Super-Resolution: Methods and Results
Gu, Shuhang; Danelljan, Martin; Timofte, Radu; et al.
(Morning Poster #100) AIM 2019 Challenge on Constrained Super-Resolution: Methods and Results
Zhang, Kai; Gu, Shuhang; Timofte, Radu; et al.
(Morning Poster #101) PFAGAN: An aesthetics-conditional GAN for generating photographic fine art
Murray, Naila
(Morning Poster #102) Image Super-Resolution via Attention based Back Projection Networks
Liu, Zhisong; Siu, Wan-Chi
(Morning Poster #103) AIM 2019 challenge on image demoireing: dataset and study
Yuan, Shanxin; Timofte, Radu; Slabaugh, Gregory; Leonardis, Ales
(Morning Poster #104) AIM 2019 challenge on image demoireing: methods and results
Yuan, Shanxin; Timofte, Radu; Slabaugh, Gregory; Leonardis, Ales; et al.
(Morning Poster #105) Saliency Map-aided Generative Adversarial Network for RAW to RGB Mapping
Zhao, Yuzhi; Lai Man, Po; Zhang, Tian Tian; Liao, Zongbang; Shi, Xiang; Zhang, Yujia; Ou, Weifeng; Xian, Pengfei; Xiong, Jingjing; Zhou, Chang; Yu, Wing Yin
(Morning Poster #106) MGBPv2: Scaling Up Multi-Grid Back-Projection Networks
Navarrete Michelini, Pablo; Chen, Wenbin; Wen, Han; Zhu, Dan
(Morning Poster #107) Depth-guided Dense Dynamic Filtering Network for Bokeh Effect Rendering
Purohit, Kuldeep; Suin, Maitreya; Kandula, Praveen; Ambasamudram, Rajagopalan N
(Morning Poster #108) AI Benchmark: All About Deep Learning on Smartphones in 2019
Ignatov, Andrey; Timofte, Radu; Kulik, Andrei; Yang, Seungsoo; Wang, Ke; Wu, Max; Baum, Felix; Xu, Lirong; Van Gool, Luc
(Morning Poster #109) AIM 2019 Challenge on RAW to RGB Mapping: Methods and Results
Ignatov, Andrey; Timofte, Radu; et al.
(Morning Poster #110) AIM 2019 Challenge on Bokeh Effect Synthesis: Methods and Results
Ignatov, Andrey; Patel, Jagruti; Timofte, Radu; et al.
(Morning Poster #111) 3SGAN: 3D Shape Embedded Generative Adversarial Networks
Zhu, Xiru; Che, Fengdi; Yang, Tianzi; Yu, Tzu-Yang
(Morning Poster #112) W-Net: Two-stage U-Net with misaligned data for raw-to-RGB mapping
Uhm, Kwang Hyun; Kim, Seung-Wook; Ji, Seo-Won; Cho, Sungjin; Hong, Jun Pyo; Ko, Sung-Jea
(Morning Poster #113) Extremely Weak Supervised Image-to-Image Translation for Semantic Segmentation
Shukla, Samarth; Van Gool, Luc; Timofte, Radu
(Morning Poster #114) ASSR: Lightweight Super Resolution Network with Aggregative Structure
Gang, Ruipeng; Liu, Shuai; Li, Chenghua; Song, Ruixia
(Morning Poster #115) Adaptive Densely Connected Super-Resolution Reconstruction
Xie, Tangxin TX; Yang, Xin; Jia, Yu; Zhu, Chen; Li, Xiaochuan
(Afternoon Poster #93) Blind Single Image Reflection Suppression for Face Images using Deep Generative Priors
Chandramouli, Paramanand; Gandikota, Kanchana Vaishnavi
(Afternoon Poster #94) Quadratic Video Interpolation for AIM Challenge
Si-Yao, Li; Xu, Xiangyu; Sun, Wenxiu; Pan, Ze
(Afternoon Poster #95) Un-paired Real World Super-resolution with Degradation Consistency
Huang, Yuanfei; Xiaopeng, Sun; Lu, Wen; Li, Jie; Gao, Xinbo
(Afternoon Poster #96) Dual Reconstruction with Densely Connected Residual Network for Single Image Super-Resolution
Hsu, Chih-Chung; Lin, Chia-Hsiang
(Afternoon Poster #97) SMIT: Stochastic Multi-Label Image-to-Image Translation
Romero Vergara, Andres Felipe; Arbelaez, Pablo; Van Gool, Luc; Timofte, Radu
(Afternoon Poster #98) PoSNet: 4x Video Frame Interpolation Using Position-Specific Flow
Yu, Songhyun; Park, Bumjun; Jeong, Jechang
(Afternoon Poster #99) SteReFo: Efficient Image Refocusing with Stereo Vision
Busam, Benjamin; Hog, Matthieu; McDonagh, Steven; Slabaugh, Gregory
(Afternoon Poster #100) Efficient Video Super-Resolution through Recurrent Latent Space Propagation
Fuoli, Dario; Gu, Shuhang; Timofte, Radu
(Afternoon Poster #101) AIM 2019 Challenge on Video Extreme Super-Resolution: Methods and Results
Fuoli, Dario; Gu, Shuhang; Timofte, Radu; et al.
(Afternoon Poster #102) The Vid3oC and IntVID Datasets for Video Super Resolution and Quality Mapping
Kim, Sohyeong; Li, Guanju; Fuoli, Dario; Danelljan, Martin; Huang, Zhiwu; Gu, Shuhang; Timofte, Radu
(Afternoon Poster #103) Can Generative Adversarial Networks Teach Themselves Text Segmentation?
Al-Rawi, Mohammed; Bazazian, Dena; Valveny, Ernest
(Afternoon Poster #104) Quotienting Impertinent Camera Kinematics for 3D Video Stabilization
Mitchel, Thomas W
(Afternoon Poster #105) AIM 2019 Challenge on Video Temporal Super-Resolution: Methods and Results
Nah, Seungjun; Son, Sanghyun; Timofte, Radu; Lee, Kyoung Mu; et al.
(Afternoon Poster #106) Image Disentanglement and Uncooperative Re-Entanglement for High-Fidelity Image-to-Image Translation
Harley, Adam; Wei, Shih-En; Saragih, Jason; Fragkiadaki, Katerina
(Afternoon Poster #107) Unsupervised Learning for Real-World Super-Resolution
Lugmayr, Andreas; Danelljan, Martin; Timofte, Radu
(Afternoon Poster #108) AIM 2019 Challenge on Real World Super-Resolution: Methods and Results
Lugmayr, Andreas; Danelljan, Martin; Timofte, Radu; et al.
(Afternoon Poster #109) Augmented Reality Based Recommendations based on Perceptual Shape Style Compatibility with Objects in the Viewpoint and Color Compatibility with the Background
Ayush, Kumar; Tanmay, Kumar
(Afternoon Poster #110) EdgeConnect: Structure Guided Image Inpainting using Edge Prediction
Nazeri, Kamyar; Ng, Eric; Joseph, Tony; Qureshi, Faisal Z; Ebrahimi, Mehran
(Afternoon Poster #111) Frequency Separation for Real-World Super-Resolution
Fritsche, Manuel; Gu, Shuhang; Timofte, Radu
(Afternoon Poster #112) Towards Spectral Estimation from a Single RGB Image in the Wild
Kaya, Berk; Can, Yigit Baran; Timofte, Radu
(Afternoon Poster #113) Robust Temporal Super-Resolution for Dynamic Motion Videos
Park, Bumjun; Yu, Songhyun; Jeong, Jechang

Awards (TBD)



Best Paper Awards
Challenge Winners
Challenge Awards