Real images from the web

Below you will find results for real images taken from the web. The left part of this webpage lets you briefly browse the results, showing from left to right the input image and our predicted illumination (ground-truth illumination is not available in this case); you can scroll down to see all examples. Clicking an example displays its detailed results on the right part of the webpage: the input image, the inputs to our method (reflectance maps and background), and the different variants of our method. Note that for single-material input images the Singlet 1 (+ Background) result shown on the right is copied to Tuple (+ Background). The results are not merely plausible: they can be used to re-render an object in the scene while editing either its material or its shape (see Figure 10 in the paper). The re-rendered images nevertheless look convincing, indicating that our method can be applied to images beyond our dataset.

The geometry for these images was estimated either by fitting a primitive shape (e.g. sphere, cylinder, rectangle) or by manually aligning a 3D model (e.g. the Bugatti Veyron in the last 2 examples). The background segmentation was also performed manually in each case.
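To illustrate the primitive-fitting step, here is a minimal sketch (a hypothetical stand-in, not the authors' actual tool) of recovering a sphere's image-plane projection: a circle is fit by algebraic least squares (the Kåsa method) to boundary points sampled from a manually segmented object mask.

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.

    points: (N, 2) array of (x, y) boundary samples.
    Returns (cx, cy, r).
    """
    x, y = points[:, 0], points[:, 1]
    # Linear system: 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    r = np.sqrt(sol[2] + cx ** 2 + cy ** 2)
    return cx, cy, r

# Synthetic boundary of a circular mask (stand-in for a segmented sphere).
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
pts = np.column_stack([40.0 + 25.0 * np.cos(theta),
                       60.0 + 25.0 * np.sin(theta)])
cx, cy, r = fit_circle(pts)
```

The fitted circle (center and radius) then fixes the sphere primitive's placement in the image; in practice one would sample the boundary from the manual segmentation rather than synthesize it.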
