Publications

Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding
Christos Sakaridis, Dengxin Dai, Simon Hecker, and Luc Van Gool
European Conference on Computer Vision (ECCV), 2018
[PDF]   [BibTeX]   [arXiv]
The final authenticated version is available online here.

Curriculum Model Adaptation with Synthetic and Real Data for Semantic Foggy Scene Understanding
Dengxin Dai, Christos Sakaridis, Simon Hecker, and Luc Van Gool
International Journal of Computer Vision (IJCV), 2020
[PDF]   [BibTeX]   [arXiv]  
The PDF above is a post-peer-review, pre-copyedit version of the article published in IJCV. The final authenticated version is available online here.

Foggy Datasets

We present two distinct datasets for semantic understanding of foggy scenes: Foggy Cityscapes-DBF and Foggy Zurich.

Foggy Cityscapes-DBF is derived from the Cityscapes dataset and constitutes a collection of synthetic foggy images that automatically inherit the semantic annotations of their real, clear counterparts. The fog simulation pipeline first computes a denoised and complete depth map and then refines the corresponding transmittance map with our novel Dual-reference cross-Bilateral Filter (DBF), which uses both color and semantics as references and from which the dataset borrows its name. Foggy Cityscapes-DBF is the successor of the Foggy Cityscapes dataset presented in our previous work: its synthetic foggy images adhere better to the semantic boundaries in the scene than those of its predecessor.
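Once a transmittance map is available, fog synthesis reduces to the standard optical model I(x) = R(x) t(x) + L (1 - t(x)), with R the clear scene radiance, t the transmittance and L the atmospheric light. A minimal sketch of this composition step (hypothetical function and variable names, assuming NumPy arrays with values in [0, 1]; the attenuation coefficient below is a toy value, not one used in our experiments):

```python
import numpy as np

def synthesize_fog(clear_rgb, transmittance, atmospheric_light=1.0):
    """Compose a foggy image via the standard optical model:
    I(x) = R(x) * t(x) + L * (1 - t(x))."""
    # Broadcast a single-channel transmittance map over the RGB channels.
    t = transmittance[..., np.newaxis] if transmittance.ndim == 2 else transmittance
    return clear_rgb * t + atmospheric_light * (1.0 - t)

# Toy example: homogeneous fog gives t = exp(-beta * depth).
depth = np.full((4, 4), 100.0)   # scene depth in metres (toy values)
beta = 0.01                      # attenuation coefficient in 1/m (toy value)
t = np.exp(-beta * depth)
# A black scene turns into pure airlight scaled by (1 - t).
foggy = synthesize_fog(np.zeros((4, 4, 3)), t)
```

In the actual pipeline the transmittance map t is the DBF-refined one, rather than a direct exponential of raw depth.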

[Figure: example images from Cityscapes, Foggy Cityscapes, and Foggy Cityscapes-DBF]

Due to licensing issues, the main modality of the dataset, the foggy images, is only available for download at the Cityscapes website. The same holds for the semantic annotations and the other modalities which Foggy Cityscapes-DBF shares with Cityscapes. In contrast, the auxiliary modality of transmittance maps is available on this website in the following package:


Transmittance maps (8-bit) for foggy scenes in train and val sets
10425 images (3475 images x 3 fog densities)
MD5 checksum
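After downloading, the package can be verified against the provided MD5 checksum. A minimal sketch (the filename and checksum value below are placeholders, not the real ones):

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file, streaming it in chunks
    so that large archives do not need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical filename; compare against the published checksum):
# assert md5sum("transmittance_maps.zip") == "<value from the MD5 checksum file>"
```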

Foggy Zurich is a collection of 3808 real-world foggy road scenes in the city of Zurich and its suburbs. We provide semantic segmentation annotations for a diverse test subset Foggy Zurich-test with 40 scenes containing dense fog, which serves as a first benchmark for the challenging domain of dense foggy weather. These scenes are annotated with fine pixel-level semantic annotations for the 19 evaluation classes of Cityscapes.

MD5 checksum

Pretrained Models

Semantic Segmentation

Below we provide the main trained models for semantic segmentation in dense fog, corresponding to the experimental results of our ECCV 2018 paper. All models follow the RefineNet architecture.

  1. RefineNet ResNet-101 Cityscapes model trained with our full method CMAda-7 with Stereo-DBF (410 MB)
    This is the best-performing model in our ECCV 2018 experiments (see Tables 3 and 4 therein) and it has been adapted to fog in two adaptation steps, using both synthetic foggy data from our Foggy Cityscapes-DBF dataset and real foggy data from our Foggy Zurich dataset.
    It achieves a mean IoU score of 41.4% on the full Foggy Zurich-test set, 37.9% on the initial testv1 version of Foggy Zurich-test presented in our ECCV 2018 paper, 48.6% on Foggy Driving and 34.3% on Foggy Driving-dense, an improvement of 6.8%, 5.9%, 4.3% and 3.9% respectively over the baseline daytime-trained RefineNet model.
    Please note that all these results have been obtained without multi-scale testing; the performance of both our full CMAda-7 model and the baseline RefineNet model is expected to improve significantly with it.
  2. RefineNet ResNet-101 Cityscapes model trained with our CMAda-4 variant with Stereo-DBF (410 MB)
    This model has been adapted to fog in one adaptation step, using only synthetic foggy data from our Foggy Cityscapes-DBF dataset. See Tables 3 and 4 of our ECCV 2018 paper for details.
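The mean IoU scores quoted above follow the standard Cityscapes protocol over the 19 evaluation classes. A minimal sketch of that metric computed from a confusion matrix (hypothetical helper names, assuming integer NumPy label maps with 255 as the ignore index):

```python
import numpy as np

NUM_CLASSES = 19   # Cityscapes evaluation classes
IGNORE = 255       # ignore index used in Cityscapes-style annotations

def confusion_matrix(pred, gt, num_classes=NUM_CLASSES):
    """Accumulate a num_classes x num_classes confusion matrix
    (rows: ground truth, columns: prediction), skipping ignored pixels."""
    mask = gt != IGNORE
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)

def mean_iou(conf):
    """Per-class IoU = TP / (TP + FP + FN); mIoU averages over classes
    that occur in the ground truth or the prediction."""
    tp = np.diag(conf)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp
    valid = denom > 0
    return (tp[valid] / denom[valid]).mean()
```

In practice the confusion matrix is accumulated over all test images before mIoU is computed, rather than averaging per-image scores.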

Code

The source code for our fog simulation on real scenes using the dual-reference cross-bilateral filter is available on GitHub.

Citation

Please cite our publications above if you use our datasets, code or models in your work.

Moreover, if you use the Foggy Cityscapes-DBF dataset, please also cite the Cityscapes publication. If you use our fog simulation code, please additionally cite our preceding IJCV 2018 publication, which introduces the basic fog simulation pipeline, as well as the Cityscapes publication and the SLIC superpixels publication.