The recent success of deep learning has shown that a deep architecture in conjunction with abundant quantities of labeled training data is the most promising approach for most vision tasks. However, annotating a large-scale dataset for training such deep neural networks is costly and time-consuming, even with the availability of scalable crowdsourcing platforms like Amazon’s Mechanical Turk. As a result, there are relatively few public large-scale datasets (e.g., ImageNet and Places2) from which it is possible to learn generic visual representations from scratch.
Thus, it is unsurprising that there is continued interest in developing novel deep learning systems that train on low-cost data for image and video recognition. Among different solutions, crawling data from the Internet and using the web as a source of supervision for learning deep representations has shown promising performance for a variety of important computer vision applications. However, the datasets and tasks differ in various ways, which makes it difficult to fairly evaluate different solutions and to identify the key issues when learning from web data.
This workshop aims to promote the advance of learning state-of-the-art visual models directly from the web, and to bring together computer vision researchers interested in this field. To this end, we release a large-scale web image dataset named WebVision for visual understanding by learning from web data. The dataset consists of 2.4 million web images crawled from the Internet for 1,000 visual concepts. A validation set of 50K human-annotated images is also provided for the convenience of algorithm development.
Based on this dataset, we also organize the first Challenge on Visual Understanding by Learning from Web Data. The final results will be announced at the workshop, and the winners will be invited to present their approaches there. An invited paper track will also be included in the workshop.
News 05.12.2017: The workshop website is now online!
Researchers are invited to participate in the WebVision challenge, which aims to advance the area of learning useful knowledge and effective representations from noisy web images and meta information. The learned knowledge and representations could then be used to solve vision problems. In particular, we organize two tasks to evaluate them: (1) the WebVision Image Classification Task, and (2) the Pascal VOC Transfer Learning Task. The second task builds upon the first. Researchers can participate in only the first task, or in both tasks.
The WebVision dataset is composed of training, validation, and test sets. The training set is downloaded from the web without any human annotation. The validation and test sets are human-annotated; the labels of the validation data are provided, while those of the test data are withheld. To imitate the setting of learning from web data, participants are required to learn their models solely on the training set and submit classification results on the test set. Accordingly, the validation data and labels may only be used to tune hyper-parameters and cannot be used to learn model weights.
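The protocol above can be sketched as follows. This is a minimal illustration with synthetic stand-in data and a simple linear classifier (not the actual WebVision data or any particular participant's model): the model is fit only on the noisy training split, the labeled validation split is used solely to select hyper-parameters, and predictions on the unlabeled test split are what would be submitted.

```python
# Sketch of the evaluation protocol with synthetic stand-in data.
# Real entries would use image features and a deep model; the splits,
# shapes, and classifier here are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for the three splits (features would come from images in practice).
X_train, y_train = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)
X_val, y_val = rng.normal(size=(50, 16)), rng.integers(0, 2, 50)
X_test = rng.normal(size=(30, 16))          # test labels are withheld

best_model, best_acc = None, -1.0
for C in (0.01, 0.1, 1.0):                  # hyper-parameter grid
    model = LogisticRegression(C=C, max_iter=200).fit(X_train, y_train)
    acc = model.score(X_val, y_val)         # validation used for tuning ONLY
    if acc > best_acc:
        best_model, best_acc = model, acc

test_predictions = best_model.predict(X_test)  # results to be submitted
```

Note that the validation labels influence only the choice of `C`; the model weights themselves are learned exclusively from the training split, as the rules require.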
This task is designed to verify, on a new task, the knowledge and representation learned from the WebVision training set. Hence, participants are required to submit results to the first task and to transfer only the models learned in that task. We choose the image classification task of Pascal VOC to test transfer learning performance. Participants may exploit different ways of transferring the knowledge learned in the first task to image classification on Pascal VOC, for example, treating the learned models as feature extractors and training an SVM classifier on the extracted features. The evaluation protocol strictly follows that of the previous Pascal VOC challenges.
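The feature-extractor strategy mentioned above can be sketched as follows. The `extract_features` function here is a hypothetical stand-in for forwarding images through a frozen model learned in the first task, and the toy data mimics Pascal VOC's multi-label setup (20 object classes, handled with one-vs-rest linear SVMs); none of these stand-ins are part of the official challenge code.

```python
# Hedged sketch of transfer via a frozen feature extractor plus linear SVMs.
# "extract_features" stands in for a real frozen WebVision-trained network.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
W = rng.normal(size=(3 * 32 * 32, 64))  # toy stand-in for frozen weights

def extract_features(images):
    """Stand-in for a forward pass through the frozen model."""
    return np.tanh(images.reshape(len(images), -1) @ W)

# Toy VOC-style data: images with multi-label targets over 20 classes.
train_imgs = rng.normal(size=(100, 3, 32, 32))
train_labels = rng.integers(0, 2, size=(100, 20))
test_imgs = rng.normal(size=(10, 3, 32, 32))

# One binary linear SVM per class, trained on frozen features only.
clf = OneVsRestClassifier(LinearSVC(C=1.0, max_iter=5000))
clf.fit(extract_features(train_imgs), train_labels)

# Per-class decision scores, as used for VOC-style average precision.
scores = clf.decision_function(extract_features(test_imgs))
```

Keeping the extractor frozen and training only the SVMs ensures that no Pascal VOC labels leak back into the representation, which is the point of the transfer evaluation.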
The WebVision dataset provides the web images and their corresponding meta information (e.g., query, title, and comments); more details can be found on the dataset page. Learning from web data poses several challenges.
Participants are encouraged to design new methods to address these challenges.
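As a small illustration of how the meta information might be exploited, the sketch below flags images whose title never mentions the query term as candidates for label noise. The record layout, field names, and query string are hypothetical; the actual WebVision meta files may use a different schema.

```python
# Hypothetical use of per-image meta information for noise filtering.
# The JSON schema below is an assumption, not the official WebVision format.
import json

record = json.loads(
    '{"id": "q0001_000123", "query": "tench", '
    '"title": "a tench caught in the lake", "comments": ["nice fish"]}'
)

# Naive heuristic: if the title does not contain the query term, treat the
# image as a candidate noisy example and down-weight or discard it later.
suspicious = record["query"].lower() not in record["title"].lower()
```

Heuristics like this are only a starting point; more robust approaches would combine several meta fields or learn the noise model jointly with the classifier.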
All deadlines are at 23:59 Pacific Standard Time.