Supervisors: Dr. Danda Pani Paudel, Dr. Thomas Probst, Prof. Luc Van Gool
With the success of artificial deep neural networks on single tasks, networks that can perform more than one task, as the human brain does, are highly desirable. Such a desire immediately raises multiple issues that must be addressed beforehand. Two major issues in this regard are network architecture design and training strategy. This paper addresses these issues for neural networks that perform multiple tasks by switching between them -- performing only one task at a time. The proposed Task Switching Neural Networks (TSNNs) have a constant number of parameters and the same input/output data types, irrespective of the number and type of tasks to be performed. This is achieved by using a task-conditional single-input-single-output network architecture. Stable training of TSNNs, which is otherwise impaired by conflicting tasks and stochastic switching behavior, is maintained by the proposed gradient regularization technique. In this work, we divide tasks into two categories: regression and classification. The accuracy of the classification tasks is boosted by introducing a cross-entropy-like formulation of the network predictions, which are otherwise used directly for regression tasks. Experiments on five different real-world tasks from a benchmark dataset not only demonstrate the utility of the proposed TSNN framework, but also yield very encouraging results on all five tasks.
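The task-conditional single-input-single-output idea can be illustrated with a minimal sketch. This is a hypothetical NumPy toy, not the authors' implementation: one shared set of weights serves every task, the active task is selected by concatenating a task embedding to the input, and the parameter count and output shape stay fixed no matter which task is switched on.

```python
import numpy as np

rng = np.random.default_rng(0)

class TaskSwitchingNetSketch:
    """Toy task-conditional single-input-single-output network (illustrative only).

    A single shared parameter set handles all tasks; the task is selected
    by a task embedding concatenated to the input, so the number of
    parameters and the input/output shapes are constant across tasks.
    """

    def __init__(self, in_dim, task_dim, hidden_dim, out_dim):
        # One shared weight set, regardless of how many tasks exist.
        self.W1 = rng.normal(0.0, 0.1, size=(in_dim + task_dim, hidden_dim))
        self.W2 = rng.normal(0.0, 0.1, size=(hidden_dim, out_dim))

    def forward(self, x, task_embedding):
        # Condition the shared network on the task by concatenation.
        h = np.tanh(np.concatenate([x, task_embedding]) @ self.W1)
        return h @ self.W2

# Two tasks encoded as one-hot embeddings (hypothetical encoding choice).
task_a, task_b = np.eye(4)[0], np.eye(4)[1]

net = TaskSwitchingNetSketch(in_dim=8, task_dim=4, hidden_dim=16, out_dim=3)
x = rng.normal(size=8)

# Same input, same parameters, same output shape -- only the task switches.
y_a = net.forward(x, task_a)
y_b = net.forward(x, task_b)
```

Switching the task embedding changes the prediction while the architecture and parameter count remain untouched, which is the property the abstract attributes to TSNNs.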