Supervisors: Prof. Ender Konukoglu, Dr. Christine Tanner and Gustav Bredell
Medical image segmentation is frequently performed with 2D networks that segment volumes slice by slice. To exploit the spatial information within a volumetric image, we propose a framework that segments multi-class 3D medical images both automatically and semi-automatically. We implement a robot-user that marks discrepancies between the automatic segmentation and the ground truth and converts them into scribbles. The semi-automatic model learns from the automatic segmentation network and raises prediction accuracy by roughly 16 percentage points, from 79.14% to 94.97%. The resulting method reduces the amount of user interaction required by 48% compared to competing frameworks.
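The robot-user step described above can be sketched minimally: compare the automatic prediction against the ground truth, collect the misclassified voxels, and sample a few of them as scribble coordinates. This is a hypothetical illustration, not the thesis implementation; the function name, sampling strategy, and single-class labeling are assumptions made for clarity.

```python
import numpy as np

def robot_user_scribbles(pred, gt, n_points=5, seed=0):
    """Hypothetical robot-user sketch: find voxels where the automatic
    prediction disagrees with the ground truth, then sample up to
    n_points of them as scribble coordinates.

    pred, gt : integer label volumes of identical shape
    returns  : (points, label) where points are voxel coordinates of
               the scribble and label is the ground-truth class the
               scribble corrects toward (taken from the first point,
               a simplification for this sketch).
    """
    rng = np.random.default_rng(seed)
    error = pred != gt                    # discrepancy mask
    coords = np.argwhere(error)           # coordinates of misclassified voxels
    if len(coords) == 0:                  # prediction already matches ground truth
        return np.empty((0, gt.ndim), dtype=int), None
    idx = rng.choice(len(coords), size=min(n_points, len(coords)), replace=False)
    points = coords[idx]
    label = int(gt[tuple(points[0])])     # class to scribble for
    return points, label
```

In the full framework, such scribbles would be fed back to the semi-automatic network as additional guidance; a real robot-user would typically scribble inside the largest connected error region rather than sampling voxels uniformly.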