Supervisors: Xiaoran Chen, Prof. Ender Konukoglu
Semantic segmentation of healthy tissues and lesions is a critical task in medical image analysis. Software tools have been built to automatically segment numerous tissue types in brain MRI images. However, existing tools are mostly developed for healthy brains and do not produce reliable tissue segmentations on brains with lesions. This prevents radiologists from assessing the effects of lesions on the surrounding non-lesion tissue. Using datasets annotated for healthy tissues and for lesions respectively, we propose a method that unifies the two label spaces and jointly segments both healthy tissues and lesions in brain MRI images with lesions. In this method, a U-net architecture is trained to segment images from the two datasets, one with healthy-tissue annotations and the other with lesion annotations. The network is additionally trained to reconstruct the input images, which provides extra supervision for images with partial labels, i.e. lesion images that carry only lesion annotations. Experimental results show that the proposed method outperforms related previous work and suggest that segmentation in the two label spaces may benefit from each other.
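The training objective sketched in the abstract combines segmentation supervision in whichever label space is annotated with an image-reconstruction term for partially labelled images. A minimal numpy sketch of such a combined loss is shown below; the class layout (the lesion as the last channel of the unified label space), the uniform weighting of the terms, and all function names are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_loss(seg_logits, recon, image, tissue_labels=None, lesion_mask=None):
    """Combined objective: per-pixel cross-entropy on whichever label
    space is annotated, plus a reconstruction term (MSE) that also
    supervises partially labelled (e.g. lesion-only) images.

    seg_logits:    (H, W, C) scores over the unified label space
    recon:         (H, W) reconstructed image
    image:         (H, W) input image
    tissue_labels: (H, W) integer healthy-tissue label map, or None
    lesion_mask:   (H, W) binary lesion annotation, or None
    """
    # Reconstruction loss is always available, even without labels.
    loss = np.mean((recon - image) ** 2)
    probs = softmax(seg_logits)

    if tissue_labels is not None:  # healthy-tissue supervision
        p = np.take_along_axis(probs, tissue_labels[..., None], axis=-1)[..., 0]
        loss += -np.mean(np.log(p + 1e-8))

    if lesion_mask is not None:  # lesion supervision (lesion = last class, assumed)
        p_lesion = probs[..., -1]
        ce = -(lesion_mask * np.log(p_lesion + 1e-8)
               + (1 - lesion_mask) * np.log(1 - p_lesion + 1e-8))
        loss += np.mean(ce)

    return loss
```

Each training batch would then contribute the segmentation term matching its dataset's annotations, while the reconstruction term is shared across both datasets.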