Supervisors: Dr. Christine Tanner, Prof. Ender Konukoglu
Recent advances in automatic image segmentation driven by convolutional neural networks (CNNs) promise to increase the efficiency of morphological analysis and treatment planning tasks. The integration of these methods has, however, been hampered by the limitation that the algorithms only perform well on the data they were trained on. In this work we present a CNN-based segmentation editing algorithm (interCNN), trained on top of an automatic segmentation method (autoCNN), that allows the user to edit autoCNN segmentations directly and thus overcome this limitation. We propose an iterative interaction training approach that simulates user interaction during the training of the interCNN. User input is simulated with a robot-user, and the optimal number of user interactions is determined empirically. Furthermore, we investigate incorporating the interaction history accumulated while editing a specific image into the architecture of the interCNN, using convolutional LSTM layers to introduce a running memory. We find that ten user-interaction iterations are empirically optimal for our algorithm, and that integrating a convolutional LSTM into the network improves segmentation editing performance over a scribble-based memory. Furthermore, we conduct a human study demonstrating that our algorithm can exploit human input at test time and improves segmentation performance over current interactive segmentation benchmarks. Lastly, we show that our algorithm is not limited to binary segmentation but can be applied without alteration to multi-class problems, and we demonstrate its generality by evaluating its performance on a prostate as well as a brain tumor dataset.
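The iterative editing loop described above (autoCNN prediction, robot-user scribbles, interCNN refinement, repeated for a fixed number of iterations) can be sketched as follows. This is a minimal illustrative sketch, not the thesis implementation: `robot_user` and `inter_step` are hypothetical stand-ins (here a simple pixel-wise corrector replaces the actual interCNN forward pass), and the convolutional-LSTM memory is omitted.

```python
import numpy as np

def robot_user(pred, gt):
    """Simulated user: scribble +1 on missed foreground, -1 on false
    positives (a simplified stand-in for the robot-user in the text)."""
    scribbles = np.zeros_like(gt, dtype=np.int8)
    scribbles[(gt == 1) & (pred == 0)] = 1   # positive scribble
    scribbles[(gt == 0) & (pred == 1)] = -1  # negative scribble
    return scribbles

def inter_step(pred, scribbles):
    """Toy stand-in for the interCNN: update the mask from scribbles.
    The real interCNN would take image, mask, and scribbles as input."""
    new_pred = pred.copy()
    new_pred[scribbles == 1] = 1
    new_pred[scribbles == -1] = 0
    return new_pred

def iterative_editing(auto_pred, gt, n_iters=10):
    """Refine the autoCNN output for up to n_iters interaction rounds
    (ten was the empirically optimal number reported in the text)."""
    pred = auto_pred.copy()
    for _ in range(n_iters):
        scribbles = robot_user(pred, gt)
        if not scribbles.any():  # no remaining errors, stop early
            break
        pred = inter_step(pred, scribbles)
    return pred
```

During training, each round's scribbles and refined mask would be fed back into the interCNN, so the network learns to correct its own previous outputs rather than only the autoCNN's.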