Applied Sciences, Vol. 15, Pages 3985: Enhanced Liver and Tumor Segmentation Using a Self-Supervised Swin-Transformer-Based Framework with Multitask Learning and Attention Mechanisms
Applied Sciences doi: 10.3390/app15073985
Authors:
Zhebin Chen
Meng Dou
Xu Luo
Yu Yao
Automatic liver and tumor segmentation in contrast-enhanced magnetic resonance imaging (CE-MRI) is of great value in clinical practice, as it can reduce surgeons’ workload and increase the probability of surgical success. However, it remains a challenging task due to the complex background, irregular lesion shapes, and low contrast between the organ and the lesion. Moreover, the size, number, shape, and spatial location of liver tumors vary from patient to patient, and existing automatic segmentation models fail to achieve satisfactory results. In this work, drawing inspiration from self-attention mechanisms and multitask learning, we propose a segmentation network that uses the Swin Transformer as its backbone and incorporates self-supervised learning strategies to enhance performance. Accurately delineating the boundaries and spatial location of liver tumors is the most difficult part of the task. To address this, we propose a multitask learning strategy that combines segmentation with signed distance map (SDM) regression and incorporates attention gates into the skip connections, so that the network performs liver tumor segmentation and SDM regression simultaneously. The SDM regression branch imposes additional shape and global constraints on the network, which effectively improves the detection and segmentation of small objects. We performed comprehensive quantitative and qualitative evaluations of our approach; the proposed model outperforms existing state-of-the-art models on the Dice similarity coefficient (DSC), 95% Hausdorff distance (95HD), and average surface distance (ASD) metrics. This research provides a valuable solution that lessens the burden on surgeons and improves the chances of successful surgery.
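To illustrate the kind of regression target the SDM branch learns, here is a minimal sketch of computing a signed distance map from a binary segmentation mask. The sign convention (negative inside the object, positive outside) and the brute-force Euclidean implementation are assumptions for illustration, not the authors' code; practical pipelines would use a fast distance transform instead.

```python
import numpy as np

def signed_distance_map(mask):
    """Signed distance map of a binary mask: for each pixel, the Euclidean
    distance to the nearest pixel of the opposite class, negated inside the
    object. Assumes the mask contains both foreground and background.
    Brute force, intended only for small illustrative arrays."""
    mask = mask.astype(bool)
    inside = np.argwhere(mask)       # foreground pixel coordinates
    outside = np.argwhere(~mask)     # background pixel coordinates
    sdm = np.zeros(mask.shape, dtype=float)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                # inside: negative distance to nearest background pixel
                d = np.sqrt(((outside - (i, j)) ** 2).sum(axis=1)).min()
                sdm[i, j] = -d
            else:
                # outside: positive distance to nearest foreground pixel
                d = np.sqrt(((inside - (i, j)) ** 2).sum(axis=1)).min()
                sdm[i, j] = d
    return sdm

# Example: a 3x3 square object in a 5x5 image
m = np.zeros((5, 5))
m[1:4, 1:4] = 1
s = signed_distance_map(m)
# s[2, 2] == -2.0 (object center, two pixels from background)
# s[2, 4] == 1.0  (background pixel adjacent to the object)
```

Regressing such a map alongside the segmentation mask gives the network a dense, global shape signal: every pixel's target encodes how far it sits from the object boundary, not just which class it belongs to.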
Source: www.mdpi.com