2024 (English). In: IET Image Processing, ISSN 1751-9659, E-ISSN 1751-9667, Vol. 18, no. 14, p. 4928-4943. Article in journal (Refereed). Published.
Abstract [en]
Precise localization and volumetric segmentation of glioblastoma before and after surgery are crucial for various clinical purposes, including post-surgery treatment planning, monitoring tumour recurrence, and creating radiotherapy maps. Manual delineation is time-consuming and prone to errors, which has motivated the recent adoption of automated 3D quantification methods based on deep learning applied to MRI scans. However, automated segmentation often over-segments or under-segments tumour regions. An interactive deep-learning tool would empower radiologists to rectify these inaccuracies by adjusting the over-segmented and under-segmented voxels as needed. This paper proposes a network named Atten-SEVNETR, which combines vision transformers and convolutional neural networks (CNNs). This hybrid architecture learns a sequence representation of the input volume and focuses on global multi-scale information. An interactive graphical user interface is also developed in which the initial 3D segmentation of glioblastoma can be corrected interactively to remove falsely detected spurious tumour regions. Atten-SEVNETR is trained on the BraTS training dataset and tested on the BraTS validation dataset and on the Uppsala University post-operative glioblastoma dataset. The method outperformed state-of-the-art networks such as nnFormer, SwinUNet, and SwinUNETR, achieving a mean Dice score of 0.7302 and a mean 95th-percentile Hausdorff distance of 7.78 mm on the Uppsala University dataset.
Place, publisher, year, edition, pages
Institution of Engineering and Technology, 2024
National Category
Medical Imaging
Identifiers
urn:nbn:se:uu:diva-542828 (URN)
10.1049/ipr2.13218 (DOI)
001303364600001 ()
2-s2.0-85202937649 (Scopus ID)
Funder
Vinnova, 2020-03616
Available from: 2024-11-14. Created: 2024-11-14. Last updated: 2025-04-01. Bibliographically approved.