Show simple item record

dc.contributor.author: Sünnetçi, Kubilay Muhammed
dc.contributor.author: Kaba, Esat
dc.contributor.author: Çeliker, Fatma Beyazal
dc.contributor.author: Alkan, Ahmet
dc.date.accessioned: 2022-11-16T07:18:02Z
dc.date.available: 2022-11-16T07:18:02Z
dc.date.issued: 2022 [en_US]
dc.identifier.citation: Sunnetci, K.M., Kaba, E., Celiker, F.B. & Alkan, A. (2022). Comparative parotid gland segmentation by using ResNet-18 and MobileNetV2 based DeepLab v3+ architectures from magnetic resonance images. Concurrency and Computation: Practice and Experience. https://doi.org/10.1002/cpe.7405 [en_US]
dc.identifier.issn: 1532-0626
dc.identifier.issn: 1532-0634
dc.identifier.uri: https://doi.org/10.1002/cpe.7405
dc.identifier.uri: https://hdl.handle.net/11436/7046
dc.description.abstract: Nowadays, artificial intelligence-based medicine plays an important role in uncovering correlations that are not comprehensible to humans. In addition, the segmentation of organs at risk is a tedious and time-consuming procedure, and segmentation of these organs and tissues is widely used in early diagnosis and treatment planning. In this study, we trained semantic segmentation networks to segment healthy parotid glands using deep learning. The dataset used in the study was obtained from Recep Tayyip Erdogan University Training and Research Hospital and contains 72 T2-weighted magnetic resonance (MR) images. These images were manually segmented by experts, masks were derived from these annotations, and all images were cropped. Afterward, the cropped images and masks were rotated by 45, 120, and 210 degrees, quadrupling the number of images. Using these datasets, we trained ResNet-18- and MobileNetV2-based DeepLab v3+ both without and with augmentation. For all architectures, the training and testing set sizes were set to 80% and 20%, respectively. We also designed two graphical user interface (GUI) applications so that users can easily segment parotid glands with any of these deep learning-based semantic segmentation networks. From the results, the mean-weighted Dice values of MobileNetV2-based DeepLab v3+ without augmentation and ResNet-18-based DeepLab v3+ with augmentation were 0.90845-0.93931 and 0.93237-0.96960, respectively. The sensitivity (%), specificity (%), and F1 score (%) values of these models were 83.21, 96.65, 85.04 and 89.81, 97.84, 87.80, respectively. As a result, the designed models were found to be clinically successful, and the user-friendly GUI applications of the proposed systems can be used by clinicians. This study is competitive in that it uses MR images, automatically segments both parotid glands, yields results that are meaningful with respect to the literature, and provides a software application. [en_US]
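The abstract describes rotation-based augmentation (45, 120, and 210 degrees) and evaluation of the segmentations with Dice, sensitivity, specificity, and F1. The Python sketch below is not the authors' implementation and is not part of the record; it only illustrates, under those stated assumptions, how each MR image/mask pair could be quadrupled by rotation and how a predicted binary parotid mask could be scored against a manual ground-truth mask. All function and variable names are hypothetical.

    # Hypothetical illustration only -- not the code used in the cited study.
    # (a) 45/120/210-degree rotation augmentation, as mentioned in the abstract.
    # (b) Dice, sensitivity, specificity, and F1 for binary segmentation masks.
    import numpy as np
    from scipy.ndimage import rotate

    def augment(image: np.ndarray, mask: np.ndarray):
        """Return the original image/mask pair plus three rotated copies (4x the data)."""
        pairs = [(image, mask)]
        for angle in (45, 120, 210):
            pairs.append((
                rotate(image, angle, reshape=False, order=1),  # bilinear for the MR image
                rotate(mask, angle, reshape=False, order=0),   # nearest-neighbor keeps the mask binary
            ))
        return pairs

    def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
        """Score a predicted binary mask against a manually segmented ground-truth mask.

        Assumes non-degenerate masks (some foreground and some background present).
        """
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.count_nonzero(pred & truth)
        tn = np.count_nonzero(~pred & ~truth)
        fp = np.count_nonzero(pred & ~truth)
        fn = np.count_nonzero(~pred & truth)

        dice = 2 * tp / (2 * tp + fp + fn)
        sensitivity = tp / (tp + fn)          # a.k.a. recall
        specificity = tn / (tn + fp)
        precision = tp / (tp + fp)
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        return {"dice": dice, "sensitivity": sensitivity,
                "specificity": specificity, "f1": f1}

Note that for a single binary mask the foreground F1 score coincides with the Dice coefficient; differences between the two, as in the values reported above, can arise only from how the metrics are aggregated over a test set.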
dc.language.iso: eng [en_US]
dc.publisher: Wiley [en_US]
dc.rights: info:eu-repo/semantics/closedAccess [en_US]
dc.subject: Augmentation [en_US]
dc.subject: DeepLab v3+ [en_US]
dc.subject: MobileNetV2 [en_US]
dc.subject: Parotid gland [en_US]
dc.subject: ResNet-18 [en_US]
dc.subject: Semantic segmentation [en_US]
dc.title: Comparative parotid gland segmentation by using ResNet-18 and MobileNetV2 based DeepLab v3+ architectures from magnetic resonance images [en_US]
dc.type: article [en_US]
dc.contributor.department: RTEÜ, Faculty of Medicine, Department of Internal Medical Sciences [en_US]
dc.contributor.institutionauthor: Kaba, Esat
dc.contributor.institutionauthor: Çeliker, Fatma Beyazal
dc.identifier.doi: 10.1002/cpe.7405 [en_US]
dc.relation.journal: Concurrency and Computation: Practice and Experience [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]

