ABSTRACT
Objective: This study aimed to evaluate and validate the performance of deep convolutional neural networks (DCNNs) in discriminating different histologic types of ovarian tumors in ultrasound (US) images. Material and methods: Our retrospective study included 1142 US images from 328 patients collected from January 2019 to June 2021. Two classification tasks were defined. Task 1 was to classify benign tumors and high-grade serous carcinoma in the original ovarian tumor US images, with the benign tumors divided into six classes: mature cystic teratoma, endometriotic cyst, serous cystadenoma, granulosa-theca cell tumor, mucinous cystadenoma, and simple cyst. Task 2 used the segmented (labeled) US images. DCNNs were applied to classify the different types of ovarian tumors in detail. We used transfer learning on six pre-trained DCNNs: VGG16, GoogLeNet, ResNet34, ResNeXt50, DenseNet121, and DenseNet201. Several metrics were adopted to assess model performance: accuracy, sensitivity, specificity, F1-score, and the area under the receiver operating characteristic curve (AUC). Results: The DCNNs performed better on labeled US images than on original US images. The best predictive performance came from the ResNeXt50 model, which had an overall accuracy of 0.952 in directly classifying the seven histologic types of ovarian tumors. It achieved a sensitivity of 90% and a specificity of 99.2% for high-grade serous carcinoma, and a sensitivity of over 90% and a specificity of over 95% in most benign pathological categories. Conclusion: DCNNs are a promising technique for classifying different histologic types of ovarian tumors in US images and can provide valuable computer-aided information.
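As a rough illustration of the transfer-learning approach described above, the sketch below fine-tunes an ImageNet-pretrained ResNeXt50 from torchvision as a seven-class ovarian-tumor classifier. Only the class count is taken from the abstract; the data layout, preprocessing, and hyperparameters are hypothetical assumptions, not the authors' actual configuration.

```python
# Minimal transfer-learning sketch (assumed setup, not the authors' code).
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

NUM_CLASSES = 7  # high-grade serous carcinoma + six benign categories (from the abstract)

# Load an ImageNet-pretrained ResNeXt50 and replace its classification head.
model = models.resnext50_32x4d(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Standard ImageNet-style preprocessing applied to the US images (assumption).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: one sub-folder per histologic type.
train_set = datasets.ImageFolder("us_images/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):  # illustrative number of epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The reported metrics (accuracy, sensitivity, specificity, F1-score, AUC) can then be computed from the fine-tuned model's predictions on a held-out test set, for example with scikit-learn's standard classification metrics.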
ABSTRACT
Background: The application of artificial intelligence (AI)-powered algorithms to clinical decision-making has become widespread among clinicians and medical scientists. In this study, we applied AI to improve the precision of hysteroscopic myomectomy procedures. Methods: Our multidisciplinary team developed a suite of deep learning algorithms for myoma segmentation. We assembled a cohort of 56 patients diagnosed with submucosal myomas, each of whom underwent magnetic resonance imaging (MRI) examination. Half of the participants were then randomly assigned to undergo AI-assisted procedures. The AI system accurately delineated the spatial localization of the submucosal myomas. Results: Our study showed a statistically significant reduction in both operative duration (41.32 ± 17.83 minutes vs. 32.11 ± 11.86 minutes, p=0.03) and intraoperative blood loss (10.00 (6.25-15.00) ml vs. 10.00 (5.00-15.00) ml, p=0.04) in AI-assisted procedures. Conclusion: To our knowledge, this work marks the first deployment of an AI-powered diagnostic model in hysteroscopic surgery, and our findings support the potential of AI-driven interventions in gynecological surgery.
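As an illustration only, the sketch below sets up a binary myoma-segmentation model for MRI slices trained with a soft Dice loss. The FCN-ResNet50 backbone, loss function, input sizes, and training step are hypothetical assumptions; the abstract does not specify the authors' segmentation architecture.

```python
# Hypothetical binary-segmentation sketch (assumed architecture and loss).
import torch
from torchvision.models.segmentation import fcn_resnet50

# One output channel: myoma vs. background.
model = fcn_resnet50(weights=None, num_classes=1)

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for a binary mask."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch for illustration: (N, 3, H, W) MRI slices and (N, 1, H, W) expert masks.
images = torch.randn(2, 3, 256, 256)
masks = torch.randint(0, 2, (2, 1, 256, 256)).float()

model.train()
logits = model(images)["out"]  # torchvision FCN returns a dict with key "out"
loss = dice_loss(logits, masks)
loss.backward()
optimizer.step()
```

In a setup of this kind, the predicted mask for each slice would be thresholded and overlaid on the MRI to indicate the spatial location of the submucosal myoma before surgery.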