Multimodal learning system integrating electronic medical records and hysteroscopic images for reproductive outcome prediction and risk stratification of endometrial injury: a multicenter diagnostic study.
Li, Bohan; Chen, Hui; Lin, Xiaona; Duan, Hua.
Affiliation
  • Li B; Department of Minimally Invasive Gynecologic Center, Beijing Obstetrics and Gynecology Hospital, Capital Medical University, Beijing Maternal and Child Health Care Hospital.
  • Chen H; School of Biomedical Engineering.
  • Lin X; Beijing Advanced Innovation Center for Big Data-based Precision Medicine, Capital Medical University, Beijing.
  • Duan H; Assisted Reproduction Unit, Department of Obstetrics and Gynecology, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Key Laboratory of Reproductive Dysfunction Management of Zhejiang Province, Hangzhou, People's Republic of China.
Int J Surg ; 110(6): 3237-3248, 2024 Jun 01.
Article in En | MEDLINE | ID: mdl-38935827
ABSTRACT

OBJECTIVE:

To develop a multimodal learning application system that integrates electronic medical records (EMR) and hysteroscopic images for reproductive outcome prediction and risk stratification of patients with intrauterine adhesions (IUAs) resulting from endometrial injury.

MATERIALS AND METHODS:

EMR data and 5014 revisit hysteroscopic images from 753 patients after hysteroscopic adhesiolysis, drawn from the multicenter IUA database we established, were randomly allocated to training, validation, and test datasets. The respective datasets were used for model development, tuning, and testing of the multimodal learning application. MobileNetV3 was employed for image feature extraction, and XGBoost for ensemble learning on the EMR and image features. The performance of the application was compared against single-modal approaches (EMR or hysteroscopic images alone), the DeepSurv and ElasticNet models, and clinical scoring systems. The primary outcome was 1-year conception prediction accuracy, and the secondary outcome was the assisted reproductive technology (ART) benefit ratio after risk stratification.
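The abstract does not specify feature dimensions or fusion details, so the following is only a minimal NumPy sketch of the late-fusion idea: per-patient image embeddings (standing in for pooled MobileNetV3 features) are concatenated with encoded EMR variables, and a single classifier is fit on the joint vector. A plain logistic regression trained by gradient descent stands in for the XGBoost ensemble step; all dimensions and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: MobileNetV3 yields a per-image embedding; a patient's
# images would be pooled (e.g. mean) into one vector. Dimensions are
# illustrative, not taken from the paper.
n_patients, img_dim, emr_dim = 200, 16, 8
img_feats = rng.normal(size=(n_patients, img_dim))  # pooled image embeddings
emr_feats = rng.normal(size=(n_patients, emr_dim))  # encoded EMR variables
y = (img_feats[:, 0] + emr_feats[:, 0] > 0).astype(float)  # synthetic label

# Late fusion: concatenate modality features, then fit one classifier.
# The paper uses XGBoost here; logistic regression is a minimal stand-in.
X = np.hstack([img_feats, emr_feats])
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted conception probability
    w -= 0.5 * (X.T @ (p - y)) / n_patients
    b -= 0.5 * np.mean(p - y)

acc = np.mean((p > 0.5) == y)  # training accuracy of the fused model
```

Because the synthetic label is a linear function of one image feature and one EMR feature, the fused classifier separates it well, while either modality alone would miss half the signal — the same intuition behind combining EMR and image features.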

RESULTS:

The multimodal learning system exhibited superior performance in predicting conception within 1 year, achieving areas under the curve of 0.967 (95% CI 0.950-0.985), 0.936 (95% CI 0.883-0.989), and 0.965 (95% CI 0.935-0.994) in the training, validation, and test datasets, respectively, surpassing the single-modal approaches, the other models, and the clinical scoring systems (all P<0.05). The application operated seamlessly on the hysteroscopic platform, with an average analysis time of 3.7±0.8 s per patient. With the application's conception probability-based risk stratification, mid-high-risk patients demonstrated a significant ART benefit (odds ratio=6, 95% CI 1.27-27.8, P=0.02), while low-risk patients exhibited good natural conception potential, with no significant increase in conception rates from ART treatment (P=1).
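The reported ART benefit is an odds ratio from a 2x2 table (conceived vs. not, ART vs. no ART) within the mid-high-risk stratum. The study's underlying counts are not given in the abstract; the sketch below uses hypothetical counts chosen to yield OR = 6, and computes the usual Wald confidence interval on the log scale.

```python
import math

# Hypothetical 2x2 table for mid-high-risk patients (counts are
# illustrative only; the study reports OR=6, 95% CI 1.27-27.8).
a, b = 12, 8   # ART group: conceived / not conceived
c, d = 4, 16   # no-ART group: conceived / not conceived

odds_ratio = (a / b) / (c / d)  # (12/8) / (4/16) = 6.0

# Wald 95% CI: exp(ln(OR) +/- 1.96 * SE), SE = sqrt(1/a+1/b+1/c+1/d)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
```

An interval excluding 1 (as here, and as in the study's 1.27-27.8) indicates a statistically significant ART benefit in that stratum.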

CONCLUSIONS:

The multimodal learning system using hysteroscopic images and EMR demonstrates promise in accurately predicting the natural conception of patients with IUAs and providing effective postoperative stratification, potentially contributing to ART triage after IUA procedures.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Hysteroscopy / Endometrium / Electronic Health Records Limits: Adult / Female / Humans / Pregnancy Language: En Journal: Int J Surg Year: 2024 Document type: Article
