1.
Int J Comput Assist Radiol Surg ; 19(3): 531-539, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37934401

ABSTRACT

PURPOSE: Computer-assisted surgical systems provide support information to the surgeon, which can improve the execution and overall outcome of the procedure. These systems are based on deep learning models that are trained on complex data that are challenging to annotate. Generating synthetic data can overcome these limitations, but it is then necessary to reduce the domain gap between real and synthetic data. METHODS: We propose a method for image-to-image translation based on a Stable Diffusion model, which generates realistic images from synthetic data. Compared to previous works, the proposed method is better suited for clinical application as it requires a much smaller amount of input data and allows finer control over the generation of details by introducing different variants of supporting control networks. RESULTS: The proposed method is applied in the context of laparoscopic cholecystectomy, using synthetic and real data from public datasets. It achieves a mean Intersection over Union of 69.76%, significantly improving on the baseline results (69.76% vs. 42.21%). CONCLUSIONS: The proposed method for translating synthetic images into images with realistic characteristics will enable the training of deep learning methods that can generalize optimally to real-world contexts, thereby improving computer-assisted intervention guidance systems.
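
A minimal sketch (not the authors' code) of how the reported mean Intersection over Union could be computed from predicted and ground-truth segmentation masks; the function name and the choice to skip classes absent from both masks are assumptions:

import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Per-class IoU averaged over the classes present in either mask."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))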


Subject(s)
Endoscopy , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods
2.
Med Image Anal ; 92: 103066, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38141453

ABSTRACT

Fetoscopic laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to restore a physiological blood exchange between the twins. The procedure is particularly challenging from the surgeon's side due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility due to amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures, together with 18 short video clips of an average length of 411 frames, for developing placental scene segmentation and frame registration techniques for mosaicking. Seven teams participated in this challenge, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. For the segmentation task, the baseline was the top performer overall (aggregated mIoU of 0.6763) and was the best on the vessel class (mIoU of 0.5817), while team RREB was the best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline performed better than team SANO, with an overall mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. The detailed analysis showed that no single team performed best on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
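
An illustrative sketch, not the challenge's official evaluation code, of a 5-frame SSIM check for registration drift: pairwise homographies are composed over five steps and the warped frame is compared with the frame five steps ahead. The function name, the homography convention (homographies[i] maps frame i to frame i+1), and the grayscale 0-255 data range are assumptions:

import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def five_frame_ssim(frames, homographies):
    """frames: list of grayscale uint8 images; homographies[i] maps frame i -> frame i+1."""
    scores = []
    for i in range(len(frames) - 5):
        H = np.eye(3)
        for j in range(i, i + 5):
            H = homographies[j] @ H  # compose the five pairwise homographies
        h, w = frames[i + 5].shape
        warped = cv2.warpPerspective(frames[i], H, (w, h))
        scores.append(ssim(warped, frames[i + 5], data_range=255))
    return float(np.mean(scores))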


Subject(s)
Fetofetal Transfusion , Placenta , Female , Humans , Pregnancy , Algorithms , Fetofetal Transfusion/diagnostic imaging , Fetofetal Transfusion/surgery , Fetofetal Transfusion/pathology , Fetoscopy/methods , Fetus , Placenta/diagnostic imaging
3.
Comput Biol Med ; 167: 107602, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37925906

ABSTRACT

Accurate prediction of fetal weight at birth is essential for effective perinatal care, particularly in the context of antenatal management, which involves determining the timing and mode of delivery. The current standard of care involves performing a prenatal ultrasound 24 hours prior to delivery. However, this task presents challenges, as it requires acquiring high-quality images, which becomes difficult during advanced pregnancy due to the lack of amniotic fluid. In this paper, we present a novel method that automatically predicts fetal birth weight from fetal ultrasound video scans and clinical data. The proposed method is a Transformer-based approach that combines a Residual Transformer Module with a Dynamic Affine Feature Map Transform, using tabular clinical data to condition 2D+t spatio-temporal features extracted from fetal ultrasound video scans. Development and evaluation were carried out on a clinical set comprising 582 2D fetal ultrasound videos and clinical records of pregnancies from 194 patients, acquired less than 24 hours before delivery. Our results show that our method outperforms several state-of-the-art automatic methods and estimates fetal birth weight with an accuracy comparable to human experts. Hence, automatic measurements obtained by our method can reduce the risk of errors inherent in manual measurements. Observer studies suggest that our approach may be used as an aid for less experienced clinicians to predict fetal birth weight before delivery, optimizing perinatal care regardless of the available expertise.
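
A rough, assumed sketch of the kind of tabular conditioning the abstract describes (not the authors' Residual Transformer implementation): clinical data modulate spatio-temporal feature maps through a learned per-channel affine transform. The class name, tensor shapes, and the (1 + scale) parameterization are hypothetical choices:

import torch
import torch.nn as nn

class AffineConditioning(nn.Module):
    def __init__(self, num_clinical: int, num_channels: int):
        super().__init__()
        # Predict a per-channel scale and shift from the clinical record.
        self.to_scale_shift = nn.Linear(num_clinical, 2 * num_channels)

    def forward(self, feat: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, T, H, W) spatio-temporal features; clinical: (B, num_clinical)
        scale, shift = self.to_scale_shift(clinical).chunk(2, dim=1)
        scale = scale.view(*scale.shape, 1, 1, 1)  # broadcast over T, H, W
        shift = shift.view(*shift.shape, 1, 1, 1)
        return feat * (1 + scale) + shift

# Usage with dummy shapes:
feat = torch.randn(2, 64, 8, 16, 16)
clinical = torch.randn(2, 10)
out = AffineConditioning(10, 64)(feat, clinical)  # same shape as feat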


Subject(s)
Fetal Weight , Ultrasonography, Prenatal , Infant, Newborn , Pregnancy , Humans , Female , Birth Weight , Ultrasonography, Prenatal/methods , Biometry
4.
Am J Obstet Gynecol MFM ; 5(12): 101182, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37821009

ABSTRACT

BACKGROUND: Fetal weight is currently estimated from fetal biometry parameters using heuristic mathematical formulas. Fetal biometry requires measurements of the fetal head, abdomen, and femur. However, this examination is prone to inter- and intraobserver variability because of factors such as the experience of the operator, image quality, maternal characteristics, or fetal movements. Our study tested the hypothesis that a deep learning method can estimate fetal weight from a video scan of the fetal abdomen and gestational age with performance similar to the full biometry-based estimations provided by clinical experts. OBJECTIVE: This study aimed to develop and test a deep learning method that automatically estimates fetal weight from fetal abdominal ultrasound video scans. STUDY DESIGN: A dataset of 900 routine fetal ultrasound examinations was used. Of these, 800 retrospective ultrasound video scans of the fetal abdomen from 700 pregnant women between 15 6/7 and 41 0/7 weeks of gestation were used to train the deep learning model. After the training phase, the model was evaluated on an external, prospectively acquired test set of 100 scans from 100 pregnant women between 16 2/7 and 38 0/7 weeks of gestation. The deep learning model was trained to estimate fetal weight directly from ultrasound video scans of the fetal abdomen. The deep learning estimations were compared with manual measurements made on the test set by 6 human readers with varying levels of expertise. Human readers used the 3 standard measurements made on the standard planes of the head, abdomen, and femur, together with a heuristic formula, to estimate fetal weight. Bland-Altman analysis, the mean absolute percentage error, and the intraclass correlation coefficient were used to evaluate the performance and robustness of the deep learning method against the human readers. RESULTS: Bland-Altman analysis did not show systematic deviations between the readers and deep learning. The mean absolute percentage error between the 6 human readers and the deep learning approach was 3.75%±2.00% (mean±standard deviation). Excluding junior readers (residents), the mean absolute percentage error between the 4 experts and the deep learning approach was 2.59%±1.11%. The intraclass correlation coefficients reflected excellent reliability and varied between 0.9761 and 0.9865. CONCLUSION: This study reports the use of deep learning to estimate fetal weight using only ultrasound video of the fetal abdomen from fetal biometry scans. Our experiments demonstrated similar performance of human measurements and deep learning on prospectively acquired test data. Deep learning is a promising approach to directly estimating fetal weight from ultrasound video scans of the fetal abdomen.
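
A hedged sketch of the agreement statistics named above, not the study's analysis code: mean absolute percentage error, and Bland-Altman bias with 95% limits of agreement, between two sets of weight estimates. Function names and the 1.96-sigma limits are illustrative assumptions:

import numpy as np

def mape(estimates: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute percentage error of `estimates` against `reference`."""
    return float(np.mean(np.abs(estimates - reference) / reference) * 100)

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bias and 95% limits of agreement between two measurement series."""
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa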


Subject(s)
Deep Learning , Fetal Weight , Pregnancy , Female , Humans , Retrospective Studies , Reproducibility of Results , Abdomen/diagnostic imaging
5.
Pediatr Res ; 93(2): 376-381, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36195629

ABSTRACT

Necrotising enterocolitis (NEC) is one of the most common diseases in neonates and predominantly affects premature or very-low-birth-weight infants. Diagnosis is difficult and must be made within hours of the first symptom onset to achieve the best therapeutic effect. Artificial intelligence (AI) may play a significant role in NEC diagnosis. A literature search on the use of AI in the diagnosis of NEC was performed. Four databases (PubMed, Embase, arXiv, and IEEE Xplore) were searched with the appropriate MeSH terms. The search yielded 118 publications, which were reduced to 8 after screening and checking for eligibility. Of the eight, five used classic machine learning (ML) and three used deep learning. Most publications showed promising results; however, no publications demonstrating clear clinical benefit were found. Datasets used for training and testing the AI systems were small and typically came from a single institution. The potential of AI to improve the diagnosis of NEC is evident. The body of literature on this topic is scarce, and more research in this area is needed, especially with a focus on clinical utility. Cross-institutional data for the training and testing of AI algorithms are required to make progress in this area. IMPACT: Only a few publications on the use of AI in NEC diagnosis are available, although they offer some evidence that AI may be helpful in this setting. AI requires large, multicentre, multimodal datasets of high quality for model training and testing. Published results in the literature are based on data from single institutions and, as such, have limited generalisability. Large multicentre studies evaluating broad datasets are needed to evaluate the true potential of AI in diagnosing NEC in a clinical setting.


Subject(s)
Enterocolitis, Necrotizing , Infant, Newborn, Diseases , Infant, Newborn , Humans , Infant, Premature , Enterocolitis, Necrotizing/prevention & control , Artificial Intelligence , Infant, Very Low Birth Weight
6.
Phys Med Biol ; 67(4), 2022 Feb 16.
Article in English | MEDLINE | ID: mdl-35051921

ABSTRACT

Objective. This work investigates the use of deep convolutional neural networks (CNNs) to automatically perform measurements of fetal body parts, including head circumference, biparietal diameter, abdominal circumference and femur length, and to estimate gestational age and fetal weight from fetal ultrasound videos. Approach. We developed a novel multi-task CNN-based spatio-temporal fetal US feature extraction and standard plane detection algorithm (called FUVAI) and evaluated the method on 50 freehand fetal US video scans. We compared FUVAI fetal biometric measurements with measurements made by five experienced sonographers at two time points separated by at least two weeks, and estimated intra- and inter-observer variabilities. Main results. We found that the automated fetal biometric measurements obtained by FUVAI were comparable to the measurements performed by experienced sonographers. The observed differences in measurement values were within the range of inter- and intra-observer variability, and the analysis showed that these differences were not statistically significant when comparing any individual medical expert to our model. Significance. We argue that FUVAI has the potential to assist sonographers who perform fetal biometric measurements in clinical settings by providing them with suggestions for the best measurement frames, along with automated measurements. Moreover, FUVAI performs these tasks in just a few seconds, compared with the average of six minutes taken by sonographers. This is significant, given the shortage of medical experts capable of interpreting fetal ultrasound images in many countries.
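
A hypothetical sketch (not part of FUVAI) of the comparison the abstract describes: checking whether model-vs-sonographer differences fall within the sonographers' own inter-observer variability. The function names and the mean-absolute-difference aggregation are assumptions:

import numpy as np
from itertools import combinations

def inter_observer_range(readings: np.ndarray) -> float:
    """readings: (num_sonographers, num_cases) measurements of the same cases."""
    diffs = [np.abs(readings[i] - readings[j]).mean()
             for i, j in combinations(range(readings.shape[0]), 2)]
    return float(np.mean(diffs))

def within_variability(model: np.ndarray, readings: np.ndarray) -> bool:
    """True if the model's mean deviation from the readers is within their own spread."""
    model_diff = np.abs(model - readings.mean(axis=0)).mean()
    return model_diff <= inter_observer_range(readings)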


Subject(s)
Deep Learning , Biometry/methods , Female , Fetus/diagnostic imaging , Gestational Age , Humans , Pregnancy , Ultrasonography, Prenatal/methods