1.
Eur J Obstet Gynecol Reprod Biol ; 298: 13-17, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38705008

ABSTRACT

INTRODUCTION: This study aims to investigate probe motion during full mid-trimester anomaly scans. METHODS: We undertook a prospective, observational study of obstetric sonographers at a UK University Teaching Hospital. We prospectively collected full-length video recordings of routine second-trimester anomaly scans synchronized with probe trajectory tracking data during the scan. Videos were reviewed and trajectories analyzed using duration, path metrics (path length, velocity, acceleration, jerk, and volume) and angular metrics (spectral arc, angular area, angular velocity, angular acceleration, and angular jerk). These trajectories were then compared according to the participant level of expertise, fetal presentation, and patient BMI. RESULTS: A total of 17 anomaly scans were recorded. The average velocity of the probe was 12.9 ± 3.4 mm/s for the consultants versus 24.6 ± 5.7 mm/s for the fellows (p = 0.02), the average acceleration 170.4 ± 26.3 mm/s² versus 328.9 ± 62.7 mm/s² (p = 0.02), the average jerk 7491.7 ± 1056.1 mm/s³ versus 14944.1 ± 3146.3 mm/s³ (p = 0.02), and the working volume 9 × 10⁶ ± 4 × 10⁶ mm³ versus 29 × 10⁶ ± 11 × 10⁶ mm³ (p = 0.03), respectively. The angular metrics did not differ significantly according to the participant level of expertise, fetal presentation, or patients' BMI. CONCLUSION: Some differences in the probe path metrics (velocity, acceleration, jerk and working volume) were observed according to the operator's level of expertise.
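To make the path metrics concrete, the sketch below shows how velocity, acceleration, jerk and path length could be derived from tracked probe positions with finite differences; it assumes a uniformly sampled (N, 3) trajectory, and the function and variable names are illustrative, not the study's actual analysis code.

```python
import numpy as np

def path_metrics(positions: np.ndarray, fs: float) -> dict:
    """Illustrative path metrics from an (N, 3) array of probe positions
    sampled at fs Hz: successive finite-difference derivatives give velocity,
    acceleration and jerk; path length is the summed step distance."""
    dt = 1.0 / fs
    vel = np.gradient(positions, dt, axis=0)    # mm/s
    acc = np.gradient(vel, dt, axis=0)          # mm/s^2
    jerk = np.gradient(acc, dt, axis=0)         # mm/s^3
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    return {
        "duration_s": len(positions) * dt,
        "path_length_mm": float(steps.sum()),
        "mean_velocity_mm_s": float(np.linalg.norm(vel, axis=1).mean()),
        "mean_acceleration_mm_s2": float(np.linalg.norm(acc, axis=1).mean()),
        "mean_jerk_mm_s3": float(np.linalg.norm(jerk, axis=1).mean()),
    }

# Synthetic example: one minute of tracking data at 30 Hz.
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(0.0, 0.5, size=(1800, 3)), axis=0)
print(path_metrics(trajectory, fs=30.0))
```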

2.
Article in English | MEDLINE | ID: mdl-38805333

ABSTRACT

Deep learning has been used across a large number of computer vision tasks; however, designing the network architectures for each task is time consuming. Neural Architecture Search (NAS) promises to automatically build neural networks, optimised for the given task and dataset. However, most NAS methods are constrained to a specific macro-architecture design, which makes them hard to apply to different tasks (classification, detection, segmentation). Following the work in Differentiable NAS (DNAS), we present a simple and efficient NAS method, Differentiable Parallel Operation (DIPO), that constructs a local search space in the form of a DIPO block and can easily be applied to any convolutional network by injecting it in place of the convolutions. The DIPO block's internal architecture and parameters are automatically optimised end-to-end for each task. We demonstrate the flexibility of our approach by applying DIPO to 4 model architectures (U-Net, HRNet, KAPAO and YOLOX) across different surgical tasks (surgical scene segmentation, surgical instrument detection, and surgical instrument pose estimation), evaluating across 5 datasets. Results show significant improvements in surgical scene segmentation (+10.5% in CholecSeg8K, +13.2% in CaDIS), instrument detection (+1.5% in ROBUST-MIS, +5.3% in RoboKP), and instrument pose estimation (+9.8% in RoboKP).
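As a rough illustration of the idea behind a differentiable parallel-operation block, the following DARTS-style PyTorch sketch mixes several candidate convolutions with learnable softmax weights; the operation set and naming are assumptions, not the authors' DIPO implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelOpBlock(nn.Module):
    """Drop-in replacement for a convolution: candidate operations run in
    parallel and are mixed by architecture weights trained end-to-end."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.Conv2d(in_ch, out_ch, 5, padding=2),
            nn.Conv2d(in_ch, out_ch, 3, padding=2, dilation=2),
        ])
        # One architecture weight per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

block = ParallelOpBlock(16, 32)
print(block(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```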

3.
Article in English | MEDLINE | ID: mdl-38777945

ABSTRACT

PURPOSE: In robotic-assisted minimally invasive surgery, surgeons often use intra-operative ultrasound to visualise endophytic structures and localise resection margins. This must be performed by a highly skilled surgeon. Automating this subtask may reduce the cognitive load for the surgeon and improve patient outcomes. METHODS: We demonstrate vision-based shape sensing of the pneumatically attachable flexible (PAF) rail by using colour-dependent image segmentation. The shape-sensing framework is evaluated on known curves ranging from r = 30 to r = 110 mm, replicating curvatures in a human kidney. The shape sensing is then used to inform path planning of a collaborative robot arm paired with an intra-operative ultrasound probe. We execute 15 autonomous ultrasound scans of a tumour-embedded kidney phantom and retrieve viable ultrasound images, as well as seven freehand ultrasound scans for comparison. RESULTS: The vision-based sensor is shown to have sensing accuracy comparable with FBGS-based systems. We find the RMSE of the vision-based shape sensing of the PAF rail compared with ground truth to be 0.4975 ± 0.4169 mm. The ultrasound images acquired by the robot and by the human were evaluated by two independent clinicians. The median score across all criteria for both readers was '3-good' for the human-acquired images and '4-very good' for the robot-acquired images. CONCLUSION: We have proposed a framework for autonomous intra-operative US scanning using vision-based shape sensing to inform path planning. Ultrasound images were evaluated by clinicians for sharpness of image, clarity of visible structures, and contrast of solid and fluid areas. Clinicians rated robot-acquired images as superior to human-acquired images on all metrics. Future work will translate the framework to a da Vinci surgical robot.
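A minimal sketch of what colour-dependent segmentation and the RMSE evaluation might look like, assuming an HSV colour threshold for the rail; the threshold values and helper names are placeholders, not the paper's parameters.

```python
import cv2
import numpy as np

def rail_mask(bgr: np.ndarray, lower_hsv, upper_hsv) -> np.ndarray:
    """Colour-dependent segmentation: threshold in HSV space and keep the
    largest connected component as the rail."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return mask
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255

def rmse(sensed: np.ndarray, ground_truth: np.ndarray) -> float:
    """RMSE between sensed and ground-truth points (same ordering, same units)."""
    return float(np.sqrt(np.mean(np.sum((sensed - ground_truth) ** 2, axis=1))))
```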

4.
Med Image Anal ; 96: 103195, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38815359

ABSTRACT

Colorectal cancer is one of the most common cancers in the world. While colonoscopy is an effective screening technique, navigating an endoscope through the colon to detect polyps is challenging. A 3D map of the observed surfaces could enhance the identification of unscreened colon tissue and serve as a training platform. However, reconstructing the colon from video footage remains difficult. Learning-based approaches hold promise as robust alternatives, but necessitate extensive datasets. To establish a benchmark dataset, the 2022 EndoVis sub-challenge SimCol3D aimed to facilitate data-driven depth and pose prediction during colonoscopy. The challenge was hosted as part of MICCAI 2022 in Singapore. Six teams from around the world, representing academia and industry, participated in the three sub-challenges: synthetic depth prediction, synthetic pose prediction, and real pose prediction. This paper describes the challenge, the submitted methods, and their results. We show that depth prediction from synthetic colonoscopy images is robustly solvable, while pose estimation remains an open research question.

5.
Neuropathol Appl Neurobiol ; 50(3): e12981, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38738494

ABSTRACT

The convergence of digital pathology and artificial intelligence could assist histopathology image analysis by providing tools for rapid, automated morphological analysis. This systematic review explores the use of artificial intelligence for histopathological image analysis of digitised central nervous system (CNS) tumour slides. Comprehensive searches were conducted across EMBASE, Medline and the Cochrane Library up to June 2023 using relevant keywords. Sixty-eight suitable studies were identified and qualitatively analysed. The risk of bias was evaluated using the Prediction model Risk of Bias Assessment Tool (PROBAST) criteria. All the studies were retrospective and preclinical. Gliomas were the most frequently analysed tumour type. The majority of studies used convolutional neural networks or support vector machines, and the most common goal of the model was for tumour classification and/or grading from haematoxylin and eosin-stained slides. The majority of studies were conducted when legacy World Health Organisation (WHO) classifications were in place, which at the time relied predominantly on histological (morphological) features but have since been superseded by molecular advances. Overall, there was a high risk of bias in all studies analysed. Persistent issues included inadequate transparency in reporting the number of patients and/or images within the model development and testing cohorts, absence of external validation, and insufficient recognition of batch effects in multi-institutional datasets. Based on these findings, we outline practical recommendations for future work including a framework for clinical implementation, in particular, better informing the artificial intelligence community of the needs of the neuropathologist.


Subjects
Artificial Intelligence, Central Nervous System Neoplasms, Humans, Central Nervous System Neoplasms/pathology, Image Processing, Computer-Assisted/methods
6.
Article in English | MEDLINE | ID: mdl-38652416

ABSTRACT

PURPOSE: Obtaining large volumes of medical images, required for deep learning development, can be challenging in rare pathologies. Image augmentation and preprocessing offer viable solutions. This work explores the case of necrotising enterocolitis (NEC), a rare but life-threatening condition affecting premature neonates, with challenging radiological diagnosis. We investigate data augmentation and preprocessing techniques and propose two optimised pipelines for developing reliable computer-aided diagnosis models on a limited NEC dataset. METHODS: We present a NEC dataset of 1090 Abdominal X-rays (AXRs) from 364 patients and investigate the effect of geometric augmentations, colour scheme augmentations and their combination for NEC classification based on the ResNet-50 backbone. We introduce two pipelines based on colour contrast and edge enhancement, to increase the visibility of subtle, difficult-to-identify, critical NEC findings on AXRs and achieve robust accuracy in a challenging three-class NEC classification task. RESULTS: Our results show that geometric augmentations improve performance, with Translation achieving +6.2%, while Flipping and Occlusion decrease performance. Colour augmentations, like Equalisation, yield modest improvements. The proposed Pr-1 and Pr-2 pipelines enhance model accuracy by +2.4% and +1.7%, respectively. Combining Pr-1/Pr-2 with geometric augmentation, we achieve a maximum performance increase of 7.1%, achieving robust NEC classification. CONCLUSION: Based on an extensive validation of preprocessing and augmentation techniques, our work showcases the previously unreported potential of image preprocessing in AXR classification tasks with limited datasets. Our findings can be extended to other medical tasks for designing reliable classifier models with limited X-ray datasets. Ultimately, we also provide a benchmark for automated NEC detection and classification from AXRs.
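The sketch below illustrates the flavour of such preprocessing: CLAHE-based contrast enhancement combined with Laplacian edge sharpening, plus a simple geometric translation augmentation. It is a generic OpenCV example under assumed parameters, not the paper's Pr-1/Pr-2 pipelines.

```python
import numpy as np
import cv2

def preprocess_axr(gray: np.ndarray, clip_limit: float = 2.0,
                   edge_weight: float = 0.5) -> np.ndarray:
    """Boost local contrast with CLAHE, then sharpen by subtracting a
    weighted Laplacian, to emphasise subtle findings on an AXR (uint8)."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    contrast = clahe.apply(gray)
    edges = cv2.Laplacian(contrast, cv2.CV_32F)
    out = contrast.astype(np.float32) - edge_weight * edges
    return np.clip(out, 0, 255).astype(np.uint8)

def random_translation(img: np.ndarray, max_shift: int = 20) -> np.ndarray:
    """Geometric augmentation: random pixel shift with border replication."""
    tx, ty = np.random.randint(-max_shift, max_shift + 1, size=2)
    m = np.float32([[1, 0, tx], [0, 1, ty]])
    return cv2.warpAffine(img, m, img.shape[::-1], borderMode=cv2.BORDER_REPLICATE)
```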

7.
Article in English | MEDLINE | ID: mdl-38610108

ABSTRACT

INTRODUCTION: There is a growing emphasis on proficiency-based progression within surgical training. To enable this, clearly defined metrics for those newly acquired surgical skills are needed. These can be formulated in objective assessment tools. The aim of the present study was to systematically review the literature reporting on available tools for objective assessment of (simulated) minimally invasive gynecological surgery performance and evaluate their reliability and validity. MATERIAL AND METHODS: A systematic search (1989-2022) was conducted in MEDLINE, Embase, PubMed and Web of Science in accordance with PRISMA. The trial was registered with the Prospective Register of Systematic Reviews (PROSPERO), ID: CRD42022376552. Randomized controlled trials, prospective comparative studies, and prospective single-group (with pre- and post-training assessment) or consensus studies that reported on the development, validation or usage of assessment tools of surgical performance in minimally invasive gynecological surgery were included. Three independent assessors assessed study setting and validity evidence according to a contemporary framework of validity, adapted from Messick's validity framework. Methodological quality of included studies was assessed using the modified medical education research study quality instrument (MERSQI) checklist. Heterogeneity in data reporting on types of tools, data collection, study design, definition of expertise (novices vs. experts) and statistical values prevented a meaningful meta-analysis. RESULTS: A total of 19,746 titles and abstracts were screened, of which 72 articles met the inclusion criteria. A total of 37 different assessment tools were identified, of which 13 were manual global assessment tools, 13 were manual procedure-specific assessment tools and 11 were automated performance metrics. Only two tools showed substantive evidence of validity. Reliability and validity are reported per tool. No assessment tool showed a direct correlation between tool scores and patient-related outcomes. CONCLUSIONS: Existing objective assessment tools lack evidence on predicting patient outcomes and suffer from limitations in transferability outside of the research environment, particularly for automated performance metrics. Future research should prioritize filling these gaps while integrating advanced technologies like kinematic data and AI for robust, objective surgical skill assessment within gynecological advanced surgical training programs.

8.
Article in English | MEDLINE | ID: mdl-38528306

ABSTRACT

PURPOSE: Endoscopic pituitary surgery entails navigating through the nasal cavity and sphenoid sinus to access the sella using an endoscope. This procedure is intricate due to the proximity of crucial anatomical structures (e.g. carotid arteries and optic nerves) to pituitary tumours, and any unintended damage can lead to severe complications including blindness and death. Intraoperative guidance during this surgery could support improved localization of these critical structures, thereby reducing the risk of complications. METHODS: A deep learning network, PitSurgRT, is proposed for real-time localization of critical structures in endoscopic pituitary surgery. The network uses a high-resolution net (HRNet) backbone with a multi-head design that jointly localizes critical anatomical structures while simultaneously segmenting larger structures. Moreover, the trained model is optimized and accelerated using TensorRT. Finally, the model predictions are shown to neurosurgeons to test their guidance capabilities. RESULTS: Compared with the state-of-the-art method, our model significantly reduces the mean error in landmark detection of the critical structures from 138.76 to 54.40 pixels in a 1280 × 720-pixel image. Furthermore, the semantic segmentation of the most critical structure, the sella, is improved by 4.39% IoU. The inference speed of the accelerated model reaches 298 frames per second with floating-point-16 precision. In a study of 15 neurosurgeons, 88.67% of predictions were considered accurate enough for real-time guidance. CONCLUSION: The results from the quantitative evaluation, real-time acceleration, and neurosurgeon study demonstrate that the proposed method is highly promising for providing real-time intraoperative guidance on the critical anatomical structures in endoscopic pituitary surgery.
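To illustrate the multi-head idea, here is a minimal PyTorch sketch with a shared placeholder backbone feeding a landmark-heatmap head and a segmentation head; the layer sizes and names are assumptions rather than the PitSurgRT architecture.

```python
import torch
import torch.nn as nn

class MultiHeadSegLandmark(nn.Module):
    """Shared encoder with two heads: one regresses landmark heatmaps for
    small critical structures, the other outputs segmentation logits for
    larger structures. The backbone stands in for an HRNet-style extractor."""
    def __init__(self, n_landmarks: int, n_classes: int, feat_ch: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.landmark_head = nn.Conv2d(feat_ch, n_landmarks, 1)  # heatmaps
        self.seg_head = nn.Conv2d(feat_ch, n_classes, 1)         # class logits

    def forward(self, x):
        feats = self.backbone(x)
        return self.landmark_head(feats), self.seg_head(feats)

model = MultiHeadSegLandmark(n_landmarks=4, n_classes=3)
heatmaps, seg_logits = model(torch.randn(1, 3, 180, 320))
print(heatmaps.shape, seg_logits.shape)
```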

9.
Front Surg ; 11: 1361040, 2024.
Article in English | MEDLINE | ID: mdl-38450052

ABSTRACT

Introduction: Informed consent is a fundamental component of the work-up for surgical procedures. Statistical risk information pertaining to a procedure is by nature probabilistic and challenging to communicate, especially to those with poor numerical literacy. Visual aids and audio/video tools have previously been shown to improve patients' understanding of statistical information. In this study, we aimed to explore the impact of different methods of risk communication in healthy participants randomized to undergo either the consent process with visual aids or the standard consent process for lumbar puncture. Material and methods: Healthy individuals over 18 years old were eligible. The exclusion criteria were prior experience of the procedure or relevant medical knowledge, lack of capacity to consent, underlying cognitive impairment and hospitalised individuals. After randomisation, both groups received identical medical information about the procedure of a lumbar puncture in a hypothetical clinical scenario via different means of consent. The control group underwent the standard consent process in current clinical practice (Consent Form 1 without any illustrative examples), whereas the intervention group received additional anatomy diagrams, the Paling Palette and the Paling perspective scale. Anonymised questionnaires were collected to evaluate participants' perception of the procedure and its associated risks. Results: Fifty-two individuals were included, with no statistically significant differences in age, sex, professional status or familiarity with the procedure. Visual aids were noted to improve participants' confidence in describing the risks themselves (p = 0.009), and participants in the intervention group felt significantly less overwhelmed by medical information (p = 0.028). The enhanced consent process was found to be significantly more acceptable to participants (p = 0.03). There was a trend towards greater appropriateness (p = 0.06), and it appeared to have "good" usability (median SUS = 76.4), although this did not reach statistical significance (p = 0.06). Conclusion: Visual aids could be an appropriate alternative method for medical consent without being inferior regarding understanding of the procedure, its risks and its benefits. Future studies could compare or incorporate multiple interventions to determine the most effective tools in a larger population including patients as well as healthy individuals.

12.
Nat Med ; 30(1): 61-75, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38242979

ABSTRACT

The next generation of surgical robotics is poised to disrupt healthcare systems worldwide, requiring new frameworks for evaluation. However, evaluation during a surgical robot's development is challenging due to its complex evolving nature, potential for wider system disruption and integration with complementary technologies like artificial intelligence. Comparative clinical studies require attention to intervention context, learning curves and standardized outcomes. Long-term monitoring needs to transition toward collaborative, transparent and inclusive consortiums for real-world data collection. Here, the Idea, Development, Exploration, Assessment and Long-term monitoring (IDEAL) Robotics Colloquium proposes recommendations for evaluation during development, comparative study and clinical monitoring of surgical robots, providing practical recommendations for developers, clinicians, patients and healthcare systems. Multiple perspectives are considered, including economics, surgical training, human factors, ethics, patient perspectives and sustainability. Further work is needed on standardized metrics, health economic assessment models and global applicability of recommendations.


Subjects
Artificial Intelligence, Robotic Surgical Procedures, Humans, Robotics
13.
IEEE Trans Med Imaging ; 43(1): 297-308, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37494156

ABSTRACT

Personalized federated learning (PFL) addresses the data heterogeneity challenge faced by general federated learning (GFL). Rather than learning a single global model, with PFL a collection of models is adapted to the unique feature distribution of each site. However, current PFL methods rarely consider self-attention networks, which can handle data heterogeneity through long-range dependency modeling, and they do not utilize prediction inconsistencies in local models as an indicator of site uniqueness. In this paper, we propose FedDP, a novel federated learning scheme with dual personalization, which improves model personalization from both the feature and prediction aspects to boost image segmentation results. We leverage long-range dependencies by designing a local query (LQ) that decouples the query embedding layer out of each local model, whose parameters are trained privately to better adapt to the respective feature distribution of the site. We then propose inconsistency-guided calibration (IGC), which exploits inter-site prediction inconsistencies to adjust the model's learning concentration. By encouraging a model to penalize pixels with larger inconsistencies, we better tailor prediction-level patterns to each local site. Experimentally, we compare FedDP with the state-of-the-art PFL methods on two popular medical image segmentation tasks with different modalities, where our results consistently outperform others on both tasks. Our code and models are available at https://github.com/jcwang123/PFL-Seg-Trans.
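A minimal sketch of the dual-personalization idea on the feature side: a FedAvg-style round that averages all parameters across sites except a locally kept query layer. The toy model and names are illustrative assumptions, not the FedDP code (which is available at the linked repository).

```python
import torch
import torch.nn as nn

class SiteModel(nn.Module):
    """Toy per-site model: 'query' stays personalized, 'body' is shared."""
    def __init__(self):
        super().__init__()
        self.query = nn.Linear(8, 8)   # personalized: never averaged
        self.body = nn.Linear(8, 2)    # shared: averaged every round

def federated_round(site_models, personalized_prefix: str = "query."):
    """Average parameters across sites, skipping personalized ones."""
    with torch.no_grad():
        params = [dict(m.named_parameters()) for m in site_models]
        for name in params[0]:
            if name.startswith(personalized_prefix):
                continue  # keep this layer private to each site
            avg = torch.stack([p[name].data for p in params]).mean(dim=0)
            for p in params:
                p[name].data.copy_(avg)

sites = [SiteModel() for _ in range(3)]
federated_round(sites)  # 'body' weights now identical, 'query' weights differ
```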


Subjects
Calibration
14.
Int J Comput Assist Radiol Surg ; 19(2): 375-382, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37347345

ABSTRACT

PURPOSE: Semantic segmentation in surgical videos has applications in intra-operative guidance, post-operative analytics and surgical education. Models need to provide accurate predictions, since temporally inconsistent identification of anatomy can compromise patient safety. We propose a novel architecture for modelling temporal relationships in videos to address these issues. METHODS: We developed a temporal segmentation model that includes a static encoder and a spatio-temporal decoder. The encoder processes individual frames whilst the decoder learns spatio-temporal relationships from frame sequences. The decoder can be used with any suitable encoder to improve temporal consistency. RESULTS: Model performance was evaluated on the CholecSeg8k dataset and a private dataset of robotic Partial Nephrectomy procedures. Mean Intersection over Union improved by 1.30% and 4.27% respectively for each dataset when the temporal decoder was applied. Our model also displayed improvements in temporal consistency of up to 7.23%. CONCLUSIONS: This work demonstrates an advance in video segmentation of surgical scenes, with potential applications in surgery with a view to improving patient outcomes. The proposed decoder can extend state-of-the-art static models, and it is shown that it can improve per-frame segmentation output and video temporal consistency.
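As one way to quantify temporal consistency (the exact metric used in the paper may differ), the sketch below scores how stable class predictions are between consecutive frames using mean IoU; the input shape and naming are assumptions.

```python
import numpy as np

def temporal_consistency(pred_masks: np.ndarray, n_classes: int) -> float:
    """Mean IoU between predicted segmentations of consecutive frames.
    pred_masks has shape (T, H, W) with integer class labels; a higher
    score means predictions flicker less over time."""
    ious = []
    for t in range(len(pred_masks) - 1):
        a, b = pred_masks[t], pred_masks[t + 1]
        per_class = []
        for c in range(n_classes):
            inter = np.logical_and(a == c, b == c).sum()
            union = np.logical_or(a == c, b == c).sum()
            if union > 0:
                per_class.append(inter / union)
        if per_class:
            ious.append(np.mean(per_class))
    return float(np.mean(ious))

# Example with random 3-class predictions for a 10-frame clip.
masks = np.random.randint(0, 3, size=(10, 64, 64))
print(temporal_consistency(masks, n_classes=3))
```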


Subjects
Robotics, Semantics, Humans, Learning, Nephrectomy, Postoperative Period
15.
Int J Comput Assist Radiol Surg ; 19(1): 61-68, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37340283

ABSTRACT

PURPOSE: Advances in surgical phase recognition are generally led by training deeper networks. Rather than going further with a more complex solution, we believe that current models can be exploited better. We propose a self-knowledge distillation framework that can be integrated into current state-of-the-art (SOTA) models without requiring any extra complexity to the models or annotations. METHODS: Knowledge distillation is a framework for network regularization where knowledge is distilled from a teacher network to a student network. In self-knowledge distillation, the student model becomes the teacher such that the network learns from itself. Most phase recognition models follow an encoder-decoder framework. Our framework utilizes self-knowledge distillation in both stages. The teacher model guides the training process of the student model to extract enhanced feature representations from the encoder and build a more robust temporal decoder to tackle the over-segmentation problem. RESULTS: We validate our proposed framework on the public dataset Cholec80. Our framework is embedded on top of four popular SOTA approaches and consistently improves their performance. Specifically, our best GRU model boosts performance by [Formula: see text] accuracy and [Formula: see text] F1-score over the same baseline model. CONCLUSION: We embed a self-knowledge distillation framework for the first time in the surgical phase recognition training pipeline. Experimental results demonstrate that our simple yet powerful framework can improve performance of existing phase recognition models. Moreover, our extensive experiments show that even with 75% of the training set we still achieve performance on par with the same baseline model trained on the full set.
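A minimal sketch of a self-distillation objective of the kind described, assuming a teacher snapshot of the same network: cross-entropy on the labels plus a temperature-scaled KL term between softened teacher and student predictions. Hyperparameters and names are illustrative, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, labels,
                           temperature: float = 2.0, alpha: float = 0.5):
    """Cross-entropy on ground-truth labels plus KL divergence pulling the
    student's softened predictions towards a detached teacher snapshot."""
    ce = F.cross_entropy(student_logits, labels)
    t = temperature
    kl = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits.detach() / t, dim=1),
        reduction="batchmean",
    ) * (t * t)
    return (1 - alpha) * ce + alpha * kl

# Example with random phase-recognition logits over 7 surgical phases.
student = torch.randn(4, 7, requires_grad=True)
teacher = torch.randn(4, 7)
labels = torch.randint(0, 7, (4,))
print(self_distillation_loss(student, teacher, labels))
```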


Subjects
Learning, Students, Humans
16.
Int J Comput Assist Radiol Surg ; 19(3): 481-492, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38066354

ABSTRACT

PURPOSE: In twin-to-twin transfusion syndrome (TTTS), abnormal vascular anastomoses in the monochorionic placenta can produce uneven blood flow between the two fetuses. In current practice, TTTS is treated surgically by closing abnormal anastomoses using laser ablation. This surgery is minimally invasive and relies on fetoscopy. The limited field of view makes anastomosis identification a challenging task for the surgeon. METHODS: To tackle this challenge, we propose a learning-based framework for in vivo fetoscopy frame registration for field-of-view expansion. The novelty of this framework lies in a learning-based keypoint proposal network and an encoding strategy to filter (i) irrelevant keypoints, based on fetoscopic semantic image segmentation, and (ii) inconsistent homographies. RESULTS: We validate our framework on a dataset of six intraoperative sequences from six TTTS surgeries in six different women against the most recent state-of-the-art algorithm, which relies on the segmentation of placenta vessels. CONCLUSION: The proposed framework achieves higher performance compared with the state of the art, paving the way for robust mosaicking to provide surgeons with context awareness during TTTS surgery.
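To make the filtering steps concrete, here is a hedged OpenCV sketch of one registration step: matched keypoints outside a segmentation mask are dropped, a homography is estimated with RANSAC, and implausible homographies are rejected with a crude scale check. The thresholds and the consistency test are placeholders, not the paper's criteria.

```python
import cv2
import numpy as np

def register_frames(kp_a, desc_a, kp_b, desc_b, mask_a,
                    max_scale_change: float = 0.2):
    """Match keypoints between two frames, keep only those on relevant
    anatomy (mask_a is a binary H x W mask for frame A), estimate a
    RANSAC homography, and discard it if its scale change is implausible
    for consecutive fetoscopy frames."""
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc_a, desc_b)
    src, dst = [], []
    for m in matches:
        x, y = kp_a[m.queryIdx].pt
        if mask_a[int(y), int(x)]:            # filter irrelevant keypoints
            src.append(kp_a[m.queryIdx].pt)
            dst.append(kp_b[m.trainIdx].pt)
    if len(src) < 4:
        return None
    H, _ = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC, 3.0)
    if H is None:
        return None
    scale = np.sqrt(abs(np.linalg.det(H[:2, :2])))  # crude consistency check
    return H if abs(scale - 1.0) < max_scale_change else None
```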


Subjects
Fetofetal Transfusion, Laser Therapy, Pregnancy, Female, Humans, Fetoscopy/methods, Fetofetal Transfusion/diagnostic imaging, Fetofetal Transfusion/surgery, Placenta/surgery, Placenta/blood supply, Laser Therapy/methods, Algorithms
18.
Br J Surg ; 111(1)2024 Jan 03.
Article in English | MEDLINE | ID: mdl-37951600

ABSTRACT

BACKGROUND: There is a need to standardize training in robotic surgery, including objective assessment for accreditation. This systematic review aimed to identify objective tools for technical skills assessment, providing evaluation statuses to guide research and inform implementation into training curricula. METHODS: A systematic literature search was conducted in accordance with the PRISMA guidelines. Ovid Embase/Medline, PubMed and Web of Science were searched. Inclusion criterion: robotic surgery technical skills tools. Exclusion criteria: non-technical skills, laparoscopy or open skills only. Manual tools and automated performance metrics (APMs) were analysed using Messick's concept of validity and the Oxford Centre of Evidence-Based Medicine (OCEBM) Levels of Evidence and Recommendation (LoR). A bespoke tool was used to analyse artificial intelligence (AI) studies. The Modified Downs-Black checklist was used to assess risk of bias. RESULTS: Two hundred and forty-seven studies were analysed, identifying 8 global rating scales, 26 procedure-/task-specific tools, 3 main error-based methods, 10 simulators, 28 studies analysing APMs and 53 AI studies. The Global Evaluative Assessment of Robotic Skills and the da Vinci Skills Simulator were the most evaluated tools, at LoR 1 (OCEBM). Three procedure-specific tools, 3 error-based methods and 1 non-simulator APM reached LoR 2. AI models estimated outcomes (skill or clinical), demonstrating superior accuracy rates in the laboratory, with 60 per cent of methods reporting accuracies over 90 per cent, compared with accuracies of 67 to 100 per cent in real surgery. CONCLUSIONS: Manual and automated assessment tools for robotic surgery are not well validated and require further evaluation before use in accreditation processes. PROSPERO registration ID: CRD42022304901.


BACKGROUND: Robotic surgery is increasingly used worldwide to treat many different diseases. The robot is controlled by a surgeon, which may give them greater precision and better outcomes for patients. However, surgeons' robotic skills should be assessed properly, to make sure patients are safe, to improve feedback and for exam assessments for certification to indicate competency. This should be done by experts, using assessment tools that have been agreed upon and proven to work. AIM: This review's aim was to find and explain which training and examination tools are best for assessing surgeons' robotic skills and to find out what gaps remain requiring future research. METHOD: This review searched for all available studies looking at assessment tools in robotic surgery and summarized their findings using several different methods. FINDINGS AND CONCLUSION: Two hundred and forty-seven studies were looked at, finding many assessment tools. Further research is needed for operation-specific and automatic assessment tools before they should be used in the clinical setting.


Subjects
Laparoscopy, Robotic Surgical Procedures, Robotics, Humans, Robotic Surgical Procedures/education, Artificial Intelligence, Clinical Competence, Laparoscopy/education
19.
Med Image Anal ; 92: 103066, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38141453

ABSTRACT

Fetoscopy laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to restore a physiological blood exchange among twins. The procedure is particularly challenging from the surgeon's perspective due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility due to amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data to design, develop and test CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, which was organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in vivo TTTS fetoscopy procedures, together with 18 short video clips of an average length of 411 frames, for developing placental scene segmentation and frame registration techniques for mosaicking. Seven teams participated in this challenge and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips. For the segmentation task, the overall baseline was the top performer (aggregated mIoU of 0.6763) and was the best on the vessel class (mIoU of 0.5817), while team RREB was the best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline overall performed better than team SANO, with an overall mean 5-frame SSIM of 0.9348. Qualitatively, it was observed that team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. The detailed analysis showed that no single team outperformed the others on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
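For readers unfamiliar with the registration metric, the sketch below outlines one possible way to compute a 5-frame SSIM: compose five consecutive inter-frame homographies, warp frame t+5 back onto frame t and compare the pair with SSIM. The composition direction, cropping and handling of invalid regions are simplified assumptions, not the challenge's evaluation code.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity

def five_frame_ssim(frames: np.ndarray, homographies: list) -> float:
    """frames: (T, H, W) grayscale uint8 clip; homographies[k] is assumed to
    map coordinates of frame k to frame k+1. Warp frame t+5 back into frame
    t's coordinates and score the pair with SSIM, averaged over the clip."""
    scores = []
    for t in range(len(frames) - 5):
        H = np.eye(3)
        for k in range(t, t + 5):
            H = homographies[k] @ H            # compose the 5 inter-frame maps
        h, w = frames[t].shape[:2]
        warped = cv2.warpPerspective(frames[t + 5], np.linalg.inv(H), (w, h))
        scores.append(structural_similarity(frames[t], warped, data_range=255))
    return float(np.mean(scores))
```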


Subjects
Fetofetal Transfusion, Placenta, Female, Humans, Pregnancy, Algorithms, Fetofetal Transfusion/diagnostic imaging, Fetofetal Transfusion/surgery, Fetofetal Transfusion/pathology, Fetoscopy/methods, Fetus, Placenta/diagnostic imaging
20.
Article in English | MEDLINE | ID: mdl-38083568

ABSTRACT

Optical imaging techniques such as spectral imaging show promise for the assessment of tissue health during surgery; however, the validation and translation of such techniques into clinical practice is limited by the lack of representative tissue models. In this paper, we demonstrate the application of an organ perfusion machine as an ex vivo tissue model for optical imaging. Three porcine livers are perfused at stepped blood oxygen saturations. Over the duration of each perfusion, spectral data of the tissue are captured via diffuse optical spectroscopy and multispectral imaging. These data are synchronised with blood oxygen saturation measurements recorded by the perfusion machine. Shifts in the optical properties of the tissue are demonstrated over the duration of each perfusion, as the tissue becomes reperfused and as the oxygen saturation is varied.


Subjects
Liver, Optical Imaging, Swine, Animals, Perfusion/methods, Liver/diagnostic imaging