Results 1 - 20 of 38
1.
Gastrointest Endosc ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38639679

ABSTRACT

BACKGROUND AND AIMS: The American Society for Gastrointestinal Endoscopy (ASGE) AI Task Force, together with experts in endoscopy, the technology sector, regulatory authorities, and other medical subspecialties, initiated a consensus process that analyzed the current literature, highlighted potential application areas, and outlined the research needed in artificial intelligence (AI) to allow a clearer understanding of AI as it currently pertains to endoscopy. METHODS: A modified Delphi process was used to develop these consensus statements. RESULTS: Statement 1: Current advances in AI allow for the development of AI-based algorithms that can be applied to endoscopy to augment endoscopist performance in detection and characterization of endoscopic lesions. Statement 2: Computer vision-based algorithms provide opportunities to redefine quality metrics in endoscopy using AI, which can be standardized and can reduce subjectivity in reporting quality metrics. Natural language processing-based algorithms can help with the data abstraction needed for reporting current quality metrics in GI endoscopy effortlessly. Statement 3: AI technologies can support smart endoscopy suites, which may help optimize workflows in the endoscopy suite, including automated documentation. Statement 4: Using AI and machine learning helps in predictive modeling, diagnosis, and prognostication. High-quality data with multidimensionality are needed for risk prediction, prognostication of specific clinical conditions, and their outcomes when using machine learning methods. Statement 5: Big data and cloud-based tools can help advance clinical research in gastroenterology. Multimodal data are key to understanding the maximal extent of the disease state and unlocking treatment options. Statement 6: Understanding how to evaluate AI algorithms in the gastroenterology literature and clinical trials is important for gastroenterologists, trainees, and researchers, and hence education efforts by GI societies are needed. Statement 7: Several challenges regarding integrating AI solutions into the clinical practice of endoscopy exist, including understanding the role of human-AI interaction. Transparency, interpretability, and explainability of AI algorithms play a key role in their clinical adoption in GI endoscopy. Developing appropriate AI governance, data procurement, and tools needed for the AI lifecycle is critical for the successful implementation of AI into clinical practice. Statement 8: For payment of AI in endoscopy, a thorough evaluation of the potential value proposition for AI systems may help guide purchasing decisions in endoscopy. Reliable cost-effectiveness studies to guide reimbursement are needed. Statement 9: Relevant clinical outcomes and performance metrics for AI in gastroenterology are currently not well defined. To improve the quality and interpretability of research in the field, steps need to be taken to define these evidence standards. Statement 10: A balanced view of AI technologies and active collaboration between the medical technology industry, computer scientists, gastroenterologists, and researchers are critical for the meaningful advancement of AI in gastroenterology. CONCLUSIONS: The consensus process led by the ASGE AI Task Force and experts from various disciplines has shed light on the potential of AI in endoscopy and gastroenterology. AI-based algorithms have shown promise in augmenting endoscopist performance, redefining quality metrics, optimizing workflows, and aiding in predictive modeling and diagnosis.
However, challenges remain in evaluating AI algorithms, ensuring transparency and interpretability, addressing governance and data procurement, determining payment models, defining relevant clinical outcomes, and fostering collaboration between stakeholders. Addressing these challenges while maintaining a balanced perspective is crucial for the meaningful advancement of AI in gastroenterology.

2.
Gastrointest Endosc ; 97(4): 646-654, 2023 04.
Article in English | MEDLINE | ID: mdl-36460087

ABSTRACT

BACKGROUND AND AIMS: We aimed to develop a computer-aided characterization system that could support the diagnosis of dysplasia in Barrett's esophagus (BE) on magnification endoscopy. METHODS: Videos were collected in high-definition magnification white-light and virtual chromoendoscopy with i-scan (Pentax Hoya, Japan) imaging in patients with dysplastic and nondysplastic BE (NDBE) from 4 centers. We trained a neural network with a ResNet101 architecture to classify frames as dysplastic or nondysplastic. The network was tested in 3 different scenarios: high-quality still images, all available video frames, and a selected sequence within each video. RESULTS: Fifty-seven patients, each with videos of magnification areas of BE (34 dysplasia, 23 NDBE), were included. Performance was evaluated by a leave-1-patient-out cross-validation method. In all, 60,174 (39,347 dysplasia, 20,827 NDBE) magnification video frames were used to train the network. The testing set included 49,726 i-scan-3/optical enhancement magnification frames. On 350 high-quality still images, the network achieved a sensitivity of 94%, specificity of 86%, and area under the receiver operator curve (AUROC) of 96%. On all 49,726 available video frames, the network achieved a sensitivity of 92%, specificity of 82%, and AUROC of 95%. On a selected sequence of frames per case (total of 11,471 frames), we used an exponentially weighted moving average of classifications on consecutive frames to characterize dysplasia. The network achieved a sensitivity of 92%, specificity of 84%, and AUROC of 96%. The mean assessment speed per frame was 0.0135 seconds (SD 0.006). CONCLUSION: Our network can characterize BE dysplasia with high accuracy and speed on high-quality magnification images and sequences of video frames, moving it toward real-time automated diagnosis.
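The sequence-level analysis above relies on temporally smoothing per-frame network outputs. As a rough illustration only (the smoothing factor and decision threshold below are arbitrary placeholders, not values from the study), an exponentially weighted moving average over per-frame dysplasia probabilities could look like this:

    import numpy as np

    def ewma_smooth(frame_probs, alpha=0.3):
        """Exponentially weighted moving average over per-frame dysplasia probabilities."""
        smoothed = np.empty_like(frame_probs, dtype=float)
        running = frame_probs[0]
        for i, p in enumerate(frame_probs):
            running = alpha * p + (1 - alpha) * running
            smoothed[i] = running
        return smoothed

    def classify_sequence(frame_probs, alpha=0.3, threshold=0.5):
        """Call a sequence dysplastic if the smoothed score ever crosses the threshold."""
        return bool(np.any(ewma_smooth(frame_probs, alpha) >= threshold))

    # Example: noisy per-frame CNN outputs for one magnification sequence
    probs = np.array([0.2, 0.4, 0.8, 0.9, 0.7, 0.85])
    print(classify_sequence(probs))  # True

Smoothing of this kind damps single-frame misclassifications, so a sequence is only called dysplastic when several consecutive frames agree.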


Subject(s)
Barrett Esophagus, Esophageal Neoplasms, Humans, Barrett Esophagus/diagnosis, Esophageal Neoplasms/diagnostic imaging, Esophagoscopy/methods, Hyperplasia, Computers
3.
J Gastroenterol Hepatol ; 38(5): 768-774, 2023 May.
Article in English | MEDLINE | ID: mdl-36652526

ABSTRACT

BACKGROUND AND AIM: Lack of visual recognition of colorectal polyps may lead to interval cancers. The mechanisms contributing to perceptual variation, particularly for subtle and advanced colorectal neoplasia, have scarcely been investigated. We aimed to evaluate visual recognition errors and provide novel mechanistic insights. METHODS: Eleven participants (seven trainees and four medical students) evaluated images from the UCL polyp perception dataset, containing 25 polyps, using eye-tracking equipment. Gaze errors were defined as those where the lesion was not observed according to eye-tracking technology. Cognitive errors occurred when lesions were observed but not recognized as polyps by participants. A video study was also performed including 39 subtle polyps, where polyp recognition performance was compared with a convolutional neural network. RESULTS: Cognitive errors occurred more frequently than gaze errors overall (65.6%), with a significantly higher proportion in trainees (P = 0.0264). In the video validation, the convolutional neural network detected significantly more polyps than trainees and medical students, with per-polyp sensitivities of 79.5%, 30.0%, and 15.4%, respectively. CONCLUSIONS: Cognitive errors were the most common reason for visual recognition errors. The impact of interventions such as artificial intelligence, particularly on different types of perceptual errors, needs further investigation including potential effects on learning curves. To facilitate future research, a publicly accessible visual perception colonoscopy polyp database was created.
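The distinction between gaze and cognitive errors reduces to whether any fixation ever landed on the lesion. A minimal sketch of that labelling logic, assuming fixations are already in image coordinates and the polyp is described by a bounding box (both assumptions for illustration, not details from the paper):

    from dataclasses import dataclass

    @dataclass
    class Fixation:
        x: float            # gaze position in image coordinates
        y: float
        duration_ms: float

    def inside(fix, bbox):
        """bbox = (x_min, y_min, x_max, y_max) around the polyp."""
        return bbox[0] <= fix.x <= bbox[2] and bbox[1] <= fix.y <= bbox[3]

    def error_type(fixations, polyp_bbox, recognised):
        """Label a polyp: 'gaze' error if never fixated, 'cognitive' if fixated but not recognised."""
        if recognised:
            return "detected"
        fixated = any(inside(f, polyp_bbox) for f in fixations)
        return "cognitive" if fixated else "gaze"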


Subject(s)
Colonic Polyps, Colorectal Neoplasms, Humans, Colonic Polyps/diagnosis, Colonic Polyps/pathology, Eye-Tracking Technology, Artificial Intelligence, Colonoscopy/methods, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/pathology
4.
Dig Endosc ; 35(5): 645-655, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36527309

ABSTRACT

OBJECTIVES: Convolutional neural networks (CNNs) for computer-aided diagnosis of polyps are often trained using high-quality still images in a single chromoendoscopy imaging modality, with sessile serrated lesions (SSLs) often excluded. This study developed a CNN from videos to classify polyps as adenomatous or nonadenomatous using standard narrow-band imaging (NBI) and NBI-near focus (NBI-NF) and created a publicly accessible polyp video database. METHODS: We trained a CNN with 16,832 high- and moderate-quality frames from 229 polyp videos (56 SSLs). It was evaluated with 222 polyp videos (36 SSLs) across two test-sets. Test-set I consists of 14,320 frames (157 polyps, 111 diminutive). Test-set II, which is publicly accessible, consists of 3317 video frames (65 polyps, 41 diminutive) and was benchmarked against three expert and three nonexpert endoscopists. RESULTS: Sensitivity for adenoma characterization was 91.6% in test-set I and 89.7% in test-set II. Specificity was 91.9% and 88.5%, respectively. Sensitivity for diminutive polyps was 89.9% and 87.5%; specificity 90.5% and 88.2%. In NBI-NF, sensitivity was 89.4% and 89.5%, with a specificity of 94.7% and 83.3%. In NBI, sensitivity was 85.3% and 91.7%, with a specificity of 87.5% and 90.0%, respectively. The CNN achieved the preservation and incorporation of valuable endoscopic innovations (PIVI)-1 and PIVI-2 thresholds for each test-set. In the benchmarking of test-set II, the CNN was significantly more accurate than nonexperts (13.8% difference [95% confidence interval 3.2-23.6], P = 0.01) with no significant difference from experts. CONCLUSIONS: A single CNN can differentiate adenomas from SSLs and hyperplastic polyps in both NBI and NBI-NF. A publicly accessible NBI polyp video database was created and benchmarked.


Asunto(s)
Adenoma , Pólipos del Colon , Neoplasias Colorrectales , Aprendizaje Profundo , Humanos , Pólipos del Colon/diagnóstico por imagen , Pólipos del Colon/patología , Colonoscopía/métodos , Neoplasias Colorrectales/patología , Adenoma/diagnóstico por imagen , Adenoma/patología , Imagen de Banda Estrecha/métodos
5.
Dig Endosc ; 34(4): 862-869, 2022 May.
Article in English | MEDLINE | ID: mdl-34748665

ABSTRACT

OBJECTIVES: There is uncertainty regarding the efficacy of artificial intelligence (AI) software in detecting advanced subtle neoplasia, particularly flat lesions and sessile serrated lesions (SSLs), due to their low prevalence in testing datasets and prospective trials. This has been highlighted as a top research priority for the field. METHODS: An AI algorithm was evaluated on four video test datasets containing 173 polyps (35,114 polyp-positive frames and 634,988 polyp-negative frames) specifically enriched with flat lesions and SSLs, including a challenging dataset containing subtle advanced neoplasia. The challenging dataset was also evaluated by eight endoscopists (four independent, four trainees, according to the Joint Advisory Group on gastrointestinal endoscopy [JAG] standards in the UK). RESULTS: In the first two video datasets, the algorithm achieved per-polyp sensitivities of 100% and 98.9%. Per-frame sensitivities were 84.1% and 85.2%. In the subtle dataset, the algorithm detected a significantly higher number of polyps (P < 0.0001) compared with JAG-independent and trainee endoscopists, achieving per-polyp sensitivities of 79.5%, 37.2% and 11.5%, respectively. Furthermore, when considering subtle polyps detected by both the algorithm and at least one endoscopist, the AI detected polyps significantly faster on average. CONCLUSIONS: The AI-based algorithm achieved high per-polyp sensitivities for advanced colorectal neoplasia, including flat lesions and SSLs, outperforming both JAG-independent endoscopists and trainees on a very challenging dataset containing subtle lesions that could easily have been overlooked and contributed to interval colorectal cancer. Further prospective trials should evaluate AI for detecting subtle advanced neoplasia in populations at higher risk for colorectal cancer.
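Per-polyp and per-frame sensitivity summarise the same frame-level detections at two granularities. A small sketch of how the two metrics are commonly computed, assuming the usual convention that a polyp counts as detected if the detector fires on at least one of its frames (the abstract does not state the paper's exact counting rule):

    def per_frame_sensitivity(frame_detected):
        """frame_detected: list of booleans, one per polyp-positive frame in the dataset."""
        return sum(frame_detected) / len(frame_detected)

    def per_polyp_sensitivity(polyp_frame_detections):
        """polyp_frame_detections: dict mapping polyp id -> list of per-frame booleans.
        A polyp counts as detected if the detector fires on at least one of its frames."""
        detected = sum(1 for frames in polyp_frame_detections.values() if any(frames))
        return detected / len(polyp_frame_detections)

    # Example: two polyps, one detected on a single frame, one never detected -> 0.5
    print(per_polyp_sensitivity({"p1": [False, True, False], "p2": [False, False]}))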


Asunto(s)
Pólipos del Colon , Neoplasias Colorrectales , Algoritmos , Inteligencia Artificial , Pólipos del Colon/diagnóstico , Pólipos del Colon/patología , Colonoscopía , Neoplasias Colorrectales/diagnóstico , Neoplasias Colorrectales/patología , Humanos
6.
Magn Reson Med ; 81(2): 1066-1079, 2019 02.
Article in English | MEDLINE | ID: mdl-30230609

ABSTRACT

PURPOSE: Pre-interventional assessment of atrial wall thickness (AWT) and of subject-specific variations in the anatomy of the pulmonary veins may affect the success rate of RF ablation procedures for the treatment of atrial fibrillation (AF). This study introduces a novel non-contrast-enhanced 3D whole-heart sequence providing simultaneous information on the cardiac anatomy, including both the arterial and the venous system (bright-blood volume), and on AWT (black-blood volume). METHODS: The proposed MT-prepared bright-blood and black-blood phase-sensitive inversion recovery (PSIR) BOOST framework acquires 2 differently weighted bright-blood volumes in an interleaved fashion. The 2 data sets are then combined in a PSIR-like reconstruction to obtain a complementary black-blood volume for atrial wall visualization. Image-based navigation and non-rigid respiratory motion correction are exploited for 100% scan efficiency and predictable acquisition time. The proposed approach was evaluated in 11 healthy subjects and 4 patients with AF scheduled for RF ablation. RESULTS: Improved depiction of the cardiac venous system was obtained in comparison to a T2-prepared BOOST implementation, and quantified AWT was shown to be in good agreement with previously reported measurements obtained in healthy subjects (right atrium AWT: 2.54 ± 0.87 mm, left atrium AWT: 2.51 ± 0.61 mm). Feasibility of MT-prepared BOOST acquisitions in patients with AF was demonstrated. CONCLUSION: The proposed motion-corrected MT-prepared BOOST sequence provides simultaneous non-contrast pulmonary vein depiction as well as black-blood visualization of atrial walls. The proposed sequence has a large spectrum of potential clinical applications, and further validation in patients is warranted.


Asunto(s)
Arterias/patología , Fibrilación Atrial/diagnóstico por imagen , Corazón/diagnóstico por imagen , Imagenología Tridimensional/métodos , Venas Pulmonares/diagnóstico por imagen , Adulto , Angiografía , Ablación por Catéter , Medios de Contraste/química , Vasos Coronarios/diagnóstico por imagen , Femenino , Voluntarios Sanos , Atrios Cardíacos/anatomía & histología , Humanos , Procesamiento de Imagen Asistido por Computador , Imagen por Resonancia Magnética , Masculino , Movimiento (Física) , Ondas de Radio , Respiración
7.
Biomed Opt Express ; 14(2): 593-607, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36874484

ABSTRACT

Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance during a procedure can vary, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, showing an increase in performance and robustness in extensive experiments on both internal and openly available benchmark datasets.

8.
Biomed Opt Express ; 14(6): 2629-2644, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37342682

ABSTRACT

Colorectal cancer is the third most common type of cancer, with almost two million new cases worldwide. It develops from neoplastic polyps, most commonly adenomas, which can be removed during colonoscopy to prevent colorectal cancer from occurring. Unfortunately, up to a quarter of polyps are missed during colonoscopies. Studies have shown that polyp detection during a procedure correlates with the time spent searching for polyps, called the withdrawal time. The different phases of the procedure (cleaning, therapeutic, and exploration phases) make it difficult to precisely measure the withdrawal time, which should only include the exploration phase. Separating this from the other phases requires manual time measurement during the procedure, which is rarely performed. In this study, we propose a method to automatically detect the cecum, which is the start of the withdrawal phase, and to classify the different phases of the colonoscopy, which allows precise estimation of the final withdrawal time. This is achieved using a ResNet for both detection and classification, trained with two public datasets and a private dataset composed of 96 full procedures. Out of 19 testing procedures, 18 had their withdrawal time correctly estimated, with a mean error of 5.52 seconds per minute per procedure.
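Once each frame has a phase label and a cecum detector output, the withdrawal time estimate is essentially a count of exploration-phase frames after the cecum is first reached. A minimal sketch under assumed label names and frame rate (both illustrative, not taken from the paper):

    def withdrawal_time_seconds(frame_phases, frame_is_cecum, fps=25):
        """frame_phases: list of per-frame labels, e.g. 'cleaning', 'therapeutic', 'exploration'.
        frame_is_cecum: list of per-frame booleans from the cecum detector.
        The estimate counts exploration-phase frames after the cecum is first reached."""
        if True not in frame_is_cecum:
            return None                                # cecum never detected
        start = frame_is_cecum.index(True)             # first cecum frame marks start of withdrawal
        exploration_frames = sum(1 for phase in frame_phases[start:] if phase == "exploration")
        return exploration_frames / fps

    # Example: 3 exploration frames after the cecum at 25 fps -> 0.12 s
    phases = ["cleaning", "exploration", "exploration", "therapeutic", "exploration"]
    cecum = [False, True, False, False, False]
    print(withdrawal_time_seconds(phases, cecum))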

9.
Med Image Anal ; 82: 102625, 2022 11.
Article in English | MEDLINE | ID: mdl-36209637

ABSTRACT

Colonoscopy is the gold standard for early diagnosis and pre-emptive treatment of colorectal cancer by detecting and removing colonic polyps. Deep learning approaches to polyp detection have shown potential for enhancing polyp detection rates. However, the majority of these systems are developed and evaluated on static images from colonoscopies, whilst in clinical practice the treatment is performed on a real-time video feed. Non-curated video data remains a challenge, as it contains low-quality frames when compared to still, selected images often obtained from diagnostic records. Nevertheless, it also embeds temporal information that can be exploited to increase prediction stability. A hybrid 2D/3D convolutional neural network architecture for polyp segmentation is presented in this paper. The network is used to improve polyp detection by encompassing spatial and temporal correlation of the predictions while preserving real-time detections. Extensive experiments show that the hybrid method outperforms a 2D baseline. The proposed architecture is validated on videos from 46 patients and on the publicly available SUN polyp database. A higher performance and increased generalisability indicate that real-world clinical implementations of automated polyp detection can benefit from the hybrid algorithm and the inclusion of temporal information.
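The hybrid idea is to compute per-frame 2D features and then let a 3D convolution mix information across neighbouring frames. The toy module below illustrates that pattern in PyTorch; it is not the authors' architecture, and the layer sizes are arbitrary:

    import torch
    import torch.nn as nn

    class Hybrid2D3D(nn.Module):
        """Toy hybrid block: per-frame 2D features, then a 3D convolution over the time axis
        so that neighbouring frames influence the segmentation of the current frame."""
        def __init__(self, in_ch=3, feat=16, out_ch=1):
            super().__init__()
            self.spatial = nn.Sequential(
                nn.Conv2d(in_ch, feat, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(feat, feat, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.temporal = nn.Conv3d(feat, out_ch, kernel_size=(3, 3, 3), padding=1)

        def forward(self, clip):                         # clip: (batch, time, channels, H, W)
            b, t, c, h, w = clip.shape
            feats = self.spatial(clip.reshape(b * t, c, h, w))
            feats = feats.reshape(b, t, -1, h, w).permute(0, 2, 1, 3, 4)  # (b, feat, t, h, w)
            return self.temporal(feats)                  # per-frame segmentation logits

    # masks = Hybrid2D3D()(torch.rand(1, 5, 3, 64, 64))  # -> (1, 1, 5, 64, 64)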


Asunto(s)
Pólipos del Colon , Colonoscopía , Humanos , Colonoscopía/métodos , Pólipos del Colon/diagnóstico por imagen , Redes Neurales de la Computación , Algoritmos , Bases de Datos Factuales
10.
United European Gastroenterol J ; 10(6): 528-537, 2022 07.
Article in English | MEDLINE | ID: mdl-35521666

ABSTRACT

BACKGROUND AND AIMS: Seattle protocol biopsies for Barrett's Esophagus (BE) surveillance are labour intensive, with low compliance. Dysplasia detection rates vary, leading to missed lesions. This can potentially be offset with computer-aided detection. We have developed convolutional neural networks (CNNs) to identify areas of dysplasia and where to target biopsy. METHODS: In total, 119 videos were collected in high-definition white light and optical chromoendoscopy with i-scan (Pentax Hoya, Japan) imaging in patients with dysplastic and non-dysplastic BE (NDBE). We trained an indirectly supervised CNN to classify images as dysplastic/non-dysplastic using whole-video annotations to minimise selection bias and maximise accuracy. The CNN was trained using 148,936 video frames (31 dysplastic patients, 31 NDBE, two normal esophagus), validated on 25,161 images from 11 patient videos and tested on 264 i-scan-1 images from 28 dysplastic and 16 NDBE patients, which included expert delineations. To localise targeted biopsies/delineations, a second directly supervised CNN was generated based on expert delineations of 94 dysplastic images from 30 patients. This was tested on 86 i-scan-1 images from 28 dysplastic patients. FINDINGS: The indirectly supervised CNN achieved a per-image sensitivity in the test set of 91%, specificity of 79%, and area under the receiver operator curve of 93% for detecting dysplasia. Per-lesion sensitivity was 100%. Mean assessment speed was 48 frames per second (fps). Of the targeted biopsy predictions, 97% matched expert and histological assessment at 56 fps. The artificial intelligence system performed better than six endoscopists. INTERPRETATION: Our CNNs classify and localise dysplastic Barrett's Esophagus, potentially supporting endoscopists during surveillance.


Asunto(s)
Esófago de Barrett , Neoplasias Esofágicas , Inteligencia Artificial , Esófago de Barrett/diagnóstico por imagen , Esófago de Barrett/patología , Biopsia/métodos , Neoplasias Esofágicas/diagnóstico por imagen , Neoplasias Esofágicas/patología , Humanos , Redes Neurales de la Computación
11.
Ann Surg ; 254(2): 257-66, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21691197

ABSTRACT

OBJECTIVE: The aims of this study were to (1) describe the visual attention strategies employed by surgeons that are associated with high performance in reorientation and (2) identify key structures guiding attention deployment of the surgeon in the process of self-orientation in common clinical natural orifice translumenal endoscopic surgery (NOTES) scenarios. BACKGROUND: Disorientation has been identified as one of the major barriers to be overcome before widespread clinical NOTES uptake. Understanding disorientation requires description of key perceptual-motor factors leading to disorientation, assessment of their relative impact, and quantification of navigation performance. METHODS: Twenty-one surgeons were shown a series of 8 images acquired during human NOTES operations from the flexible endoscope from different perspectives to induce disorientation. Gaze behavior was recorded using an eye tracker as the subjects were asked to establish the image orientation. Main outcome measures were times taken to establish orientation, eye-tracking parameters, and fixation sequences on organs and structures/regions of interest (ROIs). RESULTS: High-performance subjects had fewer fixations and a lower normalized dwell time per ROI compared with others, suggesting a more structured and focused approach to orientation. Orientation strategies associated with high performance were described using a validated algorithm for comparing visual reorientation behavior, and the amount of visual attention on individual ROIs in each scenario was quantified. Key areas of organs and structures during reorientation were illustrated using dwell-time-normalized visual maps. CONCLUSIONS: Targeted orientation strategies revealed in this study are expected to aid in decreasing the learning curve associated with NOTES and in increasing performance even for experienced surgeons and gastroenterologists. Crucially, these data can provide guidance for designing orientation-friendly NOTES platforms.
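Normalized dwell time per ROI, as used above, is each region's share of the total fixation duration. A tiny sketch, assuming fixations have already been assigned to ROI labels (an assumption for illustration):

    def normalised_dwell_times(fixations):
        """fixations: list of (roi_label, duration_ms) pairs, one per fixation.
        Returns each ROI's share of the total dwell time (values sum to 1)."""
        totals = {}
        for roi, duration in fixations:
            totals[roi] = totals.get(roi, 0.0) + duration
        grand_total = sum(totals.values())
        return {roi: t / grand_total for roi, t in totals.items()}

    # Example output: {'liver': 0.6, 'stomach': 0.4}
    print(normalised_dwell_times([("liver", 300), ("stomach", 200)]))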


Asunto(s)
Cirugía Endoscópica por Orificios Naturales/métodos , Cavidad Peritoneal/anatomía & histología , Cavidad Peritoneal/cirugía , Percepción Espacial , Percepción Visual , Algoritmos , Atención , Diseño de Equipo , Femenino , Humanos , Internado y Residencia , Masculino , Persona de Mediana Edad , Cirugía Endoscópica por Orificios Naturales/educación , Cirugía Endoscópica por Orificios Naturales/instrumentación , Orientación , Estadísticas no Paramétricas
12.
Front Cardiovasc Med ; 8: 655252, 2021.
Article in English | MEDLINE | ID: mdl-34277724

ABSTRACT

Objectives: The aim of this study is to develop a scar detection method for routine computed tomography angiography (CTA) imaging using deep convolutional neural networks (CNNs), which relies solely on anatomical information as input and is compatible with existing clinical workflows. Background: Identifying cardiac patients with scar tissue is important for assisting diagnosis and guiding interventions. Late gadolinium enhancement (LGE) magnetic resonance imaging (MRI) is the gold standard for scar imaging; however, there are common instances where it is contraindicated. CTA is an alternative imaging modality that has fewer contraindications and is faster than cardiovascular magnetic resonance imaging, but is unable to reliably image scar. Methods: A dataset of LGE MRI (200 patients, 83 with scar) was used to train and validate a CNN to detect ischemic scar slices using segmentation masks as input to the network. MRIs were segmented to produce 3D left ventricle meshes, which were sampled at points along the short axis to extract anatomical masks, with scar labels from LGE as ground truth. The trained CNN was tested with an independent CTA dataset (25 patients, with ground truth established with paired LGE MRI). Automated segmentation was performed to provide the same input format of anatomical masks for the network. The CNN was compared against manual reading of the CTA dataset by 3 experts. Results: A cross-validated accuracy of 84.7% (AUC: 0.896) was achieved for detecting scar slices in the left ventricle on the MRI data. The trained network was then tested on the CTA-derived data, with no further training, where it achieved an accuracy of 88.3% (AUC: 0.901). The automated pipeline outperformed the manual reading by clinicians. Conclusion: Automatic ischemic scar detection can be performed from a routine cardiac CTA, without any scar-specific imaging or contrast agents. This requires only a single acquisition in the cardiac cycle. In a clinical setting, with near-zero additional cost, scar presence could be detected to triage images, reduce reading times, and guide clinical decision-making.

13.
World J Gastroenterol ; 26(38): 5784-5796, 2020 Oct 14.
Article in English | MEDLINE | ID: mdl-33132634

ABSTRACT

The past decade has seen significant advances in endoscopic imaging and optical enhancements to aid early diagnosis. There is still a treatment gap due to the underdiagnosis of lesions of the oesophagus. Computer-aided diagnosis may play an important role in the coming years as an adjunct to endoscopists in the early detection and diagnosis of early oesophageal cancers, so that curative endoscopic therapy can be offered. Research in this area of artificial intelligence is expanding, and the future looks promising. In this article we review current advances in artificial intelligence in the oesophagus and future directions for development.


Asunto(s)
Esófago de Barrett , Neoplasias Esofágicas , Inteligencia Artificial , Endoscopía , Neoplasias Esofágicas/diagnóstico por imagen , Humanos
14.
ESC Heart Fail ; 6(5): 909-920, 2019 10.
Article in English | MEDLINE | ID: mdl-31400060

ABSTRACT

Despite medical advancements, the prognosis of patients with heart failure remains poor. While echocardiography and cardiac magnetic resonance imaging remain at the forefront of diagnosing and monitoring patients with heart failure, cardiac computed tomography (CT) has largely been considered to have a limited role. With the advancements in scanner design, technology, and computer processing power, cardiac CT is now emerging as a valuable adjunct to clinicians managing patients with heart failure. In the current manuscript, we review the current applications of cardiac CT to patients with heart failure and also the emerging areas of research where its clinical utility is likely to extend into the realm of treatment, procedural planning, and advanced heart failure therapy implementation.


Asunto(s)
Cardiomiopatías/diagnóstico por imagen , Insuficiencia Cardíaca/diagnóstico por imagen , Insuficiencia Cardíaca/fisiopatología , Tomografía Computarizada por Rayos X/métodos , Bioingeniería/instrumentación , Electrofisiología Cardíaca/instrumentación , Cardiomiopatías/patología , Ecocardiografía/métodos , Femenino , Insuficiencia Cardíaca/mortalidad , Insuficiencia Cardíaca/terapia , Humanos , Imagen por Resonancia Magnética/métodos , Imagen de Perfusión Miocárdica/métodos , Pronóstico , Volumen Sistólico/fisiología
15.
Med Phys ; 45(11): 5066-5079, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30221493

ABSTRACT

PURPOSE: Catheters and guidewires are used extensively in cardiac catheterization procedures such as heart arrhythmia treatment (ablation), angioplasty, and congenital heart disease treatment. Detecting their positions in fluoroscopic X-ray images is important for several clinical applications, for example, motion compensation, coregistration between 2D and 3D imaging modalities, and 3D object reconstruction. METHODS: In the generalized framework, a multiscale vessel enhancement filter is first used to enhance the visibility of wire-like structures in the X-ray images. After applying an adaptive binarization method, the centerlines of wire-like objects are extracted. Finally, the catheters and guidewires are detected as smooth paths reconstructed from the centerlines of the target wire-like objects. To classify electrode catheters, which are mainly used in electrophysiology procedures, additional steps are proposed. First, a blob detection method, embedded in the vessel enhancement filter with no additional computational cost, localizes electrode positions on the catheters. The type of electrode catheter can then be recognized from the number of electrodes and the shape created by the series of electrodes. Furthermore, for detecting guiding catheters or guidewires, a localized machine learning algorithm is added to the framework to distinguish target wire objects from other wire-like artifacts. The proposed framework was tested on a total of 10,624 images from 102 image sequences acquired in 63 clinical cases. RESULTS: Detection errors for the coronary sinus (CS) catheter, lasso catheter ring, and lasso catheter body were 0.56 ± 0.28 mm, 0.64 ± 0.36 mm, and 0.66 ± 0.32 mm, respectively, with success rates of 91.4%, 86.3%, and 84.8%. The detection error for guidewires and guiding catheters was 0.62 ± 0.48 mm, with a success rate of 83.5%. CONCLUSION: The proposed computational framework does not require any user interaction or prior models, and it can detect multiple catheters or guidewires simultaneously and in real time. The accuracy of the proposed framework is sub-millimetre, and the methods are robust to low-dose X-ray fluoroscopic images, which are mainly used during procedures to keep the radiation dose low.
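The enhance-binarize-skeletonise front end of such a pipeline can be approximated with off-the-shelf filters. The sketch below uses scikit-image's Frangi filter as a stand-in for the multiscale vessel enhancement filter and local thresholding for the adaptive binarization; it illustrates the general idea only, not the authors' implementation:

    import numpy as np
    from skimage.filters import frangi, threshold_local
    from skimage.morphology import skeletonize

    def wire_centerlines(xray, block_size=51):
        """Rough analogue of the enhance -> binarize -> centerline steps:
        a multiscale tubular-structure filter, adaptive (local) thresholding,
        then skeletonisation to obtain candidate wire centerlines."""
        enhanced = frangi(xray)                                   # emphasise thin, wire-like structures
        binary = enhanced > threshold_local(enhanced, block_size)
        return skeletonize(binary)                                # 1-pixel-wide centerlines

    # centerlines = wire_centerlines(fluoro_frame)  # fluoro_frame: 2-D grayscale numpy array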


Asunto(s)
Cateterismo Cardíaco/instrumentación , Catéteres Cardíacos , Modelos Teóricos , Imagenología Tridimensional , Factores de Tiempo
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 592-595, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440466

ABSTRACT

Congestive heart failure is associated with significant morbidity and mortality, as first-line treatments are not always effective in improving symptoms and quality of life. Furthermore, 30-50% of patients who are treated with cardiac resynchronization therapy (CRT), a minimally invasive intervention, do not respond when assessed by objective criteria such as cardiac remodeling. Positioning of the left ventricular lead in the latest-activating myocardial region is associated with the best outcome. Cardiac magnetic resonance (CMR) imaging can detect scar tissue and interventricular dyssynchrony, improving the outcome of CRT. However, MR is currently not a standard modality for CRT due to its cost and limited availability. This paper explores a novel method that exploits the interventional X-ray fluoroscopy setup in CRT procedures to gain information on mechanical activation of the myocardium by tracking the movement of vessels overlying the left ventricular myocardium. Fluoroscopic images were labelled to track branch movement and determine the motion along the main principal component associated with cardiac motion, in order to optimize lead placement in CRT. A comparison between MR- and fluoroscopy-derived mechanical activation was performed on 9 datasets, showing more than 66% agreement in 8 cases.
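Determining the motion along the main principal component amounts to projecting each tracked branch position onto the dominant axis of its own trajectory. A small sketch of that projection with NumPy (the tracking itself, i.e. obtaining the per-frame branch positions, is assumed):

    import numpy as np

    def motion_along_principal_axis(track):
        """track: (n_frames, 2) array of a labelled vessel branch's (x, y) positions over time.
        Returns the per-frame displacement projected onto the dominant motion direction."""
        centred = track - track.mean(axis=0)
        # principal axes of the 2-D point cloud via SVD
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        principal_axis = vt[0]                    # direction of largest motion variance
        return centred @ principal_axis           # signed excursion along that axis

    # excursion = motion_along_principal_axis(np.array(branch_positions))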


Asunto(s)
Terapia de Resincronización Cardíaca , Fluoroscopía , Ventrículos Cardíacos/diagnóstico por imagen , Corazón/diagnóstico por imagen , Dispositivos de Terapia de Resincronización Cardíaca , Cicatriz , Corazón/fisiopatología , Insuficiencia Cardíaca/fisiopatología , Ventrículos Cardíacos/fisiopatología , Humanos , Imagen por Resonancia Magnética/métodos , Miocardio/patología
17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 1111-1114, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440584

ABSTRACT

The use of implantable cardiac devices has increased in the last 30 years. Cardiac resynchronisation therapy (CRT) is a procedure that involves implanting a coin-sized pacemaker to reverse heart failure. The pacemaker electrode leads are implanted into cardiac myocardial tissue. The optimal site for implantation is highly patient-specific. Most implanters use empirical placement of the lead. One region identified as having a poor response rate is myocardial tissue with transmural scar. Few studies precisely measure the transmurality of scar tissue in the left ventricle (LV), and most lack proper validation of their transmurality measurement technique. This study presents an image analysis technique for computing scar transmurality from late gadolinium enhancement MRI. The technique is validated using phantoms under a CRT image guidance system. The study concludes that scar transmurality can be accurately measured in certain situations and that validation with phantoms is important.


Asunto(s)
Terapia de Resincronización Cardíaca , Cicatriz , Medios de Contraste , Análisis de Datos , Gadolinio , Insuficiencia Cardíaca , Humanos , Imagen por Resonancia Magnética , Resultado del Tratamiento
18.
Int J Comput Assist Radiol Surg ; 13(8): 1141-1149, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29754382

ABSTRACT

PURPOSE: In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by involving preoperative models. Multimodality 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast levels, resolution, dimensionality, and field of view. Furthermore, the same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases by introducing constraints or identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image-to-image methods would require large multimodal datasets and ground truth for each target application. METHODS: This paper proposes a model-to-image registration approach instead, because it is common in image-guided interventions to create anatomical models for diagnosis, planning, or guidance prior to procedures. An imitation learning-based method, trained on 702 datasets, is used to register preoperative models to intraoperative X-ray images. RESULTS: Accuracy is demonstrated on cardiac models and artificial X-rays generated from CTs. The registration error was [Formula: see text] on 1000 test cases, superior to that of manual ([Formula: see text]) and gradient-based ([Formula: see text]) registration. High robustness is shown in 19 clinical CRT cases. CONCLUSION: Besides the proposed method's feasibility in a clinical environment, the evaluation has shown good accuracy and high robustness, indicating that it could be applied in image-guided interventions.


Asunto(s)
Terapia de Resincronización Cardíaca/métodos , Corazón/diagnóstico por imagen , Imagenología Tridimensional , Aprendizaje Automático , Modelos Anatómicos , Algoritmos , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Imagen Multimodal/métodos , Reproducibilidad de los Resultados
19.
Med Image Anal ; 42: 160-172, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28803216

ABSTRACT

A key component of image-guided interventions is the registration of preoperative and intraoperative images. Classical registration approaches rely on cross-modality information; however, in modalities such as MRI and X-ray there may not be sufficient cross-modality information. This paper proposes a fundamentally different registration approach that uses adjacent anatomical structures with superabundant vessel reconstruction and dynamic outlier rejection. In the targeted clinical scenario of cardiac resynchronization therapy (CRT) delivery, preoperative non-contrast-enhanced MRI is registered to intraoperative contrast-enhanced X-ray fluoroscopy. The adjacent anatomical structures are the left ventricle (LV) from MRI and the coronary veins reconstructed from two contrast-enhanced X-ray images. The novel concept of superabundant vessel reconstruction is introduced to bypass the standard reconstruction problem of establishing one-to-one correspondences. Furthermore, a new dynamic outlier rejection method is proposed to enable globally optimal point set registration. The proposed approach has been qualitatively and quantitatively evaluated on phantom data, clinical CT angiography with ground truth, and clinical CRT data. A novel evaluation method is proposed for clinical CRT data based on previously implanted artificial aortic and mitral valves. The registration accuracy in 3D was 2.94 mm for the aortic and 3.86 mm for the mitral valve. The results are below the required accuracy identified by clinical partners to be the half-segment size (16.35 mm) of a standard American Heart Association (AHA) 16-segment model of the LV.
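The abstract does not spell out the registration algorithm, but the general pattern of point set registration with outlier rejection can be illustrated with a trimmed iterative-closest-point loop: at each iteration the worst-matching correspondences are discarded before the rigid transform is re-estimated. This is a generic stand-in for illustration, not the globally optimal method or the dynamic rejection scheme proposed in the paper:

    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_fit(src, dst):
        """Least-squares rotation R and translation t mapping src points onto dst (Kabsch)."""
        sc, dc = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, dc - R @ sc

    def icp_with_outlier_rejection(src, dst, iters=30, keep_fraction=0.8):
        """Toy ICP: at each iteration keep only the best-matching fraction of correspondences,
        a crude stand-in for dynamic outlier rejection."""
        tree = cKDTree(dst)
        cur = src.copy()
        for _ in range(iters):
            dists, idx = tree.query(cur)
            keep = dists <= np.quantile(dists, keep_fraction)   # discard the worst matches
            R, t = rigid_fit(cur[keep], dst[idx[keep]])
            cur = cur @ R.T + t
        return cur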


Asunto(s)
Terapia de Resincronización Cardíaca/métodos , Vasos Coronarios/diagnóstico por imagen , Válvulas Cardíacas/diagnóstico por imagen , Procesamiento de Imagen Asistido por Computador/métodos , Imagenología Tridimensional , Algoritmos , Puntos Anatómicos de Referencia , Fluoroscopía , Humanos , Imagen por Resonancia Magnética , Modelos Anatómicos , Fantasmas de Imagen
20.
IEEE Trans Med Imaging ; 36(11): 2366-2375, 2017 11.
Article in English | MEDLINE | ID: mdl-28678701

ABSTRACT

Patients with drug-refractory heart failure can greatly benefit from cardiac resynchronization therapy (CRT). A CRT device can resynchronize the contractions of the left ventricle (LV) leading to reduced mortality. Unfortunately, 30%-50% of patients do not respond to treatment when assessed by objective criteria such as cardiac remodeling. A significant contributing factor is the suboptimal placement of the LV lead. It has been shown that placing this lead away from scar and at the point of latest mechanical activation can improve response rates. This paper presents a comprehensive and highly automated system that uses scar and mechanical activation to plan and guide CRT procedures. Standard clinical preoperative magnetic resonance imaging is used to extract scar and mechanical activation information. The data are registered to a single 3-D coordinate system and visualized in novel 2-D and 3-D American Heart Association plots enabling the clinician to select target segments. During the procedure, the planning information is overlaid onto live fluoroscopic images to guide lead deployment. The proposed platform has been used during 14 CRT procedures and validated on synthetic, phantom, volunteer, and patient data.


Asunto(s)
Terapia de Resincronización Cardíaca/métodos , Imagenología Tridimensional/métodos , Imagen por Resonancia Magnética/métodos , Terapia Asistida por Computador/métodos , Algoritmos , Cicatriz/diagnóstico por imagen , Cicatriz/fisiopatología , Corazón/diagnóstico por imagen , Corazón/fisiopatología , Humanos , Fantasmas de Imagen