Results 1 - 9 of 9
1.
Eur Radiol ; 33(3): 1707-1718, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36307551

ABSTRACT

OBJECTIVES: Time-resolved, 2D phase-contrast MRI (2D-CINE-PC-MRI) enables in vivo blood flow analysis. However, accurate vessel contour delineation (VCD) is required to achieve reliable results. We sought to compare manual analysis (MA) with a deep learning (DL) application for fully automated VCD and flow quantification, and with corrected semi-automated analysis (corSAA). METHODS: We included 97 consecutive patients (age = 52.9 ± 16 years, 41 female) with 2D-CINE-PC-MRI imaging on 1.5T MRI systems at the sinotubular junction (STJ); 28/97 also received 2D-CINE-PC at the main pulmonary artery (PA). A cardiovascular radiologist performed MA (reference) and corSAA (built-in tool) in commercial software for all cardiac time frames (median: 20; total contours per analysis: 2358 STJ, 680 PA). DL analysis automatically performed VCD, followed by quantification of net flow (NF) and peak velocity (PV). Contours were compared using Dice similarity coefficients (DSC). Discrepant cases (> ± 10 mL or > ± 10 cm/s) were reviewed in detail. RESULTS: DL was successfully applied to 97% (121/125) of the 2D-CINE-PC-MRI series (STJ: 95/97, 98%; PA: 26/28, 93%). Compared to MA, mean DSCs were 0.91 ± 0.02 (DL) and 0.94 ± 0.02 (corSAA) at the STJ, and 0.85 ± 0.08 (DL) and 0.93 ± 0.02 (corSAA) at the PA, indicating good to excellent DL performance. Flow quantification revealed similar NF between methods at the STJ (p = 0.48) and PA (p > 0.05), while PV assessment differed significantly (STJ: p < 0.001; PA: p = 0.04). A detailed review showed that noisy voxels in MA and corSAA impacted PV results. Overall, DL analysis was accurate compared to human assessments in 113/121 (93.4%) cases. CONCLUSIONS: Fully automated DL analysis of 2D-CINE-PC-MRI provided flow quantification at the STJ and PA at expert level in > 93% of cases, with results available instantaneously. KEY POINTS: • Deep learning performed flow quantification on clinical 2D-CINE-PC series at the sinotubular junction and pulmonary artery at expert level in > 93% of cases. • Location detection and contouring of the vessel boundaries were performed fully automatically, with results available instantaneously, compared to human assessment, which takes approximately three minutes per location. • The evaluated tool indicates usability in daily practice.
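A minimal sketch (not the study's implementation) of the two quantities this abstract compares: the Dice similarity coefficient between two vessel masks, and net flow obtained by integrating a phase-contrast velocity map over the lumen mask. The array shapes, pixel spacing, and toy data are illustrative assumptions.

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def net_flow_ml_per_cycle(velocity_cm_s: np.ndarray, mask: np.ndarray,
                          pixel_area_cm2: float, dt_s: float) -> float:
    """Integrate through-plane velocity over the lumen and over the cardiac cycle.
    velocity_cm_s: (time, rows, cols) phase-contrast velocity map.
    mask:          (time, rows, cols) lumen segmentation per time frame.
    Returns net flow in millilitres per cycle (1 cm^3 = 1 mL)."""
    per_frame_flow = (velocity_cm_s * mask).sum(axis=(1, 2)) * pixel_area_cm2  # cm^3/s
    return float(per_frame_flow.sum() * dt_s)

# Toy example: two slightly different circular contours of the same vessel.
yy, xx = np.mgrid[:64, :64]
manual = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
automated = (yy - 33) ** 2 + (xx - 32) ** 2 < 14 ** 2
print(f"DSC = {dice(manual, automated):.3f}")
```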


Subject(s)
Deep Learning, Humans, Female, Adult, Middle Aged, Aged, Blood Flow Velocity/physiology, Magnetic Resonance Imaging/methods, Magnetic Resonance Imaging, Cine/methods, Hemodynamics
2.
J Cardiovasc Magn Reson ; 24(1): 27, 2022 04 11.
Article in English | MEDLINE | ID: mdl-35410226

ABSTRACT

BACKGROUND: Theoretically, artificial intelligence can provide an accurate automatic solution for measuring right ventricular (RV) ejection fraction (RVEF) from cardiovascular magnetic resonance (CMR) images, despite the complex RV geometry. However, in our recent study, commercially available deep learning (DL) algorithms for RVEF quantification performed poorly in some patients. The current study was designed to test the hypothesis that quantification of RV function could be improved in these patients by using more diverse CMR datasets, in addition to domain-specific quantitative performance evaluation metrics, during the cross-validation phase of DL algorithm development. METHODS: We identified 100 patients from our prior study who had the largest differences between manually measured and automated RVEF values. Automated RVEF measurements were performed using the original version of the algorithm (DL1), an updated version (DL2) developed from a dataset that included a wider range of RV pathology and validated using multiple domain-specific quantitative performance evaluation metrics, and conventional methodology performed by a core laboratory (CORE). Each of the DL-RVEF approaches was compared against CORE-RVEF reference values using linear regression and Bland-Altman analyses. Additionally, RVEF values were classified into 3 categories: ≤ 35%, 35-50%, and ≥ 50%. Agreement between RVEF classifications made by the DL approaches and the CORE measurements was tested. RESULTS: CORE-RVEF and DL-RVEFs were obtained in all patients (feasibility of 100%). DL2-RVEF correlated with CORE-RVEF better than DL1-RVEF (r = 0.87 vs. r = 0.42), with narrower limits of agreement. As a result, the DL2 algorithm also showed an increase in accuracy from 0.53 to 0.80 for categorizing RV function. CONCLUSIONS: The use of a new DL algorithm cross-validated on a dataset with a wide range of RV pathology using multiple domain-specific metrics resulted in a considerable improvement in the accuracy of automated RVEF measurements. This improvement was demonstrated in patients whose images were the most challenging and had produced the largest RVEF errors. These findings underscore the critical importance of this strategy in the development of DL approaches for automated CMR measurements.
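A minimal sketch, on synthetic numbers, of the agreement analyses named in this abstract: Bland-Altman bias and limits of agreement between automated and core-laboratory RVEF, plus classification into the three RVEF categories (≤ 35%, 35-50%, ≥ 50%). The values below are illustrative, not study data.

```python
import numpy as np

def bland_altman(reference: np.ndarray, test: np.ndarray):
    """Return bias and 95% limits of agreement (bias +/- 1.96 SD of differences)."""
    diff = test - reference
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def rvef_category(rvef_percent: np.ndarray) -> np.ndarray:
    """0: below 35%, 1: 35-50%, 2: 50% and above (boundaries as listed in the abstract)."""
    return np.digitize(rvef_percent, bins=[35.0, 50.0])

rng = np.random.default_rng(0)
core = rng.uniform(20, 65, size=100)        # hypothetical core-lab RVEF, %
dl = core + rng.normal(0, 4, size=100)      # hypothetical automated RVEF, %

bias, loa = bland_altman(core, dl)
agreement = np.mean(rvef_category(core) == rvef_category(dl))
print(f"bias={bias:.2f}%, LoA=({loa[0]:.2f}, {loa[1]:.2f})%, category agreement={agreement:.2f}")
```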


Subject(s)
Artificial Intelligence, Ventricular Dysfunction, Right, Heart Ventricles/diagnostic imaging, Humans, Magnetic Resonance Imaging, Magnetic Resonance Imaging, Cine/methods, Predictive Value of Tests, Reproducibility of Results, Stroke Volume, Ventricular Dysfunction, Right/diagnostic imaging, Ventricular Function, Right
3.
Pediatr Cardiol ; 42(3): 578-589, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33394116

ABSTRACT

Ventricular contouring of cardiac magnetic resonance imaging is the gold standard for volumetric analysis in repaired tetralogy of Fallot (rTOF) but can be time-consuming and subject to variability. A convolutional neural network (CNN) ventricular contouring algorithm was previously developed to generate contours for mostly structurally normal hearts. We aimed to improve this algorithm for use in rTOF and to propose a more comprehensive method of evaluating algorithm performance. We evaluated the performance of a ventricular contouring CNN trained on mostly structurally normal hearts when applied to rTOF patients. We then created an updated CNN by adding rTOF training cases and evaluated the new algorithm's performance in generating contours for both the left and right ventricles (LV and RV) on new testing data. Algorithm performance was evaluated with spatial metrics (Dice similarity coefficient (DSC), Hausdorff distance, and average Hausdorff distance) and volumetric comparisons (e.g., differences in RV volumes). The original Mostly Structurally Normal (MSN) algorithm was better at contouring the LV than the RV in patients with rTOF. After retraining, the new MSN + rTOF algorithm showed improvements for LV epicardial and RV endocardial contours on testing data to which it was naïve (N = 30; e.g., DSC 0.883 vs. 0.905 for the LV epicardium at end diastole, p < 0.0001) and improvements in RV end-diastolic volumetrics (median % error 8.1 vs. 11.4, p = 0.0022). Even with a small number of cases, CNN-based contouring for rTOF can be improved. This work should be extended to other forms of congenital heart disease with more extreme structural abnormalities. Aspects of this work have already been implemented in clinical practice, representing rapid clinical translation. The combined use of both spatial and volumetric comparisons yielded insights into algorithm errors.
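A minimal sketch (assumed, not the paper's code) of the two distance-based spatial metrics this abstract lists alongside the DSC: the symmetric Hausdorff distance and the average (symmetric) Hausdorff distance between two contours given as point sets. The toy contours are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def hausdorff(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between (N, 2) and (M, 2) point sets."""
    return max(directed_hausdorff(contour_a, contour_b)[0],
               directed_hausdorff(contour_b, contour_a)[0])

def average_hausdorff(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Mean of the two directed average surface distances."""
    d = cdist(contour_a, contour_b)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy example: a reference circle and a slightly smaller, shifted prediction.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
ref = np.c_[30 * np.cos(theta), 30 * np.sin(theta)]
pred = np.c_[28 * np.cos(theta), 28 * np.sin(theta)] + [1.0, 0.0]
print(f"HD = {hausdorff(ref, pred):.2f}, avg HD = {average_hausdorff(ref, pred):.2f}")
```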


Subject(s)
Algorithms, Heart Ventricles/diagnostic imaging, Neural Networks, Computer, Tetralogy of Fallot/diagnostic imaging, Adult, Case-Control Studies, Female, Heart Ventricles/anatomy & histology, Humans, Magnetic Resonance Imaging/methods, Male
4.
Front Cardiovasc Med ; 9: 894503, 2022.
Article in English | MEDLINE | ID: mdl-36051279

ABSTRACT

Objectives: Currently, administering contrast agents is necessary for accurately visualizing and quantifying the presence, location, and extent of myocardial infarction (MI) with cardiac magnetic resonance (CMR). In this study, our objective is to investigate and analyze pre- and post-contrast CMR images with the goal of predicting post-contrast information using pre-contrast information only. We propose methods and identify challenges. Methods: The study population consists of 272 retrospectively selected CMR studies with diagnoses of MI (n = 108) and healthy controls (n = 164). We describe a pipeline for pre-processing this dataset for analysis. After data feature engineering, 722 cine short-axis (SAX) image and segmentation mask pairs were used for experimentation, comprising 506, 108, and 108 pairs for the training, validation, and testing sets, respectively. We use a deep learning (DL) segmentation model (UNet) to delineate the extent and location of the scar and a classification model (ResNet50) to distinguish ischemic cases from healthy cases (i.e., cases with no regional myocardial scar), both from the pre-contrast cine SAX image frames. We then capture complex data patterns that represent subtle signal and functional changes in the cine SAX images due to MI using optical flow, the rate of change of myocardial area, and radiomics data. We apply this dataset to explore two supervised learning methods, namely support vector machines (SVM) and decision trees (DT), to develop predictive models for classifying pre-contrast cine SAX images as MI or healthy. Results: Overall, for the UNet segmentation model, the performance based on the mean Dice score for the test set (n = 108) is 0.75 (±0.20) for the endocardium, 0.51 (±0.21) for the epicardium, and 0.20 (±0.17) for the scar. For the classification task, accuracy, F1, and precision scores of 0.68, 0.69, and 0.64, respectively, were achieved with the SVM model, and of 0.62, 0.63, and 0.72 with the DT model. Conclusion: We have presented some promising approaches involving DL, SVM, and DT methods in an attempt to accurately predict contrast information from non-contrast images. While our initial results are modest for this challenging task, this area of research still poses several open problems.
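A minimal, hypothetical sketch of one of the functional features this abstract describes (the frame-to-frame rate of change of myocardial area in a cine series) feeding the two classifiers it names (SVM and decision tree). The feature choice and the synthetic stand-in data are illustrative only, not the study's pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def area_rate_features(myocardium_masks: np.ndarray) -> np.ndarray:
    """myocardium_masks: (frames, rows, cols) binary masks for one cine series.
    Returns the normalized frame-to-frame rate of change of myocardial area."""
    areas = myocardium_masks.reshape(myocardium_masks.shape[0], -1).sum(axis=1)
    return np.diff(areas) / max(areas.max(), 1)

rng = np.random.default_rng(1)
# Synthetic stand-in for extracted features: 200 series x 19 values per series.
X = rng.normal(0, 1, size=(200, 19))
y = rng.integers(0, 2, size=200)  # 1 = MI, 0 = healthy (random labels here)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")), ("DT", DecisionTreeClassifier(max_depth=4))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 2))
```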

5.
Front Cardiovasc Med ; 8: 816985, 2021.
Article in English | MEDLINE | ID: mdl-35242820

ABSTRACT

BACKGROUND: The quantitative measures used to assess the performance of automated methods often do not reflect the clinical acceptability of contouring. A quality-based assessment of automated cardiac magnetic resonance (CMR) segmentation that is more relevant to clinical practice is therefore needed. OBJECTIVE: We propose a new method for assessing the quality of machine learning (ML) outputs. We evaluate the clinical utility of the proposed method by employing it to systematically analyse the quality of an automated contouring algorithm. METHODS: A dataset of short-axis (SAX) cine CMR images from a clinically heterogeneous population (n = 217) was manually contoured by a team of experienced investigators. On the same images we derived automated contours using an ML algorithm. A contour quality scoring application randomly presented manual and automated contours to four blinded clinicians, who were asked to assign a quality score from a predefined rubric. First, we analyzed the distribution of quality scores between the two contouring methods across all clinicians. Second, we analyzed the interobserver reliability between the raters. Finally, we examined whether scores varied with the type of contour, SAX slice level, and underlying disease. RESULTS: The overall distribution of scores between the two methods was significantly different, with automated contours scoring better than the manual ones (OR (95% CI) = 1.17 (1.07-1.28), p = 0.001; n = 9401). There was substantial scoring agreement between raters for each contouring method independently, although it was significantly better for automated segmentation (automated: AC2 = 0.940, 95% CI 0.937-0.943 vs. manual: AC2 = 0.934, 95% CI 0.931-0.937; p = 0.006). The analysis of quality scores by these factors helped identify patterns of lower segmentation quality, observed for left ventricular epicardial and basal contours with both methods. Similarly, significant differences in quality between the two methods were found in dilated cardiomyopathy and hypertension. CONCLUSIONS: Our results confirm the ability of our systematic scoring analysis to determine the clinical acceptability of automated contours. This approach, focused on the contours' clinical utility, could ultimately improve clinicians' confidence in artificial intelligence and its acceptability in the clinical workflow.
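A minimal sketch, on made-up counts, of one comparison reported in this abstract: an odds ratio (with a large-sample 95% confidence interval) for "acceptable" quality scores between automated and manual contours, built from a dichotomized 2x2 table. The counts and the dichotomization are illustrative assumptions, not the study's data or its exact model.

```python
import numpy as np
from scipy.stats import norm

# rows: method (automated, manual); cols: score acceptable yes/no (hypothetical counts)
table = np.array([[4300, 400],
                  [4100, 600]], dtype=float)
odds_auto = table[0, 0] / table[0, 1]
odds_manual = table[1, 0] / table[1, 1]
or_ = odds_auto / odds_manual

# 95% CI for log(OR) via the standard large-sample (Woolf) formula.
se = np.sqrt((1.0 / table).sum())
lo, hi = np.exp(np.log(or_) + np.array([-1, 1]) * norm.ppf(0.975) * se)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```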

6.
Med Image Anal ; 40: 184-198, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28692857

ABSTRACT

Identification of vascular structures from medical images is integral to many clinical procedures. However, most vessel segmentation techniques ignore the characteristic pulsatile motion of vessels in their formulation. In a recent effort to automatically segment vessels that are hidden under fat, we motivated the use of the magnitude of local pulsatile motion extracted from surgical endoscopic video. In this article we propose a new approach that leverages the local orientation of motion in addition to its magnitude, and demonstrate that this extended computation and utilization of motion vectors can improve the segmentation of vascular structures. We implement our approach using four alternatives to magnitude-only motion estimation, based on traditional optical flow and on the monogenic signal for fast flow estimation. Our evaluations are conducted on both synthetic phantoms and two real ultrasound datasets, showing improved segmentation results with a negligible change in computational performance compared to the previous magnitude-only approach.
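A minimal sketch of the two motion cues this abstract combines, magnitude and local orientation, using OpenCV's dense Farneback optical flow as a generic stand-in rather than the authors' estimators. The frames and parameter values are illustrative.

```python
import numpy as np
import cv2

def motion_magnitude_and_orientation(frame_prev: np.ndarray, frame_next: np.ndarray):
    """frame_prev/frame_next: 8-bit grayscale images of the same size.
    Returns per-pixel motion magnitude and orientation (radians)."""
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return magnitude, angle

# Toy example: a bright blob shifted by two pixels between frames.
a = np.zeros((64, 64), np.uint8); a[20:30, 20:30] = 255
b = np.zeros((64, 64), np.uint8); b[20:30, 22:32] = 255
mag, ang = motion_magnitude_and_orientation(a, b)
print("peak motion magnitude:", float(mag.max()))
```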


Asunto(s)
Vasos Sanguíneos/diagnóstico por imagen , Endoscopía , Movimiento , Ultrasonografía/métodos , Grabación en Video , Algoritmos , Vasos Sanguíneos/fisiología , Humanos , Fantasmas de Imagen , Reproducibilidad de los Resultados , Sensibilidad y Especificidad , Factores de Tiempo
7.
Int J Comput Assist Radiol Surg ; 11(8): 1409-18, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26872810

ABSTRACT

PURPOSE: Despite great advances in medical image segmentation, the accurate and automatic segmentation of endoscopic scenes remains a challenging problem. Two important aspects have to be considered in segmenting an endoscopic scene: (1) noise and clutter due to light reflection and smoke from cutting tissue, and (2) structure occlusion (e.g. vessels occluded by fat, or endophytic tumours occluded by healthy kidney tissue). METHODS: In this paper, we propose a variational technique to augment a surgeon's endoscopic view by segmenting visible as well as occluded structures in the intraoperative endoscopic view. Our method estimates the 3D pose and deformation of anatomical structures segmented from 3D preoperative data in order to align to and segment corresponding structures in 2D intraoperative endoscopic views. Our preoperative-to-intraoperative alignment is driven first by spatio-temporal, signal-processing-based vessel pulsation cues and second by machine-learning-based analysis of colour and textural visual cues. To our knowledge, this is the first work that utilizes vascular pulsation cues to guide preoperative-to-intraoperative registration. In addition, we incorporate a tissue-specific (i.e. heterogeneous), physically based deformation model into our framework to cope with the non-rigid deformation of structures that occurs during the intervention. RESULTS: We validated the utility of our technique on fifteen challenging clinical cases, with a 45% improvement in accuracy compared to the state-of-the-art method. CONCLUSIONS: A new technique for localizing both visible and occluded structures in an endoscopic view was proposed and tested. This method leverages preoperative data, as a source of patient-specific prior knowledge, as well as vasculature pulsation and endoscopic visual cues, in order to accurately segment the highly noisy and cluttered environment of an endoscopic video. Our results on in vivo clinical cases of partial nephrectomy illustrate the potential of the proposed framework for augmented reality applications in minimally invasive surgeries.
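A hypothetical sketch of the colour-and-texture cue mentioned in this abstract: a per-pixel classifier trained on RGB values plus a simple local-variance texture measure. This is a generic stand-in, not the authors' model; the frame, window size, and labels are assumed for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(rgb_frame: np.ndarray, window: int = 7) -> np.ndarray:
    """rgb_frame: (rows, cols, 3) float image in [0, 1].
    Features per pixel: R, G, B and the local grey-level variance (texture proxy)."""
    grey = rgb_frame.mean(axis=2)
    local_var = uniform_filter(grey ** 2, window) - uniform_filter(grey, window) ** 2
    return np.dstack([rgb_frame, local_var]).reshape(-1, 4)

rng = np.random.default_rng(2)
frame = rng.random((64, 64, 3))                            # stand-in endoscopic frame
labels = (rng.random((64, 64)) > 0.5).astype(int).ravel()  # stand-in annotations

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(pixel_features(frame), labels)
probability_map = clf.predict_proba(pixel_features(frame))[:, 1].reshape(64, 64)
print("mean predicted probability:", round(float(probability_map.mean()), 2))
```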


Asunto(s)
Endoscopía/métodos , Imagenología Tridimensional/métodos , Color , Humanos , Nefrectomía/métodos
8.
Med Image Anal ; 25(1): 103-10, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25977157

ABSTRACT

Hilar dissection is an important and delicate stage in partial nephrectomy, during which surgeons remove connective tissue surrounding the renal vasculature. Serious complications arise when occluded blood vessels, concealed by fat, are missed in the endoscopic view and as a result are not appropriately clamped. Such complications may include catastrophic blood loss from internal bleeding and associated occlusion of the surgical view during the excision of the cancerous mass (due to heavy bleeding), both of which may compromise the visibility of surgical margins or even result in a conversion from a minimally invasive to an open intervention. To aid in vessel discovery, we propose a novel automatic method to segment occluded vasculature by labeling minute pulsatile motion that is otherwise imperceptible to the naked eye. Our segmentation technique extracts subtle tissue motions using a technique adapted from phase-based video magnification, in which we measure motion from periodic changes in local phase information, albeit for labeling rather than magnification. Based on measuring local phase through spatial decomposition of each frame of the endoscopic video using complex wavelet pairs, our approach assigns segmentation labels by detecting regions exhibiting temporal local phase changes matching the heart rate. We demonstrate that our technique is a practical solution for time-critical surgical applications by presenting quantitative and qualitative performance evaluations of our vessel detection algorithms in a retrospective study of fifteen clinical robot-assisted partial nephrectomies.
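A heavily simplified sketch of the core idea of labeling pulsatile regions from periodic temporal changes. The paper measures local phase via complex wavelet pairs; as a stand-in, this sketch examines each pixel's intensity time series in the frequency domain and labels pixels whose temporal energy concentrates in a band around an assumed heart rate. Frame rate, heart rate, and the threshold are illustrative assumptions.

```python
import numpy as np

def pulsatility_mask(frames: np.ndarray, fps: float, heart_rate_hz: float,
                     band_hz: float = 0.3, energy_fraction: float = 0.5) -> np.ndarray:
    """frames: (time, rows, cols) grayscale video. Returns a boolean label map."""
    series = frames - frames.mean(axis=0)                 # remove the static component
    spectrum = np.abs(np.fft.rfft(series, axis=0)) ** 2   # per-pixel power spectrum
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    in_band = np.abs(freqs - heart_rate_hz) <= band_hz
    band_energy = spectrum[in_band].sum(axis=0)
    total_energy = spectrum[1:].sum(axis=0) + 1e-12       # ignore the DC bin
    return band_energy / total_energy > energy_fraction

# Toy example: one region oscillates at ~1.2 Hz (72 bpm), the rest is static noise.
t = np.arange(120) / 30.0                                 # 4 s of video at 30 fps
frames = 0.01 * np.random.default_rng(3).random((120, 32, 32))
frames[:, 10:20, 10:20] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
mask = pulsatility_mask(frames, fps=30.0, heart_rate_hz=1.2)
print("labelled pixels:", int(mask.sum()))
```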


Asunto(s)
Endoscopía/métodos , Neoplasias Renales/cirugía , Riñón/irrigación sanguínea , Nefrectomía/métodos , Obstrucción de la Arteria Renal/patología , Obstrucción de la Arteria Renal/cirugía , Procedimientos Quirúrgicos Robotizados/métodos , Cirugía Asistida por Computador/métodos , Humanos , Imagenología Tridimensional , Riñón/cirugía , Reconocimiento de Normas Patrones Automatizadas/métodos , Reproducibilidad de los Resultados , Sensibilidad y Especificidad , Grabación en Video
9.
Med Image Comput Comput Assist Interv ; 17(Pt 1): 407-14, 2014.
Article in English | MEDLINE | ID: mdl-25333144

ABSTRACT

Hilar dissection is an important and delicate stage in partial nephrectomy during which surgeons remove connective tissue surrounding the renal vasculature. Potentially serious complications arise when vessels occluded by fat are missed in the endoscopic view and are not appropriately clamped. To aid in vessel discovery, we propose an automatic method to localize and label occluded vasculature. Our segmentation technique is adapted from phase-based video magnification, in which we measure subtle motion from periodic changes in local phase information, albeit for labeling rather than magnification. We measure local phase through spatial decomposition of each frame of the endoscopic video using complex wavelet pairs. We then assign segmentation labels by identifying regions exhibiting temporal local phase changes matching the heart rate frequency. Our method is evaluated in a retrospective study of eight real robot-assisted partial nephrectomies, demonstrating utility for surgical guidance that could potentially reduce operation times and complication rates.


Subject(s)
Endoscopy/methods, Nephrectomy/methods, Pattern Recognition, Automated/methods, Renal Artery Obstruction/pathology, Renal Artery Obstruction/surgery, Robotics/methods, Surgery, Computer-Assisted/methods, Algorithms, Artificial Intelligence, Humans, Image Interpretation, Computer-Assisted/methods, Reproducibility of Results, Sensitivity and Specificity