Results 1 - 10 of 10
1.
Eur Radiol; 31(6): 3837-3845, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33219850

ABSTRACT

OBJECTIVE: To evaluate whether smart worklist prioritization by artificial intelligence (AI) can optimize the radiology workflow and reduce report turnaround times (RTATs) for critical findings in chest radiographs (CXRs). Furthermore, we investigate a method to counteract the effect of false negative predictions by the AI, which would otherwise result in extremely and dangerously long RTATs, as the affected CXRs are sorted to the end of the worklist. METHODS: We developed a simulation framework that models the current workflow at a university hospital by incorporating hospital-specific CXR generation rates, reporting rates, and pathology distributions. Using this framework, we simulated standard "first-in, first-out" (FIFO) worklist processing and compared it with a worklist prioritization based on urgency. Examination prioritization was performed by the AI, classifying eight different pathological findings ranked in descending order of urgency: pneumothorax, pleural effusion, infiltrate, congestion, atelectasis, cardiomegaly, mass, and foreign object. Furthermore, we introduced an upper limit for the maximum waiting time, after which the highest urgency is assigned to the examination. RESULTS: The average RTAT for all critical findings was significantly reduced in all prioritization simulations compared to the FIFO simulation (e.g., pneumothorax: 35.6 min vs. 80.1 min; p < 0.0001), while the maximum RTAT for most findings increased at the same time (e.g., pneumothorax: 1293 min vs. 890 min; p < 0.0001). Our "upper limit" substantially reduced the maximum RTAT in all classes (e.g., pneumothorax: 979 min vs. 1293 min/1178 min; p < 0.0001). CONCLUSION: Our simulations demonstrate that smart worklist prioritization by AI can reduce the average RTAT for critical findings in CXRs while keeping the maximum RTAT close to that of FIFO. KEY POINTS: • Development of a realistic clinical workflow simulator based on empirical data from a hospital allowed precise assessment of smart worklist prioritization using artificial intelligence. • Employing smart worklist prioritization without a threshold for the maximum waiting time runs the risk that false negative predictions by the artificial intelligence greatly increase the report turnaround time. • Use of a state-of-the-art convolutional neural network can reduce the average report turnaround time almost to the level achievable with a perfect classification algorithm (e.g., pneumothorax: 35.6 min vs. 30.4 min).
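The interplay of the urgency ranking and the waiting-time upper limit can be made concrete with a short sketch. The following Python snippet is illustrative only: the class names follow the ranking given in the abstract, but MAX_WAIT, the tie-breaking by arrival time, and all function names are assumptions, not the study's simulation framework.

```python
# Illustrative urgency ranking (0 = most urgent), per the abstract.
URGENCY = {"pneumothorax": 0, "pleural_effusion": 1, "infiltrate": 2,
           "congestion": 3, "atelectasis": 4, "cardiomegaly": 5,
           "mass": 6, "foreign_object": 7, "no_finding": 8}
MAX_WAIT = 240.0  # minutes; hypothetical threshold, not the paper's value

def priority(exam, now):
    """AI urgency, escalated to top urgency once the exam has waited
    longer than MAX_WAIT. This caps the RTAT damage a false negative
    prediction can cause."""
    finding, arrival = exam
    if now - arrival > MAX_WAIT:
        return (0, arrival)             # treat like the most urgent class
    return (URGENCY[finding], arrival)  # FIFO within each urgency class

def next_exam(worklist, now):
    """Pick the exam to report next; priorities are recomputed at every
    pick because waiting times keep growing."""
    return min(worklist, key=lambda exam: priority(exam, now))

worklist = [("mass", 10.0), ("pneumothorax", 55.0), ("no_finding", 0.0)]
print(next_exam(worklist, now=60.0))   # -> ("pneumothorax", 55.0)
print(next_exam(worklist, now=300.0))  # -> ("no_finding", 0.0), aged past limit
```

Recomputing the priority at pick time, rather than caching it, matters here because the escalation rule depends on the current clock.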


Subject(s)
Artificial Intelligence, Neural Networks, Computer, Humans, Radiography, Workflow, X-Rays
2.
Magn Reson Med; 73(4): 1457-68, 2015 Apr.
Article in English | MEDLINE | ID: mdl-24760736

ABSTRACT

PURPOSE: Physiological nonrigid motion is inevitable when imaging, e.g., the abdominal viscera, and can seriously degrade image quality. Prospective motion correction techniques can handle only special types of nonrigid motion, as they allow only global correction. The retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that needs only the raw data as input. METHODS: Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we optimize with respect to both the unknown per-patch motion parameters and the underlying sharp image. RESULTS: We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data were acquired using standard imaging sequences. The correction algorithm significantly improves image quality, and our CUDA-enabled graphics processing unit (GPU) implementation ensures feasible computation times. CONCLUSION: The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences and corrects for nonrigid motion without guidance from external motion sensors.
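To make the forward model concrete, here is a heavily simplified sketch: the image is partitioned into masked patches, each patch moves by its own translation, and the Fourier shift theorem turns each translation into a phase ramp in k-space. The paper allows locally rigid (not just translational) motion and motion that varies over the acquisition; both are omitted here for brevity, and all names are illustrative.

```python
import numpy as np

def forward_model(image, patch_masks, shifts):
    """Predict motion-corrupted k-space: each patch moves by (dy, dx),
    so its k-space contribution picks up the corresponding phase ramp.
    (Simplification: one motion state for the whole acquisition.)"""
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    kspace = np.zeros((ny, nx), dtype=complex)
    for mask, (dy, dx) in zip(patch_masks, shifts):
        patch_k = np.fft.fft2(image * mask)
        # Shift theorem: translation in image space = linear phase in k-space.
        kspace += patch_k * np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return kspace
```

The objective function then compares such predictions against the acquired raw data and is minimized jointly over the per-patch motion parameters and the sharp image.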


Subject(s)
Algorithms, Artifacts, Hand/anatomy & histology, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Computer Simulation, Humans, Magnetic Resonance Imaging/methods, Models, Biological, Motion, Reproducibility of Results, Sensitivity and Specificity
3.
J Med Imaging (Bellingham); 11(4): 044005, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39099642

ABSTRACT

Purpose: The trend toward lower radiation doses and advances in computed tomography (CT) reconstruction may impair the operation of pretrained segmentation models, giving rise to the problem of estimating the dose robustness of existing models. Previous studies addressing this issue suffer either from a lack of registered low- and full-dose CT images or from simplified simulations. Approach: We employed raw data from full-dose acquisitions to simulate low-dose CT scans, avoiding the need to rescan a patient. The accuracy of the simulation was validated using a real CT scan of a phantom. We consider reductions of the radiation dose down to 20%, for which we measure deviations of several pretrained segmentation models from the full-dose prediction. In addition, compatibility with existing denoising methods is considered. Results: The results reveal the surprising robustness of the TotalSegmentator approach, which shows minimal differences at the pixel level even without denoising. Less robust models show good compatibility with the denoising methods, which help to improve robustness in almost all cases. With denoising based on a convolutional neural network (CNN), the median Dice score between low- and full-dose data does not fall below 0.9 (Hausdorff distance: 12) for all but one model. We observe volatile results for labels with effective radii of less than 19 mm and improved results for contrast-enhanced CT acquisitions. Conclusion: The proposed approach facilitates clinically relevant analysis of the dose robustness of human organ segmentation models. The results outline the robustness properties of a diverse set of models. Further studies are needed to identify the robustness of approaches for lesion segmentation and to rank the factors contributing to dose robustness.
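The low-dose simulation concept can be sketched as noise injection into the full-dose sinogram. The dose model below (monoenergetic beam, pure Poisson counts, a single incident intensity I0) is a simplified assumption for illustration, not the validated simulation used in the study; all parameter names are hypothetical.

```python
import numpy as np

def simulate_low_dose(sinogram, dose_fraction, I0=1e5, rng=None):
    """Simulate a reduced-dose sinogram from full-dose line integrals:
    scale the expected photon counts by dose_fraction, redraw Poisson
    counts, and convert back to line integrals."""
    rng = rng or np.random.default_rng()
    counts = I0 * np.exp(-sinogram)            # expected full-dose counts
    low_counts = rng.poisson(dose_fraction * counts)
    low_counts = np.maximum(low_counts, 1)     # guard against log(0)
    return -np.log(low_counts / (dose_fraction * I0))
```

A low-dose image reconstructed from such a sinogram can then be segmented and compared against the full-dose prediction, e.g., via Dice and Hausdorff distance, as in the study's evaluation.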

4.
Magn Reson Med; 70(6): 1608-18, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23401078

ABSTRACT

PURPOSE: Subject motion can severely degrade MR images. A retrospective motion correction algorithm, Gradient-based motion correction, is proposed that significantly reduces ghosting and blurring artifacts caused by subject motion. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. METHODS: The approach iteratively searches for the motion trajectory yielding the sharpest image, as measured by the entropy of spatial gradients. The vast space of motion parameters is explored efficiently by gradient-based optimization with a convergence guarantee. RESULTS: The method was evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are on the order of a few minutes for a full three-dimensional volume. CONCLUSION: The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data.
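The sharpness criterion, the entropy of spatial gradients, is simple to state in code. This is a minimal sketch; the exact gradient operator and normalization used in the paper may differ.

```python
import numpy as np

def gradient_entropy(image, eps=1e-12):
    """Entropy of the normalized spatial gradient magnitudes; a sharper
    image concentrates gradient energy at edges and has lower entropy."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    p = mag / (mag.sum() + eps)   # normalize magnitudes to a distribution
    p = p[p > eps]
    return -(p * np.log(p)).sum()

# The correction searches for the motion trajectory whose compensated
# reconstruction minimizes this entropy, using gradient-based optimization.
```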


Subject(s)
Algorithms, Artifacts, Brain/anatomy & histology, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Animals, Haplorhini, Humans, Motion, Reproducibility of Results, Sensitivity and Specificity
5.
J Med Imaging (Bellingham); 9(2): 025001, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35360417

ABSTRACT

Purpose: Implanting stents to re-open stenotic lesions during percutaneous coronary interventions is considered a standard treatment for acute or chronic coronary syndrome. Intravascular ultrasound (IVUS) can be used to guide and assess the technical success of these interventions. Automatically segmenting stent struts in IVUS sequences improves workflow efficiency but is non-trivial, because the challenging image appearance entails manifold ambiguities with other structures. Manual, ungated IVUS pullbacks constitute a particular challenge in this context. We propose a fully data-driven strategy to first longitudinally detect and subsequently segment stent struts in IVUS frames. Approach: A cascaded deep learning approach is presented. It first trains an encoder model to classify frames as "stent," "no stent," or "no use." A segmentation model then delineates stent struts on a pixel level, but only in frames with a stent label. The first stage of the cascade acts as a gateway to reduce the risk of false positives in the second stage, the segmentation, which is trained on a smaller, difficult-to-annotate dataset. Training of the classification and segmentation models was based on 49,888 and 1,826 frames of 74 sequences from 35 patients, respectively. Results: The longitudinal classification yielded Dice scores of 92.96%, 82.35%, and 94.03% for the classes stent, no stent, and no use, respectively. The segmentation achieved a Dice score of 65.1% on the stent ground truth (intra-observer performance: 75.5%) and 43.5% on all frames (including frames without a stent, with guidewires or calcium, or without clinical use). The latter improved to 49.5% when gating the frames by the classification decision and further increased to 57.4% with a heuristic on the plausible stent strut area. Conclusions: A data-driven strategy for segmenting stents in ungated, manual pullbacks was presented, addressing the most common and practical scenario in the time-critical clinical workflow. We demonstrated a reduced risk of ambiguities and false positive predictions.
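The gating logic of the cascade is straightforward; a hedged sketch follows, where classifier and segmenter are placeholders standing in for the trained encoder and segmentation models.

```python
def segment_pullback(frames, classifier, segmenter):
    """Two-stage cascade: the frame classifier gates which frames reach
    the segmentation model, so 'no stent' and 'no use' frames never
    produce spurious strut masks."""
    results = []
    for frame in frames:
        label = classifier(frame)  # 'stent' | 'no stent' | 'no use'
        mask = segmenter(frame) if label == "stent" else None
        results.append((label, mask))
    return results
```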

6.
Magn Reson Med; 63(1): 116-26, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19859957

ABSTRACT

The optimization of k-space sampling for nonlinear sparse MRI reconstruction is phrased as a Bayesian experimental design problem. Bayesian inference is approximated by a novel relaxation to standard signal processing primitives, resulting in an efficient optimization algorithm for Cartesian and spiral trajectories. On clinical-resolution brain image data from a Siemens 3T scanner, automatically optimized trajectories lead to significantly improved images compared to standard low-pass, equispaced, or variable-density randomized designs. Insights into the nonlinear design optimization problem for MRI are given.
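A toy version of sequential design conveys the idea: in a linear-Gaussian approximation with unit noise, each candidate measurement is scored by its information gain, and the posterior covariance is updated by a rank-1 step. The paper's relaxation handles the sparse (non-Gaussian) prior and is substantially more involved; the sketch below only illustrates the greedy design loop, and all names are assumptions.

```python
import numpy as np

def greedy_design(candidates, S, n_select):
    """candidates: list of measurement row vectors (each shape (d,));
    S: current posterior covariance (d x d). Greedily select the
    measurements that maximize log det(I + x S x^T)."""
    chosen = []
    for _ in range(n_select):
        gains = [np.log1p(x @ S @ x) for x in candidates]  # info gain per row
        best = int(np.argmax(gains))
        x = candidates[best]
        Sx = S @ x
        S = S - np.outer(Sx, Sx) / (1.0 + x @ Sx)  # rank-1 posterior update
        chosen.append(best)
        # A row may be picked again since its gain shrinks after the update;
        # remove it from the pool if strict once-only selection is wanted.
    return chosen
```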


Subject(s)
Algorithms, Artificial Intelligence, Brain/anatomy & histology, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Pattern Recognition, Automated/methods, Bayes Theorem, Data Compression/methods, Humans, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
7.
Sci Rep; 9(1): 6381, 2019 04 23.
Article in English | MEDLINE | ID: mdl-31011155

ABSTRACT

The increased availability of labeled X-ray image archives (e.g., the ChestX-ray14 dataset) has triggered growing interest in deep learning techniques. To provide better insight into the different approaches and their applications to chest X-ray classification, we investigate a powerful network architecture in detail: the ResNet-50. Building on prior work in this domain, we consider transfer learning with and without fine-tuning as well as the training of a dedicated X-ray network from scratch. To leverage the high spatial resolution of X-ray data, we also include an extended ResNet-50 architecture and a network integrating non-image data (patient age, gender, and acquisition type) in the classification process. In a concluding experiment, we also investigate multiple ResNet depths (i.e., ResNet-38 and ResNet-101). In a systematic evaluation, using 5-fold re-sampling and a multi-label loss function, we compare the performance of the different approaches for pathology classification by ROC statistics and analyze differences between the classifiers using rank correlation. Overall, we observe a considerable spread in the achieved performance and conclude that the X-ray-specific ResNet-38 integrating non-image data yields the best overall results. Furthermore, class activation maps are used to understand the classification process, and a detailed analysis of the impact of non-image features is provided.
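One plausible way to integrate non-image data is to concatenate it with the pooled CNN features before the final classification layer. The PyTorch sketch below assumes this fusion point, a ResNet-50 backbone, and layer sizes for illustration; it is not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class XrayNetWithMeta(nn.Module):
    """ResNet backbone whose pooled features are fused with non-image
    data (e.g., age, gender, acquisition type) before classification."""
    def __init__(self, n_classes=14, n_meta=3):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.classifier = nn.Linear(2048 + n_meta, n_classes)

    def forward(self, image, meta):
        f = self.features(image).flatten(1)  # (B, 2048) pooled CNN features
        f = torch.cat([f, meta], dim=1)      # append the non-image features
        return self.classifier(f)            # multi-label logits

model = XrayNetWithMeta()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3))
# Train with nn.BCEWithLogitsLoss() for the multi-label setting.
```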


Subject(s)
Deep Learning, Thorax/diagnostic imaging, Adolescent, Adult, Age Distribution, Aged, Aged, 80 and over, Area Under Curve, Child, Child, Preschool, Databases as Topic, Humans, Image Processing, Computer-Assisted, Infant, Infant, Newborn, Middle Aged, Models, Theoretical, Statistics, Nonparametric, X-Rays, Young Adult
8.
Med Phys; 45(3): 1170-1177, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29355991

ABSTRACT

PURPOSE: The purpose of this study is to develop and evaluate a functionally personalized boundary condition (BC) model for estimating the fractional flow reserve (FFR) from coronary computed tomography angiography (CCTA) using flow simulation (CT-FFR). MATERIALS AND METHODS: The CCTA data of 90 subjects with subsequent invasive FFR measurements in 123 lesions within 21 days (range: 0-83) were retrospectively collected. We developed a functionally personalized BC model that specifically accounts for the dependency of coronary microvascular resistance on coronary outlet pressure, as suggested by several physiological studies. We used the proposed model to estimate the hemodynamic significance of coronary lesions with an open-loop, physics-based flow simulation. We generated three-dimensional (3D) coronary tree geometries using automatic software and corrected them manually where required. We evaluated the improvement in CT-FFR estimates achieved with the functionally personalized BC model over an anatomically personalized BC model using k-fold cross-validation. RESULTS: The functionally personalized BC model slightly improved CT-FFR specificity in determining the hemodynamic significance of lesions with intermediate diameter stenosis (30%-70%, N = 72) compared to the anatomically personalized model, with invasive FFR measurements as the reference (sensitivity/specificity: 0.882/0.79 vs. 0.882/0.763). For the entire set of 123 coronary lesions, the functionally personalized BC model improved only the area under the curve (AUC), but not the sensitivity/specificity, in determining the hemodynamic significance of lesions compared to the anatomically personalized model (AUC: 0.884 vs. 0.875; sensitivity/specificity: 0.848/0.805). CONCLUSION: The functionally personalized BC model has the potential to improve the quality of CT-FFR estimates compared to an anatomically personalized BC model.
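The core idea, a microvascular resistance that depends on the outlet pressure, leads to a small fixed-point problem per coronary outlet. The lumped, single-outlet sketch below uses an invented linear resistance law and constants purely for illustration; the study's calibrated model and 3D flow simulation differ.

```python
def solve_outlet(p_aorta, r_epicardial, p_venous=5.0, n_iter=50):
    """Fixed-point iteration for a pressure-dependent boundary condition:
    the microvascular resistance law below is a hypothetical stand-in,
    chosen so resistance falls as outlet pressure falls (autoregulation-
    like behavior). Pressures in mmHg, resistances in arbitrary units."""
    p_out = p_aorta  # initial guess
    for _ in range(n_iter):
        r_micro = 80.0 * (p_out / p_aorta)                   # assumed law
        q = (p_aorta - p_venous) / (r_epicardial + r_micro)  # flow
        p_out = p_aorta - q * r_epicardial                   # update outlet pressure
    return p_out, q

p_out, q = solve_outlet(p_aorta=90.0, r_epicardial=20.0)
print(f"CT-FFR ~ {p_out / 90.0:.2f}")  # FFR = distal/aortic pressure ratio
```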


Subject(s)
Coronary Angiography, Fractional Flow Reserve, Myocardial, Image Processing, Computer-Assisted, Models, Cardiovascular, Patient-Specific Modeling, Tomography, X-Ray Computed, Female, Humans, Male, Middle Aged
9.
Med Phys; 44(3): 1040-1049, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28112409

ABSTRACT

PURPOSE: The goal of this study was to assess the potential added benefit of accounting for partial volume effects (PVE) in an automatic coronary lumen segmentation algorithm that is used to determine the hemodynamic significance of a coronary artery stenosis from coronary computed tomography angiography (CCTA). MATERIALS AND METHODS: Two sets of data were used in our work: (a) multivendor CCTA datasets of 18 subjects from the MICCAI 2012 challenge, with automatically generated centerlines and three reference segmentations of 78 coronary segments, and (b) additional CCTA datasets of 97 subjects with 132 coronary lesions that had invasive reference standard FFR measurements. We extracted the coronary artery centerlines for the 97 datasets with an automated software program, followed by manual correction where required. An automatic machine-learning-based algorithm segmented the coronary tree with and without accounting for the PVE. We obtained CCTA-based FFR measurements using a flow simulation in the coronary trees that were generated by the automatic algorithm with and without accounting for PVE. We assessed the potential added value of PVE integration as part of the automatic coronary lumen segmentation algorithm by means of segmentation accuracy, using the MICCAI 2012 challenge framework, and by means of flow simulation overall accuracy, sensitivity, specificity, negative and positive predictive values, and the receiver operating characteristic (ROC) area under the curve. We also evaluated the potential benefit of accounting for PVE in automatic segmentation for flow simulation in lesions that were diagnosed as obstructive based on CCTA, which could have indicated a need for an invasive exam and revascularization. RESULTS: Our segmentation algorithm improves the maximal surface distance error by ~39% compared to a previously published method on the 18 datasets from the MICCAI 2012 challenge, with comparable Dice and mean surface distance. Results with and without accounting for PVE were comparable. In contrast, integrating PVE analysis into the automatic coronary lumen segmentation algorithm improved the flow simulation specificity from 0.6 to 0.68 at the same sensitivity of 0.83. Accounting for PVE also improved the area under the ROC curve for detecting hemodynamically significant CAD from 0.76 to 0.8 compared to automatic segmentation without PVE analysis, with an invasive FFR threshold of 0.8 as the reference standard. Accounting for PVE in flow simulation to support the detection of hemodynamically significant disease in CCTA-based obstructive lesions improved specificity from 0.51 to 0.73 at the same sensitivity of 0.83 and the area under the curve from 0.69 to 0.79. The improvement in the AUC was statistically significant (N = 76, DeLong's test, P = 0.012). CONCLUSION: Accounting for partial volume effects in automatic coronary lumen segmentation algorithms has the potential to improve the accuracy of CCTA-based hemodynamic assessment of coronary artery lesions.
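One simple way to picture partial volume accounting is a two-material mixture model for boundary voxels: a voxel's Hounsfield value is treated as a blend of contrast-filled lumen and vessel wall, yielding a fractional rather than binary lumen label. The sketch below illustrates only this idea; the study's machine-learning segmentation integrates PVE differently, and the reference intensities are hypothetical inputs.

```python
import numpy as np

def lumen_fraction(hu, hu_lumen, hu_wall):
    """Linear two-material mixture: estimate the fraction of a voxel
    occupied by lumen from its HU value and the pure-material HU
    references; clip to the physically valid range [0, 1]."""
    frac = (hu - hu_wall) / (hu_lumen - hu_wall)
    return np.clip(frac, 0.0, 1.0)

# Summing fractions (times voxel area) per cross-section gives a
# sub-voxel lumen area estimate, which feeds a less biased effective
# radius into the flow simulation than hard thresholding would.
```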


Subject(s)
Computed Tomography Angiography/methods, Coronary Angiography/methods, Coronary Stenosis/diagnostic imaging, Hemodynamics, Machine Learning, Pattern Recognition, Automated, Area Under Curve, Coronary Stenosis/physiopathology, Coronary Vessels/diagnostic imaging, Coronary Vessels/physiopathology, Datasets as Topic, Humans, Imaging, Three-Dimensional/methods, Models, Cardiovascular, ROC Curve, Retrospective Studies, Software
10.
IEEE Trans Pattern Anal Mach Intell; 36(3): 453-65, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24457503

ABSTRACT

We study the problem of object recognition for categories for which we have no training examples, a task also called zero-data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.
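The attribute-based decision rule can be sketched compactly: prelearned attribute classifiers produce per-attribute probabilities for an image, and an unseen class is scored by how likely its known attribute signature is under those probabilities. This follows the spirit of direct attribute prediction with simplified probabilistic details; the attribute names and signatures are toy examples.

```python
import numpy as np

def predict_class(attr_probs, class_signatures):
    """attr_probs: (A,) predicted probability that each attribute is
    present in the image; class_signatures: dict mapping an unseen class
    to its (A,) binary attribute vector. Returns the best-matching class."""
    def log_likelihood(sig):
        # Per attribute: take p(attribute | image) if the class requires
        # the attribute, else 1 - p; multiply across attributes (in logs).
        p = np.where(sig == 1, attr_probs, 1.0 - attr_probs)
        return np.log(p + 1e-12).sum()
    return max(class_signatures, key=lambda c: log_likelihood(class_signatures[c]))

# Toy attributes: [striped, four-legged, can fly]
signatures = {"zebra": np.array([1, 1, 0]), "eagle": np.array([0, 0, 1])}
print(predict_class(np.array([0.9, 0.8, 0.1]), signatures))  # -> "zebra"
```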


Subject(s)
Classification/methods, Image Processing, Computer-Assisted/methods, Models, Statistical, Pattern Recognition, Automated/methods, Animals, Databases, Factual, Semantics, Support Vector Machine