Results 1 - 20 of 31
1.
J Pathol ; 254(2): 147-158, 2021 06.
Article in English | MEDLINE | ID: mdl-33904171

ABSTRACT

Artificial intelligence (AI)-based systems applied to histopathology whole-slide images have the potential to improve patient care through mitigation of challenges posed by diagnostic variability, histopathology caseload, and shortage of pathologists. We sought to define the performance of an AI-based automated prostate cancer detection system, Paige Prostate, when applied to independent real-world data. The algorithm was employed to classify slides into two categories: benign (no further review needed) or suspicious (additional histologic and/or immunohistochemical analysis required). We assessed the sensitivity, specificity, positive predictive values (PPVs), and negative predictive values (NPVs) of a local pathologist, two central pathologists, and Paige Prostate in the diagnosis of 600 transrectal ultrasound-guided prostate needle core biopsy regions ('part-specimens') from 100 consecutive patients, and ascertained the impact of Paige Prostate on diagnostic accuracy and efficiency. Paige Prostate displayed high sensitivity (0.99; CI 0.96-1.0), NPV (1.0; CI 0.98-1.0), and specificity (0.93; CI 0.90-0.96) at the part-specimen level. At the patient level, Paige Prostate displayed optimal sensitivity (1.0; CI 0.93-1.0) and NPV (1.0; CI 0.91-1.0) at a specificity of 0.78 (CI 0.64-0.89). The 27 part-specimens that Paige Prostate considered suspicious but whose final diagnosis was benign comprised atrophy (n = 14), atrophy and apical prostate tissue (n = 1), apical/benign prostate tissue (n = 9), adenosis (n = 2), and post-atrophic hyperplasia (n = 1). Paige Prostate resulted in the identification of four additional patients whose diagnoses were upgraded from benign/suspicious to malignant. Additionally, this AI-based test provided an estimated 65.5% reduction of the diagnostic time for the material analyzed.
Given its optimal sensitivity and NPV, Paige Prostate has the potential to be employed for the automated identification of patients whose histologic slides could forgo full histopathologic review. In addition to providing incremental improvements in diagnostic accuracy and efficiency, this AI-based system identified patients whose prostate cancers were not initially diagnosed by three experienced histopathologists. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons, Ltd. on behalf of The Pathological Society of Great Britain and Ireland.
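The screening statistics quoted above (sensitivity, specificity, PPV, NPV) all derive from a 2×2 confusion matrix; a minimal sketch, with illustrative counts chosen only to echo the reported part-specimen sensitivity and specificity, not taken from the study:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts (illustration only, not the study's data):
m = screening_metrics(tp=99, fp=27, fn=1, tn=373)
```

With these counts, sensitivity is 0.99 and NPV is near 1.0, which is why a high-NPV screen can safely rule slides out of full review.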


Subjects
Artificial Intelligence , Prostatic Neoplasms/diagnosis , Aged , Aged, 80 and over , Biopsy , Biopsy, Large-Core Needle , Humans , Machine Learning , Male , Middle Aged , Pathologists , Prostate/pathology , Prostatic Neoplasms/pathology
2.
Mod Pathol ; 33(10): 2058-2066, 2020 10.
Article in English | MEDLINE | ID: mdl-32393768

ABSTRACT

Prostate cancer (PrCa) is the second most common cancer among men in the United States. The gold standard for detecting PrCa is the examination of prostate needle core biopsies. Diagnosis can be challenging, especially for small, well-differentiated cancers. Recently, machine learning algorithms have been developed for detecting PrCa in whole slide images (WSIs) with high test accuracy. However, the impact of these artificial intelligence systems on pathologic diagnosis is not known. To address this, we investigated how pathologists interact with Paige Prostate Alpha, a state-of-the-art PrCa detection system, in WSIs of prostate needle core biopsies stained with hematoxylin and eosin. Three AP board-certified pathologists assessed 304 anonymized prostate needle core biopsy WSIs in 8 hours. The pathologists classified each WSI as benign or cancerous. After ~4 weeks, pathologists were tasked with re-reviewing each WSI with the aid of Paige Prostate Alpha. For each WSI, Paige Prostate Alpha was used to perform cancer detection and, for WSIs where cancer was detected, the system marked the area with the highest probability of cancer. The original diagnosis for each slide was rendered by genitourinary pathologists and incorporated any ancillary studies requested during the original diagnostic assessment. The pathologists and Paige Prostate Alpha were measured against this ground truth. Without Paige Prostate Alpha, pathologists had an average sensitivity of 74% and an average specificity of 97%. With Paige Prostate Alpha, the average sensitivity for pathologists significantly increased to 90% with no statistically significant change in specificity. With Paige Prostate Alpha, pathologists more often correctly classified smaller, lower grade tumors, and spent less time analyzing each WSI.
Future studies will investigate if similar benefit is yielded when such a system is used to detect other forms of cancer in a setting that more closely emulates real practice.


Subjects
Deep Learning , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Pathology, Clinical/methods , Prostatic Neoplasms/diagnosis , Biopsy, Large-Core Needle , Humans , Male
3.
Eur Radiol ; 28(9): 4018-4026, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29572635

ABSTRACT

OBJECTIVES: Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). METHODS: The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. RESULTS: The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic for the agreement between automated and visual IQ assessment of 0.67 (p < 0.01). Among patients graded visually as good to excellent (n = 163), fair (n = 6), and poor (n = 3), an automated IQ score > 50% was assigned to 155, 5, and 2 patients, respectively. CONCLUSION: Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided results similar to visual analysis within the limits of inter-operator variability. KEY POINTS: • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardization of clinical trial results across different datasets.
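The Cohen's kappa reported above corrects observed rater agreement for the agreement expected by chance; a small self-contained sketch (the paired ratings below are made up):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' labels over the same items."""
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n                     # observed agreement
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Made-up ratings for six studies (1 = acceptable IQ, 0 = poor IQ):
kappa = cohens_kappa([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0])
```

Here five of six ratings agree (p_o = 5/6) but half that agreement is expected by chance (p_e = 1/2), giving kappa = 2/3, in the same "substantial agreement" band as the 0.67 reported above.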


Subjects
Computed Tomography Angiography/methods , Coronary Artery Disease/diagnostic imaging , Machine Learning , Radiographic Image Enhancement/methods , Aged , Area Under Curve , Female , Humans , Male , Middle Aged
4.
Magn Reson Med ; 68(4): 1176-89, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22213069

ABSTRACT

To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with generalized autocalibrating partially parallel acquisitions (GRAPPA) alone, the DEnoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), peak signal-to-noise ratio (PSNR), and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 iterative self-consistent parallel imaging reconstruction (the latter limited here by uniform undersampling).
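Peak signal-to-noise ratio, one of the comparison measures above, is simple to reproduce; a minimal sketch (the arrays here are toy data, not MR reconstructions):

```python
import numpy as np

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB of img relative to a reference image."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

ref = np.array([[0.0, 1.0], [1.0, 0.0]])
value = psnr(ref, ref + 0.1)  # uniform error of 0.1 -> MSE 0.01 -> 20 dB
```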


Subjects
Algorithms , Artifacts , Brain/anatomy & histology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Humans , Reproducibility of Results , Sensitivity and Specificity , Signal-To-Noise Ratio
5.
Med Phys ; 46(12): 5514-5527, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31603567

ABSTRACT

PURPOSE: Coronary x-ray computed tomography angiography (CCTA) continues to develop as a noninvasive method for the assessment of coronary vessel geometry and the identification of physiologically significant lesions. The uncertainty of quantitative lesion diameter measurement due to limited spatial resolution and vessel motion reduces the accuracy of CCTA diagnoses. In this paper, we introduce a new technique called computed tomography (CT)-number-Calibrated Diameter to improve the accuracy of the vessel and stenosis diameter measurements with CCTA. METHODS: A calibration phantom containing cylindrical holes (diameters spanning from 0.8 mm through 4.0 mm) capturing the range of diameters found in human coronary vessels was 3D printed. We also printed a human stenosis phantom with 17 tubular channels having the geometry of lesions derived from patient data. We acquired CT scans of the two phantoms with seven different imaging protocols. Calibration curves relating vessel intraluminal maximum voxel value (maximum CT number of a voxel, described in Hounsfield Units, HU) to true diameter, and full-width-at-half-maximum (FWHM) to true diameter were constructed for each CCTA protocol. In addition, we acquired scans with a small constant motion (15 mm/s) and used a motion correction reconstruction (Snapshot Freeze) algorithm to correct motion artifacts. We applied our technique to measure the lesion diameter in the 17 lesions in the stenosis phantom and compared the performance of CT-number-Calibrated Diameter to the ground truth diameter and a FWHM estimate. RESULTS: In all cases, vessel intraluminal maximum voxel value vs diameter was found to have a simple functional form based on the two-dimensional point spread function yielding a constant maximum voxel value region above a cutoff diameter, and a decreasing maximum voxel value vs decreasing diameter below a cutoff diameter.
After normalization, focal spot size and reconstruction kernel were the principal determinants of cutoff diameter and the rate of maximum voxel value reduction vs decreasing diameter. The small constant motion had a significant effect on the CT number calibration; however, the motion-correction algorithm returned the maximum voxel value vs diameter curve to that of stationary vessels. The CT-number-Calibrated Diameter technique showed better performance than FWHM estimation of diameter, yielding a high accuracy in the tested range (0.8 mm through 2.5 mm). We found a strong linear correlation between the smallest diameter in each of 17 lesions measured by CT-number-Calibrated Diameter (DC) and ground truth diameter (Dgt): DC = 0.951 × Dgt + 0.023 mm, r = 0.998, with a slope very close to 1.0 and an intercept very close to 0 mm. CONCLUSIONS: CT-number-Calibrated Diameter is an effective method to enhance the accuracy of the estimate of small vessel diameters and degree of coronary stenosis in CCTA.
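Below the cutoff diameter, the calibration curve described above relates maximum intraluminal CT number monotonically to true diameter, so a measured maximum voxel value can be mapped back to a diameter by interpolating the phantom measurements. A sketch of that inversion; the (HU, mm) calibration pairs are invented for illustration, not taken from the paper:

```python
import bisect

def diameter_from_max_hu(max_hu, calib):
    """Invert a monotonic (max CT number -> true diameter) calibration curve
    by linear interpolation between phantom calibration points."""
    hu_vals, diams = zip(*sorted(calib))
    if max_hu <= hu_vals[0]:
        return diams[0]
    if max_hu >= hu_vals[-1]:
        return diams[-1]
    i = bisect.bisect_left(hu_vals, max_hu)
    t = (max_hu - hu_vals[i - 1]) / (hu_vals[i] - hu_vals[i - 1])
    return diams[i - 1] + t * (diams[i] - diams[i - 1])

# Invented calibration points below the cutoff diameter:
calib = [(200.0, 0.8), (300.0, 1.2), (380.0, 1.8), (420.0, 2.5)]
```

Above the cutoff diameter the maximum voxel value plateaus, so this lookup only resolves diameters in the decreasing portion of the curve, matching the tested range quoted above.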


Subjects
Computed Tomography Angiography , Coronary Stenosis/diagnostic imaging , Coronary Stenosis/pathology , Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Artifacts , Calibration , Coronary Stenosis/physiopathology , Motion , Phantoms, Imaging
6.
EuroIntervention ; 14(15): e1609-e1618, 2019 Feb 08.
Article in English | MEDLINE | ID: mdl-29616627

ABSTRACT

AIMS: The aim of this study was to evaluate the accuracy of minimum lumen area (MLA) by coronary computed tomography angiography (cCTA) and its impact on fractional flow reserve (FFRCT). METHODS AND RESULTS: Fifty-seven patients (118 lesions, 72 vessels) who underwent cCTA and optical coherence tomography (OCT) were enrolled. OCT and cCTA were co-registered and MLAs were measured with both modalities. FFROCT was calculated using OCT-updated models with cCTA-based lumen geometry replaced by OCT-derived geometry. Lesions were grouped by Agatston score (AS) and minimum lumen diameter (MLD) using the OCT catheter and guidewire size (1.0 mm) as a threshold. For all lesions, the average absolute difference between cCTA and OCT MLA was 0.621±0.571 mm2. Pearson correlation coefficients between cCTA and OCT MLAs in lesions with low-intermediate and high AS were 0.873 and 0.787, respectively (both p<0.0001). Irrespective of AS score, excellent correlations were observed for MLA (r=0.839, p<0.0001) and FFR comparisons (r=0.918, p<0.0001) in lesions with MLD ≥1.0 mm but not for lesions with MLD <1.0 mm. CONCLUSIONS: The spatial resolution of cCTA or calcification does not practically limit the accuracy of lumen boundary identification by cCTA or FFRCT calculations for MLD ≥1.0 mm. The accuracy of cCTA MLA could not be adequately assessed for lesions with MLD <1.0 mm.
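The Pearson correlation coefficients reported above can be reproduced with the standard formula; a small sketch on toy data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```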


Subjects
Computed Tomography Angiography , Coronary Stenosis , Fractional Flow Reserve, Myocardial , Coronary Angiography , Coronary Vessels , Humans , Tomography, Optical Coherence
7.
IEEE Trans Biomed Eng ; 66(4): 946-955, 2019 04.
Article in English | MEDLINE | ID: mdl-30113890

ABSTRACT

OBJECTIVE: In this paper, we propose an algorithm for the generation of a patient-specific cardiac vascular network starting from segmented epicardial vessels down to the arterioles. METHOD: We extend a tree generation method based on satisfaction of functional principles, named constrained constructive optimization, to account for multiple, competing vascular trees. The algorithm simulates angiogenesis under vascular volume minimization with flow-related and geometrical constraints adapting the simultaneous tree growths to patient priors. The generated trees fill the entire left ventricle myocardium up to the arterioles. RESULTS: From actual vascular tree models segmented from CT images, we generated networks with 6000 terminal segments for six patients. These networks contain between 33 and 62 synthetic trees. All vascular models match morphometry properties previously described. CONCLUSION AND SIGNIFICANCE: Image-based models derived from CT angiography are being used clinically to simulate blood flow in the coronary arteries of individual patients to aid in the diagnosis of disease and planning treatments. However, image resolution limits vessel segmentation to larger epicardial arteries. The generated model can be used to simulate the blood flow and derived quantities from the aorta into the myocardium. This is an important step for diagnosis and treatment planning of coronary artery disease.


Subjects
Heart/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Models, Cardiovascular , Patient-Specific Modeling , Algorithms , Coronary Vessels/diagnostic imaging , Hemodynamics/physiology , Humans , Tomography, X-Ray Computed
8.
JACC Cardiovasc Imaging ; 12(6): 1032-1043, 2019 06.
Article in English | MEDLINE | ID: mdl-29550316

ABSTRACT

OBJECTIVES: The authors investigated the utility of noninvasive hemodynamic assessment in the identification of high-risk plaques that caused subsequent acute coronary syndrome (ACS). BACKGROUND: ACS is a critical event that impacts the prognosis of patients with coronary artery disease. However, the role of hemodynamic factors in the development of ACS is not well-known. METHODS: Seventy-two patients with clearly documented ACS and available coronary computed tomographic angiography (CTA) acquired between 1 month and 2 years before the development of ACS were included. In 66 culprit and 150 nonculprit lesions as a case-control design, the presence of adverse plaque characteristics (APC) was assessed and hemodynamic parameters (fractional flow reserve derived by coronary computed tomographic angiography [FFRCT], change in FFRCT across the lesion [△FFRCT], wall shear stress [WSS], and axial plaque stress) were analyzed using computational fluid dynamics. The best cut-off values for FFRCT, △FFRCT, WSS, and axial plaque stress were used to define the presence of adverse hemodynamic characteristics (AHC). The incremental discriminant and reclassification abilities for ACS prediction were compared among 3 models (model 1: percent diameter stenosis [%DS] and lesion length, model 2: model 1 + APC, and model 3: model 2 + AHC). RESULTS: The culprit lesions showed higher %DS (55.5 ± 15.4% vs. 43.1 ± 15.0%; p < 0.001) and higher prevalence of APC (80.3% vs. 42.0%; p < 0.001) than nonculprit lesions. Regarding hemodynamic parameters, culprit lesions showed lower FFRCT and higher △FFRCT, WSS, and axial plaque stress than nonculprit lesions (all p values <0.01). Among the 3 models, model 3, which included hemodynamic parameters, showed the highest c-index, and better discrimination (concordance statistic [c-index] 0.789 vs. 
0.747; p = 0.014) and reclassification abilities (category-free net reclassification index 0.287; p = 0.047; relative integrated discrimination improvement 0.368; p < 0.001) than model 2. Lesions with both APC and AHC showed significantly higher risk of the culprit for subsequent ACS than those with no APC/AHC (hazard ratio: 11.75; 95% confidence interval: 2.85 to 48.51; p = 0.001) and with either APC or AHC (hazard ratio: 3.22; 95% confidence interval: 1.86 to 5.55; p < 0.001). CONCLUSIONS: Noninvasive hemodynamic assessment enhanced the identification of high-risk plaques that subsequently caused ACS. The integration of noninvasive hemodynamic assessments may improve the identification of culprit lesions for future ACS. (Exploring the Mechanism of Plaque Rupture in Acute Coronary Syndrome Using Coronary CT Angiography and Computational Fluid Dynamic [EMERALD]; NCT02374775).


Subjects
Acute Coronary Syndrome/etiology , Computed Tomography Angiography , Coronary Angiography/methods , Coronary Artery Disease/diagnostic imaging , Coronary Stenosis/diagnostic imaging , Coronary Vessels/diagnostic imaging , Models, Cardiovascular , Patient-Specific Modeling , Plaque, Atherosclerotic , Acute Coronary Syndrome/diagnostic imaging , Acute Coronary Syndrome/physiopathology , Aged , Aged, 80 and over , Coronary Artery Disease/complications , Coronary Artery Disease/physiopathology , Coronary Stenosis/complications , Coronary Stenosis/physiopathology , Coronary Vessels/physiopathology , Female , Fractional Flow Reserve, Myocardial , Hemodynamics , Humans , Hydrodynamics , Male , Middle Aged , Predictive Value of Tests , Prognosis , Retrospective Studies , Risk Factors , Rupture, Spontaneous , Severity of Illness Index , Stress, Mechanical
9.
IEEE Trans Pattern Anal Mach Intell ; 28(11): 1768-83, 2006 Nov.
Article in English | MEDLINE | ID: mdl-17063682

ABSTRACT

A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs.
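The probabilities described above come from a combinatorial Dirichlet problem: restrict the graph Laplacian to the unlabeled nodes and solve one linear system per label. A toy sketch on a three-node path graph; in an image, the nodes would be pixels and the edge weights would come from intensity differences, and a dense solve is used here only for brevity:

```python
import numpy as np

def random_walker_probs(edges, weights, n, seeds):
    """Random-walker probabilities on a graph: for each label, solve the
    Dirichlet problem L_U x = -B m, where L_U is the Laplacian restricted
    to unseeded nodes and m marks the seeds carrying that label."""
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    labels = sorted(set(seeds.values()))
    seeded = sorted(seeds)
    unseeded = [v for v in range(n) if v not in seeds]
    Lu = L[np.ix_(unseeded, unseeded)]
    B = L[np.ix_(unseeded, seeded)]
    probs = np.zeros((n, len(labels)))
    for k, lab in enumerate(labels):
        m = np.array([1.0 if seeds[s] == lab else 0.0 for s in seeded])
        probs[unseeded, k] = np.linalg.solve(Lu, -B @ m)
        for s, slab in seeds.items():
            probs[s, k] = 1.0 if slab == lab else 0.0
    return probs  # argmax over columns gives each node's label
```

On the path 0-1-2 with seeds at both ends and unit weights, the middle node reaches either seed first with probability 0.5; in a real image, the data-dependent weights break such ties.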


Subjects
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Information Storage and Retrieval/methods , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity
10.
IEEE Trans Pattern Anal Mach Intell ; 28(3): 469-75, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16526432

ABSTRACT

Spectral graph partitioning provides a powerful approach to image segmentation. We introduce an alternate idea that finds partitions with a small isoperimetric constant, requiring the solution of a linear system rather than an eigenvector problem. This approach produces the high-quality segmentations of spectral methods, but with improved speed and stability.
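The linear-system formulation can be sketched in a few lines: ground one node, solve the reduced Laplacian system against the degree vector, and threshold the resulting potentials. The median threshold below is a simplification for brevity; the actual algorithm sweeps thresholds to minimize the isoperimetric ratio:

```python
import numpy as np

def isoperimetric_partition(L, ground=0):
    """Isoperimetric partitioning sketch: ground one node, solve the reduced
    Laplacian system L0 x = d0 (d = node degrees), then threshold the
    potentials x to split the graph in two (median threshold for brevity)."""
    n = L.shape[0]
    keep = [i for i in range(n) if i != ground]
    d0 = np.diag(L)[keep]                        # degrees of non-ground nodes
    x = np.linalg.solve(L[np.ix_(keep, keep)], d0)
    full = np.zeros(n)
    full[keep] = x                               # ground node has potential 0
    return full <= np.median(full)               # True = ground-side partition
```

On two triangles joined by a single bridge edge, the potentials jump across the bridge, so the partition recovers the two triangles.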


Subjects
Algorithms , Artificial Intelligence , Computer Graphics , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Information Storage and Retrieval/methods , Reproducibility of Results , Sensitivity and Specificity
11.
IEEE Trans Image Process ; 25(6): 2508-18, 2016 06.
Article in English | MEDLINE | ID: mdl-27019488

ABSTRACT

Minimization of boundary curvature is a classic regularization technique for image segmentation in the presence of noisy image data. Techniques for minimizing curvature have historically been derived from gradient descent methods which could be trapped by a local minimum and, therefore, required a good initialization. Recently, combinatorial optimization techniques have overcome this barrier by providing solutions that can achieve a global optimum. However, curvature regularization methods can fail when the true object has high curvature. In these circumstances, existing methods depend on a data term to overcome the high curvature of the object. Unfortunately, the data term may be ambiguous in some images, which causes these methods also to fail. To overcome these problems, we propose a contrast driven elastica model (including curvature), which can accommodate high curvature objects and an ambiguous data model. We demonstrate that we can accurately segment extremely challenging synthetic and real images with ambiguous data discrimination, poor boundary contrast, and sharp corners. We provide a quantitative evaluation of our segmentation approach when applied to a standard image segmentation data set.

12.
IEEE Trans Med Imaging ; 34(12): 2562-71, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26087484

ABSTRACT

Patient-specific blood flow modeling combining imaging data and computational fluid dynamics can aid in the assessment of coronary artery disease. Accurate coronary segmentation and realistic physiologic modeling of boundary conditions are important steps to ensure a high diagnostic performance. Segmentation of the coronary arteries can be constructed by a combination of automated algorithms with human review and editing. However, blood pressure and flow are not impacted equally by different local sections of the coronary artery tree. Focusing human review and editing towards regions that will most affect the subsequent simulations can significantly accelerate the review process. We define geometric sensitivity as the standard deviation in hemodynamics-derived metrics due to uncertainty in lumen segmentation. We develop a machine learning framework for estimating the geometric sensitivity in real time. Features used include geometric and clinical variables, and reduced-order models. We develop an anisotropic kernel regression method for assessment of lumen narrowing score, which is used as a feature in the machine learning algorithm. A multi-resolution sensitivity algorithm is introduced to hierarchically refine regions of high sensitivity so that we can quantify sensitivities to a desired spatial resolution. We show that the mean absolute error of the machine learning algorithm compared to 3D simulations is less than 0.01. We further demonstrate that sensitivity is not predicted simply by anatomic reduction but also encodes information about hemodynamics which in turn depends on downstream boundary conditions. This sensitivity approach can be extended to other systems such as cerebral flow, electro-mechanical simulations, etc.
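The anisotropic kernel regression used above for the lumen narrowing score can be illustrated by its plain isotropic counterpart, Nadaraya-Watson regression with a Gaussian kernel; this is a simplified stand-in, not the paper's estimator:

```python
import numpy as np

def kernel_regression(x_train, y_train, x, bandwidth=1.0):
    """Nadaraya-Watson estimate at x: a weighted average of training targets,
    with Gaussian weights that decay with distance from x."""
    w = np.exp(-0.5 * ((x - x_train) / bandwidth) ** 2)
    return float(w @ y_train / w.sum())
```

The anisotropic version in the paper replaces the scalar bandwidth with direction-dependent scaling, so correlations along the vessel axis are weighted differently from cross-sectional ones.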


Subjects
Coronary Angiography/methods , Hemodynamics/physiology , Imaging, Three-Dimensional/methods , Machine Learning , Algorithms , Coronary Artery Disease/diagnostic imaging , Coronary Vessels/diagnostic imaging , Humans , Tomography, X-Ray Computed
13.
Clin Infect Dis ; 39(5): 630-5, 2004 Sep 01.
Article in English | MEDLINE | ID: mdl-15356774

ABSTRACT

BACKGROUND: Polymerase chain reaction (PCR) is becoming more common in diagnostic laboratories. In some instances, its value has been established. In other cases, assays exist, but their beneficial use has not been determined. This article summarizes findings from 3485 patients who underwent testing over a 6-year period in our laboratory. METHODS: A panel of PCR assays was used for the detection of a range of viruses associated with central nervous system (CNS) infections. PCR results were analyzed in conjunction with information about patient age and sex, the time between onset and specimen collection, and other variables. Medical chart review was conducted for 280 patients to gain diagnostic and epidemiologic insight with regard to cases of unresolved encephalitis. RESULTS: A total of 498 PCR-positive samples (14.3%) were detected. Enteroviruses accounted for the largest number (360 [72.3%]) of positive PCR results, followed by herpes simplex virus (76 [15.3%]), varicella-zoster virus (29 [5.82%]), and West Nile virus (WNV) (18 [3.61%]). Of the 360 patients who tested positive for enterovirus, only 46 met the Centers for Disease Control and Prevention's encephalitis definition; applying this definition produced the greatest decrease (87.2%) in positive PCR results. Overall, the PCR positivity rate for specimens collected within 5 days after illness onset was 17.2%, compared with 8.6% for specimens collected ≥6 days after onset. CONCLUSIONS: The value of PCR in the diagnosis of viral infections has been established. PCR is of lower value in the detection of WNV in the CNS, compared with serological testing, but is of greater value in the detection of other arboviruses, particularly viruses in the California serogroup. Medical chart reviews indicated that apparent CNS infection resolves in approximately 50% of cases.


Subjects
Central Nervous System Infections/diagnosis , Central Nervous System Infections/virology , Polymerase Chain Reaction/methods , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , DNA, Viral , Female , Humans , Infant , Male , Middle Aged , RNA, Viral
14.
Article in English | MEDLINE | ID: mdl-25485356

ABSTRACT

Patient-specific modeling of blood flow combining CT image data and computational fluid dynamics has significant potential for assessing the functional significance of coronary artery disease. An accurate segmentation of the coronary arteries, an essential ingredient for blood flow modeling methods, is currently attained by a combination of automated algorithms with human review and editing. However, not all portions of the coronary artery tree affect blood flow and pressure equally, and it is of significant importance to direct human review and editing towards regions that will most affect the subsequent simulations. We present a data-driven approach for real-time estimation of sensitivity of blood-flow simulations to uncertainty in lumen segmentation. A machine learning method is used to map patient-specific features to a sensitivity value, using a large database of patients with precomputed sensitivities. We validate the results of the machine learning algorithm using direct 3D blood flow simulations and demonstrate that the algorithm can predict sensitivities in real time with only a small reduction in accuracy as compared to the 3D solutions. This approach can also be applied to other medical applications where physiologic simulations are performed using patient-specific models created from image data.


Subjects
Coronary Angiography/methods , Coronary Artery Disease/diagnostic imaging , Coronary Artery Disease/physiopathology , Coronary Circulation , Fractional Flow Reserve, Myocardial , Models, Cardiovascular , Radiographic Image Interpretation, Computer-Assisted/methods , Blood Flow Velocity , Computer Simulation , Computer Systems , Humans , Reproducibility of Results , Sensitivity and Specificity , Tomography, X-Ray Computed/methods
15.
IEEE Trans Pattern Anal Mach Intell ; 35(9): 2143-60, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23868776

ABSTRACT

Existing methods for surface matching are limited by the tradeoff between precision and computational efficiency. Here, we present an improved algorithm for dense vertex-to-vertex correspondence that uses direct matching of features defined on a surface and improves it by using spectral correspondence as a regularization. This algorithm has the speed of both feature matching and spectral matching while exhibiting greatly improved precision (distance errors of 1.4 percent). The method, FOCUSR, implicitly incorporates such additional features to calculate the correspondence and relies on the smoothness of the lowest-frequency harmonics of a graph Laplacian to spatially regularize the features. In its simplest form, FOCUSR is an improved spectral correspondence method that nonrigidly deforms spectral embeddings. We provide here a full realization of spectral correspondence in which virtually any feature can be used as additional information via weights on graph edges, but also on graph nodes and as extra embedded coordinates. As an example, the full power of FOCUSR is demonstrated in a real-case scenario with the challenging task of brain surface matching across several individuals. Our results show that combining features and regularizing them in a spectral embedding greatly improves the matching precision (to a submillimeter level) while performing at much greater speed than existing methods.
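The "lowest-frequency harmonics of a graph Laplacian" mentioned above are its eigenvectors with the smallest nonzero eigenvalues; embedding each vertex by these coordinates yields the spectral embedding that spectral correspondence methods align. A bare-bones sketch using a dense eigendecomposition (real surface meshes would need sparse solvers):

```python
import numpy as np

def spectral_embedding(L, k=2):
    """Embed graph nodes by the k lowest-frequency nonconstant eigenvectors
    of the symmetric graph Laplacian L."""
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]            # skip the constant zero-eigenvalue mode

# Path graph 0-1-2-3:
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
emb = spectral_embedding(L, k=2)
```

Because eigenvector signs and orderings can flip between two meshes, spectral matching pipelines must align the embeddings before comparing them, which is part of what the nonrigid deformation step addresses.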


Subjects
Algorithms , Image Processing, Computer-Assisted/methods , Models, Theoretical , Animals , Brain/anatomy & histology , Databases, Factual , Horses , Humans , Software , Surface Properties
16.
Med Image Comput Comput Assist Interv ; 16(Pt 1): 122-30, 2013.
Article in English | MEDLINE | ID: mdl-24505657

ABSTRACT

Ultrasound acquisition is a challenging task that requires simultaneous adjustment of several acquisition parameters (the depth, the focus, the frequency and its operation mode). If the acquisition parameters are not properly chosen, the resulting image will have a poor quality and will degrade the patient diagnosis and treatment workflow. Several hardware-based systems for autotuning the acquisition parameters have been previously proposed, but these solutions were largely abandoned because they failed to properly account for tissue inhomogeneity and other patient-specific characteristics. Consequently, in routine practice the clinician either uses population-based parameter presets or manually adjusts the acquisition parameters for each patient during the scan. In this paper, we revisit the problem of autotuning the acquisition parameters by taking a completely novel approach and producing a solution based on image analytics. Our solution is inspired by the autofocus capability of conventional digital cameras, but is significantly more challenging because the number of acquisition parameters is large and the determination of "good quality" images is more difficult to assess. Surprisingly, we show that the set of acquisition parameters which produce images that are favored by clinicians comprise a 1D manifold, allowing for a real-time optimization to maximize image quality. We demonstrate our method for acquisition parameter autotuning on several live patients, showing that our system can start with a poor initial set of parameters and automatically optimize the parameters to produce high quality images.


Subjects
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Ultrasonography/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
17.
IEEE Trans Med Imaging ; 32(7): 1325-35, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23584259

ABSTRACT

The amount of calibration data needed to produce images of adequate quality can prevent auto-calibrating parallel imaging reconstruction methods like generalized autocalibrating partially parallel acquisitions (GRAPPA) from achieving a high total acceleration factor. To improve the quality of calibration when the number of auto-calibration signal (ACS) lines is restricted, we propose a sparsity-promoting regularized calibration method that finds a GRAPPA kernel consistent with the ACS fit equations that yields jointly sparse reconstructed coil channel images. Several experiments evaluate the performance of the proposed method relative to unregularized and existing regularized calibration methods for both low-quality and underdetermined fits from the ACS lines. These experiments demonstrate that the proposed method, like other regularization methods, is capable of mitigating noise amplification, and in addition, the proposed method is particularly effective at minimizing coherent aliasing artifacts caused by poor kernel calibration in real data. Using the proposed method, we can increase the total achievable acceleration while reducing degradation of the reconstructed image better than existing regularized calibration methods.


Subjects
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Brain/anatomy & histology , Calibration , Computer Simulation , Humans , Neuroimaging , Phantoms, Imaging
18.
Med Image Comput Comput Assist Interv ; 15(Pt 1): 528-36, 2012.
Article in English | MEDLINE | ID: mdl-23285592

ABSTRACT

The automatic delineation of the boundaries of organs and other anatomical structures is a key component of many medical image processing systems. In this paper we present a generic learning approach based on a novel space of segmentation features, which can be trained to predict the overlap error and Dice coefficient of an arbitrary organ segmentation without knowing the ground truth delineation. We show that the regressor is a much stronger predictor of these error metrics than the responses of probabilistic boosting classifiers trained on the segmentation boundary. The presented approach not only allows us to build reliable confidence measures and fidelity checks, but also to rank competing segmentation hypotheses against each other during online use of the segmentation algorithm in clinical practice.
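The mechanism, learning to map features of a segmentation to its (unobserved) quality score, can be sketched with a toy regression. The features, the feature-to-Dice relation, and the ridge regressor below are all synthetic assumptions standing in for the paper's feature space and learner:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic setup: each row of X is a feature vector describing one candidate
# segmentation; its "Dice" is a squashed function of those features. At test
# time the regressor sees only the features, never the ground truth.
n_train, n_test, n_feat = 200, 50, 5
X = rng.standard_normal((n_train + n_test, n_feat))
true_w = 0.5 * rng.standard_normal(n_feat)
dice = 1.0 / (1.0 + np.exp(-(X @ true_w)))   # scores squashed into (0, 1)

# Ridge regression with an intercept term.
Xa = np.hstack([X, np.ones((len(X), 1))])
Xtr, ytr = Xa[:n_train], dice[:n_train]
Xte, yte = Xa[n_train:], dice[n_train:]
lam = 1e-2
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_feat + 1), Xtr.T @ ytr)
mae = float(np.mean(np.abs(Xte @ w - yte)))
print(round(mae, 3))
```

Ranking several segmentation hypotheses then amounts to sorting them by the predicted score, which is exactly the online use case the abstract describes.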


Subjects
Lung/pathology , Algorithms , Computer Simulation , False Positive Reactions , Humans , Image Processing, Computer-Assisted , Models, Statistical , Pattern Recognition, Automated/methods , Probability , Reproducibility of Results
19.
Article in English | MEDLINE | ID: mdl-25278742

ABSTRACT

We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages.
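The Random Walker core reduces to a sparse linear solve: given seeds, the probability that a walker from each unseeded node first reaches a foreground seed is the solution of the combinatorial Dirichlet problem on the graph Laplacian. A minimal single-image sketch (dense solve on a tiny chain; the cosegmentation constraints and CUDA mapping are not shown):

```python
import numpy as np

def random_walker_prob(adj, fg_seeds, bg_seeds):
    """Single-image Random Walker: solve L_U x_U = -B x_S for the
    foreground-arrival probabilities at unseeded nodes. In practice L is
    sparse, and this solve is what maps to GPU linear algebra."""
    n = adj.shape[0]
    lap = np.diag(adj.sum(axis=1)) - adj
    seeds = sorted(fg_seeds + bg_seeds)
    unseeded = [i for i in range(n) if i not in seeds]
    x_s = np.array([1.0 if s in fg_seeds else 0.0 for s in seeds])
    L_u = lap[np.ix_(unseeded, unseeded)]
    B = lap[np.ix_(unseeded, seeds)]
    x_u = np.linalg.solve(L_u, -B @ x_s)
    prob = np.zeros(n)
    prob[seeds] = x_s
    prob[unseeded] = x_u
    return prob

# 1-D chain of 5 pixels with unit edge weights: foreground seed at node 0,
# background seed at node 4. The harmonic solution decays linearly.
n = 5
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
p = random_walker_prob(adj, fg_seeds=[0], bg_seeds=[4])
print(np.round(p, 2))
```

Thresholding `p` at 0.5 yields the segmentation; the paper's contribution couples such solves across images without the per-pixel-pair auxiliary nodes.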

20.
Front Syst Neurosci ; 6: 78, 2012.
Article in English | MEDLINE | ID: mdl-23267318

ABSTRACT

Brain imaging methods have long held promise as diagnostic aids for neuropsychiatric conditions with complex behavioral phenotypes such as Attention-Deficit/Hyperactivity Disorder. This promise has largely been unrealized, at least partly due to the heterogeneity of clinical populations and the small sample size of many studies. A large, multi-center dataset provided by the ADHD-200 Consortium affords new opportunities to test methods for individual diagnosis based on MRI-observable structural brain attributes and functional interactions observable from resting-state fMRI. In this study, we systematically calculated a large set of standard and new quantitative markers from individual subject datasets. These features (>12,000 per subject) consisted of local anatomical attributes such as cortical thickness and structure volumes, and both local and global resting-state network measures. Three methods were used to compute graphs representing interdependencies between activations in different brain areas, and a full set of network features was derived from each. Of these, features derived from the inverse of the time series covariance matrix, under an L1-norm regularization penalty, proved most powerful. Anatomical and network feature sets were used individually, and combined with non-imaging phenotypic features from each subject. Machine learning algorithms were used to rank attributes, and performance was assessed under cross-validation and on a separate test set of 168 subjects for a variety of feature set combinations. While non-imaging features gave highest performance in cross-validation, the addition of imaging features in sufficient numbers led to improved generalization to new data. Stratification by gender also proved to be a fruitful strategy to improve classifier performance. We describe the overall approach used, compare the predictive power of different classes of features, and describe the most impactful features in relation to the current literature.
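The most powerful feature class above, entries of a regularized inverse covariance (precision) matrix of regional time series, can be sketched on toy data. The paper uses an L1 penalty (graphical-lasso style); as a simpler stand-in this sketch uses ridge (L2) shrinkage before inversion, and the three-region time series are synthetic:

```python
import numpy as np

def regularized_precision(ts, alpha=0.1):
    """Connectivity features from the inverse covariance of region time
    series; off-diagonal entries reflect direct (partial) interactions.
    Ridge shrinkage here is a simpler stand-in for the L1 penalty."""
    ts = ts - ts.mean(axis=0)
    cov = ts.T @ ts / len(ts)
    return np.linalg.inv(cov + alpha * np.eye(cov.shape[0]))

rng = np.random.default_rng(2)
# Toy "resting-state" series for 3 regions; regions 0 and 1 co-activate,
# region 2 is independent.
base = rng.standard_normal((100, 1))
ts = np.hstack([base + 0.1 * rng.standard_normal((100, 1)),
                base + 0.1 * rng.standard_normal((100, 1)),
                rng.standard_normal((100, 1))])
prec = regularized_precision(ts)
# The coupled pair (0, 1) gets a much larger off-diagonal entry than the
# unrelated pair (0, 2); these entries feed the classifier as edge features.
print(prec.shape)
```

In the study, one such matrix per subject (over many brain regions) is vectorized into the >12,000-dimensional feature pool alongside the anatomical measures.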
