Results 1 - 20 of 31
1.
J Pathol ; 254(2): 147-158, 2021 06.
Article in English | MEDLINE | ID: mdl-33904171

ABSTRACT

Artificial intelligence (AI)-based systems applied to histopathology whole-slide images have the potential to improve patient care through mitigation of challenges posed by diagnostic variability, histopathology caseload, and shortage of pathologists. We sought to define the performance of an AI-based automated prostate cancer detection system, Paige Prostate, when applied to independent real-world data. The algorithm was employed to classify slides into two categories: benign (no further review needed) or suspicious (additional histologic and/or immunohistochemical analysis required). We assessed the sensitivity, specificity, positive predictive values (PPVs), and negative predictive values (NPVs) of a local pathologist, two central pathologists, and Paige Prostate in the diagnosis of 600 transrectal ultrasound-guided prostate needle core biopsy regions ('part-specimens') from 100 consecutive patients, and sought to ascertain the impact of Paige Prostate on diagnostic accuracy and efficiency. Paige Prostate displayed high sensitivity (0.99; CI 0.96-1.0), NPV (1.0; CI 0.98-1.0), and specificity (0.93; CI 0.90-0.96) at the part-specimen level. At the patient level, Paige Prostate displayed optimal sensitivity (1.0; CI 0.93-1.0) and NPV (1.0; CI 0.91-1.0) at a specificity of 0.78 (CI 0.64-0.89). The 27 part-specimens considered by Paige Prostate as suspicious, whose final diagnosis was benign, were found to comprise atrophy (n = 14), atrophy and apical prostate tissue (n = 1), apical/benign prostate tissue (n = 9), adenosis (n = 2), and post-atrophic hyperplasia (n = 1). Paige Prostate resulted in the identification of four additional patients whose diagnoses were upgraded from benign/suspicious to malignant. Additionally, this AI-based test provided an estimated 65.5% reduction of the diagnostic time for the material analyzed. Given its optimal sensitivity and NPV, Paige Prostate has the potential to be employed for the automated identification of patients whose histologic slides could forgo full histopathologic review. In addition to providing incremental improvements in diagnostic accuracy and efficiency, this AI-based system identified patients whose prostate cancers were not initially diagnosed by three experienced histopathologists. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons, Ltd. on behalf of The Pathological Society of Great Britain and Ireland.
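
A minimal sketch of how the screening metrics quoted above (sensitivity, specificity, PPV, NPV with 95% confidence intervals) can be tabulated for a benign-versus-suspicious call; the labels and the Wilson interval choice are illustrative assumptions, not the study's data or statistical code.

    import numpy as np

    def wilson_ci(successes, total, z=1.96):
        """Wilson score interval for a binomial proportion."""
        p = successes / total
        denom = 1 + z**2 / total
        center = (p + z**2 / (2 * total)) / denom
        half = z * np.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
        return (center - half, center + half)

    def screening_metrics(y_true, y_pred):
        """y_true, y_pred: 1 = cancer/suspicious, 0 = benign (hypothetical labels)."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = int(np.sum((y_pred == 1) & (y_true == 1)))
        tn = int(np.sum((y_pred == 0) & (y_true == 0)))
        fp = int(np.sum((y_pred == 1) & (y_true == 0)))
        fn = int(np.sum((y_pred == 0) & (y_true == 1)))
        return {
            "sensitivity": (tp / (tp + fn), wilson_ci(tp, tp + fn)),
            "specificity": (tn / (tn + fp), wilson_ci(tn, tn + fp)),
            "PPV": (tp / (tp + fp), wilson_ci(tp, tp + fp)),
            "NPV": (tn / (tn + fn), wilson_ci(tn, tn + fn)),
        }

    # Toy example with ten part-specimens (not the study's data).
    print(screening_metrics([1, 1, 0, 0, 0, 1, 0, 0, 0, 1],
                            [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]))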


Subject(s)
Artificial Intelligence , Prostatic Neoplasms/diagnosis , Aged , Aged, 80 and over , Biopsy , Biopsy, Large-Core Needle , Humans , Machine Learning , Male , Middle Aged , Pathologists , Prostate/pathology , Prostatic Neoplasms/pathology
2.
Mod Pathol ; 33(10): 2058-2066, 2020 10.
Article in English | MEDLINE | ID: mdl-32393768

ABSTRACT

Prostate cancer (PrCa) is the second most common cancer among men in the United States. The gold standard for detecting PrCa is the examination of prostate needle core biopsies. Diagnosis can be challenging, especially for small, well-differentiated cancers. Recently, machine learning algorithms have been developed for detecting PrCa in whole slide images (WSIs) with high test accuracy. However, the impact of these artificial intelligence systems on pathologic diagnosis is not known. To address this, we investigated how pathologists interact with Paige Prostate Alpha, a state-of-the-art PrCa detection system, in WSIs of prostate needle core biopsies stained with hematoxylin and eosin. Three AP board-certified pathologists assessed 304 anonymized prostate needle core biopsy WSIs in 8 hours. The pathologists classified each WSI as benign or cancerous. After ~4 weeks, pathologists were tasked with re-reviewing each WSI with the aid of Paige Prostate Alpha. For each WSI, Paige Prostate Alpha was used to perform cancer detection and, for WSIs where cancer was detected, the system marked the area where cancer was detected with the highest probability. The original diagnosis for each slide was rendered by genitourinary pathologists and incorporated any ancillary studies requested during the original diagnostic assessment. The pathologists and Paige Prostate Alpha were measured against this ground truth. Without Paige Prostate Alpha, pathologists had an average sensitivity of 74% and an average specificity of 97%. With Paige Prostate Alpha, the average sensitivity for pathologists significantly increased to 90% with no statistically significant change in specificity. With Paige Prostate Alpha, pathologists more often correctly classified smaller, lower grade tumors, and spent less time analyzing each WSI. Future studies will investigate whether similar benefit is yielded when such a system is used to detect other forms of cancer in a setting that more closely emulates real practice.
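
A hedged sketch of the paired reader comparison described above: per-reader sensitivity on the same slides without and with AI assistance, with an exact McNemar test on the discordant cancer slides. All labels below are invented for illustration, and the McNemar test is only one reasonable choice; the study's statistical analysis may differ.

    import numpy as np
    from scipy.stats import binomtest

    y_true  = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])  # 1 = cancer on ground-truth diagnosis
    unaided = np.array([1, 0, 1, 0, 1, 0, 0, 0, 1, 0])  # reader alone
    aided   = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])  # reader with AI assistance

    cancer = y_true == 1
    sens_unaided = (unaided[cancer] == 1).mean()
    sens_aided = (aided[cancer] == 1).mean()

    # Discordant cancer slides: detected only with AI (b) vs. only without AI (c).
    b = int(np.sum((unaided[cancer] == 0) & (aided[cancer] == 1)))
    c = int(np.sum((unaided[cancer] == 1) & (aided[cancer] == 0)))
    p_value = binomtest(b, b + c, p=0.5).pvalue if (b + c) > 0 else 1.0

    print(f"sensitivity unaided={sens_unaided:.2f}, aided={sens_aided:.2f}, exact McNemar p={p_value:.3f}")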


Subject(s)
Deep Learning , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Pathology, Clinical/methods , Prostatic Neoplasms/diagnosis , Biopsy, Large-Core Needle , Humans , Male
3.
Med Phys ; 46(12): 5514-5527, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31603567

ABSTRACT

PURPOSE: Coronary x-ray computed tomography angiography (CCTA) continues to develop as a noninvasive method for the assessment of coronary vessel geometry and the identification of physiologically significant lesions. The uncertainty of quantitative lesion diameter measurement due to limited spatial resolution and vessel motion reduces the accuracy of CCTA diagnoses. In this paper, we introduce a new technique called computed tomography (CT)-number-Calibrated Diameter to improve the accuracy of the vessel and stenosis diameter measurements with CCTA. METHODS: A calibration phantom containing cylindrical holes (diameters spanning from 0.8 mm through 4.0 mm) capturing the range of diameters found in human coronary vessels was 3D printed. We also printed a human stenosis phantom with 17 tubular channels having the geometry of lesions derived from patient data. We acquired CT scans of the two phantoms with seven different imaging protocols. Calibration curves relating vessel intraluminal maximum voxel value (maximum CT number of a voxel, described in Hounsfield Units, HU) to true diameter, and full-width-at-half-maximum (FWHM) to true diameter were constructed for each CCTA protocol. In addition, we acquired scans with a small constant motion (15 mm/s) and used a motion correction reconstruction (Snapshot Freeze) algorithm to correct motion artifacts. We applied our technique to measure the lesion diameter in the 17 lesions in the stenosis phantom and compared the performance of CT-number-Calibrated Diameter to the ground truth diameter and a FWHM estimate. RESULTS: In all cases, vessel intraluminal maximum voxel value vs diameter was found to have a simple functional form based on the two-dimensional point spread function yielding a constant maximum voxel value region above a cutoff diameter, and a decreasing maximum voxel value vs decreasing diameter below a cutoff diameter. After normalization, focal spot size and reconstruction kernel were the principal determinants of cutoff diameter and the rate of maximum voxel value reduction vs decreasing diameter. The small constant motion had a significant effect on the CT number calibration; however, the motion-correction algorithm returned the maximum voxel value vs diameter curve to that of stationary vessels. The CT-number-Calibrated Diameter technique showed better performance than FWHM estimation of diameter, yielding a high accuracy in the tested range (0.8 mm through 2.5 mm). We found a strong linear correlation between the smallest diameter in each of 17 lesions measured by CT-number-Calibrated Diameter (DC) and ground truth diameter (Dgt): DC = 0.951 × Dgt + 0.023 mm (r = 0.998), with a slope very close to 1.0 and an intercept very close to 0 mm. CONCLUSIONS: CT-number-Calibrated Diameter is an effective method to enhance the accuracy of the estimate of small vessel diameters and degree of coronary stenosis in CCTA.
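
A schematic sketch of the calibration idea described in the METHODS: build a curve of intraluminal maximum CT number versus known phantom diameter, then invert it for a measured vessel. The phantom values below are invented; only the procedure (monotone interpolation below the plateau, fall back to FWHM above it) follows the abstract.

    import numpy as np

    # Hypothetical phantom calibration: known hole diameters (mm) and measured max HU.
    cal_diameter = np.array([0.8, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0])
    cal_max_hu   = np.array([180., 240., 330., 390., 418., 426., 430.])

    def calibrated_diameter(measured_max_hu, plateau_tol=5.0):
        """Invert the max-HU-vs-diameter curve; valid only below the plateau (cutoff) region."""
        plateau = cal_max_hu.max()
        if measured_max_hu >= plateau - plateau_tol:
            return None  # above the cutoff diameter the max HU no longer encodes size
        return float(np.interp(measured_max_hu, cal_max_hu, cal_diameter))

    print(calibrated_diameter(300.0))   # roughly 1.3 mm on this toy curve
    print(calibrated_diameter(428.0))   # None: in the plateau region, use FWHM instead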


Subject(s)
Computed Tomography Angiography , Coronary Stenosis/diagnostic imaging , Coronary Stenosis/pathology , Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Artifacts , Calibration , Coronary Stenosis/physiopathology , Movement , Phantoms, Imaging
4.
JACC Cardiovasc Imaging ; 12(6): 1032-1043, 2019 06.
Article in English | MEDLINE | ID: mdl-29550316

ABSTRACT

OBJECTIVES: The authors investigated the utility of noninvasive hemodynamic assessment in the identification of high-risk plaques that caused subsequent acute coronary syndrome (ACS). BACKGROUND: ACS is a critical event that impacts the prognosis of patients with coronary artery disease. However, the role of hemodynamic factors in the development of ACS is not well-known. METHODS: Seventy-two patients with clearly documented ACS and available coronary computed tomographic angiography (CTA) acquired between 1 month and 2 years before the development of ACS were included. In 66 culprit and 150 nonculprit lesions as a case-control design, the presence of adverse plaque characteristics (APC) was assessed and hemodynamic parameters (fractional flow reserve derived by coronary computed tomographic angiography [FFRCT], change in FFRCT across the lesion [ΔFFRCT], wall shear stress [WSS], and axial plaque stress) were analyzed using computational fluid dynamics. The best cut-off values for FFRCT, ΔFFRCT, WSS, and axial plaque stress were used to define the presence of adverse hemodynamic characteristics (AHC). The incremental discriminant and reclassification abilities for ACS prediction were compared among 3 models (model 1: percent diameter stenosis [%DS] and lesion length, model 2: model 1 + APC, and model 3: model 2 + AHC). RESULTS: The culprit lesions showed higher %DS (55.5 ± 15.4% vs. 43.1 ± 15.0%; p < 0.001) and higher prevalence of APC (80.3% vs. 42.0%; p < 0.001) than nonculprit lesions. Regarding hemodynamic parameters, culprit lesions showed lower FFRCT and higher ΔFFRCT, WSS, and axial plaque stress than nonculprit lesions (all p values <0.01). Among the 3 models, model 3, which included hemodynamic parameters, showed the highest c-index, and better discrimination (concordance statistic [c-index] 0.789 vs. 0.747; p = 0.014) and reclassification abilities (category-free net reclassification index 0.287; p = 0.047; relative integrated discrimination improvement 0.368; p < 0.001) than model 2. Lesions with both APC and AHC showed significantly higher risk of being the culprit for subsequent ACS than those with no APC/AHC (hazard ratio: 11.75; 95% confidence interval: 2.85 to 48.51; p = 0.001) and with either APC or AHC (hazard ratio: 3.22; 95% confidence interval: 1.86 to 5.55; p < 0.001). CONCLUSIONS: Noninvasive hemodynamic assessment enhanced the identification of high-risk plaques that subsequently caused ACS. The integration of noninvasive hemodynamic assessments may improve the identification of culprit lesions for future ACS. (Exploring the Mechanism of Plaque Rupture in Acute Coronary Syndrome Using Coronary CT Angiography and Computational Fluid Dynamic [EMERALD]; NCT02374775).
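
A small sketch of the model-comparison logic above (nested feature sets scored by c-index): three logistic models are fit on stenosis severity, plaque, and hemodynamic features and compared by ROC AUC, which equals the c-statistic for a binary outcome. The data are synthetic and the feature effects are invented; only the nesting of models 1-3 follows the abstract.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 400
    X = np.column_stack([
        rng.normal(50, 15, n),       # percent diameter stenosis
        rng.normal(20, 8, n),        # lesion length (mm)
        rng.integers(0, 2, n),       # adverse plaque characteristics (0/1)
        rng.normal(0.85, 0.10, n),   # FFRCT
        rng.normal(0.06, 0.05, n),   # delta FFRCT across the lesion
    ])
    logit = -4 + 0.03 * X[:, 0] + 1.0 * X[:, 2] - 2.0 * (X[:, 3] - 0.85) + 8.0 * X[:, 4]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # synthetic culprit-lesion label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    models = {"model 1 (%DS, length)": [0, 1],
              "model 2 (+ APC)": [0, 1, 2],
              "model 3 (+ hemodynamics)": [0, 1, 2, 3, 4]}
    for name, cols in models.items():
        clf = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, cols])[:, 1])
        print(f"{name}: c-index = {auc:.3f}")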


Subject(s)
Acute Coronary Syndrome/etiology , Computed Tomography Angiography , Coronary Angiography/methods , Coronary Artery Disease/diagnostic imaging , Coronary Stenosis/diagnostic imaging , Coronary Vessels/diagnostic imaging , Models, Cardiovascular , Patient-Specific Modeling , Plaque, Atherosclerotic , Acute Coronary Syndrome/diagnostic imaging , Acute Coronary Syndrome/physiopathology , Aged , Aged, 80 and over , Coronary Artery Disease/complications , Coronary Artery Disease/physiopathology , Coronary Stenosis/complications , Coronary Stenosis/physiopathology , Coronary Vessels/physiopathology , Female , Fractional Flow Reserve, Myocardial , Hemodynamics , Humans , Hydrodynamics , Male , Middle Aged , Predictive Value of Tests , Prognosis , Retrospective Studies , Risk Factors , Rupture, Spontaneous , Severity of Illness Index , Stress, Mechanical
5.
EuroIntervention ; 14(15): e1609-e1618, 2019 Feb 08.
Article in English | MEDLINE | ID: mdl-29616627

ABSTRACT

AIMS: The aim of this study was to evaluate the accuracy of minimum lumen area (MLA) by coronary computed tomography angiography (cCTA) and its impact on fractional flow reserve (FFRCT). METHODS AND RESULTS: Fifty-seven patients (118 lesions, 72 vessels) who underwent cCTA and optical coherence tomography (OCT) were enrolled. OCT and cCTA were co-registered and MLAs were measured with both modalities. FFROCT was calculated using OCT-updated models with cCTA-based lumen geometry replaced by OCT-derived geometry. Lesions were grouped by Agatston score (AS) and minimum lumen diameter (MLD) using the OCT catheter and guidewire size (1.0 mm) as a threshold. For all lesions, the average absolute difference between cCTA and OCT MLA was 0.621±0.571 mm². Pearson correlation coefficients between cCTA and OCT MLAs in lesions with low-intermediate and high AS were 0.873 and 0.787, respectively (both p<0.0001). Irrespective of AS, excellent correlations were observed for MLA (r=0.839, p<0.0001) and FFR comparisons (r=0.918, p<0.0001) in lesions with MLD ≥1.0 mm but not for lesions with MLD <1.0 mm. CONCLUSIONS: The spatial resolution of cCTA or calcification does not practically limit the accuracy of lumen boundary identification by cCTA or FFRCT calculations for MLD ≥1.0 mm. The accuracy of cCTA MLA could not be adequately assessed for lesions with MLD <1.0 mm.
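
A minimal sketch of the agreement analysis: mean absolute MLA difference and Pearson correlation between the two modalities, stratified by an MLD threshold of 1.0 mm. All measurements below are synthetic, and the circular-lumen assumption used to derive MLD from MLA is an illustration, not the study's method.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    mla_oct = rng.uniform(0.5, 8.0, 60)              # OCT minimum lumen areas (mm^2), synthetic
    mla_ccta = mla_oct + rng.normal(0.0, 0.6, 60)    # cCTA values with measurement noise
    mld_oct = 2.0 * np.sqrt(mla_oct / np.pi)         # MLD assuming a circular lumen

    abs_diff = np.abs(mla_ccta - mla_oct)
    print(f"mean absolute MLA difference: {abs_diff.mean():.3f} +/- {abs_diff.std():.3f} mm^2")

    for label, mask in [("MLD >= 1.0 mm", mld_oct >= 1.0), ("MLD < 1.0 mm", mld_oct < 1.0)]:
        if mask.sum() >= 3:
            r, p = pearsonr(mla_ccta[mask], mla_oct[mask])
            print(f"{label}: r = {r:.3f}, p = {p:.2g}, n = {mask.sum()}")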


Subject(s)
Computed Tomography Angiography , Coronary Stenosis , Fractional Flow Reserve, Myocardial , Coronary Angiography , Coronary Vessels , Humans , Tomography, Optical Coherence
6.
IEEE Trans Biomed Eng ; 66(4): 946-955, 2019 04.
Article in English | MEDLINE | ID: mdl-30113890

ABSTRACT

OBJECTIVE: In this paper, we propose an algorithm for the generation of a patient-specific cardiac vascular network starting from segmented epicardial vessels down to the arterioles. METHOD: We extend a tree generation method based on satisfaction of functional principles, named constrained constructive optimization, to account for multiple, competing vascular trees. The algorithm simulates angiogenesis under vascular volume minimization with flow-related and geometrical constraints adapting the simultaneous tree growths to patient priors. The generated trees fill the entire left ventricle myocardium up to the arterioles. RESULTS: From actual vascular tree models segmented from CT images, we generated networks with 6000 terminal segments for six patients. These networks contain between 33 and 62 synthetic trees. All vascular models match morphometry properties previously described. CONCLUSION AND SIGNIFICANCE: Image-based models derived from CT angiography are being used clinically to simulate blood flow in the coronary arteries of individual patients to aid in the diagnosis of disease and planning treatments. However, image resolution limits vessel segmentation to larger epicardial arteries. The generated model can be used to simulate the blood flow and derived quantities from the aorta into the myocardium. This is an important step for diagnosis and treatment planning of coronary artery disease.
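
A schematic sketch of the greedy geometric core of a constrained-constructive-optimization-style step: a new terminal is attached where the added vascular volume (pi r^2 L) is smallest. The radii, flow-related constraints, and the full re-optimization described in the abstract are omitted, and all numbers are invented.

    import numpy as np

    segments = [  # each segment: (proximal point, distal point, radius in mm); toy tree
        (np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0]), 1.5),
        (np.array([10.0, 0.0, 0.0]), np.array([18.0, 6.0, 0.0]), 1.0),
        (np.array([10.0, 0.0, 0.0]), np.array([18.0, -6.0, 0.0]), 1.0),
    ]

    def added_volume(parent, terminal, new_radius=0.4):
        """Volume of a candidate new segment from the parent's midpoint to the terminal."""
        prox, dist, _ = parent
        bifurcation = 0.5 * (prox + dist)
        length = np.linalg.norm(terminal - bifurcation)
        return np.pi * new_radius**2 * length

    terminal = np.array([14.0, 3.0, 4.0])
    volumes = [added_volume(seg, terminal) for seg in segments]
    best = int(np.argmin(volumes))
    print(f"attach terminal to segment {best}; added volume {volumes[best]:.3f} mm^3")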


Subject(s)
Heart/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Models, Cardiovascular , Patient-Specific Modeling , Algorithms , Coronary Vessels/diagnostic imaging , Hemodynamics/physiology , Humans , Tomography, X-Ray Computed
7.
Eur Radiol ; 28(9): 4018-4026, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29572635

ABSTRACT

OBJECTIVES: Our goal was to evaluate the efficacy of a fully automated method for assessing the image quality (IQ) of coronary computed tomography angiography (CCTA). METHODS: The machine learning method was trained using 75 CCTA studies by mapping features (noise, contrast, misregistration scores, and un-interpretability index) to an IQ score based on manual ground truth data. The automated method was validated on a set of 50 CCTA studies and subsequently tested on a new set of 172 CCTA studies against visual IQ scores on a 5-point Likert scale. RESULTS: The area under the curve in the validation set was 0.96. In the 172 CCTA studies, our method yielded a Cohen's kappa statistic for the agreement between automated and visual IQ assessment of 0.67 (p < 0.01). Among studies graded visually as good to excellent (n = 163), fair (n = 6), or poor (n = 3), an automated IQ score > 50% was assigned to 155, 5, and 2 studies, respectively. CONCLUSION: Fully automated assessment of the IQ of CCTA data sets by machine learning was reproducible and provided similar results compared with visual analysis within the limits of inter-operator variability. KEY POINTS: • The proposed method enables automated and reproducible image quality assessment. • Machine learning and visual assessments yielded comparable estimates of image quality. • Automated assessment potentially allows for more standardised image quality. • Image quality assessment enables standardization of clinical trial results across different datasets.
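
A short sketch of the agreement statistic used above: Cohen's kappa between automated and visual image-quality grades. The grades below are invented placeholders, not the study's readings.

    from sklearn.metrics import cohen_kappa_score

    visual    = ["good", "good", "fair", "poor", "good", "good", "fair", "good"]
    automated = ["good", "good", "good", "poor", "good", "fair", "fair", "good"]
    print(f"Cohen's kappa = {cohen_kappa_score(visual, automated):.2f}")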


Subject(s)
Computed Tomography Angiography/methods , Coronary Artery Disease/diagnostic imaging , Machine Learning , Radiographic Image Enhancement/methods , Aged , Area Under Curve , Female , Humans , Male , Middle Aged
8.
IEEE Trans Image Process ; 25(6): 2508-18, 2016 06.
Article in English | MEDLINE | ID: mdl-27019488

ABSTRACT

Minimization of boundary curvature is a classic regularization technique for image segmentation in the presence of noisy image data. Techniques for minimizing curvature have historically been derived from gradient descent methods which could be trapped by a local minimum and, therefore, required a good initialization. Recently, combinatorial optimization techniques have overcome this barrier by providing solutions that can achieve a global optimum. However, curvature regularization methods can fail when the true object has high curvature. In these circumstances, existing methods depend on a data term to overcome the high curvature of the object. Unfortunately, the data term may be ambiguous in some images, which causes these methods also to fail. To overcome these problems, we propose a contrast driven elastica model (including curvature), which can accommodate high curvature objects and an ambiguous data model. We demonstrate that we can accurately segment extremely challenging synthetic and real images with ambiguous data discrimination, poor boundary contrast, and sharp corners. We provide a quantitative evaluation of our segmentation approach when applied to a standard image segmentation data set.
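
A minimal sketch of the kind of regularizer discussed above: the discrete elastica energy of a closed contour, the sum of (a + b * curvature^2) * ds over the boundary. This is only the energy term on a toy contour, not the paper's contrast-driven model or its optimizer.

    import numpy as np

    def elastica_energy(points, a=1.0, b=1.0):
        """points: (N, 2) ordered vertices of a closed polygon."""
        pts = np.asarray(points, dtype=float)
        prev_pts = np.roll(pts, 1, axis=0)
        next_pts = np.roll(pts, -1, axis=0)
        d1 = (next_pts - prev_pts) / 2.0               # central first derivative
        d2 = next_pts - 2.0 * pts + prev_pts           # central second derivative
        speed = np.linalg.norm(d1, axis=1)
        cross = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]
        curvature = cross / np.maximum(speed**3, 1e-12)
        ds = np.linalg.norm(next_pts - pts, axis=1)    # local arc-length element
        return float(np.sum((a + b * curvature**2) * ds))

    theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    circle = np.column_stack([np.cos(theta), np.sin(theta)])
    print(elastica_energy(circle, a=0.0, b=1.0))       # about 2*pi for the unit circle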

9.
IEEE Trans Med Imaging ; 34(12): 2562-71, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26087484

ABSTRACT

Patient-specific blood flow modeling combining imaging data and computational fluid dynamics can aid in the assessment of coronary artery disease. Accurate coronary segmentation and realistic physiologic modeling of boundary conditions are important steps to ensure a high diagnostic performance. Segmentation of the coronary arteries can be constructed by a combination of automated algorithms with human review and editing. However, blood pressure and flow are not impacted equally by different local sections of the coronary artery tree. Focusing human review and editing towards regions that will most affect the subsequent simulations can significantly accelerate the review process. We define geometric sensitivity as the standard deviation in hemodynamics-derived metrics due to uncertainty in lumen segmentation. We develop a machine learning framework for estimating the geometric sensitivity in real time. Features used include geometric and clinical variables, and reduced-order models. We develop an anisotropic kernel regression method for assessment of lumen narrowing score, which is used as a feature in the machine learning algorithm. A multi-resolution sensitivity algorithm is introduced to hierarchically refine regions of high sensitivity so that we can quantify sensitivities to a desired spatial resolution. We show that the mean absolute error of the machine learning algorithm compared to 3D simulations is less than 0.01. We further demonstrate that sensitivity is not predicted simply by anatomic reduction but also encodes information about hemodynamics which in turn depends on downstream boundary conditions. This sensitivity approach can be extended to other systems such as cerebral flow, electro-mechanical simulations, etc.
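
A minimal sketch of a generic anisotropic (per-feature bandwidth) Nadaraya-Watson kernel regressor, the family of estimator the abstract mentions for the lumen-narrowing score; the paper's exact formulation, features, and bandwidth selection are not reproduced here, and all data are synthetic.

    import numpy as np

    def anisotropic_kernel_regression(X_train, y_train, X_query, bandwidths):
        """Gaussian kernel regression with one bandwidth per feature dimension."""
        X_train, X_query = np.asarray(X_train), np.asarray(X_query)
        h = np.asarray(bandwidths)
        preds = []
        for x in X_query:
            d2 = np.sum(((X_train - x) / h) ** 2, axis=1)   # anisotropically scaled distances
            w = np.exp(-0.5 * d2)
            preds.append(np.sum(w * y_train) / np.maximum(np.sum(w), 1e-12))
        return np.array(preds)

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 1, size=(200, 2))        # e.g. (position along vessel, angular position)
    y = np.sin(4 * X[:, 0]) + 0.1 * rng.normal(size=200)
    print(anisotropic_kernel_regression(X, y, [[0.25, 0.5]], bandwidths=[0.05, 0.5]))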


Subject(s)
Coronary Angiography/methods , Hemodynamics/physiology , Imaging, Three-Dimensional/methods , Machine Learning , Algorithms , Coronary Artery Disease/diagnostic imaging , Coronary Vessels/diagnostic imaging , Humans , Tomography, X-Ray Computed
10.
Article in English | MEDLINE | ID: mdl-25485356

ABSTRACT

Patient-specific modeling of blood flow combining CT image data and computational fluid dynamics has significant potential for assessing the functional significance of coronary artery disease. An accurate segmentation of the coronary arteries, an essential ingredient for blood flow modeling methods, is currently attained by a combination of automated algorithms with human review and editing. However, not all portions of the coronary artery tree affect blood flow and pressure equally, and it is of significant importance to direct human review and editing towards regions that will most affect the subsequent simulations. We present a data-driven approach for real-time estimation of sensitivity of blood-flow simulations to uncertainty in lumen segmentation. A machine learning method is used to map patient-specific features to a sensitivity value, using a large database of patients with precomputed sensitivities. We validate the results of the machine learning algorithm using direct 3D blood flow simulations and demonstrate that the algorithm can predict sensitivities in real time with only a small reduction in accuracy as compared to the 3D solutions. This approach can also be applied to other medical applications where physiologic simulations are performed using patient-specific models created from image data.


Subject(s)
Coronary Angiography/methods , Coronary Artery Disease/diagnostic imaging , Coronary Artery Disease/physiopathology , Coronary Circulation , Fractional Flow Reserve, Myocardial , Models, Cardiovascular , Radiographic Image Interpretation, Computer-Assisted/methods , Blood Flow Velocity , Computer Simulation , Computer Systems , Humans , Reproducibility of Results , Sensitivity and Specificity , Tomography, X-Ray Computed/methods
11.
IEEE Trans Pattern Anal Mach Intell ; 35(9): 2143-60, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23868776

ABSTRACT

Existing methods for surface matching are limited by the tradeoff between precision and computational efficiency. Here, we present an improved algorithm for dense vertex-to-vertex correspondence that uses direct matching of features defined on a surface and improves it by using spectral correspondence as a regularization. This algorithm has the speed of both feature matching and spectral matching while exhibiting greatly improved precision (distance errors of 1.4 percent). The method, FOCUSR, implicitly incorporates such additional features to calculate the correspondence and relies on the smoothness of the lowest-frequency harmonics of a graph Laplacian to spatially regularize the features. In its simplest form, FOCUSR is an improved spectral correspondence method that nonrigidly deforms spectral embeddings. We provide here a full realization of spectral correspondence where virtually any feature can be used as additional information using weights on graph edges, but also on graph nodes and as extra embedded coordinates. As an example, the full power of FOCUSR is demonstrated in a real-case scenario with the challenging task of brain surface matching across several individuals. Our results show that combining features and regularizing them in a spectral embedding greatly improves the matching precision (to a submillimeter level) while performing at much greater speed than existing methods.
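
A toy sketch of the underlying spectral-correspondence step: embed each point set with the low-frequency eigenvectors of a graph Laplacian and match nearest neighbours in that embedding. Feature weighting and the non-rigid alignment of embeddings that FOCUSR performs are not shown, and the sign alignment below uses the known pairing as a toy-only shortcut.

    import numpy as np
    from scipy.sparse.csgraph import laplacian
    from scipy.spatial import cKDTree

    def spectral_embedding(points, k_neighbors=8, n_modes=5):
        """Low-frequency Laplacian eigenvectors of a k-NN graph over the points."""
        tree = cKDTree(points)
        dists, idx = tree.query(points, k=k_neighbors + 1)
        n = len(points)
        W = np.zeros((n, n))
        for i in range(n):
            for d, j in zip(dists[i, 1:], idx[i, 1:]):
                W[i, j] = W[j, i] = max(W[i, j], np.exp(-d**2))
        L = laplacian(W, normed=True)
        vals, vecs = np.linalg.eigh(L)
        return vecs[:, 1:n_modes + 1]              # drop the constant first mode

    rng = np.random.default_rng(3)
    surface_a = rng.normal(size=(150, 3))
    surface_b = surface_a + 0.01 * rng.normal(size=(150, 3))     # slightly deformed copy
    emb_a, emb_b = spectral_embedding(surface_a), spectral_embedding(surface_b)
    emb_b *= np.sign(np.sum(emb_a * emb_b, axis=0))              # toy-only eigenvector sign fix
    matches = cKDTree(emb_b).query(emb_a, k=1)[1]                # vertex-to-vertex correspondence
    print(f"fraction of exact matches on this toy pair: {(matches == np.arange(150)).mean():.2f}")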


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Models, Theoretical , Animals , Brain/anatomy & histology , Databases, Factual , Horses , Humans , Software , Surface Properties
12.
IEEE Trans Med Imaging ; 32(7): 1325-35, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23584259

ABSTRACT

The amount of calibration data needed to produce images of adequate quality can prevent auto-calibrating parallel imaging reconstruction methods like generalized autocalibrating partially parallel acquisitions (GRAPPA) from achieving a high total acceleration factor. To improve the quality of calibration when the number of auto-calibration signal (ACS) lines is restricted, we propose a sparsity-promoting regularized calibration method that finds a GRAPPA kernel consistent with the ACS fit equations that yields jointly sparse reconstructed coil channel images. Several experiments evaluate the performance of the proposed method relative to unregularized and existing regularized calibration methods for both low-quality and underdetermined fits from the ACS lines. These experiments demonstrate that the proposed method, like other regularization methods, is capable of mitigating noise amplification, and in addition, the proposed method is particularly effective at minimizing coherent aliasing artifacts caused by poor kernel calibration in real data. Using the proposed method, we can increase the total achievable acceleration while reducing degradation of the reconstructed image better than existing regularized calibration methods.
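
A simplified sketch of the calibration step: once source/target matrices are assembled from the ACS lines, finding the GRAPPA kernel is a linear least-squares fit, shown here unregularized and with a plain Tikhonov penalty. The paper's method instead promotes joint sparsity of the reconstructed coil images, which this sketch does not implement, and the matrices below are random stand-ins rather than real k-space data.

    import numpy as np

    rng = np.random.default_rng(4)
    n_fits, n_kernel = 40, 48       # few ACS fit equations relative to kernel weights
    w_true = rng.normal(size=n_kernel) + 1j * rng.normal(size=n_kernel)
    A = rng.normal(size=(n_fits, n_kernel)) + 1j * rng.normal(size=(n_fits, n_kernel))
    b = A @ w_true + 0.1 * (rng.normal(size=n_fits) + 1j * rng.normal(size=n_fits))

    w_plain = np.linalg.lstsq(A, b, rcond=None)[0]              # unregularized calibration
    lam = 1.0                                                    # Tikhonov weight (illustrative)
    w_ridge = np.linalg.solve(A.conj().T @ A + lam * np.eye(n_kernel), A.conj().T @ b)

    for name, w in [("unregularized", w_plain), ("Tikhonov-regularized", w_ridge)]:
        err = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
        print(f"{name} kernel error: {err:.3f}")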


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Brain/anatomy & histology , Calibration , Computer Simulation , Humans , Neuroimaging , Phantoms, Imaging
13.
Med Image Comput Comput Assist Interv ; 16(Pt 1): 122-30, 2013.
Article in English | MEDLINE | ID: mdl-24505657

ABSTRACT

Ultrasound acquisition is a challenging task that requires simultaneous adjustment of several acquisition parameters (the depth, the focus, the frequency and its operation mode). If the acquisition parameters are not properly chosen, the resulting image will have a poor quality and will degrade the patient diagnosis and treatment workflow. Several hardware-based systems for autotuning the acquisition parameters have been previously proposed, but these solutions were largely abandoned because they failed to properly account for tissue inhomogeneity and other patient-specific characteristics. Consequently, in routine practice the clinician either uses population-based parameter presets or manually adjusts the acquisition parameters for each patient during the scan. In this paper, we revisit the problem of autotuning the acquisition parameters by taking a completely novel approach and producing a solution based on image analytics. Our solution is inspired by the autofocus capability of conventional digital cameras, but is significantly more challenging because the number of acquisition parameters is large and the determination of "good quality" images is more difficult to assess. Surprisingly, we show that the set of acquisition parameters which produce images that are favored by clinicians comprise a 1D manifold, allowing for a real-time optimization to maximize image quality. We demonstrate our method for acquisition parameter autotuning on several live patients, showing that our system can start with a poor initial set of parameters and automatically optimize the parameters to produce high quality images.
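
A minimal sketch of the optimization the abstract describes once the preferred settings are known to lie on a one-dimensional manifold: parameterize the acquisition settings by a single coordinate and run a bounded 1D search that maximizes an image-quality score. Both the parameter mapping and the quality function below are invented stand-ins.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def params_on_manifold(t):
        """Map a manifold coordinate t in [0, 1] to (depth, frequency, focus); hypothetical."""
        depth_cm = 4.0 + 12.0 * t
        frequency_mhz = 8.0 - 5.0 * t          # deeper imaging tends to need lower frequency
        focus_cm = 0.6 * depth_cm
        return depth_cm, frequency_mhz, focus_cm

    def image_quality(t):
        """Stand-in quality score of an image acquired with params_on_manifold(t)."""
        depth, freq, _ = params_on_manifold(t)
        return np.exp(-((depth - 9.0) / 4.0) ** 2) * np.exp(-((freq - 5.0) / 2.0) ** 2)

    result = minimize_scalar(lambda t: -image_quality(t), bounds=(0.0, 1.0), method="bounded")
    print(params_on_manifold(result.x), image_quality(result.x))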


Subject(s)
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Ultrasonography/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
14.
Front Syst Neurosci ; 6: 78, 2012.
Article in English | MEDLINE | ID: mdl-23267318

ABSTRACT

Brain imaging methods have long held promise as diagnostic aids for neuropsychiatric conditions with complex behavioral phenotypes such as Attention-Deficit/Hyperactivity Disorder. This promise has largely been unrealized, at least partly due to the heterogeneity of clinical populations and the small sample size of many studies. A large, multi-center dataset provided by the ADHD-200 Consortium affords new opportunities to test methods for individual diagnosis based on MRI-observable structural brain attributes and functional interactions observable from resting-state fMRI. In this study, we systematically calculated a large set of standard and new quantitative markers from individual subject datasets. These features (>12,000 per subject) consisted of local anatomical attributes such as cortical thickness and structure volumes, and both local and global resting-state network measures. Three methods were used to compute graphs representing interdependencies between activations in different brain areas, and a full set of network features was derived from each. Of these, features derived from the inverse of the time series covariance matrix, under an L1-norm regularization penalty, proved most powerful. Anatomical and network feature sets were used individually, and combined with non-imaging phenotypic features from each subject. Machine learning algorithms were used to rank attributes, and performance was assessed under cross-validation and on a separate test set of 168 subjects for a variety of feature set combinations. While non-imaging features gave highest performance in cross-validation, the addition of imaging features in sufficient numbers led to improved generalization to new data. Stratification by gender also proved to be a fruitful strategy to improve classifier performance. We describe the overall approach used, compare the predictive power of different classes of features, and describe the most impactful features in relation to the current literature.
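
A brief sketch of the network-feature construction singled out above: an L1-penalized inverse covariance (graphical lasso) fit to region time series, from which simple graph features such as node degree can be read off. The signals are synthetic and the edge threshold is arbitrary.

    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(5)
    n_timepoints, n_regions = 180, 12
    latent = rng.normal(size=(n_timepoints, 3))
    mixing = rng.normal(size=(3, n_regions))
    timeseries = latent @ mixing + 0.5 * rng.normal(size=(n_timepoints, n_regions))

    model = GraphicalLassoCV().fit(timeseries)
    precision = model.precision_
    partial_corr = -precision / np.sqrt(np.outer(np.diag(precision), np.diag(precision)))
    np.fill_diagonal(partial_corr, 0.0)

    degree = np.sum(np.abs(partial_corr) > 0.05, axis=1)    # node degree as one network feature
    print("edges retained:", int(degree.sum() // 2), "mean node degree:", float(degree.mean()))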

15.
Magn Reson Med ; 68(4): 1176-89, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22213069

ABSTRACT

To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with generalized autocalibrating partially parallel acquisitions (GRAPPA) alone, the DEnoising of Sparse Images from GRAPPA using the Nullspace method is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DEnoising of Sparse Images from GRAPPA using the Nullspace method are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), peak-signal-to-noise ratio, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DEnoising of Sparse Images from GRAPPA using the Nullspace method mitigates noise amplification better than both GRAPPA and L1 iterative self-consistent parallel imaging reconstruction (the latter limited here by uniform undersampling).


Subject(s)
Algorithms , Artifacts , Brain/anatomy & histology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Humans , Reproducibility of Results , Sensitivity and Specificity , Signal-To-Noise Ratio
16.
Med Image Comput Comput Assist Interv ; 15(Pt 1): 528-36, 2012.
Article in English | MEDLINE | ID: mdl-23285592

ABSTRACT

The automatic delineation of the boundaries of organs and other anatomical structures is a key component of many medical image processing systems. In this paper we present a generic learning approach based on a novel space of segmentation features, which can be trained to predict the overlap error and Dice coefficient of an arbitrary organ segmentation without knowing the ground truth delineation. We show the regressor to be a much stronger predictor of these error metrics than the responses of probabilistic boosting classifiers trained on the segmentation boundary. The presented approach not only allows us to build reliable confidence measures and fidelity checks, but also to rank several segmentation hypotheses against each other during online usage of the segmentation algorithm in clinical practice.
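
A minimal sketch of the idea: compute the Dice coefficient against ground truth during training, then fit a regressor that predicts Dice from features of the segmentation alone so that quality can be estimated online without the ground truth. The features, their relationship to Dice, and the regressor choice below are all invented.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

    rng = np.random.default_rng(6)
    print("Dice of two random masks:", round(dice(rng.random((32, 32)) > 0.5,
                                                  rng.random((32, 32)) > 0.5), 3))

    # Rows: segmentation-derived features (e.g. volume, surface area, boundary confidence, ...).
    features = rng.normal(size=(300, 5))
    true_dice = np.clip(0.8 + 0.1 * features[:, 2] - 0.05 * features[:, 0]
                        + 0.03 * rng.normal(size=300), 0.0, 1.0)

    X_tr, X_te, y_tr, y_te = train_test_split(features, true_dice, random_state=0)
    reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("mean absolute Dice prediction error:", float(np.abs(reg.predict(X_te) - y_te).mean()))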


Subject(s)
Lung/pathology , Algorithms , Computer Simulation , False Positive Reactions , Humans , Image Processing, Computer-Assisted , Models, Statistical , Pattern Recognition, Automated/methods , Probability , Reproducibility of Results
17.
Article in English | MEDLINE | ID: mdl-25278742

ABSTRACT

We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages.
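
A minimal sketch of the single-image Random Walker step this formulation builds on, using the scikit-image implementation on a noisy synthetic image; the cosegmentation coupling between two or more images, which is the paper's contribution, is not shown.

    import numpy as np
    from skimage.segmentation import random_walker

    rng = np.random.default_rng(7)
    image = np.zeros((64, 64))
    image[16:48, 16:48] = 1.0
    image += 0.35 * rng.normal(size=image.shape)     # noisy square on a dark background

    labels = np.zeros_like(image, dtype=int)          # 0 = unlabeled pixel
    labels[32, 32] = 1                                # seed inside the object
    labels[4, 4] = 2                                  # seed in the background

    segmentation = random_walker(image, labels, beta=80)
    print("object pixels found:", int((segmentation == 1).sum()))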

18.
Article in English | MEDLINE | ID: mdl-22003738

ABSTRACT

We present the first system for measurement of proximal isovelocity surface area (PISA) on a 3D ultrasound acquisition using modified ultrasound hardware, volumetric image segmentation and a simple efficient workflow. Accurate measurement of the PISA in 3D flow through a valve is an emerging method for quantitatively assessing cardiac valve regurgitation and function. Current state of the art protocols for assessing regurgitant flow require laborious and time consuming user interaction with the data, where a precise execution is crucial for an accurate diagnosis. We propose a new improved 3D PISA workflow that is initialized interactively with two points, followed by fully automatic segmentation of the valve annulus and isovelocity surface area computation. Our system is first validated against several in vitro phantoms to verify the calculations of surface area, orifice area and regurgitant flow. Finally, we use our system to compare orifice area calculations obtained from in vivo patient imaging measurements to an independent measurement and then use our system to successfully classify patients into mild-moderate regurgitation and moderate-severe regurgitation categories.
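
For orientation, the flow-convergence arithmetic behind PISA: regurgitant flow rate is the isovelocity surface area times the aliasing velocity, and dividing by the peak regurgitant velocity gives the effective regurgitant orifice area. The hemispherical formula is the conventional 2D simplification; the numbers below are hypothetical.

    import math

    aliasing_velocity_cm_s = 40.0            # Nyquist (aliasing) velocity of the colour map
    peak_regurgitant_velocity_cm_s = 500.0   # from continuous-wave Doppler

    # Conventional 2D PISA: hemispherical isovelocity shell of radius r.
    r_cm = 0.9
    flow_2d_ml_s = 2.0 * math.pi * r_cm**2 * aliasing_velocity_cm_s

    # 3D PISA: use the segmented isovelocity surface area directly.
    measured_surface_area_cm2 = 4.6
    flow_3d_ml_s = measured_surface_area_cm2 * aliasing_velocity_cm_s

    eroa_cm2 = flow_3d_ml_s / peak_regurgitant_velocity_cm_s   # effective regurgitant orifice area
    print(f"2D flow {flow_2d_ml_s:.0f} mL/s, 3D flow {flow_3d_ml_s:.0f} mL/s, EROA {eroa_cm2:.2f} cm^2")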


Subject(s)
Echocardiography/methods , Mitral Valve Insufficiency/pathology , Ultrasonography, Doppler/methods , Algorithms , Automation , Blood Flow Velocity , Cardiology/methods , Coronary Circulation , Humans , Imaging, Three-Dimensional , Mitral Valve/pathology , Models, Statistical , Pattern Recognition, Automated , Phantoms, Imaging , Software
19.
Inf Process Med Imaging ; 22: 660-73, 2011.
Article in English | MEDLINE | ID: mdl-21761694

ABSTRACT

Brain matching is an important problem in neuroimaging studies. Current surface-based methods for cortex matching and atlasing, although quite accurate, can require long computation times. Here we propose an approach based on spectral correspondence, where spectra of graphs derived from the surface model meshes are matched. Cerebral cortex matching problems can thus benefit from the tremendous speed advantage of spectral methods, which are able to calculate a cortex matching in seconds rather than hours. Moreover, spectral methods are extended in order to use additional information that can improve matching. Additional information, such as sulcal depth, surface curvature, and cortical thickness can be represented in a flexible way into graph node weights (rather than only into graph edge weights) and as extra embedded coordinates. In control experiments, cortex matching becomes almost perfect when using additional information. With real data from 12 subjects, the results of 288 correspondence maps are 88% equivalent to (and strongly correlated with) the correspondences computed with FreeSurfer, a leading computational tool used for cerebral cortex matching. Our fast and flexible spectral correspondence method could open new possibilities for brain studies that involve different types of information and that were previously limited by the computational burden.


Subject(s)
Algorithms , Brain/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
20.
J Pathol Inform ; 2: S8, 2011.
Article in English | MEDLINE | ID: mdl-22811964

ABSTRACT

Several applications such as multiprojector displays and microscopy require the mosaicing of images (tiles) acquired by a camera as it traverses an unknown trajectory in 3D space. A homography relates the image coordinates of a point in each tile to those of a reference tile provided the 3D scene is planar. Our approach in such applications is to first perform pairwise alignment of the tiles that have imaged common regions in order to recover a homography relating the tile pair. We then find the global set of homographies relating each individual tile to a reference tile such that the homographies relating all tile pairs are kept as consistent as possible. Using these global homographies, one can generate a mosaic of the entire scene. We derive a general analytical solution for the global homographies by representing the pair-wise homographies on a connectivity graph. Our solution can accommodate imprecise prior information regarding the global homographies whenever such information is available. We also derive equations for the special case of translation estimation of an X-Y microscopy stage used in histology imaging and present examples of stitched microscopy slices of specimens obtained after radical prostatectomy or prostate biopsy. In addition, we demonstrate the superiority of our approach over tree-structured approaches for global error minimization.
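
A basic sketch of the consistency idea: with pairwise homographies H_ij (mapping tile j coordinates into tile i), globally consistent tile-to-reference homographies G_t can be found by linear least squares with the reference fixed to the identity. The paper derives an analytical solution and supports priors and the stage-translation special case; this sketch only shows the plain consistency fit on invented translations.

    import numpy as np

    def global_homographies(n_tiles, pairwise, ref=0):
        """pairwise: {(i, j): H_ij} with x_i ~ H_ij @ x_j in homogeneous coordinates.
        Minimizes sum over pairs of ||G_i @ H_ij - G_j||_F with G_ref fixed to the identity."""
        unknowns = [t for t in range(n_tiles) if t != ref]
        col = {t: 9 * k for k, t in enumerate(unknowns)}        # column offset per unknown tile
        rows, rhs = [], []
        for (i, j), H in pairwise.items():
            for r in range(3):
                for c in range(3):
                    a, b = np.zeros(9 * len(unknowns)), 0.0
                    if i == ref:
                        b -= H[r, c]                            # (I @ H)[r, c] moved to the RHS
                    else:
                        a[col[i] + 3 * r: col[i] + 3 * r + 3] += H[:, c]
                    if j == ref:
                        b += 1.0 if r == c else 0.0
                    else:
                        a[col[j] + 3 * r + c] -= 1.0
                    rows.append(a)
                    rhs.append(b)
        x = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
        G = {ref: np.eye(3)}
        for t in unknowns:
            G[t] = x[col[t]: col[t] + 9].reshape(3, 3)
        return G

    def translation(tx, ty):
        H = np.eye(3)
        H[0, 2], H[1, 2] = tx, ty
        return H

    # Toy check with three tiles related by pure translations (e.g. an X-Y stage).
    pairwise = {(0, 1): translation(100, 0), (1, 2): translation(0, 80), (0, 2): translation(100, 80)}
    G = global_homographies(3, pairwise, ref=0)
    print(np.round(G[2], 3))    # tile 2 maps into the reference by (tx, ty) = (100, 80)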
