Results 1 - 20 of 27
1.
IEEE Trans Med Imaging ; 43(5): 1995-2009, 2024 May.
Article in English | MEDLINE | ID: mdl-38224508

ABSTRACT

Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated. However, medical image datasets are often low in sample size and only partially labeled, i.e., only a subset of organs are annotated. Therefore, it is crucial to investigate how to learn a unified model on the available partially labeled datasets to leverage their synergistic potential. In this paper, we systematically investigate the partial-label segmentation problem with theoretical and empirical analyses on the prior techniques. We revisit the problem from a perspective of partial label supervision signals and identify two signals derived from ground truth and one from pseudo labels. We propose a novel two-stage framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training. Concretely, we first train an initial unified model using two ground truth-based signals and then iteratively incorporate the pseudo label signal to the initial model using self-training. To mitigate performance degradation caused by unreliable pseudo labels, we assess the reliability of pseudo labels via outlier detection in latent space and exclude the most unreliable pseudo labels from each self-training iteration. Extensive experiments are conducted on one public and three private partial-label segmentation tasks over 12 CT datasets. Experimental results show that our proposed COSST achieves significant improvement over the baseline method, i.e., individual networks trained on each partially labeled dataset. Compared to the state-of-the-art partial-label segmentation methods, COSST demonstrates consistent superior performance on various segmentation tasks and with different training data sizes.
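The pseudo-label filtering step described above can be stated compactly: score each pseudo-labelled case by its distance from the cohort centroid in latent space, then drop the most distant fraction before the next self-training round. The sketch below is an illustrative reconstruction, not the authors' code; the function name, the tiny 2-D "latent" features, and the drop fraction are all invented for the example.

```python
# Sketch of outlier-based pseudo-label filtering (illustrative, not COSST's
# exact implementation): cases far from the latent centroid are treated as
# unreliable and excluded from the next self-training iteration.
from statistics import mean

def filter_pseudo_labels(latent_feats, drop_frac=0.2):
    """Return indices of cases kept for the next self-training iteration."""
    centroid = [mean(dim) for dim in zip(*latent_feats)]

    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(v, centroid)) ** 0.5

    ranked = sorted(range(len(latent_feats)), key=lambda i: dist(latent_feats[i]))
    n_keep = max(1, int(len(latent_feats) * (1 - drop_frac)))
    return sorted(ranked[:n_keep])

# The case far from the centroid (index 4) is excluded as an outlier.
feats = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.1], [0.1, 0.0], [3.0, 3.0]]
kept = filter_pseudo_labels(feats, drop_frac=0.2)
```

In a real pipeline the features would be high-dimensional network embeddings and the outlier criterion more sophisticated, but the keep/exclude mechanics are the same.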


Subjects
Databases, Factual , Deep Learning , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Tomography, X-Ray Computed/methods , Supervised Machine Learning
2.
Sci Rep ; 13(1): 21097, 2023 Nov 30.
Article in English | MEDLINE | ID: mdl-38036602

ABSTRACT

The evaluation of deep-learning (DL) systems typically relies on the area under the receiver operating characteristic curve (AU-ROC) as a performance metric. However, AU-ROC, in its holistic form, does not sufficiently consider performance within specific ranges of sensitivity and specificity, which are critical for the intended operational context of the system. Consequently, two systems with identical AU-ROC values can exhibit significantly divergent real-world performance. This issue is particularly pronounced in the context of anomaly detection tasks, a commonly employed application of DL systems across various research domains, including medical imaging, industrial automation, manufacturing, cyber security, fraud detection, and drug research, among others. The challenge arises from the heavy class imbalance in training datasets, with the abnormality class often incurring a considerably higher misclassification cost compared to the normal class. Traditional DL systems address this by adjusting the weighting of the cost function or optimizing for specific points along the ROC curve. While these approaches yield reasonable results in many cases, they do not actively seek to maximize performance for the desired operating point. In this study, we introduce a novel technique known as AUCReshaping, designed to reshape the ROC curve exclusively within the specified sensitivity and specificity range, by optimizing sensitivity at a predetermined specificity level. This reshaping is achieved through an adaptive and iterative boosting mechanism that allows the network to focus on pertinent samples during the learning process. We primarily investigated the impact of AUCReshaping in the context of abnormality detection tasks, specifically in chest X-ray (CXR) analysis, followed by breast mammogram and credit card fraud detection tasks. The results reveal a substantial improvement, ranging from 2% to 40%, in sensitivity at high-specificity levels for binary classification tasks.
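The operating-point idea behind this abstract can be illustrated in a few lines: fix a target specificity, find the score threshold that meets it on the negatives, and read off the sensitivity the model achieves there. An AUCReshaping-style scheme would then up-weight the positives that fall below this threshold in subsequent training rounds. This is our sketch with invented scores, not the authors' implementation.

```python
# Illustrative: sensitivity at a fixed specificity, plus the "hard
# positives" a boosting scheme would re-weight (names are ours).
def sensitivity_at_specificity(pos_scores, neg_scores, target_spec=0.9):
    # Threshold chosen so that >= target_spec of negatives score at or below it.
    neg_sorted = sorted(neg_scores)
    k = int(len(neg_sorted) * target_spec)
    thresh = neg_sorted[min(k, len(neg_sorted) - 1)]
    sens = sum(s > thresh for s in pos_scores) / len(pos_scores)
    hard_positives = [i for i, s in enumerate(pos_scores) if s <= thresh]
    return thresh, sens, hard_positives

pos = [0.9, 0.8, 0.4, 0.95]
neg = [0.1, 0.2, 0.3, 0.7, 0.5, 0.15, 0.25, 0.35, 0.05, 0.45]
thresh, sens, hard = sensitivity_at_specificity(pos, neg, 0.9)
```

Here the positive case scoring 0.4 (index 2) is the one a reshaping mechanism would focus on, since it is missed at the required specificity.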


Subjects
Algorithms , Mammography , Sensitivity and Specificity , ROC Curve , Radiography
3.
J Med Imaging (Bellingham) ; 9(6): 064503, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36466078

ABSTRACT

Purpose: Building accurate and robust artificial intelligence systems for medical image assessment requires the creation of large sets of annotated training examples. However, constructing such datasets is very costly due to the complex nature of annotation tasks, which often require expert knowledge (e.g., a radiologist). To counter this limitation, we propose a method to learn from medical images at scale in a self-supervised way. Approach: Our approach, based on contrastive learning and online feature clustering, leverages training datasets of over 100,000,000 medical images of various modalities, including radiography, computed tomography (CT), magnetic resonance (MR) imaging, and ultrasonography (US). We propose to use the learned features to guide model training in supervised and hybrid self-supervised/supervised regimes on various downstream tasks. Results: We highlight a number of advantages of this strategy on challenging image assessment problems in radiography, CT, and MR: (1) significant increase in accuracy compared to the state-of-the-art (e.g., area under the curve boost of 3% to 7% for detection of abnormalities from chest radiography scans and hemorrhage detection on brain CT); (2) acceleration of model convergence during training by up to 85% compared with using no pretraining (e.g., 83% when training a model for detection of brain metastases in MR scans); and (3) increase in robustness to various image augmentations, such as intensity variations, rotations, or scaling, reflective of data variation seen in the field. Conclusions: The proposed approach enables large gains in accuracy and robustness on challenging image assessment problems. The improvement is significant compared with other state-of-the-art approaches trained on medical or vision images (e.g., ImageNet).
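Contrastive pretraining of the kind described above pulls two augmented views of the same image together in feature space while pushing other images away. Below is a minimal NT-Xent-style loss for one positive pair against a batch of negatives, as a pure-Python toy; the real pipeline also uses online feature clustering and GPU tensors, and all names and vectors here are invented for illustration.

```python
# Toy NT-Xent-style contrastive loss: -log of the softmax weight assigned
# to the positive pair among {positive} U negatives (sketch only).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def nt_xent(anchor, positive, negatives, tau=0.5):
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [math.exp(s / tau) for s in sims]
    return -math.log(logits[0] / sum(logits))

a = [1.0, 0.0]        # embedding of one augmented view
p = [0.9, 0.1]        # embedding of the other view of the same image
negs = [[0.0, 1.0], [-1.0, 0.2]]  # embeddings of other images
loss = nt_xent(a, p, negs)
```

Pairing the anchor with a true negative instead of its own second view yields a much larger loss, which is exactly the gradient signal that shapes the feature space.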

5.
Radiat Oncol ; 17(1): 129, 2022 Jul 22.
Article in English | MEDLINE | ID: mdl-35869525

ABSTRACT

BACKGROUND: We describe and evaluate a deep network algorithm which automatically contours organs at risk in the thorax and pelvis on computed tomography (CT) images for radiation treatment planning. METHODS: The algorithm identifies the region of interest (ROI) automatically by detecting anatomical landmarks around the specific organs using a deep reinforcement learning technique. The segmentation is restricted to this ROI and performed by a deep image-to-image network (DI2IN) based on a convolutional encoder-decoder architecture combined with multi-level feature concatenation. The algorithm is commercially available in the medical products "syngo.via RT Image Suite VB50" and "AI-Rad Companion Organs RT VA20" (Siemens Healthineers). For evaluation, thoracic CT images of 237 patients and pelvic CT images of 102 patients were manually contoured following the Radiation Therapy Oncology Group (RTOG) guidelines and compared to the DI2IN results using metrics for volume, overlap and distance, e.g., Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD95). The contours were also compared visually slice by slice. RESULTS: We observed high correlations between automatic and manual contours. The best results were obtained for the lungs (DSC 0.97, HD95 2.7 mm/2.9 mm for left/right lung), followed by heart (DSC 0.92, HD95 4.4 mm), bladder (DSC 0.88, HD95 6.7 mm) and rectum (DSC 0.79, HD95 10.8 mm). Visual inspection showed excellent agreements with some exceptions for heart and rectum. CONCLUSIONS: The DI2IN algorithm automatically generated contours for organs at risk close to those by a human expert, making the contouring step in radiation treatment planning simpler and faster. Few cases still required manual corrections, mainly for heart and rectum.
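The two headline metrics in this evaluation have compact definitions. Below is a minimal, illustrative implementation for binary masks given as sets of voxel coordinates; at toy scale the brute-force distance computation is fine, whereas real HD95 implementations use distance transforms. The function names are ours, not the product's.

```python
# Dice Similarity Coefficient and 95th-percentile Hausdorff distance for
# binary masks represented as sets of voxel coordinates (toy scale only).
def dice(a, b):
    """Dice Similarity Coefficient between two voxel sets."""
    return 2 * len(a & b) / (len(a) + len(b))

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between voxel sets."""
    def dists(src, dst):
        return sorted(
            min(sum((p - q) ** 2 for p, q in zip(u, v)) ** 0.5 for v in dst)
            for u in src
        )
    d = dists(a, b) + dists(b, a)
    d.sort()
    return d[int(0.95 * (len(d) - 1))]

# Automatic vs. manual contour of a tiny 2-D "organ".
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 0), (0, 1), (1, 0), (2, 0)}
```

DSC rewards overlap as a volume fraction, while HD95 reports the (nearly) worst surface disagreement in millimetres once voxel coordinates are scaled by spacing, which is why the paper quotes both.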


Subjects
Deep Learning , Tomography, X-Ray Computed , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Organs at Risk , Radiotherapy Planning, Computer-Assisted/methods , Thorax , Tomography, X-Ray Computed/methods
6.
J Med Imaging (Bellingham) ; 9(3): 034003, 2022 May.
Article in English | MEDLINE | ID: mdl-35721308

ABSTRACT

Purpose: Rapid prognostication of COVID-19 patients is important for efficient resource allocation. We evaluated the relative prognostic value of baseline clinical variables (CVs), quantitative human-read chest CT (qCT), and AI-read chest radiograph (qCXR) airspace disease (AD) in predicting severe COVID-19. Approach: We retrospectively selected 131 COVID-19 patients (SARS-CoV-2 positive, March to October, 2020) at a tertiary hospital in the United States, who underwent chest CT and CXR within 48 hr of initial presentation. CVs included patient demographics and laboratory values; imaging variables included qCT volumetric percentage AD (POv) and qCXR area-based percentage AD (POa), assessed by a deep convolutional neural network. Our prognostic outcome was need for ICU admission. We compared the performance of three logistic regression models: using CVs known to be associated with prognosis (model I), using a dimension-reduced set of best predictor variables (model II), and using only age and AD (model III). Results: 60/131 patients required ICU admission, whereas 71/131 did not. Model I performed the poorest (AUC = 0.67 [0.58 to 0.76]; accuracy = 77%). Model II performed the best (AUC = 0.78 [0.71 to 0.86]; accuracy = 81%). Model III was equivalent (AUC = 0.75 [0.67 to 0.84]; accuracy = 80%). Both models II and III outperformed model I (AUC difference = 0.11 [0.02 to 0.19], p = 0.01; AUC difference = 0.08 [0.01 to 0.15], p = 0.04, respectively). Model II and III results did not change significantly when POv was replaced by POa. Conclusions: Severe COVID-19 can be predicted using only age and quantitative AD imaging metrics at initial diagnosis, which outperform the set of CVs. Moreover, AI-read qCXR can replace qCT metrics without loss of prognostic performance, promising more resource-efficient prognostication.
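The model comparisons above all rest on the AUC, which has a useful rank-statistic reading: it equals the probability that a randomly chosen ICU case is scored higher than a randomly chosen non-ICU case (the Mann-Whitney U statistic, normalized). A minimal sketch with made-up risk scores:

```python
# AUC computed as the normalized Mann-Whitney U statistic: fraction of
# positive/negative pairs ranked correctly, ties counted as half.
def auc(pos_scores, neg_scores):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

icu = [0.9, 0.7, 0.6]        # predicted risk, ICU-admitted patients (invented)
ward = [0.4, 0.6, 0.2, 0.3]  # predicted risk, non-ICU patients (invented)
```

This pairwise reading also explains why the paper reports AUC differences with confidence intervals rather than accuracy alone: it is threshold-free and insensitive to the 60/71 class split.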

7.
Medicine (Baltimore) ; 100(41): e27478, 2021 Oct 15.
Article in English | MEDLINE | ID: mdl-34731126

ABSTRACT

ABSTRACT: The COVID-19 pandemic has challenged institutions' diagnostic processes worldwide. The aim of this study was to assess the feasibility of an artificial intelligence (AI)-based software tool that automatically evaluates chest computed tomography for findings of suspected COVID-19. Two groups were retrospectively evaluated for COVID-19-associated ground glass opacities of the lungs (group A: real-time polymerase chain reaction positive COVID patients, n = 108; group B: asymptomatic pre-operative group, n = 88). The performance of an AI-based software assessment tool for detection of COVID-associated abnormalities was compared with human evaluation based on COVID-19 reporting and data system (CO-RADS) scores performed by 3 readers. All evaluated variables of the AI-based assessment showed significant differences between the 2 groups (P < .01). The inter-reader reliability of CO-RADS scoring was 0.87. The CO-RADS scores were substantially higher in group A (mean 4.28) than in group B (mean 1.50). The difference between CO-RADS scoring and AI assessment was statistically significant for all variables but showed good correlation with the clinical context of the CO-RADS score. The AI model predicted COVID-positive cases with an accuracy of 0.94. The evaluated AI-based algorithm detects COVID-19-associated findings with high sensitivity and may support radiologic workflows during the pandemic.


Subjects
Artificial Intelligence/standards , COVID-19/diagnosis , Lung/diagnostic imaging , Aged , Aged, 80 and over , COVID-19/epidemiology , COVID-19 Nucleic Acid Testing/standards , Feasibility Studies , Female , Humans , Lung/pathology , Male , Middle Aged , Pandemics , Retrospective Studies , SARS-CoV-2 , Tomography, X-Ray Computed
8.
Eur Radiol ; 31(11): 8775-8785, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33934177

ABSTRACT

OBJECTIVES: To investigate machine learning classifiers and interpretable models using chest CT for detection of COVID-19 and differentiation from other pneumonias, interstitial lung disease (ILD) and normal CTs. METHODS: Our retrospective multi-institutional study obtained 2446 chest CTs from 16 institutions (including 1161 COVID-19 patients). Training/validation/testing cohorts included 1011/50/100 COVID-19, 388/16/33 ILD, 189/16/33 other pneumonias, and 559/17/34 normal (no pathologies) CTs. A metric-based approach for the classification of COVID-19 used interpretable features, relying on logistic regression and random forests. A deep learning-based classifier differentiated COVID-19 via 3D features extracted directly from CT attenuation and probability distribution of airspace opacities. RESULTS: The most discriminative features of COVID-19 are the percentage of airspace opacity and peripheral and basal predominant opacities, concordant with the typical characterization of COVID-19 in the literature. Unsupervised hierarchical clustering compares feature distribution across COVID-19 and control cohorts. The metrics-based classifier achieved AUC = 0.83, sensitivity = 0.74, and specificity = 0.79, versus 0.93, 0.90, and 0.83, respectively, for the DL-based classifier. Most of the ambiguity comes from non-COVID-19 pneumonia with manifestations that overlap with COVID-19, as well as mild COVID-19 cases. Non-COVID-19 classification performance is 91% for ILD, 64% for other pneumonias, and 94% for no pathologies, which demonstrates the robustness of our method against different compositions of control groups. CONCLUSIONS: Our new method accurately discriminates COVID-19 from other types of pneumonia, ILD, and CTs with no pathologies, using quantitative imaging features derived from chest CT, while balancing interpretability of results and classification performance and, therefore, may be useful to facilitate diagnosis of COVID-19.
KEY POINTS: • Unsupervised clustering reveals the key tomographic features including percent airspace opacity and peripheral and basal opacities most typical of COVID-19 relative to control groups. • COVID-19-positive CTs were compared with COVID-19-negative chest CTs (including a balanced distribution of non-COVID-19 pneumonia, ILD, and no pathologies). Classification accuracies for COVID-19, pneumonia, ILD, and CT scans with no pathologies are respectively 90%, 64%, 91%, and 94%. • Our deep learning (DL)-based classification method demonstrates an AUC of 0.93 (sensitivity 90%, specificity 83%). Machine learning methods applied to quantitative chest CT metrics can therefore improve diagnostic accuracy in suspected COVID-19, particularly in resource-constrained environments.


Subjects
COVID-19 , Humans , Machine Learning , Retrospective Studies , SARS-CoV-2 , Thorax
9.
Med Image Anal ; 72: 102087, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34015595

ABSTRACT

Chest radiography is the most common radiographic examination performed in daily clinical practice for the detection of various heart and lung abnormalities. The large amount of data to be read and reported, with more than 100 studies per day for a single radiologist, poses a challenge in consistently maintaining high interpretation accuracy. The introduction of large-scale public datasets has led to a series of novel systems for automated abnormality classification. However, the labels of these datasets were obtained using natural language processed medical reports, yielding a large degree of label noise that can impact the performance. In this study, we propose novel training strategies that handle label noise from such suboptimal data. Prior label probabilities were measured on a subset of training data re-read by 4 board-certified radiologists and were used during training to increase the robustness of the training model to the label noise. Furthermore, we exploit the high comorbidity of abnormalities observed in chest radiography and incorporate this information to further reduce the impact of label noise. Additionally, anatomical knowledge is incorporated by training the system to predict lung and heart segmentation, as well as spatial knowledge labels. To deal with multiple datasets and images derived from various scanners that apply different post-processing techniques, we introduce a novel image normalization strategy. Experiments were performed on an extensive collection of 297,541 chest radiographs from 86,876 patients, leading to a state-of-the-art performance level for 17 abnormalities from 2 datasets. With an average AUC score of 0.880 across all abnormalities, our proposed training strategies can be used to significantly improve performance scores.
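One simple way to make a cross-entropy loss robust to label noise, in the spirit of the prior-probability idea above (our sketch, not the paper's exact formulation): soften each NLP-derived label toward the prior probability that such a label is correct, estimated from the radiologist re-reads, then train on the soft target. All names and numbers below are illustrative.

```python
# Noise-aware binary cross-entropy via label softening (illustrative).
import math

def soft_target(noisy_label, prior_correct):
    """Blend a 0/1 noisy label with the prior that it is correct."""
    return noisy_label * prior_correct + (1 - noisy_label) * (1 - prior_correct)

def bce(pred, target):
    eps = 1e-7  # clamp to avoid log(0)
    pred = min(max(pred, eps), 1 - eps)
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

# A positive label that re-reads say is right only 80% of the time becomes
# a 0.8 soft target: a model that confidently disagrees (pred = 0.1) is
# penalised less than it would be under the hard label.
t = soft_target(1, 0.8)
loss_soft = bce(0.1, t)
loss_hard = bce(0.1, 1.0)
```

The softened loss reduces the gradient pressure to fit labels that are likely wrong, which is the core of most prior-probability noise-handling schemes.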


Subjects
Lung Diseases , Lung , Humans , Lung/diagnostic imaging , Radiography
10.
Korean J Radiol ; 22(6): 994-1004, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33686818

ABSTRACT

OBJECTIVE: To extract pulmonary and cardiovascular metrics from chest CTs of patients with coronavirus disease 2019 (COVID-19) using a fully automated deep learning-based approach and assess their potential to predict patient management. MATERIALS AND METHODS: All initial chest CTs of patients who tested positive for severe acute respiratory syndrome coronavirus 2 at our emergency department between March 25 and April 25, 2020, were identified (n = 120). Three patient management groups were defined: group 1 (outpatient), group 2 (general ward), and group 3 (intensive care unit [ICU]). Multiple pulmonary and cardiovascular metrics were extracted from the chest CT images using deep learning. Additionally, six laboratory findings indicating inflammation and cellular damage were considered. Differences in CT metrics, laboratory findings, and demographics between the patient management groups were assessed. The potential of these parameters to predict patients' needs for intensive care (yes/no) was analyzed using logistic regression and receiver operating characteristic curves. Internal and external validity were assessed using 109 independent chest CT scans. RESULTS: While demographic parameters alone (sex and age) were not sufficient to predict ICU management status, both CT metrics alone (including both pulmonary and cardiovascular metrics; area under the curve [AUC] = 0.88; 95% confidence interval [CI] = 0.79-0.97) and laboratory findings alone (C-reactive protein, lactate dehydrogenase, white blood cell count, and albumin; AUC = 0.86; 95% CI = 0.77-0.94) were good classifiers. Excellent performance was achieved by a combination of demographic parameters, CT metrics, and laboratory findings (AUC = 0.91; 95% CI = 0.85-0.98). Application of a model that combined both pulmonary CT metrics and demographic parameters on a dataset from another hospital indicated its external validity (AUC = 0.77; 95% CI = 0.66-0.88). 
CONCLUSION: Chest CT of patients with COVID-19 contains valuable information that can be accessed using automated image analysis. These metrics are useful for the prediction of patient management.


Subjects
COVID-19/diagnosis , Deep Learning , Thorax/diagnostic imaging , Tomography, X-Ray Computed , Adolescent , Adult , Aged , Aged, 80 and over , Area Under Curve , Automation , COVID-19/diagnostic imaging , COVID-19/virology , Female , Humans , Logistic Models , Lung/physiopathology , Male , Middle Aged , ROC Curve , Retrospective Studies , SARS-CoV-2/isolation & purification , Young Adult
11.
Invest Radiol ; 56(8): 471-479, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-33481459

ABSTRACT

OBJECTIVES: The aim of this study was to leverage volumetric quantification of airspace disease (AD) derived from a superior modality (computed tomography [CT]) serving as ground truth, projected onto digitally reconstructed radiographs (DRRs) to (1) train a convolutional neural network (CNN) to quantify AD on paired chest radiographs (CXRs) and CTs, and (2) compare the DRR-trained CNN to expert human readers in the CXR evaluation of patients with confirmed COVID-19. MATERIALS AND METHODS: We retrospectively selected a cohort of 86 COVID-19 patients (with positive reverse transcriptase-polymerase chain reaction test results) from March to May 2020 at a tertiary hospital in the northeastern United States, who underwent chest CT and CXR within 48 hours. The ground-truth volumetric percentage of COVID-19-related AD (POv) was established by manual AD segmentation on CT. The resulting 3-dimensional masks were projected into 2-dimensional anterior-posterior DRR to compute area-based AD percentage (POa). A CNN was trained with DRR images generated from a larger-scale CT dataset of COVID-19 and non-COVID-19 patients, automatically segmenting lungs, AD, and quantifying POa on CXR. The CNN POa results were compared with POa quantified on CXR by 2 expert readers and to the POv ground truth, by computing correlations and mean absolute errors. RESULTS: Bootstrap mean absolute error and correlations between POa and POv were 11.98% (11.05%-12.47%) and 0.77 (0.70-0.82) for average of expert readers and 9.56% to 9.78% (8.83%-10.22%) and 0.78 to 0.81 (0.73-0.85) for the CNN, respectively. CONCLUSIONS: Our CNN trained with DRR using CT-derived airspace quantification achieved expert radiologist level of accuracy in the quantification of AD on CXR in patients with positive reverse transcriptase-polymerase chain reaction test results for COVID-19.
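The CT-to-DRR supervision step described above amounts to collapsing a 3-D mask along the anterior-posterior axis and taking an area fraction. A toy sketch (function names and the tiny nested-list "volumes" are illustrative, not the study's code):

```python
# Project a 3-D binary mask along the AP axis and compute an area-based
# airspace-disease percentage (POa), as a toy reconstruction of the idea.
def project_ap(mask3d):
    """Collapse a 3-D binary mask (z, y, x nested lists) along z (AP axis)."""
    z, y, x = len(mask3d), len(mask3d[0]), len(mask3d[0][0])
    return [[int(any(mask3d[k][j][i] for k in range(z))) for i in range(x)]
            for j in range(y)]

def area_percentage(ad2d, lung2d):
    """POa: projected airspace-disease area as a fraction of lung area."""
    ad = sum(map(sum, ad2d))
    lung = sum(map(sum, lung2d))
    return 100.0 * ad / lung

ad3d = [[[1, 0], [0, 0]], [[1, 0], [0, 0]]]    # 2x2x2 lesion mask
lung3d = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]  # whole toy volume is lung
poa = area_percentage(project_ap(ad3d), project_ap(lung3d))
```

The projection is what makes the CT-derived ground truth usable as a training target for a network that only ever sees 2-D radiographs.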


Subjects
COVID-19/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Radiography, Thoracic , Radiologists , Tomography, X-Ray Computed , Cohort Studies , Humans , Lung/diagnostic imaging , Male , Retrospective Studies
12.
IEEE Trans Med Imaging ; 40(1): 335-345, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32966215

ABSTRACT

Detecting malignant pulmonary nodules at an early stage can allow medical interventions which may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used convolutional neural networks (CNNs) to detect nodule candidates. Though such approaches have been shown to outperform conventional image-processing-based methods in detection accuracy, CNNs are also known to generalize poorly to under-represented samples in the training set and to be vulnerable to imperceptible noise perturbations. Such limitations cannot be easily addressed by scaling up the dataset or the models. In this work, we propose to add adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and the robustness of lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search the latent code within a bounded neighbourhood that would generate nodules to decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network to give over-confident mistakes. By evaluating on two different benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques can improve the detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also perform stress-tests on the false positive reduction networks by feeding different types of artificially produced patches. We show that the augmented networks are more robust to under-represented nodules as well as more resistant to noise perturbations.
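The PGD search described above can be sketched on a toy differentiable "detector response": repeatedly step along the gradient that decreases the response, then project back into an L-infinity ball around the starting latent code. Everything here is illustrative; the real objective is a neural detector, not the quadratic used below.

```python
# Projected gradient descent within an L-inf box (toy sketch): minimize a
# differentiable response f while staying within eps of the start point.
def pgd_minimize(x0, grad, step=0.1, eps=0.5, iters=50):
    """Search within ||x - x0||_inf <= eps for a low-response latent code."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
        # Projection: clamp each coordinate back into the epsilon box.
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x

# Toy response f(x) = sum(x_i^2) with gradient 2x: PGD converges to the
# point of the box closest to the unconstrained minimum at the origin.
x0 = [1.0, -2.0]
x_adv = pgd_minimize(x0, lambda x: [2 * xi for xi in x])
```

For synthetic-nodule hardening the sign is chosen to decrease the detector response (making the nodule harder to find); for noise attacks the same loop maximizes the detector's confidence on a wrong answer.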


Subjects
Lung Neoplasms , Solitary Pulmonary Nodule , Early Detection of Cancer , Humans , Image Processing, Computer-Assisted , Lung , Lung Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted , Solitary Pulmonary Nodule/diagnostic imaging , Tomography, X-Ray Computed
13.
Med Image Anal ; 68: 101855, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33260116

ABSTRACT

The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast and more. Most notable is the case of chest radiography, where there is a high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D Ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of underlying models to adapt to limited information and the high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure which captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams including computed radiography, ultrasonography and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.
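Sample rejection on predicted uncertainty, as evaluated above, is simple to state: sort cases by the model's uncertainty estimate and defer the most uncertain fraction to a human reader, reporting performance on the rest. A sketch with invented numbers (the tuples pair a prediction with its uncertainty):

```python
# Uncertainty-driven sample rejection (illustrative): keep confident cases
# for automatic reading, defer the most uncertain fraction.
def reject_uncertain(cases, max_reject=0.25):
    """Split (prediction, uncertainty) pairs into auto-read and deferred."""
    ranked = sorted(cases, key=lambda c: c[1])  # ascending uncertainty
    n_keep = len(cases) - int(len(cases) * max_reject)
    return ranked[:n_keep], ranked[n_keep:]

cases = [(0.9, 0.05), (0.2, 0.60), (0.8, 0.10), (0.6, 0.45)]
kept, deferred = reject_uncertain(cases)
```

The abstract's result (ROC-AUC rising to 0.91 at under 25% rejection) is exactly this mechanism: accuracy on the kept subset improves because the deferred cases are the ones where predictions and reader labels disagree most.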


Subjects
Artifacts , Magnetic Resonance Imaging , Humans , Machine Learning , Uncertainty
14.
ArXiv ; 2020 Nov 18.
Article in English | MEDLINE | ID: mdl-32550252

ABSTRACT

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes as input a non-contrasted chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 COVID-19 confirmed patients and 100 healthy controls) from institutions from Canada, Europe and the United States collected between 2002-Present (April, 2020). Ground truth is established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the prediction to the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was calculated as 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; 2 had between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case compared to 30 minutes required for manual annotations. CONCLUSION: A new method segments regions of CT abnormalities associated with COVID-19 and computes (PO, PHO), as well as (LSS, LHOS), severity scores.
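Once the masks exist, the global severity measure reduces to a volume ratio, and a lobe-wise score then bins each lobe's involvement into an ordinal grade. The sketch below illustrates that arithmetic; the binning cut-offs and function names are our assumptions for the example, not the paper's exact definitions.

```python
# Percentage of opacity (PO) and a lobe-wise severity sum (LSS-style),
# sketched with illustrative cut-offs.
def percent_opacity(lesion_vox, lung_vox):
    """PO: lesion volume as a percentage of total lung volume."""
    return 100.0 * lesion_vox / lung_vox

def lobe_score(po_lobe, cutoffs=(0.01, 25.0, 50.0, 75.0)):
    """Map a lobe's percentage of opacity to a 0-4 severity grade."""
    return sum(po_lobe >= c for c in cutoffs)

po = percent_opacity(lesion_vox=150, lung_vox=3000)       # global measure
lss = sum(lobe_score(p) for p in (5.0, 0.0, 30.0, 80.0, 0.0))  # five lobes
```

The high-opacity variants (PHO, LHOS) follow the same pattern with the lesion mask restricted to consolidation-range attenuation.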

15.
J Thorac Imaging ; 35 Suppl 1: S21-S27, 2020 May.
Article in English | MEDLINE | ID: mdl-32317574

ABSTRACT

The constantly increasing number of computed tomography (CT) examinations poses major challenges for radiologists. In this article, the additional benefits and potential of an artificial intelligence (AI) analysis platform for chest CT examinations in routine clinical practice will be examined. Specific application examples include AI-based, fully automatic lung segmentation with emphysema quantification, aortic measurements, detection of pulmonary nodules, and bone mineral density measurement. This contribution aims to appraise this AI-based application for value-added diagnosis during routine chest CT examinations and explore future development perspectives.


Subjects
Lung Diseases/diagnostic imaging , Machine Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Tomography, X-Ray Computed/methods , Workflow , Humans , Lung/diagnostic imaging , Neural Networks, Computer
16.
J Nucl Med ; 61(12): 1786-1792, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32332147

ABSTRACT

Prostate-specific membrane antigen (PSMA)-targeting PET imaging is becoming the reference standard for prostate cancer staging, especially in advanced disease. Yet, the implications of PSMA PET-derived whole-body tumor volume for overall survival are poorly elucidated to date. This might be because semiautomated quantification of whole-body tumor volume as a PSMA PET biomarker is an unmet clinical challenge. Therefore, in the present study we propose and evaluate a software that enables the semiautomated quantification of PSMA PET biomarkers such as whole-body tumor volume. Methods: The proposed quantification is implemented as a research prototype. PSMA-accumulating foci were automatically segmented by a percental threshold (50% of local SUVmax). Neural networks were trained to segment organs in PET/CT acquisitions (training CTs: 8,632, validation CTs: 53). Thereby, PSMA foci within organs of physiologic PSMA uptake were semiautomatically excluded from the analysis. Pretherapeutic PSMA PET/CTs of 40 consecutive patients treated with 177Lu-PSMA-617 were evaluated in this analysis. The whole-body tumor volume (PSMATV50), SUVmax, SUVmean, and other whole-body imaging biomarkers were calculated for each patient. Semiautomatically derived results were compared with manual readings in a subcohort (by 1 nuclear medicine physician). Additionally, an interobserver evaluation of the semiautomated approach was performed in a subcohort (by 2 nuclear medicine physicians). Results: Manually and semiautomatically derived PSMA metrics were highly correlated (PSMATV50: R2 = 1.000, P < 0.001; SUVmax: R2 = 0.988, P < 0.001). The interobserver agreement of the semiautomated workflow was also high (PSMATV50: R2 = 1.000, P < 0.001, interclass correlation coefficient = 1.000; SUVmax: R2 = 0.988, P < 0.001, interclass correlation coefficient = 0.997). 
PSMATV50 (ml) was a significant predictor of overall survival (hazard ratio: 1.004; 95% confidence interval: 1.001-1.006, P = 0.002) and remained so in a multivariate regression including other biomarkers (hazard ratio: 1.004; 95% confidence interval: 1.001-1.006, P = 0.004). Conclusion: PSMATV50 is a promising PSMA PET biomarker that is reproducible and easily quantified by the proposed semiautomated software. Moreover, PSMATV50 is a significant predictor of overall survival in patients with advanced prostate cancer who receive 177Lu-PSMA-617 therapy.
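The percental threshold used above (50% of local SUVmax) can be sketched in one dimension: within a detected focus, keep voxels whose uptake exceeds half the focus maximum, then multiply the supra-threshold voxel count by the voxel volume. The toy SUV values and voxel size below are invented for illustration.

```python
# 50%-of-SUVmax segmentation of a single focus and the resulting volume
# (toy one-dimensional sketch of the percental-threshold idea).
def segment_focus(suv_values, frac=0.5):
    """Return indices of voxels above frac * local SUVmax."""
    cutoff = frac * max(suv_values)
    return [i for i, v in enumerate(suv_values) if v > cutoff]

def tumor_volume(suv_values, voxel_ml=0.1, frac=0.5):
    """PSMATV-style volume: supra-threshold voxel count times voxel size."""
    return len(segment_focus(suv_values, frac)) * voxel_ml

focus = [1.0, 4.0, 9.0, 10.0, 6.0, 2.0]  # SUVs along a line through a focus
vol = tumor_volume(focus)
```

In the full workflow this per-focus volume is summed over all PSMA-avid foci after organ-masked exclusion of physiologic uptake, yielding the whole-body PSMATV50.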


Subjects
Edetic Acid/analogs & derivatives , Oligopeptides , Positron Emission Tomography Computed Tomography , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Tumor Burden , Aged , Automation , Biomarkers, Tumor/metabolism , Gallium Isotopes , Gallium Radioisotopes , Humans , Image Processing, Computer-Assisted , Male , Observer Variation , Prostatic Neoplasms/blood , Prostatic Neoplasms/metabolism , Software , Survival Analysis
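The thresholding and pooling steps this abstract describes (foci grown at 50% of the local SUVmax, then combined into whole-body metrics) can be sketched as follows. This is an illustrative reconstruction, not the authors' research prototype: the flood-fill connectivity, seed handling, and all function names are assumptions.

```python
import numpy as np
from collections import deque

def segment_focus(suv, seed, fraction=0.5):
    """Flood-fill segmentation of one PSMA-avid focus: starting from a seed
    voxel (assumed to be the local SUVmax), include every 6-connected voxel
    whose SUV is at least `fraction` of the seed value."""
    threshold = fraction * suv[seed]
    mask = np.zeros(suv.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, suv.shape)) \
               and not mask[n] and suv[n] >= threshold:
                mask[n] = True
                queue.append(n)
    return mask

def psma_biomarkers(suv, seeds, voxel_volume_ml):
    """Pool per-focus masks into whole-body metrics (PSMATV50, SUVmax, SUVmean)."""
    tumor = np.zeros(suv.shape, dtype=bool)
    for seed in seeds:
        tumor |= segment_focus(suv, seed)
    return {
        "PSMATV50_ml": tumor.sum() * voxel_volume_ml,
        "SUVmax": float(suv[tumor].max()),
        "SUVmean": float(suv[tumor].mean()),
    }
```

In the actual workflow, foci falling inside the organ masks produced by the trained networks (organs of physiologic PSMA uptake) would be excluded before pooling.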
17.
Radiol Artif Intell ; 2(4): e200048, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33928255

ABSTRACT

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground-glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes a non-contrast chest CT as input and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9,749 chest CT volumes. Using deep learning and deep reinforcement learning, the method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities: the first pair of measures (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. Evaluation of the algorithm is reported on CTs of 200 participants (100 patients with confirmed COVID-19 and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and the present (April 2020). Ground truth was established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the predictions with the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). Of the 100 healthy controls, 98 had a predicted PO of less than 1%, and 2 had a PO between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared with 30 minutes for manual annotation. CONCLUSION: The new method segments regions of CT abnormality associated with COVID-19 and computes the (PO, PHO) and (LSS, LHOS) severity scores.
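The global and lobe-wise severity measures can be illustrated with a minimal sketch. The abstract does not define the scores precisely, so the 0-4 per-lobe involvement grading used for the lobe-wise scores (summed over five lobes, range 0-20) is an assumption about the exact definition; the mask layout and names are likewise illustrative.

```python
import numpy as np

def severity_scores(lesion, high_opacity, lobes):
    """Global (PO, PHO) and lobe-wise (LSS, LHOS) CT severity measures.

    lesion, high_opacity: boolean masks; lobes: integer label map
    (0 = background, 1..5 = lung lobes)."""
    lung = lobes > 0
    po = 100.0 * lesion[lung].sum() / lung.sum()        # percentage of lung affected
    pho = 100.0 * high_opacity[lung].sum() / lung.sum()

    def grade(frac):
        # Assumed bins: 0 none, 1 <25%, 2 25-50%, 3 50-75%, 4 >=75% involvement.
        return 0 if frac == 0 else min(int(frac // 0.25) + 1, 4)

    lss = lhos = 0
    for lb in range(1, 6):
        m = lobes == lb
        if m.any():
            lss += grade(lesion[m].mean())
            lhos += grade(high_opacity[m].mean())
    return {"PO": po, "PHO": pho, "LSS": lss, "LHOS": lhos}
```

For example, a single lobe with 30% lesion involvement contributes a grade of 2 to LSS while leaving the other lobes' grades at 0.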

18.
IEEE Trans Pattern Anal Mach Intell ; 41(1): 176-189, 2019 01.
Article in English | MEDLINE | ID: mdl-29990011

ABSTRACT

Robust and fast detection of anatomical structures is a prerequisite for both diagnostic and interventional medical image analysis. Current solutions for anatomy detection are typically based on machine learning techniques that exploit large annotated image databases to learn the appearance of the captured anatomy. These solutions are subject to several limitations, including the use of suboptimal feature-engineering techniques and, most importantly, computationally suboptimal search schemes. To address these issues, we propose a method that follows a new paradigm, reformulating the detection problem as a behavior-learning task for an artificial agent. We couple the modeling of anatomy appearance and the object search in a unified behavioral framework, using the capabilities of deep reinforcement learning and multi-scale image analysis. In other words, an artificial agent is trained not only to distinguish the target anatomical object from the rest of the body but also to find the object by learning and following an optimal navigation path to the target in the imaged volumetric space. We evaluated our approach on 1,487 3D-CT volumes from 532 patients, totaling over 500,000 image slices, and show that it significantly outperforms state-of-the-art solutions in detecting several anatomical structures, with no failed cases from a clinical acceptance perspective, while also achieving 20-30 percent higher detection accuracy. Most importantly, we improve the detection speed of the reference methods by two to three orders of magnitude, achieving unmatched real-time performance on large 3D-CT scans.
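The agent's search can be caricatured in a few lines as a greedy episode over a learned action-value function. The trained deep network is replaced here by a stand-in oracle (negative squared distance to a hypothetical target landmark), so only the navigation mechanics, not the learning or the multi-scale scheme, are shown; all names are illustrative.

```python
import numpy as np

# Six axis-aligned voxel moves available to the agent.
ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def navigate(q_value, start, max_steps=200):
    """One greedy episode: from `start`, repeatedly move to the neighboring
    voxel with the highest value and stop at a local optimum (a simplified
    stopping rule; the paper's agent detects oscillation at the finest scale)."""
    pos = np.array(start)
    for _ in range(max_steps):
        best = max((pos + a for a in ACTIONS), key=lambda p: q_value(tuple(p)))
        if q_value(tuple(best)) <= q_value(tuple(pos)):
            break  # no neighbor improves: predicted landmark position
        pos = best
    return tuple(int(c) for c in pos)

# Stand-in for the trained value network: negative squared distance to a
# hypothetical target, so the greedy policy walks straight toward it.
target = (12, 7, 30)
oracle = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))
print(navigate(oracle, start=(0, 0, 0)))  # -> (12, 7, 30)
```

The speed advantage over exhaustive scanning comes from this trajectory structure: the agent evaluates only the voxels along its path rather than every position in the volume.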

19.
Med Image Anal ; 48: 203-213, 2018 08.
Article in English | MEDLINE | ID: mdl-29966940

ABSTRACT

Robust and fast detection of anatomical structures is an important component of medical image analysis technologies. Current solutions for anatomy detection are based on machine learning and are generally driven by suboptimal, exhaustive search strategies. In particular, these techniques do not effectively address cases of incomplete data, i.e., scans acquired with a partial field-of-view. We address these challenges by following a new paradigm that reformulates the detection task as teaching an intelligent artificial agent how to actively search for an anatomical structure. Using the principles of deep reinforcement learning with multi-scale image analysis, artificial agents are taught optimal navigation paths in the scale-space representation of an image, while accounting for structures that are missing from the field-of-view. The spatial coherence of the observed anatomical landmarks is ensured using elements from statistical shape modeling and robust estimation theory. Experiments on a dataset of 5,043 3D-CT volumes from over 2,000 patients, totaling over 2,500,000 image slices, show that our solution outperforms marginal space deep learning, a powerful deep learning method, at detecting different anatomical structures without any failure. In particular, our solution achieves 0% false-positive and 0% false-negative rates at detecting whether landmarks are captured in the field-of-view of the scan (excluding all border cases), with an average detection accuracy of 2.78 mm. In terms of runtime, we reduce the detection time of the marginal space deep learning method by 20-30 times, to under 40 ms, unmatched performance for high-resolution incomplete 3D-CT data.


Subjects
Deep Learning , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Tomography, X-Ray Computed/methods , Algorithms , Anatomic Landmarks , Humans
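The idea of enforcing spatial coherence for landmarks outside the field-of-view can be sketched with a deliberately minimal stand-in for the paper's statistical shape model: estimate a robust (median) translation from a reference shape to the detected landmarks, then apply it to the reference positions of the missing ones. The single-translation model and all names are assumptions; the actual method uses a richer shape model and robust estimation.

```python
import numpy as np

def impute_missing(mean_shape, detected):
    """Fill in landmarks outside the field-of-view from the visible ones.

    mean_shape: (N, 3) array of landmark positions in a reference shape.
    detected:   dict mapping landmark index -> measured position; absent
                keys are treated as outside the field-of-view."""
    idx = sorted(detected)
    offsets = np.array([detected[i] - mean_shape[i] for i in idx])
    translation = np.median(offsets, axis=0)  # median is robust to an outlier
    out = dict(detected)
    for i in range(len(mean_shape)):
        if i not in out:
            out[i] = mean_shape[i] + translation
    return out
```

Because the median is used, a single grossly mis-detected landmark does not drag the estimated translation away from the consensus of the remaining detections.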
20.
Med Image Anal ; 35: 238-249, 2017 01.
Article in English | MEDLINE | ID: mdl-27475910

ABSTRACT

Intervention planning is essential for successful mitral valve (MV) repair procedures. Finite-element models (FEM) of the MV could be used to achieve this goal, but translation to the clinical domain is challenging: many input parameters for FEM models, such as tissue properties, are not known, and only simplified MV geometry can be extracted from noninvasive modalities such as echocardiography, lacking major anatomical details such as the complex chordae topology. A traditional approach to FEM computation is to use a simplified model of the chordae topology (also known as the parachute model), which connects the papillary muscle tips to the free edges and selected basal points. Building on the existing parachute model, a new and comprehensive MV model was developed that uses a novel chordae representation capable of approximating regional connectivity. In addition, a fully automated personalization approach was developed for the chordae rest length, removing the need for tedious manual parameter selection. Based on the MV model extracted during mid-diastole (open MV), the MV geometric configuration at peak systole (closed MV) was computed according to the FEM model. In this work the focus was placed on validating the MV closure computation. The method is evaluated on ten in vitro ovine cases, where in addition to echocardiography imaging, high-resolution µCT imaging is available for accurate validation.


Subjects
Echocardiography, Three-Dimensional/methods , Mitral Valve/diagnostic imaging , Uncertainty , Algorithms , Animals , Finite Element Analysis , Humans , Mitral Valve Insufficiency/diagnostic imaging , Reproducibility of Results , Sensitivity and Specificity , Sheep