Results 1 - 20 of 114
1.
Eur Radiol ; 31(11): 8775-8785, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33934177

ABSTRACT

OBJECTIVES: To investigate machine learning classifiers and interpretable models using chest CT for detection of COVID-19 and differentiation from other pneumonias, interstitial lung disease (ILD), and normal CTs. METHODS: Our retrospective multi-institutional study obtained 2446 chest CTs from 16 institutions (including 1161 COVID-19 patients). Training/validation/testing cohorts included 1011/50/100 COVID-19, 388/16/33 ILD, 189/16/33 other pneumonias, and 559/17/34 normal (no pathologies) CTs. A metric-based approach for the classification of COVID-19 used interpretable features, relying on logistic regression and random forests. A deep learning-based classifier differentiated COVID-19 via 3D features extracted directly from CT attenuation and the probability distribution of airspace opacities. RESULTS: The most discriminative features of COVID-19 are the percentage of airspace opacity and peripheral and basal predominant opacities, concordant with the typical characterization of COVID-19 in the literature. Unsupervised hierarchical clustering compares feature distributions across the COVID-19 and control cohorts. The metrics-based classifier achieved AUC = 0.83, sensitivity = 0.74, and specificity = 0.79, versus 0.93, 0.90, and 0.83, respectively, for the DL-based classifier. Most of the ambiguity comes from non-COVID-19 pneumonia with manifestations that overlap with COVID-19, as well as mild COVID-19 cases. Non-COVID-19 classification performance is 91% for ILD, 64% for other pneumonias, and 94% for no pathologies, which demonstrates the robustness of our method against different compositions of control groups. CONCLUSIONS: Our new method accurately discriminates COVID-19 from other types of pneumonia, ILD, and CTs with no pathologies, using quantitative imaging features derived from chest CT, while balancing interpretability of results and classification performance, and therefore may be useful to facilitate diagnosis of COVID-19.
KEY POINTS: • Unsupervised clustering reveals the key tomographic features including percent airspace opacity and peripheral and basal opacities most typical of COVID-19 relative to control groups. • COVID-19-positive CTs were compared with COVID-19-negative chest CTs (including a balanced distribution of non-COVID-19 pneumonia, ILD, and no pathologies). Classification accuracies for COVID-19, pneumonia, ILD, and CT scans with no pathologies are respectively 90%, 64%, 91%, and 94%. • Our deep learning (DL)-based classification method demonstrates an AUC of 0.93 (sensitivity 90%, specificity 83%). Machine learning methods applied to quantitative chest CT metrics can therefore improve diagnostic accuracy in suspected COVID-19, particularly in resource-constrained environments.
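The metric-based classification idea above (interpretable quantitative features fed to a simple classifier) can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the two features (percent airspace opacity and a peripheral-predominance score), their value ranges, and the synthetic cohort are all made up.

```python
import math, random

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (no libraries)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Probability that a case is COVID-19-like under the toy model."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic cohort: COVID-19-like cases get higher airspace opacity and
# more peripheral predominance (values invented for illustration only).
random.seed(0)
covid = [[random.uniform(0.3, 0.8), random.uniform(0.6, 0.9)] for _ in range(50)]
control = [[random.uniform(0.0, 0.3), random.uniform(0.1, 0.5)] for _ in range(50)]
X = covid + control
y = [1] * 50 + [0] * 50
w, b = train_logistic(X, y)
```

Because the two interpretable features largely separate the synthetic cohorts, the fitted weights make predictions in the COVID-like region score above 0.5.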


Subjects
COVID-19; Humans; Machine Learning; Retrospective Studies; SARS-CoV-2; Thorax
2.
Eur Heart J ; 38(41): 3049-3055, 2017 Nov 01.
Article in English | MEDLINE | ID: mdl-29029109

ABSTRACT

The diagnostic evaluation of acute chest pain has been augmented in recent years by advances in the sensitivity and precision of cardiac troponin assays, new biomarkers, improvements in imaging modalities, and the release of new clinical decision algorithms. This progress has enabled physicians to diagnose or rule out acute myocardial infarction earlier after the initial patient presentation, usually in emergency department settings, which may facilitate prompt initiation of evidence-based treatments, investigation of alternative diagnoses for chest pain, or discharge, and permit better utilization of healthcare resources. A non-trivial proportion of patients fall into an indeterminate category according to rule-out algorithms, and minimal evidence-based guidance exists for the optimal evaluation, monitoring, and treatment of these patients. Following a review of recent advances in the early diagnosis of acute coronary syndrome, the Cardiovascular Round Table of the ESC proposes approaches for the optimal application of early strategies in clinical practice to improve patient care. The following specific 'indeterminate' patient categories were considered: (i) patients with symptoms and high-sensitivity cardiac troponin <99th percentile; (ii) patients with symptoms and high-sensitivity troponin <99th percentile but above the limit of detection; (iii) patients with symptoms and high-sensitivity troponin >99th percentile but without dynamic change; and (iv) patients with symptoms and high-sensitivity troponin >99th percentile and dynamic change but without coronary plaque rupture/erosion/dissection. Definitive evidence for managing these patients, whose early diagnosis is 'indeterminate', is currently lacking, and these areas of uncertainty should be assigned a high priority for research.
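The four 'indeterminate' categories can be read as a simple decision rule. The sketch below is illustrative only, not a clinical algorithm: category (i) is interpreted here as troponin below the limit of detection and (ii) as detectable but below the 99th percentile, and every threshold is an assay-specific assumption.

```python
def triage_category(hs_ctn, percentile_99, lod, dynamic_change, plaque_event):
    """Map a symptomatic patient to one of the 'indeterminate'
    categories (i)-(iv) described above. Illustrative sketch only:
    (i) is taken to mean below the limit of detection (lod), (ii)
    detectable but below the 99th percentile; thresholds are
    assay-specific and must come from the assay's documentation."""
    if hs_ctn < percentile_99:
        return "i" if hs_ctn < lod else "ii"
    if not dynamic_change:
        return "iii"
    if not plaque_event:
        return "iv"
    return "acute MI pathway"  # elevated, dynamic, with plaque event
```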


Subjects
Acute Coronary Syndrome/diagnosis; Myocardial Infarction/diagnosis; Angina Pectoris/etiology; Biomarkers/metabolism; Early Diagnosis; Female; Humans; Male; Risk Assessment; Sensitivity and Specificity; Troponin/metabolism
3.
Sci Rep ; 14(1): 9380, 2024 04 23.
Article in English | MEDLINE | ID: mdl-38654066

ABSTRACT

Vision transformers (ViTs) have revolutionized computer vision by employing self-attention instead of convolutional neural networks, and their success stems from their ability to capture global dependencies and remove the spatial biases of locality. In medical imaging, where input data may differ in size and resolution, existing architectures require resampling or resizing during pre-processing, leading to potential spatial resolution loss and information degradation. This study proposes a co-ordinate-based embedding that encodes the geometry of medical images, capturing physical co-ordinate and resolution information without the need for resampling or resizing. The effectiveness of the proposed embedding is demonstrated through experiments with UNETR and SwinUNETR models for infarct segmentation on an MRI dataset with AxTrace and AxADC contrasts. The dataset consists of 1142 training, 133 validation, and 143 test subjects. With the addition of the co-ordinate-based positional embedding, the two models achieved substantial improvements in mean Dice score of 6.5% and 7.6%, respectively. The proposed embedding showed a statistically significant advantage (p < 0.0001) over alternative approaches. In conclusion, the proposed co-ordinate-based pixel-wise positional embedding method offers a promising solution for Transformer-based models in medical image analysis. It effectively leverages physical co-ordinate information to enhance performance without compromising spatial resolution and provides a foundation for future advancements in positional embedding techniques for medical applications.
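The core of a co-ordinate-based embedding is that the model sees each voxel's physical position (computed from the volume's origin and spacing) rather than its array index, so volumes of different resolutions need no resampling. A minimal sketch, with a fixed sinusoidal embedding standing in for the learned one in the paper:

```python
import math

def physical_coords(index, spacing, origin):
    """World-space position (mm) of a voxel: origin + index * spacing
    per axis, so the embedding reflects true geometry regardless of
    the scan's resolution."""
    return [o + i * s for i, s, o in zip(index, spacing, origin)]

def sinusoidal_embed(coord_mm, dim=8, base=10000.0):
    """Fixed sinusoidal embedding of one physical coordinate; a simple
    stand-in for the learned embedding described above."""
    emb = []
    for k in range(dim // 2):
        freq = 1.0 / (base ** (2 * k / dim))
        emb.append(math.sin(coord_mm * freq))
        emb.append(math.cos(coord_mm * freq))
    return emb
```

Two scans with different voxel spacings map the same anatomical location to the same embedding, which is the point of encoding geometry instead of array indices.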


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Algorithms; Neural Networks, Computer
4.
J Med Imaging (Bellingham) ; 11(3): 035001, 2024 May.
Article in English | MEDLINE | ID: mdl-38756438

ABSTRACT

Purpose: The accurate detection and tracking of devices, such as guiding catheters in live X-ray image acquisitions, are essential prerequisites for endovascular cardiac interventions. This information is leveraged for procedural guidance, e.g., directing stent placements. To ensure procedural safety and efficacy, tracking must be highly robust, with no failures. Achieving this requires efficiently tackling challenges such as device obscuration by the contrast agent or by other external devices or wires, changes in the field-of-view or acquisition angle, and continuous movement due to cardiac and respiratory motion. Approach: To overcome these challenges, we propose an approach to learn spatio-temporal features from a very large data cohort of over 16 million interventional X-ray frames using self-supervision for image sequence data. Our approach is based on a masked image modeling technique that leverages frame interpolation-based reconstruction to learn fine inter-frame temporal correspondences. The features encoded in the resulting model are fine-tuned downstream in a lightweight model. Results: Our approach achieves state-of-the-art performance, in particular for robustness, compared to highly optimized reference solutions (which use multi-stage feature fusion or multi-task and flow regularization). The experiments show that our method achieves a 66.31% reduction in the maximum tracking error against the reference solutions (23.20% when flow regularization is used), achieving a success score of 97.95% at a 3× faster inference speed of 42 frames per second (on GPU). In addition, we achieve a 20% reduction in the standard deviation of errors, which indicates much more stable tracking performance. Conclusions: The proposed data-driven approach achieves superior performance, particularly in robustness and speed, compared with the frequently used multi-modular approaches for device tracking.
The results encourage the use of our approach in various other tasks within interventional image analytics that require effective understanding of spatio-temporal semantics.
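The robustness numbers quoted above (maximum tracking error and success score) can be computed from per-frame tracking errors as below; the 3-mm success threshold is an assumed value for illustration, not taken from the paper.

```python
def tracking_metrics(errors_mm, success_threshold_mm=3.0):
    """Summarize per-frame tracking errors: the worst-case (max) error,
    and the fraction of frames whose error stays under an assumed
    success threshold."""
    max_err = max(errors_mm)
    success = sum(e <= success_threshold_mm for e in errors_mm) / len(errors_mm)
    return max_err, success
```

Reporting the maximum rather than the mean is what makes this a robustness metric: a single lost-track frame dominates the score.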

5.
Acad Radiol ; 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997881

ABSTRACT

RATIONALE AND OBJECTIVES: Given the high volume of chest radiographs, radiologists frequently encounter heavy workloads. In outpatient imaging, a substantial portion of chest radiographs show no actionable findings. Automatically identifying these cases could improve efficiency by facilitating shorter reading workflows. PURPOSE: A large-scale study to assess the performance of AI in identifying chest radiographs with no actionable disease (NAD) in an outpatient imaging population using comprehensive, objective, and reproducible criteria for NAD. MATERIALS AND METHODS: The independent validation study includes 15,000 patients with chest radiographs in posterior-anterior (PA) and lateral projections from an outpatient imaging center in the United States. Ground truth was established by reviewing CXR reports and classifying cases as NAD or actionable disease (AD). The NAD definition includes completely normal chest radiographs and radiographs with well-defined non-actionable findings. The AI NAD Analyzer (trained with 100 million multimodal images and fine-tuned on 1.3 million radiographs) utilizes a tandem system with image-level rule-in and compartment-level rule-out to provide case-level output as NAD or potential actionable disease (PAD). RESULTS: A total of 14,057 cases met our eligibility criteria (age 56 ± 16.1 years; 55% women, 45% men). The prevalence of NAD cases in the study population was 70.7%. The AI NAD Analyzer correctly classified NAD cases with a sensitivity of 29.1% and a yield of 20.6%. The specificity was 98.9%, which corresponds to a miss rate of 0.3% of cases. Significant findings were missed in 0.06% of cases, and no cases with critical findings were missed by AI. CONCLUSION: In an outpatient population, AI can identify 20% of chest radiographs as NAD with a very low rate of missed findings. These cases could potentially be read using a streamlined protocol, thus improving efficiency and reducing the daily workload for radiologists.
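The reported yield and miss rate follow arithmetically from sensitivity, specificity, and NAD prevalence; a quick sketch (the function names are ours, not the study's):

```python
def nad_yield(sensitivity, nad_prevalence):
    """Fraction of ALL exams routed to the streamlined NAD workflow:
    the NAD cases that the system correctly rules in."""
    return sensitivity * nad_prevalence

def miss_rate(specificity, nad_prevalence):
    """Fraction of ALL exams that are actionable yet flagged NAD:
    the false NAD calls among the actionable-disease cases."""
    return (1.0 - specificity) * (1.0 - nad_prevalence)
```

Plugging in the abstract's values: 0.291 × 0.707 ≈ 0.206 (the 20.6% yield) and (1 − 0.989) × 0.293 ≈ 0.003 (the 0.3% miss rate).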

6.
J Med Imaging (Bellingham) ; 9(3): 034003, 2022 May.
Article in English | MEDLINE | ID: mdl-35721308

ABSTRACT

Purpose: Rapid prognostication of COVID-19 patients is important for efficient resource allocation. We evaluated the relative prognostic value of baseline clinical variables (CVs), quantitative human-read chest CT (qCT), and AI-read chest radiograph (qCXR) airspace disease (AD) in predicting severe COVID-19. Approach: We retrospectively selected 131 COVID-19 patients (SARS-CoV-2 positive, March to October, 2020) at a tertiary hospital in the United States, who underwent chest CT and CXR within 48 hr of initial presentation. CVs included patient demographics and laboratory values; imaging variables included qCT volumetric percentage AD (POv) and qCXR area-based percentage AD (POa), assessed by a deep convolutional neural network. Our prognostic outcome was need for ICU admission. We compared the performance of three logistic regression models: using CVs known to be associated with prognosis (model I), using a dimension-reduced set of best predictor variables (model II), and using only age and AD (model III). Results: 60/131 patients required ICU admission, whereas 71/131 did not. Model I performed the poorest (AUC = 0.67 [0.58 to 0.76]; accuracy = 77%). Model II performed the best (AUC = 0.78 [0.71 to 0.86]; accuracy = 81%). Model III was equivalent (AUC = 0.75 [0.67 to 0.84]; accuracy = 80%). Both models II and III outperformed model I (AUC difference = 0.11 [0.02 to 0.19], p = 0.01; AUC difference = 0.08 [0.01 to 0.15], p = 0.04, respectively). Model II and III results did not change significantly when POv was replaced by POa. Conclusions: Severe COVID-19 can be predicted using only age and quantitative AD imaging metrics at initial diagnosis, which outperform the set of CVs. Moreover, AI-read qCXR can replace qCT metrics without loss of prognostic performance, promising more resource-efficient prognostication.
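The AUC values compared above have a direct probabilistic reading: the chance that a randomly chosen ICU patient receives a higher model score than a randomly chosen non-ICU patient. A minimal Mann-Whitney-style computation (illustrative, not the study's code):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative
    one (Mann-Whitney formulation); ties count one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```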

7.
J Med Imaging (Bellingham) ; 9(6): 064503, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36466078

ABSTRACT

Purpose: Building accurate and robust artificial intelligence systems for medical image assessment requires the creation of large sets of annotated training examples. However, constructing such datasets is very costly due to the complex nature of annotation tasks, which often require expert knowledge (e.g., a radiologist). To counter this limitation, we propose a method to learn from medical images at scale in a self-supervised way. Approach: Our approach, based on contrastive learning and online feature clustering, leverages training datasets of over 100,000,000 medical images of various modalities, including radiography, computed tomography (CT), magnetic resonance (MR) imaging, and ultrasonography (US). We propose to use the learned features to guide model training in supervised and hybrid self-supervised/supervised regimes on various downstream tasks. Results: We highlight a number of advantages of this strategy on challenging image assessment problems in radiography, CT, and MR: (1) significant increase in accuracy compared to the state-of-the-art (e.g., area under the curve boost of 3% to 7% for detection of abnormalities from chest radiography scans and hemorrhage detection on brain CT); (2) acceleration of model convergence during training by up to 85% compared with using no pretraining (e.g., 83% when training a model for detection of brain metastases in MR scans); and (3) increase in robustness to various image augmentations, such as intensity variations, rotations, or scaling, reflective of data variation seen in the field. Conclusions: The proposed approach enables large gains in accuracy and robustness on challenging image assessment problems. The improvement is significant compared with other state-of-the-art approaches trained on medical or vision images (e.g., ImageNet).
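As a sketch of the contrastive-learning objective named above, the InfoNCE-style loss below pulls an anchor embedding toward its positive view and pushes it away from negatives. The cosine similarity, temperature value, and toy vectors are illustrative assumptions, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss for one anchor: low when the anchor is close
    to its positive view and far from all negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))
```

Minimizing this loss over many anchors is what shapes the feature space before any labels are used.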

8.
Radiol Artif Intell ; 4(3): e210115, 2022 May.
Article in English | MEDLINE | ID: mdl-35652116

ABSTRACT

Purpose: To present a method that automatically detects, subtypes, and locates acute or subacute intracranial hemorrhage (ICH) on noncontrast CT (NCCT) head scans; generates detection confidence scores to identify high-confidence data subsets with higher accuracy; and improves radiology worklist prioritization. Such scores may enable clinicians to better use artificial intelligence (AI) tools. Materials and Methods: This retrospective study included 46 057 studies from seven "internal" centers for development (training, architecture selection, hyperparameter tuning, and operating-point calibration; n = 25 946) and evaluation (n = 2947) and three "external" centers for calibration (n = 400) and evaluation (n = 16 764). Internal centers contributed developmental data, whereas external centers did not. Deep neural networks predicted the presence of ICH and subtypes (intraparenchymal, intraventricular, subarachnoid, subdural, and/or epidural hemorrhage) and segmentations per case. Two ICH confidence scores are discussed: a calibrated classifier entropy score and a Dempster-Shafer score. Evaluation was completed by using receiver operating characteristic curve analysis and report turnaround time (RTAT) modeling on the evaluation set and on confidence score-defined subsets using bootstrapping. Results: The areas under the receiver operating characteristic curve for ICH were 0.97 (0.97, 0.98) and 0.95 (0.94, 0.95) on internal and external center data, respectively. On 80% of the data stratified by calibrated classifier and Dempster-Shafer scores, the system improved the Youden indexes, increasing them from 0.84 to 0.93 (calibrated classifier) and from 0.84 to 0.92 (Dempster-Shafer) for internal centers and increasing them from 0.78 to 0.88 (calibrated classifier) and from 0.78 to 0.89 (Dempster-Shafer) for external centers (P < .001). 
Models estimated shorter RTAT for AI-prioritized worklists with confidence measures than for AI-prioritized worklists without confidence measures, shortening RTAT by 27% (calibrated classifier) and 27% (Dempster-Shafer) for internal centers and shortening RTAT by 25% (calibrated classifier) and 27% (Dempster-Shafer) for external centers (P < .001). Conclusion: AI that provided statistical confidence measures for ICH detection on NCCT scans reliably detected and subtyped hemorrhages, identified high-confidence predictions, and improved worklist prioritization in simulation. Keywords: CT, Head/Neck, Hemorrhage, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2022.
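Of the two confidence scores discussed, the calibrated classifier entropy score can be sketched simply: confidence is high when the predicted class distribution is peaked. The Youden index used for evaluation is sensitivity + specificity − 1. Both functions below are illustrative, not the paper's implementation.

```python
import math

def entropy_confidence(probs):
    """Confidence = 1 - normalized Shannon entropy of the predicted
    class probabilities; near 1 for peaked (confident) predictions,
    0 for a uniform (maximally uncertain) prediction."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return 1.0 - h / math.log(len(probs))

def youden_index(sensitivity, specificity):
    """Youden's J statistic, the operating-point summary reported above."""
    return sensitivity + specificity - 1.0
```

Ranking cases by this confidence and keeping the top 80% is the kind of stratification used to define the high-confidence subsets.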

9.
IEEE Trans Med Imaging ; 40(1): 335-345, 2021 01.
Article in English | MEDLINE | ID: mdl-32966215

ABSTRACT

Detecting malignant pulmonary nodules at an early stage can allow medical interventions that may increase the survival rate of lung cancer patients. Using computer vision techniques to detect nodules can improve the sensitivity and the speed of interpreting chest CT for lung cancer screening. Many studies have used CNNs to detect nodule candidates. Though such approaches have been shown to outperform conventional image-processing-based methods in detection accuracy, CNNs are also known to generalize poorly to under-represented samples in the training set and to be prone to imperceptible noise perturbations. Such limitations cannot be easily addressed by scaling up the dataset or the models. In this work, we propose to add adversarial synthetic nodules and adversarial attack samples to the training data to improve the generalization and the robustness of lung nodule detection systems. To generate hard examples of nodules from a differentiable nodule synthesizer, we use projected gradient descent (PGD) to search the latent code within a bounded neighbourhood that would generate nodules to decrease the detector response. To make the network more robust to unanticipated noise perturbations, we use PGD to search for noise patterns that can trigger the network to give over-confident mistakes. By evaluating on two different benchmark datasets containing consensus annotations from three radiologists, we show that the proposed techniques can improve the detection performance on real CT data. To understand the limitations of both the conventional networks and the proposed augmented networks, we also perform stress-tests on the false positive reduction networks by feeding them different types of artificially produced patches. We show that the augmented networks are more robust to under-represented nodules as well as more resistant to noise perturbations.
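PGD itself is generic: repeat gradient steps on the objective, then project back into a bounded neighbourhood of the starting point. A one-dimensional toy version is below; the actual method searches a nodule synthesizer's latent space with the detector's response as the objective.

```python
def pgd_maximize(grad, x0, step=0.1, eps=0.5, iters=50):
    """Projected gradient ascent: take gradient steps on the objective
    and clip back into the L-infinity ball of radius eps around x0
    (a 1-D toy stand-in for searching a latent/noise neighbourhood)."""
    x = x0
    for _ in range(iters):
        x = x + step * grad(x)
        x = max(x0 - eps, min(x0 + eps, x))  # projection onto the ball
    return x

# Toy objective f(x) = x**2 near x0 = 1: its gradient 2x pushes x
# upward until the projection pins it at the boundary x0 + eps.
x_adv = pgd_maximize(lambda x: 2 * x, x0=1.0, eps=0.5)
```

The projection is what makes the search "bounded": the adversarial point is forced to stay a plausible neighbour of the starting latent code.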


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Early Detection of Cancer; Humans; Image Processing, Computer-Assisted; Lung; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed
10.
Med Image Anal ; 72: 102087, 2021 08.
Article in English | MEDLINE | ID: mdl-34015595

ABSTRACT

Chest radiography is the most common radiographic examination performed in daily clinical practice for the detection of various heart and lung abnormalities. The large amount of data to be read and reported, with more than 100 studies per day for a single radiologist, poses a challenge in consistently maintaining high interpretation accuracy. The introduction of large-scale public datasets has led to a series of novel systems for automated abnormality classification. However, the labels of these datasets were obtained by natural language processing of medical reports, yielding a large degree of label noise that can impact performance. In this study, we propose novel training strategies that handle label noise from such suboptimal data. Prior label probabilities were measured on a subset of training data re-read by 4 board-certified radiologists and were used during training to increase the robustness of the trained model to label noise. Furthermore, we exploit the high comorbidity of abnormalities observed in chest radiography and incorporate this information to further reduce the impact of label noise. Additionally, anatomical knowledge is incorporated by training the system to predict lung and heart segmentation, as well as spatial knowledge labels. To deal with multiple datasets and images derived from various scanners that apply different post-processing techniques, we introduce a novel image normalization strategy. Experiments were performed on an extensive collection of 297,541 chest radiographs from 86,876 patients, leading to a state-of-the-art performance level for 17 abnormalities from 2 datasets. With an average AUC score of 0.880 across all abnormalities, our proposed training strategies can be used to significantly improve performance scores.
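The abstract does not spell out how the prior label probabilities enter training, so as a stand-in the sketch below shows one standard label-noise technique, forward loss correction: the model's clean-label probabilities are pushed through an estimated label-flip matrix (e.g., from the radiologist re-read) before being scored against the noisy labels.

```python
def forward_correct(clean_probs, flip_matrix):
    """Forward correction for noisy labels: convert the model's
    clean-label probabilities into noisy-label probabilities via a
    label-flip matrix, where flip_matrix[j][i] is the estimated
    probability that true class j was recorded as class i."""
    n = len(clean_probs)
    return [sum(clean_probs[j] * flip_matrix[j][i] for j in range(n))
            for i in range(n)]
```

The loss is then computed on the corrected probabilities, so the model is not penalized for disagreeing with labels that are likely flipped.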


Subjects
Lung Diseases; Lung; Humans; Lung/diagnostic imaging; Radiography
11.
iScience ; 24(12): 103523, 2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34870131

ABSTRACT

The SARS-CoV-2 virus has caused a tremendous healthcare burden worldwide. Our focus was to develop a practical and easy-to-deploy system to predict severe manifestation of disease in patients with COVID-19, with the aim of assisting clinicians in triage and treatment decisions. Our proposed predictive algorithm is a trained artificial intelligence-based network using 8,427 COVID-19 patient records from four healthcare systems. The model provides a severity risk score along with likelihoods of various clinical outcomes, namely ventilator use and mortality. Using patient age and nine laboratory markers, the trained model predicts the need for a ventilator with an area under the curve (AUC) of 0.78 (95% CI: 0.77-0.82) and a negative predictive value (NPV) of 0.86 (95% CI: 0.84-0.88), and predicts in-hospital 30-day mortality with an AUC of 0.85 (95% CI: 0.84-0.86) and an NPV of 0.94 (95% CI: 0.92-0.96).
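The NPV figures quoted above are the fraction of model-negative patients who truly avoid the outcome; from a confusion matrix it is one line (counts below are illustrative, not from the study):

```python
def npv(tn, fn):
    """Negative predictive value: of the patients the model calls low
    risk (true negatives + false negatives), the fraction that truly
    avoid the outcome."""
    return tn / (tn + fn)
```

For example, 86 true negatives and 14 false negatives give an NPV of 0.86; a high NPV is what makes a low-risk score useful for ruling patients out of intensive monitoring.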

12.
Med Image Anal ; 68: 101855, 2021 02.
Article in English | MEDLINE | ID: mdl-33260116

ABSTRACT

The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast and more. Most notable is the case of chest radiography, where there is a high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D Ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of underlying models to adapt to limited information and the high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure which captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams including computed radiography, ultrasonography and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.
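Sample rejection based on predicted uncertainty, as evaluated above, amounts to scoring only the most confident fraction of cases. A minimal sketch (the uncertainty values here would come from the model's explicit uncertainty output):

```python
def accuracy_with_rejection(preds, labels, uncertainties, reject_rate):
    """Drop the most uncertain fraction of cases and score the rest,
    mimicking uncertainty-driven sample rejection."""
    order = sorted(range(len(preds)), key=lambda i: uncertainties[i])
    keep = order[: int(round(len(preds) * (1 - reject_rate)))]
    correct = sum(preds[i] == labels[i] for i in keep)
    return correct / len(keep)
```

If the uncertainty estimate is informative, accuracy on the retained cases rises as the rejection rate grows, which is the effect the abstract reports.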


Subjects
Artifacts; Magnetic Resonance Imaging; Humans; Machine Learning; Uncertainty
13.
JACC Cardiovasc Imaging ; 14(1): 41-60, 2021 01.
Article in English | MEDLINE | ID: mdl-32861647

ABSTRACT

Structural heart disease (SHD) is a new field within cardiovascular medicine. Traditional imaging modalities fall short in supporting the needs of SHD interventions, as they have been constructed around the concept of disease diagnosis. SHD interventions disrupt traditional concepts of imaging by requiring imaging to plan, simulate, and predict intraprocedural outcomes. In transcatheter SHD interventions, the absence of a gold-standard open-cavity surgical field deprives physicians of the opportunity for tactile feedback and visual confirmation of cardiac anatomy. Hence, dependency on imaging for periprocedural guidance has led to the evolution of a new generation of procedural skillsets, the concept of a visual field, and technologies in the periprocedural planning period to accelerate preclinical device development and physician and patient education. Adoption of 3-dimensional (3D) printing in clinical care and procedural planning has demonstrated a reduction in the early-operator learning curve for transcatheter interventions. Integration of computational modeling with 3D printing has accelerated research and development understanding of fluid mechanics within device testing. Application of 3D printing and computational modeling, and ultimately the incorporation of artificial intelligence, is changing the landscape of physician training and the delivery of patient-centric care. Transcatheter structural heart interventions require an in-depth periprocedural understanding of cardiac pathophysiology and device interactions not afforded by traditional imaging metrics.


Subjects
Cardiac Surgical Procedures; Heart Diseases; Artificial Intelligence; Cardiac Catheterization; Humans; Predictive Value of Tests; Printing, Three-Dimensional
14.
Korean J Radiol ; 22(6): 994-1004, 2021 06.
Article in English | MEDLINE | ID: mdl-33686818

ABSTRACT

OBJECTIVE: To extract pulmonary and cardiovascular metrics from chest CTs of patients with coronavirus disease 2019 (COVID-19) using a fully automated deep learning-based approach and assess their potential to predict patient management. MATERIALS AND METHODS: All initial chest CTs of patients who tested positive for severe acute respiratory syndrome coronavirus 2 at our emergency department between March 25 and April 25, 2020, were identified (n = 120). Three patient management groups were defined: group 1 (outpatient), group 2 (general ward), and group 3 (intensive care unit [ICU]). Multiple pulmonary and cardiovascular metrics were extracted from the chest CT images using deep learning. Additionally, six laboratory findings indicating inflammation and cellular damage were considered. Differences in CT metrics, laboratory findings, and demographics between the patient management groups were assessed. The potential of these parameters to predict patients' needs for intensive care (yes/no) was analyzed using logistic regression and receiver operating characteristic curves. Internal and external validity were assessed using 109 independent chest CT scans. RESULTS: While demographic parameters alone (sex and age) were not sufficient to predict ICU management status, both CT metrics alone (including both pulmonary and cardiovascular metrics; area under the curve [AUC] = 0.88; 95% confidence interval [CI] = 0.79-0.97) and laboratory findings alone (C-reactive protein, lactate dehydrogenase, white blood cell count, and albumin; AUC = 0.86; 95% CI = 0.77-0.94) were good classifiers. Excellent performance was achieved by a combination of demographic parameters, CT metrics, and laboratory findings (AUC = 0.91; 95% CI = 0.85-0.98). Application of a model that combined both pulmonary CT metrics and demographic parameters on a dataset from another hospital indicated its external validity (AUC = 0.77; 95% CI = 0.66-0.88). 
CONCLUSION: Chest CT of patients with COVID-19 contains valuable information that can be accessed using automated image analysis. These metrics are useful for the prediction of patient management.


Subjects
COVID-19/diagnosis; Deep Learning; Thorax/diagnostic imaging; Tomography, X-Ray Computed; Adolescent; Adult; Aged; Aged, 80 and over; Area Under Curve; Automation; COVID-19/diagnostic imaging; COVID-19/virology; Female; Humans; Logistic Models; Lung/physiopathology; Male; Middle Aged; ROC Curve; Retrospective Studies; SARS-CoV-2/isolation & purification; Young Adult
15.
Invest Radiol ; 56(10): 605-613, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33787537

ABSTRACT

OBJECTIVE: The aim of this study was to evaluate the effect of a deep learning based computer-aided diagnosis (DL-CAD) system on radiologists' interpretation accuracy and efficiency in reading biparametric prostate magnetic resonance imaging scans. MATERIALS AND METHODS: We selected 100 consecutive prostate magnetic resonance imaging cases from a publicly available data set (PROSTATEx Challenge) with and without histopathologically confirmed prostate cancer. Seven board-certified radiologists were tasked to read each case twice in 2 reading blocks (with and without the assistance of a DL-CAD), with a separation between the 2 reading sessions of at least 2 weeks. Reading tasks were to localize and classify lesions according to Prostate Imaging Reporting and Data System (PI-RADS) v2.0 and to assign a radiologist's level of suspicion score (scale from 1-5 in 0.5 increments; 1, benign; 5, malignant). Ground truth was established by consensus readings of 3 experienced radiologists. The detection performance (receiver operating characteristic curves), variability (Fleiss κ), and average reading time without DL-CAD assistance were evaluated. RESULTS: The average accuracy of radiologists in terms of area under the curve in detecting clinically significant cases (PI-RADS ≥4) was 0.84 (95% confidence interval [CI], 0.79-0.89), whereas the same using DL-CAD was 0.88 (95% CI, 0.83-0.94) with an improvement of 4.4% (95% CI, 1.1%-7.7%; P = 0.010). Interreader concordance (in terms of Fleiss κ) increased from 0.22 to 0.36 (P = 0.003). Accuracy of radiologists in detecting cases with PI-RADS ≥3 was improved by 2.9% (P = 0.10). The median reading time in the unaided/aided scenario was reduced by 21% from 103 to 81 seconds (P < 0.001). CONCLUSIONS: Using a DL-CAD system increased the diagnostic accuracy in detecting highly suspicious prostate lesions and reduced both the interreader variability and the reading time.
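The interreader concordance above is reported as Fleiss' κ, which compares observed agreement across multiple raters to the agreement expected by chance. A compact implementation for a subjects × categories table of rating counts (illustrative; statistical packages provide the same computation):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a subjects x categories table, where each cell
    counts how many raters assigned that category to that subject.
    Assumes every subject is rated by the same number of raters."""
    n_sub = len(ratings)
    n_rat = sum(ratings[0])
    # Chance agreement from the overall category proportions.
    p_cat = [sum(row[j] for row in ratings) / (n_sub * n_rat)
             for j in range(len(ratings[0]))]
    p_e = sum(p * p for p in p_cat)
    # Observed agreement averaged over subjects.
    p_bar = sum(sum(c * c for c in row) - n_rat
                for row in ratings) / (n_sub * n_rat * (n_rat - 1))
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement gives κ = 1, and κ near 0 means the raters agree no more than chance, which is why the rise from 0.22 to 0.36 reported above is a meaningful concordance gain.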


Subjects
Deep Learning, Prostatic Neoplasms, Computers, Humans, Magnetic Resonance Imaging, Male, Prostatic Neoplasms/diagnostic imaging, Radiologists, Retrospective Studies
16.
Sci Rep ; 11(1): 6876, 2021 03 25.
Article in English | MEDLINE | ID: mdl-33767226

ABSTRACT

With the rapid growth and increasing use of brain MRI, there is interest in automated image classification to aid human interpretation and improve workflow. We aimed to train a deep convolutional neural network and assess its performance in identifying abnormal brain MRIs and critical intracranial findings, including acute infarction, acute hemorrhage, and mass effect. A total of 13,215 clinical brain MRI studies were divided into training (74%), validation (9%), internal testing (8%), and external testing (8%) datasets. Up to eight contrasts were included from each brain MRI, and each image volume was reformatted to a common resolution to accommodate differences between scanners. After reviewing the radiology reports, three neuroradiologists classified each study as normal or abnormal and identified three critical findings: acute infarction, acute hemorrhage, and mass effect. A deep convolutional neural network was constructed from a combination of localization feature extraction (LFE) modules and global classifiers to identify the presence of 4 variables in brain MRIs: abnormal, acute infarction, acute hemorrhage, and mass effect. Training, validation, and testing sets were randomly defined on a patient basis. Training was performed on 9845 studies using balanced sampling to address class imbalance. Receiver operating characteristic (ROC) analysis was performed. The ROC analysis of our models for 1050 studies within our internal test data showed AUC/sensitivity/specificity of 0.91/83%/86% for normal versus abnormal brain MRI, 0.95/92%/88% for acute infarction, 0.90/89%/81% for acute hemorrhage, and 0.93/93%/85% for mass effect. For 1072 studies within our external test data, it showed AUC/sensitivity/specificity of 0.88/80%/80% for normal versus abnormal brain MRI, 0.97/90%/97% for acute infarction, 0.83/72%/88% for acute hemorrhage, and 0.87/79%/81% for mass effect. Our proposed deep convolutional network can accurately identify abnormal and critical intracranial findings on individual brain MRIs, while accommodating the fact that some MR contrasts may not be available in individual studies.
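The AUC values reported above come from ROC analysis. The AUC equals the Mann-Whitney probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one, which gives a compact threshold-free implementation. A small illustrative sketch (not the authors' code):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Sensitivity and specificity, by contrast, are read off at a single operating point on the same curve, which is why the abstract reports them alongside each AUC.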


Subjects
Brain/anatomy & histology, Deep Learning, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Multiparametric Magnetic Resonance Imaging/methods, Neural Networks, Computer, Neuroimaging/methods, Humans, ROC Curve
17.
J Thorac Imaging ; 35 Suppl 1: S11-S16, 2020 May.
Article in English | MEDLINE | ID: mdl-32205816

ABSTRACT

In this review article, the current and future impact of artificial intelligence (AI) technologies on diagnostic imaging is discussed, with a focus on cardio-thoracic applications. The processing of imaging data is described at 4 levels of increasing complexity and wider implications. At the examination level, AI aims at improving, simplifying, and standardizing image acquisition and processing. Systems for AI-driven automatic patient iso-centering before a computed tomography (CT) scan, patient-specific adaptation of image acquisition parameters, and creation of optimized and standardized visualizations, for example, automatic rib-unfolding, are discussed. At the reading and reporting levels, AI focuses on automatic detection and characterization of features and on automatic measurements in the images. A recently introduced AI system for chest CT imaging is presented that reports specific findings such as nodules, low-attenuation parenchyma, and coronary calcifications, including automatic measurements of, for example, aortic diameters. At the prediction and prescription levels, AI focuses on risk prediction and stratification, as opposed to merely detecting, measuring, and quantifying images. An AI-based approach for individualizing radiation dose in lung stereotactic body radiotherapy is discussed. The digital twin is presented as a concept of individualized computational modeling of human physiology, with AI-based CT-fractional flow reserve modeling as a first example. Finally, at the cohort and population analysis levels, the focus of AI shifts from clinical decision-making to operational decisions.


Subjects
Artificial Intelligence, Radiographic Image Interpretation, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Humans, Tomography, X-Ray Computed/trends, Workload
18.
Eur J Radiol ; 126: 108918, 2020 May.
Article in English | MEDLINE | ID: mdl-32171914

ABSTRACT

PURPOSE: To evaluate the performance of an artificial intelligence (AI)-based software solution for liver volumetric analysis and to compare its results with manual contour segmentation. MATERIALS AND METHODS: We retrospectively obtained 462 multiphasic CT datasets with six series for each patient: three different contrast phases and two slice-thickness reconstructions (1.5/5 mm), totaling 2772 series. AI-based liver volumes were determined using multi-scale deep reinforcement learning for 3D body marker detection and 3D structure segmentation. The algorithm was trained for liver volumetry on approximately 5000 datasets. We computed the absolute error of each automatically and manually derived volume relative to the mean manual volume. The mean processing time per dataset and method was recorded. Variations of liver volumes were compared using univariate generalized linear model analyses. A subgroup of 60 datasets was manually segmented by three radiologists, with a further subgroup of 20 segmented three times by each, to compare the automatically derived results with the ground truth. RESULTS: The mean absolute error of the automatically derived measurement was 44.3 mL (2.37% of the averaged liver volumes). The liver volume depended neither on the contrast phase (p = 0.697) nor on the slice thickness (p = 0.446). The mean processing time per dataset was 9.94 seconds with the algorithm, compared with 219.34 seconds for manual segmentation. We found excellent agreement between the two approaches, with an ICC of 0.996. CONCLUSION: Our results demonstrate that AI-powered, fully automated liver volumetric analysis can be performed with excellent accuracy, reproducibility, robustness, and speed, and in excellent agreement with manual segmentation.
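The 44.3 mL and 2.37% figures above are absolute errors of the automatic volumes against the per-case mean manual volume, with the percentage taken relative to the averaged liver volume. A hypothetical sketch of that computation (variable names and data layout are assumptions, not the study's code):

```python
def volume_errors(auto_vols, manual_vols):
    """Mean absolute error of automatically derived liver volumes.

    auto_vols[i]   = automatic volume (mL) for case i
    manual_vols[i] = list of manual segmentation volumes (mL) for case i
    Returns (mean absolute error in mL, error as % of the averaged
    manual liver volume).
    """
    # per-case reference: mean of the manual segmentations
    ref = [sum(m) / len(m) for m in manual_vols]
    abs_err = [abs(a, ) if False else abs(a - r) for a, r in zip(auto_vols, ref)]
    mae = sum(abs_err) / len(abs_err)
    pct = 100.0 * mae / (sum(ref) / len(ref))
    return mae, pct
```

Averaging the manual readers first makes the reference robust to single-reader variability, which matters when interreader ICC is the agreement metric.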


Subjects
Algorithms, Image Interpretation, Computer-Assisted/methods, Liver Diseases/diagnostic imaging, Tomography, X-Ray Computed/methods, Artificial Intelligence, Deep Learning, Humans, Liver/diagnostic imaging, Reproducibility of Results, Retrospective Studies
19.
Diagnostics (Basel) ; 10(11)2020 Nov 14.
Article in English | MEDLINE | ID: mdl-33202680

ABSTRACT

BACKGROUND: Opportunistic prostate cancer (PCa) screening is a controversial topic. Magnetic resonance imaging (MRI) has proven to detect prostate cancer with high sensitivity and specificity, leading to the idea of performing image-guided PCa screening; Methods: We evaluated a prospectively enrolled cohort of 49 healthy men participating in a dedicated image-guided PCa screening trial employing a biparametric MRI (bpMRI) protocol consisting of T2-weighted (T2w) and diffusion-weighted imaging (DWI) sequences. Datasets were analyzed both by human readers and by a fully automated artificial intelligence (AI) software using deep learning (DL). Agreement between the algorithm and the reports (serving as the ground truth) was compared on a per-case and per-lesion level using metrics of diagnostic accuracy and κ statistics; Results: The DL method yielded an 87% sensitivity (33/38) and 50% specificity (5/10) with a κ of 0.42. 12/28 (43%) Prostate Imaging Reporting and Data System (PI-RADS) 3, 16/22 (73%) PI-RADS 4, and 5/5 (100%) PI-RADS 5 lesions were detected compared to the ground truth. Targeted biopsy revealed PCa in six participants, all correctly diagnosed by both the human readers and the AI. CONCLUSIONS: In our AI-assisted, image-guided prostate cancer screening, the software solution was able to identify highly suspicious lesions and has the potential to effectively guide the targeted-biopsy workflow.
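The sensitivity, specificity, and κ reported above derive from a 2x2 confusion table; with only two raters (algorithm versus report) the κ statistic reduces to Cohen's κ. An illustrative sketch of those metrics, not the study's implementation:

```python
def binary_agreement(pred, truth):
    """Sensitivity, specificity, and Cohen's kappa for binary
    predictions (pred) against a binary ground truth (truth)."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    n = tp + tn + fp + fn
    sens = tp / (tp + fn)            # true-positive rate
    spec = tn / (tn + fp)            # true-negative rate
    p_o = (tp + tn) / n              # observed agreement
    # chance agreement from the marginal totals of both raters
    p_e = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / (n * n)
    kappa = (p_o - p_e) / (1 - p_e)
    return sens, spec, kappa
```

Because κ discounts chance agreement, a model can have high raw accuracy yet a modest κ (such as 0.42) when one class dominates.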

20.
ArXiv ; 2020 Nov 18.
Article in English | MEDLINE | ID: mdl-32550252

ABSTRACT

PURPOSE: To present a method that automatically segments and quantifies abnormal CT patterns commonly present in coronavirus disease 2019 (COVID-19), namely ground-glass opacities and consolidations. MATERIALS AND METHODS: In this retrospective study, the proposed method takes as input a non-contrast chest CT and segments the lesions, lungs, and lobes in three dimensions, based on a dataset of 9749 chest CT volumes. The method outputs two combined measures of the severity of lung and lobe involvement, quantifying both the extent of COVID-19 abnormalities and the presence of high opacities, based on deep learning and deep reinforcement learning. The first pair of measures (PO, PHO) is global, while the second (LSS, LHOS) is lobe-wise. The algorithm was evaluated on CTs of 200 participants (100 patients with confirmed COVID-19 and 100 healthy controls) from institutions in Canada, Europe, and the United States, collected between 2002 and April 2020. Ground truth was established by manual annotations of lesions, lungs, and lobes. Correlation and regression analyses were performed to compare the predictions with the ground truth. RESULTS: The Pearson correlation coefficient between method prediction and ground truth for COVID-19 cases was 0.92 for PO (P < .001), 0.97 for PHO (P < .001), 0.91 for LSS (P < .001), and 0.90 for LHOS (P < .001). 98 of 100 healthy controls had a predicted PO of less than 1%; the remaining 2 were between 1% and 2%. Automated processing time to compute the severity scores was 10 seconds per case, compared with 30 minutes required for manual annotations. CONCLUSION: A new method segments regions of CT abnormalities associated with COVID-19 and computes (PO, PHO) and (LSS, LHOS) severity scores.
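A global severity measure such as PO can be sketched as the lesion fraction of total lung volume, with PHO restricted to high-attenuation lesion voxels. The sketch below flattens the segmentation masks to voxel lists for brevity, and the -200 HU cut-off for "high opacity" is an illustrative assumption, not a parameter taken from the paper:

```python
def opacity_scores(lung_mask, lesion_mask, hu):
    """Percentage of opacity (PO) and percentage of high opacity (PHO).

    lung_mask[i]   = 1 if voxel i is lung, else 0
    lesion_mask[i] = 1 if voxel i is segmented lesion, else 0
    hu[i]          = attenuation of voxel i in Hounsfield units
    """
    HIGH_HU = -200  # assumed threshold separating high-opacity voxels
    lung = sum(lung_mask)
    # lesion voxels inside the lung
    lesion = sum(l and m for l, m in zip(lesion_mask, lung_mask))
    # lesion voxels inside the lung at or above the attenuation cut-off
    high = sum(l and m and h >= HIGH_HU
               for l, m, h in zip(lesion_mask, lung_mask, hu))
    return 100.0 * lesion / lung, 100.0 * high / lung
```

The lobe-wise scores (LSS, LHOS) would apply the same ratios per lobe mask and then combine them, which is why lesion, lung, and lobe segmentations are all required.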
