Results 1 - 20 of 36
1.
BMC Infect Dis ; 22(1): 637, 2022 Jul 21.
Article in English | MEDLINE | ID: mdl-35864468

ABSTRACT

BACKGROUND: Airspace disease as seen on chest X-rays is an important point in triage for patients initially presenting to the emergency department with suspected COVID-19 infection. The purpose of this study is to evaluate a previously trained interpretable deep learning algorithm for the diagnosis and prognosis of COVID-19 pneumonia from chest X-rays obtained in the ED. METHODS: This retrospective study included 2456 (50% RT-PCR positive for COVID-19) adult patients who received both a chest X-ray and SARS-CoV-2 RT-PCR test from January 2020 to March 2021 in the emergency department at a single U.S. institution. A total of 2000 patients were included as an additional training cohort and 456 patients in the randomized internal holdout testing cohort for a previously trained Siemens AI-Radiology Companion deep learning convolutional neural network algorithm. Three cardiothoracic fellowship-trained radiologists systematically evaluated each chest X-ray and generated an airspace disease area-based severity score, which was compared against the same score produced by artificial intelligence. The interobserver agreement, diagnostic accuracy, and predictive capability for inpatient outcomes were assessed. The principal statistical tests used in this study were univariate and multivariate logistic regression. RESULTS: Overall ICC was 0.820 (95% CI 0.790-0.840). The diagnostic AUC for SARS-CoV-2 RT-PCR positivity was 0.890 (95% CI 0.861-0.920) for the neural network and 0.936 (95% CI 0.918-0.960) for radiologists. The airspace opacities score by AI alone predicted ICU admission (AUC = 0.870) and mortality (AUC = 0.829) in all patients. Adding age and BMI to a multivariate logistic model improved mortality prediction (AUC = 0.906). CONCLUSION: The deep learning algorithm provides an accurate and interpretable assessment of the disease burden in COVID-19 pneumonia on chest radiographs. The reported severity scores correlate with expert assessment and accurately predict important clinical outcomes. The algorithm contributes additional prognostic information not currently incorporated into patient management.
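The multivariate model described above can be sketched as a logistic combination of the AI severity score, age, and BMI. The coefficients below are placeholders for illustration only; the fitted values are not reported in the abstract.

```python
import math

def mortality_risk(severity, age, bmi, coef=(-7.0, 0.05, 0.06, 0.03)):
    """Sketch of a multivariate logistic model:
    P(death) = sigmoid(b0 + b1*severity + b2*age + b3*bmi).
    Coefficients are illustrative placeholders, not the study's fitted values."""
    b0, b1, b2, b3 = coef
    z = b0 + b1 * severity + b2 * age + b3 * bmi
    return 1.0 / (1.0 + math.exp(-z))

# With positive coefficients, a higher severity score raises predicted risk.
low = mortality_risk(severity=10, age=60, bmi=25)
high = mortality_risk(severity=80, age=60, bmi=25)
print(low < high)  # True
```

In practice such coefficients would be fitted with logistic regression on the training cohort; the point here is only the functional form.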


Subject(s)
COVID-19; Deep Learning; Adult; Artificial Intelligence; COVID-19/diagnostic imaging; Humans; Prognosis; Radiography, Thoracic; Retrospective Studies; SARS-CoV-2; Tomography, X-Ray Computed; X-Rays
2.
Eur Radiol ; 31(10): 7888-7900, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33774722

ABSTRACT

OBJECTIVES: Diagnostic accuracy of artificial intelligence (AI) pneumothorax (PTX) detection in chest radiographs (CXR) is limited by the noisy annotation quality of public training data and confounding thoracic tubes (TT). We hypothesize that in-image annotations of the dehiscent visceral pleura for algorithm training boost the algorithm's performance and suppress confounders. METHODS: Our single-center evaluation cohort of 3062 supine CXRs includes 760 PTX-positive cases with radiological annotations of PTX size and inserted TTs. Three step-by-step improved algorithms (differing in algorithm architecture, training data from public datasets/clinical sites, and in-image annotations included in algorithm training) were characterized by area under the receiver operating characteristic curve (AUROC) in detailed subgroup analyses and referenced to the well-established "CheXNet" algorithm. RESULTS: Performance of established algorithms exclusively trained on publicly available data without in-image annotations is limited to AUROCs of 0.778 and strongly biased towards TTs, which can completely eliminate the algorithm's discriminative power in individual subgroups. In contrast, our final "algorithm 2", which was trained on a lower number of images but additionally with in-image annotations of the dehiscent pleura, achieved an overall AUROC of 0.877 for unilateral PTX detection with a significantly reduced TT-related confounding bias. CONCLUSIONS: We demonstrated strong limitations of an established PTX-detecting AI algorithm that can be significantly reduced by designing an AI system capable of learning to both classify and localize PTX. Our results are aimed at drawing attention to the necessity of high-quality in-image localization in training data to reduce the risks of unintentionally biasing the training process of pathology-detecting AI algorithms.
KEY POINTS: • Established pneumothorax-detecting artificial intelligence algorithms trained on public training data are strongly limited and biased by confounding thoracic tubes. • We used high-quality in-image annotated training data to effectively boost algorithm performance and suppress the impact of confounding thoracic tubes. • Based on our results, we hypothesize that even hidden confounders might be effectively addressed by in-image annotations of pathology-related image features.


Subject(s)
Artificial Intelligence; Pneumothorax; Algorithms; Data Curation; Humans; Pneumothorax/diagnostic imaging; Radiography; Radiography, Thoracic
3.
Pediatr Res ; 85(3): 293-298, 2019 02.
Article in English | MEDLINE | ID: mdl-30631137

ABSTRACT

BACKGROUND: To compare the ability of ventricular morphology on cranial ultrasound (CUS) versus standard clinical variables to predict the need for temporizing cerebrospinal fluid drainage in newborns with intraventricular hemorrhage (IVH). METHODS: This is a retrospective study of newborns (gestational age <29 weeks) diagnosed with IVH. Clinical variables known to increase the risk for post-hemorrhagic hydrocephalus were collected. The first CUS with IVH was identified and a slice in the coronal plane was selected. The frontal horns of the lateral ventricles were manually segmented. Automated quantitative morphological features were extracted from both lateral ventricles. Predictive models of the need for a temporizing intervention were compared. RESULTS: Sixty-two newborns met inclusion criteria. Fifteen of the 62 had a temporizing intervention. The morphological features predicted temporizing interventions more accurately than the clinical variables: 0.94 versus 0.85, respectively; p < 0.01 for both. By considering both morphological and clinical variables, our method predicts the need for a temporizing intervention with positive and negative predictive values of 0.83 and 1, respectively, and an accuracy of 0.97. CONCLUSION: Early cranial ultrasound-based quantitative ventricular evaluation in premature newborns can predict the eventual use of a temporizing intervention to treat post-hemorrhagic hydrocephalus. This may be helpful for early monitoring and treatment.
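The automated quantitative morphological features mentioned above can be illustrated with a toy example. The features below (area, bounding-box size, extent) are generic stand-ins for shape descriptors, not the study's actual feature set.

```python
def ventricle_features(mask):
    """Toy shape features from a binary 2-D mask (list of 0/1 rows),
    standing in for automated ventricular morphology quantification."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    area = sum(sum(row) for row in mask)
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    return {"area": area, "height": height, "width": width,
            "extent": area / (height * width)}  # fill ratio of bounding box

mask = [[0, 1, 1, 0],
        [0, 1, 1, 1],
        [0, 0, 1, 1]]
print(ventricle_features(mask))
```

Feature vectors of this kind, together with clinical variables, could then be fed to a classifier such as the support vector machine indexed for this study.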


Subject(s)
Cerebral Hemorrhage/complications; Cerebral Hemorrhage/diagnostic imaging; Cerebral Ventricles/diagnostic imaging; Hydrocephalus/diagnostic imaging; Hydrocephalus/etiology; Echoencephalography; Female; Gestational Age; Humans; Image Processing, Computer-Assisted; Infant, Newborn; Intensive Care, Neonatal; Male; Reproducibility of Results; Retrospective Studies; Risk; Support Vector Machine
4.
Radiographics ; 35(4): 1056-76, 2015.
Article in English | MEDLINE | ID: mdl-26172351

ABSTRACT

The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems are highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy-guided, and (e) machine learning-based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. Finally, practical applications and evolving technologies that combine the presented approaches are detailed for the practicing radiologist.
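A minimal sketch of class (a), thresholding-based segmentation: voxels whose attenuation falls inside a typical aerated-lung Hounsfield window are kept. The window bounds below are illustrative, and a real pipeline would also remove air outside the body, which is precisely where such simple methods break down in diseased lungs.

```python
def threshold_lungs(ct_slice, lo=-1000, hi=-320):
    """Thresholding-based segmentation sketch: keep voxels whose HU value
    lies in an aerated-lung window. Bounds are illustrative, not validated."""
    return [[1 if lo <= hu <= hi else 0 for hu in row] for row in ct_slice]

# Toy 3x3 CT slice in Hounsfield units (HU): air ~ -1000, soft tissue ~ +40.
ct = [[-1000,  -600,    40],
      [  -50,  -700,  -800],
      [   30,    20,  -400]]
print(threshold_lungs(ct))  # [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
```

Note that the exterior-air voxel (-1000 HU at the border) is also selected; connected-component filtering is the usual follow-up, and consolidations near soft-tissue density would be missed entirely, illustrating the review's point.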


Subject(s)
Forecasting; Lung Diseases/diagnostic imaging; Lung/diagnostic imaging; Pattern Recognition, Automated/trends; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/trends; Humans; Radiography, Thoracic/trends; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique/trends
5.
Acad Radiol ; 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997881

ABSTRACT

RATIONALE AND OBJECTIVES: Given the high volume of chest radiographs, radiologists frequently encounter heavy workloads. In outpatient imaging, a substantial portion of chest radiographs show no actionable findings. Automatically identifying these cases could improve efficiency by facilitating shorter reading workflows. PURPOSE: A large-scale study to assess the performance of AI on identifying chest radiographs with no actionable disease (NAD) in an outpatient imaging population using comprehensive, objective, and reproducible criteria for NAD. MATERIALS AND METHODS: The independent validation study includes 15,000 patients with chest radiographs in posterior-anterior (PA) and lateral projections from an outpatient imaging center in the United States. Ground truth was established by reviewing CXR reports and classifying cases as NAD or actionable disease (AD). The NAD definition includes completely normal chest radiographs and radiographs with well-defined non-actionable findings. The AI NAD Analyzer (trained with 100 million multimodal images and fine-tuned on 1.3 million radiographs) utilizes a tandem system with image-level rule in and compartment-level rule out to provide case-level output as NAD or potential actionable disease (PAD). RESULTS: A total of 14,057 cases met our eligibility criteria (age 56 ± 16.1 years, 55% women and 45% men). The prevalence of NAD cases in the study population was 70.7%. The AI NAD Analyzer correctly classified NAD cases with a sensitivity of 29.1% and a yield of 20.6%. The specificity was 98.9%, which corresponds to a miss rate of 0.3% of cases. Significant findings were missed in 0.06% of cases, while no cases with critical findings were missed by AI. CONCLUSION: In an outpatient population, AI can identify 20% of chest radiographs as NAD with a very low rate of missed findings. These cases could potentially be read using a streamlined protocol, thus improving efficiency and consequently reducing daily workload for radiologists.
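The reported yield follows directly from the NAD prevalence and the sensitivity for NAD, as a one-line arithmetic check confirms (the helper name is ours, not the study's):

```python
def nad_yield(prevalence, sensitivity):
    """Yield = share of the whole worklist the system rules out as NAD,
    i.e. NAD prevalence times the sensitivity for the NAD class."""
    return prevalence * sensitivity

# 70.7% prevalence x 29.1% sensitivity ~= the reported 20.6% yield.
print(round(nad_yield(0.707, 0.291), 3))  # 0.206
```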

6.
Sci Rep ; 13(1): 21097, 2023 11 30.
Article in English | MEDLINE | ID: mdl-38036602

ABSTRACT

The evaluation of deep-learning (DL) systems typically relies on the area under the receiver operating characteristic curve (AU-ROC) as a performance metric. However, AU-ROC, in its holistic form, does not sufficiently consider performance within specific ranges of sensitivity and specificity, which are critical for the intended operational context of the system. Consequently, two systems with identical AU-ROC values can exhibit significantly divergent real-world performance. This issue is particularly pronounced in the context of anomaly detection tasks, a commonly employed application of DL systems across various research domains, including medical imaging, industrial automation, manufacturing, cyber security, fraud detection, and drug research, among others. The challenge arises from the heavy class imbalance in training datasets, with the abnormality class often incurring a considerably higher misclassification cost compared to the normal class. Traditional DL systems address this by adjusting the weighting of the cost function or optimizing for specific points along the ROC curve. While these approaches yield reasonable results in many cases, they do not actively seek to maximize performance for the desired operating point. In this study, we introduce a novel technique known as AUCReshaping, designed to reshape the ROC curve exclusively within the specified sensitivity and specificity range, by optimizing sensitivity at a predetermined specificity level. This reshaping is achieved through an adaptive and iterative boosting mechanism that allows the network to focus on pertinent samples during the learning process. We primarily investigated the impact of AUCReshaping in the context of abnormality detection tasks, specifically in Chest X-Ray (CXR) analysis, followed by breast mammogram and credit card fraud detection tasks. The results reveal a substantial improvement, ranging from 2% to 40%, in sensitivity at high-specificity levels for binary classification tasks.
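The operating-point view that motivates AUCReshaping can be sketched as follows: fix a target specificity on the negatives, then read off the sensitivity at that threshold. This is a generic illustration of the evaluation metric, not the AUCReshaping loss itself, and the scores below are invented.

```python
def sensitivity_at_specificity(pos, neg, target_spec=0.95):
    """Pick the decision threshold reaching at least the target specificity
    on the negative scores, then report the sensitivity of that operating
    point. This is the per-operating-point view a holistic AUC hides."""
    neg_sorted = sorted(neg)
    k = int(target_spec * len(neg_sorted))       # negatives to rule out
    thr = neg_sorted[min(k, len(neg_sorted) - 1)]
    return sum(s > thr for s in pos) / len(pos)

pos = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]            # scores of abnormal cases
neg = [0.1, 0.2, 0.3, 0.35, 0.5, 0.65, 0.45, 0.25, 0.15, 0.05]
print(sensitivity_at_specificity(pos, neg, 0.9))  # 0.5
```

Two models with the same overall AU-ROC can differ sharply on this number, which is exactly the gap AUCReshaping targets.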


Subject(s)
Algorithms; Mammography; Sensitivity and Specificity; ROC Curve; Radiography
7.
J Vet Dent ; 39(2): 122-132, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35257605

ABSTRACT

Oral health conditions (eg, plaque, calculus, gingivitis) cause morbidity and pain in companion animals. Thus, developing technologies that can ameliorate the accumulation of oral biofilm, a critical factor in the progression of these conditions, is vital. Quantitative light-induced fluorescence (QLF) is a method to quantify oral substrate accumulation, and therefore, it can assess biofilm attenuation of different products. New software has recently been developed that automates aspects of the procedure. However, few QLF studies in companion animals have been performed. QLF was used to collect digital images of oral substrate accumulation on the teeth of dogs and cats to demonstrate the ability of QLF to discriminate between foods known to differentially inhibit oral substrate accumulation. Images were taken as a function of time and diet. Software developed by the Cytometry Laboratory, Purdue University quantified biofilm coverage. Intra- and intergrader reproducibility was also assessed, as was a comparison of the results of the QLF software with those of an experienced grader using undisclosed coverage-only metrics similar to those used for the Logan and Boyce index. Quantification of oral substrate accumulation using QLF-derived images demonstrated the ability to distinguish between dental diets known to differentially inhibit oral biofilm accumulation. Little variance in intra- and intergrader reproducibility was observed, and the comparison between the experienced Logan and Boyce grader and the QLF software yielded a concordance correlation coefficient of 0.89 (95% CI = 0.84, 0.92). These results show that QLF is a useful tool that allows the semi-automated quantification of the accumulation of oral biofilm in companion animals.


Subject(s)
Cat Diseases; Dental Caries; Dog Diseases; Quantitative Light-Induced Fluorescence; Animals; Biofilms; Cat Diseases/diagnosis; Cats; Dental Caries/veterinary; Dog Diseases/diagnosis; Dogs; Fluorescence; Humans; Light; Quantitative Light-Induced Fluorescence/veterinary; Reproducibility of Results
8.
J Med Imaging (Bellingham) ; 9(6): 064503, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36466078

ABSTRACT

Purpose: Building accurate and robust artificial intelligence systems for medical image assessment requires the creation of large sets of annotated training examples. However, constructing such datasets is very costly due to the complex nature of annotation tasks, which often require expert knowledge (e.g., a radiologist). To counter this limitation, we propose a method to learn from medical images at scale in a self-supervised way. Approach: Our approach, based on contrastive learning and online feature clustering, leverages training datasets of over 100,000,000 medical images of various modalities, including radiography, computed tomography (CT), magnetic resonance (MR) imaging, and ultrasonography (US). We propose to use the learned features to guide model training in supervised and hybrid self-supervised/supervised regime on various downstream tasks. Results: We highlight a number of advantages of this strategy on challenging image assessment problems in radiography, CT, and MR: (1) significant increase in accuracy compared to the state-of-the-art (e.g., area under the curve boost of 3% to 7% for detection of abnormalities from chest radiography scans and hemorrhage detection on brain CT); (2) acceleration of model convergence during training by up to 85% compared with using no pretraining (e.g., 83% when training a model for detection of brain metastases in MR scans); and (3) increase in robustness to various image augmentations, such as intensity variations, rotations or scaling reflective of data variation seen in the field. Conclusions: The proposed approach enables large gains in accuracy and robustness on challenging image assessment problems. The improvement is significant compared with other state-of-the-art approaches trained on medical or vision images (e.g., ImageNet).

9.
J Med Imaging (Bellingham) ; 9(3): 034003, 2022 May.
Article in English | MEDLINE | ID: mdl-35721308

ABSTRACT

Purpose: Rapid prognostication of COVID-19 patients is important for efficient resource allocation. We evaluated the relative prognostic value of baseline clinical variables (CVs), quantitative human-read chest CT (qCT), and AI-read chest radiograph (qCXR) airspace disease (AD) in predicting severe COVID-19. Approach: We retrospectively selected 131 COVID-19 patients (SARS-CoV-2 positive, March to October, 2020) at a tertiary hospital in the United States, who underwent chest CT and CXR within 48 hr of initial presentation. CVs included patient demographics and laboratory values; imaging variables included qCT volumetric percentage AD (POv) and qCXR area-based percentage AD (POa), assessed by a deep convolutional neural network. Our prognostic outcome was need for ICU admission. We compared the performance of three logistic regression models: using CVs known to be associated with prognosis (model I), using a dimension-reduced set of best predictor variables (model II), and using only age and AD (model III). Results: 60/131 patients required ICU admission, whereas 71/131 did not. Model I performed the poorest (AUC = 0.67 [0.58 to 0.76]; accuracy = 77%). Model II performed the best (AUC = 0.78 [0.71 to 0.86]; accuracy = 81%). Model III was equivalent (AUC = 0.75 [0.67 to 0.84]; accuracy = 80%). Both models II and III outperformed model I (AUC difference = 0.11 [0.02 to 0.19], p = 0.01; AUC difference = 0.08 [0.01 to 0.15], p = 0.04, respectively). Model II and III results did not change significantly when POv was replaced by POa. Conclusions: Severe COVID-19 can be predicted using only age and quantitative AD imaging metrics at initial diagnosis, which outperform the set of CVs. Moreover, AI-read qCXR can replace qCT metrics without loss of prognostic performance, promising more resource-efficient prognostication.

10.
Invest Radiol ; 57(2): 90-98, 2022 02 01.
Article in English | MEDLINE | ID: mdl-34352804

ABSTRACT

OBJECTIVES: Chest radiographs (CXRs) are commonly performed in emergency units (EUs), but the interpretation requires radiology experience. We developed an artificial intelligence (AI) system (precommercial) that aims to mimic board-certified radiologists' (BCRs') performance and can therefore support non-radiology residents (NRRs) in clinical settings lacking 24/7 radiology coverage. We validated it by quantifying the clinical value of our AI system for radiology residents (RRs) and EU-experienced NRRs in a clinically representative EU setting. MATERIALS AND METHODS: A total of 563 EU CXRs were retrospectively assessed by 3 BCRs, 3 RRs, and 3 EU-experienced NRRs. Suspected pathologies (pleural effusion, pneumothorax, consolidations suspicious for pneumonia, lung lesions) were reported on a 5-step confidence scale (sum of 20,268 reported pathology suspicions [563 images × 9 readers × 4 pathologies]) separately by every involved reader. Board-certified radiologists' confidence scores were converted into 4 binary reference standards (RFSs) of different sensitivities. The RRs' and NRRs' performances were statistically compared with our AI system (trained on nonpublic data from different clinical sites) based on receiver operating characteristics (ROCs) and operating point metrics approximated to the maximum sum of sensitivity and specificity (Youden statistics). RESULTS: The NRRs lose diagnostic accuracy to RRs with increasingly sensitive BCRs' RFSs for all considered pathologies. Based on our external validation data set, the AI system/NRRs' consensus mimicked the most sensitive BCRs' RFSs with areas under ROC of 0.940/0.837 (pneumothorax), 0.953/0.823 (pleural effusion), and 0.883/0.747 (lung lesions), which were comparable to experienced RRs and significantly exceeded EU-experienced NRRs' diagnostic performance. For consolidation detection, the AI system performed at the NRRs' consensus level (and outperformed each individual NRR) with an area under ROC of 0.847 referenced to the BCRs' most sensitive RFS. CONCLUSIONS: Our AI system matched RRs' performance while significantly outperforming NRRs' diagnostic accuracy for most of the considered CXR pathologies (pneumothorax, pleural effusion, and lung lesions) and therefore might serve as clinical decision support for NRRs.


Subject(s)
Lung Diseases; Pleural Effusion; Pneumothorax; Radiology; Artificial Intelligence; Emergency Service, Hospital; Humans; Pleural Effusion/diagnostic imaging; Pneumothorax/diagnostic imaging; Radiography; Radiography, Thoracic/methods; Retrospective Studies
11.
Invest Radiol ; 56(8): 471-479, 2021 08 01.
Article in English | MEDLINE | ID: mdl-33481459

ABSTRACT

OBJECTIVES: The aim of this study was to leverage volumetric quantification of airspace disease (AD) derived from a superior modality (computed tomography [CT]) serving as ground truth, projected onto digitally reconstructed radiographs (DRRs) to (1) train a convolutional neural network (CNN) to quantify AD on paired chest radiographs (CXRs) and CTs, and (2) compare the DRR-trained CNN to expert human readers in the CXR evaluation of patients with confirmed COVID-19. MATERIALS AND METHODS: We retrospectively selected a cohort of 86 COVID-19 patients (with positive reverse transcriptase-polymerase chain reaction test results) from March to May 2020 at a tertiary hospital in the northeastern United States, who underwent chest CT and CXR within 48 hours. The ground-truth volumetric percentage of COVID-19-related AD (POv) was established by manual AD segmentation on CT. The resulting 3-dimensional masks were projected into a 2-dimensional anterior-posterior DRR to compute the area-based AD percentage (POa). A CNN was trained with DRR images generated from a larger-scale CT dataset of COVID-19 and non-COVID-19 patients, automatically segmenting lungs and AD and quantifying POa on CXR. The CNN POa results were compared with POa quantified on CXR by 2 expert readers and to the POv ground truth, by computing correlations and mean absolute errors. RESULTS: Bootstrap mean absolute error and correlations between POa and POv were 11.98% (11.05%-12.47%) and 0.77 (0.70-0.82) for the average of expert readers and 9.56% to 9.78% (8.83%-10.22%) and 0.78 to 0.81 (0.73-0.85) for the CNN, respectively. CONCLUSIONS: Our CNN trained with DRR using CT-derived airspace quantification achieved expert radiologist level of accuracy in the quantification of AD on CXR in patients with positive reverse transcriptase-polymerase chain reaction test results for COVID-19.
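The projection step, collapsing a CT-derived 3-D mask into a 2-D DRR-style mask to compute an area-based percentage, can be sketched with toy masks; the real pipeline operates on segmented CT volumes, and these helper names are ours.

```python
def project_ap(mask3d):
    """Collapse a 3-D binary mask along the anterior-posterior axis:
    a projected pixel is on when any voxel along that ray is on."""
    depth, rows, cols = len(mask3d), len(mask3d[0]), len(mask3d[0][0])
    return [[1 if any(mask3d[d][r][c] for d in range(depth)) else 0
             for c in range(cols)] for r in range(rows)]

def poa(ad2d, lung2d):
    """Area-based AD percentage: projected disease area over projected lung area."""
    return 100.0 * sum(map(sum, ad2d)) / sum(map(sum, lung2d))

ad3d = [[[1, 0], [0, 0]],
        [[0, 0], [0, 1]]]           # two slices along the AP axis
lung = [[1, 1], [1, 1]]             # projected lung mask
print(poa(project_ap(ad3d), lung))  # 50.0
```

The projected POa is what a CXR-based model can plausibly learn, which is why it serves as the CNN's training target here rather than the volumetric POv.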


Subject(s)
COVID-19/diagnostic imaging; Deep Learning; Image Processing, Computer-Assisted/methods; Radiography, Thoracic; Radiologists; Tomography, X-Ray Computed; Cohort Studies; Humans; Lung/diagnostic imaging; Male; Retrospective Studies
12.
Med Image Anal ; 68: 101855, 2021 02.
Article in English | MEDLINE | ID: mdl-33260116

ABSTRACT

The interpretation of medical images is a challenging task, often complicated by the presence of artifacts, occlusions, limited contrast and more. Most notable is the case of chest radiography, where there is a high inter-rater variability in the detection and classification of abnormalities. This is largely due to inconclusive evidence in the data or subjective definitions of disease appearance. An additional example is the classification of anatomical views based on 2D Ultrasound images. Often, the anatomical context captured in a frame is not sufficient to recognize the underlying anatomy. Current machine learning solutions for these problems are typically limited to providing probabilistic predictions, relying on the capacity of underlying models to adapt to limited information and the high degree of label noise. In practice, however, this leads to overconfident systems with poor generalization on unseen data. To account for this, we propose a system that learns not only the probabilistic estimate for classification, but also an explicit uncertainty measure which captures the confidence of the system in the predicted output. We argue that this approach is essential to account for the inherent ambiguity characteristic of medical images from different radiologic exams including computed radiography, ultrasonography and magnetic resonance imaging. In our experiments we demonstrate that sample rejection based on the predicted uncertainty can significantly improve the ROC-AUC for various tasks, e.g., by 8% to 0.91 with an expected rejection rate of under 25% for the classification of different abnormalities in chest radiographs. In addition, we show that using uncertainty-driven bootstrapping to filter the training data, one can achieve a significant increase in robustness and accuracy. Finally, we present a multi-reader study showing that the predictive uncertainty is indicative of reader errors.
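Uncertainty-based sample rejection, as evaluated above, can be illustrated with synthetic data: dropping the most uncertain cases before scoring raises the rank AUC when errors concentrate among uncertain predictions. The numbers below are invented for illustration, and this is not the paper's uncertainty estimator itself.

```python
def auc(pos, neg):
    """Rank-based AUC: probability a positive case outscores a negative one."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def reject_then_auc(samples, reject_rate):
    """Drop the most-uncertain fraction of cases, then score the remainder.
    samples: (score, uncertainty, label) triples; data below is synthetic."""
    keep = int(len(samples) * (1 - reject_rate))
    kept = sorted(samples, key=lambda t: t[1])[:keep]
    pos = [s for s, _, y in kept if y == 1]
    neg = [s for s, _, y in kept if y == 0]
    return auc(pos, neg)

# Confident cases are scored correctly; the misclassified ones are uncertain.
samples = [(0.9, 0.1, 1), (0.8, 0.1, 1), (0.2, 0.1, 0), (0.1, 0.1, 0),
           (0.3, 0.9, 1), (0.7, 0.9, 0), (0.6, 0.9, 0), (0.4, 0.9, 1)]
print(reject_then_auc(samples, 0.0))   # 0.75 on all cases
print(reject_then_auc(samples, 0.5))   # 1.0 after rejecting the uncertain half
```

The same sorting idea, applied to the training set instead of the test set, gives the uncertainty-driven bootstrapping the abstract mentions.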


Subject(s)
Artifacts; Magnetic Resonance Imaging; Humans; Machine Learning; Uncertainty
13.
JAMA Netw Open ; 4(12): e2141096, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34964851

ABSTRACT

Importance: Most early lung cancers present as pulmonary nodules on imaging, but these can be easily missed on chest radiographs. Objective: To assess if a novel artificial intelligence (AI) algorithm can help detect pulmonary nodules on radiographs at different levels of detection difficulty. Design, Setting, and Participants: This diagnostic study included 100 posteroanterior chest radiograph images taken between 2000 and 2010 of adult patients from an ambulatory health care center in Germany and a lung image database in the US. Included images were selected to represent nodules with different levels of detection difficulty (from easy to difficult), and comprised both normal and non-normal controls. Exposures: All images were processed with a novel AI algorithm, the AI Rad Companion Chest X-ray. Two thoracic radiologists established the ground truth and 9 test radiologists from Germany and the US independently reviewed all images in 2 sessions (unaided and AI-aided mode) with at least a 1-month washout period. Main Outcomes and Measures: Each test radiologist recorded the presence of 5 findings (pulmonary nodules, atelectasis, consolidation, pneumothorax, and pleural effusion) and their level of confidence for detecting the individual finding on a scale of 1 to 10 (1 representing lowest confidence; 10, highest confidence). The analyzed metrics for nodules included sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC). Results: Images from 100 patients were included, with a mean (SD) age of 55 (20) years and including 64 men and 36 women. Mean detection accuracy across the 9 radiologists improved by 6.4% (95% CI, 2.3% to 10.6%) with AI-aided interpretation compared with unaided interpretation. Partial AUCs within the effective interval range of 0 to 0.2 false positive rate improved by 5.6% (95% CI, -1.4% to 12.0%) with AI-aided interpretation.
Junior radiologists saw greater improvement in sensitivity for nodule detection with AI-aided interpretation as compared with their senior counterparts (12%; 95% CI, 4% to 19% vs 9%; 95% CI, 1% to 17%) while senior radiologists experienced similar improvement in specificity (4%; 95% CI, -2% to 9%) as compared with junior radiologists (4%; 95% CI, -3% to 5%). Conclusions and Relevance: In this diagnostic study, an AI algorithm was associated with improved detection of pulmonary nodules on chest radiographs compared with unaided interpretation for different levels of detection difficulty and for readers with different experience.


Subject(s)
Algorithms; Lung Neoplasms/diagnostic imaging; Adult; Artificial Intelligence; Female; Germany; Humans; Male; Middle Aged; Multiple Pulmonary Nodules/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted; Radiography, Thoracic; Sensitivity and Specificity; Solitary Pulmonary Nodule/diagnostic imaging
14.
IEEE Trans Biomed Eng ; 67(4): 1206-1220, 2020 04.
Article in English | MEDLINE | ID: mdl-31425015

ABSTRACT

Computer-aided diagnosis (CAD) techniques for lung field segmentation from chest radiographs (CXR) have been proposed for adult cohorts, but rarely for pediatric subjects. Statistical shape models (SSMs), the workhorse of most state-of-the-art CXR-based lung field segmentation methods, do not efficiently accommodate shape variation of the lung field during the pediatric developmental stages. The main contributions of our work are: 1) a generic lung field segmentation framework from CXR accommodating large shape variation for adult and pediatric cohorts; 2) a deep representation learning detection mechanism, ensemble space learning, for robust object localization; and 3) marginal shape deep learning for the shape deformation parameter estimation. Unlike the iterative approach of conventional SSMs, the proposed shape learning mechanism transforms the parameter space into marginal subspaces that are solvable efficiently using the recursive representation learning mechanism. Furthermore, our method is the first to include the challenging retro-cardiac region in the CXR-based lung segmentation for accurate lung capacity estimation. The framework is evaluated on 668 CXRs of patients from 3 months to 89 years of age. We obtain a mean Dice similarity coefficient of 0.96 ± 0.03 (including the retro-cardiac region). For a given accuracy, the proposed approach is also found to be faster than conventional SSM-based iterative segmentation methods. The computational simplicity of the proposed generic framework could be similarly applied to the fast segmentation of other deformable objects.
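The Dice similarity coefficient reported above is a simple overlap ratio between a predicted and a reference mask:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    2|A intersect B| / (|A| + |B|), the overlap metric used in the abstract."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

pred  = [1, 1, 0, 1, 0, 0]   # toy predicted lung-field mask, flattened
truth = [1, 1, 1, 0, 0, 0]   # toy reference mask
print(dice(pred, truth))     # 0.6666666666666666
```

A score of 1.0 means perfect overlap, so the reported 0.96 indicates near-complete agreement with the manual reference, even with the retro-cardiac region included.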


Subject(s)
Diagnosis, Computer-Assisted; Lung; Child; Humans; Lung/diagnostic imaging; Models, Statistical; Radiography
15.
IEEE Trans Biomed Eng ; 67(11): 3026-3034, 2020 11.
Article in English | MEDLINE | ID: mdl-32086190

ABSTRACT

OBJECTIVE: Prediction of post-hemorrhagic hydrocephalus (PHH) outcome-i.e., whether it requires intervention or not-in premature neonates using cranial ultrasound (CUS) images is challenging. In this paper, we present a novel fully-automatic method to perform phenotyping of the brain lateral ventricles and predict PHH outcome from CUS. METHODS: Our method consists of two parts: ventricle quantification followed by prediction of PHH outcome. First, the cranial bounding box and brain interhemispheric fissure are detected to determine the anatomical position of the ventricles and correct the cranium rotation. Then, the lateral ventricles are extracted using a new deep learning-based method that incorporates the convolutional neural network into a probabilistic atlas-based weighted loss function and an image-specific adaptation. PHH outcome is predicted using a support vector machine classifier trained on ventricular morphological phenotypes and clinical information. RESULTS: Experiments demonstrated that our method achieves accurate ventricle segmentation results with an average Dice similarity coefficient of 0.86, as well as very good PHH outcome prediction with an accuracy of 0.91. CONCLUSION: Automatic CUS-based ventricular phenotyping in premature newborns could objectively and accurately predict the progression to severe PHH. SIGNIFICANCE: Early prediction of severe PHH development in premature newborns could potentially advance criteria for diagnosis and offer an opportunity for early interventions to improve outcome.
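The outcome-prediction step described above (an SVM trained on morphological phenotypes plus clinical variables) can be sketched as follows. The feature matrix here is purely synthetic and the tie between the first column and the label is an assumption made only so the toy model has something to learn; it is not the paper's data or feature set.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 5 hypothetical ventricular-morphology / clinical features
rng = np.random.default_rng(42)
n = 120
features = rng.normal(size=(n, 5))
# Toy binary outcome loosely driven by the first feature (e.g. ventricular size)
labels = (features[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

# Standardize, then fit a linear-kernel SVM, as in classical SVM pipelines
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(features[:80], labels[:80])
accuracy = clf.score(features[80:], labels[80:])  # held-out accuracy
```

Standardization matters here because morphological features (areas, perimeters) and clinical variables (age, weight) live on very different scales.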


Subject(s)
Hydrocephalus; Lateral Ventricles; Cerebral Hemorrhage/diagnostic imaging; Cerebral Ventricles/diagnostic imaging; Echoencephalography; Humans; Hydrocephalus/diagnostic imaging; Infant, Newborn; Lateral Ventricles/diagnostic imaging
16.
Sci Rep ; 10(1): 613, 2020 01 17.
Article in English | MEDLINE | ID: mdl-31953419

ABSTRACT

We need a better risk stratification system for the increasing number of survivors of extreme prematurity suffering the most severe forms of bronchopulmonary dysplasia (BPD). However, there is still a paucity of studies providing scientific evidence to guide future updates of BPD severity definitions. Our goal was to validate a new predictive model for BPD severity that incorporates respiratory assessments beyond 36 weeks postmenstrual age (PMA). We hypothesized that this approach improves BPD risk assessment, particularly in extremely premature infants. This is a longitudinal cohort of premature infants (≤32 weeks PMA, n = 188; Washington, D.C.). We performed receiver operating characteristic analysis to define optimal BPD severity levels using the duration of supplemental O2 as predictor and respiratory hospitalization after discharge as outcome. Internal validation included lung X-ray imaging and phenotypical characterization of BPD severity levels. External validation was conducted in an independent longitudinal cohort of premature infants (≤36 weeks PMA, n = 130; Bogota). We found that incorporating the total number of days requiring O2 (without truncating at 36 weeks PMA) improved the prediction of respiratory outcomes according to BPD severity. In addition, we defined a new severity category (level IV) with prolonged exposure to supplemental O2 (≥120 days) that has the highest risk of respiratory hospitalizations after discharge. We confirmed these findings in our validation cohort using ambulatory determination of O2 requirements. In conclusion, a new predictive model for BPD severity that incorporates respiratory assessments beyond 36 weeks improves risk stratification and should be considered when updating current BPD severity definitions.
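The ROC-based cut-point search described above, with duration of supplemental O2 as the predictor and post-discharge respiratory hospitalization as the outcome, can be sketched with scikit-learn. The cohort below is simulated (gamma-distributed O2 durations with an arbitrary effect size), so only the mechanics of the analysis carry over, not the specific thresholds the study reports.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical cohort: total days on supplemental O2 as the predictor,
# respiratory hospitalization after discharge as the binary outcome
rng = np.random.default_rng(1)
days_o2 = np.concatenate([rng.gamma(2.0, 20.0, 100),   # not hospitalized
                          rng.gamma(4.0, 30.0, 60)])   # hospitalized
hospitalized = np.concatenate([np.zeros(100), np.ones(60)])

fpr, tpr, thresholds = roc_curve(hospitalized, days_o2)
youden = tpr - fpr                              # Youden's J at each threshold
cutoff_days = thresholds[np.argmax(youden)]     # candidate severity cut-point
auc = roc_auc_score(hospitalized, days_o2)      # overall discrimination
```

Scanning `youden` over multiple outcomes is one common way to derive tiered severity levels (such as the ≥120-day level IV category) from a single continuous predictor.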


Subject(s)
Bronchopulmonary Dysplasia/physiopathology; Hospitalization/statistics & numerical data; Infant, Premature, Diseases/physiopathology; Oxygen/administration & dosage; Bronchopulmonary Dysplasia/therapy; Female; Gestational Age; Humans; Infant, Extremely Premature; Infant, Newborn; Infant, Premature, Diseases/therapy; Longitudinal Studies; Male; ROC Curve; Risk Assessment; Severity of Illness Index; Time Factors
17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 3136-3139, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441059

ABSTRACT

Intraventricular hemorrhage (IVH) followed by post-hemorrhagic hydrocephalus (PHH) in premature neonates is one of the recognized causes of brain injury in newborns. Cranial ultrasound (CUS) is a noninvasive imaging tool that has been used widely to diagnose and monitor neonates with IVH. In our previous work, we showed the potential of quantitative morphological analysis of the lateral ventricles from early CUS to predict the PHH outcome in neonates with IVH. In this paper, we first present a new automatic method for ventricle segmentation in 2D CUS images. We detect the brain bounding box and brain mid-line to estimate the anatomical positions of the ventricles and correct the brain rotation. The ventricles are segmented using a combination of fuzzy c-means, phase congruency, and active contour algorithms. Finally, we compare this fully automated approach with our previous work for the prediction of the outcome of PHH on a set of 2D CUS images taken from 60 premature neonates with different IVH grades. Experimental results showed that our method could segment ventricles with an average Dice similarity coefficient of 0.8 ± 0.12. In addition, our fully automated method could predict the outcome of PHH based on the extracted ventricle regions with similar accuracy to our previous semi-automated approach (83% vs. 84%, respectively, p-value = 0.8). This method has the potential to standardize the evaluation of CUS images and can be a helpful clinical tool for early monitoring and treatment of IVH and PHH.
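Of the three segmentation ingredients named above, fuzzy c-means is the most self-contained, and a minimal version fits in one function. This toy stand-in clusters feature vectors (e.g. pixel intensities) with soft memberships; the paper's full pipeline additionally uses phase congruency and active contours, which are not reproduced here.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means. X: (n_samples, n_features); c: clusters;
    m: fuzzifier (> 1). Returns cluster centers and the soft membership
    matrix U of shape (n_samples, c), rows summing to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # random soft initialization
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))             # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

In an intensity-based segmentation, each image pixel becomes a row of `X`, and the membership column for the "ventricle" cluster serves as a soft mask that downstream steps refine.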


Subject(s)
Cerebral Hemorrhage; Hydrocephalus; Infant, Premature; Cerebral Ventricles; Echoencephalography; Humans
18.
Proc SPIE Int Soc Opt Eng ; 10133, 2017 Feb 11.
Article in English | MEDLINE | ID: mdl-28592911

ABSTRACT

Representation learning through deep learning (DL) architectures has shown tremendous potential for identification, localization, and texture classification in various medical imaging modalities. However, DL applications to segmentation of objects, especially deformable objects, are rather limited and mostly restricted to pixel classification. In this work, we propose marginal shape deep learning (MaShDL), a framework that extends the application of DL to deformable shape segmentation by using deep classifiers to estimate the shape parameters. MaShDL combines the strength of statistical shape models with the automated feature learning architecture of DL. Unlike the iterative shape parameter estimation approach of classical shape models, which often gets trapped in local minima, the proposed framework is robust to local minima and to illumination changes. Furthermore, since the direct application of a DL framework to a multi-parameter estimation problem results in very high complexity, our framework provides an excellent run-time performance solution by independently learning shape parameter classifiers in marginal eigenspaces in decreasing order of variation. We evaluated MaShDL for segmenting the lung field from 314 normal and abnormal pediatric chest radiographs and obtained a mean Dice similarity coefficient of 0.927 using only the four highest modes of variation (compared to 0.888 with the classical ASM (p-value = 0.01) using the same configuration). To the best of our knowledge, this is the first demonstration of using a DL framework for parametrized shape learning for the delineation of deformable objects.
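Several entries in this list report their results as a Dice similarity coefficient, the standard overlap metric for segmentation. A minimal implementation, for reference (the empty-mask convention is an assumption, since conventions vary):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks:
    2 * |pred AND gt| / (|pred| + |gt|), ranging from 0 (disjoint)
    to 1 (identical)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0   # assumed convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

Scores such as the 0.927 vs. 0.888 comparison above are means of this per-image quantity over the evaluation set.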

19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 169-172, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29059837

ABSTRACT

Premature neonates with intraventricular hemorrhage (IVH) followed by post-hemorrhagic hydrocephalus (PHH) are at high risk for brain injury. Cranial ultrasound (CUS) is used for monitoring premature neonates during the first weeks after birth to identify IVH and follow the progression to PHH. However, the lack of a standardized method for CUS evaluation has led to significant variability in decision making regarding treatment. We propose a quantitative imaging tool for the evaluation of PHH on CUS for premature neonates based on morphological features of the cerebral ventricles. We retrospectively studied 64 extremely premature neonates born at less than 29 weeks gestational age, weighing less than 1,500 grams at birth, admitted to our center within two weeks of life, and diagnosed with different grades of IVH. We extracted morphological features of the lateral ventricles from CUS imaging using image analysis techniques to compare neonates who needed a temporizing intervention to treat PHH with those who did not. From the original set of features, an optimal ranking was obtained based on a linear support vector machine. A subset of features was subsequently selected that maximizes the overall accuracy. We predicted the need for a temporizing intervention, i.e., the outcome of PHH, with an improved accuracy of 84%, compared to the 76% obtained by linear manual measurement. The proposed imaging tool allowed us to establish a quantitative method for PHH evaluation on CUS in extremely premature neonates with IVH. Further studies will help standardize the evaluation of CUS in these neonates to institute treatments earlier and improve outcomes.
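The feature-ranking step described above, ordering features by the magnitude of a linear SVM's learned weights, can be sketched as follows. The data here are synthetic, with the informative columns chosen arbitrarily for the demonstration; the paper's actual morphological features are not reproduced.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic features: columns 2 and 4 carry the signal, the rest are noise
rng = np.random.default_rng(7)
n = 200
X = rng.normal(size=(n, 6))
y = (2.0 * X[:, 2] - X[:, 4] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Standardize so weight magnitudes are comparable across features,
# then rank features by the absolute value of the linear SVM weights
Xs = StandardScaler().fit_transform(X)
svm = LinearSVC(C=1.0, max_iter=10000).fit(Xs, y)
ranking = np.argsort(-np.abs(svm.coef_[0]))   # most informative feature first
```

Selecting a prefix of `ranking` and re-evaluating accuracy at each prefix length is one simple way to pick the feature subset that maximizes overall accuracy, as the abstract describes.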


Subject(s)
Hydrocephalus/diagnostic imaging; Cerebral Hemorrhage; Cerebral Ventricles; Echoencephalography; Gestational Age; Humans; Infant, Newborn; Infant, Premature; Infant, Premature, Diseases
20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 97-100, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28324924

ABSTRACT

Significant progress has been made in recent years in computer-aided diagnosis of abnormal pulmonary textures from computed tomography (CT) images. Similar initiatives in chest radiographs (CXR), the common modality for pulmonary diagnosis, are much less developed. CXR is a fast, cost-effective, and low-radiation alternative to CT for diagnosis. However, the subtlety of textures in CXR makes them hard to discern even for a trained eye. We explore the performance of deep learning for abnormal tissue characterization from CXR. Prior studies have used CT imaging to characterize air trapping in subjects with pulmonary disease; however, the use of CT in children is not recommended, mainly due to concerns about radiation dosage. In this work, we present a stacked autoencoder (SAE) deep learning architecture for automated tissue characterization of air trapping from CXR. To the best of our knowledge, this is the first study applying a deep learning framework to this specific problem. On 51 CXRs, an F-score of ≈ 76.5% and a strong correlation with expert visual scoring (R = 0.93, p ≤ 0.01) demonstrate the potential of the proposed method for the characterization of air trapping.
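The stacked-autoencoder idea above, greedy layer-wise pretraining where each layer learns to reconstruct the output of the layer below, can be sketched with scikit-learn by abusing `MLPRegressor` as a single autoencoder layer. This is a rough stand-in under stated assumptions (random data in place of CXR texture patches, logistic activations, arbitrary layer sizes), not the paper's architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def autoencoder_layer(X, n_hidden, seed=0):
    """Train one autoencoder layer (an MLP whose reconstruction target is
    its own input) and return the hidden-layer encoding of X."""
    ae = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="logistic",
                      solver="adam", max_iter=300, random_state=seed)
    ae.fit(X, X)                          # reconstruction target = input
    # Apply the learned encoder (first weight matrix + logistic) manually
    z = X @ ae.coefs_[0] + ae.intercepts_[0]
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
patches = rng.random((50, 16))            # stand-in for CXR texture patches
h1 = autoencoder_layer(patches, 8)        # first learned representation
h2 = autoencoder_layer(h1, 4)             # stacked second representation
```

In a full SAE pipeline, the final stacked representation (`h2` here) would feed a supervised classifier that assigns each patch a tissue label such as air trapping.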


Subject(s)
Image Processing, Computer-Assisted/methods; Lung/diagnostic imaging; Neural Networks, Computer; Radiography, Thoracic/methods; Air; Diagnosis, Computer-Assisted; Humans; Virus Diseases/diagnostic imaging