Results 1 - 20 of 61
1.
PLoS One ; 16(6): e0253239, 2021.
Article in English | MEDLINE | ID: mdl-34153076

ABSTRACT

BACKGROUND: World Health Organization (WHO)-defined radiological pneumonia is a preferred endpoint in pneumococcal vaccine efficacy and effectiveness studies in children. Automating the WHO methodology may support more widespread application of this endpoint. METHODS: We trained a deep learning model to classify pneumonia chest X-rays (CXRs) in children using the WHO's standardized methodology. The model was pretrained on CheXpert, a dataset containing 224,316 adult CXRs, and fine-tuned on PERCH, a pediatric dataset containing 4,172 CXRs. The model was then tested on two pediatric CXR datasets released by the WHO. We also compared the model's performance to that of radiologists and pediatricians. RESULTS: The average area under the receiver operating characteristic curve (AUC) for primary endpoint pneumonia (PEP) across 10-fold validation of PERCH images was 0.928; the average AUC after testing on WHO images was 0.977. The model's classification performance was better on test images with high inter-observer agreement; however, the model still outperformed human assessments in AUC and precision-recall spaces on low-agreement images. CONCLUSION: A deep learning model can classify pneumonia CXR images in children at a performance comparable to human readers. Our method lays a strong foundation for the potential inclusion of computer-aided readings of pediatric CXRs in vaccine trials and epidemiology studies.
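As a rough illustration of the pretrain-then-fine-tune recipe described above, the sketch below swaps a new classification head onto a DenseNet backbone and runs one fine-tuning step. The checkpoint path, optimizer settings, and single-logit PEP head are assumptions for the sketch, not the authors' configuration.

```python
# Minimal transfer-learning sketch: a backbone pretrained on a large adult-CXR
# corpus is fine-tuned for a binary primary-endpoint-pneumonia (PEP) label.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)  # stand-in for a CheXpert-pretrained backbone
# model.load_state_dict(torch.load("chexpert_pretrained.pt"))  # hypothetical checkpoint

# Replace the classification head for the binary PEP endpoint.
model.classifier = nn.Linear(model.classifier.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed learning rate

def fine_tune_step(images, labels):
    """One fine-tuning step on pediatric CXR batches (PERCH-style data assumed)."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```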


Subject(s)
Deep Learning , Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/classification , Datasets as Topic , Female , Humans , Infant , Male , Models, Statistical , Observer Variation , Pneumonia/classification , Pneumonia/diagnostic imaging , ROC Curve , Reproducibility of Results , World Health Organization
2.
Sci Rep ; 11(1): 3964, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33597566

ABSTRACT

The interpretation of thoracic radiographs is a challenging and error-prone task for veterinarians. Despite recent advancements in machine learning and computer vision, the development of computer-aided diagnostic systems for radiographs remains a challenging and unsolved problem, particularly in the context of veterinary medicine. In this study, a novel method based on a multi-label deep convolutional neural network (CNN) was developed for the classification of thoracic radiographs in dogs. All thoracic radiographs of dogs performed between 2010 and 2020 at the institution were retrospectively collected. Radiographs were taken with two different acquisition systems and were divided into two data sets accordingly. One data set (Data Set 1) was used for training and testing, and the other (Data Set 2) was used to test the generalization ability of the CNNs. The radiographic findings used as non-mutually exclusive labels to train the CNNs were: unremarkable, cardiomegaly, alveolar pattern, bronchial pattern, interstitial pattern, mass, pleural effusion, pneumothorax, and megaesophagus. Two different CNNs, based on the ResNet-50 and DenseNet-121 architectures, respectively, were developed and tested. The CNN based on ResNet-50 had an area under the receiver operating characteristic curve (AUC) above 0.8 for all included radiographic findings except the bronchial and interstitial patterns, on both Data Set 1 and Data Set 2. The CNN based on DenseNet-121 had lower overall performance. Statistically significant differences in generalization ability between the two CNNs were evident, with the ResNet-50-based CNN showing better performance for alveolar pattern, interstitial pattern, megaesophagus, and pneumothorax.
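A minimal sketch of the multi-label setup described, with one sigmoid-trained output per finding and per-label AUC. The backbone mirrors the paper's ResNet-50 and the nine labels follow the abstract; everything else is an assumption.

```python
# Multi-label classification: findings are non-mutually exclusive, so the
# network gets one logit per finding rather than a single softmax.
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

LABELS = ["unremarkable", "cardiomegaly", "alveolar", "bronchial",
          "interstitial", "mass", "pleural_effusion", "pneumothorax",
          "megaesophagus"]

model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(LABELS))  # one logit per finding
# Train with nn.BCEWithLogitsLoss() so each label is an independent sigmoid.

def per_label_auc(y_true, y_score):
    """AUC per finding; y_true and y_score are (n_samples, n_labels) arrays."""
    return {lab: roc_auc_score(y_true[:, i], y_score[:, i])
            for i, lab in enumerate(LABELS)}
```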


Subject(s)
Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/classification , Animals , Cardiomegaly/diagnostic imaging , Deep Learning , Dogs , Lung/cytology , Lung/diagnostic imaging , Machine Learning , Neural Networks, Computer , Radiography/classification , Retrospective Studies
3.
BMC Med Imaging ; 21(1): 9, 2021 01 07.
Article in English | MEDLINE | ID: mdl-33413181

ABSTRACT

BACKGROUND: Deep neural networks (DNNs) are widely investigated in medical image classification to achieve automated support for clinical diagnosis. It is necessary to evaluate the robustness of medical DNN tasks against adversarial attacks, as high-stakes decisions will be made based on the diagnosis. Several previous studies have considered simple adversarial attacks. However, the vulnerability of DNNs to more realistic and higher-risk attacks, such as universal adversarial perturbation (UAP) - a single perturbation that can induce DNN failure across most classification tasks - has not yet been evaluated. METHODS: We focus on three representative DNN-based medical image classification tasks (i.e., skin cancer, referable diabetic retinopathy, and pneumonia classification) and investigate the vulnerability of seven model architectures to UAPs. RESULTS: We demonstrate that DNNs are vulnerable both to nontargeted UAPs, which cause a task failure resulting in an input being assigned an incorrect class, and to targeted UAPs, which cause the DNN to classify an input into a specific class. The almost imperceptible UAPs achieved >80% success rates for nontargeted and targeted attacks. The vulnerability to UAPs depended very little on the model architecture. Moreover, we discovered that adversarial retraining, which is known to be an effective method of adversarial defense, increased DNNs' robustness against UAPs in only very few cases. CONCLUSION: Contrary to previous assumptions, the results indicate that DNN-based clinical diagnosis is easy to deceive with adversarial attacks. Adversaries can cause failed diagnoses at low cost (e.g., without consideration of data distribution); moreover, with targeted UAPs they can steer a diagnosis toward a class of their choosing. The effects of adversarial defenses may be limited. Our findings emphasize that more careful consideration is required when developing DNNs for medical imaging and their practical applications.
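For intuition, a toy version of crafting a nontargeted UAP accumulates gradient steps over many images and projects the single shared perturbation onto a small L-infinity ball. This generic recipe is a sketch, not the study's exact algorithm or perturbation budget; the grayscale 224x224 input shape is also an assumption.

```python
# Toy universal adversarial perturbation (UAP): one perturbation, many images.
import torch

def simple_uap(model, loader, eps=0.02, step=0.004, device="cpu"):
    delta = torch.zeros(1, 1, 224, 224, device=device)  # shared perturbation
    loss_fn = torch.nn.CrossEntropyLoss()
    model.eval()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = (images + delta).clone().detach().requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        loss.backward()
        # Move in the direction that increases loss for most inputs...
        delta = delta + step * adv.grad.sign().mean(dim=0, keepdim=True)
        # ...then project back onto the allowed L-infinity budget.
        delta = delta.clamp(-eps, eps).detach()
    return delta
```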


Subject(s)
Diagnostic Imaging/classification , Image Interpretation, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/standards , Neural Networks, Computer , Diabetic Retinopathy/classification , Diabetic Retinopathy/diagnostic imaging , Diagnostic Imaging/standards , Humans , Photography/classification , Pneumonia/classification , Pneumonia/diagnostic imaging , Radiography, Thoracic/classification , Skin Neoplasms/classification , Skin Neoplasms/diagnostic imaging , Tomography, Optical Coherence/classification
4.
Diagn Interv Radiol ; 27(1): 20-27, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32815519

ABSTRACT

PURPOSE: Chest X-ray plays a key role in the diagnosis and management of COVID-19 patients, and imaging features associated with clinical elements may assist with the development or validation of automated image analysis tools. We aimed to identify associations between clinical and radiographic features, as well as to assess the feasibility of deep learning applied to chest X-rays in the setting of an acute COVID-19 outbreak. METHODS: A retrospective study of X-rays, clinical, and laboratory data was performed on 48 SARS-CoV-2 RT-PCR positive patients (age 60±17 years, 15 women) between February 22 and March 6, 2020, from a tertiary care hospital in Milan, Italy. Sixty-five chest X-rays were reviewed by two radiologists for alveolar and interstitial opacities and classified by severity on a scale from 0 to 3. Clinical factors (age, symptoms, comorbidities) were investigated for association with opacity severity and also with placement of a central line or endotracheal tube. Deep learning models were then trained for two tasks: lung segmentation and opacity detection. Imaging characteristics were compared to clinical datapoints using the unpaired Student's t-test or Mann-Whitney U test. Cohen's kappa analysis was used to evaluate the concordance of deep learning with conventional radiologist interpretation. RESULTS: Fifty-six percent of patients presented with alveolar opacities, 73% had interstitial opacities, and 23% had normal X-rays. The presence of alveolar or interstitial opacities was significantly associated with age (P = 0.008) and comorbidities (P = 0.005). The extent of alveolar or interstitial opacities on the baseline X-ray was significantly associated with the presence of an endotracheal tube (P = 0.0008 and P = 0.049) or central line (P = 0.003 and P = 0.007). In comparison to human interpretation, the deep learning model achieved a kappa concordance of 0.51 for alveolar opacities and 0.71 for interstitial opacities. CONCLUSION: Chest X-ray analysis in an acute COVID-19 outbreak showed that the severity of opacities was associated with advanced age, comorbidities, and acuity of care. Artificial intelligence tools based upon deep learning of COVID-19 chest X-rays are feasible in the acute outbreak setting.
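The model-vs-reader concordance metric used here, Cohen's kappa, is a one-liner with scikit-learn; the ratings below are illustrative, not study data.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
from sklearn.metrics import cohen_kappa_score

radiologist = [0, 1, 2, 3, 1, 0, 2]   # hypothetical severity grades (0-3)
model_preds = [0, 1, 1, 3, 1, 0, 3]

kappa = cohen_kappa_score(radiologist, model_preds)
print(f"kappa = {kappa:.2f}")  # 0.41-0.60 is conventionally read as moderate agreement
```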


Subject(s)
COVID-19/diagnosis , Deep Learning/statistics & numerical data , Radiography, Thoracic/methods , SARS-CoV-2/genetics , Thorax/diagnostic imaging , Adult , Age Factors , Aged , COVID-19/epidemiology , COVID-19/therapy , COVID-19/virology , Comorbidity , Feasibility Studies , Female , Humans , Italy/epidemiology , Male , Middle Aged , Radiography, Thoracic/classification , Radiologists , Retrospective Studies , Severity of Illness Index , Thorax/pathology
5.
Sci Rep ; 10(1): 13590, 2020 08 12.
Article in English | MEDLINE | ID: mdl-32788602

ABSTRACT

Chest radiographs are among the most frequently acquired images in radiology and are often the subject of computer vision research. However, most of the models used to classify chest radiographs are derived from openly available deep neural networks trained on large image datasets. These datasets differ from chest radiographs in that they are mostly color images and have substantially more labels. Therefore, very deep convolutional neural networks (CNN) designed for ImageNet, often representing more complex relationships, might not be required for the comparably simpler task of classifying medical image data. Sixteen different CNN architectures were compared with regard to classification performance on two openly available datasets, CheXpert and the COVID-19 Image Data Collection. Areas under the receiver operating characteristic curve (AUROC) between 0.83 and 0.89 were achieved on the CheXpert dataset. On the COVID-19 Image Data Collection, all models showed an excellent ability to detect COVID-19 and non-COVID pneumonia, with AUROC values between 0.983 and 0.998. Shallower networks were observed to achieve results comparable to their deeper and more complex counterparts with shorter training times, enabling classification performance on medical image data close to state-of-the-art methods even on limited hardware.
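The comparison protocol - training architectures of different depth the same way and recording performance and cost - can be sketched as a loop over model constructors. The three models shown are an assumed subset of the sixteen compared; a real run would also fine-tune each network and record AUROC.

```python
# Compare shallow vs. deep architectures on parameter count and inference time.
import time
import torch
from torchvision import models

candidates = {
    "resnet18": models.resnet18,        # shallow
    "resnet152": models.resnet152,      # deep
    "densenet121": models.densenet121,
}

for name, ctor in candidates.items():
    net = ctor(weights=None).eval()
    n_params = sum(p.numel() for p in net.parameters())
    with torch.no_grad():
        t0 = time.time()
        net(torch.randn(1, 3, 224, 224))  # one dummy forward pass
        dt = time.time() - t0
    print(f"{name}: {n_params / 1e6:.1f}M params, {dt:.2f}s/forward")
```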


Subject(s)
Betacoronavirus , Coronavirus Infections/diagnostic imaging , Deep Learning , Diagnosis, Computer-Assisted/methods , Neural Networks, Computer , Pneumonia, Viral/diagnostic imaging , Radiography, Thoracic/classification , Thorax/diagnostic imaging , COVID-19 , Coronavirus Infections/virology , Humans , Pandemics , Pneumonia, Viral/virology , ROC Curve , SARS-CoV-2 , Sensitivity and Specificity
6.
J Infect Public Health ; 13(10): 1381-1396, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32646771

ABSTRACT

This study presents a systematic review of artificial intelligence (AI) techniques used in the detection and classification of coronavirus disease 2019 (COVID-19) medical images in terms of evaluation and benchmarking. Five reliable databases, namely IEEE Xplore, Web of Science, PubMed, ScienceDirect, and Scopus, were used to obtain relevant studies on the given topic. Several filtering and scanning stages were performed according to the inclusion/exclusion criteria to screen the 36 studies obtained; only 11 studies met the criteria. A taxonomy was constructed, and the 11 studies were classified into two categories, namely review and research studies. A deep analysis and critical review were then performed to highlight the challenges and critical gaps in the academic literature on the subject. Results showed that no relevant study has evaluated and benchmarked the AI techniques utilised in the classification tasks (i.e., binary, multi-class, multi-labelled, and hierarchical classification) of COVID-19 medical images. If evaluation and benchmarking are conducted, three challenges will be encountered, namely multiple evaluation criteria within each classification task, trade-offs amongst criteria, and the relative importance of these criteria. Given these challenges, the evaluation and benchmarking of AI techniques used in the classification of COVID-19 medical images is a complex multi-attribute decision problem; thus, adopting multi-criteria decision analysis (MCDA) is an essential and effective approach to tackle the problem's complexity. Moreover, this study proposes a detailed methodology for the evaluation and benchmarking of AI techniques used in all classification tasks of COVID-19 medical images as a future direction; the methodology comprises three sequential phases. Firstly, an identification procedure constructs four decision matrices, namely binary, multi-class, multi-labelled, and hierarchical, based on the intersection of the evaluation criteria of each classification task and the AI classification techniques. Secondly, an MCDA approach for benchmarking AI classification techniques is developed on the basis of the integrated analytic hierarchy process and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) methods. Lastly, objective and subjective validation procedures are described to validate the proposed benchmarking solutions.
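Of the two MCDA methods named, VIKOR is compact enough to sketch: alternatives are ranked by a compromise between group utility and individual regret over a weighted decision matrix. The scores and equal criteria weights below are made up purely for illustration.

```python
# Compact VIKOR ranking over a benefit-criteria decision matrix.
import numpy as np

def vikor(matrix, weights, v=0.5):
    """Rows = alternatives (e.g., classifiers), columns = benefit criteria."""
    best, worst = matrix.max(axis=0), matrix.min(axis=0)
    norm = (best - matrix) / (best - worst)           # normalized regret
    S = (weights * norm).sum(axis=1)                  # group utility
    R = (weights * norm).max(axis=1)                  # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q  # lower Q = better-ranked alternative

scores = np.array([[0.92, 0.88, 0.90],   # hypothetical accuracy/recall/F1 rows
                   [0.95, 0.80, 0.85],
                   [0.90, 0.91, 0.89]])
print(vikor(scores, weights=np.ones(3) / 3))
```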


Subject(s)
Artificial Intelligence/standards , Benchmarking , Coronavirus Infections/diagnostic imaging , Decision Support Techniques , Pneumonia, Viral/diagnostic imaging , Radiography, Thoracic/classification , Tomography, X-Ray Computed/classification , Betacoronavirus , COVID-19 , Humans , Pandemics , SARS-CoV-2
7.
IEEE J Biomed Health Inform ; 24(8): 2292-2302, 2020 08.
Article in English | MEDLINE | ID: mdl-31976915

ABSTRACT

Existing multi-label medical image learning tasks generally contain rich relationship information among pathologies, such as label co-occurrence and interdependency, which is of great importance for assisting clinical diagnosis and can be represented as graph-structured data. However, most state-of-the-art works focus only on regression from the input to the binary labels, failing to make full use of such valuable graph-structured information due to the complexity of graph data. In this paper, we propose a novel label co-occurrence learning framework based on Graph Convolution Networks (GCNs), which we term "CheXGCN", to explicitly explore the dependencies between pathologies for the multi-label chest X-ray (CXR) image classification task. Specifically, the proposed CheXGCN consists of two modules, i.e., the image feature embedding (IFE) module and the label co-occurrence learning (LCL) module. Thanks to the LCL module, the relationships between pathologies are generalized into a set of classifier scores by introducing word embeddings of the pathologies and multi-layer graph information propagation. During end-to-end training, these scores can be flexibly integrated with the IFE module to adaptively recalibrate the multi-label outputs. Extensive experiments on the ChestX-Ray14 and CheXpert datasets have demonstrated the effectiveness of CheXGCN compared with state-of-the-art baselines.
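A sketch of the label co-occurrence idea: build a thresholded conditional-probability adjacency matrix from the training labels, propagate label (word) embeddings through a graph convolution, and use the result to recalibrate the CNN's scores. The dimensions, threshold, and single-layer form are simplifying assumptions, not CheXGCN's exact design.

```python
# Label co-occurrence graph + one graph-convolution step, in the spirit of
# co-occurrence-aware multi-label classification.
import torch

def cooccurrence_adjacency(Y, tau=0.3):
    """Y: (n_samples, n_labels) binary matrix -> thresholded P(Lj | Li)."""
    counts = Y.T @ Y                                   # pairwise co-occurrence
    cond = counts / counts.diagonal().clamp(min=1).unsqueeze(1)
    return (cond > tau).float()

def gcn_layer(A, H, W):
    """One propagation step: row-normalize A, then A_hat @ H @ W."""
    A_hat = A / A.sum(dim=1).clamp(min=1).unsqueeze(1)
    return torch.relu(A_hat @ H @ W)

n_labels, emb_dim, feat_dim = 14, 300, 2048
Y = (torch.rand(1000, n_labels) > 0.8).float()         # fake training labels
A = cooccurrence_adjacency(Y)
H = torch.randn(n_labels, emb_dim)                     # label word embeddings
W = torch.randn(emb_dim, feat_dim)                     # learnable weights
classifiers = gcn_layer(A, H, W)                       # (n_labels, feat_dim)
image_feat = torch.randn(feat_dim)                     # from the CNN backbone
logits = classifiers @ image_feat                      # recalibrated label scores
```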


Subject(s)
Image Interpretation, Computer-Assisted/methods , Lung Diseases/diagnostic imaging , Neural Networks, Computer , Radiography, Thoracic/classification , Thorax/diagnostic imaging , Data Curation/methods , Databases, Factual , Humans , Lung Diseases/classification , Lung Diseases/pathology
8.
Artif Intell Med ; 91: 72-81, 2018 09.
Article in English | MEDLINE | ID: mdl-29887337

ABSTRACT

Radiological reporting generates a large amount of free-text clinical narratives, a potentially valuable source of information for improving clinical care and supporting research. The use of automatic techniques to analyze such reports is necessary to make their content effectively available to radiologists in an aggregated form. In this paper we focus on the classification of chest computed tomography reports according to a classification schema proposed for this task by radiologists of the Italian hospital ASST Spedali Civili di Brescia. The proposed system is built exploiting a training data set containing reports annotated by radiologists. Each report is classified according to the schema developed by radiologists and textual evidences are marked in the report. The annotations are then used to train different machine learning based classifiers. We present in this paper a method based on a cascade of classifiers which make use of a set of syntactic and semantic features. The resulting system is a novel hierarchical classification system for the given task, that we have experimentally evaluated.
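As a rough illustration of the cascade idea (though not of the paper's specific syntactic and semantic feature set), a two-stage text classifier can gate a fine-grained classifier behind a coarse one. TF-IDF features and the toy reports below are assumptions for the sketch.

```python
# Two-stage cascade over report text: stage 1 decides whether a report
# contains any finding; only positives reach the finer-grained stage 2.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reports = ["no acute findings", "nodular opacity right upper lobe",
           "pleural effusion left base", "clear lungs"]
has_finding = np.array([0, 1, 1, 0])
finding_type = np.array([1, 0])        # labels for the two positive reports

vec = TfidfVectorizer()
X = vec.fit_transform(reports)

stage1 = LogisticRegression().fit(X, has_finding)            # any finding?
pos_idx = np.where(has_finding == 1)[0]
stage2 = LogisticRegression().fit(X[pos_idx], finding_type)  # which finding?

def classify(text):
    x = vec.transform([text])
    if stage1.predict(x)[0] == 0:
        return "no finding"
    return f"finding class {stage2.predict(x)[0]}"
```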


Subject(s)
Data Mining/methods , Information Storage and Retrieval/methods , Natural Language Processing , Radiography, Thoracic/classification , Tomography, X-Ray Computed/classification , Decision Trees , Humans , Interatrial Block , Machine Learning
9.
Clin Radiol ; 73(9): 827-831, 2018 09.
Article in English | MEDLINE | ID: mdl-29898829

ABSTRACT

AIM: To develop a machine learning-based model for the binary classification of chest radiography abnormalities, to serve as a retrospective tool in guiding clinician reporting prioritisation. MATERIALS AND METHODS: The open-source machine learning library, Tensorflow, was used to retrain a final layer of the deep convolutional neural network, Inception, to perform binary normality classification on two, anonymised, public image datasets. Re-training was performed on 47,644 images using commodity hardware, with validation testing on 5,505 previously unseen radiographs. Confusion matrix analysis was performed to derive diagnostic utility metrics. RESULTS: A final model accuracy of 94.6% (95% confidence interval [CI]: 94.3-94.7%) based on an unseen testing subset (n=5,505) was obtained, yielding a sensitivity of 94.6% (95% CI: 94.4-94.7%) and a specificity of 93.4% (95% CI: 87.2-96.9%) with a positive predictive value (PPV) of 99.8% (95% CI: 99.7-99.9%) and area under the curve (AUC) of 0.98 (95% CI: 0.97-0.99). CONCLUSION: This study demonstrates the application of a machine learning-based approach to classify chest radiographs as normal or abnormal. Its application to real-world datasets may be warranted in optimising clinician workload.
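For reference, the reported sensitivity, specificity, and PPV all derive from the confusion matrix named in the methods; a minimal sketch on hypothetical predictions:

```python
# Diagnostic utility metrics from a 2x2 confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 1, 0, 1, 0, 1]   # 1 = abnormal; illustrative only
y_pred = [1, 1, 0, 1, 1, 1, 0, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f}")
```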


Subject(s)
Cloud Computing , Machine Learning , Neural Networks, Computer , Radiography, Thoracic/classification , Datasets as Topic , Diagnosis, Differential , Humans , Sensitivity and Specificity
10.
RECIIS (Online) ; 12(1): 1-29, Jan.-Mar. 2018.
Article in Portuguese | LILACS | ID: biblio-885065

ABSTRACT

This article is based on a study in which terms from the domain of obstetric radiology were collected and organized, and it was then determined whether those terms are covered by four distinct controlled vocabularies: OntoNeo, RadLex, LOINC, and SNOMED. We present the STT/SC - Sistema Integrado Catarinense de Telemedicina e Telessaúde (Santa Catarina's integrated telemedicine and telehealth system) - and the project for structuring diagnostic reports of obstetric radiology examinations, as well as the theoretical context of information science regarding controlled vocabularies. A field survey was carried out with a domain expert to collect the terms, together with documentary research for the statistical survey of the terms in the controlled vocabularies. A hierarchy of the collected terms was established, and the coverage of each controlled vocabulary with respect to those terms was verified. SNOMED is the controlled vocabulary with the greatest potential for indexing diagnostic reports in the field of obstetric radiology.


Subject(s)
Humans , Information Systems/standards , Obstetrics , Radiography, Thoracic/classification , Telemedicine , Terminology as Topic , Vocabulary, Controlled , Health Information Exchange , Information Storage and Retrieval
11.
J Digit Imaging ; 30(4): 460-468, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28600640

ABSTRACT

The goal of this study was to evaluate the efficacy of deep convolutional neural networks (DCNNs) in differentiating subtle, intermediate, and more obvious image differences in radiography. Three different datasets were created: presence/absence of the endotracheal (ET) tube (n = 300), low/normal position of the ET tube (n = 300), and chest/abdominal radiographs (n = 120). The datasets were split into training, validation, and test sets. Both untrained and pre-trained deep neural networks were employed, including AlexNet and GoogLeNet classifiers, using the Caffe framework. Data augmentation was performed for the presence/absence and low/normal ET tube datasets. Receiver operating characteristic (ROC) curves, areas under the curve (AUC), and 95% confidence intervals were calculated. Statistical differences between the AUCs were determined using a non-parametric approach. The pre-trained AlexNet and GoogLeNet classifiers had perfect accuracy (AUC 1.00) in differentiating chest vs. abdominal radiographs using only 45 training cases. For the more difficult datasets, including presence/absence and low/normal position of the endotracheal tube, more training cases, pre-trained networks, and data-augmentation approaches helped increase accuracy. The best-performing network for classifying presence vs. absence of an ET tube was still very accurate, with an AUC of 0.99. However, for the most difficult dataset, low vs. normal position of the endotracheal tube, DCNNs did not perform as well but still achieved a reasonable AUC of 0.81.
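The data augmentation mentioned above is typically a stack of small random image transforms; the sketch below shows one plausible torchvision pipeline, with transforms and ranges chosen as assumptions rather than the authors' recipe.

```python
# Standard augmentation pipeline for enlarging a small radiograph dataset.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=5),                 # small rotations
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),  # mild crops/rescales
    transforms.ColorJitter(brightness=0.1, contrast=0.1), # exposure variation
    transforms.ToTensor(),
])
# Applying `augment` repeatedly to each source radiograph yields many
# slightly different training samples from a single image.
```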


Subject(s)
Intubation, Intratracheal/methods , Neural Networks, Computer , Radiography, Abdominal/classification , Radiography, Thoracic/classification , Area Under Curve , Datasets as Topic , Humans , Intubation, Intratracheal/instrumentation , ROC Curve
12.
Radiology ; 284(3): 870-876, 2017 09.
Article in English | MEDLINE | ID: mdl-28430556

ABSTRACT

Purpose To assess the level of concordance between chest radiographic classifications of A and B Readers in a national surveillance program offered to U.S. coal miners over an approximate 36-year period. Materials and Methods The National Institute for Occupational Safety and Health (NIOSH) Coal Workers' Health Surveillance Program (CWHSP) is a surveillance program with nonresearch designation and is exempt from Human Subjects Review Board approval (11-DRDS-NR03). Thirty-six years of data (1979-2015) from the CWHSP were analyzed, which included all conventional screen-film radiographs with a classification by at least one A Reader and one B Reader. Agreement was assessed by using κ statistics; prevalence ratios were used to describe differences between A and B Reader determinations of image technical quality, small opacity profusion, and presence of large opacities and pleural abnormalities. Results The analysis included 79,185 matched A and B Reader chest radiograph classifications. A majority of both A and B Readers were radiologists (74.2% [213 of 287] vs 64.7% [22 of 34]; P = .04). A and B Readers had minimal agreement on technical image quality (κ = 0.0796; 95% confidence interval [CI]: 0.07, 0.08) and the distribution of small opacity profusion (subcategory κ, 0.2352; 95% CI: 0.22, 0.25). A Readers classified more images as "good" quality (prevalence ratio, 1.38; 95% CI: 1.35, 1.41) and identified more pneumoconiosis (prevalence ratio, 1.22; 95% CI: 1.20, 1.23). Conclusion A Readers classified substantially more radiographs with evidence of pneumoconiosis and classified higher small opacity profusion compared with B Readers. These observations reinforce the importance of multiple classifications by readers who have demonstrated ongoing competence in the International Labour Office classification system to ensure accurate radiographic classifications. © RSNA, 2017.
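The prevalence ratios reported here are ratios of two readers' positive-classification proportions; a minimal sketch with a standard log-normal confidence interval, using made-up counts rather than CWHSP data:

```python
# Prevalence ratio of two proportions with a log-normal (Katz) 95% CI.
import numpy as np

def prevalence_ratio(a_pos, a_n, b_pos, b_n, z=1.96):
    pr = (a_pos / a_n) / (b_pos / b_n)
    se = np.sqrt(1/a_pos - 1/a_n + 1/b_pos - 1/b_n)   # SE of log(PR)
    lo, hi = np.exp(np.log(pr) + np.array([-z, z]) * se)
    return pr, lo, hi

# e.g., A Readers flag pneumoconiosis on 1,200/10,000 films vs 1,000/10,000:
print(prevalence_ratio(1200, 10000, 1000, 10000))
```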


Subject(s)
Occupational Diseases/diagnostic imaging , Occupational Health/standards , Pneumoconiosis/diagnostic imaging , Radiography, Thoracic/classification , Coal Industry , Humans , Observer Variation , Reproducibility of Results , United States , United States Occupational Safety and Health Administration/organization & administration
13.
J Digit Imaging ; 30(1): 95-101, 2017 02.
Article in English | MEDLINE | ID: mdl-27730417

ABSTRACT

The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI: 99.73-100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
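The Youden Index cutoff mentioned in the methods maximizes J = sensitivity + specificity - 1 over candidate thresholds; a minimal scikit-learn sketch on hypothetical scores:

```python
# Pick a binary decision threshold by maximizing Youden's J along the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1])               # illustrative labels
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9])  # model probabilities

fpr, tpr, thresholds = roc_curve(y_true, scores)
j = tpr - fpr                       # Youden's J at each candidate threshold
best = thresholds[np.argmax(j)]
print(f"optimal cutoff = {best:.2f}")
```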


Subject(s)
Neural Networks, Computer , Radiography, Thoracic/classification , Humans , Radiography/classification , Radiography, Thoracic/statistics & numerical data , Random Allocation , Retrospective Studies
14.
Radiología (Madr., Ed. impr.) ; 56(6): 548-560, Nov.-Dec. 2014.
Article in Spanish | IBECS | ID: ibc-129927

ABSTRACT

The lateral view is an essential part of the radiographic study of the chest, and knowledge of its anatomy is fundamental to detecting the variations that different diseases produce on it. The aim of this article is to review the normal anatomy of this projection, as well as its main normal variants. For teaching purposes, we divide the thorax into different spaces and analyze each in an orderly way, placing special emphasis on the anatomic details that are most helpful for locating lesions already detected in the posteroanterior view or for detecting lesions that can be missed in that view.


Subject(s)
Humans , Male , Female , Radiography, Thoracic/instrumentation , Radiography, Thoracic/methods , Radiography, Thoracic , Anatomy/trends , Pathology/trends , Radiography, Thoracic/classification , Radiography, Thoracic/standards , Radiography, Thoracic/trends , Lung/pathology , Lung
16.
Acad Radiol ; 19(2): 131-40, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22098943

ABSTRACT

RATIONALE AND OBJECTIVES: Analog film radiographs are typically used to classify pneumoconiosis to allow comparison with standard film radiographs. The aim of this study was to determine if digital radiography is comparable to film for the purpose of classifying pneumoconiotic pleural abnormalities. MATERIALS AND METHODS: Subjects were 200 asbestos-exposed patients, from whom digital and film chest radiographs were obtained along with chest high-resolution computed tomographic scans. Using a crossover design, radiographs were independently read on two occasions by seven readers, using conventional International Labour Organization standards for film and digitized standards for digital. High-resolution computed tomographic scans were read independently by three readers. Areas under the receiver-operating characteristic curves were calculated using high-resolution computed tomographic ratings as the gold standard for disease status. Mixed linear models were fit to estimate the effects of order of presentation, occasion, and modality, treating the seven readers as a random effect. Comparing digital and film radiography for each reader and occasion, crude agreement and agreement beyond chance (κ) were also calculated. RESULTS: The linear models showed no statistically significant sequence effect for order of presentation (P = .73) or occasion (P = .28). Most important, the difference between modalities was not statistically significant (digital vs film, P = .54). The mean area under the curve for film was 0.736 and increased slightly to 0.741 for digital. Mean crude agreement for the presence of pleural abnormalities consistent with pneumoconiosis across all readers and occasions was 78.3%, while the mean κ value was 0.49. CONCLUSIONS: These results indicate that digital radiography is not statistically different from analog film for the purpose of classifying pneumoconiotic pleural abnormalities, when appropriate standards are used.


Subject(s)
Pleural Diseases/diagnostic imaging , Pneumoconiosis/classification , Pneumoconiosis/diagnostic imaging , Radiographic Image Enhancement/methods , Radiography, Thoracic/classification , Radiography, Thoracic/methods , Tomography, X-Ray Computed/methods , X-Ray Film , Area Under Curve , Cross-Over Studies , Female , Humans , Linear Models , Male , Middle Aged , ROC Curve , Reproducibility of Results
17.
IEEE Trans Biomed Eng ; 57(11)2010 Nov.
Article in English | MEDLINE | ID: mdl-20624701

ABSTRACT

Tuberculosis (TB) is a deadly infectious disease, and the presence of cavities in the upper lung zones is a strong indicator that the disease has developed into a highly infectious state. Currently, the detection of TB cavities is mainly conducted by clinicians observing chest radiographs. Diagnoses performed by radiologists are labor intensive, and very often there is insufficient healthcare personnel available, especially in remote communities. After assessing existing approaches, we propose an automated segmentation technique that takes a hybrid knowledge-based Bayesian classification approach to detecting TB cavities. We apply gradient inverse coefficient of variation (GICOV) and circularity measures to classify detected features and confirm true TB cavities. Comparing with non-hybrid approaches and classical active contour techniques for feature extraction in medical images, experimental results demonstrate that our approach achieves high accuracy with a low false-positive rate in detecting TB cavities.
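Of the two shape features named, circularity is the simpler: 4πA/P², approaching 1 for a perfect circle. A minimal sketch with scikit-image on a synthetic mask (GICOV is omitted here, since it additionally requires image intensities along the candidate boundary):

```python
# Circularity of a segmented region: 4*pi*area / perimeter^2.
import numpy as np
from skimage import draw, measure

mask = np.zeros((100, 100), dtype=np.uint8)
rr, cc = draw.disk((50, 50), 20)               # synthetic round "cavity"
mask[rr, cc] = 1

props = measure.regionprops(measure.label(mask))[0]
circularity = 4 * np.pi * props.area / props.perimeter ** 2
print(f"circularity = {circularity:.2f}")      # ~1.0 for a near-perfect circle
```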


Subject(s)
Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/classification , Tuberculosis, Pulmonary/diagnostic imaging , Algorithms , Databases, Factual , Humans
18.
J Digit Imaging ; 21(4): 363-70, 2008 Dec.
Article in English | MEDLINE | ID: mdl-17661140

ABSTRACT

INTRODUCTION: To validate a preliminary version of a radiological lexicon (RadLex) against terms found in thoracic CT reports and to index report content in RadLex term categories. MATERIAL AND METHODS: Terms from a random sample of 200 thoracic CT reports were extracted using a text processor and matched against RadLex. Report content was manually indexed by two radiologists in consensus in the term categories Anatomic Location, Finding, Modifier, Relationship, Image Quality, and Uncertainty. Descriptive statistics were used, and differences between age groups and report types were tested for significance using the Kruskal-Wallis and Mann-Whitney tests (significance level < 0.05). RESULTS: Of 363 terms extracted, 304 (84%) were found and 59 (16%) were not found in RadLex. Report indexing showed a mean of 16.2 encoded items and 3.2 Finding items per report. The term categories most frequently encoded were Modifier (1,030 of 3,244; 31.8%), Anatomic Location (813; 25.1%), Relationship (702; 21.6%), and Finding (638; 19.7%). The frequency of indexed items per report was higher in older age groups, but no significant difference was found between first-study and follow-up study reports. The frequency of distinct findings per report increased with patient age (p < 0.05). CONCLUSION: RadLex already covers most terms present in thoracic CT reports, based on a small sample analysis from one institution. Applications for report encoding need to be developed to validate the lexicon against a larger sample of reports and to address the issue of automatic relationship encoding.
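The coverage figure (304 of 363 terms, 84%) is essentially a set-intersection computation; a minimal sketch with stand-in word lists (not actual RadLex content):

```python
# Lexicon coverage: what fraction of extracted report terms appears in the lexicon?
radlex_terms = {"nodule", "consolidation", "right upper lobe", "effusion"}
report_terms = {"nodule", "effusion", "ground-glass opacity"}

found = report_terms & radlex_terms
coverage = len(found) / len(report_terms)
print(f"coverage = {coverage:.0%}")   # 2 of 3 terms -> 67%
```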


Subject(s)
Abstracting and Indexing/methods , Radiography, Thoracic/classification , Radiology Information Systems , Software Validation , Tomography, X-Ray Computed/classification , Vocabulary, Controlled , Adult , Aged , Humans , Middle Aged , Observer Variation , Online Systems , Retrospective Studies , Software
19.
Rev. méd. Minas Gerais ; 17(1/2,supl.3): S185-S193, Dec. 2007. ilus
Article in Portuguese | LILACS | ID: lil-552120

ABSTRACT

Objective: to present the basic terminology used to describe the chest radiographic images most common in pediatrics. Methods: 16 references were selected through a search of the MEDLINE and LILACS databases, in Portuguese, Spanish, and English, using the keywords radiology, thorax, and terminology. Conclusions: there is no standardization or normalization in the description of chest radiographs. Although specific terms have been published in the literature, their use is still incipient. The variety of definitions and nomenclature used by different universities and medical residency programs hinders the interpretation of the information generated, the formulation of research questions, and the conduct of comparative studies. A systematic approach to the analysis of thoracic radiology should be adopted by all professionals.


Subject(s)
Humans , Radiography, Thoracic/classification , Terminology as Topic , Pediatrics
20.
Cienc. Trab ; 8(21): 104-116, Jul.-Sep. 2006. ilus
Article in Spanish | LILACS | ID: lil-452474

ABSTRACT

Asbestos is a substance that can cause various abnormalities on pleuropulmonary imaging examinations: asbestosis, pleural thickening, pleural effusion, lung tumors, and mesothelioma. This article reviews the main diagnostic methods for these diseases: chest radiography, chest computed tomography (conventional and high-resolution), and others. The advantages and disadvantages of each are discussed, as well as their indications in occupational health surveillance. Finally, the International Labour Organization (ILO) classification for chest radiographs is briefly described.


Subject(s)
Humans , Asbestos/adverse effects , Asbestos/toxicity , Asbestosis , Diagnostic Imaging/methods , Lung Diseases , Mesothelioma , Radiography, Thoracic/classification , Pleural Diseases/diagnosis , Pneumoconiosis