Results 1 - 17 of 17
1.
PLoS One ; 19(4): e0300716, 2024.
Article in English | MEDLINE | ID: mdl-38578764

ABSTRACT

BACKGROUND AND PURPOSE: Mean pulmonary artery pressure (mPAP) is a key index for chronic thromboembolic pulmonary hypertension (CTEPH). Using machine learning, we attempted to construct an accurate prediction model for mPAP in patients with CTEPH. METHODS: A total of 136 patients diagnosed with CTEPH were included, for whom mPAP was measured. The following patient data were used as explanatory variables in the model: basic patient information (age and sex), blood tests (brain natriuretic peptide (BNP)), echocardiography (tricuspid valve pressure gradient (TRPG)), and chest radiography (cardiothoracic ratio (CTR), right second arc ratio, and presence of avascular area). Seven machine learning methods including linear regression were used for the multivariable prediction models. Additionally, prediction models were constructed using the AutoML software. Among the 136 patients, 2/3 and 1/3 were used as training and validation sets, respectively. The average of R squared was obtained from 10 different data splittings of the training and validation sets. RESULTS: The optimal machine learning model was linear regression (averaged R squared, 0.360). The optimal combination of explanatory variables with linear regression was age, BNP level, TRPG level, and CTR (averaged R squared, 0.388). The R squared of the optimal multivariable linear regression model was higher than that of the univariable linear regression model with only TRPG. CONCLUSION: We constructed a more accurate prediction model for mPAP in patients with CTEPH than a model of TRPG only. The prediction performance of our model was improved by selecting the optimal machine learning method and combination of explanatory variables.
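
A minimal sketch of the evaluation scheme described above (repeated 2/3 : 1/3 splits, a linear model on age, BNP, TRPG, and CTR, and the R squared averaged over 10 splits); the DataFrame and column names are placeholders, not the authors' code.

```python
# Sketch: averaged R^2 of a linear model over 10 random train/validation splits.
# Assumes a pandas DataFrame `df` with columns "age", "bnp", "trpg", "ctr", "mpap".
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def averaged_r2(df: pd.DataFrame, features, target="mpap", n_splits=10) -> float:
    scores = []
    for seed in range(n_splits):
        X_train, X_val, y_train, y_val = train_test_split(
            df[features], df[target], test_size=1/3, random_state=seed)
        model = LinearRegression().fit(X_train, y_train)
        scores.append(r2_score(y_val, model.predict(X_val)))
    return float(np.mean(scores))

# Example: averaged_r2(df, ["age", "bnp", "trpg", "ctr"])
```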


Subject(s)
Pulmonary Hypertension, Pulmonary Embolism, Humans, Pulmonary Hypertension/diagnosis, Arterial Pressure, Echocardiography/methods, Tricuspid Valve, Brain Natriuretic Peptide, Pulmonary Embolism/complications, Pulmonary Embolism/diagnostic imaging, Chronic Disease
3.
Acad Radiol ; 31(3): 822-829, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37914626

ABSTRACT

RATIONALE AND OBJECTIVES: Pericardial fat (PF), the thoracic visceral fat surrounding the heart, promotes the development of coronary artery disease by inducing inflammation of the coronary arteries. To evaluate PF, we generated pericardial fat count images (PFCIs) from chest radiographs (CXRs) using a dedicated deep-learning model. MATERIALS AND METHODS: We reviewed data of 269 consecutive patients who underwent coronary computed tomography (CT). We excluded patients with metal implants, pleural effusion, a history of thoracic surgery, or malignancy. Thus, the data of 191 patients were used. We generated PFCIs from the projection of three-dimensional CT images, wherein fat accumulation was represented by a high pixel value. Three different deep-learning models, including CycleGAN, were combined in the proposed method to generate PFCIs from CXRs. A single CycleGAN-based model was used to generate PFCIs from CXRs for comparison with the proposed method. To evaluate the image quality of the generated PFCIs, the structural similarity index measure (SSIM), mean squared error (MSE), and mean absolute error (MAE) of (i) the PFCI generated using the proposed method and (ii) the PFCI generated using the single model were compared. RESULTS: The mean SSIM, MSE, and MAE were 8.56 × 10⁻¹, 1.28 × 10⁻², and 3.57 × 10⁻², respectively, for the proposed model, and 7.62 × 10⁻¹, 1.98 × 10⁻², and 5.04 × 10⁻², respectively, for the single CycleGAN-based model. CONCLUSION: PFCIs generated from CXRs with the proposed model showed better performance than those generated with the single model. The evaluation of PF without CT may be possible using the proposed method.
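
A minimal sketch of the image-quality metrics used above (SSIM, MSE, MAE) between a generated PFCI and its CT-derived reference; array names and the [0, 1] scaling are assumptions, not the authors' pipeline.

```python
# Sketch: SSIM, MSE, and MAE between a generated PFCI and the reference image.
# `generated` and `reference` are assumed to be 2-D float arrays scaled to [0, 1].
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

def pfci_quality(generated: np.ndarray, reference: np.ndarray):
    ssim = structural_similarity(reference, generated, data_range=1.0)
    mse = mean_squared_error(reference, generated)
    mae = float(np.mean(np.abs(reference - generated)))
    return ssim, mse, mae
```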


Subject(s)
Deep Learning, Humans, Computer-Assisted Image Processing/methods, Three-Dimensional Imaging, X-Ray Computed Tomography
4.
Sci Rep ; 13(1): 19901, 2023 11 14.
Article in English | MEDLINE | ID: mdl-37963952

ABSTRACT

"Preprocessing" is the first step required in brain image analysis that improves the overall quality and reliability of the results. However, it is computationally demanding and time-consuming, particularly to handle and parcellate complicatedly folded cortical ribbons of the human brain. In this study, we aimed to shorten the analysis time for data preprocessing of 1410 brain images simultaneously on one of the world's highest-performing supercomputers, "Fugaku." The FreeSurfer was used as a benchmark preprocessing software for cortical surface reconstruction. All the brain images were processed simultaneously and successfully analyzed in a calculation time of 17.33 h. This result indicates that using a supercomputer for brain image preprocessing allows big data analysis to be completed shortly and flexibly, thus suggesting the possibility of supercomputers being used for expanding large data analysis and parameter optimization of preprocessing in the future.


Subject(s)
Brain, Software, Humans, Reproducibility of Results, Brain/diagnostic imaging, Computer-Assisted Image Processing/methods, Computers
5.
PeerJ Comput Sci ; 9: e1620, 2023.
Article in English | MEDLINE | ID: mdl-37869462

ABSTRACT

Purpose: The purpose of this study was to compare two libraries dedicated to the Markov chain Monte Carlo method: pystan and numpyro. In the comparison, we mainly focused on the agreement of the estimated latent parameters and the sampling performance of the Markov chain Monte Carlo method in Bayesian item response theory (IRT). Materials and methods: Bayesian 1PL-IRT and 2PL-IRT were implemented with pystan and numpyro. Then, the Bayesian 1PL-IRT and 2PL-IRT were applied to two types of medical data obtained from a published article. The same prior distributions of latent parameters were used in both pystan and numpyro. Estimation results of the latent parameters of 1PL-IRT and 2PL-IRT were compared between pystan and numpyro. Additionally, the computational cost of the Markov chain Monte Carlo method was compared between the two libraries. To evaluate the computational cost of the IRT models, simulated data were generated from the medical data using numpyro. Results: For all the combinations of IRT types (1PL-IRT or 2PL-IRT) and medical data types, the mean and standard deviation of the estimated latent parameters were in good agreement between pystan and numpyro. In most cases, the sampling time of the Markov chain Monte Carlo method was shorter in numpyro than in pystan. When the large-sized simulated data were used, numpyro with a graphics processing unit was useful for reducing the sampling time. Conclusion: Numpyro and pystan were both useful for applying the Bayesian 1PL-IRT and 2PL-IRT. Our results show that the two libraries yielded similar estimation results and that, in terms of sampling time, the faster library differed depending on the dataset size.
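
A minimal sketch of a Bayesian 1PL-IRT model in numpyro sampled with NUTS; the standard-normal priors and parameter names are illustrative assumptions, not necessarily the exact specification used in the study.

```python
# Sketch: Bayesian 1PL-IRT (Rasch-type) model sampled with NUTS in numpyro.
# `responses` is an (n_persons, n_items) array of 0/1 answers; priors are illustrative.
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def one_pl_irt(responses):
    n_persons, n_items = responses.shape
    theta = numpyro.sample("theta", dist.Normal(0.0, 1.0).expand([n_persons]))  # ability
    b = numpyro.sample("b", dist.Normal(0.0, 1.0).expand([n_items]))            # difficulty
    logits = theta[:, None] - b[None, :]
    numpyro.sample("obs", dist.Bernoulli(logits=logits), obs=responses)

# Example usage:
# mcmc = MCMC(NUTS(one_pl_irt), num_warmup=1000, num_samples=1000)
# mcmc.run(random.PRNGKey(0), responses=jnp.asarray(data))
# mcmc.print_summary()
```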

6.
Sci Rep ; 13(1): 17533, 2023 10 16.
Article in English | MEDLINE | ID: mdl-37845348

ABSTRACT

The aim of this study was to evaluate the diagnostic performance of our deep learning (DL) model for COVID-19 and to investigate whether the diagnostic performance of radiologists improved by referring to our model. Our datasets contained chest X-rays (CXRs) for the following three categories: normal (NORMAL), non-COVID-19 pneumonia (PNEUMONIA), and COVID-19 pneumonia (COVID). We used two public datasets and a private dataset collected from eight hospitals for the development and external validation of our DL model (26,393 CXRs). Eight radiologists performed two reading sessions: one session was performed with reference to CXRs only, and the other was performed with reference to both CXRs and the results of the DL model. The evaluation metrics for the reading sessions were accuracy, sensitivity, specificity, and area under the curve (AUC). The accuracy of our DL model was 0.733, and that of the eight radiologists without DL was 0.696 ± 0.031. There was a significant difference in AUC between the radiologists with and without DL for COVID versus NORMAL or PNEUMONIA (p = 0.0038). Our DL model alone showed better diagnostic performance than most radiologists. In addition, our model significantly improved the diagnostic performance of radiologists for COVID versus NORMAL or PNEUMONIA.
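
A minimal sketch of how the reading-session metrics (accuracy, sensitivity, specificity, AUC) could be computed for the binary COVID-versus-other grouping; the label encoding is an assumption.

```python
# Sketch: accuracy, sensitivity, specificity, and AUC for COVID vs NORMAL/PNEUMONIA.
# y_true / y_pred: 1 for COVID, 0 otherwise; y_score: predicted COVID probability.
from sklearn.metrics import confusion_matrix, roc_auc_score

def covid_vs_rest_metrics(y_true, y_pred, y_score):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    auc = roc_auc_score(y_true, y_score)
    return accuracy, sensitivity, specificity, auc
```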


Subject(s)
COVID-19, Deep Learning, Pneumonia, Humans, COVID-19/diagnostic imaging, COVID-19 Testing, X-Rays, X-Ray Computed Tomography/methods, Pneumonia/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation/methods, Radiologists, Computers, Retrospective Studies
7.
Med Phys ; 50(12): 7548-7557, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37651615

ABSTRACT

BACKGROUND: Deep learning (DL) has been widely used for diagnosis and prognosis prediction of numerous frequently occurring diseases. Generally, DL models require large datasets to perform accurate and reliable prognosis prediction and to avoid overfitting. However, prognosis prediction for rare diseases is still limited owing to the small number of cases, resulting in small datasets. PURPOSE: This paper proposes a multimodal DL method to predict the prognosis of patients with malignant pleural mesothelioma (MPM) using a small number of 3D positron emission tomography/computed tomography (PET/CT) images and clinical data. METHODS: A 3D convolutional conditional variational autoencoder (3D-CCVAE), which adds a 3D-convolutional layer and a conditional VAE to process 3D images, was used for dimensionality reduction of PET images. We developed a two-step model that performs dimensionality reduction using the 3D-CCVAE, which is resistant to overfitting. In the first step, clinical data were input to condition the model and perform dimensionality reduction of the PET images, resulting in more efficient dimensionality reduction. In the second step, a subset of the dimensionally reduced features and clinical data were combined to predict 1-year survival of patients using the random forest classifier. To demonstrate the usefulness of the 3D-CCVAE, we created a model without the conditional mechanism (3D-CVAE), one without the variational mechanism (3D-CCAE), and one without an autoencoder (without AE), and compared their prediction results. We used PET images and clinical data of 520 patients with histologically proven MPM. The data were randomly split in a 2:1 ratio (train : test) and three-fold cross-validation was performed. The models were trained on the training set and evaluated based on the test set results. The area under the receiver operating characteristic curve (AUC) for all models was calculated using their 1-year survival predictions, and the results were compared. RESULTS: We obtained AUC values of 0.76 (95% confidence interval [CI], 0.72-0.80) for the 3D-CCVAE model, 0.72 (95% CI, 0.68-0.77) for the 3D-CVAE model, 0.70 (95% CI, 0.66-0.75) for the 3D-CCAE model, and 0.69 (95% CI, 0.65-0.74) for the without-AE model. The 3D-CCVAE model performed better than the other models (3D-CVAE, p = 0.039; 3D-CCAE, p = 0.0032; and without AE, p = 0.0011). CONCLUSIONS: This study demonstrates the usefulness of the 3D-CCVAE in multimodal DL models trained on small datasets. Additionally, it shows that dimensionality reduction via an AE can be used to learn a DL model without increasing the risk of overfitting. Moreover, the VAE mechanism can mitigate the uncertainty in the model parameters that commonly arises with small datasets, thereby reducing the risk of overfitting. Additionally, more efficient dimensionality reduction of PET images can be performed by providing clinical data as conditions and ignoring clinical data-related features.
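
A minimal sketch of the second step only (a random forest on latent features concatenated with clinical variables, predicting 1-year survival); the 3D-CCVAE encoder producing the latent features is assumed to exist, and all names are placeholders rather than the authors' code.

```python
# Sketch: classify 1-year survival from concatenated latent features and clinical data.
# `latent_features`, `clinical`, and `survived_1y` are hypothetical arrays.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def survival_auc(latent_features, clinical, survived_1y, seed=0) -> float:
    X = np.concatenate([latent_features, clinical], axis=1)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, survived_1y, test_size=1/3, random_state=seed, stratify=survived_1y)
    clf = RandomForestClassifier(n_estimators=500, random_state=seed).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```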


Subject(s)
Malignant Mesothelioma, Humans, Positron Emission Tomography Computed Tomography, ROC Curve
8.
Cancers (Basel) ; 15(5)2023 Feb 28.
Article in English | MEDLINE | ID: mdl-36900325

ABSTRACT

We aimed to develop and evaluate an automatic prediction system for grading histopathological images of prostate cancer. A total of 10,616 whole slide images (WSIs) of prostate tissue were used in this study. The WSIs from one institution (5160 WSIs) were used as the development set, while those from the other institution (5456 WSIs) were used as the unseen test set. Label distribution learning (LDL) was used to address a difference in label characteristics between the development and test sets. A combination of EfficientNet (a deep learning model) and LDL was utilized to develop an automatic prediction system. Quadratic weighted kappa (QWK) and accuracy in the test set were used as the evaluation metrics. The QWK and accuracy were compared between systems with and without LDL to evaluate the usefulness of LDL in system development. The QWK and accuracy were 0.364 and 0.407 in the systems with LDL and 0.240 and 0.247 in those without LDL, respectively. Thus, LDL improved the diagnostic performance of the automatic prediction system for the grading of histopathological images for cancer. By handling the difference in label characteristics using LDL, the diagnostic performance of the automatic prediction system could be improved for prostate cancer grading.
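
A minimal sketch of the evaluation metrics named above (quadratic weighted kappa and accuracy) for predicted versus true grades; the label arrays are placeholders.

```python
# Sketch: quadratic weighted kappa (QWK) and accuracy for predicted vs true grades.
from sklearn.metrics import accuracy_score, cohen_kappa_score

def grading_metrics(y_true, y_pred):
    qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
    acc = accuracy_score(y_true, y_pred)
    return qwk, acc
```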

9.
Jpn J Radiol ; 41(4): 449-455, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36469224

ABSTRACT

PURPOSE: This study proposes a Bayesian multidimensional nominal response model (MD-NRM) to statistically analyze the nominal responses of multiclass classifications. MATERIALS AND METHODS: First, for the MD-NRM, we extended the conventional nominal response model to achieve stable convergence of the Bayesian nominal response model and utilized multidimensional ability parameters. We then applied the MD-NRM to a 3-class classification problem, where radiologists visually evaluated chest X-ray images and selected their diagnosis from one of the three classes. The classification problem consisted of 150 cases, and each of the six radiologists selected their diagnosis based on a visual evaluation of the images. Consequently, 900 (= 150 × 6) nominal responses were obtained. In the MD-NRM, we assumed that the responses were determined by the softmax function, the ability of the radiologists, and the difficulty of the images. In addition, we assumed that the multidimensional ability of one radiologist was represented by a 3 × 3 matrix. The latent parameters of the MD-NRM (ability parameters of radiologists and difficulty parameters of images) were estimated from the 900 responses. To implement the Bayesian MD-NRM and estimate the latent parameters, a probabilistic programming language (Stan, version 2.21.0) was used. RESULTS: For all parameters, the Rhat values were less than 1.10. This indicates that the latent parameters of the MD-NRM converged successfully. CONCLUSION: The results show that it is possible to estimate the latent parameters (ability and difficulty parameters) of the MD-NRM using Stan. Our code for the implementation of the MD-NRM is available as open source.
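
A minimal sketch of the softmax response mechanism for a 3-class nominal response. The way the 3 × 3 ability matrix and the image difficulty vector are combined below (a simple matrix-vector product) is purely an illustrative assumption; the exact MD-NRM parameterization is defined in the paper and its open-source code.

```python
# Sketch: softmax response probabilities over 3 classes, assuming (for illustration
# only) that the logits are the product of a radiologist's 3x3 ability matrix and
# an image's 3-element difficulty vector.
import numpy as np

def response_probabilities(ability: np.ndarray, difficulty: np.ndarray) -> np.ndarray:
    logits = ability @ difficulty            # shape (3,)
    logits = logits - logits.max()           # numerical stability
    exp_logits = np.exp(logits)
    return exp_logits / exp_logits.sum()     # probabilities over the 3 classes

# Example: response_probabilities(np.eye(3), np.array([0.2, -0.1, 0.5]))
```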


Subject(s)
Radiologists, Humans, Bayes Theorem
10.
Sci Rep ; 12(1): 11090, 2022 06 30.
Article in English | MEDLINE | ID: mdl-35773366

ABSTRACT

The integrated positron emission tomography/magnetic resonance imaging (PET/MRI) scanner simultaneously acquires metabolic information via PET and morphological information using MRI. However, attenuation correction, which is necessary for quantitative PET evaluation, is difficult because it requires the generation of attenuation-correction maps from MRI, which has no direct relationship with the gamma-ray attenuation information. MRI-based bone tissue segmentation can potentially be used for attenuation correction in relatively rigid and fixed regions such as the head and pelvis. However, this is challenging for the chest region because of respiratory and cardiac motion, its anatomically complicated structure, and the thin bone cortex. We propose a new method using unsupervised generative attentional networks with adaptive layer-instance normalisation for image-to-image translation (U-GAT-IT), which specialises in unpaired image translation guided by attention maps. We added the modality-independent neighbourhood descriptor (MIND) to the loss of U-GAT-IT to guarantee anatomical consistency in the image translation between different domains. Our proposed method produced synthesised computed tomography images of the chest. Experimental results showed that our method outperforms current approaches. The study findings suggest the possibility of synthesising clinically acceptable computed tomography images from chest MRI with minimal changes in anatomical structures without human annotation.
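
A minimal sketch of adding a MIND-style consistency term to an image-translation objective: a simplified self-similarity descriptor built from 4-neighbour differences, and an L1 loss between the descriptors of the source MRI and the synthesised CT. This is a schematic illustration under these simplifications, not the authors' implementation of MIND or of the U-GAT-IT loss.

```python
# Sketch: simplified MIND-style descriptor and consistency loss (PyTorch).
# Images are (N, 1, H, W) tensors; sigma and the 4-neighbour layout are assumptions.
import torch
import torch.nn.functional as F

def mind_descriptor(img: torch.Tensor, sigma: float = 0.5) -> torch.Tensor:
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    feats = []
    for dy, dx in shifts:
        shifted = torch.roll(img, shifts=(dy, dx), dims=(2, 3))
        feats.append(torch.exp(-((img - shifted) ** 2) / (2 * sigma ** 2)))
    desc = torch.cat(feats, dim=1)
    return desc / (desc.sum(dim=1, keepdim=True) + 1e-8)

def mind_loss(mri: torch.Tensor, synth_ct: torch.Tensor) -> torch.Tensor:
    return F.l1_loss(mind_descriptor(mri), mind_descriptor(synth_ct))

# Hypothetical use: total_loss = ugatit_loss + lambda_mind * mind_loss(mri, generator(mri))
```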


Subject(s)
Computer-Assisted Image Processing, Magnetic Resonance Imaging, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Pelvis, Positron-Emission Tomography/methods, X-Ray Computed Tomography
11.
Sci Rep ; 12(1): 8214, 2022 05 17.
Article in English | MEDLINE | ID: mdl-35581272

ABSTRACT

This retrospective study aimed to develop and validate a deep learning model for the classification of coronavirus disease-2019 (COVID-19) pneumonia, non-COVID-19 pneumonia, and the healthy using chest X-ray (CXR) images. One private and two public datasets of CXR images were included. The private dataset included CXRs from six hospitals. A total of 14,258 and 11,253 CXR images were included in the two public datasets, and 455 in the private dataset. A deep learning model based on EfficientNet with noisy student was constructed using the three datasets. The test set of 150 CXR images in the private dataset was evaluated by the deep learning model and six radiologists. Three-category classification accuracy and the class-wise area under the curve (AUC) for each of the COVID-19 pneumonia, non-COVID-19 pneumonia, and healthy classes were calculated. The consensus of the six radiologists was used to calculate the class-wise AUC. The three-category classification accuracy of our model was 0.8667, and those of the six radiologists ranged from 0.5667 to 0.7733. The class-wise AUCs for the healthy, non-COVID-19 pneumonia, and COVID-19 pneumonia classes were 0.9912, 0.9492, and 0.9752 for our model and 0.9656, 0.8654, and 0.8740 for the consensus of the six radiologists, respectively. The difference in class-wise AUC between our model and the consensus of the six radiologists was statistically significant for COVID-19 pneumonia (p = 0.001334). Thus, an accurate deep learning model for the three-category classification could be constructed; the diagnostic performance of our model was significantly better than that of the consensus interpretation by the six radiologists for COVID-19 pneumonia.
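
A minimal sketch of class-wise (one-vs-rest) AUC for the three categories; the class ordering and the probability-matrix layout are assumptions.

```python
# Sketch: class-wise (one-vs-rest) AUC for a 3-category classifier.
# `y_true` holds integer labels 0..2; `y_prob` is an (n, 3) array of class probabilities.
from sklearn.metrics import roc_auc_score

CLASSES = ["healthy", "non-COVID-19 pneumonia", "COVID-19 pneumonia"]  # assumed order

def class_wise_auc(y_true, y_prob):
    return {name: roc_auc_score([int(t == k) for t in y_true], y_prob[:, k])
            for k, name in enumerate(CLASSES)}
```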


Subject(s)
COVID-19, Deep Learning, Pneumonia, COVID-19/diagnostic imaging, Humans, Pneumonia/diagnosis, Retrospective Studies, SARS-CoV-2
12.
Front Artif Intell ; 4: 694815, 2021.
Article in English | MEDLINE | ID: mdl-34337394

ABSTRACT

Purpose: The purpose of this study was to develop and evaluate lung cancer segmentation with a pretrained model and transfer learning. The pretrained model was constructed from an artificial dataset generated using a generative adversarial network (GAN). Materials and Methods: Three public datasets containing images of lung nodules/lung cancers were used: LUNA16 dataset, Decathlon lung dataset, and NSCLC radiogenomics. The LUNA16 dataset was used to generate an artificial dataset for lung cancer segmentation with the help of the GAN and 3D graph cut. Pretrained models were then constructed from the artificial dataset. Subsequently, the main segmentation model was constructed from the pretrained models and the Decathlon lung dataset. Finally, the NSCLC radiogenomics dataset was used to evaluate the main segmentation model. The Dice similarity coefficient (DSC) was used as a metric to evaluate the segmentation performance. Results: The mean DSC for the NSCLC radiogenomics dataset improved overall when using the pretrained models. At maximum, the mean DSC was 0.09 higher with the pretrained model than that without it. Conclusion: The proposed method comprising an artificial dataset and a pretrained model can improve lung cancer segmentation as confirmed in terms of the DSC metric. Moreover, the construction of the artificial dataset for the segmentation using the GAN and 3D graph cut was found to be feasible.
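
A minimal sketch of the Dice similarity coefficient used as the segmentation metric above; the mask arrays are placeholders.

```python
# Sketch: Dice similarity coefficient (DSC) between predicted and reference binary masks.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return float(2.0 * intersection / (pred.sum() + ref.sum() + eps))
```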

13.
Oncotarget ; 12(12): 1187-1196, 2021 Jun 08.
Article in English | MEDLINE | ID: mdl-34136087

ABSTRACT

OBJECTIVES: This study evaluated the diagnostic accuracy of an artificial intelligence (AI) deep learning method based on a three-dimensional deep convolutional neural network (3D DCNN) for differentiating malignant pleural mesothelioma (MPM) from benign pleural disease using FDG-PET/CT results. RESULTS: For protocol A, the area under the ROC curve (AUC)/sensitivity/specificity/accuracy values were 0.825/77.9% (81/104)/76.4% (55/72)/77.3% (136/176), while those for protocol B were 0.854/80.8% (84/104)/77.8% (56/72)/79.5% (140/176), those for protocol C were 0.881/85.6% (89/104)/75.0% (54/72)/81.3% (143/176), and those for protocol D were 0.896/88.5% (92/104)/73.6% (53/72)/82.4% (145/176). Protocol D showed significantly better diagnostic performance than protocols A, B, and C in the ROC analysis (p = 0.031, p = 0.0020, and p = 0.041, respectively). MATERIALS AND METHODS: Eight hundred seventy-five consecutive patients with histologically proven or suspected MPM, identified by history, physical examination findings, and chest CT results, who underwent FDG-PET/CT examinations between 2007 and 2017 were investigated in a retrospective manner. There were 525 patients (314 MPM, 211 benign pleural disease) in the deep learning training set, 174 (102 MPM, 72 benign pleural disease) in the validation set, and 176 (104 MPM, 72 benign pleural disease) in the test set. Data obtained using AI with PET/CT alone (protocol A), human visual reading (protocol B), a quantitative method incorporating the maximum standardized uptake value (SUVmax) (protocol C), and a combination of PET/CT, SUVmax, gender, and age (protocol D) were subjected to ROC curve analyses. CONCLUSIONS: Deep learning with a 3D DCNN, in combination with FDG-PET/CT imaging results and clinical features, is a potentially flexible novel tool for the differential diagnosis of MPM.

14.
Eur Radiol ; 31(6): 3775-3782, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33852048

ABSTRACT

OBJECTIVES: To evaluate a deep learning model for predicting gestational age from fetal brain MRI acquired after the first trimester in comparison to biparietal diameter (BPD). MATERIALS AND METHODS: Our Institutional Review Board approved this retrospective study, and a total of 184 T2-weighted MRI acquisitions from 184 fetuses (mean gestational age: 29.4 weeks) who underwent MRI between January 2014 and June 2019 were included. The reference standard gestational age was based on the last menstruation and ultrasonography measurements in the first trimester. The deep learning model was trained with T2-weighted images from 126 training cases and 29 validation cases. The remaining 29 cases were used as test data, with fetal age estimated by both the model and BPD measurement. The relationship between the estimated gestational age and the reference standard was evaluated with Lin's concordance correlation coefficient (ρc) and a Bland-Altman plot. The ρc was assessed with McBride's definition. RESULTS: The ρc of the model prediction was substantial (ρc = 0.964), but the ρc of the BPD prediction was moderate (ρc = 0.920). Both the model and BPD predictions had greater differences from the reference standard at increasing gestational age. However, the upper limit of the model's prediction (2.45 weeks) was significantly shorter than that of BPD (5.62 weeks). CONCLUSIONS: Deep learning can accurately predict gestational age from fetal brain MR acquired after the first trimester. KEY POINTS: • The prediction of gestational age using ultrasound is accurate in the first trimester but becomes inaccurate as gestational age increases. • Deep learning can accurately predict gestational age from fetal brain MRI acquired in the second and third trimester. • Prediction of gestational age by deep learning may have benefits for prenatal care in pregnancies that are underserved during the first trimester.
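
A minimal sketch of Lin's concordance correlation coefficient between predicted and reference gestational ages; the arrays are placeholders.

```python
# Sketch: Lin's concordance correlation coefficient between predicted and reference
# gestational ages (in weeks).
import numpy as np

def lins_ccc(predicted: np.ndarray, reference: np.ndarray) -> float:
    mean_p, mean_r = predicted.mean(), reference.mean()
    var_p, var_r = predicted.var(), reference.var()
    covariance = np.mean((predicted - mean_p) * (reference - mean_r))
    return float(2 * covariance / (var_p + var_r + (mean_p - mean_r) ** 2))
```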


Subject(s)
Deep Learning, Prenatal Care, Female, Fetus/diagnostic imaging, Gestational Age, Humans, Infant, Magnetic Resonance Imaging, Pregnancy, First Trimester of Pregnancy, Retrospective Studies, Prenatal Ultrasonography
15.
Sci Rep ; 10(1): 19388, 2020 11 09.
Article in English | MEDLINE | ID: mdl-33168936

ABSTRACT

We hypothesized that, in discrimination between benign and malignant parotid gland tumors, high diagnostic accuracy could be obtained with a small amount of imbalanced data when anomaly detection (AD) was combined with a deep learning (DL) model and the L2-constrained softmax loss. The purpose of this study was to evaluate whether the proposed method was more accurate than other commonly used DL or AD methods. Magnetic resonance (MR) images of 245 parotid tumors (22.5% malignant) were retrospectively collected. We evaluated the diagnostic accuracy of the proposed method (VGG16-based DL and AD) and that of classification models using conventional DL and AD methods. A radiologist also evaluated the MR images. ROC and precision-recall (PR) analyses were performed, and the area under the curve (AUC) was calculated. In terms of diagnostic performance, the VGG16-based model with the L2-constrained softmax loss and AD (local outlier factor) outperformed the conventional DL and AD methods and the radiologist (ROC-AUC = 0.86 and PR-AUC = 0.77). The proposed method could discriminate between benign and malignant parotid tumors in MR images even when only a small amount of data with an imbalanced distribution is available.
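
A minimal sketch of the anomaly-detection step: a local outlier factor model fitted in novelty mode on L2-normalised deep features of benign cases and scored on test cases. The VGG16 feature extraction is assumed to have been done already; names and the neighbour count are placeholders, not the authors' settings.

```python
# Sketch: local outlier factor on L2-normalised deep features (novelty mode).
# `benign_features` and `test_features` are hypothetical (n_samples, n_dims) arrays.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def fit_and_score(benign_features: np.ndarray, test_features: np.ndarray) -> np.ndarray:
    def l2_normalise(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    lof = LocalOutlierFactor(n_neighbors=20, novelty=True)
    lof.fit(l2_normalise(benign_features))
    # score_samples is higher for "more normal" cases; negate to get an anomaly score.
    return -lof.score_samples(l2_normalise(test_features))
```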


Subject(s)
Deep Learning, Magnetic Resonance Imaging, Parotid Gland/diagnostic imaging, Parotid Neoplasms/diagnostic imaging, Female, Humans, Male, Retrospective Studies
16.
Sci Rep ; 10(1): 17532, 2020 10 16.
Article in English | MEDLINE | ID: mdl-33067538

ABSTRACT

This study aimed to develop and validate a computer-aided diagnosis (CADx) system for the classification of COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy on chest X-ray (CXR) images. From two public datasets, 1248 CXR images were obtained, which included 215, 533, and 500 CXR images of COVID-19 pneumonia patients, non-COVID-19 pneumonia patients, and healthy samples, respectively. The proposed CADx system utilized VGG16 as a pre-trained model and a combination of a conventional method and mixup for data augmentation. Other types of pre-trained models were compared with the VGG16-based model. Using a single type of data augmentation or none at all was also evaluated. Splitting into training/validation/test sets was used when building and evaluating the CADx system. Three-category accuracy was evaluated on a test set of 125 CXR images. The three-category accuracy of the CADx system was 83.6% for COVID-19 pneumonia, non-COVID-19 pneumonia, and the healthy. Sensitivity for COVID-19 pneumonia was more than 90%. The combination of the conventional method and mixup was more useful than a single type of data augmentation or none at all. In conclusion, this study was able to create an accurate CADx system for the 3-category classification. The source code of our CADx system is available as open source for COVID-19 research.
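
A minimal sketch of mixup data augmentation applied to a batch of images and one-hot labels; the Beta(0.2, 0.2) parameter is an assumption, not the paper's setting.

```python
# Sketch: mixup augmentation for a batch of images and one-hot labels.
# The alpha parameter of the Beta distribution is an assumed value.
import numpy as np

def mixup_batch(images: np.ndarray, onehot_labels: np.ndarray, alpha: float = 0.2):
    lam = np.random.beta(alpha, alpha)
    perm = np.random.permutation(len(images))
    mixed_images = lam * images + (1 - lam) * images[perm]
    mixed_labels = lam * onehot_labels + (1 - lam) * onehot_labels[perm]
    return mixed_images, mixed_labels
```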


Subject(s)
Coronavirus Infections/diagnosis, Viral Pneumonia/diagnosis, Pneumonia/diagnosis, Thorax/diagnostic imaging, Adult, Aged, Automation, Betacoronavirus/isolation & purification, COVID-19, Coronavirus Infections/virology, Factual Databases, Deep Learning, Computer-Aided Diagnosis, Female, Humans, Male, Middle Aged, Pandemics, Pneumonia/classification, Viral Pneumonia/virology, SARS-CoV-2
17.
J Emerg Med ; 56(5): 536-539, 2019 May.
Article in English | MEDLINE | ID: mdl-30745197

ABSTRACT

BACKGROUND: Although fractures of the sternum are rare in young children, owing to the compliance of the chest wall, these fractures are still possible and require thorough examination. We present a case that emphasizes the usefulness of point-of-care ultrasound in the diagnosis of a pediatric sternal fracture complicated by a subcutaneous abscess. CASE REPORT: A 5-year-old boy presented with tenderness of the sternum, with diffuse swelling extending bilaterally to the anterior chest wall. Ultrasound imaging identified irregular alignment of the sternum with a subcutaneous abscess and swirling of purulent material within the abscess in the fracture area. These findings were confirmed on enhanced chest computed tomography and had not been visible at the time of the first evaluation 6 days prior. WHY SHOULD AN EMERGENCY PHYSICIAN BE AWARE OF THIS?: Our case demonstrates the usefulness of point-of-care ultrasound for the diagnosis and appropriate management of a sternal fracture complicated by a subcutaneous abscess in a young child. As ultrasound imaging is easy to perform at the bedside, it is useful for examining pediatric patients with swelling of the anterior chest and local tenderness of the sternum to rule out a sternal fracture, even if these fractures are deemed to be uncommon in children.


Subject(s)
Abscess/diagnosis, Bone Fractures/diagnosis, Sternum/injuries, Abscess/diagnostic imaging, Preschool Child, Bone Fractures/complications, Bone Fractures/diagnostic imaging, Humans, Male, Point-of-Care Systems, Sternum/diagnostic imaging, Subcutaneous Tissue/abnormalities, Subcutaneous Tissue/physiopathology, X-Ray Computed Tomography/methods, Ultrasonography/methods