Results 1 - 18 of 18
1.
J Digit Imaging; 35(4): 1061-1068, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35304676

ABSTRACT

Algorithms that automatically identify nodular patterns in chest X-ray (CXR) images could benefit radiologists by reducing reading time and improving accuracy. A promising approach is to use deep learning, in which a deep neural network (DNN) is trained to classify and localize nodular patterns (including masses) in CXR images. Such algorithms, however, require enough abnormal cases to learn representations of the nodular patterns that arise in practical clinical settings. Obtaining large amounts of high-quality data is impractical in medical imaging, where (1) acquiring labeled images is extremely expensive, (2) annotations are subject to inaccuracies owing to the inherent difficulty of interpreting images, and (3) normal cases occur far more frequently than abnormal cases. In this work, we devise a framework to generate realistic nodules and demonstrate how they can be used to train a DNN to identify and localize nodular patterns in CXR images. While most previous research applying generative models to medical imaging is limited to generating visually plausible abnormalities and using them for augmentation, we go a step further and show how the training algorithm can be adjusted to benefit maximally from synthetic abnormal patterns. A high-precision detection model was first developed and tested on internal and external datasets, and the proposed method was shown to enhance the model's recall while retaining its low level of false positives.
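As a rough illustration of the augmentation idea (not the paper's generative model, which synthesizes realistic textured nodules), a hypothetical sketch that blends a soft Gaussian blob into a normalized CXR array:

```python
import numpy as np

def add_synthetic_nodule(cxr, center, radius=12.0, intensity=0.3):
    """Blend a soft-edged Gaussian blob into a normalized [0, 1] CXR array.

    Toy stand-in for a generative nodule model: real synthetic nodules
    would carry realistic texture, not a smooth blob.
    """
    h, w = cxr.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = center
    blob = intensity * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * radius ** 2))
    return np.clip(cxr + blob, 0.0, 1.0)

# Oversample synthetic abnormals so a detector sees enough positive cases.
rng = np.random.default_rng(0)
normal = rng.uniform(0.2, 0.5, size=(256, 256))   # stand-in "normal" CXR
abnormal = add_synthetic_nodule(normal, center=(128, 100))
```

In practice each synthetic lesion's center and extent would also be recorded as a localization label for the detector.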


Subject(s)
Neural Networks, Computer; Radiography, Thoracic; Algorithms; Humans; Radiographic Image Interpretation, Computer-Assisted/methods; Radiography; Radiography, Thoracic/methods
2.
Radiology; 299(2): 450-459, 2021 May.
Article in English | MEDLINE | ID: mdl-33754828

ABSTRACT

Background Previous studies assessing the effects of computer-aided detection on observer performance in the reading of chest radiographs used a sequential reading design that may have biased the results because of reading order or recall bias. Purpose To compare observer performance in detecting and localizing major abnormal findings including nodules, consolidation, interstitial opacity, pleural effusion, and pneumothorax on chest radiographs without versus with deep learning-based detection (DLD) system assistance in a randomized crossover design. Materials and Methods This study included retrospectively collected normal and abnormal chest radiographs between January 2016 and December 2017 (https://cris.nih.go.kr/; registration no. KCT0004147). The radiographs were randomized into two groups, and six observers, including thoracic radiologists, interpreted each radiograph without and with use of a commercially available DLD system by using a crossover design with a washout period. Jackknife alternative free-response receiver operating characteristic (JAFROC) figure of merit (FOM), area under the receiver operating characteristic curve (AUC), sensitivity, specificity, false-positive findings per image, and reading times of observers with and without the DLD system were compared by using McNemar and paired t tests. Results A total of 114 normal (mean patient age ± standard deviation, 51 years ± 11; 58 men) and 114 abnormal (mean patient age, 60 years ± 15; 75 men) chest radiographs were evaluated. The radiographs were randomized to two groups: group A (n = 114) and group B (n = 114). 
Use of the DLD system improved the observers' JAFROC FOM (from 0.90 to 0.95, P = .002), AUC (from 0.93 to 0.98, P = .002), per-lesion sensitivity (from 83% [822 of 990 lesions] to 89.1% [882 of 990 lesions], P = .009), per-image sensitivity (from 80% [548 of 684 radiographs] to 89% [608 of 684 radiographs], P = .009), and specificity (from 89.3% [611 of 684 radiographs] to 96.6% [661 of 684 radiographs], P = .01) and reduced the reading time (from 10-65 seconds to 6-27 seconds, P < .001). The DLD system alone outperformed the pooled observers (JAFROC FOM: 0.96 vs 0.90, respectively, P = .007; AUC: 0.98 vs 0.93, P = .003). Conclusion Observers including thoracic radiologists showed improved performance in the detection and localization of major abnormal findings on chest radiographs and reduced reading time with use of a deep learning-based detection system. © RSNA, 2021 Online supplemental material is available for this article.
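The per-image sensitivity and specificity figures above follow directly from the reported pooled counts; a minimal sketch:

```python
def sens_spec(tp, fn, tn, fp):
    """Per-image sensitivity and specificity from pooled reader counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Pooled counts from the study: 6 observers x 114 abnormal and 114 normal
# radiographs = 684 reads of each kind.
sens_without, spec_without = sens_spec(tp=548, fn=136, tn=611, fp=73)
sens_with, spec_with = sens_spec(tp=608, fn=76, tn=661, fp=23)
```

This reproduces the reported 80%/89% sensitivity and 89.3%/96.6% specificity without and with the DLD system.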


Subject(s)
Deep Learning; Lung Diseases/diagnostic imaging; Radiography, Thoracic/methods; Cross-Over Studies; Female; Humans; Male; Middle Aged; Observer Variation; Republic of Korea; Retrospective Studies; Sensitivity and Specificity
3.
J Digit Imaging; 33(1): 221-230, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31152273

ABSTRACT

Lung lobe segmentation in chest CT has been used for the analysis of lung function and for surgical planning. Accurate lobe segmentation is difficult, however, as 80% of patients have incomplete and/or false fissures. Furthermore, lung diseases such as chronic obstructive pulmonary disease (COPD) can make the lobar fissures harder to differentiate. Lobar fissures have intensities similar to those of vessels and airway walls, which can lead to errors in automated segmentation. In this study, a fully automated lung lobe segmentation method based on a 3D U-Net was developed and validated with internal and external datasets. Volumetric chest CT scans of 196 normal subjects and mild-to-moderate COPD patients were obtained from three centers. Each scan was segmented using a conventional image processing method and manually corrected by an expert thoracic radiologist to create gold standards. The lobe regions in the CT images were then segmented using a 3D U-Net deep convolutional neural network (CNN) architecture with separate training, validation, and test datasets. In addition, 40 independent external CT images were used to evaluate the model. The segmentation results of both the conventional and deep learning methods were compared quantitatively with the gold standards using four accuracy metrics: the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), mean surface distance (MSD), and Hausdorff surface distance (HSD). In internal validation, the method achieved high accuracy for the DSC, JSC, MSD, and HSD (0.97 ± 0.02, 0.94 ± 0.03, 0.69 ± 0.36 mm, and 17.12 ± 11.07 mm, respectively). In external validation, high accuracy was also obtained (0.96 ± 0.02, 0.92 ± 0.04, 1.31 ± 0.56 mm, and 27.89 ± 7.50 mm, respectively). The method took 6.49 ± 1.19 s and 8.61 ± 1.08 s for lobe segmentation of the left and right lungs, respectively.
Although various automatic lung lobe segmentation methods have been developed, building a robust one remains difficult; the deep learning-based 3D U-Net method nevertheless showed reasonable segmentation accuracy and computational time. Moreover, this method could be adapted and applied to severe lung diseases in a clinical workflow.
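The DSC and JSC overlap metrics used here are straightforward to compute on binary masks; a minimal sketch (toy 2D masks, not the study's data):

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice similarity and Jaccard coefficients for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    jsc = inter / union
    return dsc, jsc

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True  # 16 pixels
gt = np.zeros((8, 8), dtype=bool);   gt[3:7, 3:7] = True    # 16 pixels, 9 overlapping
dsc, jsc = dice_jaccard(pred, gt)  # DSC = 18/32, JSC = 9/23
```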


Asunto(s)
Pulmón , Tomografía Computarizada por Rayos X , Humanos , Pulmón/diagnóstico por imagen , Redes Neurales de la Computación
4.
J Digit Imaging; 32(6): 1019-1026, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31396776

ABSTRACT

A robust lung segmentation method using a deep convolutional neural network (CNN) was developed and evaluated on high-resolution computed tomography (HRCT) and volumetric CT of various types of diffuse interstitial lung disease (DILD). Chest CT images of 617 patients with various types of DILD, including cryptogenic organizing pneumonia (COP), usual interstitial pneumonia (UIP), and nonspecific interstitial pneumonia (NSIP), were scanned using HRCT (1-2-mm slices, 5-10-mm intervals) and volumetric CT (sub-millimeter thickness without intervals). Each scan was segmented using a conventional image processing method and then manually corrected by an expert thoracic radiologist to create gold standards. The lung regions in the HRCT images were then segmented using a two-dimensional U-Net deep CNN architecture with separate training, validation, and test sets. In addition, 30 independent volumetric CT images of UIP patients were used to further evaluate the model. The segmentation results of both the conventional and deep learning methods were compared quantitatively with the gold standards using four accuracy metrics: the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), mean surface distance (MSD), and Hausdorff surface distance (HSD). The mean and standard deviation values of these metrics for the HRCT images were 98.84 ± 0.55%, 97.79 ± 1.07%, 0.27 ± 0.18 mm, and 25.47 ± 13.63 mm, respectively. Our deep learning method showed significantly better segmentation performance (p < 0.001), and its segmentation accuracies for volumetric CT were similar to those for HRCT. We have developed an accurate and robust U-Net-based DILD lung segmentation method that can be used for patients scanned with different clinical protocols, including HRCT and volumetric CT.
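The mean surface distance metric can likewise be sketched on toy 2D masks (a brute-force version for illustration; real evaluations are 3D and account for voxel spacing):

```python
import numpy as np

def _surface(mask):
    # A pixel is on the surface if it is foreground with a background 4-neighbour.
    p = np.pad(mask, 1, constant_values=False)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return mask & ~interior

def mean_surface_distance(a, b):
    """Symmetric mean surface distance between two binary masks (brute force)."""
    sa = np.argwhere(_surface(a)).astype(float)
    sb = np.argwhere(_surface(b)).astype(float)
    d = np.sqrt(((sa[:, None, :] - sb[None, :, :]) ** 2).sum(-1))
    return (d.min(1).sum() + d.min(0).sum()) / (len(sa) + len(sb))

a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[2:6, 3:7] = True  # shifted one column
msd = mean_surface_distance(a, b)
```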


Asunto(s)
Enfermedades Pulmonares Intersticiales/diagnóstico por imagen , Redes Neurales de la Computación , Interpretación de Imagen Radiográfica Asistida por Computador/métodos , Tomografía Computarizada por Rayos X/métodos , Tomografía Computarizada de Haz Cónico/métodos , Humanos , Pulmón/diagnóstico por imagen
5.
J Digit Imaging; 31(2): 235-244, 2018 Apr.
Article in English | MEDLINE | ID: mdl-28884381

ABSTRACT

A computer-aided differential diagnosis (CADD) system that distinguishes between usual interstitial pneumonia (UIP) and nonspecific interstitial pneumonia (NSIP) on high-resolution computed tomography (HRCT) images was developed, and its results were compared against the decisions of radiologists. Six local interstitial lung disease patterns were defined, and 900 typical regions of interest were marked by an experienced radiologist. A support vector machine (SVM) classifier was trained to label the regions of interest of the lung parenchyma based on texture and shape characteristics. From the regional classification of the entire lung on HRCT, the distributions and extents of the six regional patterns were characterized as CADD features. The disease division index of every area-fraction combination and the asymmetry index between the left and right lungs were also evaluated. A second SVM classifier was employed to distinguish UIP from NSIP, with features selected through sequential forward floating feature selection. For the evaluation, 54 HRCT images of UIP (n = 26) and NSIP (n = 28) patients clinically diagnosed by a pulmonologist were included. Classification accuracy was measured by fivefold cross-validation with 20 repetitions using random shuffling. For comparison, thoracic radiologists assessed each case using the HRCT images without clinical information or diagnosis. The accuracies of the radiologists' decisions were 75% and 87%. The accuracies of the CADD system using different feature sets ranged from 70% to 81%. Finally, the accuracy of the proposed CADD system after sequential forward feature selection was 91%.
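A hypothetical sketch of the second-stage idea using scikit-learn. Note that sklearn's `SequentialFeatureSelector` implements plain (non-floating) forward selection, and the features below are synthetic stand-ins for the paper's regional-pattern extents and asymmetry indices:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

# Synthetic stand-in data: 54 "patients", 10 candidate CADD features.
X, y = make_classification(n_samples=54, n_features=10, n_informative=4,
                           random_state=0)

# Forward feature selection wrapped around a linear SVM, then a final fit.
sfs = SequentialFeatureSelector(SVC(kernel="linear"), n_features_to_select=4,
                                direction="forward", cv=5)
sfs.fit(X, y)
clf = SVC(kernel="linear").fit(sfs.transform(X), y)
```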


Asunto(s)
Interpretación de Imagen Asistida por Computador/métodos , Enfermedades Pulmonares Intersticiales/diagnóstico por imagen , Tomografía Computarizada por Rayos X/métodos , Diagnóstico Diferencial , Humanos , Pulmón/diagnóstico por imagen , Reproducibilidad de los Resultados , Estudios Retrospectivos
6.
Nat Commun; 13(1): 4128, 2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35840566

ABSTRACT

International challenges have become the de facto standard for the comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, previous challenges have each focused on a specific clinical task. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete across a multitude of tasks and modalities, to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. The MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems over the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate for algorithmic generalizability; and (3) training accurate AI segmentation models is now accessible to scientists who are not versed in AI model training.


Asunto(s)
Algoritmos , Procesamiento de Imagen Asistido por Computador , Procesamiento de Imagen Asistido por Computador/métodos
7.
IEEE Trans Biomed Eng; 68(10): 3151-3160, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33819145

ABSTRACT

Intraoral functions are the result of complex, well-orchestrated sensorimotor loop operations and are therefore vulnerable to small functional or neural defects. To secure these vital functions, it is important to find a way to intervene favorably in intraoral sensorimotor loop operations. The tongue and the soft palate are heavily involved in intraoral sensorimotor loops, given their dense neural innervation and occupancy of the intraoral space. Electrical stimulation of the tongue and the soft palate therefore has great potential for addressing problems with intraoral functions. However, the electrical interface for these structures has not yet been characterized as a lumped-element model for designing electrical stimulation and analyzing its effect. In this study, we measured the stimulation thresholds that evoke electrotactile feedback and characterized the electrical impedance across electrodes using lumped-element models. The average perception/discomfort thresholds for the tongue tip, the lateral-inferior side of the tongue, and the anterolateral side of the soft palate were 0.18/1.31, 0.37/3.99, and 1.19/7.55 mA, respectively. An R-C-R-R-C model represented the electrical interface across the tongue and the soft palate with the highest accuracy; its average component values were 2.72 kΩ, 45.25 nF, 1.27 kΩ, 22.09 GΩ, and 53.00 nF.
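With the reported component values, the lumped model's impedance can be evaluated at any frequency. The series/parallel topology below is an assumption for illustration only, as the abstract lists component values without specifying the wiring:

```python
import math

def z_R(r):
    return complex(r, 0.0)

def z_C(c, f):
    # Capacitor impedance 1 / (j * 2*pi*f*C)
    return 1.0 / complex(0.0, 2.0 * math.pi * f * c)

def parallel(z1, z2):
    return (z1 * z2) / (z1 + z2)

# Assumed topology: (R1 || C1) in series with R2, then (R3 || C2),
# using the reported average component values.
def z_model(f, R1=2.72e3, C1=45.25e-9, R2=1.27e3, R3=22.09e9, C2=53.00e-9):
    return parallel(z_R(R1), z_C(C1, f)) + z_R(R2) + parallel(z_R(R3), z_C(C2, f))

z_1khz = z_model(1e3)  # complex impedance at 1 kHz
```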


Asunto(s)
Paladar Blando , Lengua , Estimulación Eléctrica , Humanos
8.
Comput Biol Med; 136: 104750, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34392128

ABSTRACT

BACKGROUND AND OBJECTIVE: It is important to alleviate annotation effort and cost by training efficiently on medical images. We performed a stress test on the number of strong labels required for curriculum learning with a convolutional neural network that differentiates normal chest radiographs (CXRs) from five types of pulmonary abnormality. METHODS: The numbers of CXR images of healthy subjects and patients, acquired at Asan Medical Center (AMC), were 6069 and 3465, respectively. The numbers of CXR images of patients with nodules, consolidation, interstitial opacity, pleural effusion, and pneumothorax were 944, 550, 280, 1360, and 331, respectively. The AMC dataset was split into training, tuning, and test sets at a ratio of 7:1:2. All lesions were strongly labeled by expert thoracic radiologists, with confirmation against the corresponding CT. For curriculum learning, normal and abnormal patches (N = 26658) were randomly extracted around the normal lung and the strongly labeled abnormal lesions, respectively. In addition, 1%, 5%, 20%, 50%, and 100% of the strong labels were used to determine their optimal number. Each patch dataset was trained with the ResNet-50 architecture, and all CXRs with weak labels were used to fine-tune the models in a transfer-learning manner. A dataset acquired from Seoul National University Bundang Hospital (SNUBH) was used for external validation. RESULTS: The detection accuracies of the 1%, 5%, 20%, 50%, and 100% datasets were 90.51%, 92.15%, 93.90%, 94.54%, and 95.39%, respectively, on the AMC dataset and 90.01%, 90.14%, 90.97%, 91.92%, and 93.00% on the SNUBH dataset. CONCLUSIONS: Our results show that curriculum learning with a sampling rate of 20% or more for strong labels is sufficient to train a model with relatively high performance, which can be developed easily and efficiently in an actual clinical setting.
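The strong-label sampling experiment can be sketched as simple random subsampling of the patch pool (patch extraction and network training are omitted; the patch list here is a stand-in):

```python
import random

def subsample_strong_labels(patches, fraction, seed=0):
    """Randomly keep a fraction of strongly labeled patches (1%-100% in the study)."""
    rng = random.Random(seed)
    k = max(1, round(len(patches) * fraction))
    return rng.sample(patches, k)

patches = list(range(26658))  # stand-in for the N = 26658 extracted patches
subsets = {frac: subsample_strong_labels(patches, frac)
           for frac in (0.01, 0.05, 0.20, 0.50, 1.00)}
```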


Asunto(s)
Enfermedades Pulmonares , Redes Neurales de la Computación , Curriculum , Humanos , Aprendizaje , Enfermedades Pulmonares/diagnóstico por imagen , Radiografía
9.
Korean J Radiol; 22(2): 281-290, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33169547

ABSTRACT

OBJECTIVE: To assess the performance of content-based image retrieval (CBIR) of chest CT for diffuse interstitial lung disease (DILD). MATERIALS AND METHODS: The database comprised 246 pairs of chest CTs (initial and follow-up CTs within two years) from 246 patients with usual interstitial pneumonia (UIP, n = 100), nonspecific interstitial pneumonia (NSIP, n = 101), and cryptogenic organizing pneumonia (COP, n = 45). Sixty cases (30 UIP, 20 NSIP, and 10 COP) were selected as queries. For each query, the CBIR system retrieved the five most similar CTs from the database by comparing six image patterns of DILD (honeycombing, reticular opacity, emphysema, ground-glass opacity, consolidation, and normal lung), which were automatically quantified and classified by a convolutional neural network. We assessed the rate of retrieving the paired CT of each query and the number of CTs with the same disease class as the query among the top 1-5 retrievals. Chest radiologists evaluated the similarity between retrieved CTs and queries on a 5-point scale (5, almost identical; 4, same disease; 3, even likelihood of same disease; 2, likely different; 1, different disease). RESULTS: The rate of retrieving the paired CT of a query was 61.7% (37/60) for the top 1 retrieval and 81.7% (49/60) for the top 1-5 retrievals. The CBIR system retrieved the paired CTs more often for UIP than for NSIP and COP (p = 0.008 and 0.002). On average, it retrieved 4.17 of five similar CTs from the same disease class. Radiologists rated 71.3% to 73.0% of the retrieved CTs with a similarity score of 4 or 5. CONCLUSION: The proposed CBIR system showed good performance in retrieving chest CTs with similar DILD patterns.
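Once each CT is summarized by its six-pattern extents, the retrieval step reduces to nearest-neighbor search over those vectors; a hypothetical sketch with random stand-in data:

```python
import numpy as np

def retrieve(query_vec, db_vecs, k=5):
    """Indices of the k database CTs whose six-pattern extent vectors
    lie closest (Euclidean) to the query's."""
    d = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(d)[:k]

rng = np.random.default_rng(1)
db = rng.dirichlet(np.ones(6), size=20)          # per-CT pattern fractions (sum to 1)
query = db[7] + rng.normal(0.0, 0.01, size=6)    # a follow-up CT resembles its initial CT
top5 = retrieve(query, db)
```

The actual system compares CNN-derived pattern quantifications rather than random vectors, but the nearest-neighbor ranking works the same way.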


Asunto(s)
Neumonías Intersticiales Idiopáticas/diagnóstico , Redes Neurales de la Computación , Tórax/diagnóstico por imagen , Tomografía Computarizada por Rayos X/métodos , Neumonía en Organización Criptogénica/diagnóstico , Bases de Datos Factuales , Diagnóstico Diferencial , Humanos , Procesamiento de Imagen Asistido por Computador , Estudios Retrospectivos
10.
Comput Methods Programs Biomed; 196: 105615, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32599340

ABSTRACT

PURPOSE: Computed tomography (CT) volume sets reconstructed with different kernels help increase diagnostic accuracy. However, maintaining several CT volumes reconstructed with different kernels is difficult because of limited storage and maintenance burden. We propose a CT kernel conversion method using convolutional neural networks (CNNs). METHODS: A total of 3289 CT images from ten patients (five men and five women; mean age, 63.0 ± 8.6 years) were obtained in May 2016 (Somatom Sensation 16, Siemens Medical Systems, Forchheim, Germany). These CT images were reconstructed with various kernels, including B10f (very smooth), B30f (medium smooth), B50f (medium sharp), and B70f (very sharp). Smooth-kernel images were converted into sharp-kernel images, and vice versa, using a super-resolution (SR) network with Squeeze-and-Excitation (SE) blocks and auxiliary losses. Both a single-conversion model and a multi-conversion model are presented. In the single-conversion model, SE-Residual blocks were stacked to produce one corresponding output image (e.g., B10f to B70f). In the multi-conversion model, which converts an image into several output images (e.g., B10f to B30f, B50f, and B70f, and vice versa), progressive learning (PL) was employed by calculating auxiliary losses at every four SE-Residual blocks. Through the auxiliary losses, the model could learn the mutual relationships between different kernel types. Conversion quality was evaluated by the root mean square error (RMSE), structural similarity (SSIM) index, and mutual information (MI) between the original and converted images.
RESULTS: The RMSE (SSIM index, MI) of the multi-conversion model was 4.541 ± 0.688 (0.998 ± 0.001, 2.587 ± 0.137), 27.555 ± 5.876 (0.944 ± 0.021, 1.735 ± 0.137), 72.327 ± 17.387 (0.815 ± 0.053, 1.176 ± 0.096), 8.748 ± 1.798 (0.996 ± 0.002, 2.464 ± 0.121), 9.470 ± 1.772 (0.994 ± 0.003, 2.336 ± 0.133), and 9.184 ± 1.605 (0.994 ± 0.002, 2.342 ± 0.138) for conversion between B10f-B30f, B10f-B50f, B10f-B70f, B70f-B50f, B70f-B30f, and B70f-B10f, respectively, showing significantly better image quality than the conventional model. CONCLUSIONS: We propose a deep learning-based CT kernel conversion method using an SR network. Introducing simplified SE blocks and PL significantly improved model performance.
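The Squeeze-and-Excitation gating at the core of the SE-Residual blocks can be sketched in NumPy with random, untrained weights (a shape-level illustration, not the trained network):

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation gating on a (C, H, W) feature map:
    global average pool -> ReLU bottleneck -> sigmoid channel weights."""
    s = x.mean(axis=(1, 2))               # squeeze: per-channel statistic, (C,)
    e = np.maximum(s @ w1, 0.0)           # excite: ReLU bottleneck
    g = 1.0 / (1.0 + np.exp(-(e @ w2)))   # sigmoid gates in (0, 1), (C,)
    return x * g[:, None, None]           # rescale each channel

rng = np.random.default_rng(0)
c, r = 16, 4                               # channels, reduction ratio
x = rng.standard_normal((c, 32, 32))
w1 = rng.standard_normal((c, c // r)) * 0.1
w2 = rng.standard_normal((c // r, c)) * 0.1
y = se_block(x, w1, w2)
```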


Asunto(s)
Redes Neurales de la Computación , Tomografía Computarizada por Rayos X , Anciano , Femenino , Humanos , Masculino , Persona de Mediana Edad