Results 1 - 20 of 28
1.
Eur Radiol ; 32(2): 1054-1064, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34331112

ABSTRACT

OBJECTIVES: To evaluate the effects of computer-aided diagnosis (CAD) on inter-reader agreement in Lung Imaging Reporting and Data System (Lung-RADS) categorization. METHODS: Two hundred baseline CT scans covering all Lung-RADS categories were randomly selected from the National Lung Cancer Screening Trial. Five radiologists independently reviewed the CT scans and assigned Lung-RADS categories without CAD and with CAD. The CAD system presented up to five of the most risk-dominant nodules with measurements and predicted Lung-RADS category. Inter-reader agreement was analyzed using multirater Fleiss κ statistics. RESULTS: The five readers reported 139-151 negative screening results without CAD and 126-142 with CAD. With CAD, readers tended to upstage (average, 12.3%) rather than downstage Lung-RADS category (average, 4.4%). Inter-reader agreement of five readers for Lung-RADS categorization was moderate (Fleiss kappa, 0.60 [95% confidence interval, 0.57, 0.63]) without CAD, and slightly improved to substantial (Fleiss kappa, 0.65 [95% CI, 0.63, 0.68]) with CAD. The major cause for disagreement was assignment of different risk-dominant nodules in the reading sessions without and with CAD (54.2% [201/371] vs. 63.6% [232/365]). The proportion of disagreement in nodule size measurement was reduced from 5.1% (102/2000) to 3.1% (62/2000) with the use of CAD (p < 0.001). In 31 cancer-positive cases, substantial management discrepancies (category 1/2 vs. 4A/B) between reader pairs decreased with application of CAD (pooled sensitivity, 85.2% vs. 91.6%; p = 0.004). CONCLUSIONS: Application of CAD demonstrated a minor improvement in inter-reader agreement of Lung-RADS category, while showing the potential to reduce measurement variability and substantial management change in cancer-positive cases. KEY POINTS: • Inter-reader agreement of five readers for Lung-RADS categorization was minimally improved by application of CAD, with a Fleiss kappa value of 0.60 to 0.65. 
• The major cause for disagreement was assignment of different risk-dominant nodules in the reading sessions without and with CAD (54.2% vs. 63.6%). • In 31 cancer-positive cases, substantial management discrepancies between reader pairs, defined as a difference in follow-up interval of at least 9 months (category 1/2 vs. 4A/B), were reduced by half with application of CAD (32/310 to 16/310) (pooled sensitivity, 85.2% vs. 91.6%; p = 0.004).
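For reference, the multirater Fleiss κ reported above can be computed from per-subject category counts. A minimal sketch (the rating matrix below is a toy example, not the study's data):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for subjects each rated by the same number of raters.

    `ratings` is a list of per-subject category counts, e.g. [3, 2]
    means 3 raters chose category 0 and 2 chose category 1.
    """
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])

    # Proportion of all assignments falling in each category.
    p_j = [sum(row[j] for row in ratings) / (n_subjects * n_raters)
           for j in range(n_categories)]

    # Per-subject agreement: fraction of concordant rater pairs.
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]

    P_bar = sum(P_i) / n_subjects   # observed agreement
    P_e = sum(p * p for p in p_j)   # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

With five raters and perfect agreement on every subject, κ is 1; systematic disagreement drives it toward or below 0.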


Subject(s)
Lung Neoplasms , Computers , Early Detection of Cancer , Humans , Lung/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Observer Variation , Retrospective Studies , Tomography, X-Ray Computed
2.
J Digit Imaging ; 35(4): 1061-1068, 2022 08.
Article in English | MEDLINE | ID: mdl-35304676

ABSTRACT

Algorithms that automatically identify nodular patterns in chest X-ray (CXR) images could benefit radiologists by reducing reading time and improving accuracy. A promising approach is to use deep learning, where a deep neural network (DNN) is trained to classify and localize nodular patterns (including masses) in CXR images. Such algorithms, however, require enough abnormal cases to learn representations of nodular patterns arising in practical clinical settings. Obtaining large amounts of high-quality data is impractical in medical imaging, where (1) acquiring labeled images is extremely expensive, (2) annotations are subject to inaccuracies due to the inherent difficulty of interpreting images, and (3) normal cases occur far more frequently than abnormal cases. In this work, we devise a framework to generate realistic nodules and demonstrate how they can be used to train a DNN to identify and localize nodular patterns in CXR images. While most previous research applying generative models to medical imaging is limited to generating visually plausible abnormalities and using these patterns for augmentation, we go a step further and show how the training algorithm can be adjusted to benefit maximally from synthetic abnormal patterns. A high-precision detection model was first developed and tested on internal and external datasets, and the proposed method was shown to enhance the model's recall while retaining a low level of false positives.


Subject(s)
Neural Networks, Computer , Radiography, Thoracic , Algorithms , Humans , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography , Radiography, Thoracic/methods
3.
Radiology ; 299(2): 450-459, 2021 05.
Article in English | MEDLINE | ID: mdl-33754828

ABSTRACT

Background Previous studies assessing the effects of computer-aided detection on observer performance in the reading of chest radiographs used a sequential reading design that may have biased the results because of reading order or recall bias. Purpose To compare observer performance in detecting and localizing major abnormal findings including nodules, consolidation, interstitial opacity, pleural effusion, and pneumothorax on chest radiographs without versus with deep learning-based detection (DLD) system assistance in a randomized crossover design. Materials and Methods This study included retrospectively collected normal and abnormal chest radiographs between January 2016 and December 2017 (https://cris.nih.go.kr/; registration no. KCT0004147). The radiographs were randomized into two groups, and six observers, including thoracic radiologists, interpreted each radiograph without and with use of a commercially available DLD system by using a crossover design with a washout period. Jackknife alternative free-response receiver operating characteristic (JAFROC) figure of merit (FOM), area under the receiver operating characteristic curve (AUC), sensitivity, specificity, false-positive findings per image, and reading times of observers with and without the DLD system were compared by using McNemar and paired t tests. Results A total of 114 normal (mean patient age ± standard deviation, 51 years ± 11; 58 men) and 114 abnormal (mean patient age, 60 years ± 15; 75 men) chest radiographs were evaluated. The radiographs were randomized to two groups: group A (n = 114) and group B (n = 114). 
Use of the DLD system improved the observers' JAFROC FOM (from 0.90 to 0.95, P = .002), AUC (from 0.93 to 0.98, P = .002), per-lesion sensitivity (from 83% [822 of 990 lesions] to 89.1% [882 of 990 lesions], P = .009), per-image sensitivity (from 80% [548 of 684 radiographs] to 89% [608 of 684 radiographs], P = .009), and specificity (from 89.3% [611 of 684 radiographs] to 96.6% [661 of 684 radiographs], P = .01) and reduced the reading time (from 10-65 seconds to 6-27 seconds, P < .001). The DLD system alone outperformed the pooled observers (JAFROC FOM: 0.96 vs 0.90, respectively, P = .007; AUC: 0.98 vs 0.93, P = .003). Conclusion Observers including thoracic radiologists showed improved performance in the detection and localization of major abnormal findings on chest radiographs and reduced reading time with use of a deep learning-based detection system. © RSNA, 2021 Online supplemental material is available for this article.
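The paired without-vs-with-DLD comparisons above rely on McNemar's test for paired binary outcomes. A minimal sketch of the test statistic (without continuity correction); the per-image correctness vectors below are hypothetical:

```python
def mcnemar_statistic(correct_without, correct_with):
    """McNemar chi-square statistic for paired binary outcomes,
    e.g. per-image correctness of the same readers without vs with
    assistance. Only the discordant pairs carry information."""
    # b: correct without assistance but wrong with it; c: the reverse.
    b = sum(1 for w, a in zip(correct_without, correct_with) if w and not a)
    c = sum(1 for w, a in zip(correct_without, correct_with) if not w and a)
    if b + c == 0:
        return 0.0
    return (b - c) ** 2 / (b + c)
```

The statistic is referred to a chi-square distribution with one degree of freedom; the concordant pairs (correct or wrong in both sessions) cancel out.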


Subject(s)
Deep Learning , Lung Diseases/diagnostic imaging , Radiography, Thoracic/methods , Cross-Over Studies , Female , Humans , Male , Middle Aged , Observer Variation , Republic of Korea , Retrospective Studies , Sensitivity and Specificity
4.
Radiology ; 299(1): 211-219, 2021 04.
Article in English | MEDLINE | ID: mdl-33560190

ABSTRACT

Background Studies on the optimal CT section thickness for detecting subsolid nodules (SSNs) with computer-aided detection (CAD) are lacking. Purpose To assess the effect of CT section thickness on CAD performance in the detection of SSNs and to investigate whether deep learning-based super-resolution algorithms for reducing CT section thickness can improve performance. Materials and Methods CT images obtained with 1-, 3-, and 5-mm-thick sections were obtained in patients who underwent surgery between March 2018 and December 2018. Patients with resected synchronous SSNs and those without SSNs (negative controls) were retrospectively evaluated. The SSNs, which ranged from 6 to 30 mm, were labeled ground-truth lesions. A deep learning-based CAD system was applied to SSN detection on CT images of each section thickness and those converted from 3- and 5-mm section thickness into 1-mm section thickness by using the super-resolution algorithm. The CAD performance on each section thickness was evaluated and compared by using the jackknife alternative free response receiver operating characteristic figure of merit. Results A total of 308 patients (mean age ± standard deviation, 62 years ± 10; 183 women) with 424 SSNs (310 part-solid and 114 nonsolid nodules) and 182 patients without SSNs (mean age, 65 years ± 10; 97 men) were evaluated. The figures of merit differed across the three section thicknesses (0.92, 0.90, and 0.89 for 1, 3, and 5 mm, respectively; P = .04) and between 1- and 5-mm sections (P = .04). The figures of merit varied for nonsolid nodules (0.78, 0.72, and 0.66 for 1, 3, and 5 mm, respectively; P < .001) but not for part-solid nodules (range, 0.93-0.94; P = .76). The super-resolution algorithm improved CAD sensitivity on 3- and 5-mm-thick sections (P = .02 for 3 mm, P < .001 for 5 mm). 
Conclusion Computer-aided detection (CAD) of subsolid nodules performed better at 1-mm section thickness CT than at 3- and 5-mm section thickness CT, particularly with nonsolid nodules. Application of a super-resolution algorithm improved the sensitivity of CAD at 3- and 5-mm section thickness CT. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Goo in this issue.
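The study's super-resolution algorithm is a learned deep-learning model; purely to illustrate the resampling geometry it addresses (synthesizing thin sections between thick ones), here is a naive linear-interpolation baseline. The slice data and the factor are hypothetical:

```python
def interpolate_slices(slices, factor):
    """Naive baseline: linearly interpolate `factor - 1` synthetic
    slices between each adjacent pair of thick-section slices.
    Going from 5-mm to 1-mm spacing would use factor=5. Each slice
    is flattened to a 1-D list of pixel values for simplicity."""
    out = []
    for a, b in zip(slices, slices[1:]):
        for k in range(factor):
            t = k / factor  # fractional position between slice a and b
            out.append([(1 - t) * x + t * y for x, y in zip(a, b)])
    out.append(list(slices[-1]))
    return out
```

A learned super-resolution model replaces this linear blend with a network that can recover high-frequency detail the interpolation cannot, which is what drives the sensitivity gain reported above.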


Subject(s)
Deep Learning , Diagnosis, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Multiple Pulmonary Nodules/diagnostic imaging , Tomography, X-Ray Computed/methods , Aged , Female , Humans , Male , Middle Aged , Radiographic Image Interpretation, Computer-Assisted/methods , Retrospective Studies
5.
Eur Radiol ; 31(8): 6239-6247, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33555355

ABSTRACT

OBJECTIVES: To evaluate a deep learning-based model using model-generated segmentation masks to differentiate invasive pulmonary adenocarcinoma (IPA) from preinvasive lesions or minimally invasive adenocarcinoma (MIA) on CT, making comparisons with radiologist-derived measurements of solid portion size. METHODS: Four hundred eleven subsolid nodules (SSNs) (120 preinvasive lesions or MIAs and 291 IPAs) in 333 patients who underwent surgery between June 2010 and August 2016 were retrospectively included to develop the model (370 SSNs in 293 patients for training and 41 SSNs in 40 patients for tuning). Ninety SSNs of 2 cm or smaller (45 preinvasive lesions or MIAs and 45 IPAs) resected in 2018 formed a validation set. Six radiologists measured the solid portion of each nodule. Performances of the model and radiologists were assessed using receiver operating characteristics curve analysis. RESULTS: The deep learning model differentiated IPA from preinvasive lesions or MIA with areas under the curve (AUCs) of 0.914, 0.956, and 0.833 for the training, tuning, and validation sets, respectively. The mean AUC of the radiologists was 0.835 in the validation set, without significant differences between radiologists and the model (p = 0.97). The sensitivity, specificity, and accuracy of the model were 71% (32/45), 87% (39/45), and 79% (71/90), respectively, whereas the corresponding values of the radiologists were 75.2% (203/270), 76.7% (207/270), and 75.9% (410/540) with a 5-mm threshold for the solid portion size. CONCLUSIONS: The performance of the model for differentiating IPA from preinvasive lesions or MIA was comparable to that of the radiologists' measurements of solid portion size. KEY POINTS: • A deep learning-based model differentiated IPA from preinvasive lesions or MIA with AUCs of 0.914 and 0.956 for the training and tuning sets, respectively. 
• In the validation set including subsolid nodules of 2 cm or smaller, the model showed an AUC of 0.833, being on par with the performance of the solid portion size measurements made by the radiologists (AUC, 0.835; p = 0.97). • SSNs with a solid portion measuring > 10 mm on CT showed a high probability of being IPA (positive predictive value, 93.5-100.0%).
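The AUCs compared above can be computed without fitting a curve, via the Mann-Whitney rank relation. A minimal sketch (the score lists are toy values, not the study's outputs):

```python
def auc(scores_negative, scores_positive):
    """Area under the ROC curve via the Mann-Whitney U relation:
    the probability that a randomly chosen positive case (here, IPA)
    scores higher than a randomly chosen negative case (preinvasive
    lesion or MIA), with ties counted as one half."""
    wins = 0.0
    for p in scores_positive:
        for n in scores_negative:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_positive) * len(scores_negative))
```

An AUC of 0.833, as in the validation set, means the model ranks a random IPA above a random preinvasive/MIA nodule about 83% of the time.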


Subject(s)
Adenocarcinoma , Deep Learning , Lung Neoplasms , Adenocarcinoma/diagnostic imaging , Adenocarcinoma/surgery , Diagnosis, Differential , Humans , Lung Neoplasms/diagnostic imaging , Neoplasm Invasiveness , Retrospective Studies , Tomography, X-Ray Computed
6.
Eur Radiol ; 31(12): 8947-8955, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34115194

ABSTRACT

OBJECTIVES: Bone age is considered an indicator for the diagnosis of precocious or delayed puberty and a predictor of adult height. We aimed to evaluate the performance of a deep neural network model in assessing rapidly advancing bone age during puberty using elbow radiographs. METHODS: In all, 4437 anteroposterior and lateral pairs of elbow radiographs were obtained from pubertal individuals from two institutions to implement and validate a deep neural network model. The reference standard bone age was established by five trained researchers using the Sauvegrain method, a scoring system based on the shapes of the lateral condyle, trochlea, olecranon apophysis, and proximal radial epiphysis. A test set (n = 141) was obtained from an external institution. The differences between the assessment of the model and that of reviewers were compared. RESULTS: The mean absolute difference (MAD) in bone age estimation between the model and reviewers was 0.15 years on internal validation. In the test set, the MAD between the model and the five experts ranged from 0.19 to 0.30 years. Compared with the reference standard, the MAD was 0.22 years. Interobserver agreement was excellent among reviewers (ICC: 0.99) and between the model and the reviewers (ICC: 0.98). In the subpart analysis, the olecranon apophysis exhibited the highest accuracy (74.5%), followed by the trochlea (73.7%), lateral condyle (73.7%), and radial epiphysis (63.1%). CONCLUSIONS: Assessment of rapidly advancing bone age during puberty on elbow radiographs using our deep neural network model was similar to that of experts. KEY POINTS: • Bone age during puberty is particularly important for patients with scoliosis or limb-length discrepancy to determine the phase of the disease, which influences the timing and method of surgery. 
• The commonly used hand radiographs-based methods have limitations in assessing bone age during puberty due to the less prominent morphological changes of the hand and wrist bones in this period. • A deep neural network model trained with elbow radiographs exhibited similar performance to human experts on estimating rapidly advancing bone age during puberty.
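The mean absolute difference (MAD) used above to compare the model with the reference standard is a simple paired statistic. A sketch with hypothetical bone-age estimates in years:

```python
def mean_absolute_difference(estimates_a, estimates_b):
    """Mean absolute difference between two sets of bone-age
    estimates (in years) for the same subjects."""
    assert len(estimates_a) == len(estimates_b)
    return sum(abs(a - b) for a, b in zip(estimates_a, estimates_b)) / len(estimates_a)
```

A MAD of 0.22 years against the reference standard, as reported, means the model's estimate was off by under three months on average.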


Subject(s)
Age Determination by Skeleton , Elbow , Adult , Elbow/diagnostic imaging , Humans , Infant , Neural Networks, Computer , Puberty , Radiography
7.
Ophthalmology ; 127(1): 85-94, 2020 01.
Article in English | MEDLINE | ID: mdl-31281057

ABSTRACT

PURPOSE: To develop and evaluate deep learning models that screen multiple abnormal findings in retinal fundus images. DESIGN: Cross-sectional study. PARTICIPANTS: For the development and testing of deep learning models, 309 786 readings from 103 262 images were used. Two additional external datasets (the Indian Diabetic Retinopathy Image Dataset and e-ophtha) were used for testing. A third external dataset (Messidor) was used for comparison of the models with human experts. METHODS: Macula-centered retinal fundus images from the Seoul National University Bundang Hospital Retina Image Archive, obtained at the health screening center and ophthalmology outpatient clinic at Seoul National University Bundang Hospital, were assessed for 12 major findings (hemorrhage, hard exudate, cotton-wool patch, drusen, membrane, macular hole, myelinated nerve fiber, chorioretinal atrophy or scar, any vascular abnormality, retinal nerve fiber layer defect, glaucomatous disc change, and nonglaucomatous disc change) with their regional information using deep learning algorithms. MAIN OUTCOME MEASURES: Area under the receiver operating characteristic curve and sensitivity and specificity of the deep learning algorithms at the highest harmonic mean were evaluated and compared with the performance of retina specialists, and visualization of the lesions was qualitatively analyzed. RESULTS: Areas under the receiver operating characteristic curves for all findings were high at 96.2% to 99.9% when tested in the in-house dataset. Lesion heatmaps highlight salient regions effectively in various findings. Areas under the receiver operating characteristic curves for diabetic retinopathy-related findings tested in the Indian Diabetic Retinopathy Image Dataset and e-ophtha dataset were 94.7% to 98.0%. 
The model demonstrated a performance that rivaled that of human experts, especially in the detection of hemorrhage, hard exudate, membrane, macular hole, myelinated nerve fiber, and glaucomatous disc change. CONCLUSIONS: Our deep learning algorithms with region guidance showed reliable performance for detection of multiple findings in macula-centered retinal fundus images. These interpretable, as well as reliable, classification outputs open the possibility for clinical use as an automated screening system for retinal fundus images.
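The operating point described above, "sensitivity and specificity at the highest harmonic mean", can be found by sweeping thresholds over the model's scores. A minimal sketch (toy scores and labels, not the study's data):

```python
def best_threshold(scores, labels):
    """Pick the score threshold maximizing the harmonic mean of
    sensitivity and specificity. `labels` is True for abnormal."""
    best_t, best_hm = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and not y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        hm = 2 * sens * spec / (sens + spec) if sens + spec else 0.0
        if hm > best_hm:
            best_t, best_hm = t, hm
    return best_t, best_hm
```

Unlike the arithmetic mean, the harmonic mean penalizes an operating point where one of the two rates collapses, so the chosen threshold balances missed lesions against false alarms.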


Subject(s)
Algorithms , Deep Learning , Image Interpretation, Computer-Assisted/methods , Retinal Diseases/diagnostic imaging , Adult , Aged , Aged, 80 and over , Area Under Curve , Cross-Sectional Studies , Datasets as Topic , Female , Fundus Oculi , Humans , Machine Learning , Male , Middle Aged , Neural Networks, Computer , ROC Curve , Sensitivity and Specificity
8.
Eur Radiol ; 30(3): 1359-1368, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31748854

ABSTRACT

OBJECTIVE: To investigate the feasibility of a deep learning-based detection (DLD) system for multiclass lesions on chest radiograph, in comparison with observers. METHODS: A total of 15,809 chest radiographs were collected from two tertiary hospitals (7204 normal and 8605 abnormal with nodule/mass, interstitial opacity, pleural effusion, or pneumothorax). Except for the test set (100 normal and 100 abnormal (nodule/mass, 70; interstitial opacity, 10; pleural effusion, 10; pneumothorax, 10)), radiographs were used to develop a DLD system for detecting multiclass lesions. The diagnostic performance of the developed model and that of nine observers with varying experiences were evaluated and compared using area under the receiver operating characteristic curve (AUROC), on a per-image basis, and jackknife alternative free-response receiver operating characteristic figure of merit (FOM) on a per-lesion basis. The false-positive fraction was also calculated. RESULTS: Compared with the group-averaged observations, the DLD system demonstrated significantly higher performances on image-wise normal/abnormal classification and lesion-wise detection with pattern classification (AUROC, 0.985 vs. 0.958; p = 0.001; FOM, 0.962 vs. 0.886; p < 0.001). In lesion-wise detection, the DLD system outperformed all nine observers. In the subgroup analysis, the DLD system exhibited consistently better performance for both nodule/mass (FOM, 0.913 vs. 0.847; p < 0.001) and the other three abnormal classes (FOM, 0.995 vs. 0.843; p < 0.001). The false-positive fraction of all abnormalities was 0.11 for the DLD system and 0.19 for the observers. CONCLUSIONS: The DLD system showed the potential for detection of lesions and pattern classification on chest radiographs, performing normal/abnormal classifications and achieving high diagnostic performance. KEY POINTS: • The DLD system was feasible for detection with pattern classification of multiclass lesions on chest radiograph. 
• The DLD system had high performance of image-wise classification as normal or abnormal chest radiographs (AUROC, 0.985) and showed especially high specificity (99.0%). • In lesion-wise detection of multiclass lesions, the DLD system outperformed all 9 observers (FOM, 0.962 vs. 0.886; p < 0.001).


Subject(s)
Deep Learning , Lung Diseases/diagnostic imaging , Pleural Diseases/diagnostic imaging , Radiography, Thoracic/methods , Adult , Aged , Area Under Curve , Female , Humans , Lung Diseases, Interstitial/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Male , Middle Aged , Pleural Effusion/diagnostic imaging , Pneumothorax/diagnostic imaging , ROC Curve , Radiography , Sensitivity and Specificity , Solitary Pulmonary Nodule/diagnostic imaging
9.
J Digit Imaging ; 32(3): 499-512, 2019 06.
Article in English | MEDLINE | ID: mdl-30291477

ABSTRACT

Automatic segmentation of the retinal vasculature and the optic disc is a crucial task for accurate geometric analysis and reliable automated diagnosis. In recent years, Convolutional Neural Networks (CNN) have shown outstanding performance compared to the conventional approaches in the segmentation tasks. In this paper, we experimentally measure the performance gain for Generative Adversarial Networks (GAN) framework when applied to the segmentation tasks. We show that GAN achieves statistically significant improvement in area under the receiver operating characteristic (AU-ROC) and area under the precision and recall curve (AU-PR) on two public datasets (DRIVE, STARE) by segmenting fine vessels. Also, we found a model that surpassed the current state-of-the-art method by 0.2 - 1.0% in AU-ROC and 0.8 - 1.2% in AU-PR and 0.5 - 0.7% in dice coefficient. In contrast, significant improvements were not observed in the optic disc segmentation task on DRIONS-DB, RIM-ONE (r3) and Drishti-GS datasets in AU-ROC and AU-PR.
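The Dice coefficient cited above as a segmentation metric is the overlap between a predicted and a reference binary mask. A minimal sketch on flattened masks (toy data):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two flat binary masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2 * intersection / total if total else 1.0
```

For thin structures like fine vessels, Dice is more informative than pixel accuracy, since the background dominates the image.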


Subject(s)
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Ophthalmoscopy , Optic Disk/diagnostic imaging , Pattern Recognition, Automated/methods , Retinal Vessels/diagnostic imaging , Humans
10.
Chemphyschem ; 19(10): 1123-1127, 2018 May 22.
Article in English | MEDLINE | ID: mdl-29542276

ABSTRACT

Schwann cells of the peripheral nervous system are indispensable for the formation, maintenance, and modulation of synapses over the life cycle. They not only recognize neuron-glia signaling molecules but also secrete gliotransmitters. Through these processes, they regulate neuronal excitability and thus the release of neurotransmitters from the nerve terminal at the neuromuscular junction. Gliotransmitters strongly affect nerve communication, and their secretion is mainly triggered by synchronized Ca2+ signaling, implicating Ca2+ waves in synapse function. Reciprocally, neurotransmitters released during synaptic activity can evoke increases in intracellular Ca2+ levels. A reconsideration of the interplay between the two main types of cells in the nervous system is due, as the concept of nervous system activity comprising only neuron-neuron and neuron-muscle action has become untenable. A more precise understanding of the roles of Schwann cells in nerve-muscle signaling is required.


Subject(s)
Schwann Cells/metabolism , Synapses/metabolism , Animals , Humans , Schwann Cells/cytology
11.
J Korean Med Sci ; 33(43): e239, 2018 Oct 22.
Article in English | MEDLINE | ID: mdl-30344460

ABSTRACT

BACKGROUND: We described a novel multi-step retinal fundus image reading system for providing high-quality large-scale data for machine learning algorithms, and assessed the grader variability in the dataset generated with this system. METHODS: A 5-step retinal fundus image reading tool was developed that rates image quality, presence of abnormality, findings with location information, diagnoses, and clinical significance. Each image was evaluated by 3 different graders. Agreement among graders for each decision was evaluated. RESULTS: A total of 234,242 readings of 79,458 images were collected from 55 licensed ophthalmologists over 6 months. In total, 34,364 images were graded as abnormal by at least one rater. Of these, all three raters agreed on abnormality in 46.6%, while 69.9% of the images were rated as abnormal by two or more raters. The agreement rate of at least two raters on a given finding was 26.7%-65.2%, and the complete agreement rate of all three raters was 5.7%-43.3%. As for diagnoses, agreement of at least two raters was 35.6%-65.6%, and the complete agreement rate was 11.0%-40.0%. Agreement on findings and diagnoses was higher when restricted to images with prior complete agreement on abnormality. Retinal/glaucoma specialists showed higher agreement on findings and diagnoses within their corresponding subspecialties. CONCLUSION: This novel reading tool for retinal fundus images generated a large-scale dataset with a high level of information, which can be utilized in the future development of machine learning-based algorithms for automated identification of abnormal conditions and clinical decision support systems. These results emphasize the importance of addressing grader variability in algorithm development.
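The "at least two of three raters agree" statistic above reduces to a simple majority check per image. A minimal sketch (the grader labels are hypothetical):

```python
def majority_agreement_rate(readings):
    """Fraction of images on which at least two of the three graders
    gave the same label; `readings` is a list of (g1, g2, g3) tuples."""
    agree = sum(1 for g1, g2, g3 in readings
                if g1 == g2 or g1 == g3 or g2 == g3)
    return agree / len(readings)
```

Complete (all-three) agreement is the stricter condition `g1 == g2 == g3`, which is why the complete-agreement ranges reported above are uniformly lower.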


Subject(s)
Databases, Factual , Machine Learning , Retina/diagnostic imaging , Fundus Oculi , Humans , Republic of Korea
12.
J Digit Imaging ; 31(6): 923-928, 2018 12.
Article in English | MEDLINE | ID: mdl-29948436

ABSTRACT

In this paper, we aimed to understand and analyze the outputs of a convolutional neural network model that classifies the laterality of fundus images. Our model not only automates the classification process, reducing clinicians' workload, but also highlights the key regions in the image and evaluates the uncertainty of the decision with appropriate analytic tools. Our model was trained and tested with 25,911 fundus images (43.4% macula-centered images and 28.3% each of superior and nasal retinal fundus images). Activation maps were generated to mark regions important for the classification. Uncertainties were then quantified to support explanations of why certain images were misclassified by the proposed model. Our model achieved a mean training accuracy of 99%, comparable to the performance of clinicians. Strong activations were detected at the optic disc and the retinal blood vessels around the disc, matching the regions clinicians attend to when deciding laterality. Uncertainty analysis showed that misclassified images tend to be accompanied by high prediction uncertainty and are likely ungradable. We believe that visualization of informative regions and estimation of uncertainty, along with presentation of the prediction result, would enhance the interpretability of neural network models in a way that benefits clinicians using the automatic classification system.
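One common way to quantify the prediction uncertainty discussed above is the Shannon entropy of the model's softmax output. A minimal sketch (the probability vectors are hypothetical, and the paper does not specify this exact measure):

```python
import math

def prediction_entropy(probs):
    """Shannon entropy (in bits) of a class-probability vector.
    Higher values flag uncertain predictions, such as those found
    to accompany misclassified or ungradable images."""
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

A confident left/right call like [0.99, 0.01] has near-zero entropy, while a 50/50 split has the maximum of 1 bit for two classes; thresholding this value is one way to route low-confidence images to manual review.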


Subject(s)
Eye Diseases/diagnostic imaging , Fundus Oculi , Neural Networks, Computer , Retinal Vessels/diagnostic imaging , Algorithms , Databases, Factual , Humans , Reproducibility of Results
13.
J Digit Imaging ; 31(4): 415-424, 2018 08.
Article in English | MEDLINE | ID: mdl-29043528

ABSTRACT

This study aimed to compare shallow and deep learning for classifying patterns of interstitial lung diseases (ILDs). Using high-resolution computed tomography images, two experienced radiologists marked 1200 regions of interest (ROIs), of which 600 were acquired on a GE scanner and 600 on a Siemens scanner; each group of 600 ROIs consisted of 100 ROIs for each of six subregion classes: normal and five regional pulmonary disease patterns (ground-glass opacity, consolidation, reticular opacity, emphysema, and honeycombing). We employed a convolutional neural network (CNN) with six learnable layers: four convolution layers and two fully connected layers. The classification results were compared with those of a shallow-learning support vector machine (SVM) classifier. The CNN classifier showed significantly better accuracy than the SVM classifier, by 6-9%. As convolution layers were added, the classification accuracy of the CNN improved from 81.27% to 95.12%. Especially in cases showing pathological ambiguity, such as between normal and emphysema or between honeycombing and reticular opacity, adding convolution layers greatly reduced the misclassification rate between the classes. In conclusion, the CNN classifier was significantly more accurate than the SVM classifier, and the results reflect structural characteristics inherent to the specific ILD patterns.


Subject(s)
Deep Learning , Lung Diseases, Interstitial/classification , Lung Diseases, Interstitial/diagnostic imaging , Pattern Recognition, Automated/methods , Tomography, X-Ray Computed/methods , Algorithms , Cohort Studies , Female , Humans , Lung Diseases, Interstitial/pathology , Male , Neural Networks, Computer , Retrospective Studies
14.
Can J Surg ; 57(1): 21-5, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24461222

ABSTRACT

BACKGROUND: The jugular vein cutdown for a totally implantable central venous port (TICVP) has 2 disadvantages: the need for 2 separate incisions and the risk of multiple vein occlusions. We sought to evaluate the feasibility of a cephalic vein (CV) cutdown in children. METHODS: We prospectively followed patients who underwent a venous cutdown for implantation of a TICVP between Jan. 1, 2002, and Dec. 31, 2006. For patients younger than 8 months, an external jugular vein cutdown was initially tried without attempting a CV cutdown. For patients older than 8 months, a CV cutdown was tried initially. We recorded information on age, weight, outcome of the CV cutdown and complications. RESULTS: During the study period, 143 patients underwent a venous cutdown for implantation of a TICVP: 25 younger and 118 older than 8 months. The CV cutdown was successful in 73 of 118 trials. The 25th percentile and median body weight for the 73 successful cases were 15.4 kg and 28.3 kg, respectively. There was a significant difference in the success rate using 15 kg as the cutoff. The overall complication rate was 8.2%. CONCLUSION: The CV cutdown was an acceptable procedure for TICVP in children. It could be preferentially considered for patients weighing more than 15 kg who require a TICVP.




Subject(s)
Catheterization, Central Venous/methods, Venous Cutdown/methods, Adolescent, Body Weight, Child, Child, Preschool, Feasibility Studies, Female, Humans, Infant, Jugular Veins/surgery, Male, Outcome Assessment, Health Care, Retrospective Studies
15.
Front Cell Neurosci ; 17: 1249043, 2023.
Article in English | MEDLINE | ID: mdl-37868193

ABSTRACT

Optogenetic techniques combine optics and genetics to enable cell-specific targeting and precise spatiotemporal control of excitable cells, and they are increasingly being employed. One of the most significant advantages of the optogenetic approach is that it allows nearby cells or circuits to be modulated with millisecond precision, enabling researchers to better understand the complex nervous system. Furthermore, optogenetic neuron activation permits the regulation of information processing in the brain, including synaptic activity and transmission, and also promotes the development of nerve structures. However, the optimal conditions remain unclear, and further research is required to identify the cell types that can most effectively and precisely control nerve function. Recent studies have described optogenetic glial manipulation for coordinating reciprocal communication between neurons and glia. Optogenetically stimulated glial cells can modulate information processing in the central nervous system and provide structural support for nerve fibers in the peripheral nervous system. These advances promote the effective use of optogenetics, although further experiments are needed. This review describes the critical role of glial cells in the nervous system and surveys the optogenetic applications of several types of glial cells, as well as their significance in neuron-glia interactions. Finally, it briefly discusses the therapeutic potential and feasibility of optogenetics.

16.
Ultrasonography ; 42(2): 297-306, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36935594

ABSTRACT

PURPOSE: The purpose of this study was to elucidate whether contrast-enhanced ultrasonography (CEUS) can visualize orally administered Sonazoid leaking into the peritoneal cavity in a postoperative stomach leakage mouse model. METHODS: Adult female mice (n=33, 9-10 weeks old) were used. Preoperative CEUS was performed after delivering Sonazoid via intraperitoneal injection and via the oral route. A gastric leakage model was then generated by making a surgical incision of about 0.5 cm in the stomach wall, and CEUS with oral Sonazoid administration was performed. A region of interest was drawn on the CEUS images, and the signal intensity was measured quantitatively. Statistical analysis was performed using a mixed model to compare the signal intensity sampled from the pre-contrast images with that of the post-contrast images obtained at different time points. RESULTS: CEUS after intraperitoneal Sonazoid injection in normal mice and after oral administration in mice with gastric perforation visualized the contrast medium spreading within the liver interlobar fissures continuous with the peritoneal cavity. A quantitative analysis showed that, in the mice with gastric perforation, the orally delivered Sonazoid leaking into the peritoneal cavity induced a statistically significant (P<0.05) increase in signal intensity in all CEUS images obtained 10 seconds or longer after contrast delivery. However, enhancement was not observed before gastric perforation surgery (P=0.167). CONCLUSION: CEUS with oral Sonazoid administration efficiently visualized the contrast medium spreading within the peritoneal cavity in a postoperative stomach leakage mouse model.
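The quantitative step above, drawing a region of interest (ROI) on the CEUS frames and measuring its signal intensity, amounts to averaging pixel values inside a region. A minimal sketch, assuming a rectangular ROI over a 2-D intensity array (the study's actual ROI shapes and analysis software are not specified in the abstract):

```python
def roi_mean_intensity(image, roi):
    """Mean pixel intensity inside a rectangular region of interest.

    image: 2-D list of pixel intensities (rows of columns).
    roi:   (row_start, row_stop, col_start, col_stop), half-open bounds.
    """
    row_start, row_stop, col_start, col_stop = roi
    total, count = 0.0, 0
    for r in range(row_start, row_stop):
        for c in range(col_start, col_stop):
            total += image[r][c]
            count += 1
    return total / count
```

Sampling such means from pre-contrast and post-contrast frames yields the per-time-point intensities that a mixed model can then compare.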

17.
Sci Rep ; 13(1): 5934, 2023 04 12.
Article in English | MEDLINE | ID: mdl-37045856

ABSTRACT

The identification of abnormal findings manifested in retinal fundus images and the diagnosis of ophthalmic diseases are essential to the management of potentially vision-threatening eye conditions. Recently, deep learning-based computer-aided diagnosis (CAD) systems have demonstrated their potential to reduce reading time and discrepancy amongst readers. However, the opaque reasoning of deep neural networks (DNNs) has been the leading cause of reluctance toward their clinical use as CAD systems. Here, we present a novel architectural and algorithmic design of DNNs to comprehensively identify 15 abnormal retinal findings and diagnose 8 major ophthalmic diseases from macula-centered fundus images with accuracy comparable to that of experts. We then define the counterfactual attribution ratio (CAR), which illuminates the system's diagnostic reasoning by representing how much each abnormal finding contributed to the diagnostic prediction. Using CAR, we show that both quantitative and qualitative interpretation and interactive adjustment of the CAD result can be achieved. A comparison of the model's CAR with experts' finding-disease diagnosis correlations confirms that the proposed model identifies the relationship between findings and diseases much as ophthalmologists do.
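The abstract defines CAR only at a high level. One plausible formulation, assumed here purely for illustration, is the odds ratio of the model's disease prediction with a finding present versus counterfactually suppressed; the function name and the linear-logit toy model below are our assumptions, not the paper's architecture:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def counterfactual_attribution_ratio(weights, findings, bias, k):
    """Odds ratio of the disease prediction with finding k present
    versus counterfactually set to absent (hypothetical formulation).

    weights, findings: per-finding weights and presence scores in [0, 1].
    """
    logit_full = bias + sum(w * f for w, f in zip(weights, findings))
    counterfactual = list(findings)
    counterfactual[k] = 0.0  # suppress finding k
    logit_cf = bias + sum(w * f for w, f in zip(weights, counterfactual))
    odds = lambda p: p / (1.0 - p)
    return odds(sigmoid(logit_full)) / odds(sigmoid(logit_cf))
```

A ratio well above 1 would indicate that finding k pushed the model toward the diagnosis; a ratio near 1 would indicate that it contributed little.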


Subject(s)
Deep Learning, Eye Diseases, Humans, Algorithms, Neural Networks, Computer, Fundus Oculi, Retina/diagnostic imaging
18.
Korean J Radiol ; 24(11): 1151-1163, 2023 11.
Article in English | MEDLINE | ID: mdl-37899524

ABSTRACT

OBJECTIVE: To develop a deep-learning-based bone age prediction model optimized for Korean children and adolescents and to evaluate its feasibility by comparing it with a Greulich-Pyle-based deep-learning model. MATERIALS AND METHODS: A convolutional neural network was trained to predict age according to the bone development shown on a hand radiograph (bone age) using 21,036 hand radiographs of Korean children and adolescents without known bone development-affecting diseases/conditions obtained between 1998 and 2019 (median age [interquartile range {IQR}], 9 [7-12] years; male:female, 11,794:9,242) and their chronological ages as labels (Korean model). We constructed 2 separate external datasets consisting of Korean children and adolescents with healthy bone development (Institution 1: n = 343; median age [IQR], 10 [4-15] years; male:female, 183:160; Institution 2: n = 321; median age [IQR], 9 [5-14] years; male:female, 164:157) to test the model performance. The mean absolute error (MAE), root mean square error (RMSE), and proportions of bone age predictions within 6, 12, 18, and 24 months of the reference age (chronological age) were compared between the Korean model and a commercial model (VUNO Med-BoneAge version 1.1; VUNO) trained with Greulich-Pyle-based age as the label (GP-based model). RESULTS: Compared with the GP-based model, the Korean model showed a lower RMSE (11.2 vs. 13.8 months; P = 0.004) and MAE (8.2 vs. 10.5 months; P = 0.002) and a higher proportion of bone age predictions within 18 months of chronological age (88.3% vs. 82.2%; P = 0.031) for Institution 1, and a lower MAE (9.5 vs. 11.0 months; P = 0.022) and a higher proportion of bone age predictions within 6 months (44.5% vs. 36.4%; P = 0.044) for Institution 2.
CONCLUSION: The Korean model trained using the chronological ages of Korean children and adolescents without known bone development-affecting diseases/conditions as labels performed better in bone age assessment than the GP-based model in the Korean pediatric population. Further validation is required to confirm its accuracy.
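The agreement statistics reported above (MAE, RMSE, and the proportion of predictions within 6, 12, 18, and 24 months of the reference age) follow standard definitions and can be computed directly; a minimal sketch assuming ages expressed in months (the function name is ours, not the paper's):

```python
import math

def bone_age_agreement(pred_months, ref_months, thresholds=(6, 12, 18, 24)):
    """MAE, RMSE, and the fraction of predictions within each threshold
    (in months) of the reference age."""
    errors = [p - r for p, r in zip(pred_months, ref_months)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    within = {t: sum(abs(e) <= t for e in errors) / n for t in thresholds}
    return mae, rmse, within
```

RMSE penalizes large outliers more heavily than MAE, which is why the two metrics can rank models differently.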


Subject(s)
Artificial Intelligence, Deep Learning, Adolescent, Humans, Child, Male, Female, Infant, Age Determination by Skeleton, Radiography, Republic of Korea
19.
Sci Rep ; 11(1): 2876, 2021 02 03.
Article in English | MEDLINE | ID: mdl-33536550

ABSTRACT

There have been substantial efforts to use deep learning (DL) to diagnose cancer from digital images of pathology slides. Existing algorithms typically train deep neural networks either specialized for specific cohorts or on an aggregate of all cohorts when only a few images are available for the target cohort. A trade-off between decreasing the number of models and their cancer-detection performance was evident in our experiments with The Cancer Genome Atlas dataset, with cohort-specific models achieving higher performance at the cost of having to acquire large datasets from the cohort of interest. Constructing annotated datasets for individual cohorts is extremely time-consuming, and the acquisition cost of such datasets grows linearly with the number of cohorts. Another issue with developing cohort-specific models is the difficulty of maintenance: all cohort-specific models may need to be adjusted when a new DL algorithm is adopted or when more data are added to some cohorts, and training even a single model can require a non-negligible amount of computation. To resolve the sub-optimal behavior of a universal cancer-detection model trained on an aggregate of cohorts, we investigated how cohorts can be grouped to augment a dataset without the number of models increasing linearly with the number of cohorts. This study introduces several metrics that measure the morphological similarities between cohort pairs and demonstrates how these metrics can be used to control the trade-off between performance and the number of models.
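As an illustration of how pairwise similarity scores could drive such grouping, the sketch below greedily merges a cohort into an existing group when its mean similarity to the group's members exceeds a threshold. The greedy strategy and the data layout are our assumptions for illustration; the paper's specific morphological metrics are not reproduced here:

```python
def group_cohorts(similarity, threshold):
    """Greedily group cohorts by pairwise similarity.

    similarity: dict mapping cohort name -> {other cohort -> score in [0, 1]}.
    threshold:  minimum mean similarity to join an existing group.
    Returns a list of groups (lists of cohort names).
    """
    groups = []
    for cohort in sorted(similarity):  # deterministic order
        placed = False
        for group in groups:
            mean_sim = sum(similarity[cohort][m] for m in group) / len(group)
            if mean_sim >= threshold:
                group.append(cohort)
                placed = True
                break
        if not placed:
            groups.append([cohort])  # start a new group
    return groups
```

Raising the threshold yields more, smaller groups (more models, larger per-model performance); lowering it approaches the single universal model, which is the trade-off the metrics are meant to control.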


Subject(s)
Datasets as Topic, Deep Learning, Image Processing, Computer-Assisted/methods, Neoplasms/diagnosis, Cohort Studies, Humans, Neoplasms/pathology
20.
Clin Cancer Res ; 27(3): 719-728, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33172897

ABSTRACT

PURPOSE: Gastric cancer remains the leading cause of cancer-related deaths in Northeast Asia. Population-based endoscopic screening in the region has yielded successful results in the early detection of gastric tumors. Endoscopic screening rates are continuously increasing, and there is a need for an automated computerized diagnostic system to reduce the diagnostic burden. In this study, we developed an algorithm to classify gastric epithelial tumors automatically and assessed its performance in a large series of gastric biopsies, as well as its benefit as an assistance tool. EXPERIMENTAL DESIGN: Using 2,434 whole-slide images, we developed an algorithm based on convolutional neural networks to classify a gastric biopsy image into one of three categories: negative for dysplasia (NFD), tubular adenoma, or carcinoma. The performance of the algorithm was evaluated on 7,440 biopsy specimens collected prospectively. The impact of algorithm-assisted diagnosis was assessed by six pathologists using 150 gastric biopsy cases. RESULTS: Diagnostic performance, evaluated by the area under the receiver operating characteristic curve (AUROC) in the prospective study, was 0.9790 for two-tier classification: negative (NFD) versus positive (all cases except NFD). When limited to epithelial tumors, the sensitivity and specificity were 1.000 and 0.9749, respectively. An algorithm-assisted digital image viewer (DV) resulted in a 47% reduction in review time per image compared with DV alone and a 58% reduction compared with microscopy. CONCLUSIONS: Our algorithm demonstrated high accuracy in classifying epithelial tumors and benefit as an assistance tool, and it can serve as a potential screening aid in diagnosing gastric biopsy specimens.
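The two-tier AUROC reported above is equivalent to the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the Mann-Whitney U interpretation). A minimal pure-Python computation, illustrative only and not the study's evaluation code:

```python
def auroc(scores, labels):
    """AUROC as the probability that a random positive outscores a
    random negative; ties count as half a win.

    scores: model scores; labels: 1 for positive, 0 for negative.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.9790 thus means that, for about 98 of 100 random positive/negative biopsy pairs, the algorithm scores the positive slide higher.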


Subject(s)
Deep Learning, Gastric Mucosa/pathology, Image Interpretation, Computer-Assisted/methods, Pathologists/statistics & numerical data, Stomach Neoplasms/diagnosis, Adult, Aged, Aged, 80 and over, Biopsy/statistics & numerical data, Feasibility Studies, Female, Gastric Mucosa/diagnostic imaging, Gastroscopy/statistics & numerical data, Humans, Image Interpretation, Computer-Assisted/statistics & numerical data, Male, Middle Aged, Observer Variation, Prospective Studies, Retrospective Studies, Sensitivity and Specificity, Stomach Neoplasms/pathology