Results 1 - 4 of 4
1.
Front Pediatr ; 10: 829372, 2022.
Article in English | MEDLINE | ID: mdl-35463905

ABSTRACT

Study Objectives: In previous research, we built a deep neural network model based on Inception-ResNet-v2 to predict bone age (EFAI-BAA). The primary objective of this study was to determine whether the EFAI-BAA was substantially concordant with qualified physicians in assessing bone age. The secondary objective was to determine whether the EFAI-BAA differed from the qualified physicians in clinical rating (advanced, normal, or delayed). Method: This was a retrospective, blinded study. Left-hand X-ray images of male subjects aged 3-16 years and female subjects aged 2-15 years were collected retrospectively from China Medical University Hospital (CMUH) and Asia University Hospital (AUH), from the start of the trial until 368 images had been included. The qualified physicians who ran, read, and interpreted the tests were blinded to the values assessed by the other qualified physicians and by the EFAI-BAA. Results: The concordance correlation coefficient (CCC) between the EFAI-BAA and the bone-age evaluations by physicians at Kaohsiung Veterans General Hospital (KVGH), Taichung Veterans General Hospital (TVGH2), and Taipei Tzu Chi Hospital (TZUCHI-TP) was 0.9828 (95% CI: 0.9790-0.9859, p-value = 0.6782), 0.9739 (95% CI: 0.9681-0.9786, p-value = 0.0202), and 0.9592 (95% CI: 0.9501-0.9666, p-value = 0.4855), respectively. Conclusion: Bone age assessment was consistent between the EFAI-BAA and each of the three qualified physicians (CCC > 0.9). As a significant difference in clinical rating was found only between the EFAI-BAA and the qualified physician at TVGH2, the performance of the EFAI-BAA was considered similar to that of the qualified physicians.
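The agreement statistic reported above, Lin's concordance correlation coefficient, is straightforward to compute. The sketch below (illustrative, not the study's code; function name is hypothetical) shows the calculation for two raters' bone-age estimates in years:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()           # population variances
    cov = ((x - mx) * (y - my)).mean()  # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Perfect agreement gives CCC = 1; a systematic offset lowers it
# even when the correlation stays perfect.
ages = [4.0, 7.5, 9.0, 12.0, 15.5]
shifted = [5.0, 8.5, 10.0, 13.0, 16.5]
print(concordance_ccc(ages, ages))     # → 1.0
print(concordance_ccc(ages, shifted))  # < 1.0 despite perfect correlation
```

Unlike Pearson's r, the CCC penalizes both location and scale shifts between raters, which is why it is the usual choice for rater-agreement studies like this one.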

2.
Biomedicine (Taipei) ; 11(3): 50-58, 2021.
Article in English | MEDLINE | ID: mdl-35223411

ABSTRACT

INTRODUCTION: A deep learning-based automatic bone age identification system (ABAIS) was introduced for medical imaging. The ABAIS enables accurate, consistent, and timely clinical diagnosis and informs research in deep learning and artificial intelligence (AI) for medical imaging. AIM: The goal of this study was to use a deep neural network (DNN) model to assess bone age in months from a database of pediatric left-hand radiographs. METHODS: The DNN model for bone age assessment (BAA) in this study consisted of an Inception-ResNet-v2 backbone with a global average pooling layer connected to a single fully connected layer of one neuron using the rectified linear unit (ReLU) activation function. The medical data for each case contained a posterior-view X-ray image of the left hand; information on age, gender, and weight; and a clinical skeletal bone assessment. RESULTS: A database of 8,061 hand radiographs with gender and age (0-18 years) as the reference standard was used. The DNN model's accuracies on the testing set were 77.4%, 95.3%, 99.1%, and 99.7% within 0.5, 1, 1.5, and 2 years of the ground truth, respectively. The MAE was 0.33 and 0.25 years for the male and female models, respectively. CONCLUSION: In this study, the Inception-ResNet-v2 model was used for automatic interpretation of bone age. The convolutional neural network based on feature extraction performs well in the bone age regression model and further improves the accuracy and efficiency of image-based bone age evaluation. This system helps greatly reduce the burden on clinical personnel.
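The accuracy-within-tolerance and MAE figures reported in the results can be reproduced from model predictions and ground-truth ages with a few lines of NumPy. A minimal sketch (function names are illustrative, not from the study):

```python
import numpy as np

def within_tolerance_accuracy(pred, truth, tol_years):
    """Fraction of predictions within tol_years of the ground truth."""
    err = np.abs(np.asarray(pred, float) - np.asarray(truth, float))
    return float(np.mean(err <= tol_years))

def mean_absolute_error(pred, truth):
    """MAE of bone-age predictions, in years."""
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(truth, float))))

# Toy predictions vs. ground-truth bone ages (years), for illustration only.
pred  = [5.2, 8.9, 10.4, 13.0]
truth = [5.0, 8.0, 10.0, 14.2]
print(within_tolerance_accuracy(pred, truth, 0.5))  # → 0.5 (2 of 4 within half a year)
print(within_tolerance_accuracy(pred, truth, 2.0))  # → 1.0 (all within two years)
print(mean_absolute_error(pred, truth))
```

Reporting accuracy at several tolerances (0.5, 1, 1.5, 2 years), as the study does, gives a fuller picture of the error distribution than MAE alone.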

3.
Medicine (Baltimore) ; 98(18): e15446, 2019 May.
Article in English | MEDLINE | ID: mdl-31045814

ABSTRACT

This study used radiomics image analysis to examine the differences in texture feature values extracted from oropharyngeal and hypopharyngeal cancer positron emission tomography (PET) images across various tumor segmentations, and to identify proper and stable feature groups. A total of 80 oropharyngeal and hypopharyngeal cancer cases were retrospectively recruited. A radiomics method was applied to the PET images of the 80 cases to extract texture features from variously defined metabolic volumes. The Kruskal-Wallis one-way analysis of variance was used to test whether feature values differed between groups, which were defined by stage, response to treatment, and recurrence. Where a significant difference existed, the corresponding feature cutoff value was applied to the Kaplan-Meier estimator to estimate survival functions. Across the defined metabolic volumes, 16 features differed significantly between early (T1, T2) and late (T3, T4) tumor stages. Five image features and 2 textural features were found to predict tumor response and recurrence, respectively, with areas under the receiver operating characteristic curves reaching 0.7. Histogram entropy was found to be a good predictor of overall survival (OS) and primary relapse-free survival (PRFS) in oropharyngeal and hypopharyngeal cancer patients. Textural features from PET images provide predictive and prognostic information on tumor staging, tumor response, and recurrence, and have the potential to be a prognosticator for OS and PRFS in oropharyngeal and hypopharyngeal cancer.
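Histogram entropy, the feature highlighted above as a survival predictor, is the Shannon entropy of the intensity histogram within the segmented metabolic volume: low for homogeneous uptake, high for heterogeneous uptake. A minimal sketch, assuming 8 intensity bins (the study's actual binning is not stated here):

```python
import numpy as np

def histogram_entropy(values, bins=8):
    """Shannon entropy (bits) of the intensity histogram of a region."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()  # drop empty bins; normalize
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
uniform_region = rng.uniform(0.0, 1.0, 10_000)  # heterogeneous uptake
constant_region = np.full(10_000, 0.5)          # homogeneous uptake
print(histogram_entropy(uniform_region))   # near log2(8) = 3 bits
print(histogram_entropy(constant_region))  # 0 — a single occupied bin
```

Because the histogram is computed over the segmented volume, the entropy value can depend on the segmentation choice, which is exactly the stability question the study examines.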


Subject(s)
Hypopharyngeal Neoplasms/pathology, Image Processing, Computer-Assisted/methods, Oropharyngeal Neoplasms/pathology, Positron Emission Tomography Computed Tomography/methods, Adult, Aged, Female, Humans, Hypopharyngeal Neoplasms/mortality, Kaplan-Meier Estimate, Male, Middle Aged, Neoplasm Staging, Oropharyngeal Neoplasms/mortality, Prognosis, ROC Curve, Retrospective Studies
4.
Medicine (Baltimore) ; 98(19): e15200, 2019 May.
Article in English | MEDLINE | ID: mdl-31083152

ABSTRACT

Breast cancer is one of the most harmful diseases for women and has the highest morbidity. An efficient way to decrease its mortality is to diagnose cancer earlier through screening. Clinically, the best screening approach for Asian women is ultrasound imaging combined with biopsy. However, biopsy is invasive and yields incomplete information about the lesion. The aim of this study was to build a model for automatic detection, segmentation, and classification of breast lesions in ultrasound images. Based on deep learning, a technique using Mask R-CNN (Mask Regions with Convolutional Neural Network) was developed for lesion detection and for differentiation between benign and malignant lesions. The mean average precision was 0.75 for detection and segmentation. The overall accuracy of benign/malignant classification was 85%. The proposed method provides a comprehensive and noninvasive way to detect and classify breast lesions.
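Detection and segmentation quality in Mask R-CNN pipelines is typically scored by intersection-over-union (IoU) between predicted and ground-truth regions; the mean average precision reported above is built on an IoU threshold deciding which detections count as correct. A minimal sketch of box IoU (illustrative, not the study's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to 0 when boxes are disjoint.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))   # → 1.0 (identical boxes)
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175 ≈ 0.143
print(iou((0, 0, 10, 10), (20, 20, 30, 30))) # → 0.0 (disjoint)
```

For instance segmentation the same ratio is computed over pixel masks instead of boxes, but the thresholding logic behind mean average precision is identical.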


Subject(s)
Breast Neoplasms/classification, Breast Neoplasms/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Ultrasonography, Mammary, Humans, Neural Networks, Computer, Pattern Recognition, Automated, Retrospective Studies, Ultrasonography, Mammary/methods