Results 1 - 3 of 3
1.
Sensors (Basel) ; 22(2)2022 Jan 10.
Article in English | MEDLINE | ID: mdl-35062465

ABSTRACT

This paper reports a study on 3-dimensional deep-learning-based automatic diagnosis of nasal fractures. (1) Background: The nasal bone is the most protuberant feature of the face; it is therefore highly vulnerable to facial trauma, and its fractures are the most common facial fractures worldwide. In addition, its adhesion causes rapid deformation, so a clear diagnosis is needed early after fracture onset. (2) Methods: The collected computed tomography images were reconstructed into isotropic voxel data covering the whole region of the nasal bone and represented in a fixed cubic volume. The configured 3-dimensional input data were then automatically classified by deep residual neural networks (3D-ResNet34 and 3D-ResNet50) exploiting spatial context information in a single network, whose performance was evaluated by 5-fold cross-validation. (3) Results: Classification of nasal fractures with the simple 3D-ResNet34 and 3D-ResNet50 networks achieved areas under the receiver operating characteristic curve of 94.5% and 93.4%, respectively, for binary classification, both indicating unprecedentedly high performance on the task. (4) Conclusions: This paper demonstrates the possibility of automatic nasal bone fracture diagnosis using a single 3-dimensional ResNet-based classification network, which, with further research, can improve the diagnostic environment.
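
A minimal sketch (Python; not the authors' code) of the kind of pipeline the Methods describe: a 3D residual network classifying fixed-size CT volumes, evaluated with 5-fold cross-validation and the area under the ROC curve. The use of torchvision's r3d_18 as a stand-in for the paper's 3D-ResNet34/50, the single-channel stem replacement, and the `volumes`/`labels` placeholders are assumptions for illustration.

```python
# Sketch only: 5-fold cross-validated binary classification of isotropic CT
# cubes with a 3D ResNet. r3d_18 is a stand-in for the paper's 3D-ResNet34/50.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def build_model():
    model = r3d_18(weights=None)
    # CT volumes are single-channel; swap the RGB video stem for a 1-channel one.
    model.stem[0] = nn.Conv3d(1, 64, kernel_size=(3, 7, 7),
                              stride=(1, 2, 2), padding=(1, 3, 3), bias=False)
    model.fc = nn.Linear(model.fc.in_features, 2)  # fracture vs. no fracture
    return model

def cross_validate(volumes, labels, epochs=10,
                   device="cuda" if torch.cuda.is_available() else "cpu"):
    """volumes: float tensor (N, 1, D, H, W); labels: int tensor (N,)."""
    aucs = []
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for train_idx, val_idx in skf.split(np.zeros(len(labels)), labels.numpy()):
        model = build_model().to(device)
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            model.train()
            for i in train_idx:  # one volume per step, for simplicity
                x = volumes[i:i + 1].to(device)
                y = labels[i:i + 1].to(device)
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        model.eval()
        with torch.no_grad():
            scores = torch.softmax(model(volumes[val_idx].to(device)), dim=1)[:, 1]
        aucs.append(roc_auc_score(labels[val_idx].numpy(), scores.cpu().numpy()))
    return float(np.mean(aucs))  # mean AUC over the 5 folds
```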


Subject(s)
Deep Learning, Bone Fractures, Humans, Neural Networks (Computer), ROC Curve, X-Ray Computed Tomography
2.
Sensors (Basel) ; 22(12)2022 Jun 15.
Article in English | MEDLINE | ID: mdl-35746310

ABSTRACT

This paper proposes an automatic rib sequence labeling system for chest computed tomography (CT) images, comparing two suggested methods combined with three-dimensional (3D) region growing. In clinical practice, radiologists usually define anatomical terms of location by rib number. With manual labeling, the 12 pairs of ribs must be annotated and counted in sequence, and radiologists must refer to these annotations every time they read a chest CT. This process is tedious, repetitive, and time-consuming, while the demand for chest CT-based readings keeps increasing. To handle the task efficiently, we propose an automatic rib sequence labeling system and compare two methods. Using 50 collected chest CT images, we implemented intensity-based image processing (IIP) and a convolutional neural network (CNN) for rib segmentation in this system. Three-dimensional (3D) region growing was then used to separate each rib and assign its sequence label. The IIP-based method achieved a 92.0% success rate and the CNN-based method a 98.0% success rate, where success is defined as labeling the appropriate rib sequence over all pairs (1st to 12th) for all slices. We expect this efficient automatic rib sequence labeling system to be applicable in clinical diagnostic environments.
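
A minimal sketch (Python; not the authors' pipeline) of the intensity-based route described above: threshold the CT volume to a bone mask, group the mask into 3D components, and number the components from superior to inferior. The Hounsfield-unit threshold, the minimum component size, and the use of connected-component labeling in place of seeded 3D region growing are illustrative assumptions; the sketch also ignores left/right rib pairing, which a real system must handle.

```python
# Sketch only: crude intensity-based rib labeling on a chest CT volume.
import numpy as np
from scipy import ndimage

def label_rib_sequence(ct_volume_hu, bone_threshold=200, min_voxels=500):
    """ct_volume_hu: 3D array (z, y, x) in Hounsfield units, z = superior -> inferior."""
    bone_mask = ct_volume_hu > bone_threshold  # simple bone segmentation (assumed HU cutoff)
    # 26-connected components stand in for the paper's seeded 3D region growing.
    components, n = ndimage.label(bone_mask, structure=np.ones((3, 3, 3)))
    # Keep components large enough to be rib candidates.
    sizes = ndimage.sum(bone_mask, components, index=np.arange(1, n + 1))
    candidates = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    # Order candidates from superior to inferior by centroid z and number them 1..12.
    centroids = ndimage.center_of_mass(bone_mask, components, candidates)
    ordered = [lab for _, lab in sorted(zip((c[0] for c in centroids), candidates))]
    rib_labels = np.zeros_like(components)
    for rib_number, lab in enumerate(ordered[:12], start=1):
        rib_labels[components == lab] = rib_number
    return rib_labels  # voxel map with values 1..12 for the labeled components
```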


Subject(s)
Ribs, X-Ray Computed Tomography, Computer-Assisted Image Processing, Neural Networks (Computer), Ribs/diagnostic imaging, Thorax, X-Ray Computed Tomography/methods
3.
J Gastroenterol Hepatol ; 36(12): 3548-3555, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34431545

ABSTRACT

BACKGROUND AND AIM: Endoscopic ultrasound (EUS) is the most accurate diagnostic modality for polypoid lesions of the gallbladder (GB), but it is limited by subjective interpretation. Deep learning-based artificial intelligence (AI) algorithms are under development. We evaluated the diagnostic performance of AI in differentiating polypoid lesions using EUS images. METHODS: The diagnostic performance of the EUS-AI system with a ResNet50 architecture was evaluated via three processes: training, internal validation, and testing, using an AI development cohort of 1039 EUS images (836 GB polyps and 203 gallstones). The diagnostic performance was verified using an external validation cohort of 83 patients and compared with the performance of EUS endoscopists. RESULTS: In the AI development cohort, we developed an EUS-AI algorithm and evaluated its diagnostic performance, including sensitivity, specificity, positive predictive value, negative predictive value, and accuracy. For the differential diagnosis of neoplastic and non-neoplastic GB polyps, these values for the EUS-AI were 57.9%, 96.5%, 77.8%, 91.6%, and 89.8%, respectively. In the external validation cohort, we compared the diagnostic performance of the EUS-AI and the endoscopists. For the differential diagnosis of neoplastic and non-neoplastic GB polyps, the sensitivity and specificity were 33.3% and 96.1%, respectively, for the EUS-AI and 74.2% and 44.9% for the endoscopists. Moreover, the accuracy of the EUS-AI fell between the accuracies of mid-level (66.7%) and expert (77.5%) EUS endoscopists. CONCLUSIONS: This newly developed EUS-AI system showed favorable performance for the diagnosis of neoplastic GB polyps, comparable to that of EUS endoscopists.
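
The five figures reported above (sensitivity, specificity, positive predictive value, negative predictive value, and accuracy) all derive from a single confusion matrix. A minimal sketch of their computation in Python, assuming binary labels (1 = neoplastic, 0 = non-neoplastic) and scikit-learn; the function is illustrative and not part of the EUS-AI system.

```python
# Sketch only: standard diagnostic metrics from binary predictions.
from sklearn.metrics import confusion_matrix

def diagnostic_metrics(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),                # true positive rate
        "specificity": tn / (tn + fp),                # true negative rate
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```

For instance, `diagnostic_metrics([0, 0, 1, 1], [0, 1, 1, 1])` returns sensitivity 1.0, specificity 0.5, PPV 2/3, NPV 1.0, and accuracy 0.75.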


Subject(s)
Artificial Intelligence, Gallbladder Neoplasms, Polyps, Deep Learning, Endosonography, Gallbladder Neoplasms/diagnostic imaging, Humans, Polyps/diagnostic imaging, Reproducibility of Results