Results 1 - 8 of 8
1.
Clin Orthop Surg ; 16(1): 113-124, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38304219

ABSTRACT

Background: Recently, deep learning techniques have been used in medical imaging studies. We present an algorithm that measures radiologic parameters of distal radius fractures using a deep learning technique and compares the predicted parameters with those measured by an orthopedic hand surgeon.
Methods: We collected anteroposterior (AP) and lateral X-ray images of 634 wrists in 624 patients with distal radius fractures treated conservatively, with a follow-up of at least 2 months. We allocated 507 AP and 507 lateral images to the training set (80% of the images were used to train the model and 20% for validation) and 127 AP and 127 lateral images to the test set. The margins of the radius and ulna were annotated as ground truth, and the scaphoid in the lateral views was annotated with a bounding box to determine the volar side of the images. Radius segmentation was performed using attention U-Net, and the volar/dorsal side was identified using a detection and classification model based on RetinaNet. The proposed algorithm measures the radial inclination, dorsal or volar tilt, and radial height using index axes and points derived from the segmented radius and ulna.
Results: The segmentation model for the radius exhibited an accuracy of 99.98% and a Dice similarity coefficient (DSC) of 98.07% for AP images, and an accuracy of 99.75% and a DSC of 94.84% for lateral images. The segmentation model for the ulna showed an accuracy of 99.84% and a DSC of 96.48%. Comparing the radial inclinations measured by the algorithm and by the manual method, the Pearson correlation coefficient was 0.952 and the intraclass correlation coefficient was 0.975. For dorsal/volar tilt, the correlation coefficient was 0.940 and the intraclass correlation coefficient was 0.968. For radial height, they were 0.768 and 0.868, respectively.
Conclusions: The deep learning-based algorithm demonstrated excellent segmentation of the distal radius and ulna in AP and lateral radiographs of wrists with distal radius fractures and afforded automatic measurement of radiologic parameters.


Subject(s)
Deep Learning , Radius Fractures , Wrist Fractures , Humans , Radius Fractures/surgery , Radiography , Radius (Anatomy)/diagnostic imaging , Bone Plates
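The parameter-measurement step described in the abstract above (angles derived from index axes and points on the segmented bones) can be sketched in plain geometry. This is an illustrative reconstruction, not the paper's algorithm: the landmark names, the image coordinate convention (y grows downward), and the assumption of a known shaft-axis orientation are all hypothetical.

```python
import math

def radial_inclination(styloid, ulnar_corner, shaft_axis_deg=90.0):
    """Radial inclination in degrees on an AP view: the angle between
    the distal articular line (radial styloid tip -> ulnar corner of
    the distal radius) and the perpendicular to the radial shaft axis.
    Points are (x, y) pixel coordinates with y growing downward;
    shaft_axis_deg is the shaft orientation (90 = vertical)."""
    dx = ulnar_corner[0] - styloid[0]
    dy = ulnar_corner[1] - styloid[1]
    articular_deg = math.degrees(math.atan2(dy, dx))
    perpendicular_deg = shaft_axis_deg - 90.0  # line orthogonal to the shaft
    return abs(articular_deg - perpendicular_deg)

# With a vertical shaft, a styloid tip 10 px distal to the ulnar corner
# over a 20 px horizontal span gives an inclination of about 26.6 degrees.
angle = radial_inclination(styloid=(0, 10), ulnar_corner=(20, 0))
```

Tilt and radial height would follow the same pattern: angles and distances between reference axes and points read off the segmentation masks.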
2.
Comput Math Methods Med ; 2023: 7714483, 2023.
Article in English | MEDLINE | ID: mdl-37284168

ABSTRACT

The primary symptom of both appendicitis and diverticulitis is pain in the right lower abdomen, so it is almost impossible to distinguish these conditions from symptoms alone, and misdiagnoses can occur even with abdominal computed tomography (CT) scans. Most previous studies have used a 3D convolutional neural network (CNN), which is suited to processing sequences of images. However, 3D CNN models can be difficult to implement on typical computing systems because they require large amounts of data, GPU memory, and extensive training time. We propose a deep learning method that uses red, green, and blue (RGB) channel-superposition images reconstructed from three slices of the image sequence. Using the RGB superposition image as the model input, the average accuracy was 90.98% with EfficientNetB0, 91.27% with EfficientNetB2, and 91.98% with EfficientNetB4. The AUC score with the RGB superposition image was higher than with the original single-channel image for EfficientNetB4 (0.967 vs. 0.959, p = 0.0087). Comparing model architectures under the RGB superposition method, EfficientNetB4 showed the highest learning performance on all indicators: accuracy was 91.98% and recall was 95.35%. EfficientNetB4 with the RGB superposition method had an AUC score 0.011 higher (p = 0.0001) than EfficientNetB0 with the same method. The superposition of sequential slices from CT scans was used to enhance the distinction of features such as shape, target size, and the spatial information used to classify disease. The proposed method has fewer constraints than the 3D CNN approach and is suitable for a 2D CNN environment; thus, performance improvement can be achieved with limited resources.


Subject(s)
Abdomen , Appendicitis , Diverticulitis , Tomography, X-Ray Computed , Humans , Abdomen/diagnostic imaging , Appendicitis/diagnostic imaging , Diverticulitis/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
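The channel-superposition idea in the abstract above — feeding three consecutive slices to an ordinary 2-D CNN as the R, G, and B channels of one image — is simple to express with NumPy. A minimal sketch, assuming the slices have already been windowed and normalized to [0, 1]:

```python
import numpy as np

def rgb_superposition(volume, i):
    """Pack three consecutive CT slices (i-1, i, i+1) into the R, G,
    and B channels of one image, so an ordinary 2-D CNN receives a
    hint of through-plane context. `volume` has shape (slices, H, W)
    and is assumed to be already windowed/normalized to [0, 1]."""
    r, g, b = volume[i - 1], volume[i], volume[i + 1]
    return np.stack([r, g, b], axis=-1)  # shape (H, W, 3)
```

The resulting (H, W, 3) array drops straight into any 2-D classifier that expects RGB input, which is the source of the method's low resource requirements compared with a 3D CNN.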
3.
PLoS One ; 18(5): e0281498, 2023.
Article in English | MEDLINE | ID: mdl-37224137

ABSTRACT

This study aimed to develop a convolutional neural network (CNN) using the EfficientNet algorithm for the automated classification of acute appendicitis, acute diverticulitis, and normal appendix, and to evaluate its diagnostic performance. We retrospectively enrolled 715 patients who underwent contrast-enhanced abdominopelvic computed tomography (CT). Of these, 246 patients had acute appendicitis, 254 had acute diverticulitis, and 215 had a normal appendix. Training, validation, and test data were obtained from 4,078 CT images (1,959 acute appendicitis, 823 acute diverticulitis, and 1,296 normal appendix cases) using both the single-image and serial (RGB [red, green, blue]) image methods. We augmented the training dataset to avoid training disturbances caused by unbalanced CT datasets. For classification of the normal appendix, the RGB serial image method showed slightly higher sensitivity (89.66% vs. 87.89%; p = 0.244), accuracy (93.62% vs. 92.35%), and specificity (95.47% vs. 94.43%) than the single-image method. For classification of acute diverticulitis, the RGB serial image method also yielded slightly higher sensitivity (83.35% vs. 80.44%; p = 0.019), accuracy (93.48% vs. 92.15%), and specificity (96.04% vs. 95.12%) than the single-image method. Moreover, the mean areas under the receiver operating characteristic curve (AUCs) were significantly higher with the RGB serial image method than with the single-image method for acute appendicitis (0.951 vs. 0.937; p < 0.0001), acute diverticulitis (0.972 vs. 0.963; p = 0.0025), and normal appendix (0.979 vs. 0.972; p = 0.0101). Thus, our model could accurately distinguish acute appendicitis, acute diverticulitis, and normal appendix on CT images, particularly when using the RGB serial image method.


Subject(s)
Appendicitis , Appendix , Diverticulitis , Humans , Appendicitis/diagnostic imaging , Appendix/diagnostic imaging , Retrospective Studies , Acute Disease , Diverticulitis/diagnostic imaging , Tomography, X-Ray Computed
4.
Diagnostics (Basel) ; 14(1)2023 Dec 28.
Article in English | MEDLINE | ID: mdl-38201385

ABSTRACT

Most gastric disease prediction models have been developed from models pre-trained on natural-image data, such as ImageNet, which lack knowledge of the medical domain. This study proposes Gastro-BaseNet, a classification model trained on gastroscopic image data of abnormal gastric lesions. To demonstrate performance, we compared transfer learning based on two pre-trained models (Gastro-BaseNet and ImageNet) and two training methods (freeze and fine-tune modes). Effectiveness was verified in terms of image-level and patient-level classification, as well as lesion localization performance. Gastro-BaseNet demonstrated superior transfer-learning performance compared with random weight initialization in ImageNet. When developing a model to predict the diagnosis of gastric cancer and gastric ulcers, the transfer-learned model based on Gastro-BaseNet outperformed the one based on ImageNet. Furthermore, performance was highest when the entire network was fine-tuned (fine-tune mode). Additionally, the model based on Gastro-BaseNet showed higher localization performance, confirming accurate detection and classification of lesions in specific locations. This study represents a notable advance in the development of image-analysis models in the medical field, improving diagnostic predictive accuracy and supporting more informed clinical decisions in gastrointestinal endoscopy.
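The freeze-vs-fine-tune distinction drawn in the abstract above can be illustrated with a toy example: a fixed "pre-trained" feature layer standing in for a model like Gastro-BaseNet, plus a trainable classification head. Everything here is synthetic (random weights, made-up data, an arbitrary learning rate); the point is only which parameters receive gradient updates in each mode.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: here just fixed random weights
# standing in for a network pre-trained elsewhere. In freeze mode
# these weights are NOT updated during target-task training.
W_feat = rng.normal(size=(8, 4))

def features(x):
    return np.maximum(x @ W_feat, 0.0)  # frozen ReLU layer

# Trainable classification head, fitted on the target task.
w_head = np.zeros(4)
b_head = 0.0

def predict(x):
    return 1.0 / (1.0 + np.exp(-(features(x) @ w_head + b_head)))

# Toy target-task data.
x = rng.normal(size=(64, 8))
y = (x[:, 0] > 0).astype(float)

lr = 0.1
for _ in range(300):
    grad = predict(x) - y               # dLoss/dlogit for log-loss
    f = features(x)
    w_head -= lr * f.T @ grad / len(x)  # only the head moves;
    b_head -= lr * grad.mean()          # fine-tune mode would update
                                        # W_feat here as well
```

Fine-tuning the entire network, which the study found performed best, corresponds to also back-propagating through `W_feat` instead of holding it fixed.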

5.
Eur Radiol ; 32(5): 3469-3479, 2022 May.
Article in English | MEDLINE | ID: mdl-34973101

ABSTRACT

OBJECTIVES: We aimed to evaluate a commercial artificial intelligence (AI) solution on a multicenter cohort of chest radiographs and to compare physicians' ability to detect and localize referable thoracic abnormalities with and without AI assistance. METHODS: In this retrospective diagnostic cohort study, we investigated 6,006 consecutive patients who underwent both chest radiography and CT. We evaluated a commercially available AI solution intended to facilitate the detection of three chest abnormalities (nodules/masses, consolidation, and pneumothorax) against a reference standard to measure its diagnostic performance. Moreover, twelve physicians, including thoracic radiologists, board-certified radiologists, radiology residents, and pulmonologists, assessed a dataset of 230 randomly sampled chest radiographs. Each physician reviewed the images twice, with and without AI, separated by a 4-week washout period. We measured the impact of AI assistance on observers' AUC, sensitivity, specificity, and the area under the alternative free-response ROC curve (AUAFROC). RESULTS: In the entire set (n = 6,006), the AI solution showed an average sensitivity, specificity, and AUC of 0.885, 0.723, and 0.867, respectively. In the test dataset (n = 230), the average AUC and AUAFROC across observers increased significantly with AI assistance (from 0.861 to 0.886, p = 0.003, and from 0.797 to 0.822, p = 0.003, respectively). CONCLUSIONS: The diagnostic performance of the AI solution was acceptable for images from respiratory outpatient clinics. Physicians' diagnostic performance improved marginally with the use of the AI solution. Further evaluation of AI assistance for chest radiographs with a prospective design is required to prove its efficacy. KEY POINTS: • AI assistance marginally improved physicians' performance in detecting and localizing referable thoracic abnormalities on chest radiographs.
• The detection and localization of referable thoracic abnormalities by pulmonologists and radiology residents improved with the use of AI assistance.


Subject(s)
Artificial Intelligence , Radiography, Thoracic , Cohort Studies , Humans , Outpatients , Prospective Studies , Radiography , Radiography, Thoracic/methods , Retrospective Studies , Sensitivity and Specificity
6.
Front Oncol ; 11: 739639, 2021.
Article in English | MEDLINE | ID: mdl-34778056

ABSTRACT

BACKGROUND: Although accurate treatment response assessment for brain metastases (BMs) is crucial, it is highly labor intensive. This retrospective study aimed to develop a computer-aided detection (CAD) system for automated BM detection and treatment response evaluation using deep learning. METHODS: We included 214 consecutive MRI examinations of 147 patients with BM obtained between January 2015 and August 2016. These were divided into a training dataset (174 MR images from 127 patients) and a test dataset by temporal separation (temporal test set #1; 40 MR images from 20 patients). For external validation, 24 patients with BM and 11 patients without BM from other institutions were included (geographic test set). In addition, we included 12 MRIs from BM patients obtained between August 2017 and March 2020 (temporal test set #2). Detection sensitivity, Dice similarity coefficient (DSC) for segmentation, and agreement between CAD and radiologists on the one-dimensional and volumetric Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) criteria were assessed. RESULTS: In temporal test set #1, the sensitivity was 75.1% (95% confidence interval [CI]: 69.6%, 79.9%), mean DSC was 0.69 ± 0.22, and the false-positive (FP) rate per scan was 0.8 for BM ≥ 5 mm. Agreement on the RANO-BM criteria was moderate (κ = 0.52) for the one-dimensional and substantial (κ = 0.68) for the volumetric criteria. In the geographic test set, sensitivity was 87.7% (95% CI: 77.2%, 94.5%), mean DSC was 0.68 ± 0.20, and the FP rate per scan was 1.9 for BM ≥ 5 mm. In temporal test set #2, sensitivity was 94.7% (95% CI: 74.0%, 99.9%), mean DSC was 0.82 ± 0.20, and the FP rate per scan was 0.5 (6/12) for BM ≥ 5 mm. CONCLUSIONS: Our CAD showed potential for automated treatment response assessment of BM ≥ 5 mm.
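The Dice similarity coefficient reported throughout the abstract above has a one-line definition: twice the overlap of the predicted and reference masks divided by their combined size. A minimal NumPy version:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |truth|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0  # two empty masks agree
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the mean DSCs of 0.68-0.82 above indicate substantial but imperfect lesion segmentation.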

8.
PLoS One ; 16(2): e0246472, 2021.
Article in English | MEDLINE | ID: mdl-33606779

ABSTRACT

PURPOSE: This study evaluated the performance of a commercially available deep-learning algorithm (DLA) (Insight CXR, Lunit, Seoul, South Korea) for referable thoracic abnormalities on chest X-ray (CXR) using a consecutively collected multicenter health screening cohort. METHODS AND MATERIALS: A consecutive health screening cohort of participants who underwent both CXR and chest computed tomography (CT) within 1 month was retrospectively collected from three institutions' health care clinics (n = 5,887). Referable thoracic abnormalities were defined as any radiologic findings requiring further diagnostic evaluation or management, including the DLA-target lesions of nodule/mass, consolidation, or pneumothorax. We evaluated the diagnostic performance of the DLA for referable thoracic abnormalities using the area under the receiver operating characteristic (ROC) curve (AUC), sensitivity, and specificity, with ground truth based on chest CT (CT-GT). In addition, for CT-GT-positive cases, three independent radiologists read the CXRs, and clearly visible (called by at least two radiologists) and visible (called by at least one radiologist) abnormalities were defined as CXR-GTs (clearly visible CXR-GT and visible CXR-GT, respectively) to evaluate the performance of the DLA. RESULTS: Among 5,887 subjects (4,329 males; mean age, 54 ± 11 years), referable thoracic abnormalities were found in 618 (10.5%) based on CT-GT. DLA-target lesions were observed in 223 (4.0%): nodule/mass in 202 (3.4%), consolidation in 31 (0.5%), and pneumothorax in 1 (<0.1%); DLA-non-target lesions were observed in 409 (6.9%). For referable thoracic abnormalities based on CT-GT, the DLA showed an AUC of 0.771 (95% confidence interval [CI], 0.751-0.791), a sensitivity of 69.6%, and a specificity of 74.0%. Based on CXR-GT, the prevalence of referable thoracic abnormalities decreased, with visible and clearly visible abnormalities found in 405 (6.9%) and 227 (3.9%) cases, respectively.
The performance of the DLA increased significantly with the CXR-GTs: based on visible CXR-GT, an AUC of 0.839 (95% CI, 0.829-0.848), a sensitivity of 82.7%, and a specificity of 73.2%; based on clearly visible CXR-GT, an AUC of 0.872 (95% CI, 0.863-0.880; p < 0.001 for the AUC comparison of CT-GT vs. clearly visible CXR-GT), a sensitivity of 83.3%, and a specificity of 78.8%. CONCLUSION: The DLA provided fair-to-good stand-alone performance for the detection of referable thoracic abnormalities in a multicenter consecutive health screening cohort. The DLA's performance varied with the method used to define ground truth.


Subject(s)
Algorithms , Deep Learning , Radiography, Thoracic/methods , Adult , Aged , Female , Humans , Male , Middle Aged , Multicenter Studies as Topic , ROC Curve , Retrospective Studies
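The stand-alone metrics reported in the abstract above (sensitivity, specificity, and AUC) all fall out of the model's per-case scores and the ground-truth labels. A dependency-free sketch, using made-up scores purely for illustration:

```python
def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity and specificity of a scored classifier at a fixed
    operating threshold; labels are 1 (abnormal) / 0 (normal)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, labels):
    """Rank-based (Mann-Whitney) AUC: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative
    one, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Because sensitivity and specificity depend on the chosen threshold while the AUC does not, a study like the one above can report one AUC per ground-truth definition alongside a single operating point.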