1.
PLoS One; 19(8): e0308346, 2024.
Article in English | MEDLINE | ID: mdl-39150966

ABSTRACT

BACKGROUND/PURPOSE: Distal radius fractures (DRFs) account for approximately 18% of fractures in patients 65 years and older. While plain radiographs are standard, the value of high-resolution computed tomography (CT) for the detailed imaging crucial to diagnosis, prognosis, and intervention planning is increasingly recognized. High-definition 3D reconstructions from CT scans are vital for applications such as 3D printing in orthopedics and for the utility of mobile C-arm CT in orthopedic diagnostics. However, concerns over radiation exposure and suboptimal image resolution from some devices necessitate the exploration of advanced computational techniques for refining CT imaging without compromising safety. Therefore, this study aims to use conditional Generative Adversarial Networks (cGANs) to improve the resolution of 3 mm CT images (CT enhancement). METHODS: Following institutional review board approval, paired 3 mm-1 mm CT data from 11 patients with DRFs were collected. A cGAN was used to improve the resolution of 3 mm CT images to match that of 1 mm images (CT enhancement). Two distinct methods were employed for training and image generation. In Method 1, a 3 mm raw CT image was used as input with the aim of generating a 1 mm raw CT image. Method 2 was designed to emphasize the difference between the 3 mm and 1 mm images: using a 3 mm raw CT image as input, it produced the difference in image values between the 3 mm and 1 mm CT scans. Image quality was evaluated with quantitative metrics, such as peak signal-to-noise ratio (PSNR), mean squared error (MSE), and structural similarity index (SSIM), and with qualitative assessments by two orthopedic surgeons, who assigned a grade from 1 to 4, where a lower number indicates higher resolution quality. RESULTS: Quantitative evaluations showed that the proposed techniques, particularly the difference-based Method 2, consistently outperformed traditional approaches in achieving higher image resolution. In the qualitative evaluation by two clinicians, Method 2 images received better grades (Method 1, 2.7; Method 2, 2.2), and Method 2 images were chosen more often as similar to the 1 mm slice images (15 vs. 7, p = .201). CONCLUSION: In this study using a cGAN to enhance CT imaging resolution, the method focusing on the difference between the 3 mm and 1 mm images (Method 2) consistently outperformed Method 1.
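
The core idea of Method 2 is residual learning: rather than regressing the 1 mm slice directly, the generator predicts the difference image, which is added back to the 3 mm input. Below is a minimal PyTorch sketch of that idea with a pix2pix-style conditional adversarial term; the architecture, layer sizes, and loss weight are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of Method 2's residual (difference-image) training idea.
# Architecture and hyperparameters are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    """Predicts the difference between a 3 mm and a 1 mm CT slice."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x_3mm):
        diff = self.net(x_3mm)       # predicted difference image
        return x_3mm + diff, diff    # enhanced slice, plus the residual itself

def generator_loss(disc, x_3mm, y_1mm, enhanced, diff, l1_weight=100.0):
    """Adversarial term (conditioned on the 3 mm input, pix2pix-style)
    plus an L1 term supervising the predicted difference image."""
    bce = nn.BCEWithLogitsLoss()
    fake_logits = disc(torch.cat([x_3mm, enhanced], dim=1))
    adv = bce(fake_logits, torch.ones_like(fake_logits))
    l1 = nn.functional.l1_loss(diff, y_1mm - x_3mm)  # target is the true residual
    return adv + l1_weight * l1
```

Supervising the residual rather than the full image lets the network spend its capacity on high-frequency detail, which is one plausible reason a difference-based formulation would outperform direct slice-to-slice generation.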


Subjects
Radius Fractures; Tomography, X-Ray Computed; Wrist Fractures; Humans; Neural Networks, Computer; Radius Fractures/diagnostic imaging; Tomography, X-Ray Computed/methods; Wrist Fractures/diagnostic imaging
2.
Cancers (Basel); 14(23), 2022 Dec 05.
Article in English | MEDLINE | ID: mdl-36497481

ABSTRACT

We previously constructed a VGG-16-based artificial intelligence (AI) model (image classifier [IC]) to predict invasion depth in early gastric cancer (EGC) using static endoscopic images. However, static images cannot capture the spatio-temporal information available during real-time endoscopy, so an AI trained on them could not estimate invasion depth accurately and reliably. We therefore constructed a video classifier [VC] using videos for real-time depth prediction in EGC. We built the VC by attaching sequential layers to the last convolutional layer of IC v2 and training on video clips. To assess consistency, we computed the standard deviation (SD) of the output probabilities for each video clip, along with frame-level sensitivities. The sensitivity, specificity, and accuracy of IC v2 for static images were 82.5%, 82.9%, and 82.7%, respectively. For video clips, however, the sensitivity, specificity, and accuracy of IC v2 were 33.6%, 85.5%, and 56.6%, respectively. The VC analyzed the videos better, with a sensitivity of 82.3%, a specificity of 85.8%, and an accuracy of 83.7%. Furthermore, the mean SD was lower for the VC than for IC v2 (0.096 vs. 0.289). An AI model developed using videos can predict invasion depth in EGC more precisely and consistently than image-trained models, and it is more appropriate for real-world situations.
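
A common way to attach "sequential layers" to a convolutional backbone is a CNN + recurrent head. The sketch below shows that pattern together with the per-clip SD consistency score the abstract describes; the LSTM head, hidden size, and pooling are assumptions, since the abstract does not specify the layers used.

```python
# Sketch: recurrent head on a VGG-16 backbone for clip-level depth prediction,
# plus a per-clip consistency score (SD of frame probabilities, lower = steadier).
# The LSTM head is an assumption; the paper only says "sequential layers".
import torch
import torch.nn as nn
from torchvision import models

class VideoClassifier(nn.Module):
    def __init__(self, hidden=256, num_classes=2):
        super().__init__()
        vgg = models.vgg16(weights=None)
        self.backbone = vgg.features           # output of the last conv block
        self.pool = nn.AdaptiveAvgPool2d(1)    # (B*T, 512, 1, 1)
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clip):                   # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.pool(self.backbone(clip.flatten(0, 1))).flatten(1)
        feats = feats.view(b, t, -1)           # re-group frames into sequences
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])           # predict from the last time step

def clip_consistency(frame_probs: torch.Tensor) -> float:
    """SD of per-frame probabilities for one clip, as used to compare VC and IC."""
    return frame_probs.std().item()
```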

3.
Transl Lung Cancer Res; 11(1): 14-23, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35242624

ABSTRACT

BACKGROUND: Thoracic lymph node (LN) evaluation is essential for accurately diagnosing lung cancer and deciding the appropriate course of treatment. Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is considered a standard method for mediastinal nodal staging. This study aims to build a deep convolutional neural network (CNN) for the automatic classification of metastatic malignancies involving thoracic LNs, using EBUS-TBNA. METHODS: Patients who underwent EBUS-TBNA to assess the presence of malignancy in mediastinal LNs during a ten-month period at Severance Hospital, Seoul, Republic of Korea, were included in the study. Corresponding LN ultrasound images, pathology reports, demographic data, and clinical histories were collected and analyzed. RESULTS: A total of 2,394 endobronchial ultrasound (EBUS) images were collected, covering 1,459 benign LNs from 193 patients and 935 malignant LNs from 177 patients. We employed the visual geometry group (VGG)-16 network to classify malignant LNs, initially using only traditional cross-entropy as the classification loss. The sensitivity, specificity, and accuracy of predicting malignancy were 69.7%, 74.3%, and 72.0%, respectively, and the overall area under the curve (AUC) was 0.782. We then trained the network with a new loss function; with the modified VGG-16, the AUC improved to 0.80, and the sensitivity, specificity, and accuracy improved to 72.7%, 79.0%, and 75.8%, respectively. In addition, the proposed network can process 63 images per second on a single mainstream graphics processing unit (GPU), making it suitable for real-time analysis of EBUS images. CONCLUSIONS: Deep CNNs can effectively classify malignant LNs from EBUS images. Selecting LNs that require biopsy through real-time EBUS image analysis with deep learning is expected to shorten the EBUS-TBNA procedure, increase the accuracy of lung cancer nodal staging, and improve patient safety.
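
The baseline setup is a standard one: a VGG-16 with its final layer replaced for binary (benign vs. malignant) classification, trained with cross-entropy. The sketch below shows that baseline plus a rough throughput measurement analogous to the reported 63 images/s figure; the abstract does not specify the new loss function, so only the cross-entropy baseline is shown, and all sizes are illustrative.

```python
# Sketch: binary malignancy classifier for EBUS LN images (VGG-16 backbone,
# plain cross-entropy baseline) and a rough images-per-second estimate.
import time
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=None)
model.classifier[-1] = nn.Linear(4096, 2)   # benign vs. malignant
criterion = nn.CrossEntropyLoss()

def train_step(images, labels, optimizer):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def throughput(model, batch, n_iters=50):
    """Rough images/second estimate, analogous to the reported 63 img/s."""
    model.eval()
    start = time.perf_counter()
    for _ in range(n_iters):
        model(batch)
    if batch.is_cuda:
        torch.cuda.synchronize()            # wait for queued GPU work
    return n_iters * batch.shape[0] / (time.perf_counter() - start)
```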

4.
J Clin Med; 8(9), 2019 Aug 26.
Article in English | MEDLINE | ID: mdl-31454949

ABSTRACT

In early gastric cancer (EGC), tumor invasion depth is an important factor in determining the treatment method. However, endoscopic ultrasonography has limitations in measuring exact depth in a clinical setting, so endoscopists often depend on gross findings and personal experience. The present study aimed to develop a model optimized for EGC detection and depth prediction, and to investigate the factors affecting artificial intelligence (AI) diagnosis. We employed a visual geometry group (VGG)-16 model to classify endoscopic images as EGC (T1a or T1b) or non-EGC. To induce the model to activate EGC regions during training, we proposed a novel loss function that simultaneously measures classification and localization errors. We experimented with 11,539 endoscopic images (896 T1a EGC, 809 T1b EGC, and 9,834 non-EGC). The areas under the receiver operating characteristic curves for EGC detection and depth prediction were 0.981 and 0.851, respectively. Among the factors affecting AI prediction of tumor depth, only histologic differentiation was significantly associated: undifferentiated-type histology exhibited lower AI accuracy. Thus, the lesion-based model is an appropriate training method for AI in EGC. However, further improvements and validation are required, especially for undifferentiated-type histology.
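
One simple way to combine classification and localization errors, as the lesion-based training described above requires, is to add a penalty on the mismatch between the network's class activation map and an annotated lesion mask. The abstract does not give the paper's exact formulation, so the CAM-based localization term, its weight, and all tensor shapes below are assumptions for illustration only.

```python
# Sketch of a combined classification + localization loss: cross-entropy plus
# a penalty on the mismatch between a class activation map and a lesion mask.
# The CAM-based term and its weight are illustrative assumptions, not the
# paper's published loss.
import torch
import torch.nn.functional as F

def lesion_aware_loss(logits, cam, lesion_mask, labels, loc_weight=0.5):
    """logits: (B, C) class scores; cam: (B, H, W) raw activation map;
    lesion_mask: (B, H, W) binary lesion annotation; labels: (B,) class ids."""
    cls_loss = F.cross_entropy(logits, labels)
    cam_norm = torch.sigmoid(cam)                        # squash CAM to [0, 1]
    loc_loss = F.binary_cross_entropy(cam_norm, lesion_mask.float())
    return cls_loss + loc_weight * loc_loss
```

The localization term pushes high activations onto the annotated lesion region, which matches the stated goal of inducing the model to activate EGC regions during training.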
