Results 1 - 7 of 7
1.
Transl Oncol ; 49: 102087, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39159554

ABSTRACT

PURPOSE: To establish a radiomics nomogram based on MRI radiomics features combined with clinical characteristics for distinguishing pleomorphic adenoma (PA) from Warthin tumor (WT).

METHODS: A total of 294 patients with PA (n = 159) or WT (n = 135) confirmed by histopathology between July 2017 and June 2023 were included in this study. Clinical factors, comprising clinical data and MRI features, were analyzed to establish a clinical model. Ten MRI radiomics features were extracted and selected from T1WI and FS-T2WI and used to establish a radiomics model and calculate radiomics scores (Rad-scores). The clinical factors and Rad-scores were then combined as the parameters of a combined model. The discriminative value of the three models was quantified and compared using receiver operating characteristic (ROC) curve and decision curve analysis (DCA), and the best-performing combined model was visualized as a radiomics nomogram.

RESULTS: The combined model showed excellent discrimination between PA and WT in the training set (AUC = 0.998) and testing set (AUC = 0.993) and outperformed the clinical and radiomics models in both the training set (AUC = 0.996 and 0.952, respectively) and the testing set (AUC = 0.954 and 0.849). The DCA showed that the combined model provided more overall clinical usefulness in distinguishing parotid PA from WT than the other two models.

CONCLUSION: An analytical radiomics nomogram based on MRI radiomics features and incorporating clinical factors can effectively distinguish PA from WT.
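The abstract does not give implementation details for the combined model; a common way to build such a nomogram is to fit a logistic regression on the Rad-score plus clinical covariates and report the ROC AUC. The sketch below is a minimal illustration with synthetic data and hypothetical feature names (rad_score, age, sex), not the authors' code.

```python
# Minimal sketch of a radiomics "combined model": logistic regression on
# Rad-score + clinical factors, evaluated with ROC AUC (scikit-learn).
# Feature names and synthetic labels are placeholders, not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 294
X = np.column_stack([
    rng.normal(size=n),            # rad_score from the radiomics model
    rng.normal(55, 12, size=n),    # age
    rng.integers(0, 2, size=n),    # sex (0/1)
])
y = rng.integers(0, 2, size=n)     # 0 = PA, 1 = WT (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
combined = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, combined.predict_proba(X_test)[:, 1])
print(f"combined-model test AUC: {auc:.3f}")
```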

2.
Acad Radiol ; 31(8): 3427-3437, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38458886

ABSTRACT

RATIONALE AND OBJECTIVES: To develop a dual generative adversarial network (GAN) cascaded network (DGCN) for generating super-resolution CT (SRCT) images from normal-resolution CT (NRCT) images and to evaluate its performance on multi-center datasets.

MATERIALS AND METHODS: This retrospective study included 278 patients with chest CT from two hospitals between January 2020 and June 2023; each patient underwent all three examinations: NRCT (512 × 512 matrix, voxel size 0.70 mm × 0.70 mm × 1.0 mm), high-resolution CT (HRCT, 1024 × 1024 matrix, 0.35 mm × 0.35 mm × 1.0 mm), and ultra-high-resolution CT (UHRCT, 1024 × 1024 matrix, 0.17 mm × 0.17 mm × 0.5 mm). Initially, a deep chest CT super-resolution residual network (DCRN) was built to generate HRCT from NRCT. The DCRN was then employed as a pre-trained model for training the DGCN to further enhance resolution along all three axes, ultimately yielding SRCT. PSNR, SSIM, FID, subjective evaluation scores, and objective parameters related to pulmonary nodule segmentation in the testing set were recorded and analyzed.

RESULTS: DCRN obtained a PSNR of 52.16, an SSIM of 0.9941, an FID of 137.713, and an average diameter difference of 0.0981 mm. DGCN obtained a PSNR of 46.50, an SSIM of 0.9990, an FID of 166.421, and an average diameter difference of 0.0981 mm on 39 testing cases. There were no significant differences between the SRCT and UHRCT images in the subjective evaluation.

CONCLUSION: Our model exhibited a significant enhancement in generating HRCT and SRCT images and outperformed established methods in image quality and clinical segmentation accuracy across both internal and external testing datasets.
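Of the reported metrics, PSNR and SSIM are straightforward to reproduce; below is a minimal sketch using scikit-image. The exact evaluation pipeline and intensity ranges used by the authors are not specified, so the setup (random stand-in slices, data_range taken from the reference) is an assumption.

```python
# Sketch: PSNR/SSIM between a generated SR slice and the UHRCT reference,
# assuming both are float arrays on the same grid (stand-in data below).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = np.random.default_rng(1).normal(0, 300, (1024, 1024))              # stand-in UHRCT slice
generated = reference + np.random.default_rng(2).normal(0, 20, (1024, 1024))   # stand-in SRCT slice

data_range = reference.max() - reference.min()
psnr = peak_signal_noise_ratio(reference, generated, data_range=data_range)
ssim = structural_similarity(reference, generated, data_range=data_range)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```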


Subject(s)
Radiographic Image Interpretation, Computer-Assisted , Tomography, X-Ray Computed , Humans , Retrospective Studies , Tomography, X-Ray Computed/methods , Male , Female , Radiographic Image Interpretation, Computer-Assisted/methods , Middle Aged , Lung Neoplasms/diagnostic imaging , Aged , Multiple Pulmonary Nodules/diagnostic imaging , Solitary Pulmonary Nodule/diagnostic imaging , Adult
3.
J Digit Imaging ; 36(5): 2138-2147, 2023 10.
Article in English | MEDLINE | ID: mdl-37407842

ABSTRACT

To develop a deep learning-based model for detecting rib fractures on chest X-rays and to evaluate its performance in a multicenter study. Chest digital radiography (DR) images from 18,631 subjects were used for training, testing, and validation of the deep learning fracture detection model. We first built a pretrained model with SimCLR (a simple framework for contrastive learning of visual representations) using contrastive learning on the training set. The pretrained network was then used as the backbone of a fully convolutional one-stage (FCOS) object detection network to identify rib fractures on chest X-ray images. The detection performance for four types of rib fractures was evaluated on the testing set. A total of 127 images from Data-CZ and 109 images from Data-CH with annotations for the four fracture types were used for evaluation. For Data-CZ, the sensitivities of the detection model with no pretraining, ImageNet pretraining, and DR pretraining were 0.465, 0.735, and 0.822, respectively, with an average of five false positives per scan in all cases. For the Data-CH test set, the sensitivities of the three pretraining strategies were 0.403, 0.655, and 0.748. Across the four fracture types, the model performed best on displaced fractures, with sensitivities of 0.873 and 0.774 at five false positives per scan for the Data-CZ and Data-CH test sets, respectively, followed by nondisplaced fractures, buckle fractures, and old fractures. A pretrained model can significantly improve the performance of deep learning-based rib fracture detection on X-ray images, reducing missed diagnoses and improving diagnostic efficacy.
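The contrastive pretraining step follows SimCLR, whose core is the NT-Xent loss over two augmented views of each image. Below is a minimal PyTorch sketch of that loss only; augmentations, the encoder, and the projection head are omitted, and this is not the authors' code.

```python
# Sketch of the SimCLR NT-Xent loss: z1 and z2 are projection-head outputs
# for two augmented views of the same N images, each of shape (N, D).
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm
    sim = z @ z.t() / temperature                         # pairwise cosine similarities / tau
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    # the positive for sample i is its other augmented view at index (i + N) mod 2N
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))
```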


Subject(s)
Rib Fractures , Humans , Rib Fractures/diagnostic imaging , Tomography, X-Ray Computed/methods , X-Rays , Radiography , Retrospective Studies
4.
J Digit Imaging ; 36(5): 2278-2289, 2023 10.
Article in English | MEDLINE | ID: mdl-37268840

ABSTRACT

Image quality control (QC) is crucial for the accurate diagnosis of knee diseases from radiographs. However, manual QC is subjective, labor-intensive, and time-consuming. In this study, we aimed to develop an artificial intelligence (AI) model to automate the QC procedure typically performed by clinicians. We proposed a fully automatic AI-based QC model for knee radiographs that uses a high-resolution net (HR-Net) to identify predefined key points in the images and then applies geometric calculations to convert the identified key points into three QC criteria: the anteroposterior (AP) and lateral (LAT) overlap ratios and the LAT knee flexion angle. The proposed model was trained and validated on 2212 knee plain radiographs from 1208 patients, with an additional 1572 knee radiographs from 753 patients collected from six external centers for external validation. In the internal validation cohort, the AI model and clinicians showed high intraclass correlation coefficients (ICCs) for the AP fibular head overlap, LAT fibular head overlap, and LAT knee flexion angle of 0.952, 0.895, and 0.993, respectively. In the external validation cohort, the ICCs were also high, at 0.934, 0.856, and 0.991, respectively. There were no significant differences between the AI model and clinicians for any of the three QC criteria, and the AI model required significantly less measurement time than clinicians. The experimental results demonstrated that the AI model performs comparably to clinicians while requiring less time, so the proposed AI-based model has great potential as a convenient tool for clinical practice by automating the QC procedure for knee radiographs.
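The abstract describes converting detected key points into geometric QC criteria. As an illustration, a lateral flexion angle can be computed as the angle between a femoral-axis vector and a tibial-axis vector defined by pairs of key points; the point names and the specific construction below are assumptions for illustration, not the paper's definitions.

```python
# Sketch: derive a lateral knee flexion angle from four hypothetical key points
# (two on the femoral shaft axis, two on the tibial shaft axis), in pixel coordinates.
import numpy as np

def flexion_angle(femur_prox, femur_dist, tibia_prox, tibia_dist):
    femur_axis = np.asarray(femur_dist, float) - np.asarray(femur_prox, float)
    tibia_axis = np.asarray(tibia_dist, float) - np.asarray(tibia_prox, float)
    cos_a = np.dot(femur_axis, tibia_axis) / (np.linalg.norm(femur_axis) * np.linalg.norm(tibia_axis))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example: nearly collinear femoral and tibial axes give a small flexion angle.
print(flexion_angle((100, 50), (300, 60), (310, 65), (500, 90)))
```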


Subject(s)
Artificial Intelligence , Knee Joint , Humans , Knee Joint/diagnostic imaging , Quality Control , Radiography
5.
Acta Radiol ; 64(3): 1184-1193, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36039494

ABSTRACT

BACKGROUND: Differentiating benign schwannoma from its malignant counterpart by neuroimaging alone is not always straightforward and remains confounding in many cases because of atypical imaging presentations encountered in the clinic and the lack of specific diagnostic markers.

PURPOSE: To construct and validate a novel deep learning model based on multi-source magnetic resonance imaging (MRI) for automatically differentiating malignant from benign spinal schwannoma.

MATERIAL AND METHODS: We retrospectively reviewed MRI data from 119 patients with an initial diagnosis of benign or malignant spinal schwannoma confirmed by postoperative pathology. A novel convolutional neural network (CNN)-based deep learning model, GAIN-CP (Guided Attention Inference Network with Clinical Priors), was constructed. An ablation study, fivefold cross-validation, and cross-source experiments were conducted to validate the model. Diagnostic performance was compared among our GAIN-CP model, a conventional radiomics model, and radiologist-based clinical assessment using the area under the receiver operating characteristic curve (AUC) and balanced accuracy (BAC).

RESULTS: The AUC of the proposed GAIN method was 0.83, outperforming the radiomics method (0.65) and the radiologists' evaluations (0.67). By incorporating both the image data and the clinical prior features, GAIN-CP achieved an AUC of 0.95. GAIN-CP also achieved the best performance in the fivefold cross-validation and cross-source experiments.

CONCLUSION: The novel GAIN-CP method can successfully classify malignant versus benign spinal schwannoma from multi-source MR images, showing good prospects for clinical diagnosis.
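The abstract states that image data and clinical prior features are combined but does not describe the fusion mechanism. A common pattern is to concatenate pooled CNN features with a clinical-feature vector before the classification head; the PyTorch sketch below illustrates only that generic pattern (with made-up dimensions) and is not the GAIN-CP architecture.

```python
# Sketch: fuse pooled image features with a clinical-priors vector by concatenation.
# The dimensions (512 image features, 8 clinical priors) are illustrative assumptions.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, image_dim=512, clinical_dim=8, hidden=128, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(image_dim + clinical_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, image_feats, clinical_feats):
        # Concatenate along the feature dimension, then classify.
        return self.head(torch.cat([image_feats, clinical_feats], dim=1))

logits = FusionClassifier()(torch.randn(4, 512), torch.randn(4, 8))  # shape (4, 2)
```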


Subject(s)
Magnetic Resonance Imaging , Neurilemmoma , Humans , Retrospective Studies , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Neurilemmoma/diagnostic imaging , Radiologists
6.
Med Phys ; 50(6): 3612-3622, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36542389

ABSTRACT

BACKGROUND: Ultra-high-resolution computed tomography (UHRCT) has shown great potential for the detection of pulmonary diseases. However, UHRCT scanning generally increases scanning time and radiation exposure. Super-resolution is an increasingly active application in CT imaging that could avoid this higher radiation dose. Recent work has shown that convolutional neural networks, especially generative adversarial network (GAN)-based models, can generate high-resolution CT from phantom images or simulated low-resolution data without extra dose, but studies using clinical CT, particularly lung images, are rare because of the difficulty of collecting paired datasets.

PURPOSE: To generate clinical lung UHRCT from low-resolution computed tomography (LRCT) using a GAN model.

METHODS: Forty-three clinical scans with LRCT and UHRCT were collected in this study. Paired patches were selected using structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) thresholds. A relativistic GAN with gradient guidance was trained to learn the mapping from LRCT to UHRCT. The performance of the proposed method was evaluated using PSNR and SSIM. A reader study with a five-point Likert score (1 = best, 5 = worst) was also performed to assess the proposed method in terms of general quality, diagnostic confidence, sharpness, and denoising level.

RESULTS: Our method achieved a PSNR of 32.60 ± 2.92 and an SSIM of 0.881 ± 0.057 on our clinical CT dataset, outperforming other state-of-the-art methods developed for simulated scenarios. Moreover, the reader study showed that our method achieved good clinical performance in terms of general quality (1.14 ± 0.36), diagnostic confidence (1.36 ± 0.49), sharpness (1.07 ± 0.27), and denoising level (1.29 ± 0.61) compared with other SR methods.

CONCLUSION: This study demonstrated the feasibility of generating UHRCT images from LRCT without longer scanning time or increased radiation dose.
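The paired-patch selection step can be illustrated with scikit-image: an LRCT/UHRCT patch pair is kept only if its SSIM and PSNR exceed chosen thresholds. The threshold values below are made up for illustration, and the registration/resampling needed to align the two resolutions is omitted.

```python
# Sketch: filter LRCT/UHRCT patch pairs by SSIM and PSNR thresholds.
# Assumes patches are already spatially aligned and resampled to the same grid.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def keep_pair(lr_patch, hr_patch, ssim_min=0.6, psnr_min=25.0):
    data_range = float(hr_patch.max() - hr_patch.min()) or 1.0
    ssim = structural_similarity(hr_patch, lr_patch, data_range=data_range)
    psnr = peak_signal_noise_ratio(hr_patch, lr_patch, data_range=data_range)
    return ssim >= ssim_min and psnr >= psnr_min

rng = np.random.default_rng(0)
hr = rng.normal(0, 300, (64, 64))            # stand-in UHRCT patch
lr = hr + rng.normal(0, 30, (64, 64))        # stand-in LRCT patch
print(keep_pair(lr, hr))
```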


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Lung , Signal-To-Noise Ratio
7.
J Thorac Dis ; 13(3): 1327-1337, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33841926

ABSTRACT

BACKGROUND: The peritumoral microenvironment plays an important role in the occurrence, growth, and metastasis of cancer. The aim of this study was to explore the value of a CT image-based deep learning model of the tumor and peritumoral region in predicting the invasiveness of ground-glass nodules (GGNs).

METHODS: Preoperative thin-section chest CT images were retrospectively reviewed in 622 patients with a total of 687 pulmonary GGNs. GGNs were classified according to clinical management strategies as invasive lesions (IAC) or non-invasive lesions (AAH, AIS, and MIA). Two volumes of interest (VOIs) were identified on CT: the gross tumor volume (GTV) and the gross volume of the tumor incorporating the peritumoral region (GPTV). A three-dimensional (3D) DenseNet was used to model and predict GGN invasiveness, with five-fold cross-validation. GTV and GPTV were used as inputs to the compared models. Prediction performance was evaluated by sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC).

RESULTS: The GTV-based model successfully predicted GGN invasiveness, with an AUC of 0.921 (95% CI, 0.896-0.937). Using GPTV, the AUC of the model increased to 0.955 (95% CI, 0.939-0.971).

CONCLUSIONS: The deep learning method performed well in predicting GGN invasiveness, and the GPTV-based model was more effective than the GTV-based model.
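The GPTV input can be thought of as the GTV expanded by a peritumoral margin. One simple way to construct such a region from a binary nodule mask is 3D morphological dilation, as in the sketch below; the margin size and the handling of voxel spacing are assumptions, not the authors' protocol.

```python
# Sketch: expand a binary GTV mask into a GPTV-like region by dilating it
# a fixed number of voxels in 3D (margin of 3 voxels chosen arbitrarily).
import numpy as np
from scipy.ndimage import binary_dilation

gtv_mask = np.zeros((32, 64, 64), dtype=bool)
gtv_mask[14:18, 28:36, 28:36] = True                    # toy nodule mask

gptv_mask = binary_dilation(gtv_mask, iterations=3)     # GTV plus peritumoral margin
peritumoral_only = gptv_mask & ~gtv_mask                # the added peritumoral shell
print(gtv_mask.sum(), gptv_mask.sum(), peritumoral_only.sum())
```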
