Results 1 - 20 of 21
1.
Eur Radiol ; 33(12): 8879-8888, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37392233

ABSTRACT

OBJECTIVES: To develop a deep learning (DL) method that can determine the Liver Imaging Reporting and Data System (LI-RADS) grade of high-risk liver lesions and distinguish hepatocellular carcinoma (HCC) from non-HCC based on multiphase CT. METHODS: This retrospective study included 1049 patients with 1082 lesions from two independent hospitals, all pathologically confirmed as HCC or non-HCC. All patients underwent a four-phase CT imaging protocol. All lesions were graded (LR-4/5/M) by radiologists and divided into an internal (n = 886) and an external cohort (n = 196) based on the examination date. In the internal cohort, Swin Transformer models based on different CT protocols were trained and tested for their ability to perform LI-RADS grading and distinguish HCC from non-HCC, and then validated in the external cohort. We further developed a combined model using the optimal protocol and clinical information for distinguishing HCC from non-HCC. RESULTS: In the test and external validation cohorts, the three-phase protocol without pre-contrast achieved κ values of 0.6094 and 0.4845 for LI-RADS grading, with accuracies of 0.8371 and 0.8061, while the accuracy of the radiologists was 0.8596 and 0.8622, respectively. The AUCs for distinguishing HCC from non-HCC were 0.865 and 0.715 in the test and external validation cohorts, while those of the combined model were 0.887 and 0.808. CONCLUSION: The Swin Transformer based on the three-phase CT protocol without pre-contrast could feasibly simplify LI-RADS grading and distinguish HCC from non-HCC. Furthermore, the DL model has the potential to accurately distinguish HCC from non-HCC using imaging and highly characteristic clinical data as inputs. CLINICAL RELEVANCE STATEMENT: Applying a deep learning model to multiphase CT improves the clinical applicability of the Liver Imaging Reporting and Data System and provides support for optimizing the management of patients with liver diseases.
KEY POINTS: • Deep learning (DL) simplifies LI-RADS grading and helps distinguish hepatocellular carcinoma (HCC) from non-HCC. • The Swin Transformer based on the three-phase CT protocol without pre-contrast outperformed other CT protocols. • The Swin Transformer helps distinguish HCC from non-HCC by using CT and characteristic clinical information as inputs.
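The κ values reported above measure chance-corrected agreement between the model's LI-RADS grades and the reference grades. A minimal pure-Python sketch of Cohen's kappa (an illustration of the metric, not the authors' code, with made-up labels):

```python
from collections import Counter

def cohen_kappa(a, b):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # expected agreement if the two raters labelled independently at their marginal rates
    p_e = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# hypothetical model vs. radiologist grades (LR-4 / LR-5 / LR-M)
model = ["LR-4", "LR-4", "LR-5", "LR-5"]
rad   = ["LR-4", "LR-5", "LR-5", "LR-5"]
print(cohen_kappa(model, rad))  # 0.5
```

A κ around 0.6 (as in the internal test set) is conventionally read as "substantial" agreement; the drop to 0.48 externally suggests some site shift.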


Subject(s)
Carcinoma, Hepatocellular; Deep Learning; Liver Neoplasms; Humans; Carcinoma, Hepatocellular/pathology; Liver Neoplasms/pathology; Retrospective Studies; Tomography, X-Ray Computed/methods; Magnetic Resonance Imaging/methods; Contrast Media; Sensitivity and Specificity
2.
Biomed Eng Online ; 22(1): 129, 2023 Dec 19.
Article in English | MEDLINE | ID: mdl-38115029

ABSTRACT

BACKGROUND: Haemorrhagic transformation (HT) is a serious complication of intravenous thrombolysis (IVT) in acute ischaemic stroke (AIS). Accurate and timely prediction of the risk of HT before IVT may change the treatment decision and improve clinical prognosis. We aimed to develop a deep learning method for predicting HT after IVT for AIS using noncontrast computed tomography (NCCT) images. METHODS: We retrospectively collected data from 828 AIS patients undergoing recombinant tissue plasminogen activator (rt-PA) treatment within a 4.5-h time window (n = 665) or urokinase treatment within a 6-h time window (n = 163) and divided them into the HT group (n = 69) and the non-HT group (n = 759). HT was defined based on the criteria of the European Cooperative Acute Stroke Study-II trial. To address the problems of indiscernible features and imbalanced data, a weakly supervised deep learning (WSDL) model for HT prediction was constructed based on multiple instance learning and active learning, using admission NCCT images and clinical information, in addition to conventional deep learning models. Threefold cross-validation and transfer learning were performed to confirm the robustness of the network. Of note, the predictive value of the clinical scales commonly used in association with NCCT images (i.e., the HAT and SEDAN scores) was also analysed and compared to assess the feasibility of our proposed DL algorithms. RESULTS: Compared to the conventional DL and ML models, the WSDL model had the highest AUC of 0.799 (95% CI 0.712-0.883). Significant differences were observed between the WSDL model and five ML models (P < 0.05). The WSDL model also outperformed the HAT and SEDAN scores at the optimal operating point (threshold = 1.5). Further subgroup analysis showed that the WSDL model performed better for symptomatic intracranial haemorrhage (AUC = 0.833, F1 score = 0.909).
CONCLUSIONS: Our WSDL model based on NCCT images had relatively good performance for predicting HT in AIS and may be suitable for assisting in clinical treatment decision-making.
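Multiple instance learning, as used in the WSDL model above, treats each scan as a "bag" of slice-level instances carrying only a bag-level (HT / non-HT) label. A common aggregation rule is max-pooling over instance scores; the sketch below is a generic illustration of that idea (not the paper's implementation), with invented probabilities:

```python
def bag_score(instance_scores):
    """Bag-level probability under the standard MIL assumption:
    a bag is positive if at least one instance is positive."""
    return max(instance_scores)

def bag_label(instance_scores, threshold=0.5):
    """Binarize the bag score at a decision threshold."""
    return int(bag_score(instance_scores) >= threshold)

# hypothetical slice-level HT probabilities for one NCCT scan
slices = [0.05, 0.10, 0.72, 0.30]
print(bag_score(slices))   # 0.72
print(bag_label(slices))   # 1 -> predicted haemorrhagic transformation
```

Max-pooling matches the clinical intuition that a single suspicious slice is enough to flag the scan, which also helps with the label-ambiguity problem the abstract mentions.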


Subject(s)
Brain Ischemia; Deep Learning; Ischemic Stroke; Stroke; Humans; Tissue Plasminogen Activator/therapeutic use; Stroke/diagnostic imaging; Stroke/drug therapy; Stroke/complications; Brain Ischemia/diagnostic imaging; Brain Ischemia/drug therapy; Brain Ischemia/complications; Retrospective Studies; Thrombolytic Therapy; Ischemic Stroke/diagnostic imaging; Ischemic Stroke/drug therapy; Ischemic Stroke/complications; Tomography, X-Ray Computed; Hemorrhage/complications; Hemorrhage/drug therapy
3.
Radiol Med ; 128(12): 1483-1496, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37749461

ABSTRACT

OBJECTIVE: To investigate the value of computed tomography (CT) radiomics derived from different peritumoral volumes of interest (VOIs) in predicting epidermal growth factor receptor (EGFR) mutation status in lung adenocarcinoma patients. MATERIALS AND METHODS: A retrospective cohort of 779 patients with pathologically confirmed lung adenocarcinoma was enrolled. Of these, 640 patients were randomly divided into a training set, a validation set, and an internal testing set (3:1:1), and the remaining 139 patients were defined as an external testing set. The intratumoral VOI (VOI_I) was manually delineated on thin-slice CT images, and seven peritumoral VOIs (VOI_P) were automatically generated with 1, 2, 3, 4, 5, 10, and 15 mm expansions along the VOI_I. In total, 1454 radiomic features were extracted from each VOI. The t-test, the least absolute shrinkage and selection operator (LASSO), and the minimum redundancy maximum relevance (mRMR) algorithm were used for feature selection, followed by the construction of radiomics models (VOI_I model, VOI_P model and combined model). The performance of the models was evaluated by the area under the curve (AUC). RESULTS: 399 patients were classified as EGFR mutant (EGFR+), while 380 were wild-type (EGFR-). In the training and validation sets, the internal testing set, and the external testing set, the VOI4 (intratumoral plus 4 mm peritumoral) model achieved the best predictive performance, with AUCs of 0.877, 0.727, and 0.701, respectively, outperforming the VOI_I model (AUCs of 0.728, 0.698, and 0.653, respectively). CONCLUSIONS: Radiomics extracted from the peritumoral region can add value in predicting the EGFR mutation status of lung adenocarcinoma patients, with an optimal peritumoral range of 4 mm.
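The mRMR step above selects features that are relevant yet minimally redundant with each other. A toy illustration of the redundancy-reduction idea (not the authors' pipeline) is a greedy filter that keeps a feature only if its absolute Pearson correlation with every already-kept feature stays below a threshold; feature names and values here are invented:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length value lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_correlated(features, threshold=0.9):
    """Greedily keep a feature only if |r| < threshold vs. all kept ones.
    `features` maps feature name -> list of per-patient values."""
    kept = []
    for name in features:  # dicts preserve insertion order
        if all(abs(pearson(features[name], features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

feats = {
    "shape_volume":   [10.0, 12.0, 9.0, 15.0],
    "shape_diameter": [20.0, 24.0, 18.0, 30.0],  # perfectly correlated with volume
    "glcm_contrast":  [0.3, 0.1, 0.4, 0.2],
}
print(drop_correlated(feats))  # ['shape_volume', 'glcm_contrast']
```

In practice the surviving features would then go into LASSO for sparse selection; this filter only removes the near-duplicates beforehand.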


Subject(s)
Adenocarcinoma of Lung; Lung Neoplasms; Humans; Adenocarcinoma of Lung/diagnostic imaging; Adenocarcinoma of Lung/genetics; ErbB Receptors/genetics; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/genetics; Mutation; Retrospective Studies; Tomography, X-Ray Computed; Random Allocation
4.
Eur Radiol ; 32(3): 1496-1505, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34553256

ABSTRACT

OBJECTIVES: To develop a deep-learning (DL) model for identifying fresh vertebral compression fractures (VCFs) on digital radiography (DR), with magnetic resonance imaging (MRI) as the reference standard. METHODS: Patients with lumbar VCFs were retrospectively enrolled from January 2011 to May 2020. All patients underwent DR and MRI scanning. VCFs were categorized as fresh or old according to the MRI results, and the VCF grade and type were assessed. The raw DR data were sent to the InferScholar Center for annotation. A DL-based prediction model was built, and its diagnostic performance was evaluated. The DeLong test was applied to assess differences in ROC curves between models. RESULTS: A total of 1877 VCFs in 1099 patients were included in our study and randomly divided into development (n = 824 patients) and test (n = 275 patients) datasets. The ensemble model identified fresh and old VCFs with an AUC of 0.80 (95% confidence interval [CI], 0.77-0.83), an accuracy of 74% (95% CI, 72-77%), a sensitivity of 80% (95% CI, 77-83%), and a specificity of 68% (95% CI, 63-72%). Lateral views (AUC, 0.83) exhibited better performance than anteroposterior views (AUC, 0.77), and the best performance among the respective subgroupings was obtained for the grade 3 (AUC, 0.89) and crush-type (AUC, 0.87) subgroups. CONCLUSION: The proposed DL model achieved adequate performance in identifying fresh VCFs on DR. KEY POINTS: • The ensemble deep-learning model identified fresh VCFs on DR, reaching an AUC of 0.80, an accuracy of 74%, a sensitivity of 80%, and a specificity of 68% with MRI as the reference standard. • Lateral views (AUC, 0.83) exhibited better performance than anteroposterior views (AUC, 0.77). • The grade 3 (AUC, 0.89) and crush-type (AUC, 0.87) subgroups showed the best performance among their respective subgroupings.
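The AUC figures quoted throughout these abstracts have a simple probabilistic reading: the chance that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal pair-counting (Mann-Whitney) sketch, with invented scores:

```python
def auc(labels, scores):
    """AUC = P(score_pos > score_neg), ties counted as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# hypothetical fresh-vs-old VCF scores (1 = fresh, 0 = old)
labels = [0, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.80]
print(auc(labels, scores))  # 0.75
```

The DeLong test mentioned above compares two such AUCs on the same cases while accounting for their correlation; the statistic itself is more involved and is omitted here.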


Subject(s)
Deep Learning; Fractures, Compression; Spinal Fractures; Humans; Radiographic Image Enhancement; Retrospective Studies
5.
BMC Med Imaging ; 22(1): 221, 2022 12 17.
Article in English | MEDLINE | ID: mdl-36528577

ABSTRACT

BACKGROUND: It is clinically difficult to predict normal-sized lymph node metastasis (LNM) in cervical cancer. We aimed to investigate the feasibility of using a deep learning (DL) nomogram based on readout segmentation of long variable echo-trains diffusion-weighted imaging (RESOLVE-DWI) and related patient information to preoperatively predict normal-sized LNM in patients with cervical cancer. METHODS: A dataset of MR images [RESOLVE-DWI and apparent diffusion coefficient (ADC)] and patient information (age, tumor size, International Federation of Gynecology and Obstetrics stage, ADC value and squamous cell carcinoma antigen level) of 169 patients with cervical cancer seen between November 2013 and January 2022 was retrospectively collected. The LNM status was determined by final histopathology. The collected studies were randomly divided into a development cohort (n = 126) and a test cohort (n = 43). A single-channel convolutional neural network (CNN) and a multi-channel CNN based on ResNeSt architectures were proposed for predicting normal-sized LNM from single or multiple modalities of MR images, respectively. A DL nomogram was constructed by incorporating the clinical information and the multi-channel CNN. The models' performance was analyzed by receiver operating characteristic analysis in the test cohort. RESULTS: Compared to the single-channel CNN models using RESOLVE-DWI and ADC respectively, the multi-channel CNN model integrating both MR modalities showed improved performance in the development cohort [AUC 0.848; 95% confidence interval (CI) 0.774-0.906] and the test cohort (AUC 0.767; 95% CI 0.613-0.882). The DL nomogram showed the best performance in the development cohort (AUC 0.890; 95% CI 0.821-0.938) and the test cohort (AUC 0.844; 95% CI 0.701-0.936). CONCLUSION: The DL nomogram incorporating RESOLVE-DWI and clinical information has the potential to preoperatively predict normal-sized LNM in cervical cancer.
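A DL nomogram of the kind described above typically folds the CNN output probability and the clinical covariates into one logistic model. The sketch below illustrates that combination only; every coefficient, the intercept, and the input values are hypothetical, not taken from the paper:

```python
import math

def nomogram_probability(cnn_prob, age, tumor_size_cm, scc_ag, coeffs, intercept):
    """Logistic combination of a CNN output with clinical covariates.
    All coefficients here are illustrative placeholders."""
    z = (intercept
         + coeffs["cnn"] * cnn_prob
         + coeffs["age"] * age
         + coeffs["size"] * tumor_size_cm
         + coeffs["scc"] * scc_ag)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability of LNM

coeffs = {"cnn": 3.0, "age": 0.01, "size": 0.2, "scc": 0.1}
p = nomogram_probability(0.8, age=52, tumor_size_cm=3.5, scc_ag=2.1,
                         coeffs=coeffs, intercept=-4.0)
print(round(p, 3))
```

In a real nomogram the coefficients would be fitted on the development cohort and each term rendered as a points scale for clinical use.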


Subject(s)
Deep Learning; Uterine Cervical Neoplasms; Female; Humans; Lymphatic Metastasis/diagnostic imaging; Nomograms; Uterine Cervical Neoplasms/diagnostic imaging; Uterine Cervical Neoplasms/pathology; Retrospective Studies; Lymph Nodes/diagnostic imaging; Lymph Nodes/pathology
6.
Artif Intell Med ; 149: 102788, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462288

ABSTRACT

BACKGROUND: Deep learning methods have shown great potential in processing multi-modal magnetic resonance imaging (MRI) data, enabling improved accuracy in brain tumor segmentation. However, the performance of these methods can suffer when dealing with incomplete modalities, which is a common issue in clinical practice. Existing solutions, such as missing modality synthesis, knowledge distillation, and architecture-based methods, suffer from drawbacks such as long training times, high model complexity, and poor scalability. METHOD: This paper proposes IMS2Trans, a novel lightweight and scalable Swin Transformer network that utilizes a single encoder to extract latent feature maps from all available modalities. This unified feature extraction process enables efficient information sharing and fusion among the modalities, achieving efficiency without compromising segmentation performance even in the presence of missing modalities. RESULTS: Two datasets with incomplete modalities for brain tumor segmentation, BraTS 2018 and BraTS 2020, were evaluated against popular benchmarks. On the BraTS 2018 dataset, our model achieved higher average Dice similarity coefficient (DSC) scores for the whole tumor, tumor core, and enhancing tumor regions (86.57, 75.67, and 58.28, respectively) than a state-of-the-art model, i.e. mmFormer (86.45, 75.51, and 57.79, respectively). Similarly, on the BraTS 2020 dataset, our model scored higher DSC values in these three brain tumor regions (87.33, 79.09, and 62.11, respectively) compared to mmFormer (86.17, 78.34, and 60.36, respectively). We also conducted a Wilcoxon test on the experimental results, and the resulting p-value confirmed that our model's improvement was statistically significant. Moreover, our model exhibits significantly reduced complexity, with only 4.47 M parameters, 121.89 G FLOPs, and a model size of 77.13 MB, whereas mmFormer comprises 34.96 M parameters, 265.79 G FLOPs, and a model size of 559.74 MB. This indicates that our model, despite being lightweight with significantly fewer parameters, is still able to achieve better performance than a state-of-the-art model. CONCLUSION: By leveraging a single encoder for processing the available modalities, IMS2Trans offers notable scalability advantages over methods that rely on multiple encoders. This streamlined approach eliminates the need to maintain separate encoders for each modality, resulting in a lightweight and scalable network architecture. The source code of IMS2Trans and the associated weights are publicly available at https://github.com/hudscomdz/IMS2Trans.
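The DSC scores above quantify the overlap between predicted and reference segmentation masks. A minimal sketch of the metric on toy voxel-index sets (illustrative only, unrelated to the BraTS pipelines):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two collections of voxel indices."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# toy 1-D "masks" of voxel indices
predicted = [2, 3, 4, 5]
reference = [3, 4, 5, 6, 7]
print(round(100 * dice(predicted, reference), 2))  # 66.67, on the 0-100 scale used above
```

DSC equals 1 for identical masks and 0 for disjoint ones; reporting it per region (whole tumor, core, enhancing) is the BraTS convention.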


Subject(s)
Brain Neoplasms; Humans; Brain Neoplasms/diagnostic imaging; Information Dissemination; Magnetic Resonance Imaging; Image Processing, Computer-Assisted
7.
Eur J Radiol ; 176: 111533, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38833770

ABSTRACT

PURPOSE: To develop and validate an end-to-end model for automatically predicting hematoma expansion (HE) after spontaneous intracerebral hemorrhage (sICH) using a novel deep learning framework. METHODS: This multicenter retrospective study collected cranial noncontrast computed tomography (NCCT) images of 490 patients with sICH at admission for model training (n = 236), internal testing (n = 60), and external testing (n = 194). The HE-Mind model was designed to predict HE; it consists of a densely connected U-Net for the segmentation process, a multi-instance learning strategy for resolving label ambiguity, and a Siamese network for the classification process. Two radiomics models based on support vector machine or logistic regression and two deep learning models based on a residual network or a Swin Transformer were developed for performance comparison. Reader experiments, including a physician-diagnosis mode and an artificial-intelligence mode, were conducted for efficiency comparison. RESULTS: The HE-Mind model showed better performance than the comparative models in predicting HE, with areas under the curve of 0.849 and 0.809 in the internal and external test sets, respectively. With the assistance of the HE-Mind model, the predictive accuracy and work efficiency of the emergency physician, junior radiologist, and senior radiologist were significantly improved, with accuracies of 0.768, 0.789, and 0.809, respectively, and reporting times of 7.26 s, 5.08 s, and 3.99 s, respectively. CONCLUSIONS: The HE-Mind model could rapidly and automatically process NCCT data and predict HE after sICH within three seconds, indicating its potential to assist physicians in the clinical diagnostic workflow for HE.


Subject(s)
Cerebral Hemorrhage; Hematoma; Tomography, X-Ray Computed; Humans; Cerebral Hemorrhage/diagnostic imaging; Cerebral Hemorrhage/complications; Male; Tomography, X-Ray Computed/methods; Retrospective Studies; Hematoma/diagnostic imaging; Female; Middle Aged; Aged; Deep Learning; Support Vector Machine; Disease Progression; Predictive Value of Tests
8.
Abdom Radiol (NY) ; 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39003651

ABSTRACT

PURPOSE: To develop and validate a model for predicting suboptimal debulking surgery (SDS) of serous ovarian carcinoma (SOC) using radiomics, clinical and MRI features. METHODS: 228 eligible patients from institution A (randomly divided into training and internal validation cohorts) and 45 patients from institution B (external validation cohort) were collected and retrospectively analyzed. All patients underwent an abdominal-pelvic enhanced MRI scan, including fat-suppressed fast spin-echo T2-weighted imaging (T2FSE), dual-echo T1-weighted imaging (T1DEI), diffusion-weighted imaging (DWI), and contrast-enhanced T1 (T1CE). For each sequence, we extracted and selected radiomic features and eliminated those that were highly correlated. Radiomics models were then built from each single sequence, the dual sequence (T1CE + T2FSE), and all sequences, respectively. Univariate and multivariate analyses were performed to screen for independent clinical and MRI predictors. The radiomics model with the highest area under the curve (AUC) was combined with the independent predictors to form a combined model. RESULTS: The optimal radiomics model was based on the dual sequence (T2FSE + T1CE) among the five radiomics models (AUC = 0.720, P < 0.05). Serum carbohydrate antigen 125, the relationship between the sigmoid colon/rectum and the ovarian mass or masses implanted in the pouch of Douglas, diaphragm nodules, and peritoneum/mesentery nodules were considered independent predictors. The AUC of the radiomic-clinical-radiological model was higher than that of either the optimal radiomics model or the clinical-radiological model in the training cohort (AUC = 0.908 vs. 0.720/0.854). CONCLUSIONS: The radiomic-clinical-radiological model showed overall algorithmic reproducibility and may help create individualized treatment programs and improve the prognosis of patients with SOC.

9.
Front Artif Intell ; 7: 1321884, 2024.
Article in English | MEDLINE | ID: mdl-38952409

ABSTRACT

Background: Carotid plaques are a major risk factor for stroke, and carotid ultrasound can help assess the risk and incidence of stroke. However, large-scale carotid artery screening is time-consuming and laborious, and the diagnostic results inevitably involve a degree of subjectivity on the part of the diagnostician. Deep learning demonstrates the ability to solve these challenges. We therefore attempted to develop an automated algorithm to provide a more consistent and objective diagnostic method and to identify the presence and stability of carotid plaques using deep learning. Methods: A total of 3,860 ultrasound images from 1,339 participants who underwent carotid plaque assessment between January 2021 and March 2023 at the Shanghai Eighth People's Hospital were divided in a 4:1 ratio for training and internal testing. The external test included 1,564 ultrasound images from 674 participants who underwent carotid plaque assessment between January 2022 and May 2023 at Xinhua Hospital affiliated with Dalian University. Deep learning algorithms based on the fusion of a bilinear convolutional neural network with a residual neural network (BCNN-ResNet) were used to detect carotid plaques and assess plaque stability. We chose AUC as the main evaluation index, with accuracy, sensitivity, and specificity as auxiliary evaluation indices. Results: Modeling for detecting carotid plaques involved training and internal testing on 1,291 ultrasound images, with 617 images showing plaques and 674 without plaques. The external test comprised 470 ultrasound images, including 321 images with plaques and 149 without. Modeling for assessing plaque stability involved training and internal testing on 764 ultrasound images, consisting of 494 images with unstable plaques and 270 with stable plaques. The external test was composed of 279 ultrasound images, including 197 images with unstable plaques and 82 with stable plaques. For the task of identifying the presence of carotid plaques, our model achieved an AUC of 0.989 (95% CI: 0.840, 0.998) with a sensitivity of 93.2% and a specificity of 99.21% on the internal test. On the external test, the AUC was 0.951 (95% CI: 0.939, 0.962) with a sensitivity of 95.3% and a specificity of 82.24%. For the task of identifying the stability of carotid plaques, our model achieved an AUC of 0.896 (95% CI: 0.865, 0.922) on the internal test with a sensitivity of 81.63% and a specificity of 87.27%. On the external test, the AUC was 0.854 (95% CI: 0.830, 0.889) with a sensitivity of 68.52% and a specificity of 89.49%. Conclusion: Deep learning with BCNN-ResNet algorithms based on routine ultrasound images could be useful for detecting carotid plaques and assessing plaque instability.
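The sensitivity and specificity values quoted above come from a confusion matrix at a fixed decision threshold. A minimal sketch with invented plaque-vs-no-plaque predictions (not the study's data):

```python
def confusion(labels, scores, threshold=0.5):
    """Count (TP, FP, TN, FN) for binary labels and probability scores."""
    tp = fp = tn = fn = 0
    for y, s in zip(labels, scores):
        pred = s >= threshold
        if y == 1 and pred:
            tp += 1
        elif y == 1:
            fn += 1
        elif pred:
            fp += 1
        else:
            tn += 1
    return tp, fp, tn, fn

def sensitivity(tp, fn):
    return tp / (tp + fn)  # true positive rate

def specificity(tn, fp):
    return tn / (tn + fp)  # true negative rate

# hypothetical scores: 1 = plaque present, 0 = absent
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.2, 0.1, 0.3, 0.4, 0.7, 0.2]
tp, fp, tn, fn = confusion(labels, scores)
print(sensitivity(tp, fn), specificity(tn, fp))  # 0.75 0.8
```

Moving the threshold trades one off against the other, which is exactly the curve the AUC summarizes.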

10.
Eur J Radiol ; 172: 111348, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38325190

ABSTRACT

PURPOSE: To develop a deep learning (DL) model based on preoperative contrast-enhanced computed tomography (CECT) images to predict microvascular invasion (MVI) and the pathological differentiation of hepatocellular carcinoma (HCC). METHODS: This retrospective study included 640 consecutive patients who underwent surgical resection and were pathologically diagnosed with HCC at two medical institutions from April 2017 to May 2022. CECT images and relevant clinical parameters were collected. All the data were divided into a training set (n = 368), a test set (n = 138) and a validation set (n = 134). Through DL, a segmentation model was used to obtain a region of interest (ROI) of the liver, and a classification model was established to predict the pathological status of HCC. RESULTS: The liver segmentation model based on the 3D U-Net had a mean intersection over union (mIoU) score of 0.9120 and a Dice score of 0.9473. Among all the classification prediction models based on the Swin Transformer, the fusion models combining image information and clinical parameters exhibited the best performance. The area under the curve (AUC) of the fusion model for predicting MVI status was 0.941, its accuracy was 0.917, and its specificity was 0.908. The AUC values of the fusion model for predicting poorly differentiated, moderately differentiated and highly differentiated HCC on the test set were 0.962, 0.957 and 0.996, respectively. CONCLUSION: The established DL models can be used to noninvasively and effectively predict the MVI status and the degree of pathological differentiation of HCC, and aid in clinical diagnosis and treatment.


Subject(s)
Carcinoma, Hepatocellular; Deep Learning; Liver Neoplasms; Humans; Carcinoma, Hepatocellular/diagnostic imaging; Retrospective Studies; Liver Neoplasms/diagnostic imaging; Neoplasm Invasiveness/diagnostic imaging
11.
Clin Transl Gastroenterol ; 14(10): e00551, 2023 10 01.
Article in English | MEDLINE | ID: mdl-36434804

ABSTRACT

INTRODUCTION: The aim of this study was to develop a novel artificial intelligence (AI) system that can automatically detect and classify protruded gastric lesions and help address the challenges of diagnostic accuracy and inter-reader variability encountered in the routine diagnostic workflow. METHODS: We analyzed data from 1,366 participants who underwent gastroscopy at Jiangsu Provincial People's Hospital and Yangzhou First People's Hospital between December 2010 and December 2020. These patients were diagnosed with submucosal tumors (SMTs) including gastric stromal tumors (GISTs), gastric leiomyomas (GILs), and gastric ectopic pancreas (GEP). We trained and validated a multimodal, multipath AI system (MMP-AI) using this dataset. We assessed the diagnostic performance of the proposed AI system using the area under the receiver-operating characteristic curve (AUC) and compared its performance with that of endoscopists with more than 5 years of experience in endoscopic diagnosis. RESULTS: In the three-way classification task among subtypes of SMTs using multimodal images, MMP-AI achieved the highest AUCs of 0.896, 0.890, and 0.999 for classifying GIST, GIL, and GEP, respectively. The performance of the model was verified using both external and internal longitudinal datasets. Compared with endoscopists, MMP-AI achieved higher recognition accuracy for SMTs. DISCUSSION: We developed a system called MMP-AI to identify protruding benign gastric lesions. This system can be used not only for white-light endoscopic image recognition but also for endoscopic ultrasonography image analysis.


Subject(s)
Endosonography; Gastrointestinal Stromal Tumors; Humans; Artificial Intelligence; Endoscopy, Gastrointestinal; Stomach/diagnostic imaging
12.
Cancer Med ; 12(19): 19383-19393, 2023 10.
Article in English | MEDLINE | ID: mdl-37772478

ABSTRACT

BACKGROUND AND PURPOSE: Neoadjuvant chemotherapy (NACT) has become an essential component of the comprehensive treatment of cervical squamous cell carcinoma (CSCC). However, not all patients respond to chemotherapy, owing to individual differences in sensitivity and tolerance to chemotherapy drugs. Accurately predicting the sensitivity of CSCC patients to NACT is therefore vital for individualized chemotherapy. This study aims to construct a machine learning radiomics model based on magnetic resonance imaging (MRI) and to assess its efficacy in predicting NACT susceptibility in CSCC patients. METHODS: This study included 234 patients with CSCC from two hospitals, who were divided into a training set (n = 180), a testing set (n = 20), and an external validation set (n = 34). Handcrafted radiomic features were extracted from transverse-section MRI images, and feature selection was performed using the recursive feature elimination (RFE) method. Prediction models were then generated using three machine learning algorithms, namely logistic regression, random forest, and support vector machines (SVM), for predicting NACT susceptibility. The models' performance was assessed based on the area under the receiver operating characteristic curve (AUC), accuracy, and sensitivity. RESULTS: The SVM approach achieved the highest scores on both the testing set and the external validation set. In the testing set and the external validation set, the AUC of the model was 0.88 and 0.764, the accuracy was 0.90 and 0.853, and the sensitivity was 0.93 and 0.962, respectively. CONCLUSIONS: Machine learning radiomics models based on MRI images achieved satisfactory performance in predicting sensitivity to NACT in CSCC patients with high accuracy and robustness, which is of great significance for the treatment and personalized medicine of CSCC patients.


Subject(s)
Carcinoma, Squamous Cell; Uterine Cervical Neoplasms; Humans; Female; Carcinoma, Squamous Cell/diagnostic imaging; Carcinoma, Squamous Cell/drug therapy; Uterine Cervical Neoplasms/diagnostic imaging; Uterine Cervical Neoplasms/drug therapy; Neoadjuvant Therapy; Magnetic Resonance Imaging; Machine Learning; Retrospective Studies
13.
Front Immunol ; 13: 897959, 2022.
Article in English | MEDLINE | ID: mdl-35774780

ABSTRACT

Background: Differential diagnosis of demyelinating diseases of the central nervous system is a challenging task that is prone to errors and inconsistent reading, requiring expertise and additional examination approaches. Advancements in deep-learning-based image interpretation allow for prompt and automated analysis of conventional magnetic resonance imaging (MRI), which can be utilized in classifying multi-sequence MRI and thus may help in subsequent treatment referral. Methods: Imaging and clinical data from 290 patients diagnosed with demyelinating diseases from August 2013 to October 2021 were included for analysis, comprising 67 patients with multiple sclerosis (MS), 162 patients with aquaporin 4 antibody-positive (AQP4+) neuromyelitis optica spectrum disorder (NMOSD), and 61 patients with myelin oligodendrocyte glycoprotein antibody-associated disease (MOGAD). Considering the heterogeneous nature of lesion size and distribution in demyelinating diseases, multi-modal MRI of the brain and/or spinal cord was utilized to build the deep-learning model. This novel transformer-based deep-learning model architecture was designed to be versatile in handling multiple image sequences (coronal T2-weighted and sagittal T2-fluid attenuation inversion recovery) and scanning locations (brain and spinal cord) for differentiating among MS, NMOSD, and MOGAD. Model performance was evaluated using the area under the receiver operating curve (AUC) and confusion matrix measurements. The classification accuracy of the fusion model was also compared with that of the neuroradiological raters. Results: The fusion model that was trained with combined brain and spinal cord MRI achieved an overall improved performance, with AUCs of 0.933 (95% CI: 0.848, 0.991), 0.942 (95% CI: 0.879, 0.987) and 0.803 (95% CI: 0.629, 0.949) for MS, AQP4+ NMOSD, and MOGAD, respectively. This exceeded the performance using the brain or spinal cord MRI alone for the identification of AQP4+ NMOSD (AUC of 0.940, brain only, and 0.689, spinal cord only) and MOGAD (0.782, brain only, and 0.714, spinal cord only). In the multi-category classification, the fusion model had an accuracy of 81.4%, which was significantly higher than that of rater 1 (64.4%, p = 0.04) and comparable to that of rater 2 (74.6%, p = 0.388). Conclusion: The proposed novel transformer-based model showed desirable performance in the differentiation of MS, AQP4+ NMOSD, and MOGAD on brain and spinal cord MRI, comparable to that of neuroradiologists. Our model is thus applicable for interpreting conventional MRI in the differential diagnosis of demyelinating diseases with overlapping lesions.


Subject(s)
Deep Learning; Multiple Sclerosis; Neuromyelitis Optica; Aquaporin 4; Humans; Multiple Sclerosis/diagnostic imaging; Multiple Sclerosis/pathology; Neuroimaging; Neuromyelitis Optica/diagnostic imaging; Neuromyelitis Optica/pathology; Spinal Cord/pathology
14.
J Clin Med ; 11(15)2022 Aug 08.
Article in English | MEDLINE | ID: mdl-35956236

ABSTRACT

Background: Deep learning (DL) can predict isocitrate dehydrogenase (IDH) mutation status from MRI, but previous work has focused on CNNs that require refined tumor segmentation. To bridge this gap, this study evaluated the feasibility of a Transformer-based network that predicts IDH mutation status without refined tumor segmentation. Methods: A total of 493 glioma patients were recruited from two independent institutions for model development (TCIA; N = 259) and external testing (AHXZ; N = 234). IDH mutation status was predicted directly from T2 images with a Swin Transformer and a conventional ResNet. To investigate whether refined tumor segmentation is necessary, seven input-image strategies were explored: (i) whole tumor slice; (ii-iii) tumor mask with or without edema; (iv-vii) tumor bounding boxes of 0.8, 1.0, 1.2, and 1.5 times the tumor extent. Performance was compared across network architectures and input strategies using the area under the curve (AUC) and accuracy (ACC). Finally, to further boost performance, a hybrid model was built by combining the images with clinical features. Results: With the seven input strategies, seven Swin Transformer models and seven ResNet models were built. Across the seven Swin Transformer models, an average AUC of 0.965 (internal test) and 0.842 (external test) was achieved, outperforming the 0.922 and 0.805 of the seven ResNet models. With a bounding box of 1.0 times, the Swin Transformer (AUC = 0.868, ACC = 80.7%) achieved the best results against the model that used tumor segmentation (tumor + edema: AUC = 0.862, ACC = 78.5%). The hybrid model that integrated age and location features with the images yielded improved performance (AUC = 0.878, ACC = 82.0%) over the image-only model. Conclusions: The Swin Transformer outperforms the CNN-based ResNet in IDH prediction.
Using bounding-box input images benefits DL networks in IDH prediction and makes IDH prediction feasible without refined glioma segmentation.
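The bounding-box input strategies (0.8 to 1.5 times the tumor extent) can be sketched as a simple crop around the rectangular extent of a coarse mask; no refined contour is needed. This is a hypothetical helper for illustration, not the authors' preprocessing code, and it assumes 2D slices.

```python
import numpy as np

def scaled_bbox_crop(image, mask, scale=1.0):
    """Crop a 2D slice around the tumor bounding box, enlarged by `scale`.

    `mask` only needs to mark the rough tumor extent; the crop is the
    axis-aligned box around it, scaled about its center and clipped to
    the image borders.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = (ys.min() + ys.max()) / 2.0, (xs.min() + xs.max()) / 2.0
    h = (ys.max() - ys.min() + 1) * scale
    w = (xs.max() - xs.min() + 1) * scale
    y0 = max(int(round(cy - h / 2)), 0)
    x0 = max(int(round(cx - w / 2)), 0)
    y1 = min(int(round(cy + h / 2)), image.shape[0])
    x1 = min(int(round(cx + w / 2)), image.shape[1])
    return image[y0:y1, x0:x1]

img = np.random.rand(128, 128)
m = np.zeros((128, 128), dtype=bool)
m[40:60, 50:80] = True  # toy "tumor" extent of 20 x 30 pixels
crops = {s: scaled_bbox_crop(img, m, s) for s in (0.8, 1.0, 1.2, 1.5)}
```

Each crop would then be resized to the network's fixed input resolution before being fed to the Swin Transformer or ResNet.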

15.
Front Endocrinol (Lausanne) ; 13: 876559, 2022.
Article in English | MEDLINE | ID: mdl-35655800

ABSTRACT

Objective: To construct and validate prediction models for the risk of diabetic retinopathy (DR) in patients with type 2 diabetes mellitus. Methods: Patients with type 2 diabetes mellitus hospitalized between January 2010 and September 2018 were retrospectively collected. Eighteen baseline demographic and clinical characteristics were used as predictors to train five machine-learning models. The model with the most favorable predictive efficacy was evaluated at annual follow-ups. Multi-point data from patients in the test set were used to further evaluate the model's performance. We also assessed the relative prognostic importance of the selected risk factors for DR outcomes. Results: Of 7943 collected patients, 1692 (21.30%) developed DR during follow-up. Among the five models, the XGBoost model achieved the highest predictive performance, with an AUC, accuracy, sensitivity, and specificity of 0.803, 88.9%, 74.0%, and 81.1%, respectively. The XGBoost model's AUCs across the follow-up periods ranged from 0.834 to 0.966. In addition to the classical risk factors of DR, serum uric acid (SUA), low-density lipoprotein cholesterol (LDL-C), total cholesterol (TC), estimated glomerular filtration rate (eGFR), and triglyceride (TG) were identified as important and strong predictors for the disease. Compared with the clinical diagnosis of DR, the XGBoost model flagged risk an average of 2.895 years before the first diagnosis. Conclusion: The proposed model achieved high performance in predicting the risk of DR among patients with type 2 diabetes mellitus at each time point. This study established the potential of the XGBoost model to help clinicians identify high-risk patients and make decisions on type 2 diabetes management.
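The AUC used to rank the five models above has a simple rank-based definition: the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case. A minimal pure-Python sketch of that metric (not the authors' evaluation code):

```python
def roc_auc(y_true, scores):
    """Rank-based AUC: the probability that a random positive outranks a
    random negative, with ties counting half (Wilcoxon statistic)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy check: risk scores that mostly rank DR cases (label 1) above
# non-DR cases (label 0) yield an AUC close to 1.
print(roc_auc([0, 0, 1, 1, 0, 1], [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]))
```

In practice one would use a library implementation (e.g. scikit-learn's `roc_auc_score`), but the quadratic pairwise form above makes the metric's meaning explicit.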


Subject(s)
Diabetes Mellitus, Type 2, Diabetic Retinopathy, LDL Cholesterol, Cohort Studies, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/epidemiology, Diabetic Retinopathy/etiology, Humans, Machine Learning, Retrospective Studies, Uric Acid
16.
Magn Reson Imaging ; 92: 232-242, 2022 10.
Article in English | MEDLINE | ID: mdl-35842194

ABSTRACT

BACKGROUND: In monkey neuroimaging, particularly magnetic resonance imaging (MRI) studies, quick and accurate automatic macaque brain segmentation is essential. However, few processing and analysis tools are dedicated to automatic brain tissue segmentation and labeling of the macaque brain on a subject-specific basis. As a result, most currently adopted approaches directly apply existing tools designed for the human brain, and the operation steps that combine functional modules from a variety of processing and analysis software packages are inevitably complicated, cumbersome, time-consuming, and labor-intensive. NEW METHOD: In this study, we propose a quick and accurate automatic macaque brain segmentation method based on multi-atlas registration and a majority-vote algorithm. First, a single-atlas method based on S-HAMMER registers each template image of the reference atlas set (including brain-tissue-labeled images and brain-anatomical-structure-labeled images) to the preprocessed image to be segmented, yielding a deformation field with which the labeled images are spatially transformed. The resulting candidate segmentations are then combined by local weighted voting, and label fusion produces the final labeled images of the brain structure segmentation. RESULTS: Using high-SNR, high-spatial-resolution macaque brain images acquired on our 7T human MRI scanner, we constructed two brain templates for each individual macaque subject and segmented macaque brain tissues and anatomical structures with the single-atlas method. However, the single-atlas segmentation was not very accurate in some brain tissue areas; it took about 2 h and needed substantial manual correction. Automatic segmentation of macaque brain structure based on the multi-atlas method was reasonably successful, and segmentation accuracy was greatly improved without manual correction.
The proposed method also fit the tissue well at V1, with a smooth and continuous boundary. The Dice similarity of the multi-atlas method showed improvements of 3.24%, 4.24%, 2.55%, 2.85%, 3.05%, and 0.35% for image slices 63, 66, 70, 71, 99, and 100, respectively. The entire processing for constructing a single template map took ~40 min. CONCLUSIONS: This study proposed a concise and reliable automatic segmentation method for individual macaque brain structures based on multi-atlas registration. It may offer a valuable tool for brain and neuroscience research that uses the macaque as an experimental animal model.
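The label-fusion step described above (local weighted voting over warped atlas labels) can be sketched as follows. This is a minimal 2D illustration under stated assumptions: the atlas label maps are already registered into the target space, and the weight maps come from some registration-quality measure; with uniform weights the scheme reduces to a plain majority vote.

```python
import numpy as np

def weighted_label_fusion(warped_labels, weights, n_labels):
    """Fuse per-atlas label maps by locally weighted voting.

    warped_labels: (n_atlases, H, W) integer label maps, already warped
    into the target space. weights: (n_atlases, H, W) local similarity
    weights. Each voxel gets the label with the largest weighted vote.
    """
    votes = np.zeros((n_labels,) + warped_labels.shape[1:])
    for lab_map, w in zip(warped_labels, weights):
        for lab in range(n_labels):
            votes[lab] += w * (lab_map == lab)
    return votes.argmax(axis=0)

# Three toy atlases disagree on one voxel; with uniform weights the
# majority label wins.
atlases = np.array([[[1, 2]], [[1, 0]], [[1, 2]]])
uniform = np.ones_like(atlases, dtype=float)
fused = weighted_label_fusion(atlases, uniform, n_labels=3)
```

Extending this to 3D volumes only changes the array shapes; the voting logic is identical.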


Subject(s)
Macaca, Magnetic Resonance Imaging, Algorithms, Animals, Brain/diagnostic imaging, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Neuroimaging
17.
Front Oncol ; 12: 846589, 2022.
Article in English | MEDLINE | ID: mdl-36059655

ABSTRACT

Background: To investigate the value of computed tomography (CT)-based radiomics signatures combined with clinical and CT morphological features for identifying epidermal growth factor receptor (EGFR)-mutation subtypes in lung adenocarcinoma (LADC). Methods: From February 2012 to October 2019, 608 patients were confirmed with LADC and underwent chest CT scans. Among them, 307 (50.5%) patients were EGFR-mutation positive and 301 (49.5%) were EGFR-mutation negative. Of the EGFR-mutant patients, 114 (37.1%) had a 19del mutation, 155 (50.5%) had an L858R mutation, and 38 (12.4%) had other rare mutations. Three combined models incorporating radiomics signatures, clinical features, and CT morphological features were generated to predict EGFR-mutation status. Patients were randomly split into training and testing cohorts (80% and 20%, respectively). Model 1 predicted positive versus negative EGFR mutation, model 2 predicted 19del versus non-19del mutations, and model 3 predicted L858R versus non-L858R mutations. The receiver operating characteristic curve and the area under the curve (AUC) were used to evaluate performance. Results: Model 1 had AUC values of 0.969 and 0.886 in the training and validation cohorts, model 2 had 0.999 and 0.847, and model 3 had 0.984 and 0.806, respectively. Conclusion: Combined models incorporating radiomics signature, clinical, and CT morphological features may serve as an auxiliary tool to predict EGFR-mutation subtypes and contribute to individualized treatment for patients with LADC.
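A common way to combine radiomics signatures with clinical and morphological features, as above, is feature-level concatenation into a single classifier. The sketch below uses plain logistic regression trained by gradient descent on toy data; this is an assumption for illustration (the abstract does not state which classifier the three combined models used), and all feature names are hypothetical.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain logistic regression by gradient descent, as a stand-in for
    the combined classifier."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad = p - y                              # gradient of log loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

rng = np.random.default_rng(0)
radiomics = rng.normal(size=(100, 5))        # toy radiomics signature
clinical = rng.normal(size=(100, 2))         # toy clinical/morphological features
X = np.hstack([radiomics, clinical])         # feature-level combination
y = (X[:, 0] + X[:, 5] > 0).astype(float)    # toy "EGFR-positive" label
w, b = train_logistic(X, y)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
acc = (preds == y).mean()
```

Because the toy label depends on one radiomics and one clinical feature, neither block alone would separate the classes as well as the concatenation, which mirrors the motivation for combined models.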

18.
Article in English | MEDLINE | ID: mdl-35862326

ABSTRACT

Noninvasively and accurately predicting epidermal growth factor receptor (EGFR) mutation status is a clinically vital problem, and further identifying the area most suspicious for the EGFR mutation can guide biopsy to avoid false negatives. Deep learning methods based on computed tomography (CT) images may improve the noninvasive prediction of EGFR mutation status and potentially help clinicians guide biopsies visually. Inspired by the potential inherent links between EGFR mutation status and invasiveness information, we hypothesized that the predictive performance of a deep learning network can be improved by additionally exploiting invasiveness information. We therefore created a novel explainable transformer network for EGFR classification, named gated multiple instance learning transformer (GMILT), by integrating multi-instance learning and discriminative weakly supervised feature learning. Pathological invasiveness information was introduced into the multitask model as embeddings. GMILT was trained and validated on a total of 512 patients with adenocarcinoma and tested on three datasets (an internal test dataset, an external test dataset, and The Cancer Imaging Archive (TCIA) public dataset). The performance of GMILT (area under the curve (AUC) = 0.772 on the internal test dataset) exceeded that of previously published methods and radiomics-based methods (i.e., random forest and support vector machine) and attained good generalization ability (AUC = 0.856 on the TCIA test dataset and AUC = 0.756 on the external dataset). A diameter-based subgroup analysis further verified the efficiency of our model (most AUCs exceeded 0.772) for noninvasively predicting EGFR mutation status from CT images. In addition, because our method also identified the "core area" most suspicious for the EGFR mutation, it can potentially guide biopsies.
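The multi-instance aggregation underlying models of this kind can be illustrated with gated attention pooling: each instance (e.g. a CT patch embedding) receives an attention weight, the bag representation is the weighted sum, and high-weight instances point to the most suspicious region. This sketch follows the widely used gated-attention MIL formulation, not GMILT's actual architecture, and all parameters are random stand-ins.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gated_attention_pool(instances, V, U, w):
    """Gated attention pooling over a bag of instance embeddings.

    instances: (n_instances, d). The tanh and sigmoid branches are
    combined elementwise (the "gate"), scored per instance, and
    softmax-normalized so the attention weights sum to 1.
    """
    gate = np.tanh(instances @ V) * (1.0 / (1.0 + np.exp(-instances @ U)))
    scores = gate @ w                  # one scalar score per instance
    attn = softmax(scores)             # attention weights over the bag
    bag = attn @ instances             # weighted bag representation
    return bag, attn

rng = np.random.default_rng(1)
patches = rng.normal(size=(8, 16))     # 8 toy CT-patch embeddings
V = rng.normal(size=(16, 4))
U = rng.normal(size=(16, 4))
w = rng.normal(size=4)
bag_repr, attn = gated_attention_pool(patches, V, U, w)
```

In a trained model, the instances with the largest `attn` entries would be read out as the candidate "core area" to direct a biopsy.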

19.
Insights Imaging ; 13(1): 184, 2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36471022

ABSTRACT

OBJECTIVE: This study aimed to develop a deep learning (DL) model to improve the diagnostic performance of early ischemic change (EIC) detection and the Alberta Stroke Program Early CT Score (ASPECTS) in acute ischemic stroke (AIS). METHODS: Acute ischemic stroke patients were retrospectively enrolled from 5 hospitals. We proposed a deep learning model that simultaneously segments the infarct and estimates ASPECTS automatically from baseline CT. Segmentation and ASPECTS-scoring performance were evaluated using the Dice similarity coefficient (DSC) and receiver operating characteristic (ROC) analysis, respectively. Four raters participated in the multi-reader, multicenter (MRMC) experiment, performing region-based ASPECTS reading with and without model assistance. Finally, sensitivity, specificity, interpretation time, and interrater agreement were used to evaluate the raters' reading performance. RESULTS: In total, 1391 patients were enrolled for model development and 85 patients for external validation, with an onset-to-CT time of 176.4 ± 93.6 min and an NIHSS of 5 (IQR 2-10). The model achieved DSCs of 0.600 and 0.762 and AUCs of 0.876 (CI 0.846-0.907) and 0.729 (CI 0.679-0.779) in the internal and external validation sets, respectively. The assistance of the DL model improved the raters' average sensitivity and specificity from 0.254 (CI 0.22-0.26) and 0.896 (CI 0.884-0.907) to 0.333 (CI 0.301-0.345) and 0.915 (CI 0.904-0.926), respectively. The raters' average interpretation time was reduced from 219.0 to 175.7 s (p = 0.035), while interrater agreement increased from 0.741 to 0.980. CONCLUSIONS: With the assistance of the proposed DL model, radiologists performed better in detecting AIS lesions on NCCT.
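The Dice similarity coefficient (DSC) used above to score the infarct segmentation is the standard overlap measure 2|A∩B| / (|A| + |B|) between the predicted and reference masks. A minimal sketch:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|); eps guards against empty masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels, 4 shared
print(dice(a, b))  # 2*4 / (4+6) ≈ 0.8
```

The same formula applies to 3D volumes unchanged; a DSC of 1.0 means perfect overlap and 0.0 means none.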

20.
Ann Palliat Med ; 10(7): 7329-7339, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34263624

ABSTRACT

BACKGROUND: This study aimed to build a radiomics model with deep learning (DL) and human auditing and to examine its diagnostic value in differentiating between coronavirus disease 2019 (COVID-19) and community-acquired pneumonia (CAP). METHODS: Forty-three COVID-19 patients, whose diagnoses had been confirmed with reverse-transcriptase polymerase-chain-reaction (RT-PCR) tests, and 60 CAP patients, whose diagnoses had been confirmed with sputum cultures, were enrolled in this retrospective study. Candidate regions of interest (ROIs) on the computed tomography (CT) images of the 103 patients were determined using a DL-based segmentation model powered by transfer learning. These ROIs were manually audited and corrected by 3 radiologists (average experience 12 years; range 6-17 years) to check segmentation acceptance for the radiomics analysis. ROI-derived radiomics features were then extracted to build the classification model, processed with 4 feature-selection algorithms (L1 regularization, Lasso, Ridge, and Z test) and 4 classifiers: logistic regression (LR), multi-layer perceptron (MLP), support vector machine (SVM), and eXtreme Gradient Boosting (XGBoost). Receiver operating characteristic (ROC) curve analysis was conducted to evaluate model performance. RESULTS: Quantitative CT measurements derived from the human-audited segmentations showed that COVID-19 patients had significantly fewer infected lobes than patients in the CAP group {median [interquartile range (IQR)]: 4 [3, 4] vs. 4 [4, 5]; P=0.031}. The infected percentage (%) of the whole lung was significantly higher in the CAP group [6.40 (2.77, 11.11)] than in the COVID-19 group [1.83 (0.65, 4.42); P<0.001], and the same trend applied to each lobe except the superior lobe of the right lung [1.81 (0.09, 5.28) for COVID-19 vs. 1.32 (0.14, 7.02) for CAP; P=0.649].
Additionally, the highest proportion of infected lesions in the COVID-19 group was observed in the CT value range of (-470, -370) Hounsfield units (HU), versus (30, 60) HU in the CAP group. The radiomics model using corrected ROIs exhibited the highest area under the ROC curve (AUC) of 0.990 [95% confidence interval (CI): 0.962-1.000], using Lasso for feature selection and MLP for classification. CONCLUSIONS: The proposed radiomics model based on human-audited segmentation made accurate differential diagnoses of COVID-19 and CAP. Quantitative CT measurements derived from DL could potentially serve as effective biomarkers in current clinical practice.
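The per-HU-range quantification reported above (e.g. the proportion of lesion voxels in the (-470, -370) HU band) can be sketched as a masked histogram fraction. This is a hypothetical helper under stated assumptions; the paper's exact binning and lobe handling may differ.

```python
import numpy as np

def hu_range_fraction(ct_hu, lesion_mask, lo, hi):
    """Fraction of lesion voxels whose attenuation falls in [lo, hi) HU."""
    vals = ct_hu[lesion_mask]
    if vals.size == 0:
        return 0.0
    return float(np.mean((vals >= lo) & (vals < hi)))

rng = np.random.default_rng(2)
ct = rng.integers(-1000, 100, size=(64, 64))       # toy HU slice
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                           # toy lesion region
frac_ggo = hu_range_fraction(ct, mask, -470, -370)  # ground-glass-like band
frac_solid = hu_range_fraction(ct, mask, 30, 60)    # consolidation-like band
```

Comparing such fractions between patient groups is what turns the DL segmentation into the quantitative biomarkers discussed in the conclusion.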


Subject(s)
COVID-19, Deep Learning, Computers, Humans, Retrospective Studies, SARS-CoV-2