Results 1 - 20 of 26
1.
Abdom Radiol (NY) ; 2024 May 23.
Article in English | MEDLINE | ID: mdl-38782785

ABSTRACT

PURPOSE: Gain-of-function mutations in CTNNB1, the gene encoding β-catenin, are observed in 25-30% of hepatocellular carcinomas (HCCs). Recent studies have shown β-catenin activation to have distinct roles in HCC susceptibility to mTOR inhibitors and resistance to immunotherapy. Our goal was to develop and test a computational imaging-based model to non-invasively assess β-catenin activation in HCC, since liver biopsies are often not done due to the risk of complications. METHODS: This IRB-approved retrospective study included 134 subjects with pathologically proven HCC and available β-catenin activation status, who also had either CT or MR imaging of the liver performed within 1 year of histological assessment. For qualitative descriptors, experienced radiologists assessed the presence of imaging features listed in LI-RADS v2018. For quantitative analysis, a single biopsy-proven tumor underwent 3D segmentation, and radiomics features were extracted. We developed prediction models to assess β-catenin activation in HCC using both qualitative and quantitative descriptors. RESULTS: There were 41 cases (31%) with β-catenin mutation and 93 cases (69%) without. The model's AUC was 0.70 (95% CI 0.60, 0.79) using radiomics features and 0.64 (0.52, 0.74; p = 0.468) using qualitative descriptors. However, when combined, the AUC increased to 0.88 (0.80, 0.92; p = 0.009). Among the LI-RADS descriptors, the presence of a nodule-in-nodule showed a significant association with β-catenin mutations (p = 0.015). Additionally, 88 radiomics features exhibited a significant association (p < 0.05) with β-catenin mutations. CONCLUSION: The combination of LI-RADS descriptors and CT/MRI-derived radiomics determines β-catenin activation status in HCC with high confidence, making precision medicine a possibility.
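
As an illustration only (not the authors' code), the sketch below shows how radiomic features and binary LI-RADS-style descriptors could be combined in a single logistic-regression model and compared by AUC; all feature values are synthetic placeholders.

```python
# Minimal sketch (assumption): comparing radiomics-only, descriptors-only, and
# combined logistic-regression models by hold-out AUC. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 134                                   # cohort size reported in the abstract
y = rng.integers(0, 2, n)                 # 1 = beta-catenin activated, 0 = not (placeholder)
X_radiomics = rng.normal(size=(n, 100))   # placeholder 3D radiomics features
X_lirads = rng.integers(0, 2, (n, 10))    # placeholder binary LI-RADS descriptors

def auc_for(X, y):
    """Fit a standardized logistic model and report hold-out AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

print("radiomics only:", auc_for(X_radiomics, y))
print("LI-RADS only:  ", auc_for(X_lirads.astype(float), y))
print("combined:      ", auc_for(np.hstack([X_radiomics, X_lirads]), y))
```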

2.
Breast Cancer Res ; 26(1): 82, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38790005

ABSTRACT

BACKGROUND: Patients with a Breast Imaging Reporting and Data System (BI-RADS) 4 mammogram are currently recommended for biopsy. However, 70-80% of these biopsies are negative/benign. In this study, we developed a deep learning classification algorithm on mammogram images to classify BI-RADS 4 suspicious lesions, aiming to reduce unnecessary breast biopsies. MATERIALS AND METHODS: This retrospective study included 847 patients with a BI-RADS 4 breast lesion who underwent biopsy at a single institution, including 200 invasive breast cancers, 200 ductal carcinoma in-situ (DCIS), 198 pure atypias, 194 benign, and 55 atypias upstaged to malignancy after excisional biopsy. We employed convolutional neural networks to perform 4 binary classification tasks: (I) benign vs. all atypia + invasive + DCIS, aiming to identify the benign cases for whom biopsy may be avoided; (II) benign + pure atypia vs. atypia-upstaged + invasive + DCIS, aiming to reduce excision of atypia that is not upgraded to cancer at surgery; (III) benign vs. each of the other 3 classes individually (atypia, DCIS, invasive), aiming for a precise diagnosis; and (IV) pure atypia vs. atypia-upstaged, aiming to reduce unnecessary excisional biopsies on atypia patients. RESULTS: A 95% sensitivity for the "higher stage disease" class was ensured for all tasks. The specificity was 33% in Task I and 25% in Task II. In Task III, the specificity was 30% (vs. atypia), 30% (vs. DCIS), and 46% (vs. invasive tumor). In Task IV, the specificity was 35%. The AUC values for the 4 tasks were 0.72, 0.67, 0.70/0.73/0.72, and 0.67, respectively. CONCLUSION: Deep learning of digital mammograms containing BI-RADS 4 findings can identify lesions that may not need breast biopsy, leading to a potential reduction of unnecessary procedures and the attendant costs and stress.
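
A minimal sketch of the kind of thresholding the abstract implies: fix the operating point at ≥95% sensitivity for the "higher stage disease" class and read off the resulting specificity. The scores below are synthetic stand-ins for CNN outputs.

```python
# Minimal sketch (assumption, not the authors' pipeline): choose the probability
# threshold that keeps sensitivity >= 95%, then report specificity at that point.
import numpy as np
from sklearn.metrics import roc_curve

def specificity_at_sensitivity(y_true, y_score, target_sens=0.95):
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    ok = tpr >= target_sens                 # operating points meeting the sensitivity floor
    idx = np.argmax(ok)                     # first (highest) threshold that satisfies it
    return 1.0 - fpr[idx], thresholds[idx]

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)                               # 1 = higher stage disease
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, 500), 0, 1)  # synthetic CNN scores
spec, thr = specificity_at_sensitivity(y_true, y_score)
print(f"specificity at 95% sensitivity: {spec:.2f} (threshold {thr:.2f})")
```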


Subjects
Breast Neoplasms , Deep Learning , Mammography , Humans , Female , Mammography/methods , Breast Neoplasms/pathology , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/diagnosis , Middle Aged , Retrospective Studies , Biopsy , Aged , Adult , Noninfiltrating Intraductal Carcinoma/diagnostic imaging , Noninfiltrating Intraductal Carcinoma/pathology , Noninfiltrating Intraductal Carcinoma/diagnosis , Unnecessary Procedures/statistics & numerical data , Breast/pathology , Breast/diagnostic imaging
3.
Radiology ; 310(1): e230269, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38259203

ABSTRACT

Background: Background parenchymal enhancement (BPE) at dynamic contrast-enhanced (DCE) MRI of cancer-free breasts increases the risk of developing breast cancer; implications of quantitative BPE in ipsilateral breasts with breast cancer are largely unexplored. Purpose: To determine whether quantitative BPE measurements in one or both breasts could be used to predict recurrence risk in women with breast cancer, using the Oncotype DX recurrence score as the reference standard. Materials and Methods: This HIPAA-compliant retrospective single-institution study included women diagnosed with breast cancer between January 2007 and January 2012 (development set) and between January 2012 and January 2017 (internal test set). Quantitative BPE was automatically computed using an in-house-developed computer algorithm in both breasts. Univariable logistic regression was used to examine the association of BPE with Oncotype DX recurrence score binarized into high-risk (recurrence score >25) and low- or intermediate-risk (recurrence score ≤25) categories. Models including BPE measures were assessed for their ability to distinguish patients with high risk versus those with low or intermediate risk and the actual recurrence outcome. Results: The development set included 127 women (mean age, 58 years ± 10.2 [SD]; 33 with high risk and 94 with low or intermediate risk) with an actual local or distant recurrence rate of 15.7% (20 of 127) at a minimum 10 years of follow-up. The test set included 60 women (mean age, 57.8 years ± 11.6; 16 with high risk and 44 with low or intermediate risk). BPE measurements quantified in both breasts were associated with increased odds of a high-risk Oncotype DX recurrence score (odds ratio range, 1.27-1.66 [95% CI: 1.02, 2.56]; P < .001 to P = .04). Measures of BPE combined with tumor radiomics helped distinguish patients with a high-risk Oncotype DX recurrence score from those with a low- or intermediate-risk score, with an area under the receiver operating characteristic curve of 0.94 in the development set and 0.79 in the test set. For the combined models, the negative predictive values were 0.97 and 0.93 in predicting actual distant recurrence and local recurrence, respectively. Conclusion: Ipsilateral and contralateral DCE MRI measures of BPE quantified in patients with breast cancer can help distinguish patients with high recurrence risk from those with low or intermediate recurrence risk, similar to Oncotype DX recurrence score. © RSNA, 2024. Supplemental material is available for this article. See also the editorial by Zhou and Rahbar in this issue.
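
A minimal sketch of the univariable logistic-regression step described above, with synthetic data: one quantitative BPE measure regressed against a binarized Oncotype DX recurrence score and reported as an odds ratio with 95% CI (statsmodels assumed available).

```python
# Minimal sketch (assumption): univariable logistic regression of one BPE measure
# against high-risk (score > 25) vs. low/intermediate-risk Oncotype DX status.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
bpe = rng.normal(0.2, 0.05, 127)                      # placeholder BPE fraction per patient
high_risk = (rng.random(127) < 1 / (1 + np.exp(-(bpe - 0.2) * 40))).astype(int)

X = sm.add_constant(bpe)                              # intercept + single predictor
fit = sm.Logit(high_risk, X).fit(disp=0)
or_est = np.exp(fit.params[1])                        # odds ratio for the BPE term
or_ci = np.exp(fit.conf_int()[1])                     # 95% CI on the odds-ratio scale
print(f"OR = {or_est:.2f}, 95% CI = ({or_ci[0]:.2f}, {or_ci[1]:.2f}), p = {fit.pvalues[1]:.3f}")
```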


Subjects
Breast Neoplasms , Humans , Female , Middle Aged , Breast Neoplasms/diagnostic imaging , Retrospective Studies , Breast/diagnostic imaging , Risk Factors , Magnetic Resonance Imaging
4.
Article in English | MEDLINE | ID: mdl-37885672

ABSTRACT

Curriculum learning is a learning method that trains models in a meaningful order, from easier to harder samples. A key here is to devise automatic and objective difficulty measures of samples. In the medical domain, previous work applied domain knowledge from human experts to qualitatively assess the classification difficulty of medical images to guide curriculum learning, which requires extra annotation effort, relies on subjective human experience, and may introduce bias. In this work, we propose a new automated curriculum learning technique using the variance of gradients (VoG) to compute an objective difficulty measure of samples and evaluated its effects on elbow fracture classification from X-ray images. Specifically, we used VoG as a metric to rank each sample in terms of classification difficulty, where high VoG scores indicate more difficult cases for classification, to guide the curriculum training process. We compared the proposed technique to a baseline (without curriculum learning), a previous method that used human annotations of classification difficulty, and anti-curriculum learning. Our experimental results showed comparable or higher performance on the binary and multi-class bone fracture classification tasks.
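
A minimal sketch (assumption, not the published implementation) of a VoG-based curriculum: per-sample variance of input gradients across training checkpoints is used as the difficulty score, and samples are ordered from low to high VoG.

```python
# Minimal sketch: rank samples by variance of gradients (VoG) and order training
# from "easy" (low VoG) to "hard" (high VoG). Gradients below are synthetic.
import numpy as np

def vog_scores(grad_snapshots):
    """grad_snapshots: array (n_checkpoints, n_samples, H, W) of input gradients."""
    # variance over checkpoints at each pixel, then mean over pixels per sample
    return grad_snapshots.var(axis=0).mean(axis=(1, 2))

rng = np.random.default_rng(3)
grads = rng.normal(size=(5, 1000, 32, 32))      # placeholder gradients for 1000 X-rays
scores = vog_scores(grads)
curriculum_order = np.argsort(scores)           # low VoG first = easier samples first
print("first 10 training indices:", curriculum_order[:10])
```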

5.
Neurosurg Focus ; 54(6): E14, 2023 06.
Article in English | MEDLINE | ID: mdl-37552699

ABSTRACT

OBJECTIVE: An estimated 1.5 million people die every year worldwide from traumatic brain injury (TBI). Physicians are relatively poor at predicting long-term outcomes early in patients with severe TBI. Machine learning (ML) has shown promise at improving prediction models across a variety of neurological diseases. The authors sought to explore the following: 1) how various ML models performed compared to standard logistic regression techniques, and 2) whether properly calibrated ML models could accurately predict outcomes up to 2 years posttrauma. METHODS: A secondary analysis of a prospectively collected database of patients with severe TBI treated at a single level 1 trauma center between November 2002 and December 2018 was performed. Neurological outcomes were assessed at 3, 6, 12, and 24 months postinjury with the Glasgow Outcome Scale. The authors used ML models including support vector machine, neural network, decision tree, and naïve Bayes models to predict outcome across all 4 time points by using clinical information available on admission, and they compared performance to a logistic regression model. The authors attempted to predict unfavorable versus favorable outcomes (Glasgow Outcome Scale scores of 1-3 vs 4-5), as well as mortality. Model performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC) with 95% confidence interval and balanced accuracy. RESULTS: Of the 599 patients in the database, the authors included 501, 537, 469, and 395 patients at 3, 6, 12, and 24 months posttrauma, respectively. Across all time points, the AUCs ranged from 0.71 to 0.85 for mortality and from 0.62 to 0.82 for unfavorable outcomes with various modeling strategies. Decision tree models performed worse than all other modeling approaches for multiple time points regarding both unfavorable outcomes and mortality. There were no statistically significant differences between any other models. After proper calibration, the models had little variation (0.02-0.05) across various time points. CONCLUSIONS: The ML models tested herein performed with equivalent success compared with logistic regression techniques for prognostication in TBI. The TBI prognostication models could predict outcomes beyond 6 months, out to 2 years postinjury.
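
A minimal sketch of the model comparison described above, using the classifier families named in the abstract with cross-validated AUC on synthetic admission data; hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumption): compare SVM, neural network, decision tree, and
# naive Bayes models to logistic regression by 5-fold cross-validated AUC.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 12))          # placeholder admission variables
y = rng.integers(0, 2, 500)             # 1 = unfavorable outcome (GOS 1-3), placeholder

models = {
    "logistic":      LogisticRegression(max_iter=1000),
    "svm":           SVC(probability=True),
    "neural net":    MLPClassifier(max_iter=2000),
    "decision tree": DecisionTreeClassifier(max_depth=4),
    "naive Bayes":   GaussianNB(),
}
for name, clf in models.items():
    auc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5, scoring="roc_auc")
    print(f"{name:13s} AUC = {auc.mean():.2f} ± {auc.std():.2f}")
```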


Subjects
Traumatic Brain Injuries , Brain Injuries , Humans , Bayes Theorem , Traumatic Brain Injuries/diagnosis , Traumatic Brain Injuries/therapy , Logistic Models , Machine Learning , Prognosis
6.
Resuscitation ; 191: 109894, 2023 10.
Article in English | MEDLINE | ID: mdl-37414243

ABSTRACT

INTRODUCTION: Early identification of brain injury patterns in computerized tomography (CT) imaging is crucial for post-cardiac arrest prognostication. The lack of interpretability of machine learning predictions reduces clinicians' trust and prevents translation to clinical practice. We aimed to identify CT imaging patterns associated with prognosis using interpretable machine learning. METHODS: In this IRB-approved retrospective study, we included consecutive comatose adult patients hospitalized at a single academic medical center after resuscitation from in- and out-of-hospital cardiac arrest between August 2011 and August 2019 who underwent unenhanced CT imaging of the brain within 24 hours of their arrest. We decomposed the CT images into subspaces to identify interpretable and informative patterns of injury, and developed machine learning models to predict patient outcomes (i.e., survival and awakening status) using the identified imaging patterns. Practicing physicians visually examined the imaging patterns to assess clinical relevance. We evaluated the machine learning models using an 80%-20% random data split and reported AUC values to measure model performance. RESULTS: We included 1284 subjects, of whom 35% awakened from coma and 34% survived to hospital discharge. Our expert physicians were able to visualize the decomposed image patterns and identify those believed to be clinically relevant in multiple brain locations. For the machine learning models, the AUC was 0.710 ± 0.012 for predicting survival and 0.702 ± 0.053 for predicting awakening. DISCUSSION: We developed an interpretable method to identify patterns of early post-cardiac arrest brain injury on CT imaging and showed that these imaging patterns are predictive of patient outcomes (i.e., survival and awakening status).
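
A minimal sketch with synthetic data, using PCA as a stand-in for the (unspecified) subspace decomposition: images are decomposed into a small set of patterns, and the per-patient pattern loadings feed an outcome classifier.

```python
# Minimal sketch (assumption): decompose flattened CT data into image patterns
# and predict survival from the per-patient loadings. PCA stands in for the
# decomposition the authors used; all data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
images = rng.normal(size=(1284, 64 * 64))       # placeholder flattened CT slices
survived = rng.integers(0, 2, 1284)             # placeholder survival labels

patterns = PCA(n_components=20, random_state=0)  # "subspace" patterns for expert review
loadings = patterns.fit_transform(images)        # per-patient weights on each pattern

X_tr, X_te, y_tr, y_te = train_test_split(loadings, survived, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
# patterns.components_.reshape(-1, 64, 64) can be displayed for visual inspection.
```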


Subjects
Brain Injuries , Heart Arrest , Out-of-Hospital Cardiac Arrest , Adult , Humans , Retrospective Studies , Heart Arrest/complications , Heart Arrest/therapy , Prognosis , Machine Learning , Coma/complications , Out-of-Hospital Cardiac Arrest/diagnostic imaging , Out-of-Hospital Cardiac Arrest/therapy , Out-of-Hospital Cardiac Arrest/complications
7.
J Breast Imaging ; 5(2): 148-158, 2023 Mar 20.
Article in English | MEDLINE | ID: mdl-38416936

ABSTRACT

OBJECTIVE: Evaluate lesion visibility and radiologist confidence during contrast-enhanced mammography (CEM)-guided biopsy. METHODS: Women with BI-RADS ≥4A enhancing breast lesions were prospectively recruited for 9-g vacuum-assisted CEM-guided biopsy. Breast density, background parenchymal enhancement (BPE), lesion characteristics (enhancement and conspicuity), radiologist confidence (scale 1-5), and acquisition times were collected. Signal intensities in specimens were analyzed. Patient surveys were collected. RESULTS: A cohort of 28 women aged 40-81 years (average 57) had 28 enhancing lesions (7/28, 25% malignant). Breast tissue was scattered (10/28, 36%) or heterogeneously dense (18/28, 64%) with minimal (12/28, 43%), mild (7/28, 25%), or moderate (9/28, 32%) BPE on CEM. Twelve non-mass enhancements, 11 masses, 3 architectural distortions, and 2 calcification groups demonstrated weak (12/28, 43%), moderate (14/28, 50%), or strong (2/28, 7%) enhancement. Specimen radiography demonstrated lesion enhancement in 27/28 (96%). Radiologists reported complete lesion removal on specimen radiography in 8/28 (29%). Average time from contrast injection to specimen radiography was 18 minutes (SD = 5) and, to post-procedure mammogram (PPM), 34 minutes (SD = 10). Contrast-enhanced mammography PPM was performed in 27/28 cases; 13/19 (68%) of incompletely removed lesions on specimen radiography showed residual enhancement; 6/19 (32%) did not. Across all time points, average confidence was 2.2 (SD = 1.2). Signal intensities of enhancing lesions were similar to iodine. Patients had an overall positive assessment. CONCLUSION: Lesion enhancement persisted through PPM and was visible on low energy specimen radiography, with an average "confident" score. Contrast-enhanced mammography-guided breast biopsy is easily implemented clinically. Its availability will encourage adoption of CEM.


Subjects
Contrast Media , Mammography , Female , Humans , Mammography/methods , Breast/diagnostic imaging , Needle Biopsy/methods , Image-Guided Biopsy
8.
Artif Intell Med ; 134: 102424, 2022 12.
Article in English | MEDLINE | ID: mdl-36462894

ABSTRACT

Radiological images have shown promise for patient prognostication. Deep learning provides a powerful approach for in-depth analysis of imaging data and integration of multi-modal data for modeling. In this work, we propose SurvivalCNN, a deep learning structure for cancer patient survival prediction using CT imaging data and non-imaging clinical data. In SurvivalCNN, a supervised convolutional neural network is designed to extract volumetric image features, and radiomics features are also integrated to provide potentially different imaging information. Within SurvivalCNN, a novel multi-thread multi-layer perceptron module, namely SurvivalMLP, is proposed to perform survival prediction from censored survival data. We evaluate the proposed SurvivalCNN framework on a large clinical dataset of 1061 gastric cancer patients for both overall survival (OS) and progression-free survival (PFS) prediction. We compare SurvivalCNN to three different modeling methods and examine the effects of various sets of data/features when used individually or in combination. With five-fold cross-validation, our experimental results show that SurvivalCNN achieves an average concordance index of 0.849 and 0.783 for predicting OS and PFS, respectively, outperforming the compared state-of-the-art methods and the clinical model. After further validation, the proposed SurvivalCNN model may serve as a clinical tool to improve gastric cancer patient survival estimation and prognosis analysis.
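
A minimal sketch (not the SurvivalCNN/SurvivalMLP code) of how a survival head can be trained on censored data with a Cox-style partial-likelihood loss in PyTorch; feature dimensions and follow-up times are synthetic.

```python
# Minimal sketch (assumption): a Cox partial-likelihood loss for an MLP survival
# head operating on pooled imaging/clinical features, with censoring handled
# through the event indicator.
import torch
import torch.nn as nn

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood. risk: (n,) model outputs,
    time: (n,) follow-up times, event: (n,) 1 = event observed, 0 = censored."""
    order = torch.argsort(time, descending=True)       # sort so risk sets are cumulative
    risk, event = risk[order], event[order]
    log_cum = torch.logcumsumexp(risk, dim=0)          # log of each risk-set denominator
    return -((risk - log_cum) * event).sum() / event.sum().clamp(min=1)

head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))  # toy survival head
feats = torch.randn(32, 128)                            # placeholder CNN + radiomics features
time = torch.rand(32) * 60                              # months of follow-up (synthetic)
event = torch.randint(0, 2, (32,)).float()

loss = cox_ph_loss(head(feats).squeeze(-1), time, event)
loss.backward()
print("loss:", loss.item())
```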


Subjects
Deep Learning , Radiology , Stomach Neoplasms , Humans , Stomach Neoplasms/diagnostic imaging , Research , Neural Networks (Computer)
9.
Artif Intell Med ; 132: 102366, 2022 10.
Article in English | MEDLINE | ID: mdl-36207073

ABSTRACT

Deep learning on a limited number of labels/annotations is a challenging task for medical imaging analysis. In this paper, we propose a novel self-training segmentation pipeline (Self-Seg in short) for segmenting skeletal muscle in CT images. Self-Seg starts with a small set of annotated images and then iteratively learns from unlabeled datasets to gradually improve segmentation performance. Self-Seg follows a semi-supervised teacher-student learning scheme, and there are two contributions: 1) we construct a self-attention UNet to improve segmentation over the classical UNet model, and 2) we implement an automatic label grader to implicitly incorporate medical knowledge for quality assurance of pseudo labels, from which good-quality pseudo labels are identified to enhance learning of the segmentation model. We perform extensive experiments on three CT image datasets and show promising results in five evaluation settings; we also compared our method to several baseline and related methods and achieved superior performance.
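
A minimal sketch of one teacher-student self-training round with synthetic data; a simple confidence score stands in for the authors' knowledge-based label grader, and tiny convolutional stacks stand in for the UNet models.

```python
# Minimal sketch (assumption): teacher predicts pseudo masks for unlabeled CTs,
# a toy "grader" keeps the more confident cases, and the student takes one
# training step on the retained pseudo labels.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))
student = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))

unlabeled = torch.randn(20, 1, 64, 64)            # placeholder CT slices

with torch.no_grad():
    probs = torch.sigmoid(teacher(unlabeled))     # teacher's pseudo-probabilities
pseudo_masks = (probs > 0.5).float()

# Toy "label grader": retain cases whose average pixel-wise confidence is high.
confidence = torch.maximum(probs, 1 - probs).mean(dim=(1, 2, 3))
keep = confidence > confidence.median()           # keep the more confident half
print(f"kept {int(keep.sum())} / {len(keep)} pseudo-labeled cases")

# One illustrative student update on the retained pseudo labels.
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss = nn.functional.binary_cross_entropy_with_logits(student(unlabeled[keep]), pseudo_masks[keep])
loss.backward()
opt.step()
```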


Subjects
Skeletal Muscle , Supervised Machine Learning , Humans , Computer-Assisted Image Processing , Skeletal Muscle/diagnostic imaging , Students
10.
Surg Neurol Int ; 13: 241, 2022.
Article in English | MEDLINE | ID: mdl-35855176

ABSTRACT

Background: Posttraumatic seizures (PTSs) are a major source of disability after traumatic brain injury (TBI). The Brain Trauma Foundation Guidelines recommend prophylactic anti-epileptics (AEDs) for early PTS in severe TBI, but high-quality evidence is lacking in mild TBI. Methods: To determine the benefit of administering prophylactic AEDs, we performed a prospective, multicenter study evaluating consecutive patients who presented to a Level 1 trauma center from January 2017 to December 2020. We included all patients with mild TBI, defined as Glasgow Coma Scale (GCS) 13-15 and a positive head computed tomography (CT) scan. Patients were excluded for previous seizure history, current AED use, or a neurosurgical procedure. Patients were given a prophylactic 7-day course of AEDs on a week-on versus week-off basis and followed with in-person clinic visits, in-hospital evaluation, or a validated phone questionnaire. Results: Four hundred ninety patients were enrolled; 349 (71.2%) had follow-up, and 139 (39.8%) were given prophylactic AEDs. There was no difference between seizure rates for the prophylactic AED group (0.7%) and those without (2.9%; P = 0.25). Patients who had a PTS were on average older (81.4 years) than patients without a seizure (64.8 years; P = 0.02). The seizure rate increased linearly by age group: <60 years old (0%); 60-70 years old (1.7%); 70-80 years old (2.3%); and >80 years old (4.6%). Conclusion: Prophylactic AEDs did not provide a benefit for PTS reduction in mild TBI patients with a positive head CT scan.

11.
Radiology ; 304(2): 385-394, 2022 08.
Article in English | MEDLINE | ID: mdl-35471108

ABSTRACT

Background: After severe traumatic brain injury (sTBI), physicians use long-term prognostication to guide acute clinical care yet struggle to predict outcomes in comatose patients. Purpose: To develop and evaluate a prognostic model combining deep learning of head CT scans and clinical information to predict long-term outcomes after sTBI. Materials and Methods: This was a retrospective analysis of two prospectively collected databases. The model-building set included 537 patients (mean age, 40 years ± 17 [SD]; 422 men) from one institution from November 2002 to December 2018. Transfer learning and curriculum learning were applied to a convolutional neural network using admission head CT to predict mortality and unfavorable outcomes (Glasgow Outcome Scale scores 1-3) at 6 months. This was combined with clinical input for a holistic fusion model. The models were evaluated using an independent internal test set and an external cohort of 220 patients with sTBI (mean age, 39 years ± 17; 166 men) from 18 institutions in the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) study from February 2014 to April 2018. The models were compared with the International Mission on Prognosis and Analysis of Clinical Trials in TBI (IMPACT) model and the predictions of three neurosurgeons. Area under the receiver operating characteristic curve (AUC) was used as the main model performance metric. Results: The fusion model had higher AUCs than did the IMPACT model in the prediction of mortality (AUC, 0.92 [95% CI: 0.86, 0.97] vs 0.80 [95% CI: 0.71, 0.88]; P < .001) and unfavorable outcomes (AUC, 0.88 [95% CI: 0.82, 0.94] vs 0.82 [95% CI: 0.75, 0.90]; P = .04) on the internal data set. For external TRACK-TBI testing, there was no evidence of a significant difference in the performance of any model compared with the IMPACT model (AUC, 0.83; 95% CI: 0.77, 0.90) in the prediction of mortality. The imaging model (AUC, 0.73; 95% CI: 0.66, 0.81; P = .02) and the fusion model (AUC, 0.68; 95% CI: 0.60, 0.76; P = .02) underperformed compared with the IMPACT model (AUC, 0.83; 95% CI: 0.77, 0.89) in the prediction of unfavorable outcomes. The fusion model outperformed the predictions of the neurosurgeons. Conclusion: A deep learning model of head CT and clinical information can be used to predict 6-month outcomes after severe traumatic brain injury. © RSNA, 2022. Online supplemental material is available for this article. See also the editorial by Haller in this issue.
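
A minimal sketch (assumption, not the published architecture) of the fusion idea: CNN features from the admission head CT are concatenated with a clinical vector before the outcome head.

```python
# Minimal sketch: a toy imaging branch plus a clinical vector feeding a shared
# "fusion" head that outputs a logit for 6-month mortality / unfavorable outcome.
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self, n_clinical=8):
        super().__init__()
        self.image_branch = nn.Sequential(          # toy stand-in for the transferred CNN
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(8 + n_clinical, 32), nn.ReLU(),
            nn.Linear(32, 1))                        # outcome logit

    def forward(self, ct_volume, clinical):
        img_feat = self.image_branch(ct_volume)
        return self.head(torch.cat([img_feat, clinical], dim=1))

model = FusionModel()
ct = torch.randn(2, 1, 32, 64, 64)                   # placeholder head CT volumes
clin = torch.randn(2, 8)                             # placeholder admission variables
print(torch.sigmoid(model(ct, clin)).squeeze(-1))    # predicted outcome probabilities
```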


Subjects
Traumatic Brain Injuries , Deep Learning , Adult , Traumatic Brain Injuries/diagnostic imaging , Traumatic Brain Injuries/surgery , Glasgow Coma Scale , Humans , Male , Prognosis , Retrospective Studies , X-Ray Computed Tomography
12.
BMC Med Imaging ; 22(1): 15, 2022 01 30.
Article in English | MEDLINE | ID: mdl-35094674

ABSTRACT

BACKGROUND: Renal cell carcinoma (RCC) is a heterogeneous group of kidney cancers. Renal capsule invasion is an essential factor for RCC staging. We aimed to develop radiomics models from CT images for the preoperative prediction of capsule invasion in RCC patients. METHODS: This retrospective study included patients with RCC admitted to the Chongqing University Cancer Hospital (01/2011-05/2019). We built a radiomics model to distinguish patients grouped as capsule invasion versus non-capsule invasion, using preoperative CT scans. We evaluated the effects of three imaging phases, i.e., unenhanced phases (UP), corticomedullary phases (CMP), and nephrographic phases (NP). Five different machine learning classifiers were compared. The effects of the tumor and tumor-margin regions were also compared. Five-fold cross-validation and the area under the receiver operating characteristic curve (AUC) were used to evaluate model performance. RESULTS: This study included 126 RCC patients, including 46 (36.5%) with capsule invasion. CMP exhibited the highest AUC (AUC = 0.81) compared to UP and NP when using the forward neural network (FNN) classifier. The AUCs using features extracted from the tumor region were generally higher than those of the marginal regions in the CMP (0.81 vs. 0.73) and NP phases (AUC = 0.77 vs. 0.76). For UP, the best result was obtained from the marginal region (AUC = 0.80). The robustness analysis on the UP, CMP, and NP achieved AUCs of 0.76, 0.79, and 0.77, respectively. CONCLUSIONS: Radiomics features in renal CT imaging are associated with renal capsule invasion in RCC patients. Further evaluation of the models is warranted.


Subjects
Renal Cell Carcinoma/diagnostic imaging , Renal Cell Carcinoma/pathology , Computer-Assisted Image Processing/methods , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , X-Ray Computed Tomography/methods , Adult , Aged , Aged 80 and over , Differential Diagnosis , Female , Humans , Machine Learning , Male , Middle Aged , Neoplasm Invasiveness , Preoperative Period , Retrospective Studies
13.
Resuscitation ; 172: 17-23, 2022 03.
Article in English | MEDLINE | ID: mdl-35041875

ABSTRACT

INTRODUCTION: Guidelines recommend use of computerized tomography (CT) and electroencephalography (EEG) in post-arrest prognostication. Strong associations between CT and EEG might obviate the need to acquire both modalities. We quantified these associations via deep learning. METHODS: We performed a single-center, retrospective study including comatose patients hospitalized after cardiac arrest. We extracted brain CT DICOMs, resized and registered each to a standard anatomical atlas, performed skull stripping and windowed images to optimize contrast of the gray-white junction. We classified initial EEG as generalized suppression, other highly pathological findings or benign activity. We extracted clinical information available on presentation from our prospective registry. We trained three machine learning (ML) models to predict EEG from clinical covariates. We used three state-of-the-art approaches to build multi-headed deep learning models using similar model architectures. Finally, we combined the best performing clinical and imaging models. We evaluated discrimination in test sets. RESULTS: We included 500 patients, of whom 218 (44%) had benign EEG findings, 135 (27%) showed generalized suppression and 147 (29%) had other highly pathological findings that were most commonly (93%) burst suppression with identical bursts. Clinical ML models had moderate discrimination (test set AUCs 0.73-0.80). Image-based deep learning performed worse (test set AUCs 0.51-0.69), particularly discriminating benign from highly pathological findings. Adding image-based deep learning to clinical models improved prediction of generalized suppression due to accurate detection of severe cerebral edema. DISCUSSION: CT and EEG provide complementary information about post-arrest brain injury. Our results do not support selective acquisition of only one of these modalities, except in the most severely injured patients.


Subjects
Deep Learning , Brain/diagnostic imaging , Electroencephalography/methods , Humans , Neuroimaging , Prognosis , Retrospective Studies
14.
Pattern Recognit ; 132, 2022 Dec.
Article in English | MEDLINE | ID: mdl-37089470

ABSTRACT

Information in digital mammogram images has been shown to be associated with the risk of developing breast cancer. Longitudinal breast cancer screening mammogram examinations may carry spatiotemporal information that can enhance breast cancer risk prediction. No deep learning models have been designed to capture such spatiotemporal information over multiple examinations to predict the risk. In this study, we propose a novel deep learning structure, LRP-NET, to capture the spatiotemporal changes of breast tissue over multiple negative/benign screening mammogram examinations to predict near-term breast cancer risk in a case-control setting. Specifically, LRP-NET is designed based on clinical knowledge to capture the imaging changes of bilateral breast tissue over four sequential mammogram examinations. We evaluate our proposed model with two ablation studies and compare it to three models/settings, including 1) a "loose" model without explicitly capturing the spatiotemporal changes over longitudinal examinations, 2) LRP-NET but using a varying number (i.e., 1 and 3) of sequential examinations, and 3) a previous model that uses only a single mammogram examination. On a case-control cohort of 200 patients, each with four examinations, our experiments on a total of 3200 images show that the LRP-NET model outperforms the compared models/settings.
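
A minimal sketch (assumption, not LRP-NET itself) of one way to model longitudinal mammograms: a shared per-exam encoder followed by a recurrent layer over the four sequential examinations.

```python
# Minimal sketch: shared 2D encoder applied to each of four prior screening
# exams, with a GRU pooling the longitudinal features before risk prediction.
import torch
import torch.nn as nn

class LongitudinalRiskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                # shared per-exam feature extractor
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.temporal = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
        self.classifier = nn.Linear(16, 1)

    def forward(self, exams):                        # exams: (batch, 4, 1, H, W)
        b, t = exams.shape[:2]
        feats = self.encoder(exams.flatten(0, 1)).view(b, t, -1)
        _, hidden = self.temporal(feats)             # hidden: (1, batch, 16)
        return self.classifier(hidden.squeeze(0))    # near-term risk logit per patient

model = LongitudinalRiskNet()
x = torch.randn(2, 4, 1, 64, 64)                     # two patients, four prior exams each
print(torch.sigmoid(model(x)).squeeze(-1))
```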

15.
Nat Commun ; 12(1): 7281, 2021 12 14.
Article in English | MEDLINE | ID: mdl-34907229

ABSTRACT

While active efforts are advancing medical artificial intelligence (AI) model development and clinical translation, safety issues of these AI models are emerging, and little research has been done on them. We perform a study to investigate the behaviors of an AI diagnosis model under adversarial images generated by Generative Adversarial Network (GAN) models and to evaluate the effects on human experts when visually identifying potential adversarial images. Our GAN model makes intentional modifications to the diagnosis-sensitive contents of mammogram images in deep learning-based computer-aided diagnosis (CAD) of breast cancer. In our experiments, the adversarial samples fool the AI-CAD model into outputting a wrong diagnosis on 69.1% of the cases that are initially correctly classified by the AI-CAD model. Five breast imaging radiologists visually identify 29%-71% of the adversarial samples. Our study suggests an imperative need for continuing research on medical AI models' safety issues and for developing potential defensive solutions against adversarial attacks.
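
A minimal sketch of a gradient-based adversarial perturbation (FGSM, used here as a simpler stand-in for the paper's GAN-based attack) applied to a toy CAD classifier; the model and image are synthetic.

```python
# Minimal sketch: nudge an input along the sign of the loss gradient to push a
# toy CAD classifier away from its current (correct) prediction.
import torch
import torch.nn as nn

cad_model = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

image = torch.rand(1, 1, 64, 64, requires_grad=True)   # placeholder mammogram patch
label = torch.tensor([[1.0]])                           # current (correct) diagnosis

loss = nn.functional.binary_cross_entropy_with_logits(cad_model(image), label)
loss.backward()                                         # gradient w.r.t. the input image
adversarial = (image + 0.02 * image.grad.sign()).clamp(0, 1).detach()
print("prediction shift:",
      torch.sigmoid(cad_model(image)).item(), "->", torch.sigmoid(cad_model(adversarial)).item())
```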


Subjects
Artificial Intelligence , Computer-Assisted Diagnosis/methods , Radiologists , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Computer Security , Female , Humans , Mammography , Radiologists/education
16.
Front Endocrinol (Lausanne) ; 12: 713592, 2021.
Article in English | MEDLINE | ID: mdl-34335479

ABSTRACT

Background and objective: Clinical characteristics of obesity are heterogeneous, but the current classification for diagnosis is based simply on BMI or metabolic healthiness. The purpose of this study was to use machine learning to explore a more precise classification of obesity subgroups towards informing individualized therapy. Subjects and Methods: In a multi-center study (n=2495), we used unsupervised machine learning to cluster patients with obesity from Shanghai Tenth People's Hospital (n=882, main cohort) based on three clinical variables (AUCs of glucose and of insulin during OGTT, and uric acid). Verification of the clustering was performed in three independent cohorts from external hospitals in China (n = 130, 137, and 289, respectively). A healthy normal-weight cohort (n=1057) was measured as controls. Results: Machine learning revealed four stable, metabolically distinct obesity clusters in each cohort. Metabolically healthy obesity (MHO, 44% of patients) was characterized by a relatively healthy metabolic status with the lowest incidence of comorbidities. Hypermetabolic obesity-hyperuricemia (HMO-U, 33% of patients) was characterized by extremely high uric acid and a greatly increased incidence of hyperuricemia (adjusted odds ratio [AOR] 73.67 vs. MHO, 95% CI 35.46-153.06). Hypermetabolic obesity-hyperinsulinemia (HMO-I, 8% of patients) was distinguished by overcompensated insulin secretion and a greatly increased incidence of polycystic ovary syndrome (AOR 14.44 vs. MHO, 95% CI 1.75-118.99). Hypometabolic obesity (LMO, 15% of patients) was characterized by extremely high glucose, decompensated insulin secretion, and the worst glucolipid metabolism (diabetes: AOR 105.85 vs. MHO, 95% CI 42.00-266.74; metabolic syndrome: AOR 13.50 vs. MHO, 95% CI 7.34-24.83). The assignment of patients in the verification cohorts to the main model showed a mean accuracy of 0.941 across all clusters. Conclusion: Machine learning automatically identified four subtypes of obesity in terms of clinical characteristics in four independent patient cohorts. This proof-of-concept study provides evidence that precise diagnosis of obesity is feasible and could potentially guide therapeutic planning and decisions for different subtypes of obesity. Clinical Trial Registration: www.ClinicalTrials.gov, NCT04282837.
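
A minimal sketch of the clustering step with synthetic data: k-means (standing in for the authors' unsupervised method) on the three clinical variables named above, with k = 4.

```python
# Minimal sketch (assumption): k = 4 unsupervised clustering of patients with
# obesity on OGTT glucose AUC, OGTT insulin AUC, and uric acid. Values synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
X = np.column_stack([
    rng.normal(18, 5, 882),      # placeholder OGTT glucose AUC
    rng.normal(120, 60, 882),    # placeholder OGTT insulin AUC
    rng.normal(380, 90, 882),    # placeholder uric acid
])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
for k in range(4):
    print(f"cluster {k}: n = {(labels == k).sum()}, mean uric acid = {X[labels == k, 2].mean():.0f}")
```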


Subjects
Machine Learning , Obesity/classification , Adult , Blood Glucose/analysis , Body Mass Index , China/epidemiology , Comorbidity , Female , Glucose Tolerance Test , Humans , Hyperuricemia/epidemiology , Insulin/blood , Male , Metabolic Syndrome/epidemiology , Obesity/epidemiology , Obesity/metabolism , Metabolically Benign Obesity , Polycystic Ovary Syndrome/epidemiology , Uric Acid
17.
Front Oncol ; 11: 658887, 2021.
Article in English | MEDLINE | ID: mdl-33996583

ABSTRACT

OBJECTIVES: To evaluate the effectiveness of radiomic features in classifying histological subtypes of central lung cancer on contrast-enhanced CT (CECT) images. MATERIALS AND METHODS: A total of 200 patients with radiologically defined central lung cancer were recruited. All patients underwent dual-phase chest CECT, and the histological subtypes (adenocarcinoma (ADC), squamous cell carcinoma (SCC), and small cell lung cancer (SCLC)) were confirmed by histopathological samples. A total of 107 features were used in five machine learning classifiers to perform the predictive analysis among the three subtypes. Models were trained and validated in two conditions: using radiomic features alone, and combining clinical features with radiomic features. The performance of the classification models was evaluated by the area under the receiver operating characteristic curve (AUC). RESULTS: The highest AUCs in classifying ADC vs. SCC, ADC vs. SCLC, and SCC vs. SCLC were 0.879, 0.836, and 0.783, respectively, obtained by using only radiomic features in a feedforward neural network. CONCLUSION: Our study indicates that radiomic features based on CECT images might be a promising tool for noninvasive prediction of histological subtypes in central lung cancer, and the neural network classifier might be well-suited to this task.

18.
BMC Cancer ; 21(1): 370, 2021 Apr 07.
Article in English | MEDLINE | ID: mdl-33827490

ABSTRACT

BACKGROUND: The abundance of immune and stromal cells in the tumor microenvironment (TME) is informative of levels of inflammation, angiogenesis, and desmoplasia. Radiomics, an approach of extracting quantitative features from radiological imaging to characterize diseases, has been shown to predict molecular classification, cancer recurrence risk, and many other disease outcomes. However, the ability of radiomics methods to predict the abundance of various cell types in the TME remains unclear. In this study, we employed a radio-genomics approach and machine learning models to predict the infiltration of 10 cell types in breast cancer lesions utilizing radiomic features extracted from breast dynamic contrast-enhanced magnetic resonance imaging. METHODS: We performed a retrospective study utilizing 73 patients from two independent institutions with imaging and gene expression data provided by The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA), respectively. A set of 199 radiomic features including shape-based, morphological, texture, and kinetic characteristics were extracted from the lesion volumes. To capture one-to-one relationships between radiomic features and cell type abundance, we performed linear regression on each radiomic feature/cell type abundance combination. Each regression model was tested for statistical significance. In addition, multivariate models were built to predict cell type infiltration status (i.e., "high" vs. "low"). A feature selection process via Recursive Feature Elimination was applied to the radiomic features on the training set. The classification models took the form of a binary logistic extreme gradient boosting framework. Two evaluation methods, leave-one-out cross-validation and an external independent test, were used for radiomic model learning and testing. The models' performance was measured via the area under the receiver operating characteristic curve (AUC). RESULTS: Univariate relationships were identified between a set of radiomic features and the abundance of fibroblasts. Multivariate models yielded leave-one-out cross-validation AUCs ranging from 0.5 to 0.83, and independent test AUCs ranging from 0.5 to 0.68, for the multiple cell type invasion predictions. CONCLUSIONS: In two independent breast cancer cohorts, breast MRI-derived radiomics are associated with the tumor's microenvironment in terms of the abundance of several cell types. Further evaluation with larger cohorts is needed.
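
A minimal sketch of the multivariate pipeline described above: recursive feature elimination over radiomic features followed by a binary logistic gradient-boosting classifier, with leave-one-out predictions pooled into one AUC. Data are synthetic, the xgboost package is assumed available, and for brevity the selector is fit on all data, which a real analysis would restrict to the training folds.

```python
# Minimal sketch (assumption): RFE feature selection + XGBoost classifier with
# leave-one-out cross-validation pooled into a single AUC. Data are synthetic.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(8)
X = rng.normal(size=(73, 199))                 # 199 radiomic features per lesion
y = rng.integers(0, 2, 73)                     # 1 = "high" infiltration of a cell type

selector = RFE(XGBClassifier(eval_metric="logloss"), n_features_to_select=10, step=20)
X_sel = selector.fit_transform(X, y)           # reduced radiomic feature set

probs = cross_val_predict(XGBClassifier(eval_metric="logloss"), X_sel, y,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, probs))
```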


Subjects
Breast Neoplasms/diagnostic imaging , Machine Learning/standards , Female , Humans , Middle Aged , Neoplasm Invasiveness , Phenotype , Retrospective Studies , Tumor Microenvironment
19.
Mach Learn Med Imaging ; 12966: 555-564, 2021 Sep.
Article in English | MEDLINE | ID: mdl-37808083

ABSTRACT

Elbow fracture diagnosis often requires patients to undergo both frontal and lateral views of elbow X-ray radiographs. In this paper, we propose a multiview deep learning method for an elbow fracture subtype classification task. Our strategy leverages transfer learning by first training two single-view models, one for the frontal view and the other for the lateral view, and then transferring the weights to the corresponding layers in the proposed multiview network architecture. Meanwhile, quantitative medical knowledge was integrated into the training process through a curriculum learning framework, which enables the model to first learn from "easier" samples and then transition to "harder" samples to reach better performance. In addition, our multiview network can work both in a dual-view setting and with a single view as input. We evaluate our method through extensive experiments on an elbow fracture classification task with a dataset of 1,964 images. Results show that our method outperforms two related methods on bone fracture classification in multiple settings, and our technique is able to boost the performance of the compared methods. The code is available at https://github.com/ljaiverson/multiview-curriculum.
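
A minimal sketch (assumption, not the released code at the repository above) of the weight-transfer step: two single-view encoders are trained separately, then their weights are loaded into the corresponding branches of a multiview network that can also accept a single view.

```python
# Minimal sketch: transfer single-view encoder weights into the two branches of
# a multiview classifier; the lateral view is optional at inference time.
import torch
import torch.nn as nn

def make_encoder():
    return nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

class MultiviewNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.frontal = make_encoder()
        self.lateral = make_encoder()
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, frontal, lateral=None):
        f = self.frontal(frontal)
        l = self.lateral(lateral) if lateral is not None else torch.zeros_like(f)
        return self.classifier(torch.cat([f, l], dim=1))   # works with one or both views

# Pretrained single-view encoders (placeholders for the separately trained models).
frontal_encoder, lateral_encoder = make_encoder(), make_encoder()

model = MultiviewNet()
model.frontal.load_state_dict(frontal_encoder.state_dict())   # weight transfer
model.lateral.load_state_dict(lateral_encoder.state_dict())
print(model(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)).shape)
```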

20.
Front Oncol ; 11: 725889, 2021.
Article in English | MEDLINE | ID: mdl-35186707

ABSTRACT

BACKGROUND: Gastric cancer is one of the leading causes of cancer death in the world. Improving gastric cancer survival prediction can enhance patient prognostication and treatment planning. METHODS: In this study, we performed gastric cancer survival prediction using machine learning and multi-modal data from 1061 patients, including 743 for model learning and 318 independent patients for evaluation. A Cox proportional-hazards model was trained to integrate clinical variables and CT imaging features (extracted by radiomics and deep learning) for overall and progression-free survival prediction. We further analyzed the prediction effects of clinical, radiomics, and deep learning features. The concordance index (c-index) was used as the model performance metric, and the predictive effects of multi-modal features were measured by hazard ratios (HRs) in pre- and post-operative settings. RESULTS: Among the 318 patients in the independent testing group, the hazard predicted by the Cox model from multi-modal features was associated with their survival. The highest c-index was 0.783 (95% CI, 0.782-0.783) and 0.770 (95% CI, 0.769-0.771) for overall and progression-free survival prediction, respectively. The post-operative variables were significantly (p<0.001) more predictive than the pre-operative variables. Pathological tumor stage (HR=1.336 [overall survival]/1.768 [progression-free survival], p<0.005), pathological lymph node stage (HR=1.665/1.433, p<0.005), carcinoembryonic antigen (CEA) (HR=1.632/1.522, p=0.02), chemotherapy treatment (HR=0.254/0.287, p<0.005), radiomics signature (HR=1.540/1.310, p<0.005), and deep learning signature (HR=1.950/1.420, p<0.005) were significant survival predictors. CONCLUSION: Our study showed that CT radiomics and deep learning imaging features are significant pre-operative predictors, providing additional prognostic information to the pathological staging markers. Lower CEA levels and chemotherapy treatment were also associated with better survival. These findings can enhance gastric cancer patient prognostication and inform treatment planning.
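
A minimal sketch of a Cox proportional-hazards model combining clinical variables with radiomics/deep-learning signatures, reporting hazard ratios and the concordance index with the lifelines package; all data below are synthetic.

```python
# Minimal sketch (assumption): fit a Cox proportional-hazards model on
# clinical + imaging-signature covariates and report HRs and the c-index.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 743                                               # learning-set size from the abstract
df = pd.DataFrame({
    "pathological_T_stage": rng.integers(1, 5, n),
    "CEA":                  rng.lognormal(1.0, 0.5, n),
    "chemotherapy":         rng.integers(0, 2, n),
    "radiomics_signature":  rng.normal(size=n),
    "deep_signature":       rng.normal(size=n),
    "months":               rng.exponential(30, n),   # follow-up time (synthetic)
    "event":                rng.integers(0, 2, n),    # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.hazard_ratios_)                 # per-variable hazard ratios
print("c-index:", cph.concordance_index_)
```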
