Results 1 - 14 of 14
1.
Article in English | MEDLINE | ID: mdl-38663992

ABSTRACT

BACKGROUND AND PURPOSE: Artificial intelligence (AI) models in radiology are frequently developed and validated using datasets from a single institution and are rarely tested on independent, external datasets, raising questions about their generalizability and applicability in clinical practice. The American Society of Functional Neuroradiology (ASFNR) organized a multi-center AI competition to evaluate the proficiency of developed models in identifying various pathologies on noncontrast head CT (NCCT), assessing age-based normality and estimating medical urgency. MATERIALS AND METHODS: In total, 1201 anonymized, full-head NCCT clinical scans from five institutions were pooled to form the dataset. The dataset encompassed normal studies as well as pathologies including acute ischemic stroke, intracranial hemorrhage, traumatic brain injury, and mass effect (detection of these: task 1). NCCTs were also assessed to determine if findings were consistent with expected brain changes for the patient's age (task 2: age-based normality assessment) and to identify any abnormalities requiring immediate medical attention (task 3: evaluation of findings for urgent intervention). Five neuroradiologists labeled each NCCT, with consensus interpretations serving as the ground truth. The competition was announced online, inviting academic institutions and companies. Independent central analysis assessed each model's performance. Accuracy, sensitivity, specificity, positive and negative predictive values, and receiver operating characteristic (ROC) curves were generated for each AI model, along with the area under the ROC curve (AUROC). RESULTS: 1177 studies were processed by four teams. The median patient age was 62 years, with an interquartile range of 33 years. Nineteen teams from various academic institutions registered for the competition. Of these, four teams submitted their final results. No commercial entities participated in the competition. For task 1, AUROCs ranged from 0.49 to 0.59. For task 2, two teams completed the task with AUROC values of 0.57 and 0.52. For task 3, teams had little to no agreement with the ground truth. CONCLUSIONS: To assess the performance of AI models in real-world clinical scenarios, we analyzed their performance in the ASFNR AI Competition. The first ASFNR Competition underscored the gap between expectation and reality; the models largely fell short in their assessments. As the integration of AI tools into clinical workflows increases, neuroradiologists must carefully recognize the capabilities, constraints, and consistency of these technologies. Before institutions adopt these algorithms, thorough validation is essential to ensure acceptable levels of performance in clinical settings. ABBREVIATIONS: AI = artificial intelligence; ASFNR = American Society of Functional Neuroradiology; AUROC = area under the receiver operating characteristic curve; DICOM = Digital Imaging and Communications in Medicine; GEE = generalized estimation equation; IQR = interquartile range; NPV = negative predictive value; PPV = positive predictive value; ROC = receiver operating characteristic; TBI = traumatic brain injury.
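The per-task metrics described above (accuracy, sensitivity, specificity, PPV, NPV, AUROC) can be reproduced from per-study model scores and consensus labels. Below is a minimal sketch using scikit-learn, with hypothetical `y_true` and `y_score` arrays standing in for the consensus labels and model outputs; this is not the competition's official evaluation code.

```python
# Minimal sketch of the per-task metrics described above (not the official
# ASFNR evaluation code). `y_true` holds consensus labels, `y_score` the
# model's predicted probabilities for the positive class -- both hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve, confusion_matrix

y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])                    # consensus ground truth
y_score = np.array([0.2, 0.7, 0.4, 0.3, 0.9, 0.6, 0.1, 0.8])   # model output

auroc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Binarize at 0.5 to derive the confusion-matrix metrics.
y_pred = (y_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"AUROC={auroc:.2f} sens={sensitivity:.2f} spec={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} acc={accuracy:.2f}")
```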

2.
AJNR Am J Neuroradiol; 45(3): 312-319, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38453408

ABSTRACT

BACKGROUND AND PURPOSE: Recent developments in deep learning methods offer a potential solution to the need for alternative imaging methods due to concerns about the toxicity of gadolinium-based contrast agents. The purpose of the study was to synthesize virtual gadolinium contrast-enhanced T1-weighted MR images from noncontrast multiparametric MR images in patients with primary brain tumors by using deep learning. MATERIALS AND METHODS: We trained and validated a deep learning network by using MR images from 335 subjects in the Brain Tumor Segmentation Challenge 2019 training data set. A held-out set of 125 subjects from the Brain Tumor Segmentation Challenge 2019 validation data set was used to test the generalization of the model. A residual inception DenseNet network, called T1c-ET, was developed and trained to simultaneously synthesize virtual contrast-enhanced T1-weighted (vT1c) images and segment the enhancing portions of the tumor. Three expert neuroradiologists independently scored the synthesized vT1c images by using a 3-point Likert scale, evaluating image quality and contrast enhancement against ground truth T1c images (1 = poor, 2 = good, 3 = excellent). RESULTS: The synthesized vT1c images achieved structural similarity index, peak signal-to-noise ratio, and normalized mean square error scores of 0.91, 64.35, and 0.03, respectively. There was moderate interobserver agreement among the 3 raters regarding the algorithm's performance in predicting contrast enhancement, with a Fleiss kappa value of 0.61. Our model was able to accurately predict contrast enhancement in 88.8% of the cases (scores of 2 to 3 on the 3-point scale). CONCLUSIONS: We developed a novel deep learning architecture to synthesize virtual postcontrast enhancement by using only conventional noncontrast brain MR images. Our results demonstrate the potential of deep learning methods to reduce the need for gadolinium contrast in the evaluation of primary brain tumors.


Subjects
Brain Neoplasms, Deep Learning, Humans, Gadolinium, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Brain/pathology, Contrast Media, Magnetic Resonance Imaging/methods
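The reported image-similarity metrics (SSIM, PSNR, NMSE) compare each synthesized vT1c volume with its ground-truth T1c. Below is a minimal sketch using scikit-image, with random stand-in volumes and a common NMSE normalization that may differ in detail from the paper's.

```python
# Sketch of the image-similarity metrics reported above (SSIM, PSNR, NMSE),
# computed between a ground-truth T1c volume and a synthesized vT1c volume.
# The arrays here are random stand-ins; the exact NMSE definition used in
# the paper may differ slightly from this common normalization.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
t1c = rng.random((128, 128, 64)).astype(np.float32)                     # ground-truth T1c
vt1c = t1c + 0.01 * rng.standard_normal(t1c.shape).astype(np.float32)   # synthesized vT1c

data_range = float(t1c.max() - t1c.min())
ssim = structural_similarity(t1c, vt1c, data_range=data_range)
psnr = peak_signal_noise_ratio(t1c, vt1c, data_range=data_range)
nmse = np.mean((t1c - vt1c) ** 2) / np.mean(t1c ** 2)

print(f"SSIM={ssim:.3f} PSNR={psnr:.2f} dB NMSE={nmse:.4f}")
```
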
5.
Nat Mach Intell; 5(7): 799-810, 2023 Jul.
Article in English | MEDLINE | ID: mdl-38706981

ABSTRACT

Medical artificial intelligence (AI) has tremendous potential to advance healthcare by supporting and contributing to the evidence-based practice of medicine, personalizing patient treatment, reducing costs, and improving both healthcare provider and patient experience. Unlocking this potential requires systematic, quantitative evaluation of the performance of medical AI models on large-scale, heterogeneous data capturing diverse patient populations. Here, to meet this need, we introduce MedPerf, an open platform for benchmarking AI models in the medical domain. MedPerf focuses on enabling federated evaluation of AI models, by securely distributing them to different facilities, such as healthcare organizations. This process of bringing the model to the data empowers each facility to assess and verify the performance of AI models in an efficient and human-supervised process, while prioritizing privacy. We describe the current challenges healthcare and AI communities face, the need for an open platform, the design philosophy of MedPerf, its current implementation status and real-world deployment, our roadmap, and, importantly, the use of MedPerf with multiple international institutions in both cloud-based and on-premises scenarios. Finally, we welcome new contributions by researchers and organizations to further strengthen MedPerf as an open benchmarking platform.
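The core pattern described above, bringing the model to the data and returning only metrics, can be sketched in a few lines. This is an illustrative sketch only and does not use the MedPerf API; `evaluate_locally` and `federated_benchmark` are hypothetical helpers.

```python
# Illustrative sketch of the "bring the model to the data" pattern described
# above -- not the MedPerf API. Each facility runs inference and scoring
# locally, and only aggregate metrics leave the site.
from typing import Callable, Dict, List
import numpy as np
from sklearn.metrics import roc_auc_score


def evaluate_locally(model: Callable[[np.ndarray], np.ndarray],
                     images: np.ndarray, labels: np.ndarray) -> Dict[str, float]:
    """Runs inside a facility; raw images and labels never leave this site."""
    scores = model(images)
    return {"auroc": float(roc_auc_score(labels, scores)), "n": float(len(labels))}


def federated_benchmark(model: Callable[[np.ndarray], np.ndarray],
                        facilities: List[Dict[str, np.ndarray]]) -> float:
    """Aggregates per-site metrics into a case-weighted benchmark result."""
    site_metrics = [evaluate_locally(model, f["images"], f["labels"]) for f in facilities]
    weights = [m["n"] for m in site_metrics]
    return float(np.average([m["auroc"] for m in site_metrics], weights=weights))


# Hypothetical usage with two simulated sites and a dummy scoring model.
rng = np.random.default_rng(0)
sites = [{"images": rng.random((20, 8)), "labels": np.tile([0, 1], 10)} for _ in range(2)]
print(federated_benchmark(lambda x: x.mean(axis=1), sites))
```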

6.
J Med Imaging (Bellingham); 9(1): 016001, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35118164

ABSTRACT

Purpose: Deep learning has shown promise for predicting the molecular profiles of gliomas using MR images. Prior to clinical implementation, ensuring robustness to real-world problems, such as patient motion, is crucial. The purpose of this study is to perform a preliminary evaluation on the effects of simulated motion artifact on glioma marker classifier performance and determine if motion correction can restore classification accuracies. Approach: T2-weighted (T2w) images and molecular information were retrieved from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA) databases. Simulated motion was added in the k-space domain along the phase encoding direction. Classifier performance for isocitrate dehydrogenase (IDH) mutation, 1p/19q co-deletion, and MGMT methylation was assessed over the range of 0% to 100% corrupted k-space lines. Rudimentary motion correction networks were trained on the motion-corrupted images. The performance of the three glioma marker classifiers was then evaluated on the motion-corrected images. Results: Glioma marker classifier performance decreased markedly with increasing motion corruption. Applying motion correction effectively restored classification accuracy for even the most motion-corrupted images. For IDH classification, 99% accuracy was achieved, exceeding the original performance of the network and representing a new benchmark in non-invasive MRI-based IDH classification. Conclusions: Robust motion correction can facilitate highly accurate deep learning MRI-based molecular marker classification, rivaling invasive tissue-based characterization methods. Motion correction may be able to increase classification accuracy even in the absence of a visible artifact, representing a new strategy for boosting classifier performance.
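The motion simulation described above corrupts a chosen fraction of phase-encoding lines in k-space. Below is a hedged sketch of that idea, modeling motion during the affected lines as a random in-plane translation (a linear phase ramp); the authors' exact corruption model is not specified here, so treat this as an assumption.

```python
# Hedged sketch of the k-space motion simulation described above: a fraction
# of phase-encoding lines is modified as if acquired under a random in-plane
# shift (a linear phase ramp). Not the authors' code; details are assumptions.
import numpy as np


def add_simulated_motion(image: np.ndarray, corrupt_fraction: float,
                         max_shift_px: float = 4.0, seed: int = 0) -> np.ndarray:
    """Corrupts `corrupt_fraction` of k-space phase-encoding lines (axis 0)."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    n_pe = kspace.shape[0]
    n_corrupt = int(round(corrupt_fraction * n_pe))
    lines = rng.choice(n_pe, size=n_corrupt, replace=False)

    # A rigid shift along the phase-encode direction during acquisition shows
    # up as a phase ramp on the affected k-space lines.
    ky = np.fft.fftshift(np.fft.fftfreq(n_pe))[lines][:, None]
    shifts = rng.uniform(-max_shift_px, max_shift_px, size=(n_corrupt, 1))
    kspace[lines, :] *= np.exp(-2j * np.pi * ky * shifts)

    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))


# Example: corrupt 30% of phase-encoding lines of a random "T2w slice".
slice_t2w = np.random.rand(256, 256).astype(np.float32)
motion_slice = add_simulated_motion(slice_t2w, corrupt_fraction=0.3)
```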

7.
Article in English | MEDLINE | ID: mdl-36998700

ABSTRACT

Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and those that assign low confidence levels at incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.
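The QU-BraTS score sweeps an uncertainty threshold, filters out voxels above it, recomputes Dice on the remaining voxels, and tracks how many true positives and true negatives were filtered away. Below is a sketch of that thresholded-filtering idea; the exact weighting and normalization follow the challenge description only approximately, so treat it as illustrative.

```python
# Sketch of the thresholded-filtering idea behind the QU-BraTS score: at each
# uncertainty threshold, unconfident voxels are removed, Dice is recomputed on
# the remaining voxels, and the ratios of filtered-out true positives and true
# negatives are tracked. Illustrative only; not the official evaluation code.
import numpy as np


def qu_brats_style_score(pred: np.ndarray, truth: np.ndarray,
                         uncertainty: np.ndarray,
                         thresholds=np.arange(0, 101, 25)) -> float:
    dices, ftp_ratios, ftn_ratios = [], [], []
    tp0 = np.sum((pred == 1) & (truth == 1))
    tn0 = np.sum((pred == 0) & (truth == 0))
    for tau in thresholds:
        keep = uncertainty <= tau                    # confident voxels only
        p, t = pred[keep], truth[keep]
        tp = np.sum((p == 1) & (t == 1))
        dices.append(2 * tp / (np.sum(p) + np.sum(t) + 1e-8))
        ftp_ratios.append(1 - tp / (tp0 + 1e-8))     # fraction of TPs filtered out
        ftn_ratios.append(1 - np.sum((p == 0) & (t == 0)) / (tn0 + 1e-8))
    span = thresholds[-1] - thresholds[0]
    dice_auc = np.trapz(dices, thresholds) / span
    ftp_auc = np.trapz(ftp_ratios, thresholds) / span
    ftn_auc = np.trapz(ftn_ratios, thresholds) / span
    return (dice_auc + (1 - ftp_auc) + (1 - ftn_auc)) / 3


# Hypothetical usage on random masks with uncertainties normalized to 0-100.
rng = np.random.default_rng(0)
truth = (rng.random((64, 64)) > 0.7).astype(int)
pred = (rng.random((64, 64)) > 0.7).astype(int)
unc = rng.integers(0, 101, size=truth.shape)
print(f"score = {qu_brats_style_score(pred, truth, unc):.3f}")
```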

8.
Brain Connect; 10(8): 422-435, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33030350

ABSTRACT

Background: To develop a new functional magnetic resonance imaging (fMRI) network inference method, BrainNET, that utilizes an efficient machine learning algorithm to quantify contributions of various regions of interest (ROIs) in the brain to a specific ROI. Methods: BrainNET uses extremely randomized trees to estimate network topology from fMRI data and generates an adjacency matrix representing brain network topology, without reliance on arbitrary thresholds. Open-source simulated fMRI data of 50 subjects in 28 different simulations under various confounding conditions with known ground truth were used to validate the method. Performance was compared with correlation and partial correlation (PC). The real-world performance was then evaluated in a publicly available attention-deficit/hyperactivity disorder (ADHD) data set, including 134 typically developing children (mean age: 12.03, males: 83), 75 ADHD inattentive (mean age: 11.46, males: 56), and 93 ADHD combined (mean age: 11.86, males: 77) subjects. Network topologies in ADHD were inferred using BrainNET, correlation, and PC. Graph metrics were extracted to determine differences between the ADHD groups. Results: BrainNET demonstrated excellent performance across all simulations and varying confounders in identifying the true presence of connections. In the ADHD data set, BrainNET was able to identify significant changes (p < 0.05) in graph metrics between groups. No significant changes in graph metrics between ADHD groups were identified using correlation and PC. Conclusion: We describe BrainNET, a new network inference method to estimate fMRI connectivity that was adapted from gene regulatory network inference methods. BrainNET outperformed Pearson correlation and PC in fMRI simulation data and real-world ADHD data. BrainNET can be used independently or combined with other existing methods as a useful tool to understand network changes and to determine the true network topology of the brain under various conditions and disease states. Impact statement: Developed a new fMRI network inference method, BrainNET, using machine learning. BrainNET outperformed Pearson correlation and partial correlation in fMRI simulation data and real-world ADHD data. BrainNET does not need to be pretrained and can be applied to infer fMRI network topology independently on individual subjects and for a varying number of nodes.


Subjects
Brain/anatomy & histology, Brain/diagnostic imaging, Machine Learning, Adolescent, Algorithms, Attention Deficit Disorder with Hyperactivity/diagnostic imaging, Brain Mapping/methods, Child, Computer Simulation, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Neural Pathways/diagnostic imaging, Sensitivity and Specificity
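The tree-based inference step described above can be sketched as follows: for each target ROI, an extremely-randomized-trees regressor predicts that ROI's time series from all other ROIs, and the resulting feature importances fill the corresponding row of the adjacency matrix. This is an illustrative sketch, not the authors' released code.

```python
# Illustrative sketch (not the authors' released code) of the BrainNET idea:
# predict each ROI's time series from all other ROIs with extremely
# randomized trees and use feature importances as edge weights.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor


def infer_network(timeseries: np.ndarray, n_estimators: int = 200,
                  seed: int = 0) -> np.ndarray:
    """timeseries: (n_timepoints, n_rois) fMRI ROI signals."""
    n_rois = timeseries.shape[1]
    adjacency = np.zeros((n_rois, n_rois))
    for target in range(n_rois):
        predictors = np.delete(np.arange(n_rois), target)
        model = ExtraTreesRegressor(n_estimators=n_estimators, random_state=seed)
        model.fit(timeseries[:, predictors], timeseries[:, target])
        adjacency[target, predictors] = model.feature_importances_
    return adjacency


# Hypothetical usage on simulated data: 200 time points, 10 ROIs.
rng = np.random.default_rng(0)
adj = infer_network(rng.standard_normal((200, 10)))
```
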
9.
Neurooncol Adv; 2(1): vdaa066, 2020.
Article in English | MEDLINE | ID: mdl-32705083

ABSTRACT

BACKGROUND: One of the most important recent discoveries in brain glioma biology has been the identification of the isocitrate dehydrogenase (IDH) mutation and 1p/19q co-deletion status as markers for therapy and prognosis. 1p/19q co-deletion is the defining genomic marker for oligodendrogliomas and confers a better prognosis and treatment response than gliomas without it. Our group has previously developed a highly accurate deep-learning network for determining IDH mutation status using T2-weighted (T2w) MRI only. The purpose of this study was to develop a similar 1p/19q deep-learning classification network. METHODS: Multiparametric brain MRI and corresponding genomic information were obtained for 368 subjects from The Cancer Imaging Archive and The Cancer Genome Atlas. 1p/19q co-deletions were present in 130 subjects. Two hundred thirty-eight subjects were non-co-deleted. A T2w image-only network (1p/19q-net) was developed to perform 1p/19q co-deletion status classification and simultaneous single-label tumor segmentation using 3D-Dense-UNets. Three-fold cross-validation was performed to generalize the network performance. Receiver operating characteristic analysis was also performed. Dice scores were computed to determine tumor segmentation accuracy. RESULTS: 1p/19q-net demonstrated a mean cross-validation accuracy of 93.46% across the 3 folds (93.4%, 94.35%, and 92.62%, SD = 0.8) in predicting 1p/19q co-deletion status with a sensitivity and specificity of 0.90 ± 0.003 and 0.95 ± 0.01, respectively, and a mean area under the curve of 0.95 ± 0.01. The whole tumor segmentation mean Dice score was 0.80 ± 0.007. CONCLUSION: We demonstrate high 1p/19q co-deletion classification accuracy using only T2w MR images. This represents an important milestone toward using MRI to predict glioma histology, prognosis, and response to treatment.
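The 3-fold cross-validation style of reporting used above (per-fold accuracies, their mean and SD, plus ROC analysis) is easy to reproduce in outline. In the sketch below, a logistic regression on hypothetical precomputed features stands in for the paper's 3D Dense-UNet, which operates on full T2w volumes.

```python
# Sketch of 3-fold cross-validated reporting (mean accuracy +/- SD and AUC).
# A logistic regression on toy features stands in for the 3D Dense-UNet.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.standard_normal((368, 32))                              # 368 subjects, toy features
y = np.r_[np.ones(130, dtype=int), np.zeros(238, dtype=int)]    # hypothetical 1p/19q labels

accs, aucs = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=3, shuffle=True,
                                           random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))
    aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))

print(f"accuracy {np.mean(accs):.3f} ± {np.std(accs):.3f}, "
      f"AUC {np.mean(aucs):.3f} ± {np.std(aucs):.3f}")
```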

10.
Tomography; 6(2): 186-193, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32548295

ABSTRACT

We developed a fully automated method for brain tumor segmentation using deep learning; 285 brain tumor cases with multiparametric magnetic resonance images from the BraTS2018 data set were used. We designed 3 separate 3D-Dense-UNets to simplify the complex multiclass segmentation problem into individual binary-segmentation problems for each subcomponent. We implemented a 3-fold cross-validation to generalize the network's performance. The mean cross-validation Dice-scores for whole tumor (WT), tumor core (TC), and enhancing tumor (ET) segmentations were 0.92, 0.84, and 0.80, respectively. We then retrained the individual binary-segmentation networks using 265 of the 285 cases, with 20 cases held-out for testing. We also tested the network on 46 cases from the BraTS2017 validation data set, 66 cases from the BraTS2018 validation data set, and 52 cases from an independent clinical data set. The average Dice-scores for WT, TC, and ET were 0.90, 0.84, and 0.80, respectively, on the 20 held-out testing cases. The average Dice-scores for WT, TC, and ET on the BraTS2017 validation data set, the BraTS2018 validation data set, and the clinical data set were as follows: 0.90, 0.80, and 0.78; 0.90, 0.82, and 0.80; and 0.85, 0.80, and 0.77, respectively. A fully automated deep learning method was developed to segment brain tumors into their subcomponents, which achieved high prediction accuracy on the BraTS data set and on the independent clinical data set. This method is promising for implementation into a clinical workflow.


Subjects
Brain Neoplasms, Deep Learning, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/genetics, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Neural Networks, Computer
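The Dice scores reported above compare a predicted binary mask with the ground-truth mask for each subcomponent (WT, TC, ET), exactly as if each had been segmented by its own binary network. Below is a minimal sketch of the metric on random stand-in masks.

```python
# Minimal sketch of the Dice score used above, applied to binary masks.
# The masks here are random stand-ins, not actual segmentations.
import numpy as np


def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    return float(2 * np.logical_and(pred, truth).sum() /
                 (pred.sum() + truth.sum() + eps))


rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.7
pred = rng.random((64, 64, 64)) > 0.7
print(f"Dice = {dice_score(pred, truth):.3f}")
```
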
11.
Neuro Oncol; 22(3): 402-411, 2020 Mar 05.
Article in English | MEDLINE | ID: mdl-31637430

ABSTRACT

BACKGROUND: Isocitrate dehydrogenase (IDH) mutation status has emerged as an important prognostic marker in gliomas. Currently, reliable IDH mutation determination requires invasive surgical procedures. The purpose of this study was to develop a highly accurate, MRI-based, voxelwise deep-learning IDH classification network using T2-weighted (T2w) MR images and compare its performance to a multicontrast network. METHODS: Multiparametric brain MRI data and corresponding genomic information were obtained for 214 subjects (94 IDH-mutated, 120 IDH wild-type) from The Cancer Imaging Archive and The Cancer Genome Atlas. Two separate networks were developed, including a T2w image-only network (T2-net) and a multicontrast (T2w, fluid attenuated inversion recovery, and T1 postcontrast) network (TS-net) to perform IDH classification and simultaneous single label tumor segmentation. The networks were trained using 3D Dense-UNets. Three-fold cross-validation was performed to generalize the networks' performance. Receiver operating characteristic analysis was also performed. Dice scores were computed to determine tumor segmentation accuracy. RESULTS: T2-net demonstrated a mean cross-validation accuracy of 97.14% ± 0.04 in predicting IDH mutation status, with a sensitivity of 0.97 ± 0.03, specificity of 0.98 ± 0.01, and an area under the curve (AUC) of 0.98 ± 0.01. TS-net achieved a mean cross-validation accuracy of 97.12% ± 0.09, with a sensitivity of 0.98 ± 0.02, specificity of 0.97 ± 0.001, and an AUC of 0.99 ± 0.01. The mean whole tumor segmentation Dice scores were 0.85 ± 0.009 for T2-net and 0.89 ± 0.006 for TS-net. CONCLUSION: We demonstrate high IDH classification accuracy using only T2-weighted MR images. This represents an important milestone toward clinical translation.


Subjects
Brain Neoplasms/diagnostic imaging, Brain Neoplasms/genetics, Deep Learning, Glioma/diagnostic imaging, Glioma/genetics, Isocitrate Dehydrogenase/genetics, Magnetic Resonance Imaging, Female, Humans, Male, Middle Aged, Sensitivity and Specificity
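The networks described above make voxelwise IDH predictions while also segmenting the tumor, so a subject-level call has to be derived from the voxel votes. The sketch below shows one simple aggregation (majority vote inside the predicted tumor mask); the paper's exact aggregation rule may differ, so this is an assumption.

```python
# Hedged sketch of deriving a subject-level IDH call from voxelwise
# predictions by majority vote within the predicted tumor mask. The
# aggregation rule is an assumption, not necessarily the paper's.
import numpy as np


def subject_level_idh(voxel_probs: np.ndarray, tumor_mask: np.ndarray,
                      threshold: float = 0.5) -> int:
    """voxel_probs: per-voxel P(IDH-mutant); tumor_mask: predicted segmentation."""
    tumor_probs = voxel_probs[tumor_mask.astype(bool)]
    if tumor_probs.size == 0:
        return 0                          # no tumor voxels -> default to wild-type
    mutant_fraction = float((tumor_probs >= threshold).mean())
    return int(mutant_fraction >= 0.5)


rng = np.random.default_rng(0)
probs = rng.random((96, 96, 64))
mask = rng.random((96, 96, 64)) > 0.9
print("IDH-mutant" if subject_level_idh(probs, mask) else "IDH wild-type")
```
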
12.
J Med Imaging (Bellingham); 6(4): 046003, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31824982

ABSTRACT

Isocitrate dehydrogenase (IDH) mutation status is an important marker in glioma diagnosis and therapy. We propose an automated pipeline for noninvasively predicting IDH status using deep learning and T2-weighted (T2w) magnetic resonance (MR) images with minimal preprocessing (N4 bias correction and normalization to zero mean and unit variance). T2w MR images and genomic data were obtained from The Cancer Imaging Archive dataset for 260 subjects (120 high-grade and 140 low-grade gliomas). A fully automated two-dimensional densely connected model was trained to classify IDH mutation status on 208 subjects and tested on another held-out set of 52 subjects using fivefold cross validation. Data leakage was avoided by ensuring subject separation during the slice-wise randomization. Mean classification accuracy of 90.5% was achieved for each axial slice in predicting the three classes of no tumor, IDH mutated, and IDH wild type. Test accuracy of 83.8% was achieved in predicting IDH mutation status for individual subjects on the test dataset of 52 subjects. We demonstrate a deep learning method to predict IDH mutation status using T2w MRI alone. Radiologic imaging studies using deep learning methods must address data leakage (subject duplication) in the randomization process to avoid upward bias in the reported classification accuracy.
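The leakage safeguard described above amounts to splitting at the subject level before any slice-wise shuffling, so no subject contributes slices to both training and test sets. Below is a minimal sketch with scikit-learn's group-aware splitter on hypothetical slice metadata.

```python
# Minimal sketch of subject-level separation to avoid the data leakage
# described above: slices are grouped by subject ID before splitting, so no
# subject appears in both training and test sets. Metadata are hypothetical.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

n_subjects, slices_per_subject = 260, 20
subject_ids = np.repeat(np.arange(n_subjects), slices_per_subject)  # one ID per slice
slice_features = np.random.rand(len(subject_ids), 128)              # toy slice features

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(slice_features, groups=subject_ids))

# No subject contributes slices to both sets, preventing upward bias.
assert set(subject_ids[train_idx]).isdisjoint(subject_ids[test_idx])
```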

13.
Neurophotonics; 5(1): 011004, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28948191

ABSTRACT

Transcranial infrared laser stimulation (TILS) has shown effectiveness in improving human cognition and was investigated using broadband near-infrared spectroscopy (bb-NIRS) in our previous study, but the effect of laser heating on the actual bb-NIRS measurements was not examined. To address this potential confounding factor, 11 human participants were studied. First, we measured time-dependent temperature increases on forehead skin using clinical-grade thermometers following the TILS experimental protocol used in our previous study. Second, a subject-averaged, time-dependent temperature alteration curve was obtained, based on which a heat generator was controlled to induce the same temperature increase at the same forehead location that TILS was delivered on each participant. Third, the same bb-NIRS system was employed to monitor hemodynamic and metabolic changes of forehead tissue near the thermal stimulation site before, during, and after the heat stimulation. The results showed that cytochrome-c-oxidase of forehead tissue was not significantly modified by this heat stimulation. Significant differences in oxyhemoglobin, total hemoglobin, and differential hemoglobin concentrations were observed during the heat stimulation period versus the laser stimulation. The study demonstrated a transient hemodynamic effect of heat-based stimulation distinct from that of TILS. We concluded that the observed effects of TILS on cerebral hemodynamics and metabolism are not induced by heating of the skin.

14.
J Cereb Blood Flow Metab; 37(12): 3789-3802, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28178891

ABSTRACT

Transcranial infrared laser stimulation (TILS) is a noninvasive form of brain photobiomodulation. Cytochrome-c-oxidase (CCO), the terminal enzyme in the mitochondrial electron transport chain, is hypothesized to be the primary intracellular photoacceptor. We hypothesized that TILS up-regulates cerebral CCO and causes hemodynamic changes. We delivered 1064-nm laser stimulation to the forehead of healthy participants (n = 11), while broadband near-infrared spectroscopy was utilized to acquire light reflectance from the TILS-treated cortical region before, during, and after TILS. Placebo experiments were also performed for accurate comparison. Time courses of the spectroscopic readings were analyzed and fitted to the modified Beer-Lambert law. With respect to the placebo readings, we observed (1) significant increases in cerebral concentrations of oxidized CCO (Δ[CCO]; >0.08 µM; p < 0.01), oxygenated hemoglobin (Δ[HbO]; >0.8 µM; p < 0.01), and total hemoglobin (Δ[HbT]; >0.5 µM; p < 0.01) during and after TILS, and (2) linear interplays between Δ[CCO] versus Δ[HbO] and between Δ[CCO] versus Δ[HbT]. Ratios of Δ[CCO]/Δ[HbO] and Δ[CCO]/Δ[HbT] were introduced as TILS-induced metabolic-hemodynamic coupling indices to quantify the coupling strength between TILS-enhanced cerebral metabolism and blood oxygen supply. This study provides the first demonstration that TILS causes up-regulation of oxidized CCO in the human brain, and contributes important insight into the physiological mechanisms.


Subjects
Brain/blood supply, Electron Transport Complex IV/genetics, Hemodynamics, Low-Level Light Therapy, Up-Regulation, Adult, Brain/metabolism, Brain/radiation effects, Electron Transport Complex IV/metabolism, Energy Metabolism/radiation effects, Equipment Design, Hemodynamics/radiation effects, Humans, Infrared Rays, Low-Level Light Therapy/instrumentation, Neuroprotection/radiation effects, Oxidation-Reduction/radiation effects, Oxyhemoglobins/metabolism, Spectroscopy, Near-Infrared, Up-Regulation/radiation effects, Young Adult
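The concentration changes reported above come from fitting the measured broadband reflectance changes to the modified Beer-Lambert law, ΔOD(λ) = Σ_i ε_i(λ) Δc_i L DPF(λ), and solving for Δ[HbO], Δ[Hb], and Δ[CCO] by least squares across wavelengths. The sketch below uses placeholder extinction coefficients and DPF values, not the tabulated values used in the study.

```python
# Sketch of a modified Beer-Lambert least-squares fit for (d[HbO], d[Hb],
# d[CCO]) from broadband dOD measurements. Extinction coefficients, DPF, and
# the measured dOD values below are illustrative placeholders only.
import numpy as np

wavelengths = np.array([740.0, 780.0, 820.0, 860.0, 900.0])   # nm
# Rows: wavelengths; columns: HbO, Hb, oxidized CCO (placeholder units).
epsilon = np.array([
    [0.45, 1.20, 0.30],
    [0.70, 1.00, 0.45],
    [0.95, 0.75, 0.55],
    [1.10, 0.70, 0.40],
    [1.25, 0.80, 0.35],
])
source_detector_L = 3.0                       # cm, source-detector separation
dpf = np.full(len(wavelengths), 6.0)          # differential pathlength factor


def fit_concentration_changes(delta_od: np.ndarray) -> np.ndarray:
    """Least-squares solve for (d[HbO], d[Hb], d[CCO]) from dOD per wavelength."""
    design = epsilon * (source_detector_L * dpf)[:, None]
    dc, *_ = np.linalg.lstsq(design, delta_od, rcond=None)
    return dc


# Hypothetical measured dOD at the five wavelengths during stimulation.
delta_od = np.array([0.012, 0.015, 0.018, 0.020, 0.022])
d_hbo, d_hb, d_cco = fit_concentration_changes(delta_od)
d_hbt = d_hbo + d_hb
print(f"d[HbO]={d_hbo:.4f}, d[HbT]={d_hbt:.4f}, d[CCO]={d_cco:.4f}")
```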