ABSTRACT
Background: Existing criteria for predicting patient survival on immunotherapy are primarily centered on the patient's PD-L1 status. We tested the hypothesis that noninvasively captured baseline whole-lung radiomics features from CT images and baseline clinical parameters, combined with advanced machine learning approaches, can help build models of patient survival that compare favorably with PD-L1 status for predicting 'less-than-median-survival risk' in the metastatic NSCLC setting for patients on durvalumab. With a total of 1062 patients across model training and validation, this is the largest such study to date. Methods: To ensure a sufficient sample size, we combined data from treatment arms of three metastatic NSCLC studies. About 80% of these data were used for model training, and the remainder was held out for validation. We first trained two independent models: Model-C, trained to predict survival using clinical data, and Model-R, trained to predict survival using whole-lung radiomics features. Finally, we created Model-C+R, which leveraged both clinical and radiomics features. Results: The classification accuracy (for median survival) of Model-C, Model-R, and Model-C+R was 63%, 55%, and 68%, respectively. Sensitivity analysis of survival prediction across different training and validation cohorts showed concordance indices [95% interval] of 0.64 [0.63, 0.65], 0.60 [0.59, 0.60], and 0.66 [0.65, 0.67], respectively. We additionally evaluated the generalization of these models on a comparable cohort of 144 patients from an independent study, demonstrating classification accuracies of 65%, 62%, and 72%, respectively. Conclusion: Machine learning models combining baseline whole-lung CT radiomic and clinical features may be a useful tool for patient selection in immunotherapy. Further validation through prospective studies is needed.
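A minimal sketch of how a combined clinical-plus-radiomics classifier of 'less-than-median-survival risk' could be assembled; the synthetic data, feature names, and use of a random forest are illustrative assumptions, not the study's actual pipeline:

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins (the study used baseline clinical parameters and
# whole-lung radiomics features extracted from baseline CT scans).
rng = np.random.default_rng(0)
n_patients = 200
clinical = pd.DataFrame(rng.normal(size=(n_patients, 5)),
                        columns=[f"clin_{i}" for i in range(5)])
radiomics = pd.DataFrame(rng.normal(size=(n_patients, 50)),
                         columns=[f"rad_{i}" for i in range(50)])
survival_months = rng.exponential(scale=12.0, size=n_patients)

# Binary target: below-median survival vs. at/above-median survival.
y = (survival_months < np.median(survival_months)).astype(int)

# Model-C+R analogue: concatenate clinical and radiomics features per patient.
X = pd.concat([clinical, radiomics], axis=1)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_val, model.predict(X_val)))

Dropping either the clinical or the radiomics columns from X yields the single-modality analogues of Model-C and Model-R.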
Subjects
Monoclonal Antibodies , Non-Small Cell Lung Carcinoma , Lung Neoplasms , X-Ray Computed Tomography , Humans , Lung Neoplasms/mortality , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/drug therapy , Lung Neoplasms/pathology , Non-Small Cell Lung Carcinoma/mortality , Non-Small Cell Lung Carcinoma/drug therapy , Non-Small Cell Lung Carcinoma/diagnostic imaging , Non-Small Cell Lung Carcinoma/pathology , Male , Female , X-Ray Computed Tomography/methods , Monoclonal Antibodies/therapeutic use , Middle Aged , Aged , Machine Learning , Risk Assessment , Immunological Antineoplastic Agents/therapeutic use , Prognosis , B7-H1 Antigen , Radiomics
ABSTRACT
BACKGROUND: The development of adipose tissue during adolescence may provide valuable insights into obesity-associated diseases. We propose an automated convolutional neural network (CNN) approach using Dixon-based magnetic resonance imaging (MRI) to quantify abdominal subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in children and adolescents. METHODS: 474 abdominal Dixon MRI scans of 136 young healthy volunteers (aged 8-18) were included in this study. For each scan, an axial fat-only Dixon image located at the L2-L3 disc space and another image at the L4-L5 disc space were selected for quantification. For each image, an outer and an inner region around the abdominal wall, as well as SAT and VAT pixel masks, were generated by expert readers as reference standards. A standard U-Net CNN architecture was then used to train two models: one for region segmentation and one for fat pixel classification. Performance was evaluated using the Dice similarity coefficient (DSC) with fivefold cross-validation, and by Pearson correlation and Student's t-test against the reference standards. RESULTS: For the DSC results, means and standard deviations of the outer region, inner region, SAT, and VAT comparisons were 0.974 ± 0.026, 0.997 ± 0.003, 0.981 ± 0.025, and 0.932 ± 0.047, respectively. Pearson coefficients were 1.000 for both outer and inner regions, and 1.000 and 0.982 for SAT and VAT comparisons, respectively (all p = NS). CONCLUSION: These results show that our method not only provides excellent agreement with the reference SAT and VAT measurements, but also accurate abdominal wall region segmentation. The proposed combined region- and pixel-based CNN approach provides automated abdominal wall segmentation as well as SAT and VAT quantification with Dixon MRI and enables objective longitudinal assessment of adipose tissue in children during adolescence.
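For reference, the Dice similarity coefficient used for evaluation can be computed from two binary masks as follows; this is a generic sketch with toy masks, not the authors' evaluation code:

import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    # DSC = 2 * |A ∩ B| / (|A| + |B|) between a predicted and a reference binary mask.
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Example: compare a predicted VAT-style mask against a reference mask.
pred = np.zeros((256, 256), dtype=bool); pred[50:100, 50:100] = True
ref = np.zeros((256, 256), dtype=bool);  ref[55:105, 50:100] = True
print(round(dice_coefficient(pred, ref), 3))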
Subjects
Deep Learning , Child , Humans , Adolescent , Algorithms , Reproducibility of Results , Abdominal Fat/diagnostic imaging , Magnetic Resonance Imaging/methods
ABSTRACT
A common task in brain image analysis is the diagnosis of a medical condition, wherein groups of healthy controls and diseased subjects are analyzed and compared. By contrast, for two groups of healthy participants who differ in proficiency at a certain skill, a discriminative analysis of brain function remains a challenging problem. In this study, we develop new computational tools to explore the functional and anatomical differences that may exist between the brains of healthy individuals identified on the basis of different levels of task experience/proficiency. Toward this end, we examine a dataset of amateur and professional chess players, using resting-state functional magnetic resonance images to generate functional connectivity (FC) information. In addition, we utilize T1-weighted magnetic resonance imaging to estimate morphometric connectivity (MC) information. We combine functional and anatomical features into a new connectivity matrix, which we term the functional morphometric similarity connectome (FMSC). Since both the FC and MC information are susceptible to redundancy, the size of this information is reduced using statistical feature selection. We employ an off-the-shelf machine learning classifier, the support vector machine, for both single- and multi-modality classification. From our experiments, we establish that the saliency and ventral attention network of the brain is functionally and anatomically different between the two groups of healthy subjects (chess players). We argue that, since chess involves many aspects of higher-order cognition such as systematic thinking and spatial reasoning, and the identified network is task-positive for cognition tasks requiring a response, our results are valid and support the feasibility of the proposed computational pipeline. Moreover, we quantitatively validate an existing neuroscience hypothesis that learning a certain skill can cause changes in the brain (functional connectivity and anatomy), and this can be tested via our novel FMSC algorithm.
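A minimal sketch of the classification stage described above, combining vectorized FC and MC features, statistical feature selection, and a linear SVM; the array shapes, random data, and parameter choices (k, kernel, folds) are illustrative assumptions:

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative inputs: per-subject vectors of connectivity-matrix entries.
# fc_features: functional connectivity edges; mc_features: morphometric connectivity edges.
rng = np.random.default_rng(0)
fc_features = rng.normal(size=(40, 500))
mc_features = rng.normal(size=(40, 500))
labels = np.repeat([0, 1], 20)            # 0 = amateur, 1 = professional chess player

# FMSC-style multimodal features: concatenate FC and MC per subject.
fmsc_features = np.hstack([fc_features, mc_features])

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=100),        # statistical feature selection to reduce redundancy
    SVC(kernel="linear"),
)
print(cross_val_score(clf, fmsc_features, labels, cv=5).mean())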
ABSTRACT
Purpose: Deep learning has achieved major breakthroughs during the past decade in almost every field. There are plenty of publicly available algorithms, each designed to address a different computer vision task. However, most of these algorithms cannot be directly applied to images in the medical domain. Herein, we focus on the preprocessing steps that should be applied to medical images before they are fed to deep neural networks. Approach: To employ publicly available algorithms for clinical purposes, we must create a meaningful pixel/voxel representation from medical images that facilitates the learning process. Based on the ultimate goal expected from an algorithm (classification, detection, or segmentation), one may infer the required preprocessing steps that can ideally improve the performance of that algorithm. The required preprocessing steps for computed tomography (CT) and magnetic resonance (MR) images, in their correct order, are discussed in detail. We further support our discussion with relevant experiments investigating the efficiency of the listed preprocessing steps. Results: Our experiments confirmed that applying appropriate image preprocessing in the right order can improve the performance of deep neural networks in terms of better classification and segmentation. Conclusions: This work investigates the appropriate preprocessing steps for CT and MR images of prostate cancer patients, supported by several experiments, and can be useful for educating those new to the field (https://github.com/NIH-MIP/Radiology_Image_Preprocessing_for_Deep_Learning).
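An illustrative sketch of two common preprocessing operations of the kind discussed, CT intensity windowing in Hounsfield units followed by scaling, and z-score normalization for MR; the specific window center/width values are assumptions, not the paper's prescribed settings:

import numpy as np

def preprocess_ct_slice(hu_image: np.ndarray,
                        window_center: float = 40.0,
                        window_width: float = 400.0) -> np.ndarray:
    # Clip a CT slice (in Hounsfield units) to an intensity window, then scale to [0, 1].
    lo = window_center - window_width / 2.0
    hi = window_center + window_width / 2.0
    clipped = np.clip(hu_image, lo, hi)
    return (clipped - lo) / (hi - lo)

def zscore_mr_image(mr_image: np.ndarray) -> np.ndarray:
    # Z-score normalization, commonly used for MR intensities, which have no absolute scale.
    return (mr_image - mr_image.mean()) / (mr_image.std() + 1e-8)

ct = np.random.uniform(-1000, 1000, size=(512, 512))
mr = np.random.uniform(0, 4000, size=(256, 256))
print(preprocess_ct_slice(ct).min(), preprocess_ct_slice(ct).max())
print(zscore_mr_image(mr).mean().round(3))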
ABSTRACT
BACKGROUND: Young patients with Cushing Syndrome (CS) may develop cognitive and behavioral alterations during the disease course. METHODS: To investigate the effects of CS on the brain, we analyzed consecutive MRI scans of patients with CS (n = 29) versus without CS (n = 8). Multiple brain compartments were processed for total and gray/white matter (GM/WM) volumes and intensities, and for cortical volume, thickness, and surface area. Dynamics (last/baseline scan ratio per parameter) were analyzed against cortisol levels and CS status (persistent, resolved, and non-CS). RESULTS: Twenty-four-hour urinary free cortisol (24hUFC) measurements correlated inversely with the intensity of subcortical GM structures and of the corpus callosum, and with cerebral WM intensity. 24hUFC dynamics correlated negatively with the volume dynamics of multiple cerebral and cerebellar structures. Patients with persistent CS had less of an increase in cortical thickness and WM intensity, and less of a decrease in WM volume, compared with patients with resolution of CS. Patients with resolution of their CS had less of an increase in subcortical GM and cerebral WM volumes, but a greater increase in cortical thickness of the frontal lobe, versus controls. CONCLUSION: Changes in WM/GM consistency, intensity, and homogeneity in patients with CS may correlate with the clinical consequences of CS better than volume dynamics alone.
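A minimal sketch of the dynamics computation and correlation analysis implied above, per-parameter last/baseline ratios correlated against 24hUFC; the toy numbers and the choice of Pearson correlation are illustrative assumptions:

import numpy as np
from scipy import stats

# Hypothetical per-patient measurements (one value per patient).
baseline_wm_volume = np.array([510.0, 498.0, 532.0, 480.0, 505.0])
last_wm_volume = np.array([495.0, 490.0, 520.0, 470.0, 500.0])
ufc_24h = np.array([180.0, 95.0, 260.0, 150.0, 120.0])   # 24-hour urinary free cortisol

# Dynamics: ratio of the last scan to the baseline scan for each parameter.
wm_volume_dynamics = last_wm_volume / baseline_wm_volume

r, p = stats.pearsonr(ufc_24h, wm_volume_dynamics)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")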
Subjects
Brain/diagnostic imaging , Cushing Syndrome/diagnostic imaging , Adolescent , Adult , Brain/growth & development , Brain/pathology , Case-Control Studies , Child , Child Development , Preschool Child , Cushing Syndrome/pathology , Cushing Syndrome/psychology , Cushing Syndrome/urine , Female , Gray Matter/diagnostic imaging , Gray Matter/growth & development , Gray Matter/pathology , Humans , Hydrocortisone/urine , Computer-Assisted Image Processing , Magnetic Resonance Imaging , Male , Neuroimaging , Organ Size , Retrospective Studies , White Matter/diagnostic imaging , White Matter/growth & development , White Matter/pathology , Young Adult
ABSTRACT
The success of surgical resection in epilepsy patients depends on preserving functionally critical brain regions while removing pathological tissue. As the gold standard, electro-cortical stimulation mapping (ESM) helps surgeons localize eloquent cortex through electrical stimulation of electrodes placed directly on the cortical surface. Because of the potential hazards of ESM, including an increased risk of provoked seizures, electrocorticography-based functional mapping (ECoG-FM) was introduced as a safer alternative. However, ECoG-FM has a low success rate compared with ESM. In this study, we address this critical limitation by developing a new deep learning-based algorithm for ECoG-FM, achieving an accuracy comparable to ESM in identifying eloquent language cortex. In our experiments with 11 epilepsy patients who underwent presurgical evaluation (through deep learning-based signal analysis on 637 electrodes), our proposed algorithm obtained an accuracy of 83.05% in identifying language regions, an improvement of roughly 23 percentage points over conventional ECoG-FM analysis (~60%). Our findings demonstrate, for the first time, that deep learning-powered ECoG-FM can serve as a stand-alone modality and avoid the likely hazards of ESM in epilepsy surgery, thereby reducing the potential for post-surgical morbidity affecting language function.
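A minimal sketch of a deep learning classifier of the general kind used for ECoG-FM signal analysis, a small 1-D convolutional network in PyTorch; the architecture, input length, and two-class output are illustrative assumptions, not the authors' model:

import torch
import torch.nn as nn

class ECoGClassifier(nn.Module):
    # Toy 1-D CNN: classify an electrode's signal window as language-eloquent or not.
    def __init__(self, n_samples: int = 1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), 2)

    def forward(self, x):                  # x: (batch, 1, n_samples)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = ECoGClassifier()
dummy = torch.randn(8, 1, 1024)            # a batch of 8 electrode signal windows
print(model(dummy).shape)                   # torch.Size([8, 2])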
ABSTRACT
Deep learning has driven revolutionary changes in the computing industry, and its effects in radiology and imaging sciences have begun to dramatically change screening paradigms. Specifically, these advances have influenced the development of computer-aided detection and diagnosis (CAD) systems. These technologies have long been thought of as "second-opinion" tools for radiologists and clinicians. However, with significant improvements in deep neural networks, the diagnostic capabilities of learning algorithms are approaching the level of human experts (radiologists, clinicians, etc.), shifting the CAD paradigm from a "second-opinion" tool to a more collaborative utility. This paper reviews recently developed CAD systems based on deep learning technologies for breast cancer diagnosis, explains their advantages over previously established systems, describes the methodologies behind these improvements, including algorithmic developments, and outlines remaining challenges in breast cancer screening and diagnosis. We also discuss possible future directions for new CAD models, which continue to change as artificial intelligence algorithms evolve.