ABSTRACT
BACKGROUND AND AIMS: NAFLD strongly associates with cardiovascular disease (CVD) risk factors; however, the association between NAFLD and incident CVD, CVD-related mortality, incident cancer, and all-cause mortality is unclear. APPROACH AND RESULTS: We included 10,040 participants from the Framingham Heart Study, the Coronary Artery Risk Development in Young Adults Study, and the Multi-Ethnic Study of Atherosclerosis to assess the longitudinal association between liver fat (defined on CT) and incident CVD, CVD-related mortality, incident cancer, and all-cause mortality. We fit multivariable-adjusted Cox regression models including age, sex, diabetes, systolic blood pressure, alcohol use, smoking, HDL, triglycerides, and body mass index as baseline or time-varying covariates. The average age was 51.3±3.3 years and 50.6% were women. Hepatic steatosis was associated with all-cause mortality after 12.7 years of mean follow-up when adjusting for baseline CVD risk factors, including body mass index (HR: 1.21, 1.04-1.40); however, the results were attenuated when utilizing time-varying covariates. The association between hepatic steatosis and incident CVD was not statistically significant after we accounted for body mass index in models considering baseline or time-varying covariates. We observed no association between hepatic steatosis and CVD-related mortality or incident cancer. CONCLUSIONS: In this large, multicohort study of participants with CT-defined hepatic steatosis, accounting for change in CVD risk factors over time attenuated associations between liver fat and overall mortality or incident CVD. Our work highlights the need to consider concurrent cardiometabolic disease when determining associations between NAFLD and CVD and mortality outcomes.
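The contrast above between baseline-covariate and time-varying Cox models can be illustrated with a minimal sketch using the lifelines Python package; the file and column names below are placeholders, not the cohorts' actual variables.

```python
# Sketch of the two modeling strategies: a baseline-covariate Cox model and
# a time-varying Cox model. Data files and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter, CoxTimeVaryingFitter

# Baseline model: one row per participant, covariates fixed at baseline.
df = pd.read_csv("pooled_cohorts.csv")
cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "died", "steatosis", "age", "sex", "diabetes",
        "sbp", "alcohol", "smoking", "hdl", "triglycerides", "bmi"]],
    duration_col="followup_years", event_col="died",
)
print(cph.summary.loc["steatosis", ["exp(coef)",
                                    "exp(coef) lower 95%",
                                    "exp(coef) upper 95%"]])

# Time-varying model: long format, one row per participant-exam interval,
# so risk factors are updated at each follow-up exam.
long_df = pd.read_csv("pooled_cohorts_long.csv")
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="pid", event_col="died",
        start_col="start", stop_col="stop")
ctv.print_summary()
```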
Subjects
Cardiovascular Diseases, Neoplasms, Non-alcoholic Fatty Liver Disease, Young Adult, Humans, Female, Middle Aged, Male, Non-alcoholic Fatty Liver Disease/complications, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/etiology, Risk Factors, Longitudinal Studies, Neoplasms/epidemiology, Incidence
ABSTRACT
Prenatal depressive symptoms are linked to negative child behavioral and cognitive outcomes and predict later psychopathology in adolescence. Prior work links prenatal depressive symptoms to child brain structure in regions such as the amygdala; however, the relationship between symptoms and the development of brain structure over time remains unclear. We measured maternal depressive symptoms during pregnancy and acquired longitudinal T1-weighted and diffusion imaging data in children (n = 111; 60 females) between 2.6 and 8 years of age. Controlling for postnatal symptoms, we used linear mixed-effects models to test relationships between prenatal depressive symptoms and age-related changes in (i) amygdala and hippocampal volume and (ii) structural properties of the limbic and default-mode networks using graph theory. Higher prenatal depressive symptoms in the second trimester were associated with more curvilinear trajectories of left amygdala volume change. Higher prenatal depressive symptoms in the third trimester were associated with slower age-related changes in limbic global efficiency and average node degree across childhood. Our work provides evidence that moderate symptoms of prenatal depression in a low sociodemographic risk sample are associated with structural brain development in regions and networks implicated in emotion processing.
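A minimal sketch of this kind of longitudinal model, assuming a statsmodels mixed model with a random intercept per child and hypothetical variable names (dep_t2, dep_postnatal, etc.); the quadratic age term allows the curvilinear volume trajectories described above.

```python
# Linear mixed-effects sketch: prenatal depressive symptoms x age terms,
# controlling for postnatal symptoms, with a random intercept per child.
import pandas as pd
import statsmodels.formula.api as smf

scans = pd.read_csv("longitudinal_scans.csv")  # one row per scan (placeholder)

model = smf.mixedlm(
    "left_amygdala_vol ~ (age + I(age**2)) * dep_t2 + dep_postnatal + sex",
    data=scans,
    groups=scans["subject"],        # repeated scans nested within children
)
result = model.fit()
print(result.summary())
```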
Subjects
Depression, Prenatal Exposure Delayed Effects, Female, Pregnancy, Adolescent, Child, Humans, Depression/diagnostic imaging, Default Mode Network/pathology, Magnetic Resonance Imaging/methods, Prenatal Exposure Delayed Effects/diagnostic imaging, Prenatal Exposure Delayed Effects/pathology, Brain/pathology
ABSTRACT
Background An artificial intelligence (AI) algorithm has been developed for fully automated body composition assessment on noncontrast low-dose chest CT (LDCT) scans from lung cancer screening, but the utility of these measurements in disease risk prediction models has not been assessed. Purpose To evaluate the added value of CT-based, AI-derived body composition measurements in risk prediction of lung cancer incidence, lung cancer death, cardiovascular disease (CVD) death, and all-cause mortality in the National Lung Screening Trial (NLST). Materials and Methods In this secondary analysis of the NLST, body composition measurements, including area and attenuation attributes of skeletal muscle and subcutaneous adipose tissue, were derived from baseline LDCT examinations by using a previously developed AI algorithm. The added value of these measurements was assessed with sex- and cause-specific Cox proportional hazards models with and without the AI-derived body composition measurements for predicting lung cancer incidence, lung cancer death, CVD death, and all-cause mortality. Models were adjusted for confounding variables including age; body mass index; quantitative emphysema; coronary artery calcification; history of diabetes, heart disease, hypertension, and stroke; and other PLCOm2012 lung cancer risk factors. Goodness-of-fit improvements were assessed with the likelihood ratio test. Results Among 20,768 included participants (median age, 61 years [IQR, 57-65 years]; 12,317 men), 865 were diagnosed with lung cancer and 4180 died during follow-up. Including the AI-derived body composition measurements improved risk prediction for lung cancer death (male participants: χ2 = 23.09, P < .001; female participants: χ2 = 15.04, P = .002), CVD death (male participants: χ2 = 69.94, P < .001; female participants: χ2 = 16.60, P < .001), and all-cause mortality (male participants: χ2 = 248.13, P < .001; female participants: χ2 = 94.54, P < .001), but not for lung cancer incidence (male participants: χ2 = 2.53, P = .11; female participants: χ2 = 1.73, P = .19). Conclusion Body composition measurements automatically derived from baseline low-dose CT examinations added predictive value for lung cancer death, CVD death, and all-cause mortality, but not for lung cancer incidence in the NLST. Clinical trial registration no. NCT00047385.
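The likelihood ratio test used here compares nested Cox models; a small sketch under the assumption that the two fitted models expose their log partial likelihoods (the numbers below are placeholders, not NLST results).

```python
# Likelihood ratio test for nested models: 2 * (LL_full - LL_reduced) is
# compared against a chi-square with df = number of added parameters.
from scipy.stats import chi2

def likelihood_ratio_test(ll_reduced: float, ll_full: float, added_df: int):
    stat = 2.0 * (ll_full - ll_reduced)
    return stat, chi2.sf(stat, df=added_df)

# e.g., base risk model vs. base model + 4 body composition measures
stat, p = likelihood_ratio_test(ll_reduced=-5231.7, ll_full=-5197.2, added_df=4)
print(f"chi2 = {stat:.2f}, p = {p:.3g}")  # placeholder numbers
```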
Subjects
Cardiovascular Diseases, Lung Neoplasms, Female, Male, Humans, Middle Aged, Early Detection of Cancer, Artificial Intelligence, Body Composition, Lung
ABSTRACT
PURPOSE OF REVIEW: To summarize the current role and state of artificial intelligence and machine learning in the diagnosis and management of melanoma. RECENT FINDINGS: Deep learning algorithms can identify melanoma from clinical, dermoscopic, and whole-slide pathology images with increasing accuracy. Efforts to provide more granular annotation to datasets and to identify new predictors are ongoing. There have been many incremental advances in both melanoma diagnostics and prognostic tools using artificial intelligence and machine learning. Higher-quality input data will further improve these models' capabilities.
Subjects
Melanoma, Skin Neoplasms, Humans, Artificial Intelligence, Skin Neoplasms/diagnosis, Skin Neoplasms/pathology, Dermoscopy/methods, Melanoma/diagnosis, Melanoma/pathology, Machine Learning, Prognosis
ABSTRACT
PURPOSE: Hepatic steatosis (fatty liver disease) affects 25% of the world's population and is especially common among people with HIV (PWH). Pharmacoepidemiologic studies to identify medications associated with steatosis have not been conducted because methods to evaluate liver fat within digitized images have not been developed. We determined the accuracy of a deep learning algorithm (automatic liver attenuation region-of-interest-based measurement [ALARM]) to identify steatosis within clinically obtained noncontrast abdominal CT images compared to manual radiologist review and evaluated its performance by HIV status. METHODS: We performed a cross-sectional study to evaluate the performance of ALARM within noncontrast abdominal CT images from a sample of patients with and without HIV in the US Veterans Health Administration. We evaluated the ability of ALARM to identify moderate-to-severe hepatic steatosis, defined by mean absolute liver attenuation <40 Hounsfield units (HU), compared to manual radiologist assessment. RESULTS: Among 120 patients (51 PWH) who underwent noncontrast abdominal CT, moderate-to-severe hepatic steatosis was identified in 15 (12.5%) persons via ALARM and 12 (10%) by radiologist assessment. Percent agreement between ALARM and radiologist assessment of absolute liver attenuation <40 HU was 95.8%. Sensitivity, specificity, positive predictive value, and negative predictive value of ALARM were 91.7% (95% CI, 51.5%-99.8%), 96.3% (95% CI, 90.8%-99.0%), 73.3% (95% CI, 44.9%-92.2%), and 99.0% (95% CI, 94.8%-100%), respectively. No differences in performance were observed by HIV status. CONCLUSIONS: ALARM demonstrated excellent accuracy for moderate-to-severe hepatic steatosis regardless of HIV status. Application of ALARM to radiographic repositories could facilitate real-world studies to evaluate medications associated with steatosis and assess differences by HIV status.
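The attenuation rule itself is simple to express; a sketch of the <40 HU decision, assuming a CT volume and a liver ROI mask in NIfTI files (the ROI-placement step that ALARM automates is not reproduced here).

```python
# Threshold rule from the study: mean liver attenuation < 40 HU on
# noncontrast CT flags moderate-to-severe steatosis. File names are
# placeholders; the liver ROI is assumed to come from the model.
import numpy as np
import nibabel as nib

STEATOSIS_THRESHOLD_HU = 40.0

ct = nib.load("abdomen_noncontrast.nii.gz").get_fdata()    # voxels in HU
liver_mask = nib.load("liver_roi.nii.gz").get_fdata() > 0

mean_hu = float(np.mean(ct[liver_mask]))
is_steatotic = mean_hu < STEATOSIS_THRESHOLD_HU
print(f"mean liver attenuation = {mean_hu:.1f} HU; "
      f"moderate-to-severe steatosis: {is_steatotic}")
```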
Subjects
Deep Learning, Fatty Liver, HIV Infections, Humans, Cross-Sectional Studies, Fatty Liver/diagnostic imaging, Fatty Liver/epidemiology, Tomography, X-Ray Computed/methods, HIV Infections/complications, HIV Infections/diagnostic imaging, Retrospective Studies
ABSTRACT
Since 2000, there have been more than 8000 publications on radiology artificial intelligence (AI). AI breakthroughs allow complex tasks to be automated and even performed beyond human capabilities. However, a lack of detail on methods and algorithm code undercuts the scientific value of these publications. Many science subfields have recently faced a reproducibility crisis, eroding trust in processes and results and contributing to the rise in retractions of scientific papers. For the same reasons, conducting research in deep learning (DL) also requires reproducibility. Although several valuable manuscript checklists for AI in medical imaging exist, they are not focused specifically on reproducibility. In this study, we conducted a systematic review of recently published papers in the field of DL to evaluate whether their methodology was described in enough detail to allow reproduction of their findings. We focused on the Journal of Digital Imaging (JDI), a specialized journal that publishes papers on AI and medical imaging. We used the keyword "Deep Learning" and collected the articles published between January 2020 and January 2022. We screened all the articles and included those that reported the development of a DL tool in medical imaging. We extracted the reported details about the dataset, data handling steps, data splitting, model details, and performance metrics of each included article. We found 148 articles; 80 were included after screening for articles that reported developing a DL model for medical image analysis. Five studies had made their code publicly available, and 35 studies had utilized publicly available datasets. We provide figures showing the ratio and absolute count of reported items across the included studies. According to our cross-sectional study, in JDI publications on DL in medical imaging, authors infrequently report the key elements of their study that would make it reproducible.
Subjects
Artificial Intelligence, Diagnostic Imaging, Humans, Cross-Sectional Studies, Reproducibility of Results, Algorithms
ABSTRACT
Deep neural networks (DNNs) are typically deployed on electronic computational units (e.g., CPUs and GPUs). Such a design can lead to a heavy computational burden, significant latency, and intensive power consumption, which are critical limitations in applications such as the Internet of Things (IoT), edge computing, and drones. Recent advances in optical computational units (e.g., metamaterials) have shed light on energy-free and light-speed neural networks. However, the digital design of the metamaterial neural network (MNN) is fundamentally constrained by physical factors, such as precision, noise, and bandwidth during fabrication. Moreover, the unique advantages of MNNs (e.g., light-speed computation) are not fully explored via standard 3×3 convolution kernels. In this paper, we propose a novel large-kernel metamaterial neural network (LMNN) that maximizes the digital capacity of the state-of-the-art (SOTA) MNN with model re-parametrization and network compression, while also considering the optical limitations explicitly. The new digital learning scheme maximizes the learning capacity of the MNN while modeling the physical restrictions of meta-optics. With the proposed LMNN, the computational cost of the convolutional front end can be offloaded into fabricated optical hardware. Experimental results on two publicly available datasets demonstrate that the optimized hybrid design improves classification accuracy while reducing computational latency. The development of the proposed LMNN is a promising step towards the ultimate goal of energy-free and light-speed AI.
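The model re-parametrization mentioned above can be illustrated generically: parallel convolution branches are algebraically fused into a single large kernel at deployment. A minimal PyTorch sketch of that trick (not the LMNN training scheme itself):

```python
# Fuse a parallel 3x3 branch into a 7x7 kernel: by linearity of convolution,
# conv7(x) + conv3(x) equals a single convolution whose kernel is the 7x7
# weights plus the 3x3 weights zero-padded to 7x7.
import torch
import torch.nn.functional as F

conv7 = torch.nn.Conv2d(8, 16, kernel_size=7, padding=3, bias=False)
conv3 = torch.nn.Conv2d(8, 16, kernel_size=3, padding=1, bias=False)

x = torch.randn(1, 8, 32, 32)
y_parallel = conv7(x) + conv3(x)

with torch.no_grad():
    fused = torch.nn.Conv2d(8, 16, kernel_size=7, padding=3, bias=False)
    fused.weight.copy_(conv7.weight + F.pad(conv3.weight, [2, 2, 2, 2]))

print(torch.allclose(y_parallel, fused(x), atol=1e-5))  # True
```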
ABSTRACT
Reading disorders are common in children and can impact academic success, mental health, and career prospects. Reading is supported by a network of interconnected left-hemisphere brain regions, including temporo-parietal, occipito-temporal, and inferior-frontal circuits. Poor readers often show hypoactivation and reduced gray matter volumes in this reading network, with hyperactivation and increased volumes in the posterior right hemisphere. We assessed gray matter development longitudinally in pre-reading children aged 2-5 years using magnetic resonance imaging (MRI) (N = 32, 110 MRI scans; mean age: 4.40 ± 0.77 years), half of whom had a family history of reading disorder. The family history group showed slower proportional growth (relative to total brain volume) in the left supramarginal and inferior frontal gyri, and faster proportional growth in the right angular, right fusiform, and bilateral lingual gyri. This suggests delayed development of left-hemisphere reading areas in children with a family history of dyslexia, along with faster growth in right-hemisphere homologues. This alternate developmental pattern may predispose the brain to later reading difficulties and may later manifest as the commonly noted compensatory mechanisms. The results of this study advance our understanding of structural brain alterations that may form the neurological basis of reading difficulties.
Subjects
Dyslexia, Gray Matter, Brain/pathology, Brain Mapping/methods, Child, Child, Preschool, Dyslexia/pathology, Humans, Magnetic Resonance Imaging/methods
ABSTRACT
Bisphenol A (BPA) is a synthetic chemical used in the manufacturing of plastics, epoxy resins, and many personal care products. This ubiquitous endocrine disruptor is detectable in the urine of over 80% of North Americans. Although adverse neurodevelopmental outcomes have been observed in children with high gestational exposure to BPA, the effects of prenatal BPA on brain structure remain unclear. Here, using magnetic resonance imaging (MRI), we studied the associations of maternal BPA exposure with children's brain structure, as well as the impact of comparable BPA levels in a mouse model. Our human data showed that most maternal BPA exposure effects on brain volumes were small, with the largest effects observed in the opercular region of the inferior frontal gyrus (ρ = -0.2754), superior occipital gyrus (ρ = -0.2556), and postcentral gyrus (ρ = 0.2384). In mice, gestational exposure to an equivalent level of BPA (2.25 µg BPA/kg bw/day) induced structural alterations in brain regions including the superior olivary complex (SOC) and bed nucleus of the stria terminalis (BNST), with larger effect sizes (1.07 ≤ Cohen's d ≤ 1.53). The human (n = 87) and rodent (n = 8 per group) sample sizes, while small, were considered adequate for the primary endpoint analysis. Combined, these human and mouse data suggest that gestational exposure to low levels of BPA may have some impact on the developing brain at the resolution of MRI.
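For reference, the two effect-size statistics reported above (Spearman's ρ for the human data, Cohen's d for the mouse data) can be computed as in this sketch; the arrays are random placeholders, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Human arm: rank correlation between exposure and regional volume (n = 87)
bpa_exposure = rng.random(87)       # placeholder for maternal urinary BPA
region_volume = rng.random(87)      # placeholder for a regional brain volume
rho, p = spearmanr(bpa_exposure, region_volume)
print(f"Spearman rho = {rho:.4f}, p = {p:.3g}")

# Mouse arm: standardized group difference with pooled SD (n = 8 per group)
def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

exposed, control = rng.random(8), rng.random(8)
print(f"Cohen's d = {cohens_d(exposed, control):.2f}")
```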
Subjects
Endocrine Disruptors, Prenatal Exposure Delayed Effects, Animals, Benzhydryl Compounds/toxicity, Benzhydryl Compounds/urine, Brain/diagnostic imaging, Child, Endocrine Disruptors/toxicity, Endocrine Disruptors/urine, Female, Humans, Mice, Phenols/toxicity, Phenols/urine, Pregnancy, Prenatal Exposure Delayed Effects/chemically induced
ABSTRACT
A robust medical image computing infrastructure must host massive multimodal archives, perform extensive analysis pipelines, and execute scalable job management. An emerging data format standard, the Brain Imaging Data Structure (BIDS), introduces complexities for interfacing with XNAT archives. Moreover, workflow integration is combinatorially problematic when matching a large amount of processing to large datasets. Historically, workflow engines have focused on refining the workflows themselves rather than on actual job generation. However, such an approach is incompatible with a data-centric architecture that hosts heterogeneous medical image computing. The Distributed Automation for XNAT (DAX) toolkit provides large-scale image storage and analysis pipelines with an optimized job management tool. Herein, we describe developments for DAX that allow integration of the XNAT and BIDS standards. We also improve DAX's efficiency in running diverse containerized workflows in a high-performance computing (HPC) environment. Briefly, we integrate YAML configuration processor scripts to abstract workflow data inputs, data outputs, commands, and job attributes. Finally, we propose an online database-driven mechanism for DAX to efficiently identify the most recently updated sessions, thereby improving job-building efficiency on large projects. We refer to the overall DAX development proposed in this work as DAX-1 (DAX version 1). To validate the effectiveness of the new features, we verified (1) the efficiency of converting XNAT data to BIDS format and the correctness of the conversion using a collection of BIDS-standard containerized neuroimaging workflows, (2) how YAML-based processors simplified configuration setup via a sequence of application pipelines, and (3) the productivity of DAX-1 in generating actual HPC processing jobs compared with the earlier DAX baseline method. The empirical results show that (1) DAX-1 converts XNAT data to BIDS at a speed similar to accessing XNAT data alone, (2) YAML integrates with DAX-1 with a shallow learning curve for users, and (3) DAX-1 reduces job/assessor generation latency by finding recently modified sessions. Herein, we present approaches for efficiently integrating XNAT and modern image formats with a scalable workflow engine for large-scale dataset access and processing.
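As a rough illustration of what a YAML-defined processor buys, the sketch below loads a processor spec and renders its command; the schema keys (procname, inputs, command, attrs) are invented for illustration and are not DAX's documented format.

```python
# Hypothetical YAML processor: inputs, outputs, command template, and job
# attributes live in data, so adding a pipeline needs no engine code change.
import yaml

PROCESSOR_YAML = """
procname: fmriqa_v1
inputs:
  scan_t1: {resource: NIFTI, type: scan}
outputs:
  - {path: qa_report.pdf, resource: PDF}
command: run_fmriqa.sh {scan_t1} {outdir}
attrs: {walltime: "08:00:00", memory: 8G}
"""

spec = yaml.safe_load(PROCESSOR_YAML)
cmd = spec["command"].format(scan_t1="/data/sub01_T1.nii.gz", outdir="/out")
print(cmd)   # run_fmriqa.sh /data/sub01_T1.nii.gz /out
```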
Subjects
Neuroimaging, Software, Humans, Brain, Neuroimaging/methods, Workflow
ABSTRACT
The field of artificial intelligence (AI) in medical imaging is undergoing explosive growth, and Radiology is a prime target for innovation. The American College of Radiology Data Science Institute has identified more than 240 specific use cases where AI could be used to improve clinical practice. In this context, thousands of potential methods are developed by research labs and industry innovators. Deploying AI tools within a clinical enterprise, even on limited retrospective evaluation, is complicated by security and privacy concerns. Thus, innovation must be weighed against the substantive resources required for local clinical evaluation. To reduce barriers to AI validation while maintaining rigorous security and privacy standards, we developed the AI Imaging Incubator. The AI Imaging Incubator serves as a DICOM storage destination within a clinical enterprise where images can be directed for novel research evaluation under Institutional Review Board approval. The AI Imaging Incubator is controlled by a secure HIPAA-compliant front end and provides access to a menu of AI procedures captured within network-isolated containers. Results are served via a secure website that supports research and clinical data formats. Deployment of new AI approaches within this system is streamlined through a standardized application programming interface. This manuscript presents case studies of the AI Imaging Incubator applied to randomizing lung biopsies on chest CT, liver fat assessment on abdomen CT, and brain volumetry on head MRI.
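A DICOM storage destination of the kind described can be sketched with pynetdicom; this is a generic storage SCP under assumed AE title and port, not the Incubator's actual implementation (which adds authentication, routing, and containerized inference).

```python
# Minimal DICOM storage SCP: accept C-STORE requests and write each
# received instance to disk, keyed by SOP Instance UID.
from pynetdicom import AE, evt, AllStoragePresentationContexts

def handle_store(event):
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
    return 0x0000  # success status

ae = AE(ae_title="AI_INCUBATOR")     # AE title and port are assumptions
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```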
Subjects
Artificial Intelligence, Radiology, Hospitals, Humans, Radiology/methods, Retrospective Studies, Workflow
ABSTRACT
Functional MRI signals can be heavily influenced by systemic physiological processes in addition to local neural activity. For example, widespread hemodynamic fluctuations across the brain have been found to correlate with natural, low-frequency variations in the depth and rate of breathing over time. Acquiring peripheral measures of respiration during fMRI scanning not only allows for modeling such effects in fMRI analysis, but also provides valuable information for interrogating brain-body physiology. However, physiological recordings are frequently unavailable or of insufficient quality. Here, we propose a computational technique for reconstructing continuous low-frequency respiration volume (RV) fluctuations from fMRI data alone. We evaluate the performance of this approach across different fMRI preprocessing strategies. Further, we demonstrate that the predicted RV signals can account for similar patterns of temporal variation in resting-state fMRI data compared to measured RV fluctuations. These findings indicate that fluctuations in respiration volume can be extracted from fMRI alone, in the common scenario of missing or corrupted respiration recordings. The results have implications for enriching a large volume of existing fMRI datasets through retrospective addition of respiratory variation information.
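One way to frame the reconstruction, sketched under simplifying assumptions: low-pass-filter regional fMRI time series, fit a regression against belt-measured RV where recordings exist, then predict RV where they are missing. This mirrors the general idea, not the paper's exact model.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import RidgeCV

TR = 2.0                                      # assumed repetition time (s)
b, a = butter(4, 0.15, btype="low", fs=1.0 / TR)

roi_ts = np.load("roi_timeseries.npy")        # time x ROIs (placeholder)
rv_measured = np.load("rv_reference.npy")     # from a respiratory belt

X = filtfilt(b, a, roi_ts, axis=0)            # keep the low-frequency band
model = RidgeCV().fit(X, rv_measured)

# Scan without a usable recording: predict the missing RV trace.
X_new = filtfilt(b, a, np.load("roi_timeseries_no_belt.npy"), axis=0)
rv_predicted = model.predict(X_new)
```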
Subjects
Brain/diagnostic imaging, Image Processing, Computer-Assisted/methods, Respiration, Artifacts, Functional Neuroimaging, Humans, Machine Learning, Magnetic Resonance Imaging
ABSTRACT
The explosive growth of artificial intelligence (AI) technologies, especially deep learning methods, has been translated at revolutionary speed into efforts in AI-assisted healthcare. New applications of AI to renal pathology have recently become available, driven by successful AI deployments in digital pathology. However, synergistic development of renal pathology and AI requires close interdisciplinary collaboration between computer scientists and renal pathologists. Computer scientists should understand that not every AI innovation is translatable to renal pathology, while renal pathologists should grasp the high-level principles of the relevant AI technologies. Herein, we provide an integrated review of current and possible future applications of AI-assisted renal pathology, incorporating perspectives from computer scientists and renal pathologists. First, the standard stages, from data collection to analysis, in full-stack AI-assisted renal pathology studies are reviewed. Second, representative renal pathology-optimized AI techniques are introduced. Last, we review current clinical AI applications, as well as promising future applications in light of recent advances in AI.
Subjects
Artificial Intelligence, Forecasting
ABSTRACT
Cross-scanner and cross-protocol variability of diffusion magnetic resonance imaging (dMRI) data are known to be major obstacles in multi-site clinical studies since they limit the ability to aggregate dMRI data and derived measures. Computational algorithms that harmonize the data and minimize such variability are critical to reliably combine datasets acquired from different scanners and/or protocols, thus improving the statistical power and sensitivity of multi-site studies. Different computational approaches have been proposed to harmonize diffusion MRI data or remove scanner-specific differences. To date, these methods have mostly been developed for or evaluated on single b-value diffusion MRI data. In this work, we present the evaluation results of 19 algorithms that are developed to harmonize the cross-scanner and cross-protocol variability of multi-shell diffusion MRI using a benchmark database. The proposed algorithms rely on various signal representation approaches and computational tools, such as rotational invariant spherical harmonics, deep neural networks, and hybrid biophysical and statistical approaches. The benchmark database consists of data acquired from the same subjects on two scanners with different maximum gradient strengths (80 and 300 mT/m) and with two protocols. We evaluated the performance of these algorithms for mapping multi-shell diffusion MRI data across scanners and across protocols using several state-of-the-art imaging measures. The results show that data harmonization algorithms can reduce the cross-scanner and cross-protocol variabilities to a similar level as scan-rescan variability using the same scanner and protocol. In particular, the LinearRISH algorithm based on adaptive linear mapping of rotational invariant spherical harmonics features yields the lowest variability for our data in predicting the fractional anisotropy (FA), mean diffusivity (MD), mean kurtosis (MK), and the rotationally invariant spherical harmonic (RISH) features. However, other algorithms, such as DIAMOND, SHResNet, DIQT, and CMResNet, show further improvement in harmonizing the return-to-origin probability (RTOP). The performance of different approaches provides useful guidelines on data harmonization in future multi-site studies.
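The RISH features behind LinearRISH-style harmonization are easy to state: per even SH order l, the feature is the energy of that band's coefficients, and harmonization scales one scanner's bands toward a reference. A numpy sketch, assuming SH fitting (e.g., with dipy) has already produced order-8 coefficients:

```python
import numpy as np

def rish_features(sh_coeffs, order=8):
    """Energy per even SH band: sum over m of c_lm^2 (rotation invariant)."""
    feats, start = [], 0
    for l in range(0, order + 1, 2):
        n = 2 * l + 1                       # number of m-terms in band l
        feats.append(np.sum(sh_coeffs[..., start:start + n] ** 2, axis=-1))
        start += n
    return np.stack(feats, axis=-1)         # (..., n_bands)

# 45 coefficients = even orders 0..8. Band-wise scale factors map scanner A
# toward scanner B; applying sqrt(E_B / E_A) rescales A's SH coefficients.
sh_a = np.random.randn(1000, 45)            # placeholder voxels x coeffs
sh_b = np.random.randn(1000, 45)
scale = np.sqrt(rish_features(sh_b).mean(axis=0) /
                rish_features(sh_a).mean(axis=0))
```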
Subjects
Algorithms, Brain/diagnostic imaging, Deep Learning, Diffusion Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods, Neuroimaging/methods, Adult, Diffusion Magnetic Resonance Imaging/instrumentation, Diffusion Magnetic Resonance Imaging/standards, Humans, Image Processing, Computer-Assisted/standards, Neuroimaging/instrumentation, Neuroimaging/standards, Regression Analysis
ABSTRACT
BACKGROUND: Fiber tracking with diffusion-weighted MRI has become an essential tool for estimating in vivo brain white matter architecture. Fiber tracking results are sensitive to the choice of processing method and tracking criteria. PURPOSE: To assess the variability of tractography algorithms, for which reproducibility in group studies is of critical importance. However, reproducibility does not assess the validity of the brain connections. Phantom studies provide concrete quantitative comparisons of methods relative to absolute ground truths, yet do not capture variability caused by in vivo physiological factors. The ISMRM 2017 TraCED challenge was created to fill this gap. STUDY TYPE: A systematic review of algorithms and tract reproducibility studies. SUBJECTS: A single healthy volunteer. FIELD STRENGTH/SEQUENCE: 3.0T, two different scanners by the same manufacturer. The multishell acquisition included b-values of 1000, 2000, and 3000 s/mm2 with 20, 45, and 64 diffusion gradient directions per shell, respectively. ASSESSMENT: Nine international groups submitted 46 tractography algorithm entries, each consisting of 16 tracts per scan. The algorithms were assessed using intraclass correlation (ICC) and the Dice similarity measure. STATISTICAL TESTS: Containment analysis was performed to assess whether the submitted algorithms were contained within tracts of larger-volume submissions; this also serves to detect spurious submissions. RESULTS: The top five submissions had high ICC and Dice >0.88. Reproducibility was high within the top five submissions when assessed across sessions or across scanners: 0.87-0.97. Containment analysis showed that the top five submissions were contained within larger-volume submissions. Of the 16 tracts assessed, 8 showed high, 4 moderate, and 4 low reproducibility. DATA CONCLUSION: The different methods clearly result in fundamentally different tract structures at the more conservative specificity choices. Data and challenge infrastructure remain available for continued analysis and provide a platform for comparison. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 1. J Magn Reson Imaging 2020;51:234-249.
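The Dice similarity measure used for assessment is worth making concrete; a sketch over binary tract masks (placeholder files, not challenge data):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|), ranging from 0 to 1."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

run1 = np.load("tract_scan1.npy")   # placeholder binary tract volumes
run2 = np.load("tract_scan2.npy")
print(f"Dice = {dice(run1, run2):.3f}")
```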
Subjects
Brain/anatomy & histology, Diffusion Tensor Imaging/methods, Diffusion Magnetic Resonance Imaging, Humans, Reference Values, Reproducibility of Results
ABSTRACT
With the rapid development of image acquisition and storage, multiple images per class are commonly available for computer vision tasks (e.g., face recognition, object detection, and medical imaging). Recently, recurrent neural networks (RNNs) have been widely integrated with convolutional neural networks (CNNs) to perform image classification on ordered (sequential) data. In this paper, by permuting multiple images into multiple dummy orders, we generalize the ordered (longitudinal) "RNN+CNN" design to a novel unordered fashion, called the Multi-path x-D Recurrent Neural Network (MxDRNN), for image classification. To the best of our knowledge, few (if any) existing studies have deployed the RNN framework on unordered intra-class images to improve classification performance. Specifically, multiple learning paths are introduced in the MxDRNN to extract discriminative features by permuting input dummy orders. Eight datasets from five different fields (MNIST, 3D-MNIST, CIFAR, VGGFace2, and lung screening computed tomography) are included to evaluate the performance of our method. The proposed MxDRNN improves the baseline performance by a large margin across the different application fields (e.g., accuracy from 46.40% to 76.54% on the VGGFace2 test pose set, AUC from 0.7418 to 0.8162 on the NLST lung dataset). Additionally, empirical experiments show the MxDRNN is more robust to category-irrelevant attributes (e.g., expression and pose in face images), which may introduce difficulties for image classification and algorithm generalizability. The code is publicly available.
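A toy PyTorch sketch of the multi-path idea: CNN features of intra-class images are fed to an RNN in several permuted dummy orders and the path outputs are averaged. Sizes and layers are placeholders, not the MxDRNN architecture.

```python
import itertools
import torch
import torch.nn as nn

class PermutedCnnRnn(nn.Module):
    def __init__(self, n_classes=10, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                 nn.Linear(16 * 16, feat_dim))
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, images):          # images: (batch, n_images, 1, H, W)
        b, s = images.shape[:2]
        feats = self.cnn(images.flatten(0, 1)).view(b, s, -1)
        logits = []
        for order in itertools.permutations(range(s)):  # dummy orders/paths
            _, h = self.rnn(feats[:, list(order)])
            logits.append(self.head(h[-1]))
        return torch.stack(logits).mean(0)              # average over paths

model = PermutedCnnRnn()
print(model(torch.randn(2, 3, 1, 28, 28)).shape)        # torch.Size([2, 10])
```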
ABSTRACT
White matter microstructure can be measured with diffusion tensor imaging (DTI). While increasing age is a predictor of white matter (WM) microstructural change, the roles of other possible modifiers, such as cardiovascular risk factors, APOE ε4 allele status, and biological sex, have not been clarified. We investigated 665 cognitively normal participants from the Baltimore Longitudinal Study of Aging (age 50-95, 56.7% female) with a total of 1384 DTI scans. WM microstructure was assessed by fractional anisotropy (FA) and mean diffusivity (MD). A vascular burden score was defined as the sum of five risk factors (hypertension, obesity, elevated cholesterol, diabetes, and smoking status). Linear mixed-effects models assessed the association of baseline vascular burden with baseline values and rates of change of FA and MD over a mean follow-up of 3.6 years, while controlling for age, race, and scanner type. We also compared DTI trajectories in APOE ε4 carriers vs. non-carriers and men vs. women. At baseline, higher vascular burden was associated with lower FA and higher MD in many WM structures, including association, commissural, and projection fibers. Higher baseline vascular burden was also associated with greater longitudinal decline in FA in the hippocampal part of the cingulum, the fornix (crus)/stria terminalis, and the splenium of the corpus callosum, and with greater increases in MD in the splenium of the corpus callosum. APOE ε4 carriers did not differ from non-carriers in baseline DTI metrics but had greater decline in FA in the genu and splenium of the corpus callosum. Men had higher FA and lower MD in multiple WM regions at baseline but showed greater increases in MD in the genu of the corpus callosum. Women showed greater decreases over time in FA in the gyrus part of the cingulum, compared to men. Our findings show that modifiable vascular risk factors (1) have a negative impact on white matter microstructure and (2) are associated with faster microstructural deterioration of temporal WM regions and the splenium of the corpus callosum in cognitively normal adults. Reducing vascular burden in aging could modify the rate of WM deterioration and could decrease age-related cognitive decline and impairment.
Subjects
Aging/pathology, Apolipoprotein E4, Corpus Callosum/pathology, Vascular Diseases, White Matter/pathology, Aged, Aged, 80 and over, Corpus Callosum/diagnostic imaging, Diffusion Tensor Imaging, Female, Humans, Longitudinal Studies, Male, Middle Aged, Risk Factors, Sex Factors, Vascular Diseases/epidemiology, White Matter/diagnostic imaging
ABSTRACT
Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, providing a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNNs) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D-based methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. 3D patch-based high-resolution methods typically yield superior performance among CNN approaches on detailed whole brain segmentation (>100 labels); however, their performance is still commonly inferior to state-of-the-art multi-atlas segmentation (MAS) methods due to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) limited manually traced whole brain volumes (typically fewer than 50) are available for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCNs) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks were used in the SLANT method, in which each network learned contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods, while reducing the computational time from >30 h to 15 min. The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
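The tiling-and-fusion idea can be sketched independently of the networks themselves: fixed, overlapping tile positions each get their own model, and overlapping probability maps are averaged before the argmax. Tile geometry and the dummy model below are placeholders.

```python
import numpy as np

def tile_origins(shape, tile, step):
    return [(x, y, z)
            for x in range(0, shape[0] - tile[0] + 1, step[0])
            for y in range(0, shape[1] - tile[1] + 1, step[1])
            for z in range(0, shape[2] - tile[2] + 1, step[2])]

def slant_fuse(volume, networks, tile, step, n_labels):
    """One network per fixed tile position; fuse by averaging probabilities."""
    probs = np.zeros((*volume.shape, n_labels))
    count = np.zeros(volume.shape)
    for net, (x, y, z) in zip(networks, tile_origins(volume.shape, tile, step)):
        sl = (slice(x, x + tile[0]), slice(y, y + tile[1]), slice(z, z + tile[2]))
        probs[sl] += net(volume[sl])
        count[sl] += 1
    return np.argmax(probs / np.maximum(count, 1)[..., None], axis=-1)

# Toy usage: uniform dummy "networks" on a small volume, non-overlapping tiles.
dummy = lambda patch: np.ones((*patch.shape, 10)) / 10
vol = np.zeros((48, 64, 44))
nets = [dummy] * len(tile_origins(vol.shape, (24, 32, 22), (24, 32, 22)))
print(slant_fuse(vol, nets, (24, 32, 22), (24, 32, 22), n_labels=10).shape)
```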
Subjects
Brain/anatomy & histology, Deep Learning, Image Processing, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Atlases as Topic, Humans, Magnetic Resonance Imaging/methods, Neuroimaging/methods
ABSTRACT
Recent studies have revealed that brain development is marked by morphological synchronization across brain regions. Regions with shared growth trajectories form structural covariance networks (SCNs) that not only map onto functionally identified cognitive systems, but also correlate with a range of cognitive abilities across the lifespan. Despite advances in within-network covariance examinations, few studies have examined lifetime patterns of structural relationships across known SCNs. In the current study, we used a big-data framework and a novel application of covariate-adjusted restricted cubic spline regression to identify volumetric network trajectories and covariance patterns across 13 networks (n = 5,019, ages = 7-90). Our findings revealed that typical development and aging are marked by significant shifts in the degree that networks preferentially coordinate with one another (i.e., modularity). Specifically, childhood showed higher modularity of networks compared to adolescence, reflecting a shift over development from segregation to desegregation of inter-network relationships. The shift from young to middle adulthood was marked by a significant decrease in inter-network modularity and organization, which continued into older adulthood, potentially reflecting changes in brain organizational efficiency with age. This study is the first to characterize brain development and aging in terms of inter-network structural covariance across the lifespan.
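The modularity notion here can be made concrete with a small graph sketch: treat the 13 networks as nodes weighted by inter-network covariance and compute Newman modularity over a partition into modules. The matrix and partition below are illustrative placeholders.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import modularity

rng = np.random.default_rng(0)
cov = np.abs(np.corrcoef(rng.standard_normal((13, 100))))  # 13x13 placeholder
np.fill_diagonal(cov, 0)                                   # no self-loops

G = nx.from_numpy_array(cov)                   # weighted covariance graph
partition = [{0, 1, 2, 3}, {4, 5, 6, 7, 8}, {9, 10, 11, 12}]  # illustrative
print(f"Q = {modularity(G, partition, weight='weight'):.3f}")
```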
Subjects
Aging/physiology, Cerebral Cortex/anatomy & histology, Cerebral Cortex/physiology, Human Development/physiology, Nerve Net/anatomy & histology, Nerve Net/physiology, Neuroimaging/methods, Adolescent, Adult, Aged, Aged, 80 and over, Big Data, Cerebral Cortex/diagnostic imaging, Cerebral Cortex/growth & development, Child, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Nerve Net/diagnostic imaging, Young Adult
ABSTRACT
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute of Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT (DAX) pipeline automation tool, and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX that isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.