1.
Psychol Med ; 54(3): 495-506, 2024 Feb.
Article En | MEDLINE | ID: mdl-37485692

BACKGROUND: Electroconvulsive therapy (ECT) is the most effective intervention for patients with treatment-resistant depression. A clinical decision support tool could guide patient selection to improve the overall response rate and avoid ineffective treatments with adverse effects. Initial small-scale, monocenter studies indicate that both structural magnetic resonance imaging (sMRI) and functional MRI (fMRI) biomarkers may predict ECT outcome, but it is not known whether those results can generalize to data from other centers. The objective of this study was to develop and validate neuroimaging biomarkers for ECT outcome in a multicenter setting. METHODS: Multimodal data (i.e. clinical, sMRI and resting-state fMRI) were collected from seven centers of the Global ECT-MRI Research Collaboration (GEMRIC). We used data from 189 depressed patients to evaluate which data modalities or combinations thereof could provide the best predictions for treatment remission (HAM-D score ≤7) using a support vector machine classifier. RESULTS: Remission classification using a combination of gray matter volume and functional connectivity led to well-performing models with an average area under the curve (AUC) of 0.82-0.83 when trained and tested on samples coming from the three largest centers (N = 109), and performance remained acceptable when validated using leave-one-site-out cross-validation (0.70-0.73 AUC). CONCLUSIONS: These results show that multimodal neuroimaging data can be used to predict remission with ECT for individual patients across different treatment centers, despite significant variability in clinical characteristics across centers. Future development of a clinical decision support tool applying these biomarkers may be feasible.
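The classification setup described above can be sketched with scikit-learn. This is a minimal illustration using synthetic stand-in features (random numbers in place of gray matter volumes and functional connectivity edges, and three hypothetical centers), not the GEMRIC data or the authors' actual pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows are patients, columns are imaging features
# (e.g., regional gray matter volumes concatenated with functional
# connectivity edges); `site` labels the contributing center.
n_patients, n_features = 120, 50
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)      # 1 = remission (HAM-D <= 7)
site = rng.integers(0, 3, size=n_patients)   # three hypothetical centers

# Leave-one-site-out cross-validation: each center is held out in turn,
# mimicking validation on data from an unseen treatment center.
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, groups=site, cv=LeaveOneGroupOut())
print(scores)   # one score per held-out site
```

With random features the scores hover near chance; the point is the validation geometry, in which no patient from the held-out center ever appears in training.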


Depressive Disorder, Major; Electroconvulsive Therapy; Humans; Electroconvulsive Therapy/methods; Depressive Disorder, Major/diagnostic imaging; Depressive Disorder, Major/therapy; Depressive Disorder, Major/pathology; Depression; Neuroimaging; Magnetic Resonance Imaging/methods; Biomarkers; Machine Learning; Treatment Outcome
2.
PLoS One ; 18(5): e0285703, 2023.
Article En | MEDLINE | ID: mdl-37195925

Sleep is an important indicator of a person's health, and its accurate and cost-effective quantification is of great value in healthcare. The gold standard for sleep assessment and the clinical diagnosis of sleep disorders is polysomnography (PSG). However, PSG requires an overnight clinic visit and trained technicians to score the obtained multimodality data. Wrist-worn consumer devices, such as smartwatches, are a promising alternative to PSG because of their small form factor, continuous monitoring capability, and popularity. Unlike PSG, however, wearables-derived data are noisier and far less information-rich, owing to the smaller number of modalities and the less accurate measurements possible in a small form factor. Given these challenges, most consumer devices perform two-stage (i.e., sleep-wake) classification, which is inadequate for deep insights into a person's sleep health. The challenging multi-class (three-, four-, or five-class) staging of sleep using data from wrist-worn wearables remains unresolved. The difference in data quality between consumer-grade wearables and lab-grade clinical equipment is the motivation behind this study. In this paper, we present an artificial intelligence (AI) technique termed sequence-to-sequence LSTM for automated mobile sleep staging (SLAMSS), which can perform three-class (wake, NREM, REM) and four-class (wake, light, deep, REM) sleep classification from activity (i.e., wrist-accelerometry-derived locomotion) and two coarse heart rate measures, both of which can be reliably obtained from a consumer-grade wrist-wearable device. Our method relies on raw time-series datasets and obviates the need for manual feature selection. We validated our model using actigraphy and coarse heart rate data from two independent study populations: the Multi-Ethnic Study of Atherosclerosis (MESA; N = 808) cohort and the Osteoporotic Fractures in Men (MrOS; N = 817) cohort.
SLAMSS achieves an overall accuracy of 79%, weighted F1 score of 0.80, 77% sensitivity, and 89% specificity for three-class sleep staging and an overall accuracy of 70-72%, weighted F1 score of 0.72-0.73, 64-66% sensitivity, and 89-90% specificity for four-class sleep staging in the MESA cohort. It yielded an overall accuracy of 77%, weighted F1 score of 0.77, 74% sensitivity, and 88% specificity for three-class sleep staging and an overall accuracy of 68-69%, weighted F1 score of 0.68-0.69, 60-63% sensitivity, and 88-89% specificity for four-class sleep staging in the MrOS cohort. These results were achieved with feature-poor inputs at a low temporal resolution. In addition, we extended our three-class staging model to an unrelated Apple Watch dataset. Importantly, SLAMSS predicts the duration of each sleep stage with high accuracy. This is especially significant for four-class sleep staging, where deep sleep is severely underrepresented. We show that, by appropriately choosing the loss function to address the inherent class imbalance, our method can accurately estimate deep sleep time (SLAMSS/MESA: 0.61±0.69 hours, PSG/MESA ground truth: 0.60±0.60 hours; SLAMSS/MrOS: 0.53±0.66 hours, PSG/MrOS ground truth: 0.55±0.57 hours). Deep sleep quality and quantity are vital metrics and early indicators for a number of diseases. Our method, which enables accurate deep sleep estimation from wearables-derived data, is therefore promising for a variety of clinical applications requiring long-term deep sleep monitoring.
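The class-imbalance remedy the abstract highlights — choosing a loss that does not let the network ignore the rare deep-sleep class — can be illustrated with a per-class weighted cross-entropy. This is a generic sketch; the probabilities and weights below are hypothetical, not the SLAMSS values:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    # Weighted cross-entropy: errors on rare classes (e.g., deep sleep)
    # are penalized more heavily, so the model cannot ignore them.
    eps = 1e-12
    w = class_weights[labels]                            # weight of each sample's true class
    ll = np.log(probs[np.arange(len(labels)), labels] + eps)
    return -np.sum(w * ll) / np.sum(w)

# Hypothetical 4-class batch (wake, light, deep, REM); the third sample is
# deep sleep, up-weighted by inverse-frequency-style class weights.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.1, 0.6, 0.2, 0.1],
                  [0.3, 0.3, 0.2, 0.2]])
labels = np.array([0, 1, 2])
weights = np.array([1.0, 1.0, 4.0, 1.0])
loss = weighted_cross_entropy(probs, labels, weights)
print(round(float(loss), 3))   # 1.218
```

The low-confidence deep-sleep prediction dominates the loss because of its 4x weight; with uniform weights the same batch would score about 0.83.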


Actigraphy; Artificial Intelligence; Male; Humans; Heart Rate/physiology; Sleep/physiology; Sleep Stages/physiology; Time Factors; Reproducibility of Results
3.
Neuroimage ; 237: 118126, 2021 08 15.
Article En | MEDLINE | ID: mdl-33957234

Tau neurofibrillary tangles, a pathophysiological hallmark of Alzheimer's disease (AD), exhibit a stereotypical spatiotemporal trajectory that is strongly correlated with disease progression and cognitive decline. Personalized prediction of tau progression is, therefore, vital for the early diagnosis and prognosis of AD. Evidence from both animal and human studies suggests that tau is transmitted along the brain's preexisting neural connectivity conduits. We present here an analytic graph diffusion framework for individualized predictive modeling of tau progression along the structural connectome. To account for physiological processes that lead to active generation and clearance of tau alongside passive diffusion, our model uses an inhomogeneous graph diffusion equation with a source term and provides closed-form solutions to this equation for linear and exponential source functionals. Longitudinal imaging data from two cohorts, the Harvard Aging Brain Study (HABS) and the Alzheimer's Disease Neuroimaging Initiative (ADNI), were used to validate the model. The clinical data used for developing and validating the model include regional tau measures extracted from longitudinal positron emission tomography (PET) scans based on the 18F-Flortaucipir radiotracer and individual structural connectivity maps computed from diffusion tensor imaging (DTI) by means of tractography and streamline counting. Two-timepoint tau PET scans were used to assess the goodness of model fit. Three-timepoint tau PET scans were used to assess predictive accuracy via comparison of predicted and observed tau measures at the third timepoint. Our results show high consistency between predicted and observed tau and differential tau from region-based analysis.
While the prognostic value of this approach needs to be validated in a larger cohort, our preliminary results suggest that our longitudinal predictive model, which offers an in vivo macroscopic perspective on tau progression in the brain, is potentially promising as a personalizable predictive framework for AD.
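The model's core equation — diffusion of tau along the connectome plus a source term for local generation — can be sketched numerically on a toy graph. All values here are hypothetical, and where the paper derives closed-form solutions for linear and exponential sources, this sketch uses explicit Euler steps purely for intuition:

```python
import numpy as np
from scipy.linalg import expm

# Toy 3-region connectome (symmetric streamline counts; values hypothetical).
A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian

beta = 0.1                          # diffusivity (fitted per subject in the paper)
s = np.array([0.05, 0.0, 0.0])      # constant source: local tau generation
x0 = np.array([1.0, 0.0, 0.0])      # baseline regional tau burden

# dx/dt = -beta * L x + s, integrated with small explicit-Euler steps.
dt, T = 0.01, 5.0
x = x0.copy()
for _ in range(int(T / dt)):
    x = x + dt * (-beta * (L @ x) + s)

# Without a source, diffusion only redistributes tau: the kernel
# expm(-beta * L * T) preserves the total amount.
x_diffuse = expm(-beta * L * T) @ x0
print(round(float(x_diffuse.sum()), 6))   # 1.0  (mass conserved)
print(round(float(x.sum()), 4))           # 1.25 (source added 0.05 * 5.0)
```

The contrast between the two totals is the point of the inhomogeneous formulation: pure diffusion conserves tau mass, so net buildup over time can only come from the source term.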


Alzheimer Disease; Diffusion Tensor Imaging; Disease Progression; Models, Neurological; Nerve Net; Positron-Emission Tomography; tau Proteins/metabolism; Aged; Aged, 80 and over; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/metabolism; Alzheimer Disease/pathology; Datasets as Topic; Female; Humans; Longitudinal Studies; Male; Nerve Net/diagnostic imaging; Nerve Net/metabolism; Nerve Net/pathology; Prognosis
4.
Med Image Comput Comput Assist Interv ; 12267: 418-427, 2020 Oct.
Article En | MEDLINE | ID: mdl-33263115

Tau tangles are a pathophysiological hallmark of Alzheimer's disease (AD) and exhibit a stereotypical pattern of spatiotemporal spread which has strong links to disease progression and cognitive decline. Preclinical evidence suggests that tau spread depends on neuronal connectivity rather than physical proximity between different brain regions. Here, we present a novel physics-informed geometric learning model for predicting tau buildup and spread that learns patterns directly from longitudinal tau imaging data while receiving guidance from governing physical principles. Implemented as a graph neural network with physics-based regularization in latent space, the model enables effective training with smaller data sizes. For training and validation of the model, we used longitudinal tau measures from positron emission tomography (PET) and structural connectivity graphs from diffusion tensor imaging (DTI) from the Harvard Aging Brain Study. The model led to higher peak signal-to-noise ratio and lower mean squared error levels than both an unregularized graph neural network and a differential equation solver. The method was validated using both two-timepoint and three-timepoint tau PET measures. The effectiveness of the approach was further confirmed by a cross-validation study.
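The idea of "guidance from governing physical principles" can be illustrated as a loss with two terms: a data-fit term on the prediction and a penalty on how far the latent trajectory deviates from the diffusion physics. This is a minimal sketch with hypothetical names and a toy graph, not the paper's actual objective:

```python
import numpy as np

def physics_informed_loss(z0, z1, dt, L, beta, pred, target, lam=1.0):
    # Data-fit term on predicted tau plus a penalty on how far the latent
    # trajectory deviates from the diffusion physics dz/dt = -beta * L z.
    data_term = np.mean((pred - target) ** 2)
    residual = (z1 - z0) / dt + beta * (L @ z0)   # finite-difference residual
    physics_term = np.mean(residual ** 2)
    return data_term + lam * physics_term

# Toy two-region graph; the latent state is made to follow the physics
# exactly, so both terms (and hence the loss) are ~0.
A = np.array([[0., 1.],
              [1., 0.]])
L = np.diag(A.sum(axis=1)) - A
z0 = np.array([1.0, 0.0])
beta, dt = 0.2, 0.1
z1 = z0 + dt * (-beta * (L @ z0))
loss = physics_informed_loss(z0, z1, dt, L, beta, pred=z1, target=z1)
print(loss < 1e-12)   # True
```

During training, the physics term acts as a regularizer on latent states rather than a hard constraint, which is what allows effective learning from small longitudinal datasets.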

5.
IEEE Trans Comput Imaging ; 6: 518-528, 2020.
Article En | MEDLINE | ID: mdl-32055649

Positron emission tomography (PET) suffers from severe resolution limitations which reduce its quantitative accuracy. In this paper, we present a super-resolution (SR) imaging technique for PET based on convolutional neural networks (CNNs). To facilitate the resolution recovery process, we incorporate high-resolution (HR) anatomical information based on magnetic resonance (MR) imaging. We introduce the spatial location information of the input image patches as additional CNN inputs to accommodate the spatially-variant nature of the blur kernels in PET. We compared the performance of shallow (3-layer) and very deep (20-layer) CNNs with various combinations of the following inputs: low-resolution (LR) PET, radial locations, axial locations, and HR MR. To validate the CNN architectures, we performed both realistic simulation studies using the BrainWeb digital phantom and clinical studies using neuroimaging datasets. For both simulation and clinical studies, the LR PET images were based on the Siemens HR+ scanner. Two different scenarios were examined in simulation: one where the target HR image is the ground-truth phantom image and another where the target HR image is based on the Siemens HRRT scanner - a high-resolution dedicated brain PET scanner. The latter scenario was also examined using clinical neuroimaging datasets. A number of factors affected relative performance of the different CNN designs examined, including network depth, target image quality, and the resemblance between the target and anatomical images. In general, however, all deep CNNs outperformed classical penalized deconvolution and partial volume correction techniques by large margins both qualitatively (e.g., edge and contrast recovery) and quantitatively (as indicated by three metrics: peak signal-to-noise-ratio, structural similarity index, and contrast-to-noise ratio).
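The input construction described above — low-resolution PET patches augmented with spatial-location channels and high-resolution MR — can be sketched as channel stacking. Patch size, coordinate normalization, and values here are hypothetical illustrations, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
patch = 17                                  # hypothetical patch edge length
lr_pet = rng.random((patch, patch))         # low-resolution PET patch
hr_mr = rng.random((patch, patch))          # co-registered high-resolution MR patch

# Constant-valued coordinate channels: normalized radial distance from the
# scanner axis and axial position of the patch center. Feeding these in lets
# the CNN adapt to the spatially-variant PET blur kernel.
radial = np.full((patch, patch), 0.4)
axial = np.full((patch, patch), 0.7)

x = np.stack([lr_pet, radial, axial, hr_mr], axis=0)   # channels-first input
print(x.shape)   # (4, 17, 17)
```

A standard single-image CNN would see only the first channel; the comparison in the paper amounts to training networks on different subsets of these four channels.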

6.
Neural Netw ; 125: 83-91, 2020 May.
Article En | MEDLINE | ID: mdl-32078963

The intrinsically low spatial resolution of positron emission tomography (PET) leads to image quality degradation and inaccurate image-based quantitation. Recently developed supervised super-resolution (SR) approaches are of great relevance to PET but require paired low- and high-resolution images for training, which are usually unavailable for clinical datasets. In this paper, we present a self-supervised SR (SSSR) technique for PET based on dual generative adversarial networks (GANs), which precludes the need for paired training data, ensuring wider applicability and adoptability. The SSSR network receives as inputs a low-resolution PET image, a high-resolution anatomical magnetic resonance (MR) image, spatial information (axial and radial coordinates), and a high-dimensional feature set extracted from an auxiliary CNN which is separately-trained in a supervised manner using paired simulation datasets. The network is trained using a loss function which includes two adversarial loss terms, a cycle consistency term, and a total variation penalty on the SR image. We validate the SSSR technique using a clinical neuroimaging dataset. We demonstrate that SSSR is promising in terms of image quality, peak signal-to-noise ratio, structural similarity index, contrast-to-noise ratio, and an additional no-reference metric developed specifically for SR image quality assessment. Comparisons with other SSSR variants suggest that its high performance is largely attributable to simulation guidance.
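Of the four loss terms named above, the total variation penalty is the simplest to make concrete. The sketch below shows an anisotropic TV penalty (sum of absolute finite differences); the SSSR loss would combine it with the two adversarial terms and the cycle-consistency term, with weights not given here:

```python
import numpy as np

def total_variation(img):
    # Sum of absolute finite differences along both axes; in an SR
    # objective this discourages noise amplification in the output.
    dy = np.abs(np.diff(img, axis=0)).sum()
    dx = np.abs(np.diff(img, axis=1)).sum()
    return dx + dy

# A flat image has zero TV; introducing a single vertical edge raises it
# by the edge height times its length.
flat = np.ones((8, 8))
edged = flat.copy()
edged[:, 4:] = 2.0
print(total_variation(flat), total_variation(edged))   # 0.0 8.0
```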


Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Positron-Emission Tomography/methods; Magnetic Resonance Imaging/methods; Signal-To-Noise Ratio
7.
IEEE Trans Comput Imaging ; 5(4): 530-539, 2019 Dec.
Article En | MEDLINE | ID: mdl-31723575

The intrinsically limited spatial resolution of PET confounds image quantitation. This paper presents an image deblurring and super-resolution framework for PET using anatomical guidance provided by high-resolution MR images. The framework relies on image-domain post-processing of already-reconstructed PET images by means of spatially-variant deconvolution stabilized by an MR-based joint entropy penalty function. The method is validated through simulation studies based on the BrainWeb digital phantom, experimental studies based on the Hoffman phantom, and clinical neuroimaging studies pertaining to aging and Alzheimer's disease. The developed technique was compared with direct deconvolution and deconvolution stabilized by a quadratic difference penalty, a total variation penalty, and a Bowsher penalty. The BrainWeb simulation study showed improved image quality and quantitative accuracy measured by contrast-to-noise ratio, structural similarity index, root-mean-square error, and peak signal-to-noise ratio generated by this technique. The Hoffman phantom study indicated noticeable improvement in the structural similarity index (relative to the MR image) and gray-to-white contrast-to-noise ratio. Finally, clinical amyloid and tau imaging studies for Alzheimer's disease showed lowering of the coefficient of variation in several key brain regions associated with two target pathologies.
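The joint entropy that stabilizes the deconvolution above can be estimated from the 2-D intensity histogram of the PET and MR images. This is a generic sketch of the quantity itself (bin count and images hypothetical), not the paper's penalty implementation:

```python
import numpy as np

def joint_entropy(a, b, bins=16):
    # Joint entropy from the 2-D intensity histogram of two images.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
noise = rng.random((32, 32))

# An image paired with itself concentrates the joint histogram on the
# diagonal, giving low joint entropy; pairing it with independent noise
# spreads the histogram and raises the entropy.
print(joint_entropy(img, img) < joint_entropy(img, noise))   # True
```

Penalizing PET-MR joint entropy therefore favors deconvolved PET images whose intensity structure co-varies cleanly with the anatomy, without forcing the two modalities to be identical.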

8.
Proc IEEE Int Symp Biomed Imaging ; 2019: 414-417, 2019 Apr.
Article En | MEDLINE | ID: mdl-31327984

Graph convolutional neural networks (GCNNs) aim to extend the data representation and classification capabilities of convolutional neural networks, which are highly effective for signals defined on regular Euclidean domains, e.g. image and audio signals, to irregular, graph-structured data defined on non-Euclidean domains. Graph-theoretic tools that enable us to study the brain as a complex system are of great significance in brain connectivity studies. Particularly, in the context of Alzheimer's disease (AD), a neurodegenerative disorder associated with network dysfunction, graph-based tools are vital for disease classification and staging. Here, we implement and test a multi-class GCNN classifier for network-based classification of subjects on the AD spectrum into four categories: cognitively normal (CN), early mild cognitive impairment, late mild cognitive impairment, and AD. We train and validate the network using structural connectivity graphs obtained from diffusion tensor imaging data. Using receiver operating characteristic curves, we show that the GCNN classifier outperforms a support vector machine classifier by margins that depend on the disease category. Our findings indicate that the performance gap between the two methods increases with disease progression from CN to AD. We thus demonstrate that GCNN is a competitive tool for staging and classification of subjects on the AD spectrum.
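A single graph-convolution layer over a structural connectivity graph can be sketched as below. This uses the common renormalized propagation rule of Kipf and Welling as an illustration; the abstract does not specify which GCNN variant was used, and the toy graph, feature, and weight values are hypothetical:

```python
import numpy as np

def gcn_layer(A, X, W):
    # Renormalized graph convolution (Kipf & Welling style):
    # H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],     # toy 3-node structural connectivity graph
              [1., 0., 1.],
              [0., 1., 0.]])
X = rng.random((3, 2))          # 2 node features per brain region
W = rng.random((2, 4))          # learned weights, 4 hidden units
H = gcn_layer(A, X, W)
print(H.shape)   # (3, 4)
```

Each layer mixes a node's features with those of its graph neighbors, so stacking layers lets the classifier exploit connectivity patterns that a feature-vector SVM cannot see.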

9.
Inf Process Med Imaging ; 11492: 384-393, 2019.
Article En | MEDLINE | ID: mdl-31156312

Tau tangles are a pathological hallmark of Alzheimer's disease (AD) with strong correlations existing between tau aggregation and cognitive decline. Studies in mouse models have shown that the characteristic patterns of tau spatial spread associated with AD progression are determined by neural connectivity rather than physical proximity between different brain regions. We present here a network diffusion model for tau aggregation based on longitudinal tau measures from positron emission tomography (PET) and structural connectivity graphs from diffusion tensor imaging (DTI). White matter fiber bundles reconstructed via tractography from the DTI data were used to compute normalized graph Laplacians which served as graph diffusion kernels for tau spread. By linearizing this model and using sparse source localization, we were able to identify distinct patterns of propagative and generative buildup of tau at a population level. A gradient descent approach was used to solve the sparsity-constrained optimization problem. Model fitting was performed on subjects from the Harvard Aging Brain Study cohort. The fitted model parameters include a scalar factor controlling the network-based tau spread and a network-independent seed vector representing seeding in different regions-of-interest. This parametric model was validated on an independent group of subjects from the same cohort. We were able to predict with reasonably high accuracy the tau buildup at a future time-point. The network diffusion model, therefore, successfully identifies two distinct mechanisms for tau buildup in the aging brain and offers a macroscopic perspective on tau spread.
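The construction of a normalized graph Laplacian from streamline counts, and its use as a diffusion kernel, can be sketched on a toy connectome (all values hypothetical; the paper's normalization and model time scale are not given here):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical symmetric streamline-count matrix from tractography.
C = np.array([[0., 40., 5.],
              [40., 0., 20.],
              [5., 20., 0.]])

# Symmetric normalized graph Laplacian: L = I - D^{-1/2} C D^{-1/2}.
d = C.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_norm = np.eye(len(C)) - D_inv_sqrt @ C @ D_inv_sqrt

# Graph diffusion kernel at model time t, applied to a hypothetical
# single-region tau seed: the propagative component of tau buildup.
t = 0.5
K = expm(-t * L_norm)
tau0 = np.array([1.0, 0.0, 0.0])
tau_spread = K @ tau0
print(np.argmax(tau_spread))   # 0: the seed region retains the most tau
```

Tau leaks preferentially to the strongly connected region rather than the weakly connected one, which is the connectivity-over-proximity behavior the mouse-model studies support.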

10.
J Med Imaging (Bellingham) ; 6(2): 024004, 2019 Apr.
Article En | MEDLINE | ID: mdl-31065568

Positron emission tomography (PET) imaging of the lungs is confounded by respiratory motion-induced blurring artifacts that degrade quantitative accuracy. Gating and motion-compensated image reconstruction are frequently used to correct these motion artifacts in PET. In the absence of voxel-by-voxel deformation measures, surrogate signals from external markers are used to track internal motion and generate gated PET images. The objective of our work is to develop a group-level parcellation framework for the lungs to guide the placement of markers depending on the location of the internal target region. We present a data-driven framework based on higher-order singular value decomposition (HOSVD) of deformation tensors that enables identification of synchronous areas inside the torso and on the skin surface. Four-dimensional (4-D) magnetic resonance (MR) imaging based on a specialized radial pulse sequence with a one-dimensional slice-projection navigator was used for motion capture under free-breathing conditions. The deformation tensors were computed by nonrigidly registering the gated MR images. Group-level motion signatures obtained via HOSVD were used to cluster the voxels both inside the volume and on the surface. To characterize the parcellation result, we computed correlation measures across the different regions of interest (ROIs). To assess the robustness of the parcellation technique, leave-one-out cross-validation was performed over the subject cohort, and the dependence of the result on varying numbers of gates and singular value thresholds was examined. Overall, the parcellation results were largely consistent across these test cases with Jaccard indices reflecting high degrees of overlap. Finally, a PET simulation study was performed which showed that, depending on the location of the lesion, the selection of a synchronous ROI may lead to noticeable gains in the recovery coefficient. 
Accurate quantitative interpretation of PET images is important for lung cancer management. Therefore, a guided motion monitoring approach is of utmost importance in the context of pulmonary PET imaging.
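The HOSVD at the heart of the parcellation framework can be sketched with mode-unfolding SVDs. The tensor layout below (voxels x gates x subjects) and its size are hypothetical illustrations of the idea, not the study's actual deformation-tensor arrangement:

```python
import numpy as np

def hosvd(T):
    # Higher-order SVD: an orthonormal basis per mode from the SVD of each
    # mode unfolding, plus a core tensor contracting T with the basis
    # transposes. Group-level motion signatures would correspond to leading
    # basis vectors of the relevant modes.
    factors = []
    for mode in range(T.ndim):
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U)
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Hypothetical deformation data arranged as voxels x gates x subjects.
rng = np.random.default_rng(0)
T = rng.random((6, 4, 3))
core, factors = hosvd(T)

# Multiplying the core back through the factors reconstructs the tensor.
R = core
for mode, U in enumerate(factors):
    R = np.moveaxis(np.tensordot(U, np.moveaxis(R, mode, 0), axes=1), 0, mode)
print(np.allclose(R, T))   # True
```

Thresholding the singular values (as in the robustness analysis above) amounts to keeping only the leading columns of each factor, which yields the group-level motion signatures used for clustering voxels into synchronous regions.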

...