1.
Alzheimers Res Ther; 16(1): 61, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38504336

ABSTRACT

BACKGROUND: Predicting future Alzheimer's disease (AD)-related cognitive decline among individuals with subjective cognitive decline (SCD) or mild cognitive impairment (MCI) is an important task for healthcare. Structural brain imaging as measured by magnetic resonance imaging (MRI) could potentially contribute when making such predictions. It is unclear if the predictive performance of MRI can be improved using entire brain images in deep learning (DL) models compared to using pre-defined brain regions. METHODS: A cohort of 332 individuals with SCD/MCI was included from the Swedish BioFINDER-1 study. The goal was to predict longitudinal SCD/MCI-to-AD dementia progression and change in Mini-Mental State Examination (MMSE) over four years. Four models were evaluated using different predictors: (1) clinical data only, including demographics, cognitive tests and APOE ε4 status, (2) clinical data plus hippocampal volume, (3) clinical data plus all regional MRI gray matter volumes (N = 68) extracted using FreeSurfer software, (4) a DL model trained using multi-task learning with MRI images, Jacobian determinant images and baseline cognition as input. A double (nested) cross-validation scheme was used, with five test folds and, for each of those, ten validation folds. External evaluation was performed on part of the ADNI dataset, including 108 patients. The Mann-Whitney U-test was used to determine statistically significant differences in performance, with p-values less than 0.05 considered significant. RESULTS: In the BioFINDER cohort, 109 patients (33%) progressed to AD dementia. The performance of the clinical data model for prediction of progression to AD dementia was area under the curve (AUC) = 0.85, and for four-year cognitive decline it was R2 = 0.14. The performance was improved for both outcomes when adding hippocampal volume (AUC = 0.86, R2 = 0.16).
Adding FreeSurfer brain regions improved prediction of four-year cognitive decline but not progression to AD (AUC = 0.83, R2 = 0.17), while the DL model worsened the performance for both outcomes (AUC = 0.84, R2 = 0.08). A sensitivity analysis showed that the Jacobian determinant image was more informative than the MRI image, but that performance was maximized when both were included. In the external evaluation cohort from ADNI, 23 patients (21%) progressed to AD dementia. The results for predicted progression to AD dementia were similar to those for the BioFINDER test data, while the performance for cognitive decline deteriorated. CONCLUSIONS: The DL model did not significantly improve the prediction of clinical disease progression in AD, compared to regression models with a single pre-defined brain region.
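The double cross-validation scheme described above (five outer test folds, each with ten inner validation folds for model selection) can be sketched as follows. This is an illustrative skeleton only: the random data, logistic-regression model, and hyperparameter grid are stand-ins, not the study's actual predictors or DL pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(332, 10))    # stand-in for clinical + MRI predictors
y = rng.integers(0, 2, size=332)  # stand-in for AD-dementia progression labels

outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
test_aucs = []
for train_idx, test_idx in outer.split(X, y):
    X_dev, y_dev = X[train_idx], y[train_idx]
    # Inner 10-fold loop: pick the hyperparameter with the best validation AUC.
    inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    best_c, best_auc = None, -np.inf
    for c in (0.01, 0.1, 1.0, 10.0):
        fold_aucs = []
        for tr, va in inner.split(X_dev, y_dev):
            model = LogisticRegression(C=c, max_iter=1000).fit(X_dev[tr], y_dev[tr])
            fold_aucs.append(roc_auc_score(y_dev[va], model.predict_proba(X_dev[va])[:, 1]))
        if np.mean(fold_aucs) > best_auc:
            best_auc, best_c = np.mean(fold_aucs), c
    # Refit on the full development set, evaluate once on the held-out test fold.
    final = LogisticRegression(C=best_c, max_iter=1000).fit(X_dev, y_dev)
    test_aucs.append(roc_auc_score(y[test_idx], final.predict_proba(X[test_idx])[:, 1]))

print(len(test_aucs))  # one test AUC per outer fold
```

The point of the nested design is that the test folds are never seen during hyperparameter selection, so the five test AUCs are unbiased estimates of generalization performance.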


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Deep Learning , Humans , Alzheimer Disease/complications , Alzheimer Disease/diagnostic imaging , Biomarkers , Magnetic Resonance Imaging , Brain/diagnostic imaging , Brain/pathology , Cognitive Dysfunction/diagnosis , Cognition , Atrophy/pathology , Disease Progression
2.
Res Sq ; 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-37986841

ABSTRACT

Background: Predicting future Alzheimer's disease (AD)-related cognitive decline among individuals with subjective cognitive decline (SCD) or mild cognitive impairment (MCI) is an important task for healthcare. Structural brain imaging as measured by magnetic resonance imaging (MRI) could potentially contribute when making such predictions. It is unclear if the predictive performance of MRI can be improved using entire brain images in deep learning (DL) models compared to using pre-defined brain regions. Methods: A cohort of 332 individuals with SCD/MCI was included from the Swedish BioFINDER-1 study. The goal was to predict longitudinal SCD/MCI-to-AD dementia progression and change in Mini-Mental State Examination (MMSE) over four years. Four models were evaluated using different predictors: 1) clinical data only, including demographics, cognitive tests and APOE ε4 status, 2) clinical data plus hippocampal volume, 3) clinical data plus all regional MRI gray matter volumes (N=68) extracted using FreeSurfer software, 4) a DL model trained using multi-task learning with MRI images, Jacobian determinant images and baseline cognition as input. Models were developed on 80% of subjects (N=267) and tested on the remaining 20% (N=65). The Mann-Whitney U-test was used to determine statistically significant differences in performance, with p-values less than 0.05 considered significant. Results: In the test set, 21 patients (32.3%) progressed to AD dementia. The performance of the clinical data model for prediction of progression to AD dementia was area under the curve (AUC)=0.87, and for four-year cognitive decline it was R2=0.17. The performance was significantly improved for both outcomes when adding hippocampal volume (AUC=0.91, R2=0.26, p-values <0.05) or FreeSurfer brain regions (AUC=0.90, R2=0.27, p-values <0.05). Conversely, the DL model did not show any significant difference from the clinical data model (AUC=0.86, R2=0.13).
A sensitivity analysis showed that the Jacobian determinant image was more informative than the MRI image, but that performance was maximized when both were included. Conclusions: The DL model did not significantly improve the prediction of clinical disease progression in AD, compared to regression models with a single pre-defined brain region.
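The significance testing described above (Mann-Whitney U-test on model performance, p < 0.05) can be illustrated with SciPy. The per-resample AUC values below are hypothetical placeholders chosen to echo the reported 0.87 vs. 0.91 comparison, not the study's actual resampled results.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-resample test AUCs for two models; in the study these
# would come from repeated evaluation of the held-out predictions.
auc_clinical    = np.array([0.85, 0.87, 0.86, 0.88, 0.84, 0.87, 0.86, 0.85])
auc_hippocampus = np.array([0.90, 0.91, 0.92, 0.89, 0.91, 0.90, 0.92, 0.91])

# Two-sided rank-based test: are the two AUC distributions different?
stat, p = mannwhitneyu(auc_hippocampus, auc_clinical, alternative="two-sided")
print(p < 0.05)  # prints True: the two samples are fully separated
```

Because the test is rank-based, it makes no normality assumption about the metric, which is why it is a common choice for comparing cross-validated or bootstrapped performance scores.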

3.
Adv Anat Embryol Cell Biol ; 202: 1-109, 2009.
Article in English | MEDLINE | ID: mdl-19230601

ABSTRACT

This historical review of gliogenesis begins with Schwann's introduction of the cell doctrine in 1839. Subsequent microscopic studies revealed the cellular structure of many organs and tissues, but the CNS was thought to be different. In 1864, Virchow created the concept that nerve cells are held together by a "Nervenkitt", which he called "glia" (for glue). He and his contemporaries thought that "glia" was an unstructured, connective tissue-like ground substance that separated nerve cells from each other and from blood vessels. Deiters, a pupil of Virchow, discovered that this ground substance contained cells, which he described and illustrated. Improvements in microscopes and the discovery of metallic impregnation methods finally showed convincingly that the "glia" was not a binding substance. Instead, it was composed of cells, each separate and distinct from neighboring cells and each with its own characteristic array of processes. Light microscopic studies of developing and mature nervous tissue led to the discovery of different types of glial cells: astroglia, oligodendroglia, microglia, and ependymal cells in the CNS, and Schwann cells in the peripheral nervous system (PNS). Subsequent studies characterized the origins and development of each type of glial cell. A new era began with the introduction of electron microscopy, immunostaining, and in vitro maintenance of both central and peripheral nervous tissue. Other methods and models greatly expanded our understanding of how glia multiply, migrate, and differentiate. By 1985, almost a century and a half of study had produced substantial progress in our understanding of glial cells, including their origins and development. Major advances were associated with the discovery of new methods. These are summarized first. Then the origins and development of astroglia, oligodendroglia, microglia, ependymal cells, and Schwann cells are described and discussed. In general, morphology is emphasized.
Findings related to cytodifferentiation, cellular interactions, functions, and regulation of developing glia have also been included.


Subject(s)
Neuroanatomy/history , Neurogenesis , Neuroglia/cytology , Animals , Central Nervous System/embryology , Epithelial Cells/cytology , Glial Fibrillary Acidic Protein/metabolism , History, 19th Century , History, 20th Century , Myelin Sheath/metabolism , Neural Tube/embryology , Neuroglia/metabolism , Tissue Culture Techniques , Vimentin/metabolism
4.
Clin Physiol Funct Imaging ; 25(4): 234-40, 2005 Jul.
Article in English | MEDLINE | ID: mdl-15972026

ABSTRACT

A new automated method for quantification of left ventricular function from gated single-photon emission computed tomography (SPECT) images has been developed. The method for quantification of cardiac function (CAFU) is based on a heart-shaped model and the active shape algorithm. The model contains statistical information on the variability of left ventricular shape. CAFU was adjusted based on the results from the analysis of five simulated gated-SPECT studies with well-defined volumes of the left ventricle. The digital phantom NURBS-based Cardiac-Torso (NCAT) and the Monte Carlo method SIMIND were used to simulate the studies. Finally, CAFU was validated on ten rest studies from patients referred for routine stress/rest myocardial perfusion scintigraphy and compared with Cedars-Sinai quantitative gated SPECT (QGS), a commercially available program for quantification of gated-SPECT images. The maximal differences between the CAFU estimations and the true left ventricular volumes of the digital phantoms were 11 ml for the end-diastolic volume (EDV), 3 ml for the end-systolic volume (ESV) and 3% for the ejection fraction (EF). The largest differences were seen in the smallest heart. In the patient group the EDV calculated using QGS and CAFU showed good agreement for large hearts, while CAFU values were higher than QGS values for the smaller hearts. In the larger hearts, ESV was much larger for QGS than for CAFU in both the phantom and patient studies. In the smallest hearts there was good agreement between QGS and CAFU. The findings of this study indicate that our new automated method for quantification of gated-SPECT images can accurately measure left ventricular volumes and EF.
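The three quantities compared above are linked by the standard ejection-fraction formula, EF = (EDV − ESV) / EDV. A minimal sketch, with illustrative volumes rather than the study's phantom or patient values:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) volumes, as quantified by programs such as
    CAFU and QGS."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Illustrative example (not from the study): EDV 120 ml, ESV 48 ml.
print(ejection_fraction(120.0, 48.0))  # -> 60.0
```

This dependence is why the reported maximal phantom errors (11 ml EDV, 3 ml ESV) translate into only a 3% error in EF: the volume errors partially cancel in the ratio.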


Subject(s)
Gated Blood-Pool Imaging/methods , Heart Ventricles/diagnostic imaging , Imaging, Three-Dimensional/methods , Models, Cardiovascular , Stroke Volume/physiology , Ventricular Function, Left/physiology , Ventricular Function , Artificial Intelligence , Gated Blood-Pool Imaging/instrumentation , Humans , Image Interpretation, Computer-Assisted/methods , Models, Anatomic , Phantoms, Imaging , Reproducibility of Results , Sensitivity and Specificity , Tomography, Emission-Computed, Single-Photon/instrumentation , Tomography, Emission-Computed, Single-Photon/methods
5.
Eur J Nucl Med Mol Imaging ; 30(7): 961-5, 2003 Jul.
Article in English | MEDLINE | ID: mdl-12748832

ABSTRACT

The purpose of this study was to assess the value of the ventilation study in the diagnosis of acute pulmonary embolism using a new automated method. Either perfusion scintigrams alone or two different combinations of ventilation/perfusion scintigrams were used as the only source of information regarding pulmonary embolism. A completely automated method based on computerised image processing and artificial neural networks was used for the interpretation. Three artificial neural networks were trained for the diagnosis of pulmonary embolism. Each network was trained with 18 automatically obtained features. Three different sets of features originating from three sets of scintigrams were used. One network was trained using features obtained from each set of perfusion scintigrams, including six projections. The second network was trained using features from each set of (joint) ventilation and perfusion studies in six projections. A third network was trained using features from the perfusion study in six projections combined with a single ventilation image from the posterior view. A total of 1,087 scintigrams from patients with suspected pulmonary embolism were used for network training. The test group consisted of 102 patients who had undergone both scintigraphy and pulmonary angiography. Performance in the test group was measured as the area under the receiver operating characteristic curve. The performance of the neural network in interpreting perfusion scintigrams alone was 0.79 (95% confidence limits 0.71-0.86). When one ventilation image (posterior view) was added to the perfusion study, the performance was 0.84 (0.77-0.90). This increase was statistically significant (P = 0.022). The performance increased to 0.87 (0.81-0.93) when all perfusion and ventilation images were used, and the increase in performance from 0.79 to 0.87 was also statistically significant (P = 0.016).
The automated method presented here for the interpretation of lung scintigrams shows a significant increase in performance when one or all ventilation images are added to the six perfusion images. Thus, the ventilation study has a significant role in the diagnosis of acute pulmonary embolism.
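The AUC-with-confidence-limits reporting used above can be reproduced with scikit-learn plus a simple bootstrap. The labels and scores below are simulated stand-ins for the 102 angiography-confirmed outcomes and the networks' output scores; the bootstrap is one common way to obtain such limits, not necessarily the study's exact method.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Simulated test set: 102 "patients" with a binary angiography label and
# a continuous network output score that is shifted upward for positives.
y_true = rng.integers(0, 2, size=102)
scores = y_true * 0.3 + rng.normal(0.5, 0.25, size=102)

auc = roc_auc_score(y_true, scores)

# 95% bootstrap confidence limits on the AUC.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if len(np.unique(y_true[idx])) < 2:
        continue  # AUC is undefined if a resample has only one class
    boot.append(roc_auc_score(y_true[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.2f} ({lo:.2f}-{hi:.2f})")
```

With only 102 patients, the width of these limits (e.g. 0.71-0.86 for the perfusion-only network) is substantial, which is why the paper tests the pairwise differences for significance rather than comparing point estimates alone.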


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted/methods , Pulmonary Embolism/diagnostic imaging , Adolescent , Adult , Aged , Aged, 80 and over , Child , Female , Humans , Male , Middle Aged , Nerve Net , Observer Variation , Pulmonary Embolism/diagnosis , Pulmonary Ventilation , Radionuclide Imaging , Reproducibility of Results , Sensitivity and Specificity , Ventilation-Perfusion Ratio