Results 1 - 3 of 3

1.
J Cereb Blood Flow Metab ; : 271678X241270465, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39113424

ABSTRACT

This manuscript quantitatively investigates the remodeling dynamics of the cortical microvascular network (thousands of connected capillaries) following photothrombotic ischemia (a cubic-millimeter volume, imaged weekly) using novel in vivo two-photon angiography and a high-throughput vascular vectorization method. The results suggest distinct temporal patterns of cerebrovascular plasticity, with acute remodeling peaking at one week post-stroke. The network architecture then gradually stabilizes, returning to a new steady state after four weeks. These findings align with previous literature on neuronal plasticity, highlighting the correlation between neuronal and neurovascular remodeling. Quantitative analysis of neurovascular networks using length- and strand-based statistical measures reveals intricate changes in network anatomy and topology. The distance and strand-length statistics show significant alterations, with plasticity peaking at one week post-stroke and then gradually returning to baseline. Plasticity in the orientation statistic peaks at two weeks, gradually approaching a stroke signature that is conserved across subjects. The underlying mechanism of the vascular response (angiogenesis vs. tissue deformation), however, remains unexplored. Overall, the combination of chronic two-photon angiography, vascular vectorization, reconstruction/visualization, and statistical analysis enables both qualitative and quantitative assessment of neurovascular remodeling dynamics, demonstrating a method for investigating cortical microvascular network disorders and the modes of action of therapies that target them.
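To make the strand-based statistics concrete, here is a minimal sketch of how a strand-length distribution might be computed from a vectorized network, assuming each strand (a vessel segment between branch points) is stored as an ordered list of 3D centerline points. The data structure and all names are illustrative assumptions, not the authors' actual vectorization format.

```python
import numpy as np

def strand_length(points):
    """Sum of Euclidean segment lengths along one strand's centerline.

    `points` is an (N, 3) array of 3D coordinates tracing a strand;
    this representation is assumed for illustration only.
    """
    diffs = np.diff(points, axis=0)
    return np.linalg.norm(diffs, axis=1).sum()

def strand_length_stats(strands):
    """Summary statistics over a network's strand-length distribution."""
    lengths = np.array([strand_length(s) for s in strands])
    return {
        "n_strands": len(lengths),
        "total_length": lengths.sum(),
        "mean": lengths.mean(),
        "median": np.median(lengths),
    }

# Toy example: two straight strands, lengths 5.0 and ~1.73 (arbitrary units).
strands = [
    np.array([[0.0, 0.0, 0.0], [3.0, 4.0, 0.0]]),
    np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]),
]
print(strand_length_stats(strands))
```

Comparing such per-week distributions is one plausible way the peak of plasticity at one week and the gradual return to baseline could be quantified.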

2.
bioRxiv ; 2023 Apr 06.
Article in English | MEDLINE | ID: mdl-37066421

ABSTRACT

The Encyclopedia of DNA Elements (ENCODE) project is a collaborative effort to create a comprehensive catalog of functional elements in the human genome. The current database comprises more than 19,000 functional genomics experiments across more than 1,000 cell lines and tissues, using a wide array of experimental techniques to study the chromatin structure and the regulatory and transcriptional landscapes of the Homo sapiens and Mus musculus genomes. All experimental data, metadata, and associated computational analyses created by the ENCODE consortium are submitted to the Data Coordination Center (DCC) for validation, tracking, storage, and distribution to community resources and the scientific community. The ENCODE project has engineered and distributed uniform processing pipelines to promote data provenance and reproducibility and to allow interoperability between genomic resources and other consortia. All data files, reference genome versions, software versions, and parameters used by the pipelines are captured and available via the ENCODE Portal. The pipeline code, developed using Docker and the Workflow Description Language (WDL; https://openwdl.org/), is publicly available on GitHub, with images available on Docker Hub (https://hub.docker.com), making it accessible to a diverse range of biomedical researchers. ENCODE pipelines maintained and used by the DCC can be installed to run on personal computers, local HPC clusters, or in cloud computing environments via Cromwell. Cloud access to the pipelines and data allows small labs to use the data and software without access to institutional compute clusters. Standardization of the computational methodologies for analysis and quality control leads to comparable results from different ENCODE collections, a prerequisite for successful integrative analyses.
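As a concrete illustration of the Cromwell-based local execution described above, the sketch below launches a WDL workflow in Cromwell's single-workflow run mode. The Cromwell jar version and the workflow/input file names are placeholders, since the actual files depend on which ENCODE pipeline is being run.

```python
import subprocess

# Placeholder file names; an actual ENCODE pipeline ships its own WDL and
# documents the expected inputs JSON on the ENCODE Portal and GitHub.
CROMWELL_JAR = "cromwell-84.jar"   # assumed version; any recent release works
WORKFLOW = "pipeline.wdl"          # placeholder for an ENCODE pipeline WDL
INPUTS = "inputs.json"             # placeholder inputs file

# Cromwell's single-workflow "run" mode; Docker must be available locally
# so Cromwell can pull the pipeline's container images.
subprocess.run(
    ["java", "-jar", CROMWELL_JAR, "run", WORKFLOW, "--inputs", INPUTS],
    check=True,
)
```

The same workflow definition can be submitted unchanged to an HPC or cloud backend by pointing Cromwell at a different backend configuration, which is what makes these pipelines portable across the environments listed above.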

3.
Res Sq ; 2023 Jul 19.
Article in English | MEDLINE | ID: mdl-37503119

