Results 1 - 20 of 1,379
1.
EMBO J ; 40(3): e105889, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33480052

ABSTRACT

Image data are universal in life sciences research. Their proper handling is not. A significant proportion of image data in research papers show signs of mishandling that undermine their interpretation. We propose that a precise description of the applied image processing and analysis is required to address this problem. A new norm of reporting reproducible image analyses will diminish mishandling, as it will alert co-authors, referees, and journals to aberrant image data processing or, if such processing is published nonetheless, document it for the reader. To promote this norm, we discuss the effectiveness of this approach and give step-by-step instructions for publishing reproducible image data processing and analysis workflows.
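The reporting norm described above can be made concrete in code. Below is a minimal, hypothetical sketch (not the authors' tool): every processing step and its parameters are appended to a machine-readable log that could be published alongside the resulting figure.

```python
import json

import numpy as np


def process_image(img, threshold, log):
    """Apply a fixed intensity threshold and record the step in a methods log."""
    log.append({"step": "threshold", "value": threshold, "comparison": ">="})
    return (img >= threshold).astype(np.uint8)


# Synthetic 4x4 "image" standing in for real microscopy data.
image = np.array([[10, 200, 30, 40],
                  [250, 60, 70, 80],
                  [90, 100, 110, 220],
                  [130, 140, 150, 160]])

methods_log = []  # every processing step applied to the image is recorded here
mask = process_image(image, threshold=128, log=methods_log)

# The log can be serialized and published so readers can reproduce the figure.
print(json.dumps(methods_log))
print(int(mask.sum()))  # number of above-threshold pixels: 7
```

The point is not the thresholding itself but that the log, not the reader's guess, documents exactly what was done to the image.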


Subject(s)
Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Publishing/standards , Data Accuracy , Humans , Reproducibility of Results , Scientific Misconduct , Workflow
2.
Plant Physiol ; 195(1): 378-394, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38298139

ABSTRACT

Automated guard cell detection and measurement are vital for understanding plant physiological performance and ecological functioning in global water and carbon cycles. Most current methods for measuring guard cells and stomata are laborious, time-consuming, prone to bias, and limited in scale. We developed StoManager1, a high-throughput tool that uses geometric and mathematical algorithms together with convolutional neural networks to automatically detect, count, and measure over 30 guard cell and stomatal metrics, including guard cell and stomatal area, length, and width, the ratio of stomatal aperture area to guard cell area, orientation, stomatal evenness, divergence, and aggregation index. Combined with leaf functional traits, some of these StoManager1-measured guard cell and stomatal metrics explained 90% and 82% of the variance in tree biomass and intrinsic water use efficiency (iWUE) in hardwoods, making them substantial factors in leaf physiology and tree growth. StoManager1 demonstrated exceptional precision and recall (mAP@0.5 over 0.96), effectively capturing diverse stomatal properties across over 100 species. StoManager1 automates the measurement of leaf stomata and guard cells, enabling broader exploration of stomatal control in plant growth and adaptation to environmental stress and climate change. This has implications for global gross primary productivity (GPP) modeling and estimation, as integrating stomatal metrics can enhance predictions of plant growth and resource usage worldwide. Easily accessible open-source code and standalone Windows executable applications are available on GitHub (https://github.com/JiaxinWang123/StoManager1) and Zenodo (https://doi.org/10.5281/zenodo.7686022).
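As a toy illustration of one class of metric listed above, the sketch below computes an aperture-area-to-guard-cell-area ratio from ellipse approximations. The axis lengths and the ellipse model are our own assumptions for illustration, not StoManager1's actual measurement code.

```python
import math


def ellipse_area(length, width):
    """Area of an ellipse given its major and minor axis lengths."""
    return math.pi * (length / 2.0) * (width / 2.0)


# Hypothetical measurements (micrometres) for a single stoma.
guard_cell_area = ellipse_area(length=30.0, width=20.0)  # whole guard-cell pair
aperture_area = ellipse_area(length=18.0, width=6.0)     # open pore

# One of the >30 metrics described: stomatal aperture area / guard cell area.
ratio = aperture_area / guard_cell_area
print(round(ratio, 3))  # 0.18
```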


Asunto(s)
Botánica , Biología Celular , Células Vegetales , Estomas de Plantas , Programas Informáticos , Estomas de Plantas/citología , Estomas de Plantas/crecimiento & desarrollo , Células Vegetales/fisiología , Botánica/instrumentación , Botánica/métodos , Biología Celular/instrumentación , Procesamiento de Imagen Asistido por Computador/normas , Algoritmos , Hojas de la Planta/citología , Redes Neurales de la Computación , Ensayos Analíticos de Alto Rendimiento/instrumentación , Ensayos Analíticos de Alto Rendimiento/métodos , Ensayos Analíticos de Alto Rendimiento/normas , Programas Informáticos/normas
4.
Neuroimage ; 297: 120697, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38908725

ABSTRACT

Quantitative susceptibility mapping (QSM) is an emerging MRI-based technique, and a number of QSM-related algorithms have been proposed to reconstruct maps of tissue susceptibility distribution from phase images. In this paper, we develop a comprehensive susceptibility imaging process and analysis studio (SIPAS) that accomplishes reliable QSM processing and offers a standardized evaluation system. Specifically, SIPAS integrates multiple methods for each step, enabling users to select algorithm combinations according to data conditions, and QSM maps can be evaluated in two respects: image quality indicators computed over all voxels and region-of-interest (ROI) analysis. Through a carefully designed, user-friendly interface, the results of each procedure can be displayed in axial, coronal, and sagittal views in real time, while ROIs can be shown in 3D rendering. The accuracy and compatibility of SIPAS are demonstrated by experiments on multiple in vivo human brain datasets acquired from 3T, 5T, and 7T MRI scanners of different manufacturers. We also validate the QSM maps obtained by various algorithm combinations in SIPAS, among which the combination of iRSHARP and SFCR achieves the best results in its evaluation system. SIPAS is a comprehensive and reliable toolkit that may promote the application of QSM in scientific research and clinical practice.


Asunto(s)
Algoritmos , Encéfalo , Imagen por Resonancia Magnética , Humanos , Imagen por Resonancia Magnética/métodos , Encéfalo/diagnóstico por imagen , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Mapeo Encefálico/métodos , Programas Informáticos
5.
Neuroimage ; 299: 120812, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39197559

ABSTRACT

Brain magnetic resonance imaging (MRI) is widely used in clinical practice for disease diagnosis. However, MRI scans acquired at different sites can differ in appearance because of differences in hardware, pulse sequences, and imaging parameters. It is important to reduce or eliminate such cross-site variations with brain MRI harmonization so that downstream image processing and analysis are performed consistently. Previous work on the harmonization problem requires data acquired from the sites of interest for model training. In real-world scenarios, however, test data may come from a new site of interest after the model is trained, with no training data from that site available at training time. In this case, previous methods cannot optimally handle the test data from the new, unseen site. To address this problem, we explore domain generalization for brain MRI harmonization and propose Site Mix (SiMix). We assume that images of travelling subjects are acquired at a few existing sites for model training. To allow the training data to better represent test data from unseen sites, we first propose to stochastically mix training images belonging to different sites, which substantially increases the diversity of the training data while preserving the authenticity of the mixed training images. Second, at test time, given a test image from an unseen site, we propose a multiview strategy that perturbs the test image while preserving authenticity and ensembles the harmonization results of the perturbed images for improved harmonization quality. To validate SiMix, we performed experiments on the publicly available SRPBS and MUSHAC datasets, which comprise brain MRI acquired at nine and two different sites, respectively. The results indicate that SiMix improves brain MRI harmonization for unseen sites and also benefits the harmonization of existing sites.
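The stochastic mixing of site-specific training images described above can be sketched as a mixup-style convex combination of two images. This is a simplified stand-in under our own assumptions, not the published SiMix implementation.

```python
import numpy as np


def site_mix(img_a, img_b, rng):
    """Convex combination of two site-specific images with a random weight:
    a mixup-style stand-in for stochastic cross-site mixing."""
    lam = rng.uniform(0.0, 1.0)
    return lam * img_a + (1.0 - lam) * img_b, lam


# Toy 2D "slices" from two sites with different intensity characteristics.
rng = np.random.default_rng(0)
site_a = np.full((4, 4), 1.0)
site_b = np.full((4, 4), 3.0)

mixed, lam = site_mix(site_a, site_b, rng)
# The mixed image stays within the range spanned by the two site images,
# which is what "preserving authenticity" loosely corresponds to here.
print(float(mixed.min()), float(mixed.max()))
```

Drawing a fresh weight per training sample yields a continuum of synthetic "sites" between the observed ones, which is how such mixing increases training diversity.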


Asunto(s)
Encéfalo , Procesamiento de Imagen Asistido por Computador , Imagen por Resonancia Magnética , Humanos , Imagen por Resonancia Magnética/métodos , Imagen por Resonancia Magnética/normas , Encéfalo/diagnóstico por imagen , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Algoritmos , Neuroimagen/métodos , Neuroimagen/normas
6.
Neuroimage ; 292: 120617, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38636639

ABSTRACT

A primary challenge for data-driven analysis is balancing the poor generalizability of population-based research against the characterization of subject-, study-, and population-specific variability. We previously introduced a fully automated spatially constrained independent component analysis (ICA) framework called NeuroMark and its functional MRI (fMRI) template. NeuroMark has been successfully applied in numerous studies, identifying brain markers reproducible across datasets and disorders. The first NeuroMark template was constructed from young adult cohorts. We recently expanded on this initiative by creating a standardized normative multi-spatial-scale functional template using over 100,000 subjects, aiming to improve generalizability and comparability across studies involving diverse cohorts. While a unified template across the lifespan is desirable, a comprehensive investigation of the similarities and differences between components from different age populations may help systematically transform our understanding of the human brain by revealing the most well-replicated and the most variable network features throughout the lifespan. In this work, we introduce two significant expansions of the NeuroMark templates: first, replicable fMRI templates for infant, adolescent, and aging cohorts; and second, the incorporation of structural MRI (sMRI) and diffusion MRI (dMRI) modalities. Specifically, we built spatiotemporal fMRI templates based on 6,000 resting-state scans from four datasets. This is the first attempt to create robust ICA templates covering dynamic brain development across the lifespan. For the sMRI and dMRI data, we used two large publicly available datasets comprising more than 30,000 scans to build reliable templates. We employed a spatial similarity analysis to identify replicable templates and to investigate the degree to which unique and similar patterns are reflected across different age populations.
Our results show remarkably high similarity of the resulting adapted components, even across extreme age differences. With the new templates, the NeuroMark framework allows us to perform age-specific adaptations and to capture features adaptable to each modality, thereby facilitating biomarker identification across brain disorders. In sum, the present work demonstrates the generalizability of the NeuroMark templates and suggests the potential of the new templates to boost accuracy in mental health research and advance our understanding of lifespan and cross-modal alterations.
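A spatial similarity analysis of the kind mentioned above is commonly implemented as a voxel-wise correlation between flattened component maps. Below is a minimal sketch with toy data; the exact similarity measure used by NeuroMark is not specified here, so this is illustrative only.

```python
import numpy as np


def spatial_similarity(map_a, map_b):
    """Pearson correlation between two flattened spatial component maps."""
    return float(np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1])


# Toy component maps: the "aging" map is a scaled, slightly noisy copy of
# the "young" map, mimicking a well-replicated network feature.
rng = np.random.default_rng(1)
young = rng.normal(size=(8, 8))
aging = 0.9 * young + 0.1 * rng.normal(size=(8, 8))

r = spatial_similarity(young, aging)
print(round(r, 3))
```

Computing this score for every pair of age-specific components gives the kind of replicability matrix from which "well-replicated" versus "variable" features can be read off.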


Asunto(s)
Encéfalo , Imagen por Resonancia Magnética , Humanos , Adulto , Imagen por Resonancia Magnética/métodos , Imagen por Resonancia Magnética/normas , Encéfalo/diagnóstico por imagen , Adolescente , Adulto Joven , Masculino , Anciano , Femenino , Persona de Mediana Edad , Lactante , Niño , Envejecimiento/fisiología , Preescolar , Reproducibilidad de los Resultados , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Anciano de 80 o más Años , Neuroimagen/métodos , Neuroimagen/normas , Imagen de Difusión por Resonancia Magnética/métodos , Imagen de Difusión por Resonancia Magnética/normas
7.
Hippocampus ; 34(6): 302-308, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38593279

ABSTRACT

Researchers who study the human hippocampus are naturally interested in how its subfields function. However, many researchers are precluded from examining subfields because their manual delineation from magnetic resonance imaging (MRI) scans (still the gold-standard approach) is time-consuming and requires significant expertise. To help ameliorate this issue, we present two protocols, one for 3T MRI and the other for 7T MRI, that permit automated hippocampus segmentation into six subregions along the entire length of the hippocampus: dentate gyrus/cornu ammonis (CA)4, CA2/3, CA1, subiculum, pre/parasubiculum, and uncus. These protocols are particularly notable relative to existing resources in that they were trained and tested using large numbers of healthy young adults (n = 140 at 3T, n = 40 at 7T) whose hippocampi were manually segmented by experts from MRI scans. Using inter-rater reliability analyses, we showed that the quality of the automated segmentations produced by these protocols was high and comparable to that of expert manual segmenters. We provide full open access to the automated protocols and anticipate that they will save hippocampus researchers a significant amount of time. They could also help to catalyze subfield research, which is essential for gaining a full understanding of how the hippocampus functions.
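Agreement between automated and manual segmentations of the kind evaluated here is often summarized with the Dice overlap coefficient. Below is a minimal sketch on toy binary labels; the protocols' actual reliability metrics may differ.

```python
import numpy as np


def dice(seg_a, seg_b):
    """Dice overlap between two binary segmentations (1.0 = perfect agreement)."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0


# Toy 1D "subfield" labels from an automated protocol and a manual rater.
auto = np.array([0, 1, 1, 1, 0, 0, 1, 0])
manual = np.array([0, 1, 1, 0, 0, 0, 1, 1])

print(round(dice(auto, manual), 3))  # 0.75
```

In practice the same formula is applied per subfield over 3D label volumes, giving one agreement score per subregion per subject.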


Asunto(s)
Hipocampo , Procesamiento de Imagen Asistido por Computador , Imagen por Resonancia Magnética , Humanos , Imagen por Resonancia Magnética/métodos , Imagen por Resonancia Magnética/normas , Hipocampo/diagnóstico por imagen , Masculino , Adulto , Femenino , Adulto Joven , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Reproducibilidad de los Resultados
8.
Am J Physiol Heart Circ Physiol ; 327(3): H715-H721, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39092999

ABSTRACT

GelBox is open-source software that was developed with the goal of enhancing rigor, reproducibility, and transparency when analyzing gels and immunoblots. It combines image adjustments (cropping, rotation, brightness, and contrast), background correction, and band-fitting in a single application. Users can also associate each lane in an image with metadata (for example, sample type). GelBox data files integrate the raw data, supplied metadata, image adjustments, and band-level analyses in a single file to improve traceability. GelBox has a user-friendly interface and was developed using MATLAB. The software, installation instructions, and tutorials are available at https://campbell-muscle-lab.github.io/GelBox/.
NEW & NOTEWORTHY: GelBox is open-source software that was developed to enhance rigor, reproducibility, and transparency when analyzing gels and immunoblots. It combines image adjustments (cropping, rotation, brightness, and contrast), background correction, and band-fitting in a single application. Users can also associate each lane in an image with metadata (for example, sample type).


Asunto(s)
Programas Informáticos , Reproducibilidad de los Resultados , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Animales
9.
Hum Brain Mapp ; 45(11): e26708, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39056477

ABSTRACT

Neuroimaging data acquired using multiple scanners or protocols are increasingly available. However, such data exhibit technical artifacts across batches which introduce confounding and decrease reproducibility. This is especially true when multi-batch data are analyzed using complex downstream models which are more likely to pick up on and implicitly incorporate batch-related information. Previously proposed image harmonization methods have sought to remove these batch effects; however, batch effects remain detectable in the data after applying these methods. We present DeepComBat, a deep learning harmonization method based on a conditional variational autoencoder and the ComBat method. DeepComBat combines the strengths of statistical and deep learning methods in order to account for the multivariate relationships between features while simultaneously relaxing strong assumptions made by previous deep learning harmonization methods. As a result, DeepComBat can perform multivariate harmonization while preserving data structure and avoiding the introduction of synthetic artifacts. We apply this method to cortical thickness measurements from a cognitive-aging cohort and show DeepComBat qualitatively and quantitatively outperforms existing methods in removing batch effects while preserving biological heterogeneity. Additionally, DeepComBat provides a new perspective for statistically motivated deep learning harmonization methods.
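The ComBat component that DeepComBat builds on is, at its core, a per-batch location/scale adjustment. The sketch below shows only that core idea; it omits ComBat's empirical-Bayes shrinkage and covariate modelling and is not the DeepComBat code.

```python
import numpy as np


def combat_like(x, batch):
    """Location/scale harmonization sketch: standardize each batch, then map
    all batches to the pooled mean and SD. Real ComBat additionally applies
    empirical-Bayes shrinkage and preserves biological covariates."""
    x = x.astype(float)
    out = np.empty_like(x)
    pooled_mean, pooled_sd = x.mean(), x.std()
    for b in np.unique(batch):
        idx = batch == b
        m, s = x[idx].mean(), x[idx].std()
        out[idx] = (x[idx] - m) / s * pooled_sd + pooled_mean
    return out


# Two toy scanner batches with different means (a strong "batch effect").
values = np.array([1.0, 2.0, 3.0, 11.0, 12.0, 13.0])
batches = np.array([0, 0, 0, 1, 1, 1])

harmonized = combat_like(values, batches)
# After harmonization, both batches share the same mean.
print(np.allclose(harmonized[:3].mean(), harmonized[3:].mean()))  # True
```

The limitation DeepComBat targets is visible even here: each feature is adjusted marginally, so multivariate relationships between features are not explicitly modelled.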


Asunto(s)
Aprendizaje Profundo , Procesamiento de Imagen Asistido por Computador , Neuroimagen , Humanos , Neuroimagen/métodos , Neuroimagen/normas , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Imagen por Resonancia Magnética/normas , Imagen por Resonancia Magnética/métodos , Corteza Cerebral/diagnóstico por imagen , Anciano , Masculino , Femenino
10.
Hum Brain Mapp ; 45(12): e70003, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39185668

ABSTRACT

Computationally expensive data processing in neuroimaging research demands substantial energy, and the resulting carbon emissions contribute to the climate crisis. We measured the carbon footprint of the functional magnetic resonance imaging (fMRI) preprocessing tool fMRIPrep, testing the effect of varying parameters on estimated carbon emissions and preprocessing performance. Performance was quantified using (a) statistical individual-level task activation in regions of interest and (b) mean smoothness of the preprocessed data. Eight variants of fMRIPrep were run with 257 participants who had completed an fMRI stop-signal task (the same data used in the original validation of fMRIPrep). Some variants led to substantial reductions in carbon emissions without sacrificing data quality: for instance, disabling FreeSurfer surface reconstruction reduced carbon emissions by 48%. We provide six recommendations for minimising emissions without compromising performance. By varying parameters and computational resources, neuroimagers can substantially reduce the carbon footprint of their preprocessing. This is one aspect of our research carbon footprint over which neuroimagers have control and agency to act upon.
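The underlying arithmetic of a compute carbon estimate is simple: energy (kWh) multiplied by the grid's carbon intensity. Below is a sketch with hypothetical numbers (wattage, runtime, and grid intensity are our assumptions; the study's measurement methodology is more involved).

```python
def job_emissions_kg(power_watts, hours, grid_kg_per_kwh):
    """Rough CO2e estimate for a compute job: energy (kWh) x grid intensity."""
    energy_kwh = power_watts / 1000.0 * hours
    return energy_kwh * grid_kg_per_kwh


# Hypothetical numbers: a 250 W node running fMRI preprocessing for 8 h on a
# grid emitting 0.4 kg CO2e per kWh.
full = job_emissions_kg(250, 8.0, 0.4)

# A variant that halves runtime (e.g., by skipping an expensive step) roughly
# halves emissions, in the spirit of the reductions reported above.
reduced = job_emissions_kg(250, 4.0, 0.4)

print(full, reduced)  # 0.8 0.4
```

Multiplying such per-job estimates by the number of subjects and pipeline runs is what makes parameter choices matter at study scale.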


Asunto(s)
Encéfalo , Huella de Carbono , Procesamiento de Imagen Asistido por Computador , Imagen por Resonancia Magnética , Humanos , Imagen por Resonancia Magnética/normas , Imagen por Resonancia Magnética/métodos , Femenino , Masculino , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Adulto , Encéfalo/diagnóstico por imagen , Encéfalo/fisiología , Adulto Joven , Mapeo Encefálico/métodos , Mapeo Encefálico/normas
11.
Hum Brain Mapp ; 45(9): e26721, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38899549

ABSTRACT

With the rise of open data, the identifiability of individuals from 3D renderings of routine structural magnetic resonance imaging (MRI) scans of the head has become a growing privacy concern. To protect subject privacy, several algorithms have been developed to de-identify imaging data using blurring, defacing, or refacing. Completely removing facial structures provides the best protection against re-identification but can significantly impact post-processing steps, such as brain morphometry. As an alternative, refacing methods that replace individual facial structures with generic templates have a smaller effect on the geometry and intensity distribution of the original scans and provide more consistent post-processing results, at the price of higher re-identification risk and computational complexity. In the current study, we propose a novel method for anonymized face generation for defaced 3D T1-weighted scans based on a 3D conditional generative adversarial network. To evaluate the performance of the proposed de-identification tool, a comparative study was conducted between several existing defacing and refacing tools, with two different segmentation algorithms (FAST and Morphobox). The aim was to evaluate (i) the impact on brain morphometry reproducibility, (ii) the re-identification risk, (iii) the balance between (i) and (ii), and (iv) the processing time. The proposed method takes 9 s for face generation and is suitable for recovering consistent post-processing results after defacing.


Asunto(s)
Imagen por Resonancia Magnética , Humanos , Imagen por Resonancia Magnética/métodos , Adulto , Encéfalo/diagnóstico por imagen , Encéfalo/anatomía & histología , Masculino , Femenino , Redes Neurales de la Computación , Imagenología Tridimensional/métodos , Neuroimagen/métodos , Neuroimagen/normas , Anonimización de la Información , Adulto Joven , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Algoritmos
12.
Hum Brain Mapp ; 45(10): e26778, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38980175

ABSTRACT

Brain activity continuously fluctuates over time, even when the brain is in controlled (e.g., experimentally induced) states. Recent years have seen an increasing interest in understanding the complexity of these temporal variations, for example with respect to developmental changes in brain function or between-person differences in healthy and clinical populations. However, the psychometric reliability of brain signal variability and complexity measures, an important precondition for robust individual-differences and longitudinal research, is not yet sufficiently studied. We examined reliability (split-half correlations) and test-retest correlations for task-free (resting-state) BOLD fMRI, as well as split-half correlations for seven functional task data sets from the Human Connectome Project. We observed good to excellent split-half reliability for temporal variability measures derived from rest and task fMRI activation time series (standard deviation, mean absolute successive difference, mean squared successive difference), and moderate test-retest correlations for the same variability measures under rest conditions. Brain signal complexity estimates (several entropy and dimensionality measures) showed moderate to good reliability under both rest and task activation conditions. We calculated the same measures for time-resolved (dynamic) functional connectivity time series and observed moderate to good reliability for variability measures but poor reliability for complexity measures derived from functional connectivity time series. Global (i.e., mean across cortical regions) measures tended to show higher reliability than region-specific variability or complexity estimates. Larger subcortical regions showed reliability similar to cortical regions, but small regions showed lower reliability, especially for complexity measures.
Lastly, we show that reliability scores depend only minimally on differences in scan length, and we replicate our results across different parcellation and denoising strategies. These results suggest that the variability and complexity of BOLD activation time series are robust measures well suited for individual-differences research. Temporal variability of global functional connectivity over time provides an important novel approach to robustly quantifying the dynamics of brain function.
PRACTITIONER POINTS: Variability and complexity measures of BOLD activation show good split-half reliability and moderate test-retest reliability. Measures of variability of global functional connectivity over time can robustly quantify neural dynamics. Length of fMRI data has only a minor effect on reliability.
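Two of the quantities named above, the mean squared successive difference (MSSD) and split-half reliability, can be sketched on toy data as follows. The simulation parameters are illustrative assumptions, not the study's data or pipeline.

```python
import numpy as np


def mssd(ts):
    """Mean squared successive difference of a 1D time series."""
    return float(np.mean(np.diff(ts) ** 2))


def split_half_r(per_subject_ts, metric):
    """Correlate a metric computed on odd-indexed vs. even-indexed volumes
    across subjects: a simple split-half reliability estimate."""
    half_a = [metric(ts[::2]) for ts in per_subject_ts]
    half_b = [metric(ts[1::2]) for ts in per_subject_ts]
    return float(np.corrcoef(half_a, half_b)[0, 1])


# Toy BOLD-like series for 20 "subjects" whose true noise levels differ,
# so a reliable metric should rank subjects consistently across halves.
rng = np.random.default_rng(2)
subjects = [rng.normal(scale=s, size=400) for s in np.linspace(0.5, 2.0, 20)]

r = split_half_r(subjects, mssd)
print(round(r, 3))
```

Because each subject's two halves share the same underlying noise level, a high correlation across subjects indicates that MSSD captures a stable individual difference, which is the logic of the reliability analyses summarized above.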


Asunto(s)
Encéfalo , Conectoma , Imagen por Resonancia Magnética , Humanos , Imagen por Resonancia Magnética/normas , Imagen por Resonancia Magnética/métodos , Reproducibilidad de los Resultados , Encéfalo/fisiología , Encéfalo/diagnóstico por imagen , Conectoma/normas , Conectoma/métodos , Oxígeno/sangre , Masculino , Femenino , Descanso/fisiología , Adulto , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Mapeo Encefálico/métodos , Mapeo Encefálico/normas
13.
Hum Brain Mapp ; 45(7): e26692, 2024 May.
Article in English | MEDLINE | ID: mdl-38712767

ABSTRACT

In neuroimaging studies, combining data collected from multiple study sites or scanners is becoming common as a way to increase the reproducibility of scientific discoveries. At the same time, unwanted variation arises from the use of different scanners (inter-scanner biases), which needs to be corrected before downstream analyses to facilitate replicable research and prevent spurious findings. While statistical harmonization methods such as ComBat have become popular for mitigating inter-scanner biases in neuroimaging, recent methodological advances have shown that harmonizing heterogeneous covariances results in higher data quality. In vertex-level cortical thickness data, heterogeneity in spatial autocorrelation is a critical factor affecting covariance heterogeneity. We propose a new statistical harmonization method called spatial autocorrelation normalization (SAN) that produces homogeneous covariance in vertex-level cortical thickness data across different scanners. We use an explicit Gaussian process to characterize scanner-invariant and scanner-specific variations and to reconstruct spatially homogeneous data across scanners. SAN is computationally feasible and easily allows the integration of existing harmonization methods. We demonstrate the utility of the proposed method using cortical thickness data from the Social Processes Initiative in the Neurobiology of the Schizophrenia(s) (SPINS) study. SAN is publicly available as an R package.


Asunto(s)
Corteza Cerebral , Imagen por Resonancia Magnética , Esquizofrenia , Humanos , Imagen por Resonancia Magnética/normas , Imagen por Resonancia Magnética/métodos , Esquizofrenia/diagnóstico por imagen , Esquizofrenia/patología , Corteza Cerebral/diagnóstico por imagen , Corteza Cerebral/anatomía & histología , Neuroimagen/métodos , Neuroimagen/normas , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Masculino , Femenino , Adulto , Distribución Normal , Grosor de la Corteza Cerebral
14.
Brain Topogr ; 37(5): 684-698, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38568279

ABSTRACT

While 7T diffusion magnetic resonance imaging (dMRI) has high spatial resolution, its diffusion image quality is usually affected by signal loss due to B1 inhomogeneity, T2 decay, susceptibility, and chemical shift. In contrast, 3T dMRI has relatively higher diffusion angular resolution but lower spatial resolution. Combining 3T and 7T dMRI may thus provide more detailed and accurate information about voxel-wise fiber orientations and a better understanding of structural brain connectivity. However, this topic has not yet been thoroughly explored. In this study, we explored the feasibility of fusing 3T and 7T dMRI data to extract voxel-wise quantitative parameters at higher spatial resolution. After the 3T and 7T dMRI data were preprocessed, the 3T dMRI volumes were coregistered into 7T dMRI space. The 7T dMRI data were then harmonized to the coregistered 3T dMRI B0 (b = 0) images. Finally, the harmonized 7T dMRI data were fused with the 3T dMRI data according to four fusion rules proposed in this study. We employed high-quality 3T and 7T dMRI datasets (N = 24) from the Human Connectome Project to test our algorithms. The diffusion tensors (DTs) and orientation distribution functions (ODFs) estimated from the 3T-7T fused dMRI volumes were statistically analyzed. More voxels containing multiple fiber populations were found in the fused dMRI data than in the 7T dMRI data alone. Moreover, extra fiber directions were extracted in temporal brain regions from the fused dMRI data at Otsu's thresholds of quantitative anisotropy, but could not be extracted from the 7T dMRI dataset. This study provides novel algorithms for fusing intra-subject 3T and 7T dMRI data to extract more detailed voxel-wise quantitative parameters, and a new perspective for building more accurate structural brain networks.


Asunto(s)
Encéfalo , Imagen de Difusión por Resonancia Magnética , Procesamiento de Imagen Asistido por Computador , Humanos , Encéfalo/diagnóstico por imagen , Imagen de Difusión por Resonancia Magnética/métodos , Imagen de Difusión por Resonancia Magnética/normas , Masculino , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/normas , Femenino , Adulto , Imagen de Difusión Tensora/métodos , Imagen de Difusión Tensora/normas , Adulto Joven
15.
Proc Natl Acad Sci U S A ; 117(52): 33051-33060, 2020 12 29.
Article in English | MEDLINE | ID: mdl-33318169

ABSTRACT

Microscopic evaluation of resected tissue plays a central role in the surgical management of cancer. Because optical microscopes have a limited depth-of-field (DOF), resected tissue is either frozen or preserved with chemical fixatives, sliced into thin sections placed on microscope slides, stained, and imaged to determine whether surgical margins are free of tumor cells: a costly, time- and labor-intensive procedure. Here, we introduce a deep-learning extended DOF (DeepDOF) microscope to quickly image large areas of freshly resected tissue and provide histologic-quality images of surgical margins without physical sectioning. The DeepDOF microscope consists of a conventional fluorescence microscope with the simple addition of an inexpensive (less than $10) phase mask inserted in the pupil plane to encode the light field and enhance the depth invariance of the point-spread function. When used with a jointly optimized image-reconstruction algorithm, diffraction-limited optical performance to resolve subcellular features can be maintained while significantly extending the DOF (to 200 µm). Data from resected oral surgical specimens show that the DeepDOF microscope can consistently visualize nuclear morphology and other important diagnostic features across highly irregular resected tissue surfaces without serial refocusing. With the capability to quickly scan intact samples with subcellular detail, the DeepDOF microscope can improve tissue sampling during intraoperative tumor-margin assessment, while offering an affordable tool to provide histological information from resected tissue specimens in resource-limited settings.


Asunto(s)
Carcinoma/patología , Aprendizaje Profundo , Procesamiento de Imagen Asistido por Computador/métodos , Neoplasias de la Boca/patología , Algoritmos , Animales , Biopsia/instrumentación , Biopsia/métodos , Biopsia/normas , Calibración , Humanos , Procesamiento de Imagen Asistido por Computador/instrumentación , Procesamiento de Imagen Asistido por Computador/normas , Microscopía Fluorescente/instrumentación , Microscopía Fluorescente/métodos , Microscopía Fluorescente/normas , Porcinos
16.
Neuroimage ; 249: 118830, 2022 04 01.
Article in English | MEDLINE | ID: mdl-34965454

ABSTRACT

Diffusion MRI (dMRI) provides invaluable information for the study of tissue microstructure and brain connectivity, but it suffers from a range of imaging artifacts that, if not appropriately accounted for, greatly challenge the analysis and interpretability of results. This review covers dMRI artifacts and preprocessing steps, some of which have not typically been considered in existing pipelines or reviews, or have only gained attention in recent years: brain/skull extraction, B-matrix incompatibilities with respect to the imaging data, signal drift, Gibbs ringing, noise distribution bias, denoising, between- and within-volume motion, eddy currents, outliers, susceptibility distortions, EPI Nyquist ghosts, gradient deviations, B1 bias fields, and spatial normalization. The focus is on "what's new" since the notable advances prior to and brought by the Human Connectome Project (HCP), as presented in the preceding issue on "Mapping the Connectome" in 2013. In addition to the development of novel strategies for dMRI preprocessing, exciting progress has been made in the availability of open-source tools and reproducible pipelines, databases and simulation tools for the evaluation of preprocessing steps, and automated quality control frameworks, among others. Finally, this review considers practical issues and our view on "what's next" in dMRI preprocessing.


Subject(s)
Brain/diagnostic imaging , Diffusion Magnetic Resonance Imaging , Image Processing, Computer-Assisted , Diffusion Magnetic Resonance Imaging/methods , Diffusion Magnetic Resonance Imaging/standards , Diffusion Magnetic Resonance Imaging/trends , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Image Processing, Computer-Assisted/trends
17.
Neuroimage ; 249: 118901, 2022 04 01.
Article in English | MEDLINE | ID: mdl-35026425

ABSTRACT

INTRODUCTION: Full quantification of positron emission tomography (PET) data requires an input function. This generally means arterial blood sampling, which is invasive, labor-intensive, and burdensome. There is no current, standardized method to fully quantify PET radiotracers with irreversible kinetics in the absence of blood data. Here, we present Source-to-Target Automatic Rotating Estimation (STARE), a novel, data-driven approach to quantify the net influx rate (Ki) of irreversible PET radiotracers that requires only individual-level PET data and no blood data. We validate STARE with human [18F]FDG PET scans and assess its performance using simulations. METHODS: STARE builds upon a source-to-target tissue model, where the tracer time activity curves (TACs) in multiple "target" regions are expressed at once as a function of a "source" region, based on the two-tissue irreversible compartment model, and separates target-region Ki from source Ki by fitting the source-to-target model across all target regions simultaneously. To ensure identifiability, data-driven, subject-specific anchoring is used in the STARE minimization, which takes advantage of the PET signal in a vasculature cluster in the field of view (FOV) that is automatically extracted and partial-volume-corrected. To avoid the need for any a priori selection of a single source region, each of the considered regions acts in turn as the source, and a final Ki is estimated in each region by averaging the estimates obtained across the source rotations. RESULTS: In a large dataset of human [18F]FDG scans (N = 69), STARE Ki estimates were correlated with corresponding arterial blood-based Ki estimates (r = 0.80), with an overall regression slope of 0.88, and were precisely estimated, as assessed by comparing STARE Ki estimates across several runs of the algorithm (coefficient of variation across runs = 6.74 ± 2.48%). In simulations, STARE Ki estimates were largely robust to factors that influence the individualized anchoring used within the algorithm. CONCLUSION: Through simulations and application to [18F]FDG PET data, we demonstrate the feasibility of STARE for blood-free, data-driven quantification of Ki. Future work will include applying STARE to PET data obtained with a portable PET camera and to other irreversible radiotracers.
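The rotating-source averaging scheme described in METHODS can be sketched in a few lines. The Patlak-style linear fit below is a deliberate simplification standing in for the authors' full two-tissue source-to-target model and vasculature-based anchoring; all function names and data shapes here are hypothetical:

```python
import numpy as np

def patlak_like_ki(source_tac, target_tac, times):
    """Slope of the target TAC against the running integral of the source TAC
    (an illustrative surrogate for the two-tissue irreversible fit)."""
    # Cumulative trapezoidal integral of the source activity over time.
    x = np.concatenate(
        ([0.0], np.cumsum(np.diff(times) * 0.5 * (source_tac[1:] + source_tac[:-1])))
    )
    slope, _ = np.polyfit(x, target_tac, 1)
    return slope

def stare_like_ki(tacs, times):
    """Rotate each region as the 'source' and average per-rotation estimates.

    tacs: array of shape (n_regions, n_timepoints); returns one Ki-like
    value per region, averaged over all source rotations.
    """
    n = tacs.shape[0]
    estimates = np.full((n, n), np.nan)  # estimates[s, t]: region t with source s
    for s in range(n):
        for t in range(n):
            if t != s:
                estimates[s, t] = patlak_like_ki(tacs[s], tacs[t], times)
    return np.nanmean(estimates, axis=0)  # final Ki: mean over source rotations
```

The averaging step is the point of the sketch: no single region must be nominated as the source a priori, mirroring the rotation described in the abstract.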


Subject(s)
Cerebellum/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Fluorodeoxyglucose F18/pharmacokinetics , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Radiopharmaceuticals/pharmacokinetics , Adult , Humans , Image Processing, Computer-Assisted/standards , Models, Theoretical , Positron-Emission Tomography/standards
18.
Neuroimage ; 249: 118835, 2022 04 01.
Article in English | MEDLINE | ID: mdl-34936923

ABSTRACT

Quantitative susceptibility mapping (QSM) is an MRI-based computational method for anatomically localizing and measuring tissue concentrations of specific biomarkers such as iron. Growing research suggests QSM is a viable method for evaluating the impact of iron overload in neurological disorders and on cognitive performance in aging. Several software toolboxes are currently available to reconstruct QSM maps from 3D GRE MR images. However, few if any software packages currently offer fully automated pipelines for QSM-based data analyses, from DICOM images to region-of-interest (ROI) based QSM values, and even fewer offer quality-control measures for evaluating the QSM output. Here, we address these gaps in the field by introducing and demonstrating the reliability and external validity of Ironsmith: an open-source, fully automated pipeline for creating and processing QSM maps, extracting QSM values from subcortical and cortical brain regions (89 ROIs), and evaluating the quality of QSM data using SNR measures and assessment of outlier regions on phase images. Ironsmith also features automatic filtering of QSM outlier values and precise CSF-only QSM reference masks that minimize partial volume effects. Testing of Ironsmith revealed excellent intra- and inter-rater reliability. Finally, external validity of Ironsmith was demonstrated via an anatomically selective relationship between motor performance and Ironsmith-derived QSM values in motor cortex. In sum, Ironsmith provides a freely available, reliable, turn-key pipeline for QSM-based data analyses to support research on the impact of brain iron in aging and neurodegenerative disease.
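The abstract mentions automatic filtering of QSM outlier values without specifying the rule. A common robust choice for this kind of ROI-level screening, shown here purely as an illustrative sketch and not as Ironsmith's actual implementation, flags values beyond a few median absolute deviations (MADs) from the median:

```python
import numpy as np

def filter_outlier_rois(roi_values, n_mads=3.0):
    """Return a boolean mask of ROI values within n_mads robust deviations.

    Illustrative only: Ironsmith's exact filtering rule may differ. The
    1.4826 factor rescales the MAD to match the standard deviation of a
    normal distribution; if the MAD collapses to zero, fall back to std.
    """
    v = np.asarray(roi_values, dtype=float)
    med = np.median(v)
    mad = np.median(np.abs(v - med))
    scale = 1.4826 * mad if mad > 0 else np.std(v)
    return np.abs(v - med) <= n_mads * scale
```

A mask like this lets downstream statistics exclude implausible ROI susceptibility values (e.g. from phase artifacts) without discarding the whole scan.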


Subject(s)
Aging/metabolism , Brain/metabolism , Image Processing, Computer-Assisted/methods , Iron/metabolism , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Software , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted/standards , Magnetic Resonance Imaging/standards , Neuroimaging/standards
19.
Hum Brain Mapp ; 43(4): 1179-1195, 2022 03.
Article in English | MEDLINE | ID: mdl-34904312

ABSTRACT

To acquire larger samples for answering complex questions in neuroscience, researchers have increasingly turned to multi-site neuroimaging studies. However, these studies are hindered by differences in images acquired across multiple sites. These effects have been shown to bias comparisons between sites, mask biologically meaningful associations, and even introduce spurious associations. To address this, the field has focused on harmonizing data by removing site-related effects in the mean and variance of measurements. Contemporaneously with the rise of multi-center imaging, the use of machine learning (ML) in neuroimaging has also become commonplace. These approaches have been shown to provide improved sensitivity, specificity, and power because they model the joint relationship across measurements in the brain. In this work, we demonstrate that methods for removing site effects in mean and variance may not be sufficient for ML. This stems from the fact that such methods fail to address how correlations between measurements can vary across sites. Data from the Alzheimer's Disease Neuroimaging Initiative are used to show that considerable differences in covariance exist across sites and that popular harmonization techniques do not address this issue. We then propose a novel harmonization method called Correcting Covariance Batch Effects (CovBat) that removes site effects in mean, variance, and covariance. We apply CovBat and show that within-site correlation matrices are successfully harmonized. Furthermore, we find that ML methods are unable to distinguish scanner manufacturer after our proposed harmonization is applied, and that the CovBat-harmonized data retain accurate prediction of disease group.
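The core idea of harmonizing covariance, not just mean and variance, can be sketched with a direct whiten/recolor mapping: whiten each site's centered data with its own covariance square root, then recolor with the pooled covariance. This is illustrative only; the published CovBat combines ComBat-style empirical-Bayes location/scale adjustment with harmonization of principal-component scores, not this direct mapping:

```python
import numpy as np

def _sqrtm(a):
    """Symmetric matrix square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(a)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 1e-12, None))) @ vecs.T

def _inv_sqrtm(a):
    """Symmetric inverse matrix square root."""
    vals, vecs = np.linalg.eigh(a)
    return vecs @ np.diag(1.0 / np.sqrt(np.clip(vals, 1e-12, None))) @ vecs.T

def harmonize_covariance(data, sites):
    """Map every site's mean and covariance onto the pooled values.

    data: (n_subjects, n_features); sites: (n_subjects,) site labels.
    """
    data = np.asarray(data, dtype=float)
    sites = np.asarray(sites)
    # Pool residuals after per-site centering to estimate the target covariance.
    resid = np.empty_like(data)
    for s in np.unique(sites):
        resid[sites == s] = data[sites == s] - data[sites == s].mean(axis=0)
    pooled_mean = data.mean(axis=0)
    recolor = _sqrtm(np.cov(resid, rowvar=False))
    out = np.empty_like(data)
    for s in np.unique(sites):
        x = data[sites == s] - data[sites == s].mean(axis=0)
        # Whiten with the site covariance, recolor with the pooled covariance.
        out[sites == s] = x @ _inv_sqrtm(np.cov(x, rowvar=False)) @ recolor + pooled_mean
    return out
```

After this mapping, every site shares the same sample mean and covariance, so a classifier can no longer separate sites on those moments, the property the abstract reports for scanner manufacturer.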


Subject(s)
Cerebral Cortex/anatomy & histology , Cerebral Cortex/diagnostic imaging , Image Processing, Computer-Assisted , Multicenter Studies as Topic , Neuroimaging , Datasets as Topic , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Machine Learning , Models, Theoretical , Multicenter Studies as Topic/methods , Multicenter Studies as Topic/standards , Neuroimaging/methods , Neuroimaging/standards
20.
Hum Brain Mapp ; 43(1): 207-233, 2022 01.
Article in English | MEDLINE | ID: mdl-33368865

ABSTRACT

Structural hippocampal abnormalities are common in many neurological and psychiatric disorders, and variation in hippocampal measures is related to cognitive performance and other complex phenotypes such as stress sensitivity. Hippocampal subregions are increasingly studied, as automated algorithms have become available for mapping and volume quantification. In the context of the Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium, several Disease Working Groups are using the FreeSurfer software to analyze hippocampal subregion (subfield) volumes in patients with neurological and psychiatric conditions along with data from matched controls. In this overview, we explain the algorithm's principles, summarize measurement reliability studies, and demonstrate two additional aspects (subfield autocorrelation and volume/reliability correlation) with illustrative data. We then explain the rationale for a standardized hippocampal subfield segmentation quality control (QC) procedure for improved pipeline harmonization. To guide researchers toward optimal use of the algorithm, we discuss how global size and age effects can be modeled, how QC steps can be incorporated, and how subfields may be aggregated into composite volumes. This discussion is based on a synopsis of 162 published neuroimaging studies (01/2013-12/2019) that applied the FreeSurfer hippocampal subfield segmentation in a broad range of domains, including cognition and healthy aging, brain development and neurodegeneration, affective disorders, psychosis, stress regulation, neurotoxicity, epilepsy, inflammatory disease, childhood adversity and posttraumatic stress disorder, and candidate-gene and whole-genome (epi-)genetics. Finally, we highlight points where FreeSurfer-based hippocampal subfield studies may be optimized.


Subject(s)
Hippocampus/anatomy & histology , Hippocampus/diagnostic imaging , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neuroimaging , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Multicenter Studies as Topic , Neuroimaging/methods , Neuroimaging/standards , Quality Control