Results 1 - 20 of 1,366
1.
Hum Brain Mapp ; 45(7): e26692, 2024 May.
Article in English | MEDLINE | ID: mdl-38712767

ABSTRACT

In neuroimaging studies, combining data collected from multiple study sites or scanners is becoming common to increase the reproducibility of scientific discoveries. At the same time, unwanted variations arise from using different scanners (inter-scanner biases), which need to be corrected before downstream analyses to facilitate replicable research and prevent spurious findings. While statistical harmonization methods such as ComBat have become popular in mitigating inter-scanner biases in neuroimaging, recent methodological advances have shown that harmonizing heterogeneous covariances results in higher data quality. In vertex-level cortical thickness data, heterogeneity in spatial autocorrelation is a critical factor that affects covariance heterogeneity. Our work proposes a new statistical harmonization method called spatial autocorrelation normalization (SAN) that yields homogeneous covariance in vertex-level cortical thickness data across different scanners. We use an explicit Gaussian process to characterize scanner-invariant and scanner-specific variations and reconstruct spatially homogeneous data across scanners. SAN is computationally feasible and easily allows the integration of existing harmonization methods. We demonstrate the utility of the proposed method using cortical thickness data from the Social Processes Initiative in the Neurobiology of the Schizophrenia(s) (SPINS) study. SAN is publicly available as an R package.
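The modeling idea behind this abstract, characterizing spatial autocorrelation with an explicit Gaussian (squared-exponential) covariance over vertex coordinates, can be sketched as follows. The function name and single length-scale parameter are illustrative only, not the SAN R package's API:

```python
import numpy as np

def sq_exp_cov(coords, marginal_var, length_scale):
    """Squared-exponential (Gaussian) covariance over vertex coordinates.

    A larger length_scale means stronger spatial autocorrelation, so a
    scanner-specific length_scale induces scanner-specific covariance,
    the kind of heterogeneity harmonization must remove.
    """
    sq_dist = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    return marginal_var * np.exp(-sq_dist / (2.0 * length_scale ** 2))

# Toy example: 4 vertices on a line; two "scanners" with different smoothness.
coords = np.array([[0.0], [1.0], [2.0], [3.0]])
cov_a = sq_exp_cov(coords, 1.0, 0.5)   # weakly autocorrelated scanner
cov_b = sq_exp_cov(coords, 1.0, 2.0)   # strongly autocorrelated scanner
```

Neighboring vertices covary far more strongly under `cov_b` than `cov_a`, even though both share the same marginal variance; that mismatch is what the covariance-harmonization step equalizes.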


Subjects
Cerebral Cortex; Magnetic Resonance Imaging; Schizophrenia; Humans; Magnetic Resonance Imaging/standards; Magnetic Resonance Imaging/methods; Schizophrenia/diagnostic imaging; Schizophrenia/pathology; Cerebral Cortex/diagnostic imaging; Cerebral Cortex/anatomy & histology; Neuroimaging/methods; Neuroimaging/standards; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Male; Female; Adult; Normal Distribution; Cerebral Cortical Thickness
2.
Neuroimage ; 292: 120617, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38636639

ABSTRACT

A primary challenge in data-driven analysis is balancing the poor generalizability of population-based research against the characterization of subject-, study- and population-specific variability. We previously introduced a fully automated spatially constrained independent component analysis (ICA) framework called NeuroMark and its functional MRI (fMRI) template. NeuroMark has been successfully applied in numerous studies, identifying brain markers reproducible across datasets and disorders. The first NeuroMark template was constructed based on young adult cohorts. We recently expanded on this initiative by creating a standardized normative multi-spatial-scale functional template using over 100,000 subjects, aiming to improve generalizability and comparability across studies involving diverse cohorts. While a unified template across the lifespan is desirable, a comprehensive investigation of the similarities and differences between components from different age populations might help systematically transform our understanding of the human brain by revealing the most well-replicated and variable network features throughout the lifespan. In this work, we introduce two significant expansions of the NeuroMark templates: first, by generating replicable fMRI templates for infant, adolescent, and aging cohorts; and second, by incorporating structural MRI (sMRI) and diffusion MRI (dMRI) modalities. Specifically, we built spatiotemporal fMRI templates based on 6,000 resting-state scans from four datasets. This is the first attempt to create robust ICA templates covering dynamic brain development across the lifespan. For the sMRI and dMRI data, we used two large publicly available datasets including more than 30,000 scans to build reliable templates. We employed a spatial similarity analysis to identify replicable templates and investigate the degree to which unique and similar patterns are reflected in different age populations.
Our results suggest remarkably high similarity of the resulting adapted components, even across extreme age differences. With the new templates, the NeuroMark framework allows us to perform age-specific adaptations and to capture features adaptable to each modality, therefore facilitating biomarker identification across brain disorders. In sum, the present work demonstrates the generalizability of NeuroMark templates and suggests the potential of new templates to boost accuracy in mental health research and advance our understanding of lifespan and cross-modal alterations.
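The spatial similarity analysis described above is typically a Pearson correlation between flattened component spatial maps. A minimal sketch, with illustrative variable names and no claim that this is the paper's exact implementation:

```python
import numpy as np

def spatial_similarity(map_a, map_b):
    """Pearson correlation between two flattened component spatial maps."""
    a, b = np.ravel(map_a), np.ravel(map_b)
    return float(np.corrcoef(a, b)[0, 1])

# A component map correlates perfectly with itself and inversely with its negation.
template = np.array([[0.1, 0.9], [0.8, 0.2]])
print(spatial_similarity(template, template))    # ≈ 1.0
print(spatial_similarity(template, -template))   # ≈ -1.0
```

Components from two age-specific templates would be matched by computing this score for every pair and keeping the highest-similarity assignments.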


Subjects
Brain; Magnetic Resonance Imaging; Humans; Adult; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Brain/diagnostic imaging; Adolescent; Young Adult; Male; Aged; Female; Middle Aged; Infant; Child; Aging/physiology; Child, Preschool; Reproducibility of Results; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Aged, 80 and over; Neuroimaging/methods; Neuroimaging/standards; Diffusion Magnetic Resonance Imaging/methods; Diffusion Magnetic Resonance Imaging/standards
3.
J Neurosci Methods ; 406: 110112, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38508496

ABSTRACT

BACKGROUND: Visualizing edges is critical for neuroimaging. For example, edge maps enable quality assurance for the automatic alignment of an image from one modality (or individual) to another. NEW METHOD: We suggest that using the second derivative (difference of Gaussian, or DoG) provides robust edge detection. This method is tuned by size (which is typically known in neuroimaging) rather than intensity (which is relative). RESULTS: We demonstrate that this method performs well across a broad range of imaging modalities. The edge contours produced consistently form closed surfaces, whereas alternative methods may generate disconnected lines, introducing potential ambiguity in contiguity. COMPARISON WITH EXISTING METHODS: Current methods for computing edges are based on either the first derivative of the image (FSL) or a variation of the Canny edge detection method (AFNI). These methods suffer from two primary limitations. First, the crucial tuning parameter for each of these methods relates to image intensity. Unfortunately, image intensity is relative for most neuroimaging modalities, making the performance of these methods unreliable. Second, these existing approaches do not necessarily generate a closed edge/surface, which can reduce the ability to determine the correspondence between a represented edge and another image. CONCLUSION: The second derivative is well suited for neuroimaging edge detection. We include this method as part of both the AFNI and FSL software packages, as standalone code, and online.
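The DoG idea is compact enough to sketch directly. The 1.6 sigma ratio below is a common convention for approximating the Laplacian of Gaussian; the actual AFNI/FSL implementations and their parameter names may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(image, size, ratio=1.6):
    """Difference of Gaussians: a size-tuned approximation of the second
    derivative. Zero crossings of the result trace closed edge contours."""
    return gaussian_filter(image, size) - gaussian_filter(image, ratio * size)

# Toy example: a vertical step edge. The DoG is positive on one side of the
# edge and negative on the other, so its zero crossing localizes the boundary
# without any intensity threshold.
img = np.zeros((32, 32))
img[:, 16:] = 100.0
response = dog(img, size=2.0)
edges = np.signbit(response)  # sign map; sign transitions mark the contour
```

Because the sign map partitions the image into regions, the resulting contours are closed by construction, which is the property the abstract highlights over first-derivative and Canny-style detectors.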


Subjects
Brain; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Brain/diagnostic imaging; Imaging, Three-Dimensional/methods; Imaging, Three-Dimensional/standards; Algorithms; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Neuroimaging/methods; Neuroimaging/standards
4.
Plant Physiol ; 195(1): 378-394, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38298139

ABSTRACT

Automated guard cell detection and measurement are vital for understanding plant physiological performance and ecological functioning in global water and carbon cycles. Most current methods for measuring guard cells and stomata are laborious, time-consuming, prone to bias, and limited in scale. We developed StoManager1, a high-throughput tool combining geometric algorithms and convolutional neural networks to automatically detect, count, and measure over 30 guard cell and stomatal metrics, including guard cell and stomatal area, length, width, stomatal aperture area/guard cell area, orientation, stomatal evenness, divergence, and aggregation index. Combined with leaf functional traits, some of these StoManager1-measured guard cell and stomatal metrics explained 90% and 82% of tree biomass and intrinsic water use efficiency (iWUE) variances in hardwoods, making them substantial factors in leaf physiology and tree growth. StoManager1 demonstrated exceptional precision and recall (mAP@0.5 over 0.96), effectively capturing diverse stomatal properties across over 100 species. StoManager1 automates the measurement of leaf stomata and guard cells, enabling broader exploration of stomatal control in plant growth and adaptation to environmental stress and climate change. This has implications for global gross primary productivity (GPP) modeling and estimation, as integrating stomatal metrics can enhance predictions of plant growth and resource usage worldwide. Easily accessible open-source code and standalone Windows executable applications are available on a GitHub repository (https://github.com/JiaxinWang123/StoManager1) and Zenodo (https://doi.org/10.5281/zenodo.7686022).
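Detection metrics such as the mAP@0.5 quoted above hinge on intersection over union (IoU) between predicted and annotated boxes. A minimal IoU sketch; the corner-coordinate box format is an assumption for illustration, not StoManager1's internal representation:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted stomate counts as a true positive at the 0.5 threshold only
# when its IoU with an annotated stomate reaches 0.5.
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # 2/6 ≈ 0.333: not a match at 0.5
```

mAP@0.5 then averages precision over recall levels, with matches defined by this threshold.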


Subjects
Botany; Cell Biology; Plant Cells; Plant Stomata; Software; Plant Stomata/cytology; Plant Stomata/growth & development; Plant Cells/physiology; Botany/instrumentation; Botany/methods; Cell Biology/instrumentation; Image Processing, Computer-Assisted/standards; Algorithms; Plant Leaves/cytology; Neural Networks, Computer; High-Throughput Screening Assays/instrumentation; High-Throughput Screening Assays/methods; High-Throughput Screening Assays/standards; Software/standards
6.
IEEE J Biomed Health Inform ; 27(8): 3912-3923, 2023 08.
Article in English | MEDLINE | ID: mdl-37155391

ABSTRACT

Semi-supervised learning is becoming an effective solution in medical image segmentation because annotations are costly and tedious to acquire. Methods based on the teacher-student model use consistency regularization and uncertainty estimation and have shown good potential in dealing with limited annotated data. Nevertheless, the existing teacher-student model is seriously limited by the exponential moving average algorithm, which leads to an optimization trap. Moreover, the classic uncertainty estimation method calculates the global uncertainty for images but does not consider local region-level uncertainty, which is unsuitable for medical images with blurry regions. In this article, the Voxel Stability and Reliability Constraint (VSRC) model is proposed to address these issues. Specifically, the Voxel Stability Constraint (VSC) strategy is introduced to optimize parameters and exchange effective knowledge between two independently initialized models, which can break through the performance bottleneck and avoid model collapse. Moreover, a new uncertainty estimation strategy, the Voxel Reliability Constraint (VRC), is proposed for use in our semi-supervised model to consider uncertainty at the local region level. We further extend our model to auxiliary tasks and propose a task-level consistency regularization with uncertainty estimation. Extensive experiments on two 3D medical image datasets demonstrate that our method outperforms other state-of-the-art semi-supervised medical image segmentation methods under limited supervision.
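For context, the exponential moving average (EMA) update that the classic mean-teacher model relies on, and that the VSC strategy replaces with two independently initialized, mutually constrained models, looks like this. This is a schematic over plain parameter dictionaries, not the paper's code:

```python
def ema_update(teacher, student, alpha=0.99):
    """Classic mean-teacher update: the teacher slowly tracks the student.

    Because the teacher is a running average of the student, the two models
    become tightly coupled; this coupling is the optimization trap the
    abstract attributes to EMA-based teacher-student training.
    """
    return {name: alpha * teacher[name] + (1.0 - alpha) * student[name]
            for name in teacher}

teacher = {"w": 0.0}
student = {"w": 1.0}
for _ in range(3):
    teacher = ema_update(teacher, student)
print(teacher["w"])  # creeps toward the student: 1 - 0.99**3 ≈ 0.0297
```

Once the student stops improving, the teacher can only converge to the same point, which is why decoupled co-training schemes such as VSC can escape that bottleneck.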


Subjects
Image Processing, Computer-Assisted; Supervised Machine Learning; Algorithms; Datasets as Topic; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Reproducibility of Results; Students; Teaching; Uncertainty; Humans
7.
Blood Adv ; 7(16): 4621-4630, 2023 08 22.
Article in English | MEDLINE | ID: mdl-37146262

ABSTRACT

Examination of red blood cell (RBC) morphology in peripheral blood smears can help diagnose hematologic diseases, even in resource-limited settings, but this analysis remains subjective and semiquantitative with low throughput. Prior attempts to develop automated tools have been hampered by their poor reproducibility and limited clinical validation. Here, we present a novel, open-source machine-learning approach (denoted as RBC-diff) to quantify abnormal RBCs in peripheral smear images and generate an RBC morphology differential. RBC-diff cell counts showed high accuracy for single-cell classification (mean AUC, 0.93) and quantitation across smears (mean R2, 0.76 compared with experts; inter-expert R2, 0.75). RBC-diff counts were concordant with the clinical morphology grading for 300,000+ images and recovered the expected pathophysiologic signals in diverse clinical cohorts. Criteria using RBC-diff counts distinguished thrombotic thrombocytopenic purpura and hemolytic uremic syndrome from other thrombotic microangiopathies, providing greater specificity than clinical morphology grading (72% vs 41%; P < .001) while maintaining high sensitivity (94% to 100%). Elevated RBC-diff schistocyte counts were associated with increased 6-month all-cause mortality in a cohort of 58,950 inpatients (9.5% mortality for schistocytes >1% vs 4.7% for schistocytes <0.5%; P < .001) after controlling for comorbidities, demographics, clinical morphology grading, and blood count indices. RBC-diff also enabled the estimation of single-cell volume-morphology distributions, providing insight into the influence of morphology on routine blood count measures. Our codebase and expert-annotated images are included here to spur further advancement. These results illustrate that computer vision can enable rapid and accurate quantitation of RBC morphology, which may provide value in both clinical and research contexts.
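The agreement statistics quoted above (R2 against expert counts, and between experts) reduce to the ordinary coefficient of determination. A minimal sketch with made-up counts for illustration:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination between reference and predicted counts."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical per-smear schistocyte percentages: expert vs automated counts.
expert = [2.0, 4.0, 6.0, 8.0]
model = [2.5, 3.5, 6.5, 7.5]
print(r_squared(expert, model))  # 0.95
```

An R2 of 0.76 against experts, with inter-expert agreement at 0.75, means the automated counts agree with an expert about as well as experts agree with each other.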


Subjects
Erythrocytes, Abnormal; Hematologic Diseases; Image Processing, Computer-Assisted; Humans; Erythrocytes, Abnormal/cytology; Hematologic Diseases/diagnostic imaging; Hematologic Diseases/pathology; Prognosis; Reproducibility of Results; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Machine Learning; Cell Shape
8.
Br J Radiol ; 96(1145): 20220704, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-36802348

ABSTRACT

OBJECTIVE: The study aims to evaluate the diagnostic efficacy of radiologists and radiology trainees with digital breast tomosynthesis (DBT) alone vs DBT plus synthesized view (SV), to understand whether DBT images alone are adequate to identify cancer lesions. METHODS: Fifty-five observers (30 radiologists and 25 radiology trainees) participated in reading a set of 35 cases (15 cancer), with 28 readers reading DBT and 27 readers reading DBT plus SV. The two groups of readers had similar experience in interpreting mammograms. The performance of participants in each reading mode was compared with the ground truth and reported in terms of specificity, sensitivity, and ROC AUC. The cancer detection rates at various levels of breast density, lesion types, and lesion sizes between 'DBT' and 'DBT + SV' were also analyzed. The difference in diagnostic accuracy of readers between the two reading modes was assessed using the Mann-Whitney U test; p < 0.05 indicated a significant result. RESULTS: There was no significant difference in the specificity (0.67-vs-0.65; p = 0.69), sensitivity (0.77-vs-0.71; p = 0.09), or ROC AUC (0.77-vs-0.73; p = 0.19) of radiologists reading DBT plus SV compared with radiologists reading DBT. A similar result was found in radiology trainees, with no significant difference in specificity (0.70-vs-0.63; p = 0.29), sensitivity (0.44-vs-0.55; p = 0.19), or ROC AUC (0.59-vs-0.62; p = 0.60) between the two reading modes. Radiologists and trainees obtained similar results in the two reading modes for cancer detection rate across levels of breast density, cancer types, and sizes of lesions (p > 0.05). CONCLUSION: The diagnostic performances of radiologists and radiology trainees with DBT alone and DBT plus SV were equivalent in identifying cancer and normal cases. ADVANCES IN KNOWLEDGE: DBT alone had diagnostic accuracy equivalent to DBT plus SV, which suggests that DBT could be considered as a sole modality without SV.
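The reader-level comparison here is a Mann-Whitney U test between two independent groups of observers. With SciPy this is one call; the per-reader AUC values below are made up purely for illustration:

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-reader ROC AUCs under the two reading modes.
auc_dbt = [0.70, 0.72, 0.74, 0.76, 0.78]
auc_dbt_sv = [0.71, 0.73, 0.75, 0.77, 0.79]

stat, p = mannwhitneyu(auc_dbt, auc_dbt_sv, alternative="two-sided")
# With these near-identical samples, p > 0.05: no evidence of a difference,
# mirroring the study's null result between reading modes.
```

The test is rank-based, so it makes no normality assumption about reader AUCs, which is why it suits small observer groups like those in this study.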


Subjects
Breast Neoplasms; Image Processing, Computer-Assisted; Mammography; Radiologists; Radiologists/standards; Radiologists/statistics & numerical data; Breast/diagnostic imaging; Breast/pathology; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Mammography/standards; Image Processing, Computer-Assisted/standards; Humans; Female; Sensitivity and Specificity
9.
IEEE J Biomed Health Inform ; 27(2): 992-1003, 2023 02.
Article in English | MEDLINE | ID: mdl-36378793

ABSTRACT

In computer-aided diagnosis and treatment planning, accurate segmentation of medical images plays an essential role, especially for hard regions such as boundaries, small objects, and background interference. However, existing segmentation loss functions, including distribution-, region- and boundary-based losses, cannot achieve satisfactory performance on these hard regions. In this paper, a boundary-sensitive loss function with a location constraint is proposed for hard-region segmentation in medical images, which provides three advantages: i) our Boundary-Sensitive loss (BS-loss) can automatically pay more attention to the hard-to-segment boundaries (e.g., thin structures and blurred boundaries), thus obtaining finer object boundaries; ii) BS-loss can also adjust its attention to small objects during training to segment them more accurately; and iii) our location constraint can alleviate the negative impact of background interference, through the distribution matching of pixels between the prediction and Ground Truth (GT) along each axis. By resorting to the proposed BS-loss and location constraint, hard regions in both the foreground and background are considered. Experimental results on three public datasets demonstrate the superiority of our method. Specifically, compared to the second-best method tested in this study, our method improves performance on hard regions in terms of Dice similarity coefficient (DSC) and 95% Hausdorff distance (95%HD) by up to 4.17% and 73%, respectively. In addition, it also achieves the best overall segmentation performance. Hence, we can conclude that our method can accurately segment these hard regions and improve the overall segmentation performance in medical images.
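The Dice similarity coefficient used for evaluation above has a standard definition; a minimal sketch over binary masks flattened to 0/1 sequences:

```python
def dice(pred, target):
    """Dice similarity coefficient between two binary masks (0/1 sequences)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return 2.0 * intersection / (sum(pred) + sum(target))

# Toy flattened masks: ground truth has 3 foreground pixels, prediction has 2,
# and they overlap on 2 of them.
mask_gt = [0, 1, 1, 1, 0, 0]
mask_pr = [0, 1, 1, 0, 0, 0]
print(dice(mask_pr, mask_gt))  # 2*2 / (2+3) = 0.8
```

Because Dice rewards overlap relative to total foreground size, it is dominated by large structures; that is why the paper pairs it with the 95% Hausdorff distance, which is boundary-sensitive.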


Subjects
Diagnosis, Computer-Assisted; Image Processing, Computer-Assisted; Humans; Diagnosis, Computer-Assisted/methods; Diagnosis, Computer-Assisted/standards; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Datasets as Topic
10.
Zebrafish ; 19(6): 213-217, 2022 12.
Article in English | MEDLINE | ID: mdl-36067119

ABSTRACT

This article assesses developments in automated phenotype pattern recognition: potential spikes in classification performance, even on the small-scale data sets common in biomedical research, and changes in the development effort and complexity facing researchers and practitioners. After reading, you will be aware of the benefits, the unreasonable effectiveness, and the ease of use of an automated end-to-end deep learning pipeline for classification tasks in biomedical perception systems.


Subjects
Image Processing, Computer-Assisted; Zebrafish; Animals; Image Processing, Computer-Assisted/standards; Phenotype; Zebrafish/classification; Zebrafish/genetics
11.
Comput Methods Programs Biomed ; 218: 106707, 2022 May.
Article in English | MEDLINE | ID: mdl-35255374

ABSTRACT

BACKGROUND AND OBJECTIVE: Heart disease poses a serious threat to human health and is a leading cause of death worldwide, and its incidence continues to rise. Today, cardiac magnetic resonance (CMR) imaging can provide a full range of structural and functional information about the heart, and has become an important tool for the diagnosis and treatment of heart disease. Therefore, improving the image resolution of CMR has important medical value for the diagnosis and assessment of heart disease. At present, most single-image super-resolution (SISR) reconstruction methods have serious problems, such as insufficient mining of feature information, difficulty determining the dependence of each channel of the feature map, and reconstruction error when reconstructing high-resolution images. METHODS: To solve these problems, we propose and implement a dual U-Net residual network (DURN) for super-resolution of CMR images. Specifically, we first propose a U-Net residual network (URN) model, which is divided into an up-branch and a down-branch. The up-branch is composed of residual blocks and up-blocks to extract and upsample deep features; the down-branch is composed of residual blocks and down-blocks to extract and downsample deep features. Building on the URN model, the dual U-Net residual network (DURN) combines the deep features extracted at the same position by the first and second URN through residual connections, making full use of the features extracted by the first URN to extract deeper features from low-resolution images.
RESULTS: When the scale factors are 2, 3, and 4, our DURN obtains 37.86 dB, 33.96 dB, and 31.65 dB on the Set5 dataset, which shows (i) a maximum improvement of 4.17 dB, 3.55 dB, and 3.22 dB over the Bicubic algorithm, and (ii) a minimum improvement of 0.34 dB, 0.14 dB, and 0.11 dB over the LapSRN algorithm. CONCLUSION: Comprehensive experimental results on benchmark datasets demonstrate that our proposed DURN not only achieves better peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values than other state-of-the-art SR image algorithms, but also reconstructs clearer super-resolution CMR images with richer details, edges, and texture.
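The decibel figures above follow the standard PSNR definition; a minimal sketch over flattened images, with the common 8-bit peak value of 255 assumed:

```python
import math

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images."""
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstruction)) / len(reference)
    return 10.0 * math.log10(peak ** 2 / mse)

# A reconstruction off by 1 gray level everywhere (MSE = 1) scores ~48.13 dB;
# super-resolution results in the 30-38 dB range correspond to larger errors.
ref = [10.0, 20.0, 30.0, 40.0]
rec = [11.0, 21.0, 31.0, 41.0]
print(round(psnr(ref, rec), 2))  # 48.13
```

Because PSNR is logarithmic in the mean squared error, the 0.34 dB gap over LapSRN reported above is a genuinely small error reduction, while the 4.17 dB gap over bicubic interpolation is substantial.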


Subjects
Heart Diseases/diagnostic imaging; Image Processing, Computer-Assisted/standards; Algorithms; Disease Progression; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Signal-to-Noise Ratio
12.
Med Image Anal ; 78: 102392, 2022 05.
Article in English | MEDLINE | ID: mdl-35235896

ABSTRACT

The propensity of task-based functional magnetic resonance imaging (T-fMRI) to large physiological fluctuations, measurement noise, and imaging artifacts entails longer scans and higher temporal resolution (trading off spatial resolution) to alleviate the effects of degradation. This paper focuses on methods for reducing scan times and enabling higher spatial resolution in T-fMRI. We propose a novel mixed-dictionary model combining (i) the task-based design matrix, (ii) a dictionary learned from resting-state fMRI, and (iii) an analytically defined wavelet frame. For model fitting, we propose a novel adaptation of the inference framework relying on variational Bayesian expectation maximization with nested minorization. We leverage the mixed-dictionary model coupled with variational inference to enable 2× shorter scan times in T-fMRI, improving activation-map estimates towards the same quality as those resulting from longer scans. We also propose a scheme with the potential to increase spatial resolution through temporally undersampled acquisition. Results on motor-task fMRI and gambling-task fMRI show that our framework leads to improved activation-map estimates over the state of the art.
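At the heart of any T-fMRI model sits a regression of each voxel's time series on the task design matrix. The least-squares sketch below shows that baseline step only; it is not the paper's variational mixed-dictionary fit, and all values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design matrix: a boxcar task regressor plus an intercept, 40 time points.
task = np.tile([0.0] * 5 + [1.0] * 5, 4)
X = np.column_stack([task, np.ones_like(task)])

# Synthetic voxel: activation amplitude 2.0, baseline 10.0, plus noise.
y = X @ np.array([2.0, 10.0]) + 0.1 * rng.standard_normal(task.size)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[0] is this voxel's activation-map estimate (close to 2.0)
```

The mixed-dictionary model augments this design matrix with learned resting-state atoms and wavelet atoms, so nuisance structure is absorbed by those extra columns instead of corrupting the task coefficient.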


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Algorithms; Bayes Theorem; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Magnetic Resonance Imaging/methods; Time Factors
13.
Med Image Anal ; 78: 102395, 2022 05.
Article in English | MEDLINE | ID: mdl-35231851

ABSTRACT

Medical image segmentation can provide a reliable basis for further clinical analysis and disease diagnosis. With the development of convolutional neural networks (CNNs), medical image segmentation performance has advanced significantly. However, most existing CNN-based methods often produce unsatisfactory segmentation masks without accurate object boundaries. This problem is caused by limited context information and inadequate discriminative feature maps after consecutive pooling and convolution operations. Additionally, because medical images are characterized by high intra-class variation, inter-class indistinction, and noise, extracting powerful context and aggregating discriminative features for fine-grained segmentation remain challenging. In this study, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation to capture richer context and preserve fine spatial information, built on an encoder-decoder architecture. In each stage of the encoder sub-network, a proposed pyramid edge extraction module first obtains multi-granularity edge information. Then a newly designed mini multi-task learning module jointly learns to segment object masks and detect lesion boundaries, with a new interactive attention layer introduced to bridge the two tasks. In this way, information complementarity between the tasks is achieved, effectively leveraging boundary information to offer strong cues for better segmentation prediction. Finally, a cross-feature fusion module acts to selectively aggregate multi-level features from the entire encoder sub-network. By cascading these three modules, richer context and fine-grained features of each stage are encoded and then delivered to the decoder. The results of extensive experiments on five datasets show that the proposed BA-Net outperforms state-of-the-art techniques.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Learning
14.
Sci Rep ; 12(1): 2839, 2022 02 18.
Article in English | MEDLINE | ID: mdl-35181681

ABSTRACT

We implemented a two-dimensional convolutional neural network (CNN) for classification of polar maps extracted from the Carimas (Turku PET Centre, Finland) software used for myocardial perfusion analysis. We used 138 polar maps, in JPEG format, from a 15O-H2O stress perfusion study of patients classified as ischemic or non-ischemic based on the finding of obstructive coronary artery disease (CAD) on invasive coronary angiography. The CNN was evaluated against the clinical interpretation. Classification accuracy was evaluated with accuracy (ACC), area under the receiver operating characteristic curve (AUC), F1 score (F1S), sensitivity (SEN), specificity (SPE), and precision (PRE). The CNN had a median ACC of 0.8261, AUC of 0.8058, F1S of 0.7647, SEN of 0.6500, SPE of 0.9615, and PRE of 0.9286. In comparison, clinical interpretation had an ACC of 0.8696, AUC of 0.8558, F1S of 0.8333, SEN of 0.7500, SPE of 0.9615, and PRE of 0.9375. The CNN classified only 2 cases differently than the clinical interpretation. The clinical interpretation and the CNN had similar accuracy in classifying false positives and true negatives. Classification of ischemia is feasible in 15O-H2O stress perfusion imaging using JPEG polar maps alone with a custom CNN and may be useful for the detection of obstructive CAD.
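All of the reported point metrics derive from a single confusion matrix. As a sanity check, the hypothetical counts below (the abstract does not give the actual confusion matrix) reproduce the reported CNN values exactly: ACC 0.8261, SEN 0.65, SPE 0.9615, PRE 0.9286, F1 0.7647:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, precision, and F1 from counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sen = tp / (tp + fn)
    spe = tn / (tn + fp)
    pre = tp / (tp + fp)
    f1 = 2 * pre * sen / (pre + sen)
    return acc, sen, spe, pre, f1

# Hypothetical counts consistent with the reported CNN medians:
# 13 TP, 1 FP, 25 TN, 7 FN over 46 classified maps.
acc, sen, spe, pre, f1 = classification_metrics(tp=13, fp=1, tn=25, fn=7)
```

Seen this way, the CNN's weak spot is sensitivity (7 of 20 ischemic cases missed under these counts), while its precision and specificity are nearly as high as the clinical interpretation's.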


Subjects
Coronary Artery Disease/diagnostic imaging; Heart/diagnostic imaging; Image Processing, Computer-Assisted/standards; Ischemia/diagnostic imaging; Aged; Coronary Angiography; Coronary Artery Disease/diagnosis; Coronary Artery Disease/physiopathology; Female; Finland/epidemiology; Heart/physiopathology; Humans; Ischemia/diagnosis; Ischemia/pathology; Male; Middle Aged; Myocardial Perfusion Imaging/classification; Myocardial Perfusion Imaging/standards; Neural Networks, Computer; Software
15.
Neuroimage ; 249: 118901, 2022 04 01.
Article in English | MEDLINE | ID: mdl-35026425

ABSTRACT

INTRODUCTION: Full quantification of positron emission tomography (PET) data requires an input function. This generally means arterial blood sampling, which is invasive, labor-intensive, and burdensome. There is no current standardized method to fully quantify PET radiotracers with irreversible kinetics in the absence of blood data. Here, we present Source-to-Target Automatic Rotating Estimation (STARE), a novel, data-driven approach to quantify the net influx rate (Ki) of irreversible PET radiotracers that requires only individual-level PET data and no blood data. We validate STARE with human [18F]FDG PET scans and assess its performance using simulations. METHODS: STARE builds upon a source-to-target tissue model, where the tracer time-activity curves (TACs) in multiple "target" regions are expressed at once as a function of a "source" region, based on the two-tissue irreversible compartment model, and target-region Ki is separated from source Ki by fitting the source-to-target model across all target regions simultaneously. To ensure identifiability, data-driven, subject-specific anchoring is used in the STARE minimization, which takes advantage of the PET signal in a vasculature cluster in the field of view (FOV) that is automatically extracted and partial-volume-corrected. To avoid the need for any a priori determination of a single source region, each of the considered regions acts in turn as the source, and a final Ki is estimated in each region by averaging the estimates obtained in each source rotation. RESULTS: In a large dataset of human [18F]FDG scans (N = 69), STARE Ki estimates were correlated with corresponding arterial blood-based Ki estimates (r = 0.80), with an overall regression slope of 0.88, and were precisely estimated, as assessed by comparing STARE Ki estimates across several runs of the algorithm (coefficient of variation across runs = 6.74 ± 2.48%).
In simulations, STARE Ki estimates were largely robust to factors that influence the individualized anchoring used within its algorithm. CONCLUSION: Through simulations and application to [18F]FDG PET data, feasibility is demonstrated for STARE blood-free, data-driven quantification of Ki. Future work will include applying STARE to PET data obtained with a portable PET camera and to other irreversible radiotracers.
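For context, the classical blood-based route to Ki for irreversible tracers is the Patlak plot: after equilibration, the ratio of tissue to plasma activity grows linearly in "normalized time" (integrated plasma activity over instantaneous plasma activity), with slope Ki. The sketch below fits that slope by ordinary least squares on synthetic values; it illustrates the benchmark STARE is compared against, not STARE's blood-free estimator:

```python
def patlak_ki(normalized_time, tissue_over_plasma):
    """Slope of the Patlak plot (ordinary least squares) = net influx rate Ki."""
    n = len(normalized_time)
    mx = sum(normalized_time) / n
    my = sum(tissue_over_plasma) / n
    num = sum((x - mx) * (y - my)
              for x, y in zip(normalized_time, tissue_over_plasma))
    den = sum((x - mx) ** 2 for x in normalized_time)
    return num / den

# Synthetic linear-phase data with Ki = 0.03 and intercept (V0) = 0.5.
xs = [10.0, 20.0, 30.0, 40.0, 50.0]
ys = [0.5 + 0.03 * x for x in xs]
print(patlak_ki(xs, ys))  # ≈ 0.03
```

The Patlak approach needs the arterial plasma curve to form both axes; STARE's contribution is recovering a comparable Ki without that blood data.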


Subjects
Cerebellum/diagnostic imaging; Cerebral Cortex/diagnostic imaging; Fluorodeoxyglucose F18/pharmacokinetics; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Radiopharmaceuticals/pharmacokinetics; Adult; Humans; Image Processing, Computer-Assisted/standards; Models, Theoretical; Positron-Emission Tomography/standards
16.
Trends Cell Biol ; 32(4): 295-310, 2022 04.
Article in English | MEDLINE | ID: mdl-35067424

ABSTRACT

Single-nucleus segmentation is a frequent challenge in microscopy image processing, since it is the first step of many quantitative data analysis pipelines. The quality of tracking single cells, extracting features, or classifying cellular phenotypes strongly depends on segmentation accuracy. Worldwide competitions have been held aiming to improve segmentation, and recent years have brought significant improvements: large annotated datasets are now freely available, several 2D segmentation strategies have been extended to 3D, and deep learning approaches have increased accuracy. However, even today, no generally accepted solution or benchmarking platform exists. We review the most recent single-cell segmentation tools and provide an interactive method browser to select the most appropriate solution.


Subjects
Image Processing, Computer-Assisted; Microscopy; Cell Nucleus; Humans; Image Processing, Computer-Assisted/standards; Microscopy/methods; Microscopy/trends; Single-Cell Analysis/methods
17.
Fertil Steril ; 117(3): 528-535, 2022 03.
Article in English | MEDLINE | ID: mdl-34998577

ABSTRACT

OBJECTIVE: To perform a series of analyses characterizing an artificial intelligence (AI) model for ranking blastocyst-stage embryos. The primary objective was to evaluate the benefit of the model for predicting clinical pregnancy, whereas the secondary objective was to identify limitations that may impact clinical use. DESIGN: Retrospective study. SETTING: Consortium of 11 assisted reproductive technology centers in the United States. PATIENT(S): Static images of 5,923 transferred blastocysts and 2,614 nontransferred aneuploid blastocysts. INTERVENTION(S): None. MAIN OUTCOME MEASURE(S): Prediction of clinical pregnancy (fetal heartbeat). RESULT(S): The area under the curve of the AI model ranged from 0.6 to 0.7 and outperformed manual morphology grading overall and on a per-site basis. A bootstrapped analysis predicted pregnancy-rate improvements of +5% to +12% per site when AI ranking was used instead of manual grading with an inverted microscope. One site that used a low-magnification stereo zoom microscope did not show predicted improvement with the AI. Visualization techniques and attribution algorithms revealed that the features learned by the AI model largely overlap with the features of manual grading systems. Two sources of bias, relating to the type of microscope and the presence of embryo-holding micropipettes, were identified and mitigated. The analysis of AI scores in relation to pregnancy rates showed that score differences of ≥0.1 (10%) correspond with improved pregnancy rates, whereas score differences of <0.1 may not be clinically meaningful. CONCLUSION(S): This study demonstrates the potential of AI for ranking blastocyst-stage embryos and highlights potential limitations related to image quality, bias, and granularity of scores.
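The headline metric, ROC area under the curve, can be computed directly from its Mann-Whitney interpretation: the probability that a randomly chosen positive (a transfer that led to a fetal heartbeat) is scored above a randomly chosen negative, with ties counting one half. A minimal sketch with made-up scores and labels:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the fraction of positive-negative pairs ranked correctly."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]  # model scores for 7 embryos
labels = [1, 1, 0, 1, 0, 0, 0]                # 1 = clinical pregnancy
auc = roc_auc(scores, labels)
print(round(float(auc), 3))  # 11 of 12 positive-negative pairs ranked correctly
```

The same pairwise reading underlies the paper's granularity finding: score differences below 0.1 may not translate into a reliable ranking of one embryo over another.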


Subjects
Artificial Intelligence/standards; Blastocyst/cytology; Embryo Transfer/standards; Image Processing, Computer-Assisted/standards; Blastocyst/physiology; Cohort Studies; Databases, Factual/standards; Embryo Transfer/methods; Female; Humans; Image Processing, Computer-Assisted/methods; Microscopy/methods; Microscopy/standards; Pregnancy; Pregnancy Rate/trends; Retrospective Studies
18.
Hum Brain Mapp ; 43(1): 234-243, 2022 01.
Article in English | MEDLINE | ID: mdl-33067842

ABSTRACT

As stroke mortality rates decrease, there has been a surge of effort to study poststroke dementia (PSD) to improve long-term quality of life for stroke survivors. Hippocampal volume may be an important neuroimaging biomarker in poststroke dementia, as it has been associated with many other forms of dementia. However, studying hippocampal volume using MRI requires hippocampal segmentation. Advances in automated segmentation methods have allowed for studying the hippocampus on a large scale, which is important for robust results in the heterogeneous stroke population. However, most of these automated methods use a single atlas-based approach and may fail in the presence of severe structural abnormalities common in stroke. Hippodeep, a new convolutional neural network-based hippocampal segmentation method, does not rely solely on a single atlas-based approach and thus may be better suited for stroke populations. Here, we compared quality control and the accuracy of segmentations generated by Hippodeep and two well-accepted hippocampal segmentation methods on stroke MRIs (FreeSurfer 6.0 whole hippocampus and FreeSurfer 6.0 sum of hippocampal subfields). Quality control was performed using a stringent protocol for visual inspection of the segmentations, and accuracy was measured as volumetric correlation with manual segmentations. Hippodeep performed significantly better than both FreeSurfer methods in terms of quality control. All three automated segmentation methods had good correlation with manual segmentations and no one method was significantly more correlated than the others. Overall, this study suggests that both Hippodeep and FreeSurfer may be useful for hippocampal segmentation in stroke rehabilitation research, but Hippodeep may be more robust to stroke lesion anatomy.
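Accuracy in this comparison is volumetric correlation, i.e., the Pearson correlation between automated and manually traced hippocampal volumes across subjects. A minimal sketch with synthetic volumes (all numbers are illustrative):

```python
import numpy as np

def volumetric_correlation(auto_vol, manual_vol):
    """Pearson r between automated and manual volumes across subjects."""
    return float(np.corrcoef(auto_vol, manual_vol)[0, 1])

manual = np.array([3200.0, 3500.0, 2900.0, 4100.0, 3700.0])        # mm^3, manual tracing
auto = 1.05 * manual + np.array([40.0, -25.0, 60.0, -80.0, 15.0])  # scaled + noisy automated
r = volumetric_correlation(auto, manual)
print(round(r, 3))
```

Note that a high r can coexist with a systematic volume offset (the 1.05 scaling above): correlation measures relative ordering, not absolute agreement, which is the appropriate criterion when downstream analyses use volumes as covariates.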


Subjects
Hippocampus/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Neuroimaging/methods; Stroke/diagnostic imaging; Datasets as Topic; Hippocampus/pathology; Humans; Image Processing, Computer-Assisted/standards; Magnetic Resonance Imaging/standards; Neuroimaging/standards; Quality Control; Stroke/pathology
19.
Hum Brain Mapp ; 43(3): 1112-1128, 2022 02 15.
Article in English | MEDLINE | ID: mdl-34773436

ABSTRACT

Task-fMRI researchers have great flexibility as to how they analyze their data, with multiple methodological options to choose from at each stage of the analysis workflow. While the development of tools and techniques has broadened our horizons for comprehending the complexities of the human brain, a growing body of research has highlighted the pitfalls of such methodological plurality. In a recent study, we found that the choice of software package used to run the analysis pipeline can have a considerable impact on the final group-level results of a task-fMRI investigation (Bowring et al., 2019, BMN). Here we revisit our work, seeking to identify the stages of the pipeline where the greatest variation between analysis software is induced. We carry out further analyses on the three datasets evaluated in BMN, employing a common processing strategy across parts of the analysis workflow and then utilizing procedures from three software packages (AFNI, FSL, and SPM) across the remaining steps of the pipeline. We use quantitative methods to compare the statistical maps and isolate the main stages of the workflow where the three packages diverge. Across all datasets, we find that variation between the packages' results is largely attributable to a handful of individual analysis stages, and that these sources of variability were heterogeneous across the datasets (e.g., choice of first-level signal model had the most impact for the balloon analog risk task dataset, while first-level noise model and group-level model were more influential for the false belief and antisaccade task datasets, respectively). We also observe areas of the analysis workflow where changing the software package causes minimal differences in the final results, finding that the group-level results were largely unaffected by which software package was used to model the low-frequency fMRI drifts.
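Two common ways to quantify between-package agreement in this literature are the Pearson correlation of unthresholded statistic maps and the Dice overlap of their suprathreshold regions. A minimal sketch on synthetic flattened "maps" (the data, the 0.9 coupling, and the z = 1.96 threshold are all illustrative, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
map_a = rng.normal(size=1000)                       # package A z-map (flattened voxels)
map_b = 0.9 * map_a + 0.1 * rng.normal(size=1000)   # package B: similar but not identical

# Agreement of the continuous maps
pearson = float(np.corrcoef(map_a, map_b)[0, 1])

# Agreement of the thresholded (suprathreshold) maps
thr = 1.96
a_sig, b_sig = map_a > thr, map_b > thr
dice = 2.0 * np.logical_and(a_sig, b_sig).sum() / (a_sig.sum() + b_sig.sum())

print(round(pearson, 3), round(float(dice), 3))
```

As this toy case suggests, thresholded overlap is typically far more sensitive to small pipeline differences than whole-map correlation, which is one reason stage-by-stage comparisons of the kind reported here are informative.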


Subjects
Brain Mapping; Brain/diagnostic imaging; Brain/physiology; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Brain/anatomy & histology; Brain Mapping/methods; Brain Mapping/standards; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards
20.
Hum Brain Mapp ; 43(1): 207-233, 2022 01.
Article in English | MEDLINE | ID: mdl-33368865

ABSTRACT

Structural hippocampal abnormalities are common in many neurological and psychiatric disorders, and variation in hippocampal measures is related to cognitive performance and other complex phenotypes such as stress sensitivity. Hippocampal subregions are increasingly studied, as automated algorithms have become available for mapping and volume quantification. In the context of the Enhancing Neuro Imaging Genetics through Meta Analysis Consortium, several Disease Working Groups are using the FreeSurfer software to analyze hippocampal subregion (subfield) volumes in patients with neurological and psychiatric conditions along with data from matched controls. In this overview, we explain the algorithm's principles, summarize measurement reliability studies, and demonstrate two additional aspects (subfield autocorrelation and volume/reliability correlation) with illustrative data. We then explain the rationale for a standardized hippocampal subfield segmentation quality control (QC) procedure for improved pipeline harmonization. To guide researchers to make optimal use of the algorithm, we discuss how global size and age effects can be modeled, how QC steps can be incorporated and how subfields may be aggregated into composite volumes. This discussion is based on a synopsis of 162 published neuroimaging studies (01/2013-12/2019) that applied the FreeSurfer hippocampal subfield segmentation in a broad range of domains including cognition and healthy aging, brain development and neurodegeneration, affective disorders, psychosis, stress regulation, neurotoxicity, epilepsy, inflammatory disease, childhood adversity and posttraumatic stress disorder, and candidate and whole genome (epi-)genetics. Finally, we highlight points where FreeSurfer-based hippocampal subfield studies may be optimized.
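One recommended modeling step, adjusting subfield volumes for global head size and age, can be implemented by residualizing each volume on intracranial volume (ICV) and age before aggregating subfields into composites. A minimal regression-based sketch (all data and variable names are synthetic and illustrative, not FreeSurfer output):

```python
import numpy as np

def adjust_volume(volume, icv, age):
    """Residualize volume on ICV and age (OLS), keeping the original mean."""
    X = np.column_stack([np.ones_like(icv), icv, age])
    beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
    return volume - X @ beta + volume.mean()

icv = np.array([1450.0, 1500.0, 1600.0, 1380.0, 1550.0])   # intracranial volume, cm^3
age = np.array([25.0, 40.0, 33.0, 60.0, 48.0])             # years
ca1 = 0.4 * icv - 1.2 * age + np.array([5.0, -3.0, 2.0, -1.0, 4.0])  # toy CA1 volumes
ca3 = 0.1 * icv - 0.3 * age                                          # toy CA3 volumes

composite = ca1 + ca3                # e.g., a cornu ammonis composite volume
composite_adj = adjust_volume(composite, icv, age)
print(np.round(composite_adj, 1))
```

Whether to aggregate before or after adjustment, and whether to use residualization or a proportional (volume/ICV) correction, are among the modeling choices the overview discusses.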


Subjects
Hippocampus/anatomy & histology; Hippocampus/diagnostic imaging; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neuroimaging; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Magnetic Resonance Imaging/methods; Magnetic Resonance Imaging/standards; Multicenter Studies as Topic; Neuroimaging/methods; Neuroimaging/standards; Quality Control