Results 1 - 20 of 1,369
1.
Hum Brain Mapp ; 45(10): e26778, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38980175

ABSTRACT

Brain activity continuously fluctuates over time, even when the brain is in controlled (e.g., experimentally induced) states. Recent years have seen increasing interest in understanding the complexity of these temporal variations, for example with respect to developmental changes in brain function or between-person differences in healthy and clinical populations. However, the psychometric reliability of brain signal variability and complexity measures, which is an important precondition for robust individual differences as well as longitudinal research, is not yet sufficiently studied. We examined reliability (split-half correlations) and test-retest correlations for task-free (resting-state) BOLD fMRI, as well as split-half correlations for seven functional task data sets from the Human Connectome Project. We observed good to excellent split-half reliability for temporal variability measures derived from rest and task fMRI activation time series (standard deviation, mean absolute successive difference, mean squared successive difference), and moderate test-retest correlations for the same variability measures under rest conditions. Brain signal complexity estimates (several entropy and dimensionality measures) showed moderate to good reliability under both rest and task activation conditions. We also calculated the same measures for time-resolved (dynamic) functional connectivity time series and observed moderate to good reliability for variability measures, but poor reliability for complexity measures derived from functional connectivity time series. Global (i.e., mean across cortical regions) measures tended to show higher reliability than region-specific variability or complexity estimates. Larger subcortical regions showed reliability similar to cortical regions, but small regions showed lower reliability, especially for complexity measures. Lastly, we show that reliability scores depend only weakly on differences in scan length, and we replicate our results across different parcellation and denoising strategies. These results suggest that the variability and complexity of BOLD activation time series are robust measures well suited for individual differences research. Temporal variability of global functional connectivity over time provides an important novel approach to robustly quantifying the dynamics of brain function. PRACTITIONER POINTS: Variability and complexity measures of BOLD activation show good split-half reliability and moderate test-retest reliability. Measures of variability of global functional connectivity over time can robustly quantify neural dynamics. Length of fMRI data has only a minor effect on reliability.
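The variability measures named above (standard deviation, mean absolute successive difference, mean squared successive difference) and a split-half correlation are straightforward to compute. A minimal sketch, not the authors' code; the odd/even-volume split and the simulated data are chosen here purely for illustration:

```python
import numpy as np

def variability_measures(ts):
    """SD, mean absolute successive difference (MASD), and mean squared
    successive difference (MSSD) of a 1-D BOLD time series."""
    diffs = np.diff(ts)
    return {
        "sd": np.std(ts, ddof=1),
        "masd": np.mean(np.abs(diffs)),
        "mssd": np.mean(diffs ** 2),
    }

def split_half_reliability(data):
    """Split each subject's time series into odd and even volumes,
    compute a measure on each half, and correlate the two halves
    across subjects. `data` has shape (n_subjects, n_timepoints)."""
    first = [variability_measures(ts[0::2])["sd"] for ts in data]
    second = [variability_measures(ts[1::2])["sd"] for ts in data]
    return np.corrcoef(first, second)[0, 1]

# Simulated cohort: each subject has a stable characteristic scale.
rng = np.random.default_rng(0)
data = rng.standard_normal((20, 200)) * rng.uniform(0.5, 2.0, (20, 1))
print(split_half_reliability(data))
```

Because each simulated subject has a stable characteristic fluctuation scale, the two halves agree well across subjects, which is exactly what a high split-half reliability captures.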


Subject(s)
Brain , Connectome , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/standards , Magnetic Resonance Imaging/methods , Reproducibility of Results , Brain/physiology , Brain/diagnostic imaging , Connectome/standards , Connectome/methods , Oxygen/blood , Male , Female , Rest/physiology , Adult , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Brain Mapping/methods , Brain Mapping/standards
2.
Transl Vis Sci Technol ; 13(6): 16, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38904611

ABSTRACT

Purpose: This study enhances Meibomian gland (MG) infrared image analysis in dry eye (DE) research through artificial intelligence (AI). It comprises two main stages, automated eyelid detection and tarsal plate segmentation, to standardize meibography image analysis. The goal is to address limitations of existing assessment methods, bridge the gap between curated and real-world datasets, and standardize MG image analysis. Methods: The approach involves a two-stage process: automated eyelid detection and tarsal plate segmentation. In the first stage, an AI model trained on curated data identifies relevant eyelid areas in non-curated datasets. The second stage refines the eyelid area in meibography images, enabling precise comparisons between normal and DE subjects. This approach also includes specular reflection removal and tarsal plate mask refinement. Results: The methodology achieved a promising instance-wise accuracy of 80.8% for distinguishing meibography images from 399 DE and 235 non-DE subjects. By integrating diverse datasets and refining the area of interest, this approach enhances meibography feature extraction accuracy. Dimension reduction through Uniform Manifold Approximation and Projection (UMAP) allows feature visualization, revealing distinct clusters for DE and non-DE phenotypes. Conclusions: The AI-driven methodology presented here quantifies and classifies meibography image features and standardizes the analysis process. By bootstrapping the model from curated datasets, this methodology addresses real-world dataset challenges to enhance the accuracy of meibography image feature extraction. Translational Relevance: The study presents a standardized method for meibography image analysis that could serve as a valuable tool in facilitating more targeted investigations into MG characteristics.


Subject(s)
Artificial Intelligence , Dry Eye Syndromes , Meibomian Glands , Humans , Dry Eye Syndromes/diagnostic imaging , Meibomian Glands/diagnostic imaging , Female , Male , Middle Aged , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Adult , Diagnostic Techniques, Ophthalmological/standards , Aged , Infrared Rays
3.
Hum Brain Mapp ; 45(9): e26721, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38899549

ABSTRACT

With the rise of open data, identifiability of individuals based on 3D renderings obtained from routine structural magnetic resonance imaging (MRI) scans of the head has become a growing privacy concern. To protect subject privacy, several algorithms have been developed to de-identify imaging data using blurring, defacing, or refacing. Completely removing facial structures provides the best re-identification protection but can significantly impact post-processing steps, like brain morphometry. As an alternative, refacing methods that replace individual facial structures with generic templates have a smaller effect on the geometry and intensity distribution of the original scans and can provide more consistent post-processing results, at the price of higher re-identification risk and computational complexity. In the current study, we propose a novel method for anonymized face generation for defaced 3D T1-weighted scans based on a 3D conditional generative adversarial network. To evaluate the performance of the proposed de-identification tool, a comparative study was conducted between several existing defacing and refacing tools, with two different segmentation algorithms (FAST and Morphobox). The aim was to evaluate (i) the impact on brain morphometry reproducibility, (ii) the re-identification risk, (iii) the balance between (i) and (ii), and (iv) the processing time. The proposed method takes 9 s for face generation and is suitable for recovering consistent post-processing results after defacing.


Subject(s)
Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Adult , Brain/diagnostic imaging , Brain/anatomy & histology , Male , Female , Neural Networks, Computer , Imaging, Three-Dimensional/methods , Neuroimaging/methods , Neuroimaging/standards , Data Anonymization , Young Adult , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Algorithms
4.
Hum Brain Mapp ; 45(7): e26692, 2024 May.
Article in English | MEDLINE | ID: mdl-38712767

ABSTRACT

In neuroimaging studies, combining data collected from multiple study sites or scanners is becoming common to increase the reproducibility of scientific discoveries. At the same time, unwanted variations arise from using different scanners (inter-scanner biases), which need to be corrected before downstream analyses to facilitate replicable research and prevent spurious findings. While statistical harmonization methods such as ComBat have become popular for mitigating inter-scanner biases in neuroimaging, recent methodological advances have shown that harmonizing heterogeneous covariances results in higher data quality. In vertex-level cortical thickness data, heterogeneity in spatial autocorrelation is a critical factor that affects covariance heterogeneity. Our work proposes a new statistical harmonization method called spatial autocorrelation normalization (SAN), which yields vertex-level cortical thickness data with homogeneous covariance across different scanners. We use an explicit Gaussian process to characterize scanner-invariant and scanner-specific variations and to reconstruct spatially homogeneous data across scanners. SAN is computationally feasible and easily allows the integration of existing harmonization methods. We demonstrate the utility of the proposed method using cortical thickness data from the Social Processes Initiative in the Neurobiology of the Schizophrenia(s) (SPINS) study. SAN is publicly available as an R package.
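SAN itself models spatial autocorrelation with a Gaussian process, which is beyond a short sketch. As an illustration of the simpler location-scale style of harmonization that methods such as ComBat perform (and that SAN is designed to complement, not replace), a toy version might look like:

```python
import numpy as np

def location_scale_harmonize(data, scanner_ids):
    """Toy location-scale harmonization: z-score each feature within
    scanner, then restore the pooled mean and SD. This removes
    per-scanner mean/variance offsets but, unlike SAN, does nothing
    about spatial autocorrelation or covariance structure."""
    data = np.asarray(data, dtype=float)
    out = np.empty_like(data)
    pooled_mean, pooled_sd = data.mean(axis=0), data.std(axis=0)
    for s in np.unique(scanner_ids):
        rows = scanner_ids == s
        m, sd = data[rows].mean(axis=0), data[rows].std(axis=0)
        out[rows] = (data[rows] - m) / sd * pooled_sd + pooled_mean
    return out

# Two simulated "scanners" with different offsets and scales.
rng = np.random.default_rng(1)
scanner = np.array([0] * 50 + [1] * 50)
raw = np.where(scanner[:, None] == 0,
               rng.normal(0.0, 1.0, (100, 4)),
               rng.normal(2.0, 3.0, (100, 4)))
harmonized = location_scale_harmonize(raw, scanner)
```

After harmonization, both scanner groups share the pooled mean and SD exactly; any remaining differences in covariance are what covariance-aware methods like SAN target.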


Subject(s)
Cerebral Cortex , Magnetic Resonance Imaging , Schizophrenia , Humans , Magnetic Resonance Imaging/standards , Magnetic Resonance Imaging/methods , Schizophrenia/diagnostic imaging , Schizophrenia/pathology , Cerebral Cortex/diagnostic imaging , Cerebral Cortex/anatomy & histology , Neuroimaging/methods , Neuroimaging/standards , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Male , Female , Adult , Normal Distribution , Cerebral Cortical Thickness
5.
Hippocampus ; 34(6): 302-308, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38593279

ABSTRACT

Researchers who study the human hippocampus are naturally interested in how its subfields function. However, many researchers are precluded from examining subfields because their manual delineation from magnetic resonance imaging (MRI) scans (still the gold-standard approach) is time-consuming and requires significant expertise. To help ameliorate this issue, we present here two protocols, one for 3T MRI and the other for 7T MRI, that permit automated hippocampus segmentation into six subregions, namely dentate gyrus/cornu ammonis (CA)4, CA2/3, CA1, subiculum, pre/parasubiculum, and uncus, along the entire length of the hippocampus. These protocols are particularly notable relative to existing resources in that they were trained and tested using large numbers of healthy young adults (n = 140 at 3T, n = 40 at 7T) whose hippocampi were manually segmented by experts from MRI scans. Using inter-rater reliability analyses, we showed that the quality of the automated segmentations produced by these protocols was high and comparable to that of expert manual segmenters. We provide full open access to the automated protocols and anticipate they will save hippocampus researchers a significant amount of time. They could also help to catalyze subfield research, which is essential for gaining a full understanding of how the hippocampus functions.
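The abstract does not spell out its agreement metric, but segmentation agreement between raters (or between automated and manual masks) is conventionally summarized by spatial overlap, most commonly the Dice coefficient. A minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    1.0 means perfect agreement, 0.0 means no overlap."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both raters marked nothing: treat as agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2-D example: an automated mask vs a slightly smaller manual one.
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True       # 36 voxels
manual = np.zeros((10, 10), dtype=bool)
manual[3:8, 2:8] = True     # 30 voxels, fully inside `auto`
print(dice(auto, manual))   # 2*30 / (36+30) ≈ 0.909
```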


Subject(s)
Hippocampus , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Hippocampus/diagnostic imaging , Male , Adult , Female , Young Adult , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Reproducibility of Results
6.
Neuroimage ; 292: 120617, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38636639

ABSTRACT

A primary challenge in data-driven analysis is balancing the poor generalizability of population-based research against the characterization of subject-, study-, and population-specific variability. We previously introduced a fully automated spatially constrained independent component analysis (ICA) framework called NeuroMark and its functional MRI (fMRI) template. NeuroMark has been successfully applied in numerous studies, identifying brain markers reproducible across datasets and disorders. The first NeuroMark template was constructed based on young adult cohorts. We recently expanded on this initiative by creating a standardized normative multi-spatial-scale functional template using over 100,000 subjects, aiming to improve generalizability and comparability across studies involving diverse cohorts. While a unified template across the lifespan is desirable, a comprehensive investigation of the similarities and differences between components from different age populations might help systematically transform our understanding of the human brain by revealing the most well-replicated and variable network features throughout the lifespan. In this work, we introduce two significant expansions of the NeuroMark templates: first, by generating replicable fMRI templates for infant, adolescent, and aging cohorts, and second, by incorporating structural MRI (sMRI) and diffusion MRI (dMRI) modalities. Specifically, we built spatiotemporal fMRI templates based on 6,000 resting-state scans from four datasets. This is the first attempt to create robust ICA templates covering dynamic brain development across the lifespan. For the sMRI and dMRI data, we used two large publicly available datasets including more than 30,000 scans to build reliable templates. We employed a spatial similarity analysis to identify replicable templates and to investigate the degree to which unique and similar patterns are reflected in different age populations. Our results suggest remarkably high similarity among the resulting adapted components, even across extreme age differences. With the new templates, the NeuroMark framework allows us to perform age-specific adaptations and to capture features adaptable to each modality, thereby facilitating biomarker identification across brain disorders. In sum, the present work demonstrates the generalizability of NeuroMark templates and suggests the potential of the new templates to boost accuracy in mental health research and advance our understanding of lifespan and cross-modal alterations.


Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Adult , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Brain/diagnostic imaging , Adolescent , Young Adult , Male , Aged , Female , Middle Aged , Infant , Child , Aging/physiology , Child, Preschool , Reproducibility of Results , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Aged, 80 and over , Neuroimaging/methods , Neuroimaging/standards , Diffusion Magnetic Resonance Imaging/methods , Diffusion Magnetic Resonance Imaging/standards
7.
Neuroimage Clin ; 42: 103585, 2024.
Article in English | MEDLINE | ID: mdl-38531165

ABSTRACT

Resting-state functional magnetic resonance imaging (rsfMRI) provides researchers and clinicians with a powerful tool to examine functional connectivity across large-scale brain networks, with ever-increasing applications to the study of neurological disorders, such as traumatic brain injury (TBI). While rsfMRI holds unparalleled promise in systems neuroscience, its acquisition and analysis methods vary across research groups, resulting in a literature that is challenging to integrate and interpret. The focus of this narrative review is to address the primary methodological issues, including investigator decision points, in the application of rsfMRI to study the consequences of TBI. As part of the ENIGMA Brain Injury working group, we have collaborated to identify a minimum set of recommendations designed to produce results that are reliable, harmonizable, and reproducible for the TBI imaging research community. Part one of this review provides the results of a literature search of current rsfMRI studies of TBI, highlighting key design considerations and data processing pipelines. Part two outlines seven data acquisition, processing, and analysis recommendations with the goal of maximizing study reliability and between-site comparability, while preserving investigator autonomy. Part three summarizes new directions and opportunities for future rsfMRI studies in TBI patients. The goal is to galvanize the TBI community to reach consensus on a set of rigorous and reproducible methods, and to increase analytical transparency and data sharing to address the reproducibility crisis in the field.


Subject(s)
Brain Injuries, Traumatic , Magnetic Resonance Imaging , Humans , Brain Injuries, Traumatic/diagnostic imaging , Brain Injuries, Traumatic/physiopathology , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Reproducibility of Results , Brain/diagnostic imaging , Brain/physiopathology , Rest/physiology , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Brain Mapping/methods , Brain Mapping/standards
8.
J Neurosci Methods ; 406: 110112, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38508496

ABSTRACT

BACKGROUND: Visualizing edges is critical for neuroimaging. For example, edge maps enable quality assurance for the automatic alignment of an image from one modality (or individual) to another. NEW METHOD: We suggest that using the second derivative (difference of Gaussians, or DoG) provides robust edge detection. This method is tuned by size (which is typically known in neuroimaging) rather than intensity (which is relative). RESULTS: We demonstrate that this method performs well across a broad range of imaging modalities. The edge contours produced consistently form closed surfaces, whereas alternative methods may generate disconnected lines, introducing potential ambiguity in contiguity. COMPARISON WITH EXISTING METHODS: Current methods for computing edges are based on either the first derivative of the image (FSL) or a variation of the Canny edge detection method (AFNI). These methods suffer from two primary limitations. First, the crucial tuning parameter for each of these methods relates to image intensity. Unfortunately, image intensity is relative for most neuroimaging modalities, making the performance of these methods unreliable. Second, these existing approaches do not necessarily generate a closed edge/surface, which can reduce the ability to determine the correspondence between a represented edge and another image. CONCLUSION: The second derivative is well suited for neuroimaging edge detection. We include this method as part of both the AFNI and FSL software packages, as standalone code, and online.
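A DoG filter of the kind described can be sketched in a few lines. This is an illustration of the general technique, not the AFNI/FSL implementation; the sigma values and the 1.6 scale ratio are conventional illustrative choices:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with 1-D convolutions."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, rows)

def dog_edges(img, sigma=2.0, ratio=1.6):
    """Difference-of-Gaussians band-pass filter. Its zero crossings
    trace closed contours around intensity structures, and the filter
    is tuned by spatial scale (sigma), not by absolute intensity --
    the property the abstract emphasizes."""
    dog = gaussian_blur(img, sigma) - gaussian_blur(img, ratio * sigma)
    return dog > 0  # the boundary of this region is the zero-crossing contour

# Bright square on a dark background: the sign flips across its border.
img = np.zeros((40, 40))
img[10:30, 10:30] = 100.0
edges = dog_edges(img)
```

The sign map is positive just inside the square's border and negative just outside, so the zero crossing forms a single closed contour around the square regardless of whether the intensities were 0/100 or 0/1.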


Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/standards , Brain/diagnostic imaging , Imaging, Three-Dimensional/methods , Imaging, Three-Dimensional/standards , Algorithms , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Neuroimaging/methods , Neuroimaging/standards
10.
Plant Physiol ; 195(1): 378-394, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38298139

ABSTRACT

Automated guard cell detection and measurement are vital for understanding plant physiological performance and ecological functioning in global water and carbon cycles. Most current methods for measuring guard cells and stomata are laborious, time-consuming, prone to bias, and limited in scale. We developed StoManager1, a high-throughput tool utilizing geometric and mathematical algorithms and convolutional neural networks to automatically detect, count, and measure over 30 guard cell and stomatal metrics, including guard cell and stomatal area, length, width, stomatal aperture area/guard cell area, orientation, stomatal evenness, divergence, and aggregation index. Combined with leaf functional traits, some of these StoManager1-measured guard cell and stomatal metrics explained 90% and 82% of the variance in tree biomass and intrinsic water use efficiency (iWUE) in hardwoods, making them substantial factors in leaf physiology and tree growth. StoManager1 demonstrated exceptional precision and recall (mAP@0.5 over 0.96), effectively capturing diverse stomatal properties across over 100 species. StoManager1 automates the measurement of leaf stomata and guard cells, enabling broader exploration of stomatal control in plant growth and adaptation to environmental stress and climate change. This has implications for global gross primary productivity (GPP) modeling and estimation, as integrating stomatal metrics can enhance predictions of plant growth and resource usage worldwide. Easily accessible open-source code and standalone Windows executable applications are available on a GitHub repository (https://github.com/JiaxinWang123/StoManager1) and Zenodo (https://doi.org/10.5281/zenodo.7686022).


Subject(s)
Botany , Cell Biology , Plant Cells , Plant Stomata , Software , Plant Stomata/cytology , Plant Stomata/growth & development , Plant Cells/physiology , Botany/instrumentation , Botany/methods , Cell Biology/instrumentation , Image Processing, Computer-Assisted/standards , Algorithms , Plant Leaves/cytology , Neural Networks, Computer , High-Throughput Screening Assays/instrumentation , High-Throughput Screening Assays/methods , High-Throughput Screening Assays/standards , Software/standards
11.
IEEE J Biomed Health Inform ; 27(8): 3912-3923, 2023 08.
Article in English | MEDLINE | ID: mdl-37155391

ABSTRACT

Semi-supervised learning is becoming an effective solution in medical image segmentation because annotations are costly and tedious to acquire. Methods based on the teacher-student model use consistency regularization and uncertainty estimation and have shown good potential in dealing with limited annotated data. Nevertheless, the existing teacher-student model is seriously limited by the exponential moving average algorithm, which leads to an optimization trap. Moreover, the classic uncertainty estimation method calculates global uncertainty for images but does not consider local region-level uncertainty, which is unsuitable for medical images with blurry regions. In this article, the Voxel Stability and Reliability Constraint (VSRC) model is proposed to address these issues. Specifically, the Voxel Stability Constraint (VSC) strategy is introduced to optimize parameters and exchange effective knowledge between two independently initialized models, which can break through the performance bottleneck and avoid model collapse. Moreover, a new uncertainty estimation strategy, the Voxel Reliability Constraint (VRC), is proposed for use in our semi-supervised model to consider uncertainty at the local region level. We further extend our model to auxiliary tasks and propose task-level consistency regularization with uncertainty estimation. Extensive experiments on two 3D medical image datasets demonstrate that our method outperforms other state-of-the-art semi-supervised medical image segmentation methods under limited supervision.
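The exponential-moving-average coupling the article criticizes is the standard mean-teacher update. A minimal sketch (with illustrative parameter names, not the paper's code) shows why the teacher inevitably tracks a single student:

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Classic mean-teacher coupling: each teacher parameter is an
    exponential moving average of the corresponding student parameter.
    Because the teacher is just a smoothed copy of one student, the two
    models converge toward the same weights over training -- the tight
    coupling the VSRC authors argue traps optimization, motivating
    their use of two independently initialized models instead."""
    return {name: alpha * teacher[name] + (1 - alpha) * student[name]
            for name in teacher}

# Toy 3-parameter "network": the student stays fixed at 1.0 and the
# teacher, starting from 0.0, drifts almost all the way there.
teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
for _ in range(500):
    teacher = ema_update(teacher, student)
```

After 500 steps the residual gap is (1 - alpha)-geometric, about 0.99^500 ≈ 0.007, so teacher and student are effectively the same model.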


Subject(s)
Image Processing, Computer-Assisted , Supervised Machine Learning , Algorithms , Datasets as Topic , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Reproducibility of Results , Students , Teaching , Uncertainty , Humans
12.
Blood Adv ; 7(16): 4621-4630, 2023 08 22.
Article in English | MEDLINE | ID: mdl-37146262

ABSTRACT

Examination of red blood cell (RBC) morphology in peripheral blood smears can help diagnose hematologic diseases, even in resource-limited settings, but this analysis remains subjective and semiquantitative with low throughput. Prior attempts to develop automated tools have been hampered by their poor reproducibility and limited clinical validation. Here, we present a novel, open-source machine-learning approach (denoted as RBC-diff) to quantify abnormal RBCs in peripheral smear images and generate an RBC morphology differential. RBC-diff cell counts showed high accuracy for single-cell classification (mean AUC, 0.93) and quantitation across smears (mean R2, 0.76 compared with experts; inter-expert R2, 0.75). RBC-diff counts were concordant with the clinical morphology grading for 300,000+ images and recovered the expected pathophysiologic signals in diverse clinical cohorts. Criteria using RBC-diff counts distinguished thrombotic thrombocytopenic purpura and hemolytic uremic syndrome from other thrombotic microangiopathies, providing greater specificity than clinical morphology grading (72% vs 41%; P < .001) while maintaining high sensitivity (94% to 100%). Elevated RBC-diff schistocyte counts were associated with increased 6-month all-cause mortality in a cohort of 58,950 inpatients (9.5% mortality for schistocytes >1% vs 4.7% for schistocytes <0.5%; P < .001) after controlling for comorbidities, demographics, clinical morphology grading, and blood count indices. RBC-diff also enabled the estimation of single-cell volume-morphology distributions, providing insight into the influence of morphology on routine blood count measures. Our codebase and expert-annotated images are included here to spur further advancement. These results illustrate that computer vision can enable rapid and accurate quantitation of RBC morphology, which may provide value in both clinical and research contexts.


Subject(s)
Erythrocytes, Abnormal , Hematologic Diseases , Image Processing, Computer-Assisted , Humans , Erythrocytes, Abnormal/cytology , Hematologic Diseases/diagnostic imaging , Hematologic Diseases/pathology , Prognosis , Reproducibility of Results , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Machine Learning , Cell Shape
13.
Br J Radiol ; 96(1145): 20220704, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-36802348

ABSTRACT

OBJECTIVE: The study aims to evaluate the diagnostic efficacy of radiologists and radiology trainees in digital breast tomosynthesis (DBT) alone vs DBT plus synthesized view (SV), to assess whether DBT images alone are adequate to identify cancer lesions. METHODS: Fifty-five observers (30 radiologists and 25 radiology trainees) participated in reading a set of 35 cases (15 cancer), with 28 readers reading DBT and 27 readers reading DBT plus SV. The two groups of readers had similar experience in interpreting mammograms. The performance of participants in each reading mode was compared with the ground truth and calculated in terms of specificity, sensitivity, and ROC AUC. The cancer detection rates at various levels of breast density and across lesion types and lesion sizes were also compared between DBT and DBT plus SV. The difference in diagnostic accuracy of readers between the two reading modes was assessed using the Mann-Whitney U test, with p < 0.05 indicating a significant result. RESULTS: There was no significant difference in specificity (0.67 vs 0.65; p = 0.69), sensitivity (0.77 vs 0.71; p = 0.09), or ROC AUC (0.77 vs 0.73; p = 0.19) between radiologists reading DBT plus SV and radiologists reading DBT. A similar result was found in radiology trainees, with no significant difference in specificity (0.70 vs 0.63; p = 0.29), sensitivity (0.44 vs 0.55; p = 0.19), or ROC AUC (0.59 vs 0.62; p = 0.60) between the two reading modes. Radiologists and trainees obtained similar cancer detection rates in the two reading modes across levels of breast density, cancer types, and lesion sizes (p > 0.05). CONCLUSION: The diagnostic performance of radiologists and radiology trainees with DBT alone and DBT plus SV was equivalent in identifying cancer and normal cases. ADVANCES IN KNOWLEDGE: DBT alone had diagnostic accuracy equivalent to DBT plus SV, which may support using DBT as a sole modality without SV.
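Reader-study metrics such as these are easy to reproduce. A minimal sketch of sensitivity/specificity and of ROC AUC computed via its Mann-Whitney U equivalence, using made-up confidence ratings rather than the study's data:

```python
import numpy as np

def sensitivity_specificity(truth, calls):
    """truth/calls are boolean arrays: cancer present / reader called cancer."""
    truth, calls = np.asarray(truth, bool), np.asarray(calls, bool)
    sens = (truth & calls).sum() / truth.sum()
    spec = (~truth & ~calls).sum() / (~truth).sum()
    return sens, spec

def roc_auc(truth, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen cancer case is rated above a randomly chosen normal
    case, counting ties as half a win. AUC = U / (n_pos * n_neg)."""
    truth = np.asarray(truth, bool)
    pos, neg = np.asarray(scores)[truth], np.asarray(scores)[~truth]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical 5-point confidence ratings for 3 cancer and 4 normal cases.
truth  = [1, 1, 1, 0, 0, 0, 0]
scores = [5, 4, 2, 3, 2, 1, 1]
sens, spec = sensitivity_specificity(truth, [s >= 3 for s in scores])
auc = roc_auc(truth, scores)
```

Binarizing the ratings at a threshold (here, "call cancer if rating >= 3") yields one sensitivity/specificity operating point, while the AUC summarizes ranking performance across all thresholds.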


Subject(s)
Breast Neoplasms , Image Processing, Computer-Assisted , Mammography , Radiologists , Radiologists/standards , Radiologists/statistics & numerical data , Breast/diagnostic imaging , Breast/pathology , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Mammography/standards , Image Processing, Computer-Assisted/standards , Humans , Female , Sensitivity and Specificity
14.
IEEE J Biomed Health Inform ; 27(2): 992-1003, 2023 02.
Article in English | MEDLINE | ID: mdl-36378793

ABSTRACT

In computer-aided diagnosis and treatment planning, accurate segmentation of medical images plays an essential role, especially for hard regions including boundaries, small objects, and background interference. However, existing segmentation loss functions, including distribution-, region-, and boundary-based losses, cannot achieve satisfactory performance on these hard regions. In this paper, a boundary-sensitive loss function with a location constraint is proposed for hard region segmentation in medical images, which provides three advantages: i) our Boundary-Sensitive loss (BS-loss) automatically pays more attention to hard-to-segment boundaries (e.g., thin structures and blurred boundaries), thus obtaining finer object boundaries; ii) BS-loss can also adjust its attention to small objects during training to segment them more accurately; and iii) our location constraint alleviates the negative impact of background interference through the distribution matching of pixels between prediction and Ground Truth (GT) along each axis. By resorting to the proposed BS-loss and location constraint, the hard regions in both foreground and background are considered. Experimental results on three public datasets demonstrate the superiority of our method. Specifically, compared to the second-best method tested in this study, our method improves performance on hard regions in terms of Dice similarity coefficient (DSC) and 95% Hausdorff distance (95%HD) by up to 4.17% and 73%, respectively. In addition, it also achieves the best overall segmentation performance. Hence, our method can accurately segment these hard regions and improve overall segmentation performance in medical images.
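The 95% Hausdorff distance (95%HD) used in the evaluation is less familiar than Dice. A brute-force sketch over small boundary point sets (illustrative only, not the paper's code; real toolkits use distance transforms for speed):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point
    sets of shape (n, 2). For every point, take the distance to the
    nearest point of the other set; pool both directed distance lists
    and take the 95th percentile. The percentile discards outlier
    boundary spikes that would dominate the classic (max) Hausdorff
    distance, making the metric less noise-sensitive."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    directed = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return np.percentile(directed, 95)

# Two parallel boundary lines one pixel apart: every nearest-neighbor
# distance is 1, so the 95%HD is exactly 1.0.
a = np.array([[i, 0] for i in range(20)])
b = np.array([[i, 1] for i in range(20)])
print(hd95(a, b))  # 1.0
```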


Subject(s)
Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted , Humans , Diagnosis, Computer-Assisted/methods , Diagnosis, Computer-Assisted/standards , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/standards , Datasets as Topic
15.
Zebrafish ; 19(6): 213-217, 2022 12.
Article in English | MEDLINE | ID: mdl-36067119

ABSTRACT

This article assesses developments in automated phenotype pattern recognition: potential gains in classification performance, even on the small-scale data sets common in biomedicine, and changes in the development effort and complexity required of researchers and practitioners. After reading, you will be aware of the benefits, effectiveness, and ease of use of an automated end-to-end deep learning pipeline for classification tasks in biomedical perception systems.


Subject(s)
Image Processing, Computer-Assisted , Zebrafish , Animals , Image Processing, Computer-Assisted/standards , Phenotype , Zebrafish/classification , Zebrafish/genetics
16.
Med Image Anal ; 78: 102392, 2022 05.
Article in English | MEDLINE | ID: mdl-35235896

ABSTRACT

The propensity of task-based functional magnetic resonance imaging (T-fMRI) to large physiological fluctuations, measurement noise, and imaging artifacts entails longer scans and higher temporal resolution (trading off spatial resolution) to alleviate the effects of degradation. This paper focuses on methods for reducing scan times and enabling higher spatial resolution in T-fMRI. We propose a novel mixed-dictionary model combining (i) the task-based design matrix, (ii) a dictionary learned from resting-state fMRI, and (iii) an analytically defined wavelet frame. For model fitting, we propose a novel adaptation of the inference framework relying on variational Bayesian expectation maximization with nested minorization. We leverage the mixed-dictionary model coupled with variational inference to enable 2× shorter scan times in T-fMRI, improving activation-map estimates toward the same quality as those resulting from longer scans. We also propose a scheme with the potential to increase spatial resolution through temporally undersampled acquisition. Results on motor-task fMRI and gambling-task fMRI show that our framework leads to improved activation-map estimates over the state of the art.
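To make the mixed-dictionary idea concrete, here is a minimal NumPy sketch: a voxel time series is modeled as a combination of atoms from three stacked dictionaries (task regressors, resting-state atoms, wavelet-like atoms), and a ridge-regularized least-squares fit stands in for the paper's variational Bayesian inference. All dictionary shapes, names, and the solver are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 120  # number of time points

# Three hypothetical sub-dictionaries (the paper designs/learns these):
D_task = rng.standard_normal((T, 3))   # task design matrix
D_rest = rng.standard_normal((T, 10))  # learned resting-state atoms
D_wave = rng.standard_normal((T, 8))   # analytic wavelet-like atoms
D = np.hstack([D_task, D_rest, D_wave])  # mixed dictionary

# Synthetic voxel time series with a known task contribution.
a_true = np.array([1.5, -0.7, 0.3])
y = D_task @ a_true + 0.05 * rng.standard_normal(T)

# Ridge-regularized least squares as a stand-in for variational
# Bayesian EM (a simplification for illustration only).
lam = 1e-2
coef = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
a_hat = coef[:3]  # recovered task coefficients
```

With the seed above, the recovered task coefficients land close to `a_true`, illustrating how a mixed dictionary can separate the task contribution from nuisance components.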


Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Algorithms; Bayes Theorem; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Magnetic Resonance Imaging/methods; Time Factors
17.
Med Image Anal ; 78: 102395, 2022 05.
Article in English | MEDLINE | ID: mdl-35231851

ABSTRACT

Medical image segmentation can provide a reliable basis for further clinical analysis and disease diagnosis. With the development of convolutional neural networks (CNNs), medical image segmentation performance has advanced significantly. However, most existing CNN-based methods often produce unsatisfactory segmentation masks without accurate object boundaries. This problem is caused by the limited context information and inadequate discriminative feature maps after consecutive pooling and convolution operations. Additionally, medical images are characterized by high intra-class variation, inter-class indistinction, and noise, so extracting powerful context and aggregating discriminative features for fine-grained segmentation remain challenging. In this study, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation that captures richer context and preserves fine spatial information, built on an encoder-decoder architecture. In each stage of the encoder sub-network, a proposed pyramid edge extraction module first obtains multi-granularity edge information. Then a newly designed mini multi-task learning module jointly learns to segment object masks and detect lesion boundaries, with a new interactive attention layer introduced to bridge the two tasks. In this way, information complementarity between the tasks is achieved, effectively leveraging boundary information to offer strong cues for better segmentation prediction. Finally, a cross-feature fusion module selectively aggregates multi-level features from the entire encoder sub-network. By cascading these three modules, richer context and fine-grained features of each stage are encoded and then delivered to the decoder. The results of extensive experiments on five datasets show that the proposed BA-Net outperforms state-of-the-art techniques.
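A multi-task model like the one described needs boundary supervision derived from the segmentation masks. A common recipe (an assumption here, not necessarily BA-Net's exact edge targets) marks foreground pixels that have at least one background 4-neighbor:

```python
import numpy as np

def boundary_from_mask(mask):
    """Derive a 1-pixel boundary map from a binary mask.

    A pixel is boundary if it is foreground and at least one of its
    four neighbors is background. This is one common way to build
    boundary-detection targets for multi-task segmentation; the
    paper's exact definition may differ.
    """
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    # True where all four 4-neighbors are foreground:
    nbrs = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
            padded[1:-1, :-2] & padded[1:-1, 2:])
    interior = m & nbrs
    return (m & ~interior).astype(np.uint8)
```

For a filled 3×3 square, this yields the 8-pixel ring around the single interior pixel, i.e., exactly the object contour at 1-pixel width.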


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/standards; Learning
18.
Comput Methods Programs Biomed ; 218: 106707, 2022 May.
Article in English | MEDLINE | ID: mdl-35255374

ABSTRACT

BACKGROUND AND OBJECTIVE: Heart disease is a serious threat to human health and a leading cause of death. Moreover, under the influence of recent health factors, its incidence continues to trend upward. Today, cardiac magnetic resonance (CMR) imaging can provide a full range of structural and functional information about the heart and has become an important tool for the diagnosis and treatment of heart disease. Therefore, improving the image resolution of CMR has important medical value for the diagnosis and assessment of heart disease. At present, most single-image super-resolution (SISR) reconstruction methods suffer from serious problems, such as insufficient mining of feature information, difficulty in determining the dependence of each channel of the feature map, and reconstruction error when reconstructing high-resolution images. METHODS: To solve these problems, we propose and implement a dual U-Net residual network (DURN) for super-resolution of CMR images. Specifically, we first propose a U-Net residual network (URN) model, which is divided into an up-branch and a down-branch. The up-branch is composed of residual blocks and up-blocks to extract and upsample deep features; the down-branch is composed of residual blocks and down-blocks to extract and downsample deep features. Building on the URN model, we construct the dual U-Net residual network (DURN), which combines the deep features extracted at the same positions of the first and second URN through residual connections. This makes full use of the features extracted by the first URN to extract deeper features from low-resolution images.
RESULTS: When the scale factors are 2, 3, and 4, our DURN obtains 37.86 dB, 33.96 dB, and 31.65 dB on the Set5 dataset, which represents (i) a maximum improvement of 4.17 dB, 3.55 dB, and 3.22 dB over the Bicubic algorithm, and (ii) a minimum improvement of 0.34 dB, 0.14 dB, and 0.11 dB over the LapSRN algorithm. CONCLUSION: Comprehensive experimental results on benchmark datasets demonstrate that our proposed DURN not only achieves better peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values than other state-of-the-art SR algorithms, but also reconstructs clearer super-resolution CMR images with richer details, edges, and texture.
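The dB figures above are PSNR values. For reference, PSNR follows from the mean squared error between a reference image and its reconstruction; the sketch below uses the standard definition with an 8-bit peak value assumed:

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 20.0 * np.log10(max_val) - 10.0 * np.log10(mse)

# Example: a uniform error of 10 gray levels on an 8-bit image.
ref = np.full((8, 8), 100.0)
img = ref + 10.0
print(round(psnr(ref, img), 2))  # → 28.13
```

Because PSNR is logarithmic, the reported ~0.1-0.3 dB margins over LapSRN correspond to small but consistent reductions in reconstruction error.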


Subject(s)
Heart Diseases/diagnostic imaging; Image Processing, Computer-Assisted/standards; Algorithms; Disease Progression; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Signal-To-Noise Ratio
19.
Sci Rep ; 12(1): 2839, 2022 02 18.
Article in English | MEDLINE | ID: mdl-35181681

ABSTRACT

We implemented a two-dimensional convolutional neural network (CNN) for classification of polar maps extracted from the Carimas (Turku PET Centre, Finland) software used for myocardial perfusion analysis. A total of 138 polar maps in JPEG format from a 15O-H2O stress perfusion study were used, from patients classified as ischemic or non-ischemic based on the finding of obstructive coronary artery disease (CAD) on invasive coronary angiography. The CNN was evaluated against the clinical interpretation. Classification accuracy was evaluated with accuracy (ACC), area under the receiver operating characteristic curve (AUC), F1 score (F1S), sensitivity (SEN), specificity (SPE), and precision (PRE). The CNN had a median ACC of 0.8261, AUC of 0.8058, F1S of 0.7647, SEN of 0.6500, SPE of 0.9615, and PRE of 0.9286. In comparison, clinical interpretation had an ACC of 0.8696, AUC of 0.8558, F1S of 0.8333, SEN of 0.7500, SPE of 0.9615, and PRE of 0.9375. The CNN classified only 2 cases differently than the clinical interpretation. The clinical interpretation and the CNN had similar accuracy in classifying false positives and true negatives. Classification of ischemia is feasible in 15O-H2O stress perfusion imaging using JPEG polar maps alone with a custom CNN and may be useful for the detection of obstructive CAD.
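The scalar metrics reported above all derive from the binary confusion matrix. A minimal NumPy sketch (labels and threshold are assumptions; AUC needs continuous scores rather than hard labels, so it is omitted):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Compute ACC, SEN, SPE, PRE, and F1S from binary labels
    (1 = positive class, e.g. ischemic)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    sen = tp / (tp + fn)  # sensitivity (recall)
    spe = tn / (tn + fp)  # specificity
    pre = tp / (tp + fp)  # precision
    f1 = 2 * pre * sen / (pre + sen)
    return acc, sen, spe, pre, f1
```

The report's pattern of high SPE/PRE with lower SEN indicates a classifier that rarely raises false alarms but misses some true ischemic cases.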


Subject(s)
Coronary Artery Disease/diagnostic imaging; Heart/diagnostic imaging; Image Processing, Computer-Assisted/standards; Ischemia/diagnostic imaging; Aged; Coronary Angiography; Coronary Artery Disease/diagnosis; Coronary Artery Disease/physiopathology; Female; Finland/epidemiology; Heart/physiopathology; Humans; Ischemia/diagnosis; Ischemia/pathology; Male; Middle Aged; Myocardial Perfusion Imaging/classification; Myocardial Perfusion Imaging/standards; Neural Networks, Computer; Software
20.
Trends Cell Biol ; 32(4): 295-310, 2022 04.
Article in English | MEDLINE | ID: mdl-35067424

ABSTRACT

Single-nucleus segmentation is a frequent challenge in microscopy image processing, since it is the first step of many quantitative data analysis pipelines. The quality of tracking single cells, extracting features, or classifying cellular phenotypes strongly depends on segmentation accuracy. Worldwide competitions have been held with the aim of improving segmentation, and recent years have brought significant advances: large annotated datasets are now freely available, several 2D segmentation strategies have been extended to 3D, and deep learning approaches have increased accuracy. However, even today, no generally accepted solution or benchmarking platform exists. We review the most recent single-cell segmentation tools and provide an interactive method browser to select the most appropriate solution.


Subject(s)
Image Processing, Computer-Assisted; Microscopy; Cell Nucleus; Humans; Image Processing, Computer-Assisted/standards; Microscopy/methods; Microscopy/trends; Single-Cell Analysis/methods