Results 1 - 20 of 111
1.
Neuroimage ; 297: 120708, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38950664

ABSTRACT

Acting as a central hub, the thalamus plays a pivotal role in regulating high-order brain functions. Despite the impact of preterm birth on infant brain development, traditional studies have focused on the overall development of the thalamus rather than on its subregions. In this study, we compared the volumetric growth and shape development of the thalamic hemispheres between infants born preterm and at full term (left volume: P = 0.027, left normalized volume: P < 0.0001; right volume: P = 0.070, right normalized volume: P < 0.0001). The ventral, dorsomedial, and posterior nucleus regions of the thalamus exhibited higher vulnerability to alterations induced by preterm birth. The structural covariance (SC) between the thickness of the thalamus and the insula was significantly increased in preterm infants (left: corrected P = 0.0091; right: corrected P = 0.0119) compared with full-term controls. These findings suggest that preterm birth affects the development of the thalamus and has differential effects on its subregions, with the ventral, dorsomedial, and posterior nucleus regions being the most susceptible.

2.
J Mol Biol ; 436(12): 168610, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38754773

ABSTRACT

The executors of organismal functions are proteins, and the transition from RNA to protein is subject to post-transcriptional regulation; considering RNA and surface-protein expression simultaneously can therefore provide additional evidence of biological processes. Cellular indexing of transcriptomes and epitopes by sequencing (CITE-seq) can measure both RNA and protein expression in single cells, but these experiments are expensive and time-consuming. Given the lack of computational tools for predicting surface proteins, we used CITE-seq datasets to design a deep generative prediction method based on diffusion models and to derive biological insights from the predictions. Our method, scDM, predicts protein expression values from the RNA expression of individual cells; it encodes the data into the model in a novel way and learns the data distribution by adding Gaussian noise and then gradually removing it to generate predicted samples. Comprehensive evaluation across different datasets demonstrated that our predictions were satisfactory and further demonstrated the effectiveness of incorporating information from single-cell multiomics data into diffusion models for biological studies. We also found that jointly analysing the predicted surface-protein expression and cancer-cell drug scores could suggest new directions for discovering therapeutic drug targets.
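As a sketch of the diffusion mechanism this abstract describes (not the authors' scDM code), the closed-form Gaussian forward-noising step and its inversion given a predicted noise vector can be written as follows; the function names and the use of plain lists are illustrative assumptions:

```python
import math

def forward_diffuse(x0, abar_t, eps):
    """Forward step of a Gaussian diffusion:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    return [math.sqrt(abar_t) * x + math.sqrt(1.0 - abar_t) * e
            for x, e in zip(x0, eps)]

def predict_x0(xt, abar_t, eps_hat):
    """Invert the forward step given a (predicted) noise vector eps_hat;
    in a trained model, a neural network would supply eps_hat."""
    return [(x - math.sqrt(1.0 - abar_t) * e) / math.sqrt(abar_t)
            for x, e in zip(xt, eps_hat)]
```

During training the network learns to estimate the noise; at sampling time the estimate is used to step the noisy sample gradually back toward the data distribution.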


Subject(s)
Computational Biology; Membrane Proteins; Single-Cell Analysis; Humans; Algorithms; Computational Biology/methods; Gene Expression Profiling/methods; Membrane Proteins/metabolism; Membrane Proteins/genetics; Single-Cell Analysis/methods; Transcriptome
3.
Phys Med Biol ; 69(11)2024 May 14.
Article in English | MEDLINE | ID: mdl-38636502

ABSTRACT

Medical image segmentation is a crucial field of computer vision. Obtaining correct pathological areas can help clinicians analyze patient conditions more precisely. We have observed that both CNN-based and attention-based neural networks often produce rough segmentation results around the edges of the regions of interest, which significantly impacts the accuracy of the extracted pathological areas. Without altering the original data or model architecture, further refining the initial segmentation outcomes can effectively address this issue and lead to more satisfactory results. Recently, diffusion models have demonstrated outstanding results in image generation, showcasing their powerful ability to model distributions; we believe this ability can greatly enhance the accuracy of border reshaping. This research proposes ERSegDiff, a neural network based on the diffusion model for reshaping segmentation borders. The diffusion model is trained to fit the distribution of the target edge area and is then used to modify the segmentation edge to produce more accurate results. By incorporating prior knowledge into the diffusion model, we help it more accurately model the edge probability distribution of the samples. Moreover, we introduce an edge concern module, which leverages attention mechanisms to produce feature weights and further refine the segmentation outcomes. To validate our approach, we employed the COVID-19 and ISIC-2018 datasets for lung segmentation and skin cancer segmentation, respectively. Compared with the baseline model, ERSegDiff improved the Dice score by 3%-4% and 2%-4%, respectively, and achieved state-of-the-art scores compared with several mainstream neural networks, such as swinUNETR.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Diffusion; COVID-19/diagnostic imaging
4.
Psychophysiology ; 61(3): e14514, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38183326

ABSTRACT

Recent studies have suggested that the neural activity supporting working memory (WM) storage is dynamic over time and that this dynamic storage determines memory performance. Does the temporal dynamic of the WM representation also affect visual search, and how does it interact with distractor suppression over time? To address these issues, we tracked the time course of the reactivation of WM representations during visual search by analyzing the electroencephalogram (EEG) and event-related optical signals (EROS) in Experiments 1 and 2, respectively, and investigated the interaction between representation reactivation and distractor suppression in Experiment 3. Participants had to maintain a color in WM under a high- or low-precision requirement and perform a subsequent search task. The reactivation of WM representations was defined by above-chance decoding accuracy. The EEG results showed that, compared with the low-precision requirement, WM-matching distractors captured more attention and the WM representation was reactivated more frequently under the high-precision requirement. The EROS results showed that, under the high-precision requirement, increased activity in the occipital cortex in the WM-matching versus WM-mismatching conditions was observed at 224 ms during visual search. Regression analysis showed that representation reactivation during visual search directly predicted the behavioral WM-based attentional capture effect, whereas representation reactivation before visual search affected the capture effect through the mediation of distractor suppression during search. These results suggest that the reactivation of WM representations and distractor suppression collectively determine WM-based attentional capture.


Subject(s)
Cognition; Memory, Short-Term; Humans; Memory, Short-Term/physiology; Electroencephalography; Occipital Lobe; Probability; Visual Perception/physiology
5.
Neural Netw ; 172: 106096, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38194885

ABSTRACT

Medical image segmentation faces challenges because of the small sample size of datasets and because images often contain noise and artifacts. In recent years, diffusion models have proven very effective in image generation and have been widely used in computer vision. This paper presents a new feature map denoising module (FMD), based on the diffusion model, for feature refinement; it is plug-and-play, allowing flexible integration into popular segmentation networks for seamless end-to-end training. We evaluate the FMD module on four models (UNet, UNeXt, TransUNet, and IB-TransUNet) across four datasets. The experimental analysis shows that adding the FMD module has a significant positive impact on model performance; in particular, for small lesion areas and minor organs, it yields more accurate segmentation results than the original models.


Subject(s)
Artifacts; Data Analysis; Diffusion; Sample Size; Image Processing, Computer-Assisted
6.
IEEE J Biomed Health Inform ; 28(3): 1587-1598, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38215328

ABSTRACT

Accurate segmentation of brain tumors in MRI images is imperative for precise clinical diagnosis and treatment. However, existing medical image segmentation methods exhibit errors, which can be categorized into two types: random errors and systematic errors. Random errors, arising from various unpredictable effects, pose challenges in terms of detection and correction. Conversely, systematic errors, attributable to systematic effects, can be effectively addressed through machine learning techniques. In this paper, we propose a corrective diffusion model for accurate MRI brain tumor segmentation that corrects systematic errors. This marks the first application of the diffusion model to correcting systematic segmentation errors. Additionally, we introduce the Vector Quantized Variational Autoencoder (VQ-VAE) to compress the original data into a discrete codebook, which not only reduces the dimensionality of the training data but also enhances the stability of the corrective diffusion model. Furthermore, we propose a Multi-Fusion Attention Mechanism, which effectively enhances segmentation performance on brain tumor images and increases the flexibility and reliability of the corrective diffusion model. Our model is evaluated on the BRATS2019, BRATS2020, and Jun Cheng datasets. Experimental results demonstrate the effectiveness of our model over state-of-the-art methods in brain tumor segmentation.
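The VQ-VAE step mentioned above rests on nearest-codebook quantization. A minimal sketch of that lookup (illustrative only; the paper's codebook is learned, and `vq_quantize` is a hypothetical name):

```python
def vq_quantize(vectors, codebook):
    """Nearest-neighbour (L2) lookup: each input vector is replaced by the
    index of its closest codebook entry, giving one discrete code per vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    codes = [min(range(len(codebook)), key=lambda k: dist2(v, codebook[k]))
             for v in vectors]
    return codes, [list(codebook[k]) for k in codes]
```

The discrete codes are what the downstream diffusion model operates on, which is why the compression both shrinks the data and stabilizes training.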


Subject(s)
Brain Neoplasms; Image Processing, Computer-Assisted; Humans; Reproducibility of Results; Image Processing, Computer-Assisted/methods; Algorithms; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging; Brain/diagnostic imaging
7.
Med Biol Eng Comput ; 62(5): 1427-1440, 2024 May.
Article in English | MEDLINE | ID: mdl-38233683

ABSTRACT

In recent years, predicting gene mutations on whole slide imaging (WSI) has gained prominence. The primary challenge is extracting global information and achieving unbiased semantic aggregation. To address this challenge, we propose a novel Transformer-based aggregation model, employing a self-learning weight aggregation mechanism to mitigate semantic bias caused by the abundance of features in WSI. Additionally, we adopt a random patch training method, which enhances model learning richness by randomly extracting feature vectors from WSI, thus addressing the issue of limited data. To demonstrate the model's effectiveness in predicting gene mutations, we leverage the lung adenocarcinoma dataset from Shandong Provincial Hospital for prior knowledge learning. Subsequently, we assess TP53, CSMD3, LRP1B, and TTN gene mutations using lung adenocarcinoma tissue pathology images and clinical data from The Cancer Genome Atlas (TCGA). The results indicate a notable increase in the AUC (Area Under the ROC Curve) value, averaging 4%, attesting to the model's performance improvement. Our research offers an efficient model to explore the correlation between pathological image features and molecular characteristics in lung adenocarcinoma patients. This model introduces a novel approach to clinical genetic testing, expected to enhance the efficiency of identifying molecular features and genetic testing in lung adenocarcinoma patients, ultimately providing more accurate and reliable results for related studies.
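The random patch training idea above can be sketched as drawing a random subset of precomputed per-patch feature vectors from one slide at each step. This is a hedged illustration, not the authors' pipeline; the function name and data layout are assumptions:

```python
import random

def random_patch_batch(patch_features, n, seed=None):
    """Draw n patch feature vectors without replacement from one WSI,
    so each training step sees a different random subset of the slide."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(patch_features)), n)
    return [patch_features[i] for i in idx]
```

Sampling without replacement keeps the subset diverse, which is the stated mechanism for enriching what the model sees from limited data.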


Subject(s)
Adenocarcinoma of Lung; Adenocarcinoma; Lung Neoplasms; Humans; Adenocarcinoma of Lung/genetics; Mutation/genetics; Adenocarcinoma/genetics; Electric Power Supplies; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/genetics
8.
Med Phys ; 51(2): 1178-1189, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37528654

ABSTRACT

BACKGROUND: Accurate medical image segmentation is crucial for disease diagnosis and surgical planning. Transformer networks offer a promising alternative for medical image segmentation because they can learn global features through self-attention mechanisms. To further enhance performance, many researchers have incorporated more Transformer layers into their models. However, this approach often increases the number of model parameters significantly, causing a potential rise in complexity. Moreover, medical image segmentation datasets usually have few samples, which raises the risk of overfitting. PURPOSE: This paper aims to design a medical image segmentation model that has fewer parameters and can effectively alleviate overfitting. METHODS: We design a MultiIB-Transformer structure consisting of a single Transformer layer and multiple information bottleneck (IB) blocks. The Transformer layer captures long-distance spatial relationships to extract global feature information, while the IB blocks compress noise and improve model robustness. The advantage of this structure is that only one Transformer layer is needed to achieve state-of-the-art (SOTA) performance, significantly reducing the number of model parameters. In addition, we design a new skip-connection structure: with only two 1×1 convolutions, the high-resolution feature map can carry both semantic and spatial information, thereby alleviating the semantic gap. RESULTS: On the Breast Ultrasound Images (BUSI) dataset, the proposed model achieves IoU and F1 scores of 67.75 and 87.78. On the Synapse multi-organ segmentation dataset, the parameter count, Hausdorff Distance (HD), and Dice Similarity Coefficient (DSC) are 22.30, 20.04, and 81.83, respectively. CONCLUSIONS: Our proposed model (MultiIB-TransUNet) achieved superior results with fewer parameters compared to other models.
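A 1×1 convolution, as used in the skip connection described above, is just a per-pixel linear mix of channels. A minimal nested-list sketch (illustrative, not the paper's implementation):

```python
def conv1x1(fmap, weight):
    """A 1x1 convolution mixes channels independently at every pixel.
    fmap: [C_in][H][W] nested lists; weight: [C_out][C_in]."""
    c_in, h, w = len(fmap), len(fmap[0]), len(fmap[0][0])
    return [[[sum(w_row[c] * fmap[c][i][j] for c in range(c_in))
              for j in range(w)] for i in range(h)]
            for w_row in weight]
```

Because no spatial neighborhood is touched, the operation is cheap and preserves resolution, which is why two of them suffice to reconcile semantic and spatial information in a skip path.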


Subject(s)
Learning; Ultrasonography, Mammary; Female; Humans; Ultrasonography; Research Personnel; Tomography, X-Ray Computed; Image Processing, Computer-Assisted
9.
IEEE Trans Med Imaging ; 43(1): 15-27, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37342954

ABSTRACT

Feature matching, which refers to establishing the correspondence of regions between two images (usually voxel features), is a crucial prerequisite of feature-based registration. For deformable image registration tasks, traditional feature-based registration methods typically use an iterative matching strategy for interest region matching, where feature selection and matching are explicit, but specific feature selection schemes are often useful in solving application-specific problems and require several minutes for each registration. In the past few years, the feasibility of learning-based methods, such as VoxelMorph and TransMorph, has been proven, and their performance has been shown to be competitive compared to traditional methods. However, these methods are usually single-stream, where the two images to be registered are concatenated into a 2-channel whole, and then the deformation field is output directly. The transformation of image features into interimage matching relationships is implicit. In this paper, we propose a novel end-to-end dual-stream unsupervised framework, named TransMatch, where each image is fed into a separate stream branch, and each branch performs feature extraction independently. Then, we implement explicit multilevel feature matching between image pairs via the query-key matching idea of the self-attention mechanism in the Transformer model. Comprehensive experiments are conducted on three 3D brain MR datasets, LPBA40, IXI, and OASIS, and the results show that the proposed method achieves state-of-the-art performance in several evaluation metrics compared to the commonly utilized registration methods, including SyN, NiftyReg, VoxelMorph, CycleMorph, ViT-V-Net, and TransMorph, demonstrating the effectiveness of our model in deformable medical image registration.
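The query-key matching idea behind TransMatch can be sketched as scaled dot-product attention: each query feature receives a softmax distribution over all key features, i.e., a soft correspondence. This is a generic illustration of the mechanism, not the authors' code:

```python
import math

def match_weights(queries, keys):
    """Scaled dot-product matching: softmax(q . k / sqrt(d)) turns raw
    similarities into a soft correspondence of each query to every key."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                      # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out
```

In a registration network, such weights would be computed between features of the two images at each level, making the inter-image matching explicit rather than implicit.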


Subject(s)
Algorithms; Brain; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods
10.
Med Biol Eng Comput ; 62(3): 901-912, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38087041

ABSTRACT

Breast cancer pathological image segmentation (BCPIS) holds significant value in assisting physicians with quantifying tumor regions and providing treatment guidance. However, achieving fine-grained semantic segmentation remains a major challenge for this technology. The complex and diverse morphologies of breast cancer tissue structures result in high costs for manual annotation, thereby limiting the sample size and annotation quality of the dataset. These practical issues have a significant impact on the segmentation performance. To overcome these challenges, this study proposes a semi-supervised learning model based on classification-guided segmentation. The model first utilizes a multi-scale convolutional network to extract rich semantic information and then employs a multi-expert cross-layer joint learning strategy, integrating a small number of labeled samples to iteratively provide the model with class-generated multi-cue pseudo-labels and real labels. Given the complexity of the breast cancer samples and the limited sample quantity, an innovative approach of augmenting additional unlabeled data was adopted to overcome this limitation. Experimental results demonstrate that, although the proposed model falls slightly behind supervised segmentation models, it still exhibits significant progress and innovation. The semi-supervised model in this study achieves outstanding performance, with an IoU (Intersection over Union) value of 71.53%. Compared to other semi-supervised methods, the model developed in this study demonstrates a performance advantage of approximately 3%. Furthermore, the research findings indicate a significant correlation between the classification and segmentation tasks in breast cancer pathological images, and the guidance of a multi-expert system can significantly enhance the fine-grained effects of semi-supervised semantic segmentation.
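Pseudo-labelling of the kind described above is commonly implemented as confidence-thresholded self-labelling. A minimal sketch under that assumption (the paper's multi-expert, multi-cue scheme is richer than this):

```python
def pseudo_labels(probabilities, threshold=0.9):
    """Confidence-filtered pseudo-labelling: keep the argmax class only when
    the model is confident enough; None marks samples left unlabelled."""
    labels = []
    for p in probabilities:
        conf = max(p)
        labels.append(p.index(conf) if conf >= threshold else None)
    return labels
```

Unlabelled samples that clear the threshold then join the labelled pool for the next training iteration, which is how a small annotated set can be stretched.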


Subject(s)
Neoplasms; Physicians; Humans; Expert Systems; Semantics; Supervised Machine Learning; Image Processing, Computer-Assisted
11.
Psychophysiology ; 61(2): e14455, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37817450

ABSTRACT

Accurate interpretation of the emotional information conveyed by others' facial expressions is crucial for social interactions. Event-related alpha power, measured by time-frequency analysis, is a frequently used EEG index of emotional information processing. However, it is still unclear how event-related alpha power varies in emotional information processing in social anxiety groups. In the present study, we recorded event-related potentials (ERPs) while participants from the social anxiety and healthy control groups viewed facial expressions (angry, happy, neutral) preceded by contextual sentences conveying either a positive or negative evaluation of the subject. The impact of context on facial expression processing in both groups of participants was explored by assessing behavioral ratings and event-related alpha power (0-200 ms after expression presentation). In comparison to the healthy control group, the social anxiety group exhibited significantly lower occipital alpha power in response to angry facial expressions in negative contexts and neutral facial expressions in positive contexts. The influence of language context on facial expression processing in individuals with social anxiety may occur at an early stage of processing.
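Event-related alpha power, the index used in this study, is the spectral power of the EEG in roughly the 8-13 Hz band. A toy illustration of extracting a band-power fraction with a naive DFT (real analyses use time-frequency decompositions such as wavelets; this only shows the band-power idea):

```python
import math

def band_power_fraction(signal, fs, f_lo, f_hi):
    """Fraction of total spectral power falling in [f_lo, f_hi] Hz,
    computed with a naive DFT over the positive frequencies."""
    n = len(signal)
    total = band = 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n)
                 for t, s in enumerate(signal))
        im = sum(-s * math.sin(2 * math.pi * k * t / n)
                 for t, s in enumerate(signal))
        p = re * re + im * im
        total += p
        if f_lo <= k * fs / n <= f_hi:
            band += p
    return band / total if total else 0.0

def sinusoid(freq, fs, n):
    """Helper: a pure sinusoid of `freq` Hz sampled at fs Hz."""
    return [math.sin(2 * math.pi * freq * t / fs) for t in range(n)]
```

A pure 10 Hz oscillation puts essentially all of its power in the alpha band, while a 30 Hz oscillation puts essentially none there.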


Subject(s)
Facial Expression; Facial Recognition; Humans; Electroencephalography; Facial Recognition/physiology; Emotions/physiology; Evoked Potentials/physiology; Anxiety; Language
12.
Neuroscience ; 531: 86-98, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37709003

ABSTRACT

Alzheimer's disease (AD) is a prevalent neurodegenerative disorder characterized by progressive cognitive decline. Among the various clinical symptoms, neuropsychiatric symptoms (NPS) commonly occur during the course of AD. Previous research has demonstrated a strong association between NPS and AD severity, although the methods used have not been sufficiently intuitive. Here, we report a hybrid deep learning framework for AD diagnosis using multimodal inputs such as structural MRI, behavioral scores, age, and gender information. The framework uses a 3D convolutional neural network to automatically extract features from MRI. The imaging features are passed to Principal Component Analysis for dimensionality reduction and then fused with the non-imaging information to improve the diagnosis of AD. According to the experimental results, our model achieves an accuracy of 0.91 and an area under the curve of 0.97 in classifying AD versus cognitively normal individuals. SHapley Additive exPlanations are used to visualize the contribution of specific NPS in the proposed model. Among all behavioral symptoms, apathy plays a particularly important role in the diagnosis of AD and can be considered a valuable factor in further studies as well as clinical trials.
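The PCA step used above for dimensionality reduction can be illustrated with power iteration on the sample covariance matrix: it recovers the direction of greatest variance. A minimal stand-in sketch (real pipelines would use a library implementation over many components):

```python
def first_principal_component(X, iters=200):
    """Power iteration on the sample covariance of X (rows = samples):
    converges to the top eigenvector, i.e. the first principal component."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mean[j] for j in range(d)] for row in X]   # center data
    cov = [[sum(r[a] * r[b] for r in Xc) / (n - 1) for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting the imaging features onto the leading components yields the low-dimensional vector that is fused with the non-imaging inputs.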


Subject(s)
Alzheimer Disease; Cognitive Dysfunction; Deep Learning; Humans; Alzheimer Disease/diagnostic imaging; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Cognitive Dysfunction/diagnostic imaging; Neuroimaging/methods
13.
Med Biol Eng Comput ; 61(11): 2939-2950, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37532907

ABSTRACT

Medical image processing has become increasingly important in recent years, particularly in the field of microscopic cell imaging. However, accurately counting the number of cells in an image can be challenging because of significant variations in cell size and shape. To tackle this problem, many existing methods rely on deep learning techniques, such as convolutional neural networks (CNNs), to count cells directly or use regression counting to learn the similarity between an input image and a predicted cell-density map. In this paper, we propose a novel approach that optimizes the loss function using optimal transport, a rigorous measure of the difference between the count map predicted by the CNN and the dot-annotation map. We evaluated our algorithm on three publicly available cell-counting benchmarks: the synthetic fluorescence microscopy (VGG) dataset, the modified bone marrow (MBM) dataset, and the human subcutaneous adipose tissue (ADI) dataset. Our method outperforms other state-of-the-art methods, achieving a mean absolute error (MAE) of 2.3, 4.8, and 13.1 on the VGG, MBM, and ADI datasets, respectively, with smaller standard deviations. By using optimal transport, our approach provides a more accurate and reliable cell-counting method for medical image processing.
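In one dimension, the optimal transport (Wasserstein-1) distance between two normalized mass distributions has a closed form: the summed absolute difference of their CDFs. This toy sketch conveys the loss idea only; the paper works on 2-D maps, where OT is computed differently:

```python
def wasserstein_1d(p, q):
    """W1 distance between two 1-D mass distributions (e.g. a predicted
    count map and a dot-annotation map): the accumulated |CDF difference|
    after normalizing each side to unit mass."""
    sp, sq = float(sum(p)), float(sum(q))
    cp = cq = acc = 0.0
    for a, b in zip(p, q):
        cp += a / sp
        cq += b / sq
        acc += abs(cp - cq)
    return acc
```

Unlike a pointwise loss, this distance grows with how far mass must move, so a count predicted in the wrong place is penalized in proportion to its displacement.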


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Cell Count; Cell Size
14.
Neurocomputing (Amst) ; 544: None, 2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37528990

ABSTRACT

Accurate segmentation of brain tumors from medical images is important for diagnosis and treatment planning, and it often requires multi-modal or contrast-enhanced images. However, in practice some modalities of a patient may be absent. Synthesizing the missing modality has a potential for filling this gap and achieving high segmentation performance. Existing methods often treat the synthesis and segmentation tasks separately or consider them jointly but without effective regularization of the complex joint model, leading to limited performance. We propose a novel brain Tumor Image Synthesis and Segmentation network (TISS-Net) that obtains the synthesized target modality and segmentation of brain tumors end-to-end with high performance. First, we propose a dual-task-regularized generator that simultaneously obtains a synthesized target modality and a coarse segmentation, which leverages a tumor-aware synthesis loss with perceptibility regularization to minimize the high-level semantic domain gap between synthesized and real target modalities. Based on the synthesized image and the coarse segmentation, we further propose a dual-task segmentor that predicts a refined segmentation and error in the coarse segmentation simultaneously, where a consistency between these two predictions is introduced for regularization. Our TISS-Net was validated with two applications: synthesizing FLAIR images for whole glioma segmentation, and synthesizing contrast-enhanced T1 images for Vestibular Schwannoma segmentation. Experimental results showed that our TISS-Net largely improved the segmentation accuracy compared with direct segmentation from the available modalities, and it outperformed state-of-the-art image synthesis-based segmentation methods.

15.
Food Chem ; 424: 136309, 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37207601

ABSTRACT

With the development of deep learning technology, vision-based food nutrition estimation is gradually entering the public view for its advantages in accuracy and efficiency. In this paper, we designed an RGB-D fusion network that integrates multimodal feature fusion (MMFF) and multi-scale fusion for vision-based nutrition assessment. MMFF performs effective feature fusion through a balanced feature pyramid and a convolutional block attention module; multi-scale fusion combines features of different resolutions through a feature pyramid network. Both enhance the feature representation and improve the performance of the model. Compared with state-of-the-art methods, the mean percentage mean absolute error (PMAE) of our method reached 18.5%. The PMAE of calories and mass reached 15.0% and 10.8% with the RGB-D fusion network, improvements of 3.8% and 8.1%, respectively. Furthermore, this study visualized the estimation results for four nutrients and verified the validity of the method. This research contributes to the development of automated food nutrient analysis (code and models can be found at http://123.57.42.89/codes/RGB-DNet/nutrition.html).
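The PMAE metric reported above can be computed as follows under one common definition (per-item absolute error divided by the true value, averaged and expressed in percent; the paper may normalize differently):

```python
def pmae(pred, true):
    """Percentage mean absolute error: mean of |pred - true| / true, x100.
    Assumes strictly positive ground-truth values."""
    return 100.0 * sum(abs(p - t) / t for p, t in zip(pred, true)) / len(true)
```

For example, predicting 110 kcal against a true 100 kcal gives a PMAE of 10%.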


Subject(s)
Deep Learning; Food Analysis; Nutrients; Nutritive Value
16.
Mol Immunol ; 157: 30-41, 2023 05.
Article in English | MEDLINE | ID: mdl-36966551

ABSTRACT

T cell receptors (TCRs) selectively bind to antigens to fight pathogens with specific immunity. Current tools focus on the nature of amino acids within sequences, paying less attention to amino acids that are far apart and to relationships between sequences, which leads to significant differences in results across datasets. We propose TPBTE, a model based on a convolutional Transformer for Predicting the Binding of TCRs to Epitopes. It takes epitope sequences and the complementarity-determining region 3 (CDR3) sequences of the TCRβ chain as inputs. A convolutional attention mechanism learns amino acid representations between different positions of the sequences based on local sequence features, while cross-attention learns the interaction between TCR and epitope sequences. A comprehensive evaluation on TCR-epitope data shows that the average area under the curve of TPBTE outperforms the baseline models. In addition, TPBTE gives the probability that a TCR binds an epitope, which can serve as a first step of epitope screening, narrowing the scope of the epitope search and reducing search time.


Subject(s)
Epitopes, T-Lymphocyte; Receptors, Antigen, T-Cell
17.
Comput Biol Med ; 154: 106608, 2023 03.
Article in English | MEDLINE | ID: mdl-36731364

ABSTRACT

Vessel segmentation in fundus images is a key procedure in the diagnosis of ophthalmic diseases and can assist doctors in diagnosis. Although current deep learning-based methods can achieve high accuracy in segmenting fundus vessel images, the results are not satisfactory for microscopic vessels that are close to the background region. The reason is that thin blood vessels carry very little information, and with the convolution operations of successive layers in a deep network this information is gradually lost. To improve segmentation of small vessel regions, we propose a multi-input network (MINet) that segments vascular regions more accurately. We design a multi-input fusion module (MIF) in the encoder to acquire multi-scale features while preserving microvessel feature information. In addition, to further aggregate multi-scale information from adjacent regions, we propose a multi-scale atrous spatial pyramid (MASP) module, which enhances the extraction of vascular information without loss of resolution. To better recover details in the segmentation results, we design a refinement module that acts on the last layer of the network output to refine its results. We use the HRF and CHASE_DB1 public datasets to validate the fundus vessel segmentation performance of MINet, and we merge these two datasets with our collected ultra-widefield fundus (UWF) images into one dataset to test the generalization ability of the model. Experimental results show that MINet achieves an F1 score of 0.8324 on the microvessel segmentation task, a high accuracy compared with current mainstream models.
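The atrous (dilated) convolution underlying the MASP module spaces kernel taps apart, enlarging the receptive field without downsampling. A 1-D toy sketch of the operation (the module itself stacks several dilation rates in 2-D):

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Atrous' (dilated) convolution: kernel taps are spaced `dilation`
    samples apart, widening the receptive field at no extra parameter cost."""
    k = len(kernel)
    span = (k - 1) * dilation          # extent covered by the spread kernel
    return [sum(kernel[j] * signal[i + j * dilation] for j in range(k))
            for i in range(len(signal) - span)]
```

With dilation 1 this reduces to an ordinary valid convolution; larger dilations let the same small kernel aggregate context from farther away, which is why resolution need not be sacrificed.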


Subject(s)
Algorithms; Retinal Vessels; Retinal Vessels/diagnostic imaging; Fundus Oculi; Image Processing, Computer-Assisted/methods
18.
Comput Intell Neurosci ; 2023: 1883597, 2023.
Article in English | MEDLINE | ID: mdl-36851939

ABSTRACT

In medical image analysis, collecting multiple annotations from different clinical raters is a typical practice to mitigate possible diagnostic errors. For such multirater label learning problems, in addition to majority voting, it is common to use soft labels, in the form of full probability distributions obtained by averaging raters, as ground truth to train the model, which benefits from the uncertainty contained in the soft labels. However, the potential information contained in soft labels is rarely studied and may be key to improving the performance of medical image segmentation with multirater annotations. In this work, we aim to improve soft-label methods by leveraging interpretable information from multiple raters. Considering that mis-segmentation occurs in areas with weak annotation supervision and high image difficulty, we propose to reduce the reliance on local uncertain soft labels and increase the focus on image features. Therefore, we introduce local self-ensembling learning with consistency regularization, forcing the model to concentrate more on features than on annotations, especially in regions of high uncertainty as measured by the pixelwise interclass variance. Furthermore, we utilize a label smoothing technique to flatten each rater's annotation, alleviating overconfidence at structural edges in the annotations. Without introducing additional parameters, our method improves the accuracy of the soft-label baseline by 4.2% and 2.7% on a synthetic dataset and a fundus dataset, respectively. In addition, quantitative comparisons show that our method consistently outperforms existing multirater strategies as well as state-of-the-art methods. This work provides a simple yet effective solution for the widespread multirater label segmentation problems in clinical diagnosis.
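The label smoothing step above has a standard form: move a small fraction eps of the probability mass uniformly over all classes. A minimal per-pixel sketch (the paper applies this to each rater's annotation map):

```python
def smooth_label(one_hot, eps=0.1):
    """Label smoothing: redistribute eps of the probability mass uniformly
    over all k classes, flattening a rater's hard (one-hot) annotation."""
    k = len(one_hot)
    return [(1.0 - eps) * p + eps / k for p in one_hot]
```

A hard binary label [1, 0] becomes [0.95, 0.05] with eps = 0.1, so the model is no longer pushed toward full confidence at uncertain structural edges.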


Subject(s)
Learning , Humans , Diagnostic Errors , Probability , Uncertainty
19.
Front Public Health ; 11: 1118628, 2023.
Article in English | MEDLINE | ID: mdl-36817881

ABSTRACT

Introduction: Modifiable lifestyle factors are considered key to the control of cardiometabolic diseases. This study aimed to explore the association between multiple lifestyle factors and cardiometabolic multimorbidity. Methods: A total of 14,968 participants were included in this cross-sectional exploratory study (mean age 54.33 years, range 45-91; 49.6% male). Pearson's Chi-square test, logistic regression, and latent class analysis were employed. Results: We found that men with 4-5 high-risk lifestyle factors had a 2.54-fold higher risk (95% CI: 1.60-4.04) of developing multimorbidity compared to males with zero high-risk lifestyle factors. In an analysis of dietary behavior, we found that in women compared to men, over-eating (OR = 1.94, P < 0.001) and intra-meal water drinking (OR = 2.15, P < 0.001) were more likely to contribute to the development of cardiometabolic multimorbidity. In an analysis of taste preferences, men may be more sensitive to the effect of taste preferences and cardiometabolic multimorbidity risk, particularly for smoky (OR = 1.71, P < 0.001), hot (OR = 1.62, P < 0.001), and spicy (OR = 1.38, P < 0.001) tastes. Furthermore, "smoking and physical activity" and "physical activity and alcohol consumption" were men's most common high-risk lifestyle patterns. "Physical activity and dietary intake" were women's most common high-risk lifestyle patterns. A total of four common high-risk dietary behavior patterns were found in both males and females. Conclusions: This research reveals that the likelihood of cardiometabolic multimorbidity increases as high-risk lifestyle factors accumulate. Taste preferences and unhealthy dietary behaviors were found to be associated with an increased risk of developing cardiometabolic multimorbidity and this association differed between genders. 
Several common lifestyle and dietary behavior patterns suggest that patients with cardiometabolic multimorbidity may achieve better health outcomes if those with certain high-risk lifestyle patterns are identified and managed.
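The odds ratios and confidence intervals reported above come from standard association analysis; the usual computation of an OR with a 95% CI from a 2x2 exposure-by-outcome table can be sketched as follows. The counts below are invented for illustration and are not data from the study:

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 contingency table.

    OR = (a*d)/(b*c); the CI is computed on the log scale using
    SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d).
    """
    a, b = exposed_cases, exposed_controls
    c, d = unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# hypothetical table: 30/70 cases among exposed, 20/80 among unexposed
or_, (lo, hi) = odds_ratio_ci(30, 70, 20, 80)
```

An OR is reported as significant in this style of analysis when the CI excludes 1; a full multivariable logistic regression additionally adjusts each OR for covariates.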


Subject(s)
Cardiovascular Diseases , Multimorbidity , Humans , Male , Female , Middle Aged , Aged , Aged, 80 and over , Risk Factors , Cross-Sectional Studies , Cardiovascular Diseases/etiology , Life Style
20.
Nano Lett ; 23(3): 1100-1108, 2023 Feb 08.
Article in English | MEDLINE | ID: mdl-36692959

ABSTRACT

Electrochemical production of H2O2 is a cost-effective and environmentally friendly alternative to the anthraquinone process. Metal-doped carbon-based catalysts are commonly used for the two-electron oxygen reduction reaction (2e-ORR) because of their high selectivity; however, the exact roles of metals and carbon defects in ORR catalysts for H2O2 production remain unclear. Herein, by varying the Co loading in the pyrolysis precursor, a Co-N/O-C catalyst with a Faradaic efficiency greater than 90% in alkaline electrolyte was obtained. Detailed studies revealed that the active sites for 2e-ORR in the Co-N/O-C catalysts were carbon atoms in C-O-C groups at defect sites, and that the direct contribution of cobalt single-atom sites and metallic Co to the 2e-ORR performance was negligible. Instead, Co plays an important role in the pyrolytic synthesis of the catalyst by catalyzing carbon graphitization, tuning the formation of defects and oxygen functional groups, and controlling the O and N concentrations, thereby indirectly enhancing the 2e-ORR performance.
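H2O2 selectivity (and hence Faradaic efficiency toward the 2e- pathway) in such studies is commonly estimated from rotating ring-disk electrode (RRDE) currents via the standard relation H2O2% = 200 * (I_ring/N) / (|I_disk| + I_ring/N), where N is the ring collection efficiency. A minimal sketch of that calculation follows; the current values and collection efficiency used below are illustrative, not measurements from this paper:

```python
def h2o2_selectivity(i_ring_mA, i_disk_mA, collection_eff):
    """Percent H2O2 selectivity from RRDE currents.

    i_ring_mA:      ring current (H2O2 oxidation), positive.
    i_disk_mA:      disk current (ORR), typically negative (cathodic).
    collection_eff: ring collection efficiency N (0 < N <= 1).
    """
    ir = i_ring_mA / collection_eff          # H2O2 partial current at the disk
    return 200.0 * ir / (abs(i_disk_mA) + ir)

# illustrative numbers: N = 0.37 is a typical collection efficiency
sel = h2o2_selectivity(i_ring_mA=0.37, i_disk_mA=-3.0, collection_eff=0.37)
```

With these illustrative currents the selectivity evaluates to 50%; a catalyst with >90% Faradaic efficiency would show a correspondingly larger ring-to-disk current ratio.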
