1.
Iran J Basic Med Sci; 26(11): 1305-1312, 2023.
Article in English | MEDLINE | ID: mdl-37886002

ABSTRACT

Objectives: Cerebral ischemia/reperfusion (I/R) injury inevitably aggravates the initial cerebral tissue damage following a stroke. Peroxiredoxin 1 (Prdx1) is a representative protein of the endogenous antioxidant enzyme family that regulates several reactive oxygen species (ROS)-dependent signaling pathways, while the JNK/caspase-3 proapoptotic pathway plays a prominent role during cerebral I/R injury. This study aimed to examine the potential mechanism of Prdx1 in Neuro 2A (N2a) cells following oxygen-glucose deprivation and reoxygenation (OGD/R) injury. Materials and Methods: N2a cells were exposed to OGD/R to simulate cerebral I/R injury. Prdx1 siRNA transfection and a JNK inhibitor (SP600125) were used to interfere with the respective pathways. The CCK-8 assay, flow cytometry, and the lactate dehydrogenase (LDH) assay were employed to determine the viability and apoptosis of N2a cells. Intracellular ROS content was assessed using a ROS assay kit. Real-time quantitative reverse transcription polymerase chain reaction (qRT-PCR) and western blot analyses were conducted to detect the expression levels of Prdx1, JNK, phosphorylated JNK (p-JNK), and cleaved caspase-3. Results: First, expression of Prdx1, p-JNK, and cleaved caspase-3 was significantly induced in OGD/R-exposed N2a cells. Second, knockdown of Prdx1 reduced cell viability and increased the apoptosis rate and the expression of p-JNK and cleaved caspase-3. Third, SP600125 inhibited the JNK/caspase-3 signaling pathway and mitigated cell injury following OGD/R. Finally, SP600125 partially reversed the cleaved caspase-3 activation and OGD/R damage mediated by Prdx1 down-regulation in N2a cells. Conclusion: Prdx1 alleviates OGD/R-induced injury in N2a cells by suppressing the JNK/caspase-3 pathway, showing promise as a potential therapeutic target for cerebral I/R injury.

2.
Article in English | MEDLINE | ID: mdl-37695953

ABSTRACT

Effective modal fusion and perception between language and image are necessary for inferring the referred instance in the referring image segmentation (RIS) task. In this article, we propose a novel RIS network, the global and local interactive perception network (GLIPN), to enhance the quality of modal fusion between language and image from both local and global perspectives. The core of GLIPN is the global and local interactive perception (GLIP) scheme. Specifically, the GLIP scheme contains the local perception module (LPM) and the global perception module (GPM). The LPM is designed to enhance local modal fusion through the correspondence between words and local image semantics. The GPM is designed to inject the global structured semantics of images into the modal fusion process, which better guides the word embeddings to perceive the global structure of the whole image. By combining local and global context semantics fusion, GLIPN outperforms most state-of-the-art approaches, as demonstrated by extensive experiments on several benchmark datasets.
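
The word-to-region correspondence at the heart of the LPM can be pictured as a cross-attention step in which each image location attends to the words that describe it. The following is a minimal PyTorch sketch of that idea only; the module name, dimensions, and residual fusion rule are illustrative assumptions, not the published GLIPN implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalPerception(nn.Module):
    """Hypothetical stand-in for an LPM-style word-to-pixel fusion step."""
    def __init__(self, dim=256):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # queries from flattened image features
        self.k = nn.Linear(dim, dim)  # keys from word embeddings
        self.v = nn.Linear(dim, dim)  # values from word embeddings

    def forward(self, img_feat, word_feat):
        # img_feat: (B, HW, C) flattened visual features; word_feat: (B, L, C)
        q, k, v = self.q(img_feat), self.k(word_feat), self.v(word_feat)
        attn = F.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        return img_feat + attn @ v  # each location absorbs relevant word semantics

fused = LocalPerception()(torch.randn(2, 196, 256), torch.randn(2, 10, 256))
print(fused.shape)  # torch.Size([2, 196, 256])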

3.
IEEE Trans Neural Netw Learn Syst; 34(12): 10309-10323, 2023 Dec.
Article in English | MEDLINE | ID: mdl-35442894

ABSTRACT

This article presents a new text-to-image (T2I) generation model, named distribution regularization generative adversarial network (DR-GAN), to generate images from text descriptions through improved distribution learning. In DR-GAN, we introduce two novel modules: a semantic disentangling module (SDM) and a distribution normalization module (DNM). The SDM combines the spatial self-attention mechanism (SSAM) and a new semantic disentangling loss (SDL) to help the generator distill key semantic information for image generation. The DNM uses a variational auto-encoder (VAE) to normalize and denoise the image latent distribution, which helps the discriminator better distinguish synthesized images from real ones. The DNM also adopts a distribution adversarial loss (DAL) to guide the generator to align with the normalized real-image distribution in the latent space. Extensive experiments on two public datasets demonstrate that DR-GAN achieves competitive performance on the T2I task. Code: https://github.com/Tan-H-C/DR-GAN-Distribution-Regularization-for-Text-to-Image-Generation.
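
The normalization role of the VAE in the DNM can be illustrated with a small PyTorch sketch: the KL term of a VAE pulls the image latent distribution toward a standard Gaussian, which is one plausible reading of "normalize and denoise". Layer sizes and names below are assumptions, not the published architecture.

import torch
import torch.nn as nn

class LatentVAE(nn.Module):
    """Hypothetical DNM-style latent normalizer built from a tiny VAE."""
    def __init__(self, dim_in=512, dim_z=128):
        super().__init__()
        self.enc = nn.Linear(dim_in, 2 * dim_z)  # predicts mean and log-variance
        self.dec = nn.Linear(dim_z, dim_in)

    def forward(self, feat):
        mu, logvar = self.enc(feat).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        recon = self.dec(z)
        # The KL term pulls the latent distribution toward N(0, I), normalizing it
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, z, kl

recon, z, kl = LatentVAE()(torch.randn(4, 512))
print(recon.shape, z.shape, kl.item())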

4.
IEEE Trans Neural Netw Learn Syst; 34(11): 8210-8224, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35312622

ABSTRACT

This article presents a novel person reidentification model, named multihead self-attention network (MHSA-Net), to prune unimportant information and capture key local information from person images. MHSA-Net contains two main novel components: the multihead self-attention branch (MHSAB) and the attention competition mechanism (ACM). The MHSAB adaptively captures key local person information and then produces effective, diverse embeddings of an image for person matching. The ACM further helps filter out attention noise and non-key information. Through extensive ablation studies, we verified that the MHSAB and the ACM both contribute to the performance improvement of MHSA-Net. MHSA-Net achieves competitive performance on both standard and occluded person Re-ID tasks.
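
To make the multihead idea concrete, the sketch below computes per-head self-attention over patch features and keeps each head's pooled output as a separate embedding, then gates out heads whose attention is nearly uniform. The gating rule is a loose, simplified stand-in for the ACM; the threshold and all dimensions are arbitrary assumptions, not the published method.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MHSABranch(nn.Module):
    """Hypothetical MHSAB-style branch producing one embedding per head."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.heads, self.dk = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim)

    def forward(self, x):                        # x: (B, N, C) patch features
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (B, N, self.heads, self.dk)      # split channels across heads
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        attn = F.softmax(q @ k.transpose(-2, -1) / self.dk ** 0.5, dim=-1)
        emb = (attn @ v).mean(dim=2)             # (B, heads, dk): per-head embedding
        # "Competition": suppress heads whose attention is near-uniform (noise-like)
        score = attn.amax(dim=-1).mean(dim=-1)   # (B, heads) peak attention per head
        return emb * (score > 1.5 / N).float().unsqueeze(-1)

embs = MHSABranch()(torch.randn(2, 16, 256))
print(embs.shape)  # torch.Size([2, 4, 64])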

5.
IEEE Trans Image Process; 30: 1275-1290, 2021.
Article in English | MEDLINE | ID: mdl-33001801

ABSTRACT

This paper presents a new framework, Knowledge-Transfer Generative Adversarial Network (KT-GAN), for fine-grained text-to-image generation. We introduce two novel mechanisms, an Alternate Attention-Transfer Mechanism (AATM) and a Semantic Distillation Mechanism (SDM), to help the generator better bridge the cross-domain gap between text and image. The AATM alternately updates the attention weights of words and of image sub-regions to progressively highlight important word information and enrich the details of synthesized images. The SDM uses the image encoder trained on the image-to-image task to guide the training of the text encoder on the text-to-image task, yielding better text features and higher-quality images. In extensive experiments on two public datasets, KT-GAN significantly outperforms the baseline method and achieves competitive results across different evaluation metrics.
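
The SDM's teacher-student relationship can be sketched as a standard feature-distillation loop: a frozen image encoder supplies target features, and the text encoder is trained to match them. The encoders, dimensions, and the MSE objective below are illustrative assumptions rather than the published KT-GAN training procedure.

import torch
import torch.nn as nn

img_encoder = nn.Sequential(nn.Linear(2048, 256))  # stand-in pretrained teacher
txt_encoder = nn.GRU(300, 256, batch_first=True)   # student text encoder
for p in img_encoder.parameters():
    p.requires_grad = False                        # teacher stays frozen

opt = torch.optim.Adam(txt_encoder.parameters(), lr=1e-4)
img_feat = torch.randn(8, 2048)                    # paired image features (dummy)
words = torch.randn(8, 12, 300)                    # paired word vectors (dummy)

with torch.no_grad():
    target = img_encoder(img_feat)                 # teacher semantics
_, h = txt_encoder(words)                          # h: (1, B, 256) final state
loss = nn.functional.mse_loss(h.squeeze(0), target)  # distill teacher into student
loss.backward()
opt.step()
print(loss.item())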
