Results 1 - 4 of 4
1.
Phytother Res ; 38(3): 1278-1293, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38191199

ABSTRACT

Chronic obstructive pulmonary disease (COPD) is a chronic, progressive, and lethal lung disease with few treatments. Formononetin (FMN) is an isoflavone used in clinical preparations, with extensive pharmacological actions; however, its effect on COPD remains unknown. This study aimed to explore the effect of FMN on COPD and its underlying mechanisms. A mouse model of COPD was established by exposure to cigarette smoke (CS) for 24 weeks. In addition, bronchial epithelial BEAS-2B cells were treated with CS extract (CSE) for 24 h to examine the effect of FMN in vitro. FMN significantly improved lung function and attenuated pathological lung damage. FMN treatment reduced inflammatory cell infiltration and pro-inflammatory cytokine secretion. FMN also suppressed apoptosis by regulating apoptosis-associated proteins. Moreover, FMN relieved CS-induced endoplasmic reticulum (ER) stress in the mouse lungs. In BEAS-2B cells, FMN treatment reduced CSE-induced inflammation, ER stress, and apoptosis. Mechanistically, FMN downregulated the CS-activated AhR/CYP1A1 and AKT/mTOR signaling pathways in vivo and in vitro. FMN can therefore attenuate CS-induced COPD in mice by suppressing inflammation, ER stress, and apoptosis in bronchial epithelial cells through inhibition of the AhR/CYP1A1 and AKT/mTOR signaling pathways, suggesting new therapeutic potential for COPD treatment.


Subject(s)
Cigarette Smoking, Isoflavones, Pulmonary Disease, Chronic Obstructive, Animals, Mice, Apoptosis, Apoptosis Regulatory Proteins/metabolism, Cell Line, Cytochrome P-450 CYP1A1, Endoplasmic Reticulum Stress, Epithelial Cells/metabolism, Inflammation/metabolism, Lung, Plant Extracts/pharmacology, Proto-Oncogene Proteins c-akt/metabolism, Pulmonary Disease, Chronic Obstructive/drug therapy, Signal Transduction, TOR Serine-Threonine Kinases/metabolism
2.
IEEE Trans Pattern Anal Mach Intell ; 44(10): 6209-6223, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34138701

ABSTRACT

Temporal action localization, which requires a machine to recognize the location as well as the category of action instances in videos, has long been studied in computer vision. The main challenge of temporal action localization is that videos are usually long and untrimmed, with diverse action content involved. Existing state-of-the-art action localization methods divide each video into multiple action units (i.e., proposals in two-stage methods and segments in one-stage methods) and then perform action recognition/regression on each of them individually, without explicitly exploiting their relations during learning. In this paper, we claim that the relations between action units play an important role in action localization, and a more powerful action detector should not only capture the local content of each action unit but also allow a wider field of view on the context related to it. To this end, we propose a general graph convolutional module (GCM) that can be easily plugged into existing action localization methods, including both two-stage and one-stage paradigms. Specifically, we first construct a graph in which each action unit is represented as a node and the relation between two action units as an edge. We use two types of relations: one captures the temporal connections between different action units, and the other characterizes their semantic relationship. For the temporal connections in two-stage methods in particular, we further explore two kinds of edges, one connecting overlapping action units and the other connecting surrounding but disjoint units. On the graph thus built, we apply graph convolutional networks (GCNs) to model the relations among different action units, which yields more informative representations that enhance action localization. Experimental results show that our GCM consistently improves the performance of existing action localization methods, including two-stage methods (e.g., CBR [15] and R-C3D [47]) and one-stage methods (e.g., D-SSAD [22]), verifying the generality and effectiveness of our GCM. Moreover, with the aid of GCM, our approach significantly outperforms the state of the art on THUMOS14 (50.9% versus 42.8%). Augmentation experiments on ActivityNet also verify the efficacy of modeling the relationships between action units. The source code and pre-trained models are available at https://github.com/Alvin-Zeng/GCM.
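The core idea of the abstract above — treat action units as graph nodes, connect them through temporal and semantic edges, and refine their features with a graph convolution — can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the authors' implementation (see the linked repository for that); the function names, thresholds, and the single-layer design are assumptions made for clarity.

```python
# Minimal sketch of a graph convolutional module over action-unit features.
# Assumed inputs: feats (N, D) from any proposal/segment backbone,
# segments (N, 2) with [start, end] times; all thresholds are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_adjacency(segments, feats, iou_thresh=0.1, sim_thresh=0.5):
    """Combine temporal edges (overlapping units) with semantic edges (similar content)."""
    n = segments.size(0)
    # temporal IoU between [start, end] intervals
    start = torch.max(segments[:, None, 0], segments[None, :, 0])
    end = torch.min(segments[:, None, 1], segments[None, :, 1])
    inter = (end - start).clamp(min=0)
    union = (segments[:, 1] - segments[:, 0])[:, None] + \
            (segments[:, 1] - segments[:, 0])[None, :] - inter
    temporal = (inter / union.clamp(min=1e-6) > iou_thresh).float()
    # cosine similarity between node features as the semantic relation
    normed = F.normalize(feats, dim=1)
    semantic = (normed @ normed.t() > sim_thresh).float()
    adj = ((temporal + semantic) > 0).float() + torch.eye(n)
    return adj / adj.sum(dim=1, keepdim=True)           # row-normalize


class GraphConvModule(nn.Module):
    """One GCN layer that refines per-unit features with relational context."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats, adj):
        return F.relu(self.proj(adj @ feats)) + feats    # residual keeps local content
```

In this sketch the refined features would simply replace the original per-unit features before the recognition/regression heads of whatever detector the module is plugged into.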

3.
Front Public Health ; 8: 584387, 2020.
Article in English | MEDLINE | ID: mdl-33251178

ABSTRACT

Classification of Alzheimer's disease (AD) has become a pressing issue as the number of patients rapidly increases. The task remains tremendously challenging due to limited data and the difficulty of detecting mild cognitive impairment (MCI). Existing methods tackle this task using either gait or EEG (electroencephalogram) data alone. Although gait data acquisition is cheap and simple, methods relying on gait data often fail to detect the subtle difference between MCI and AD. Methods that use EEG data can detect the difference more precisely, but collecting EEG data from both healthy controls (HC) and patients is very time-consuming. More critically, these methods often convert EEG records into the frequency domain and thus inevitably lose the spatial and temporal information, which is essential to capture the connectivity and synchronization among different brain regions. This paper proposes a two-step cascade neural network that achieves faster and more accurate AD classification by exploiting gait and EEG data simultaneously. In the first step, we propose attention-based spatial-temporal graph convolutional networks to extract features from the skeleton sequences (i.e., gait) captured by Kinect (a commonly used sensor) and distinguish HC from patients. In the second step, we propose spatial-temporal convolutional networks that fully exploit the spatial and temporal information of the EEG data and classify the patients as MCI or AD. We collected gait and EEG data from 35 cognitively healthy controls, 35 MCI patients, and 17 AD patients to evaluate the proposed method. Experimental results show that our method significantly outperforms other AD diagnosis methods (91.07% vs. 68.18%) on the three-way AD classification task (HC, MCI, and AD). Moreover, we empirically found that the lower body and right upper limb are more important for the early diagnosis of AD than other body parts. We believe this finding can be helpful for clinical research.
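The two-step cascade described above can be illustrated with a short sketch: a cheap gait-based screen separates healthy controls from patients, and only suspected patients are passed to the EEG-based stage that separates MCI from AD. This is a hypothetical outline assuming generic two-class sub-networks and a simple hard gating rule; the actual attention-based spatial-temporal GCN and spatial-temporal CNN from the paper are stand-ins here.

```python
# Sketch of a cascade classifier: step 1 on gait (HC vs. patient),
# step 2 on EEG (MCI vs. AD) for suspected patients only.
import torch
import torch.nn as nn


class CascadeADClassifier(nn.Module):
    def __init__(self, gait_net: nn.Module, eeg_net: nn.Module):
        super().__init__()
        self.gait_net = gait_net   # e.g. a spatial-temporal GCN over Kinect skeletons
        self.eeg_net = eeg_net     # e.g. a spatial-temporal CNN over EEG channels x time

    @torch.no_grad()
    def predict(self, skeleton_seq, eeg_record):
        # Step 1: gait-based screening, logits (B, 2) -> 0 = HC, 1 = patient
        is_patient = self.gait_net(skeleton_seq).argmax(dim=-1)
        labels = torch.zeros_like(is_patient)             # default label: 0 = HC
        if is_patient.any():
            mask = is_patient.bool()
            # Step 2: EEG-based refinement, logits (M, 2) -> 0 = MCI, 1 = AD
            mci_or_ad = self.eeg_net(eeg_record[mask]).argmax(dim=-1)
            labels[mask] = mci_or_ad + 1                   # map to 1 = MCI, 2 = AD
        return labels                                      # 0 = HC, 1 = MCI, 2 = AD
```

The gating keeps the expensive EEG stage off the majority of healthy subjects, which is the main motivation for cascading the two modalities.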


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Alzheimer Disease/diagnosis, Brain, Cognitive Dysfunction/diagnosis, Electroencephalography, Humans, Neural Networks, Computer
4.
IEEE Trans Image Process ; 28(12): 5797-5808, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31217119

ABSTRACT

We address the challenging problem of weakly supervised temporal action localization from unconstrained web videos, where only video-level action labels are available during training. Inspired by the adversarial erasing strategy in weakly supervised semantic segmentation, we propose a novel iterative-winners-out network. Specifically, we make two technical contributions. First, we propose an iterative training strategy, winners-out, which selects the most discriminative action instances in each training iteration and removes them in the next. This iterative process alleviates the "winner-takes-all" phenomenon, in which existing approaches tend to choose the video segments that strongly correspond to the video label while neglecting other, less discriminative segments. With this strategy, our network is able to localize not only the most discriminative instances but also the less discriminative ones. Second, to better select the target action instances in winners-out, we devise a class-discriminative localization technique. By employing an attention mechanism and the information learned from data, this technique identifies the most discriminative action instances effectively. The two key components are integrated into an end-to-end network that localizes actions without using frame-level annotations. Extensive experimental results demonstrate that our method outperforms state-of-the-art weakly supervised approaches on ActivityNet1.3 and improves mAP from 16.9% to 20.5% on THUMOS14. Notably, even with weak video-level supervision, our method attains accuracy comparable to methods employing frame-level supervision.
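The winners-out strategy summarized above amounts to masking out, between training rounds, the segments the model already scores highest for the video-level label. The sketch below illustrates that masking step only; the function name, the source of the per-segment scores, and the erase ratio are hypothetical placeholders, not details from the paper.

```python
# Sketch of the "winners-out" erasing step: top-scoring segments ("winners")
# are removed from the available set so later rounds must find new instances.
import torch


def winners_out_mask(class_scores, prev_mask, erase_ratio=0.2):
    """class_scores: (T,) per-segment score for the video-level label.
    prev_mask:     (T,) bool, True where a segment is still available."""
    scores = class_scores.masked_fill(~prev_mask, float("-inf"))
    k = max(1, int(erase_ratio * prev_mask.sum().item()))
    winners = scores.topk(k).indices            # most discriminative remaining segments
    new_mask = prev_mask.clone()
    new_mask[winners] = False                   # erase winners for the next iteration
    return new_mask, winners


# Outline of the iterative loop (pseudocode, assumed helpers):
# mask = torch.ones(num_segments, dtype=torch.bool)
# for it in range(num_iterations):
#     scores = model.segment_scores(video_feats)   # class-discriminative scores
#     mask, winners = winners_out_mask(scores, mask)
#     train_one_round(model, video_feats, mask)    # only unmasked segments are used
```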
