Results 1 - 20 of 47
1.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14990-15004, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37669203

ABSTRACT

Network pruning is an effective approach to reduce network complexity with an acceptable performance compromise. Existing studies achieve the sparsity of neural networks via time-consuming weight training or complex searching on networks with expanded width, which greatly limits the applications of network pruning. In this paper, we show that high-performing and sparse sub-networks, termed "lottery jackpots", exist in pre-trained models with unexpanded width and can be found without any weight training. We support the existence of lottery jackpots with both empirical and theoretical results. For example, we obtain a lottery jackpot that has only 10% of the parameters yet still reaches the performance of the original dense VGGNet-19 on CIFAR-10 without any modification of the pre-trained weights. Furthermore, we improve the efficiency of searching for lottery jackpots from two perspectives. First, we observe that the sparse masks derived from many existing pruning criteria have a high overlap with the searched mask of our lottery jackpot; among them, magnitude-based pruning yields the mask most similar to ours. Guided by this insight, we initialize our sparse mask using magnitude-based pruning, resulting in at least a 3× cost reduction in the lottery jackpot search while achieving comparable or even better performance. Second, we conduct an in-depth analysis of the searching process for lottery jackpots. Our theoretical result suggests that the decrease in training loss during weight searching can be disturbed by the dependency between weights in modern networks. To mitigate this, we propose a novel short restriction method that blocks mask changes with potential negative impacts on the training loss, leading to faster convergence and reduced oscillation when searching for lottery jackpots. Consequently, our searched lottery jackpot removes 90% of the weights in ResNet-50 while easily reaching more than 70% top-1 accuracy using only 5 searching epochs on ImageNet.
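
A minimal PyTorch sketch of the magnitude-initialized mask search described above (illustrative only, not the authors' released code; the layer shape, sparsity level, and helper names are assumptions):

```python
import torch

def magnitude_init_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    # Keep the largest-magnitude weights; zero out the rest.
    k = max(1, int(weight.numel() * (1.0 - sparsity)))
    idx = weight.abs().flatten().topk(k).indices
    mask = torch.zeros(weight.numel(), device=weight.device)
    mask[idx] = 1.0
    return mask.view_as(weight)

# Stand-in for one pretrained layer; the weights stay frozen during the search.
w = torch.randn(512, 256)
w.requires_grad_(False)

# Initialize searchable mask scores from magnitude pruning (90% sparsity); only the
# scores would be optimized (with a straight-through estimator in practice).
scores = torch.nn.Parameter(magnitude_init_mask(w, sparsity=0.9))
masked_w = w * (scores > 0.5).float()   # forward pass uses the binarized mask
print(masked_w.ne(0).float().mean())    # roughly 10% of the weights survive
```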

2.
Phytochemistry ; 215: 113832, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37598991

ABSTRACT

Six undescribed compounds, including three phenolic glycosides (1-3) and three indole alkaloids (4-6), together with ten known alkaloids (7-16) and three known phenolic glycosides (17-19), were isolated from 70% EtOH aqueous extracts of the roots and rhizomes of Clematis chinensis Osbeck. The structures were elucidated by NMR spectroscopy, HRESIMS, and X-ray diffraction analysis. The anti-inflammatory activity of these compounds was evaluated, and twelve compounds showed significant inhibitory activity against TNF-α, with inhibition ratios from 47.87% to 94.70% at a concentration of 10 µM. Compound 7 exhibited significant inhibitory activity against TNF-α and IL-6, with IC50 values of 3.99 µM and 2.24 µM, respectively. Compound 8 displayed potent anti-inflammatory activity against mouse ear edema induced by croton oil. A mechanistic study suggested that compounds 7 and 8 decreased the activation of the NF-κB signaling pathway to reduce the secretion of inflammatory factors in LPS-induced RAW 264.7 cells.


Subjects
Clematis, Glycosides, Mice, Animals, Glycosides/pharmacology, Rhizome, Clematis/chemistry, Clematis/metabolism, Tumor Necrosis Factor-alpha/metabolism, Anti-Inflammatory Agents/pharmacology, Anti-Inflammatory Agents/chemistry, Indole Alkaloids
3.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10478-10487, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37030750

ABSTRACT

The mainstream approach for filter pruning is usually either to force a hard-coded importance estimation upon a computation-heavy pretrained model to select "important" filters, or to impose a hyperparameter-sensitive sparse constraint on the loss objective to regularize the network training. In this paper, we present a novel filter pruning method, dubbed dynamic-coded filter fusion (DCFF), to derive compact CNNs in a computation-economical and regularization-free manner for efficient image classification. Each filter in our DCFF is first given an inter-similarity distribution with a temperature parameter as a filter proxy, on top of which a novel Kullback-Leibler divergence-based dynamic-coded criterion is proposed to evaluate the filter importance. In contrast to other methods that simply keep high-scoring filters, we propose the concept of filter fusion, i.e., using the weighted averages computed with the assigned proxies as our preserved filters. We obtain a one-hot inter-similarity distribution as the temperature parameter approaches infinity. Thus, the relative importance of each filter can vary along with the training of the compact CNN, leading to dynamically changeable fused filters without either dependence on a pretrained model or the introduction of sparse constraints. Extensive experiments on classification benchmarks demonstrate the superiority of our DCFF over the compared counterparts. For example, our DCFF derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while reaching a top-1 accuracy of 93.47% on CIFAR-10. A compact ResNet-50 is obtained with 63.8% FLOPs and 58.6% parameter reductions, retaining 75.60% top-1 accuracy on ILSVRC-2012. Our code, narrower models and training logs are available at https://github.com/lmbxmu/DCFF.
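
A rough PyTorch sketch of the proxy-and-fusion idea (an illustration under assumed shapes; the peakedness ranking below is a stand-in for the paper's KL-divergence-based criterion, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def fuse_filters(weight: torch.Tensor, keep: int, temperature: float) -> torch.Tensor:
    # weight: (out_channels, in_channels, kh, kw) convolutional filters.
    flat = weight.flatten(1)
    # Inter-similarity distribution (the "filter proxy"): softmax over negative
    # pairwise distances, scaled by the temperature parameter.
    dist = torch.cdist(flat, flat)
    proxy = F.softmax(-dist / temperature, dim=1)
    # Fused filters are weighted averages of all filters under each proxy.
    fused = proxy @ flat
    # Rank proxies by how peaked they are (illustrative importance score) and keep `keep`.
    peakedness = (proxy * proxy.clamp_min(1e-12).log()).sum(dim=1)
    idx = peakedness.topk(keep).indices
    return fused[idx].view(keep, *weight.shape[1:])

compact = fuse_filters(torch.randn(64, 32, 3, 3), keep=32, temperature=0.5)
print(compact.shape)   # torch.Size([32, 32, 3, 3])
```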

4.
IEEE Trans Pattern Anal Mach Intell ; 45(9): 11108-11119, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37023149

ABSTRACT

A resource-adaptive supernet adjusts its subnets for inference to fit the dynamically available resources. In this paper, we propose prioritized subnet sampling to train a resource-adaptive supernet, termed PSS-Net. We maintain multiple subnet pools, each of which stores information about a large number of subnets with similar resource consumption. Given a resource constraint, subnets conditioned on it are sampled from a pre-defined subnet structure space, and high-quality ones are inserted into the corresponding subnet pool. As training proceeds, sampling gradually shifts toward drawing subnets from these pools. Moreover, when sampling from a subnet pool, subnets with better performance metrics are assigned higher priority for training our PSS-Net. At the end of training, our PSS-Net retains the best subnet in each pool, enabling a fast switch among high-quality subnets for inference when the available resources vary. Experiments on ImageNet using MobileNet-V1/V2 and ResNet-50 show that our PSS-Net outperforms state-of-the-art resource-adaptive supernets. Our project is publicly available at https://github.com/chenbong/PSS-Net.
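
A toy sketch of prioritized subnet sampling with pools (pure Python with made-up cost and quality placeholders; not the released training code):

```python
import random

random.seed(0)

# Pools keyed by an illustrative resource bucket (e.g. MFLOPs); each stores
# (subnet, metric) pairs for subnets of similar cost.
pools = {100: [], 200: [], 300: []}
space = [(32, 64), (64, 128), (96, 160)]      # per-stage width choices (made up)

def cost(subnet):                 # placeholder cost model
    return sum(subnet)

def quality(subnet):              # placeholder metric, e.g. validation accuracy
    return random.random()

def sample_subnet(pool_prob):
    """With probability pool_prob, re-sample a stored subnet, preferring the ones
    with better metrics; otherwise draw a fresh subnet from the structure space."""
    nonempty = [p for p in pools.values() if p]
    if nonempty and random.random() < pool_prob:
        pool = random.choice(nonempty)
        nets, metrics = zip(*pool)
        return random.choices(nets, weights=metrics, k=1)[0]
    return tuple(random.choice(widths) for widths in space)

for step in range(50):
    net = sample_subnet(pool_prob=min(1.0, step / 25))   # lean on pools over time
    bucket = min(pools, key=lambda b: abs(b - cost(net)))
    pools[bucket].append((net, quality(net)))
    pools[bucket] = sorted(pools[bucket], key=lambda x: -x[1])[:5]  # keep the best 5
```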

5.
IEEE Trans Pattern Anal Mach Intell ; 45(5): 6277-6288, 2023 May.
Article in English | MEDLINE | ID: mdl-36215372

ABSTRACT

Binary neural networks (BNNs) have attracted broad research interest due to their efficient storage and computational ability. Nevertheless, a significant challenge of BNNs lies in handling discrete constraints while ensuring bit entropy maximization, which typically makes their weight optimization very difficult. Existing methods relax the learning using the sign function, which simply encodes positive weights into +1s and the rest into -1s. To address this challenge, we instead formulate an angle alignment objective that constrains the weight binarization to {0,+1}. In this article, we show that our weight binarization provides an analytical solution by encoding high-magnitude weights into +1s and the rest into 0s. Therefore, a high-quality discrete solution is established in a computationally efficient manner without the sign function. We prove that the learned weights of binarized networks roughly follow a Laplacian distribution that does not allow entropy maximization, and further demonstrate that this can be effectively remedied by simply removing the l2 regularization during network training. Our method, dubbed sign-to-magnitude network binarization (SiMaN), is evaluated on CIFAR-10 and ImageNet, demonstrating its superiority over sign-based state-of-the-art methods. Our source code, experimental settings, training logs and binary models are available at https://github.com/lmbxmu/SiMaN.
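
A minimal sketch of the {0,+1} magnitude-based encoding and the no-l2 training setup (illustrative; the keep ratio, shapes, and names are assumptions, not the authors' code):

```python
import torch

def siman_style_binarize(weight: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    # Analytical {0, +1} encoding: the highest-magnitude weights become +1,
    # everything else becomes 0 (no sign function involved).
    k = max(1, int(weight.numel() * keep_ratio))
    idx = weight.abs().flatten().topk(k).indices
    code = torch.zeros(weight.numel(), device=weight.device)
    code[idx] = 1.0
    return code.view_as(weight)

w = torch.nn.Parameter(torch.randn(256, 128))
b = siman_style_binarize(w.detach())
# Per the abstract, the latent weights would be trained without l2 regularization
# (weight_decay=0), since it hinders bit entropy maximization.
opt = torch.optim.SGD([w], lr=0.1, weight_decay=0.0)
```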

6.
IEEE Trans Pattern Anal Mach Intell ; 45(4): 3999-4008, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35917571

ABSTRACT

Though network pruning has gained popularity for reducing the complexity of convolutional neural networks (CNNs), it remains an open issue to concurrently maintain model accuracy and achieve significant speedups on general CPUs. In this paper, we propose a novel 1×N pruning pattern to break this limitation. In particular, consecutive N output kernels with the same input channel index are grouped into one block, which serves as the basic pruning granularity of our pruning pattern. Our 1×N pattern prunes the blocks considered unimportant. We also provide a workflow of filter rearrangement that first rearranges the weight matrix in the output channel dimension to derive more influential blocks for accuracy improvements, and then applies a similar rearrangement to the next-layer weights in the input channel dimension to ensure correct convolutional operations. Moreover, the output computation after our 1×N pruning can be realized via a parallelized block-wise vectorized operation, leading to significant speedups on general CPUs. The efficacy of our pruning pattern is demonstrated with experiments on ILSVRC-2012. For example, given a pruning rate of 50% and N=4, our pattern obtains about 3.0% improvement over filter pruning in the top-1 accuracy of MobileNet-V2. Meanwhile, it obtains 56.04 ms of inference savings on a Cortex-A7 CPU over weight pruning. Our project is made available at https://github.com/lmbxmu/1xN.
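
A small PyTorch sketch of the 1×N block pattern (assuming the output-channel count is divisible by N and using a plain l1-norm block score; the filter rearrangement step is omitted):

```python
import torch

def one_by_n_prune(weight: torch.Tensor, n: int, prune_rate: float) -> torch.Tensor:
    """Consecutive n output kernels sharing one input channel form a block;
    blocks with the smallest l1 norms are zeroed (assumes out_channels % n == 0)."""
    c_out, c_in, kh, kw = weight.shape
    blocks = weight.view(c_out // n, n, c_in, kh, kw)
    importance = blocks.abs().sum(dim=(1, 3, 4))          # (c_out // n, c_in)
    k = int(importance.numel() * prune_rate)
    thresh = importance.flatten().kthvalue(k).values if k > 0 else -1.0
    mask = (importance > thresh).float()[:, None, :, None, None]
    return (blocks * mask).view_as(weight)

pruned = one_by_n_prune(torch.randn(64, 32, 3, 3), n=4, prune_rate=0.5)
print((pruned == 0).float().mean())    # roughly half of the weights removed
```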

7.
IEEE Trans Neural Netw Learn Syst ; 34(11): 9139-9148, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35294359

ABSTRACT

This article focuses on filter-level network pruning. A novel pruning method, termed CLR-RNF, is proposed. We first reveal a "long-tail" pruning problem in magnitude-based weight pruning methods and then propose a computation-aware measurement for individual weight importance, followed by a cross-layer ranking (CLR) of weights to identify and remove the bottom-ranked weights. The resulting per-layer sparsity then defines the pruned network structure in our filter pruning. Next, we introduce a recommendation-based filter selection scheme in which each filter recommends a group of its closest filters. To pick the preserved filters from these recommended groups, we further devise a k-reciprocal nearest filter (RNF) selection scheme in which the selected filters fall into the intersection of these recommended groups. Both the pruned network structure and the filter selection are non-learning processes, which significantly reduces the pruning complexity and differentiates our method from existing works. We conduct image classification on CIFAR-10 and ImageNet to demonstrate the superiority of our CLR-RNF over state-of-the-art methods. For example, on CIFAR-10, CLR-RNF removes 74.1% of FLOPs and 95.0% of parameters from VGGNet-16 with even a 0.3% accuracy improvement. On ImageNet, it removes 70.2% of FLOPs and 64.8% of parameters from ResNet-50 with only a 1.7% top-5 accuracy drop. Our project is available at https://github.com/lmbxmu/CLR-RNF.
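
An illustrative PyTorch sketch of the k-reciprocal nearest filter selection (the cross-layer weight ranking is omitted; the distance measure, k, and the voting rule are simplifying assumptions):

```python
import torch

def k_reciprocal_filters(filters: torch.Tensor, k: int, keep: int) -> torch.Tensor:
    """Each filter 'recommends' its k nearest filters; a filter earns a vote when a
    recommendation is reciprocal, and the most-voted filters are preserved."""
    flat = filters.flatten(1)
    dist = torch.cdist(flat, flat)
    # k nearest neighbours of each filter (skip column 0, the filter itself)
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]
    votes = torch.zeros(flat.size(0))
    for i in range(flat.size(0)):
        for j in knn[i].tolist():
            if i in knn[j].tolist():        # reciprocal recommendation
                votes[i] += 1
    return votes.topk(keep).indices         # indices of preserved filters

kept = k_reciprocal_filters(torch.randn(64, 32, 3, 3), k=5, keep=32)
```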

8.
IEEE Trans Neural Netw Learn Syst ; 34(10): 7946-7955, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35157600

ABSTRACT

Channel pruning has long been studied as a way to compress convolutional neural networks (CNNs), since it significantly reduces the overall computation. Prior works implement channel pruning in an unexplainable manner, which tends to reduce the final classification error while failing to consider the internal influence of each channel. In this article, we conduct channel pruning in a white box. Through deep visualization of feature maps activated by different channels, we observe that different channels contribute differently to different categories in image classification. Inspired by this, we choose to preserve channels that contribute to most categories. Specifically, to model the contribution of each channel to differentiating categories, we develop a class-wise mask for each channel, implemented in a dynamic training manner with respect to the input image's category. On the basis of the learned class-wise masks, we perform a global voting mechanism to remove channels with less category discrimination. Lastly, a fine-tuning process is conducted to recover the performance of the pruned model. To the best of our knowledge, this is the first time that CNN interpretability theory has been used to guide channel pruning. Extensive experiments on representative image classification tasks demonstrate the superiority of our White-Box over many state-of-the-art (SOTA) methods. For instance, on CIFAR-10, it reduces floating-point operations (FLOPs) by 65.23% with even a 0.62% accuracy improvement for ResNet-110. On ILSVRC-2012, White-Box achieves a 45.6% FLOP reduction with only a small loss of 0.83% in top-1 accuracy for ResNet-50. Code is available at https://github.com/zyxxmu/White-Box.
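
A minimal PyTorch sketch of a class-wise channel mask and a global vote (the shapes, sigmoid gating, and 0.5 vote threshold are assumptions for illustration, not the released code):

```python
import torch
import torch.nn as nn

class ClassWiseMask(nn.Module):
    """One learnable mask row per class; during training, each feature map is
    scaled by the mask row of its image's ground-truth class."""
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.masks = nn.Parameter(torch.ones(num_classes, channels))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        m = torch.sigmoid(self.masks[labels])          # (batch, channels)
        return feats * m[:, :, None, None]

def vote_channels(masks: torch.Tensor, keep: int) -> torch.Tensor:
    # A channel gets one vote from every class whose mask clearly keeps it;
    # channels with the fewest votes (least category discrimination) go first.
    votes = (torch.sigmoid(masks) > 0.5).sum(dim=0).float()
    return votes.topk(keep).indices

layer_mask = ClassWiseMask(channels=64, num_classes=10)
out = layer_mask(torch.randn(8, 64, 16, 16), labels=torch.randint(0, 10, (8,)))
kept = vote_channels(layer_mask.masks.detach(), keep=32)
```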

9.
IEEE Trans Neural Netw Learn Syst ; 34(11): 8743-8752, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35254994

ABSTRACT

Existing online knowledge distillation approaches either adopt the student with the best performance or construct an ensemble model for better holistic performance. However, the former strategy ignores the other students' information, while the latter increases the computational complexity during deployment. In this article, we propose a novel method for online knowledge distillation, termed feature fusion and self-distillation (FFSD), which comprises these two key components and solves the above problems in a unified framework. Different from previous works, where all students are treated equally, the proposed FFSD splits them into a leader student set and a common student set. The feature fusion module then converts the concatenation of feature maps from all common students into a fused feature map. The fused representation is used to assist the learning of the leader student. To enable the leader student to absorb more diverse information, we design an enhancement strategy to increase the diversity among students. In addition, a self-distillation module is adopted to convert the feature maps of deeper layers into shallower ones; the shallower layers are then encouraged to mimic the transformed feature maps of the deeper layers, which helps the students generalize better. After training, we simply adopt the leader student, which achieves superior performance over the common students, without increasing the storage or inference cost. Extensive experiments on CIFAR-100 and ImageNet demonstrate the superiority of our FFSD over existing works. The code is available at https://github.com/SJLeo/FFSD.
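
A small PyTorch sketch of the feature fusion module and the assist loss for the leader student (channel sizes and the MSE choice are assumptions; self-distillation and the diversity strategy are omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """Concatenate the common students' feature maps along the channel axis and
    squeeze them back to one fused map with a 1x1 convolution."""
    def __init__(self, channels: int, num_common: int):
        super().__init__()
        self.fuse = nn.Conv2d(channels * num_common, channels, kernel_size=1)

    def forward(self, common_feats):          # list of (B, C, H, W) tensors
        return self.fuse(torch.cat(common_feats, dim=1))

fusion = FeatureFusion(channels=64, num_common=3)
common = [torch.randn(2, 64, 8, 8) for _ in range(3)]
leader_feat = torch.randn(2, 64, 8, 8, requires_grad=True)
# The leader student learns from the fused representation, e.g. by adding an MSE
# term between its feature map and the (detached) fused map to its task loss.
assist_loss = F.mse_loss(leader_feat, fusion(common).detach())
assist_loss.backward()
```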

10.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 2945-2951, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35588416

ABSTRACT

Few-shot class-incremental learning (FSCIL) is challenged by catastrophic forgetting of old classes and overfitting to new classes. Our analyses reveal that these problems are caused by the crumbling of the feature distribution, which leads to class confusion when few samples are continuously embedded into a fixed feature space. In this study, we propose a Dynamic Support Network (DSN), an adaptively updating network with compressive node expansion that "supports" the feature space. In each training session, DSN tentatively expands network nodes to enlarge the feature representation capacity for incremental classes. It then dynamically compresses the expanded network via node self-activation to pursue a compact feature representation, which alleviates overfitting. Simultaneously, DSN selectively recalls old class distributions during incremental learning to support the feature distributions and avoid confusion between classes. DSN, with compressive node expansion and class distribution recalling, provides a systematic solution to the problems of catastrophic forgetting and overfitting. Experiments on the CUB, CIFAR-100, and miniImageNet datasets show that DSN significantly improves upon the baseline approach, achieving new state-of-the-art results.
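
A schematic PyTorch sketch of compressive node expansion with self-activation gating (a toy head rather than the authors' architecture; the gate threshold is an assumption):

```python
import torch
import torch.nn as nn

class ExpandableHead(nn.Module):
    """Each incremental session may add nodes; a learnable self-activation gate
    lets weakly activated nodes be compressed away afterwards."""
    def __init__(self, in_dim: int, base_nodes: int):
        super().__init__()
        self.fc = nn.Linear(in_dim, base_nodes)
        self.gate = nn.Parameter(torch.ones(base_nodes))

    def expand(self, extra_nodes: int):
        # Tentatively enlarge the representation capacity for new classes.
        old = self.fc
        self.fc = nn.Linear(old.in_features, old.out_features + extra_nodes)
        with torch.no_grad():
            self.fc.weight[: old.out_features] = old.weight
            self.fc.bias[: old.out_features] = old.bias
        self.gate = nn.Parameter(torch.cat([self.gate.data, torch.ones(extra_nodes)]))

    def forward(self, x):
        return self.fc(x) * torch.sigmoid(self.gate)     # node self-activation

    def surviving_nodes(self, threshold: float = 0.5):
        # Nodes whose gates stay low would be pruned to keep the head compact.
        return (torch.sigmoid(self.gate) > threshold).nonzero().flatten()

head = ExpandableHead(in_dim=128, base_nodes=64)
head.expand(16)                                           # new session: +16 nodes
out = head(torch.randn(4, 128))
```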

11.
ACS Nano ; 16(12): 20739-20757, 2022 12 27.
Article in English | MEDLINE | ID: mdl-36454190

ABSTRACT

Hepatic fibrosis is a chronic liver disease that lacks effective pharmacotherapeutic treatments. As part of the disease's mechanism, hepatic stellate cells (HSCs) are activated by damage-related stimuli to secrete excessive extracellular matrix, leading to collagen deposition. Currently, developing a drug delivery system that targets HSCs for the treatment of liver fibrosis remains an urgent challenge due to the poor controllability of drug release. Since the level of reactive oxygen species (ROS) increases sharply in activated HSCs (aHSCs), we designed ROS-responsive micelles for the HSC-specific delivery of a traditional Chinese medicine, resveratrol (RES), for the treatment of liver fibrosis. The micelles were prepared from the ROS-responsive amphiphilic block copolymer poly(l-methionine-block-Nε-trifluoro-acetyl-l-lysine) (PMK) and a PEG shell modified with a CRGD peptide insertion. The CRGD-targeted and ROS-responsive micelles (CRGD-PMK-MCs) could target aHSCs and control the release of RES under the high intracellular ROS conditions of aHSCs. CRGD-PMK-MC treatment specifically enhanced the targeted delivery of RES to aHSCs both in vitro and in vivo. In vitro experiments show that CRGD-PMK-MCs could significantly promote ROS consumption, reduce collagen accumulation, and avert activation of aHSCs. In vivo results demonstrate that CRGD-PMK-MCs could alleviate inflammatory infiltration, prevent fibrosis, and protect hepatocytes from damage in fibrotic mice. In conclusion, CRGD-PMK-MCs show great potential for targeted and ROS-responsive controlled drug release in the aHSCs of liver fibrosis.


Subjects
Hepatic Stellate Cells, Micelles, Mice, Animals, Reactive Oxygen Species/pharmacology, Liver Cirrhosis/drug therapy, Drug Delivery Systems, Collagen/pharmacology, Liver
12.
Phytochemistry ; 197: 113135, 2022 May.
Article in English | MEDLINE | ID: mdl-35181314

ABSTRACT

A full set of 8,4'-oxy-8'-phenylneolignans with four chiral carbons, named (+)/(-)-leptolepisols D1‒D2 and (+)/(-)-sophorols A‒F, was isolated from the roots and rhizomes of Sophora tonkinensis Gagnep., including 14 previously undescribed stereoisomers along with 2 known leptolepisol D diastereomers. Their planar structures and relative configurations were elucidated by detailed spectroscopic analysis (HRESIMS and NMR). Based on a highly accurate conformer-filtering protocol with low computational cost, the absolute configurations of the full set of 8,4'-oxy-8'-phenylneolignans were completely assigned by TDDFT calculations of ECD spectra for the first time. Furthermore, (+)/(-)-sophorol A, (-)-sophorol B, and (-)-sophorol E could moderately suppress lipopolysaccharide-induced nitric oxide production in murine macrophages at 10 µM, with inhibitory ratios of 48.4-52.9%.


Subjects
Sophora, Animals, Mice, Molecular Structure, Nitric Oxide, Plant Roots/chemistry, Rhizome, Sophora/chemistry, Stereoisomerism
13.
IEEE Trans Neural Netw Learn Syst ; 33(12): 7357-7366, 2022 12.
Article in English | MEDLINE | ID: mdl-34101606

ABSTRACT

Popular network pruning algorithms reduce redundant information by optimizing hand-crafted models, which may cause suboptimal performance and require a long time for selecting filters. We innovatively introduce adaptive exemplar filters to simplify the algorithm design, resulting in an automatic and efficient pruning approach called EPruner. Inspired by the face recognition community, we apply the message-passing algorithm Affinity Propagation to the weight matrices to obtain an adaptive number of exemplars, which then act as the preserved filters. EPruner breaks the dependence on the training data in determining the "important" filters and allows a CPU implementation that runs in seconds, an order of magnitude faster than GPU-based SOTAs. Moreover, we show that the weights of the exemplars provide a better initialization for fine-tuning. On VGGNet-16, EPruner achieves a 76.34% FLOPs reduction by removing 88.80% of parameters, with a 0.06% accuracy improvement on CIFAR-10. On ResNet-152, EPruner achieves a 65.12% FLOPs reduction by removing 64.18% of parameters, with only a 0.71% top-5 accuracy loss on ILSVRC-2012. Our code is available at https://github.com/lmbxmu/EPruner.
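
A minimal sketch of exemplar selection with scikit-learn's Affinity Propagation (illustrative; preferences and per-layer handling are left at library defaults rather than the paper's settings):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def exemplar_filters(conv_weight: np.ndarray) -> np.ndarray:
    """Run Affinity Propagation over flattened filters; the exemplars it picks
    (an adaptive number, no preset per-layer pruning rate) become the preserved filters."""
    flat = conv_weight.reshape(conv_weight.shape[0], -1)
    ap = AffinityPropagation(random_state=0).fit(flat)
    return ap.cluster_centers_indices_

kept = exemplar_filters(np.random.randn(64, 32, 3, 3))
print(len(kept), "of 64 filters preserved")
```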


Subjects
Algorithms, Neural Networks, Computer
14.
Nat Prod Res ; 36(21): 5400-5406, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34121549

ABSTRACT

Two new prenylaromadendrane-type diterpenoids and three known analogues were isolated from the ethanol extract of the gum resin of Boswellia sacra Flueck. The structures of the new compounds were elucidated using 1D and 2D NMR spectroscopic analyses, mass spectrometric data, circular dichroism spectra, and comparison with other compounds in the literature. One diterpenoid represents the first example of an acetoxyl-substituted prenylaromadendranoid in frankincense. All compounds exhibited notable cytotoxicity against the human malignant glioma (U87-MG) cell line, with inhibitory rates exceeding that of the positive control 5-fluorouracil. However, inhibition of lipopolysaccharide-induced nitric oxide production was not observed in primary mouse peritoneal macrophages.


Subjects
Boswellia, Diterpenes, Mice, Humans, Animals, Boswellia/chemistry, Diterpenes/pharmacology, Diterpenes/chemistry, Macrophages, Peritoneal, Plant Resins/pharmacology, Plant Resins/chemistry
15.
IEEE Trans Neural Netw Learn Syst ; 33(12): 7091-7100, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34125685

ABSTRACT

We propose a novel network pruning approach based on preserving the information of pretrained network weights (filters). Network pruning with information preservation is formulated as a matrix sketch problem, which is efficiently solved by the off-the-shelf frequent direction method. Our approach, referred to as FilterSketch, encodes the second-order information of the pretrained weights, which enables the representation capacity of the pruned networks to be recovered with a simple fine-tuning procedure. FilterSketch requires neither training from scratch nor data-driven iterative optimization, leading to a several-orders-of-magnitude reduction in the time cost of pruning optimization. Experiments on CIFAR-10 show that FilterSketch reduces 63.3% of floating-point operations (FLOPs) and prunes 59.9% of network parameters with negligible accuracy cost for ResNet-110. On ILSVRC-2012, it reduces 45.5% of FLOPs and removes 43.0% of parameters with only a 0.69% accuracy drop for ResNet-50. Our code and pruned models can be found at https://github.com/lmbxmu/FilterSketch.
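
A compact NumPy sketch of the frequent-directions routine that underlies the matrix sketching (a textbook version under assumed sizes, not the released implementation):

```python
import numpy as np

def frequent_directions(A: np.ndarray, sketch_rows: int) -> np.ndarray:
    """Frequent-directions sketch B of A (one row per flattened filter): B has far
    fewer rows than A, yet B.T @ B approximates A.T @ A, i.e. the second-order
    information of the pretrained weights is preserved."""
    _, d = A.shape
    B = np.zeros((sketch_rows, d))
    for row in A:
        zero_rows = np.where(~B.any(axis=1))[0]
        B[zero_rows[0]] = row                     # insert into the first empty row
        if len(zero_rows) == 1:                   # sketch is full: shrink it
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[sketch_rows // 2] ** 2
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s[:, None] * Vt                   # zeroes out the bottom half again
    return B

filters = np.random.randn(64, 32 * 3 * 3)         # 64 flattened filters
sketch = frequent_directions(filters, sketch_rows=16)
print(sketch.shape)                               # (16, 288): a 4x smaller summary
```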

16.
IEEE Trans Pattern Anal Mach Intell ; 44(5): 2453-2467, 2022 May.
Article in English | MEDLINE | ID: mdl-33270558

ABSTRACT

Online image hashing, which processes large-scale data in a streaming fashion to update the hash functions on the fly, has received increasing research attention recently. Most existing works address this problem under a supervised setting, i.e., using class labels to boost the hashing performance, and they suffer from defects in both adaptivity and efficiency: First, large numbers of training batches are required to learn up-to-date hash functions, which leads to poor online adaptivity. Second, the training is time-consuming, which contradicts the core requirement of online learning. In this paper, a novel supervised online hashing scheme, termed Fast Class-wise Updating for Online Hashing (FCOH), is proposed to address the above two challenges by introducing a novel and efficient inner product operation. To achieve fast online adaptivity, a class-wise updating method is developed to decompose the binary code learning and alternately renew the hash functions in a class-wise fashion, which well alleviates the burden of large numbers of training batches. Quantitatively, such a decomposition further leads to at least a 75 percent storage saving. To further achieve online efficiency, we propose a semi-relaxation optimization, which accelerates the online training by treating different binary constraints independently. Without additional constraints and variables, the time complexity is significantly reduced. Such a scheme is also quantitatively shown to preserve past information well while updating the hash functions. We quantitatively demonstrate that the collective effort of class-wise updating and semi-relaxation optimization provides superior performance compared with various state-of-the-art methods, as verified through extensive experiments on three widely used datasets.
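
A toy NumPy sketch of a class-wise streaming update toward per-class target codes (the inner-product formulation and semi-relaxation are simplified away; all sizes and the gradient step are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, dim, n_classes = 32, 128, 10
class_codes = np.sign(rng.standard_normal((n_classes, n_bits)))  # one target code per class
W = 0.01 * rng.standard_normal((dim, n_bits))                    # hash projection

def class_wise_update(X, y, lr=0.1, lam=1e-3):
    """Streaming update: each class present in the batch pulls W toward mapping its
    samples onto that class's target binary code, one class at a time."""
    global W
    for c in np.unique(y):
        Xc = X[y == c]
        target = np.tile(class_codes[c], (len(Xc), 1))
        grad = Xc.T @ (Xc @ W - target) / len(Xc) + lam * W
        W -= lr * grad

X = rng.standard_normal((64, dim))
y = rng.integers(0, n_classes, 64)
class_wise_update(X, y)                      # one streaming batch
codes = np.sign(X @ W)                       # binary codes used for retrieval
```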

17.
Eur J Med Chem ; 225: 113791, 2021 Dec 05.
Article in English | MEDLINE | ID: mdl-34450495

ABSTRACT

Cytotoxic T lymphocytes (CTLs), key effector cells in the immune injury of aplastic anemia (AA), are a potential target for AA drug therapy. However, no drug candidate for this target has been available to date. Guided by inhibitory activity against CTLs and macrophage-derived nitric oxide (NO), a series of novel sinomenine derivatives modified on rings A and C were designed, synthesized, and screened. Among them, compound 3a demonstrates the best inhibitory activity on CTLs, with an IC50 value of 2.3 µM, and a 97.1% inhibition rate of macrophage NO production without significant cytotoxicity. Furthermore, compound 3a exhibits substantial therapeutic efficacy against immune-mediated bone marrow (BM) failure in AA model mice by improving the symptoms of anemia and the function of BM hematopoiesis, and it shows more advantages in improving quality of life than cyclosporine A (CsA). Its efficacy in AA comes at least partly from targeting activated cluster of differentiation (CD)8+ T cells. Additionally, 3a also shows much lower toxicity (LD50 > 10.0 g/kg) than sinomenine (LD50 = 1.1 g/kg) in a preliminary acute toxicity assessment in mice, and it has a low risk of hERG inhibition causing cardiotoxicity. These results indicate that compound 3a merits further investigation for AA treatment by targeting CTLs.


Subjects
Anemia, Aplastic/drug therapy, Antirheumatic Agents/pharmacology, Drug Design, Morphinans/pharmacology, T-Lymphocytes, Cytotoxic/drug effects, Anemia, Aplastic/immunology, Animals, Antirheumatic Agents/chemical synthesis, Antirheumatic Agents/chemistry, Cells, Cultured, Dose-Response Relationship, Drug, Male, Mice, Mice, Inbred Strains, Molecular Structure, Morphinans/chemical synthesis, Morphinans/chemistry, Structure-Activity Relationship, T-Lymphocytes, Cytotoxic/immunology
18.
J Inflamm Res ; 14: 2173-2185, 2021.
Article in English | MEDLINE | ID: mdl-34079326

ABSTRACT

INTRODUCTION: Asthma-chronic obstructive pulmonary disease (COPD) overlap (ACO) combines characteristics of both asthma and COPD, with more frequent exacerbations, a heavier disease burden, higher medical utilization, and even lower quality of life. However, standard medications for ACO supported by evidence-based medicine are not yet available. METHODS: Using an ACO mouse model established previously and LPS-stimulated RAW264.7 macrophages in vitro, a potential therapeutic candidate, EAPP-2, was screened from derivatives of 3-arylbenzofuran, and its effect and mechanism on ACO inflammation were evaluated. RESULTS: EAPP-2 significantly alleviated airway inflammation in ACO mice and also inhibited the inflammatory reactions in LPS-induced RAW264.7 macrophages in vitro. Furthermore, EAPP-2 significantly inhibited the expression and phosphorylation of spleen tyrosine kinase (Syk), a common target regulating both eosinophilic and neutrophilic inflammation. In addition, EAPP-2 significantly down-regulated the expression of NF-κB, p-NF-κB, and NLRP3 in vivo and in vitro. Moreover, by using specific inhibitors in vitro, it was validated that EAPP-2 targets Syk and then regulates its downstream NF-κB and NLRP3. CONCLUSION: EAPP-2 is a potentially useful therapeutic candidate for ACO, and its mechanism is at least partially achieved by targeting Syk and then inhibiting NF-κB or NLRP3. Moreover, this study suggests that Syk may be a potentially effective target for ACO therapy.

19.
Mediators Inflamm ; 2021: 6611219, 2021.
Article in English | MEDLINE | ID: mdl-34045925

ABSTRACT

Perilla frutescens (L.) Britton is a classic herbal plant widely used against asthma in China, but the mechanism of its beneficial effect remains undetermined. In this study, the anti-allergic asthma effects of Perilla leaf extract (PLE) were investigated, and the underlying mechanism was explored. Results showed that PLE treatment significantly attenuated airway inflammation in OVA-induced asthmatic mice by ameliorating lung pathological changes, inhibiting the recruitment of inflammatory cells in lung tissues and bronchoalveolar lavage fluid (BALF), decreasing the production of inflammatory cytokines in the BALF, and reducing the level of immunoglobulin in serum. PLE treatment also suppressed the inflammatory response in antigen-induced rat basophilic leukemia 2H3 (RBL-2H3) cells as well as in OVA-induced human peripheral blood mononuclear cells (PBMCs). Furthermore, PLE markedly inhibited the expression and phosphorylation of Syk, NF-κB, PKC, and cPLA2 both in vivo and in vitro. By co-treating with inhibitors (BAY61-3606, Rottlerin, BAY11-7082, and arachidonyl trifluoromethyl ketone) in vitro, results revealed that PLE's anti-allergic inflammatory effects were associated with the inhibition of Syk and its downstream signals NF-κB, PKC, and cPLA2. Collectively, these results suggest that PLE can attenuate allergic inflammation and that its mechanism may be partly mediated through inhibition of the Syk pathway.


Subjects
Asthma, Perilla, Animals, Asthma/metabolism, Bronchoalveolar Lavage Fluid, Disease Models, Animal, Inflammation/metabolism, Leukocytes, Mononuclear/metabolism, Lung/metabolism, Mice, NF-kappa B/metabolism, Perilla/metabolism, Plant Extracts/pharmacology, Plant Extracts/therapeutic use, Rats, Signal Transduction
20.
Phytochemistry ; 187: 112761, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33933827

ABSTRACT

Seven previously unidentified polycyclic polyprenylated acylphloroglucinol (PPAP) derivatives, hypseudohenrins A-G, along with six known analogs, were isolated from the aerial parts of Hypericum pseudohenryi. Their structures were determined by NMR and ECD spectroscopy and X-ray crystallography. These compounds were screened for anti-inflammatory activity, and hypseudohenrins B and G (at a concentration of 10 µM) showed NO production inhibition ratios of 52.56% and 54.01%, respectively, implying good anti-inflammatory activity. In particular, uraloidin A exhibited an NO inhibition ratio of 90.61%, while that of the positive control dexamethasone was 94.88%. Additionally, anti-cancer and neuroprotective activities were screened, but none of these compounds showed desirable activity.


Subjects
Hypericum, Anti-Inflammatory Agents/pharmacology, Magnetic Resonance Spectroscopy, Molecular Structure, Phloroglucinol/pharmacology