3.
EMBO Rep; 20(9): e47892, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31318145

ABSTRACT

The conversion of skeletal muscle fibers from fast-twitch to slow-twitch is important for sustained and tonic contractile events, maintenance of energy homeostasis, and alleviation of fatigue. Skeletal muscle remodeling is effectively induced by endurance or aerobic exercise, which also generates several tricarboxylic acid (TCA) cycle intermediates, including succinate. However, whether succinate regulates muscle fiber-type transitions remains unclear. Here, we found that dietary succinate supplementation increased endurance exercise ability, myosin heavy chain I expression, aerobic enzyme activity, oxygen consumption, and mitochondrial biogenesis in mouse skeletal muscle. By contrast, succinate decreased lactate dehydrogenase activity, lactate production, and myosin heavy chain IIb expression. Further, using pharmacological or genetic loss-of-function models generated with phospholipase Cβ antagonists, SUNCR1 global knockout, or SUNCR1 gastrocnemius-specific knockdown, we found that the effects of succinate on skeletal muscle fiber-type remodeling are mediated by SUNCR1 and its downstream calcium/NFAT signaling pathway. In summary, our results demonstrate that succinate induces the fast-to-slow transition of skeletal muscle fibers via the SUNCR1 signaling pathway. These findings suggest the potential beneficial use of succinate-based compounds in both athletic and sedentary populations.


Subject(s)
Muscle Fibers, Skeletal/drug effects; Muscle Fibers, Skeletal/metabolism; Muscle, Skeletal/metabolism; Succinic Acid/pharmacology; Animals; Citric Acid Cycle/drug effects; Male; Mice; Mice, Inbred C57BL; Muscle Contraction/drug effects; Muscle Fatigue/drug effects; Muscle, Skeletal/drug effects; Myosin Heavy Chains/metabolism; Oxygen Consumption/drug effects; Signal Transduction/drug effects
4.
Analyst; 141(19): 5586-97, 2016 Oct 07.
Article in English | MEDLINE | ID: mdl-27435388

ABSTRACT

Variable selection and outlier detection are important processes in chemical modeling. They usually affect each other, and the order in which they are performed also strongly affects the modeling results. Currently, many studies perform these processes separately and in different orders. In this study, we examined the interaction between outliers and variables and compared modeling procedures performed with different orders of variable selection and outlier detection. Because the order of outlier detection and variable selection can affect the interpretation of the model, it is difficult to decide which order is preferable when the predictabilities (prediction errors) of the different orders are relatively close. To address this problem, a simultaneous variable selection and outlier detection approach called Model Adaptive Space Shrinkage (MASS) was developed. The proposed approach is based on model population analysis (MPA). Through weighted binary matrix sampling (WBMS) from the model space, a large number of partial least squares (PLS) regression models were built, and the elite part of the models was selected to statistically reassign the weight of each variable and sample. Then, the whole process was repeated until the weights of the variables and samples converged. Finally, MASS adaptively found a high-performance model consisting of the optimized variable subset and sample subset. The combination of these two subsets can be considered the cleaned dataset used for chemical modeling. In the proposed approach, the problem of the order of variable selection and outlier detection is avoided. One near-infrared (NIR) spectroscopy dataset and one quantitative structure-activity relationship (QSAR) dataset were used to test this approach. The results demonstrate that MASS is a useful method for data cleaning before building a predictive model.
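As a rough illustration of the MASS idea (not the authors' code), the sketch below runs weighted binary matrix sampling over both variables and samples, scores PLS sub-models by RMSECV, and reassigns the weights from the elite sub-models; the sub-model count, elite fraction, iteration count and convergence handling are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def mass_like(X, y, n_models=500, elite_frac=0.1, n_iter=20, n_components=5):
    """Jointly weight variables and samples via elite PLS sub-models (MASS-style sketch)."""
    n_samples, n_vars = X.shape
    w_var = np.full(n_vars, 0.5)        # inclusion probability of each variable
    w_samp = np.full(n_samples, 0.5)    # inclusion probability of each sample
    for _ in range(n_iter):
        scores, var_masks, samp_masks = [], [], []
        for _ in range(n_models):
            vm = np.random.rand(n_vars) < w_var          # weighted binary matrix sampling
            sm = np.random.rand(n_samples) < w_samp
            if vm.sum() < 2 or sm.sum() < 10:
                continue
            pls = PLSRegression(n_components=min(n_components, int(vm.sum())))
            rmsecv = -cross_val_score(pls, X[np.ix_(sm, vm)], y[sm], cv=5,
                                      scoring="neg_root_mean_squared_error").mean()
            scores.append(rmsecv); var_masks.append(vm); samp_masks.append(sm)
        if not scores:
            break
        elite = np.argsort(scores)[: max(1, int(elite_frac * len(scores)))]
        w_var = np.mean([var_masks[i] for i in elite], axis=0)    # frequency in elite
        w_samp = np.mean([samp_masks[i] for i in elite], axis=0)  # sub-models = new weight
    return w_var > 0.5, w_samp > 0.5    # retained variables and samples ("cleaned" data)
```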

5.
Anal Chim Acta; 911: 27-34, 2016 Mar 10.
Article in English | MEDLINE | ID: mdl-26893083

ABSTRACT

Biomarker discovery is an important goal in metabolomics. It is typically modeled as selecting the most discriminating metabolites for classification and is often referred to as variable importance analysis or variable selection. A number of variable importance analysis methods for discovering biomarkers in metabolomics studies have been proposed. However, different methods are likely to generate different variable rankings because they rely on different principles; each method produces a variable ranking list much as an expert offers an opinion. This inconsistency between variable ranking methods is often ignored. To address it, a simple and natural solution is to take every ranking into account. In this study, a strategy called rank aggregation was employed. It is an indispensable tool for merging individual ranking lists into a single "super"-list reflective of the overall preference or importance within the population. This "super"-list is regarded as the final ranking for biomarker discovery and is used to select the variable subset with the highest predictive classification accuracy. Nine methods were used, including three univariate filtering methods and six multivariate methods. When applied to two metabolic datasets (a childhood overweight dataset and a tubulointerstitial lesions dataset), the results show that rank aggregation greatly improves performance, yielding higher prediction accuracy than using all variables. It also compares favorably with the penalized method, the least absolute shrinkage and selection operator (LASSO), giving higher prediction accuracy or fewer, more interpretable selected variables.
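The merging step can be pictured with a much simpler aggregation than the one the study may actually use; the toy example below merges three hypothetical ranking lists by mean rank (a Borda-count style aggregation).

```python
import numpy as np

def aggregate_ranks(ranking_lists):
    """Merge per-method rankings (1 = most important) into one ordering by mean rank."""
    ranks = np.vstack(ranking_lists).astype(float)   # shape: (n_methods, n_variables)
    return np.argsort(ranks.mean(axis=0))            # indices of the "super"-list

# Three hypothetical methods ranking five metabolites:
method_a = np.array([1, 2, 3, 4, 5])
method_b = np.array([2, 1, 4, 3, 5])
method_c = np.array([1, 3, 2, 5, 4])
print(aggregate_ranks([method_a, method_b, method_c]))   # -> [0 1 2 3 4]
```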


Subject(s)
Biomarkers/metabolism; Metabolomics; Case-Control Studies; Child; Gas Chromatography-Mass Spectrometry; Humans; Models, Theoretical; Overweight/blood
6.
Anal Chim Acta; 908: 63-74, 2016 Feb 18.
Article in English | MEDLINE | ID: mdl-26826688

ABSTRACT

In this study, a new variable selection method called bootstrapping soft shrinkage (BOSS) is developed. It is derived from the ideas of weighted bootstrap sampling (WBS) and model population analysis (MPA). The weights of the variables are determined from the absolute values of the regression coefficients. WBS is applied according to these weights to generate sub-models, and MPA is used to analyze the sub-models and update the variable weights. The optimization procedure follows the rule of soft shrinkage, in which less important variables are not eliminated directly but are assigned smaller weights. The algorithm runs iteratively and terminates when the number of variables reaches one. The optimal variable set with the lowest root mean squared error of cross-validation (RMSECV) is selected. The method was tested on three groups of near-infrared (NIR) spectroscopic datasets, i.e. corn datasets, diesel fuel datasets and soy datasets. Three high-performing variable selection methods, i.e. Monte Carlo uninformative variable elimination (MCUVE), competitive adaptive reweighted sampling (CARS) and genetic algorithm partial least squares (GA-PLS), were used for comparison. The results show that BOSS is promising, with improved prediction performance. The MATLAB code for implementing BOSS is freely available at http://www.mathworks.com/matlabcentral/fileexchange/52770-boss.
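A compressed sketch of a BOSS-style loop is given below in Python (not the authors' MATLAB code): variables are bootstrap-sampled in proportion to their weights, PLS sub-models are scored by RMSECV, and the weights are softly updated from the absolute regression coefficients of the better sub-models. The sub-model count, elite fraction and stopping rule here are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def boss_like(X, y, n_sub=200, keep_frac=0.1, n_components=5, max_iter=30):
    n_vars = X.shape[1]
    weights = np.full(n_vars, 1.0 / n_vars)
    best_subset, best_rmsecv = np.arange(n_vars), np.inf
    active = np.arange(n_vars)
    for _ in range(max_iter):
        if active.size < 2:
            break
        coef_sum, scored = np.zeros(n_vars), []
        for _ in range(n_sub):
            p = weights[active] / weights[active].sum()
            chosen = np.unique(np.random.choice(active, size=active.size, p=p))  # weighted bootstrap
            pls = PLSRegression(n_components=min(n_components, chosen.size)).fit(X[:, chosen], y)
            rmsecv = -cross_val_score(pls, X[:, chosen], y, cv=5,
                                      scoring="neg_root_mean_squared_error").mean()
            scored.append((rmsecv, chosen, np.abs(pls.coef_).ravel()))
        scored.sort(key=lambda t: t[0])
        for rmsecv, chosen, coefs in scored[: max(1, int(keep_frac * n_sub))]:
            coef_sum[chosen] += coefs            # soft shrinkage: down-weight, don't delete
        if scored[0][0] < best_rmsecv:
            best_rmsecv, best_subset = scored[0][0], scored[0][1]
        weights = coef_sum / coef_sum.sum()
        active = np.where(weights > 0)[0]        # variables absent from all elite models drop out
    return best_subset, best_rmsecv
```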


Subject(s)
Models, Chemical; Algorithms; Least-Squares Analysis; Monte Carlo Method; Spectroscopy, Near-Infrared
7.
J Cheminform; 7: 60, 2015.
Article in English | MEDLINE | ID: mdl-26664458

ABSTRACT

BACKGROUND: Molecular descriptors and fingerprints are routinely used in QSAR/SAR analysis, virtual drug screening, compound search and ranking, drug ADME/T prediction and other drug discovery processes. Since the calculation of such quantitative representations of molecules may require substantial computational skill and effort, several tools have been developed to ease the process. However, several hurdles remain for users to overcome before fully harnessing the power of these tools. First, most of the tools are distributed as standalone software or packages that require configuration or programming effort from users. Second, many of the tools can only calculate a subset of molecular descriptors, and results from multiple tools must be manually merged to generate a comprehensive set of descriptors. Third, some packages only provide application programming interfaces and are implemented in different computer languages, which poses additional challenges to the integration of these tools. RESULTS: A freely available web-based platform, named ChemDes, is developed in this study. It integrates multiple state-of-the-art packages (i.e., Pybel, CDK, RDKit, BlueDesc, Chemopy, PaDEL and jCompoundMapper) for computing molecular descriptors and fingerprints. ChemDes not only provides friendly web interfaces that relieve users of burdensome programming work, but also offers three useful and convenient auxiliary tools for format conversion, MOPAC optimization and fingerprint similarity calculation. Currently, ChemDes can compute 3679 molecular descriptors and 59 types of molecular fingerprints. CONCLUSION: ChemDes provides users an integrated and friendly tool to calculate various molecular descriptors and fingerprints. It is freely available at http://www.scbdd.com/chemdes, and the source code of the project is available as a supplementary file. Graphical abstract: An overview of ChemDes, a platform for computing various molecular descriptors and fingerprints.
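ChemDes wraps toolkits such as RDKit behind a web interface; as a purely local illustration of the same kind of computation (not the ChemDes API itself), the snippet below computes a few descriptors and a fingerprint similarity with RDKit directly.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, Descriptors

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")              # aspirin
print(Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol))

# Morgan (ECFP-like) fingerprints and a Tanimoto similarity between two molecules
fp1 = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
fp2 = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles("c1ccccc1O"), 2, nBits=1024)
print(DataStructs.TanimotoSimilarity(fp1, fp2))
```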

8.
Anal Chim Acta; 880: 32-41, 2015 Jun 23.
Article in English | MEDLINE | ID: mdl-26092335

ABSTRACT

Partial least squares (PLS) is one of the most widely used methods for chemical modeling. However, like many other tunable-parameter methods, it has a strong tendency to over-fit. Thus, a crucial step in building a PLS model is selecting the optimal number of latent variables (nLVs). Cross-validation (CV) is the most popular method for PLS model selection because it selects a model from the perspective of prediction ability. However, a clear minimum of the prediction error may not be obtained in CV, which makes model selection difficult. To solve this problem, we propose a new strategy for PLS model selection that combines the cross-validated coefficient of determination (Qcv(2)) and model stability (S). S is defined as the stability of the PLS regression vectors and is obtained using model population analysis (MPA). The results show that, when a clear maximum of Qcv(2) is not obtained, S provides additional information about over-fitting and helps in finding the optimal nLVs. Compared with other regression-vector-based indicators such as the Euclidean 2-norm (B2), the Durbin-Watson statistic (DW) and the jaggedness (J), S is more sensitive to over-fitting. The model selected by our method has both good prediction ability and good stability.
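A hedged sketch of the idea follows: for a given number of latent variables, compute the cross-validated Q2 and a stability measure from a population of subsampled PLS models. The stability used here (norm of the mean regression vector divided by the norm of its standard deviation across sub-models) is only one plausible choice; the paper's exact definition of S may differ.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def q2_and_stability(X, y, n_lv, n_resamples=100, frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    y_cv = cross_val_predict(PLSRegression(n_components=n_lv), X, y, cv=10).ravel()
    q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
    vectors = []
    for _ in range(n_resamples):                              # model population analysis
        idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
        vectors.append(PLSRegression(n_components=n_lv).fit(X[idx], y[idx]).coef_.ravel())
    V = np.vstack(vectors)
    stability = np.linalg.norm(V.mean(axis=0)) / (np.linalg.norm(V.std(axis=0)) + 1e-12)
    return q2, stability

# Scan candidate nLVs; prefer the smallest nLV where Q2 is near its maximum and stability stays high:
# for k in range(1, 16): print(k, q2_and_stability(X, y, k))
```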


Subject(s)
Algorithms; Models, Chemical; Least-Squares Analysis; Software; Glycine max/chemistry; Glycine max/metabolism; Spectrophotometry, Ultraviolet
9.
J Chromatogr A; 1393: 47-56, 2015 May 08.
Article in English | MEDLINE | ID: mdl-25818557

ABSTRACT

Solvent system selection is the first step toward a successful counter-current chromatography (CCC) separation. This paper introduces a systematic and practical solvent system selection strategy based on the nonrandom two-liquid segment activity coefficient (NRTL-SAC) model, which efficiently predicts solute partition coefficients. First, the application of the NRTL-SAC method was extended to the ethyl acetate/n-butanol/water and chloroform/methanol/water solvent system families, and its versatility and predictive capability were investigated. The results indicate that solute molecular parameters identified from the hexane/ethyl acetate/methanol/water solvent system family are capable of predicting a large number of partition coefficients in several other solvent system families. The NRTL-SAC strategy was further validated by successfully separating five components from Salvia plebeia R.Br. We therefore propose NRTL-SAC as a promising high-throughput method for rapid solvent system selection that is highly adaptable for screening suitable solvent systems for real-life CCC separations.
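The link between the activity coefficients that NRTL-SAC predicts and the partition coefficient follows from standard liquid-liquid equilibrium (a general thermodynamic relation, not something specific to this paper): equal chemical potentials of solute i in the upper (U) and lower (L) phases give, in mole-fraction terms (a concentration-based K differs by the ratio of phase molar volumes),

```latex
x_i^{U}\,\gamma_i^{U} = x_i^{L}\,\gamma_i^{L}
\qquad\Longrightarrow\qquad
K_i = \frac{x_i^{U}}{x_i^{L}} = \frac{\gamma_i^{L}}{\gamma_i^{U}}
```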


Subject(s)
Chromatography, High Pressure Liquid/methods; Countercurrent Distribution/methods; Solvents/chemistry; 1-Butanol/chemistry; Acetates/chemistry; Chloroform/chemistry; Hexanes/chemistry; Methanol/chemistry; Plant Extracts/chemistry; Salvia/chemistry; Water/chemistry
10.
Anal Chim Acta; 862: 14-23, 2015 Mar 03.
Article in English | MEDLINE | ID: mdl-25682424

ABSTRACT

Variable (wavelength or feature) selection techniques have become a critical step in the analysis of datasets with a high number of variables and relatively few samples. In this study, a novel variable selection strategy, variable combination population analysis (VCPA), is proposed. The strategy consists of two crucial procedures. First, an exponentially decreasing function (EDF), embodying the simple and effective 'survival of the fittest' principle from Darwin's theory of natural evolution, is employed to determine the number of variables to keep and to continuously shrink the variable space. Second, in each EDF run, a binary matrix sampling (BMS) strategy, which gives each variable the same chance of being selected and generates different variable combinations, is used to produce a population of variable subsets and the corresponding population of sub-models. Model population analysis (MPA) is then employed to find the variable subsets with the lowest root mean squared error of cross-validation (RMSECV). The frequency with which each variable appears in the best 10% of sub-models is computed; the higher the frequency, the more important the variable. The performance of the proposed procedure was investigated using three real NIR datasets. The results indicate that VCPA is a good variable selection strategy when compared with four high-performing variable selection methods: genetic algorithm-partial least squares (GA-PLS), Monte Carlo uninformative variable elimination by PLS (MC-UVE-PLS), competitive adaptive reweighted sampling (CARS) and iteratively retains informative variables (IRIV). The MATLAB source code of VCPA is available for academic research at http://www.mathworks.com/matlabcentral/fileexchange/authors/498750.
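The two procedures can be sketched as below (the exact EDF constants, sub-model counts and the 10% elite fraction are assumptions, not the published settings). Binary matrix sampling gives every active variable the same chance of appearing in a sub-model, and the frequency of a variable in the best sub-models decides whether it survives the next shrinkage step.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def rmsecv(X, y, n_components=5):
    pls = PLSRegression(n_components=min(n_components, X.shape[1]))
    return -cross_val_score(pls, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()

def vcpa_like(X, y, n_runs=20, n_sub=100, elite_frac=0.1, final_vars=10):
    n_vars = X.shape[1]
    active = np.arange(n_vars)
    ratio = (final_vars / n_vars) ** (1.0 / n_runs)          # exponentially decreasing function
    for i in range(1, n_runs + 1):
        keep = max(final_vars, int(round(n_vars * ratio ** i)))
        scores, masks = [], []
        for _ in range(n_sub):
            mask = np.random.rand(active.size) < 0.5         # binary matrix sampling
            if mask.sum() < 2:
                continue
            scores.append(rmsecv(X[:, active[mask]], y)); masks.append(mask)
        if not scores:
            break
        elite = np.argsort(scores)[: max(1, int(elite_frac * len(scores)))]
        freq = np.mean([masks[j] for j in elite], axis=0)    # importance = frequency in elite
        active = active[np.argsort(freq)[::-1][:keep]]       # shrink the variable space
        if active.size <= final_vars:
            break
    return active
```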


Subject(s)
Models, Statistical; Algorithms; Calibration; Least-Squares Analysis; Monte Carlo Method; Multivariate Analysis
11.
Analyst; 140(6): 1876-85, 2015 Mar 21.
Article in English | MEDLINE | ID: mdl-25665981

ABSTRACT

In this study, a new algorithm for wavelength interval selection, the interval variable iterative space shrinkage approach (iVISSA), is proposed based on the VISSA algorithm. It combines global and local searches to iteratively and intelligently optimize the locations, widths and combinations of the spectral intervals. In the global search procedure, it inherits the merit of soft shrinkage from VISSA to search for the locations and combinations of informative wavelengths, whereas in the local search procedure, it utilizes the continuity of spectroscopic data to determine the widths of the wavelength intervals. The global and local search procedures are carried out alternately to realize wavelength interval selection. The method was tested on three near-infrared (NIR) datasets. Several high-performing wavelength selection methods, such as synergy interval partial least squares (siPLS), moving window partial least squares (MW-PLS), competitive adaptive reweighted sampling (CARS), genetic algorithm PLS (GA-PLS) and interval random frog (iRF), were used for comparison. The results show that the proposed method is very promising, with good prediction capability and stability. The MATLAB codes for implementing iVISSA are freely available on the website: .
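The local search can be pictured with the toy routine below (the real iVISSA procedure alternates this with the global soft-shrinkage search, and the details here are assumptions): starting from one informative wavelength, the interval is widened to neighbouring channels only while the cross-validated error keeps improving, which is where the continuity of spectroscopic data is exploited.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def rmsecv(X, y, n_components=5):
    pls = PLSRegression(n_components=min(n_components, X.shape[1]))
    return -cross_val_score(pls, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()

def grow_interval(X, y, seed, max_width=50):
    """Widen a wavelength interval around channel `seed` while RMSECV improves."""
    lo = hi = seed
    best = rmsecv(X[:, lo:hi + 1], y)
    while (hi - lo) < max_width:
        candidates = [(max(lo - 1, 0), hi), (lo, min(hi + 1, X.shape[1] - 1))]
        errs = [rmsecv(X[:, a:b + 1], y) for a, b in candidates]
        k = int(np.argmin(errs))
        if errs[k] >= best:
            break                                   # widening no longer helps
        best, (lo, hi) = errs[k], candidates[k]
    return lo, hi, best
```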


Subject(s)
Algorithms; Spectroscopy, Near-Infrared/methods; Flour/analysis; Least-Squares Analysis; Glycine max/chemistry; Tablets/chemistry; Zea mays/chemistry
12.
Analyst; 139(19): 4836-45, 2014 Oct 07.
Article in English | MEDLINE | ID: mdl-25083512

ABSTRACT

In this study, a new optimization algorithm for variable selection, the Variable Iterative Space Shrinkage Approach (VISSA), is proposed based on the idea of model population analysis (MPA). Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space at each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are enforced during the optimization procedure: first, the variable space shrinks at each step; second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by existing methods, is the core of the VISSA strategy. Compared with promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retains informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The MATLAB code for implementing VISSA is freely available at https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
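The second rule can be made concrete with the short sketch below (sub-model counts, the elite fraction and other details are assumptions, and this is not the authors' MATLAB code): weighted binary matrix sampling produces sub-models, variable frequencies in the elite sub-models become the new weights, and an update is only accepted while the elite error does not get worse.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def elite_error_and_freq(X, y, weights, n_sub=300, elite_frac=0.05, n_components=5):
    scores, masks = [], []
    for _ in range(n_sub):
        mask = np.random.rand(weights.size) < weights        # weighted binary matrix sampling
        if mask.sum() < 2:
            continue
        pls = PLSRegression(n_components=min(n_components, int(mask.sum())))
        scores.append(-cross_val_score(pls, X[:, mask], y, cv=5,
                                       scoring="neg_root_mean_squared_error").mean())
        masks.append(mask)
    if not scores:
        return np.inf, weights
    elite = np.argsort(scores)[: max(1, int(elite_frac * len(scores)))]
    return (np.mean([scores[i] for i in elite]),
            np.mean([masks[i] for i in elite], axis=0))

def vissa_like(X, y, max_iter=30):
    weights = np.full(X.shape[1], 0.5)
    best_err = np.inf
    for _ in range(max_iter):
        new_err, freq = elite_error_and_freq(X, y, weights)
        if new_err > best_err:
            break                                  # rule 2: the new space must not be worse
        best_err, weights = new_err, freq          # rule 1: elite frequencies shrink the space
    return weights                                 # final inclusion weight of each variable
```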


Subject(s)
Algorithms; Gasoline/analysis; Models, Theoretical; Monte Carlo Method; Software; Soybean Oil/chemistry; Triticum/chemistry; Triticum/metabolism
13.
Article in English | MEDLINE | ID: mdl-21339535

ABSTRACT

Selecting a small number of informative genes for microarray-based tumor classification is central to cancer prediction and treatment. Based on model population analysis, here we present a new approach, called margin influence analysis (MIA), designed to work with support vector machines (SVM) to select informative genes. The rationale for performing margin influence analysis is that the margin of a support vector machine is an important factor underlying the generalization performance of SVM models. Briefly, MIA reveals genes that have a statistically significant influence on the margin, as determined by the Mann-Whitney U test. The Mann-Whitney U test is used rather than the two-sample t-test because it is a nonparametric, robust test that makes no distribution-related assumptions. Using two publicly available cancer microarray datasets, we demonstrate that MIA typically selects a small number of margin-influencing genes and achieves classification accuracy comparable to results reported in the literature. These distinguishing features and this performance may make MIA a good alternative for gene selection from high-dimensional microarray data. (The MATLAB source code, released under the GNU General Public License Version 2.0, is freely available at http://code.google.com/p/mia2009/.)
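A rough outline of such a procedure is sketched below (the gene-subset size, sub-model count and one-sided test direction are assumptions): train many linear-SVM sub-models on random gene subsets, record each model's margin, and for every gene compare the margins of sub-models that include it with those that exclude it using the Mann-Whitney U test.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.svm import SVC

def mia_like(X, y, n_models=1000, subset_size=50, alpha=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n_genes = X.shape[1]
    margins, included = [], []
    for _ in range(n_models):
        genes = rng.choice(n_genes, size=subset_size, replace=False)
        clf = SVC(kernel="linear", C=1.0).fit(X[:, genes], y)
        margins.append(2.0 / np.linalg.norm(clf.coef_))      # width of the SVM margin
        mask = np.zeros(n_genes, dtype=bool); mask[genes] = True
        included.append(mask)
    margins, included = np.asarray(margins), np.vstack(included)
    informative = []
    for g in range(n_genes):
        with_g, without_g = margins[included[:, g]], margins[~included[:, g]]
        if with_g.size and without_g.size:
            # one-sided test: does including gene g significantly enlarge the margin?
            if mannwhitneyu(with_g, without_g, alternative="greater").pvalue < alpha:
                informative.append(g)
    return informative
```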


Subject(s)
Gene Expression Profiling/methods; Support Vector Machine; Databases, Genetic; Genetics, Population; Humans; Oligonucleotide Array Sequence Analysis/methods