Results 1 - 14 of 14
1.
Entropy (Basel) ; 24(11)2022 Nov 10.
Article in English | MEDLINE | ID: mdl-36359723

ABSTRACT

Optimal transport is a mathematical tool that has been widely used to measure the distance between two probability distributions. To mitigate the cubic computational complexity of the vanilla formulation of the optimal transport problem, regularized optimal transport has received attention in recent years; it is a convex program that minimizes the linear transport cost plus an added convex regularizer. Sinkhorn optimal transport, regularized with the negative Shannon entropy, is the most prominent example, but it leads to densely supported solutions, which are often undesirable in light of the interpretability of transport plans. In this paper, we report that a deformed entropy designed by q-algebra, a popular generalization of the standard algebra studied in Tsallis statistical mechanics, yields sparsely supported optimal transport solutions. This entropy with a deformation parameter q interpolates between the negative Shannon entropy (q=1) and the squared 2-norm (q=0), and the solution becomes sparser as q tends to zero. Our theoretical analysis reveals that a larger q leads to faster convergence when optimized with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. In summary, the deformation induces a trade-off between sparsity and convergence speed.
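
The entropy-regularized problem described here reduces to simple alternating scaling (the Sinkhorn iterations); a minimal NumPy sketch of the standard q=1 case with illustrative inputs (the q-deformed variant and the BFGS solver from the paper are not reproduced):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Entropy-regularized OT between histograms a, b with cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # rescale to match column marginal b
        u = a / (K @ v)                  # rescale to match row marginal a
    return u[:, None] * K * v[None, :]   # transport plan

a = np.array([0.5, 0.5])                 # source histogram
b = np.array([0.5, 0.5])                 # target histogram
C = np.array([[0.0, 1.0], [1.0, 0.0]])   # pairwise transport costs
P = sinkhorn(a, b, C)
```

Note that every entry of the returned plan is strictly positive, illustrating the dense support that motivates the paper's sparsity-inducing deformation.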

2.
Parallel Comput ; 101, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33363295

ABSTRACT

Although first-order stochastic algorithms, such as stochastic gradient descent, have been the main force behind scaling up machine learning models such as deep neural nets, second-order quasi-Newton methods have started to draw attention due to their effectiveness in dealing with ill-conditioned optimization problems. The L-BFGS method is one of the most widely used quasi-Newton methods. We propose an asynchronous parallel algorithm for the stochastic quasi-Newton (AsySQN) method. Unlike prior attempts, which parallelize only the gradient calculation or the two-loop recursion of L-BFGS, our algorithm is the first that truly parallelizes L-BFGS with a convergence guarantee. By adopting a variance reduction technique, a prior stochastic L-BFGS method, which was not designed for parallel computing, achieves a linear convergence rate. We prove that our asynchronous parallel scheme maintains the same linear convergence rate while achieving significant speedup. Empirical evaluations on both simulations and benchmark datasets demonstrate the speedup over non-parallel stochastic L-BFGS, as well as better performance than first-order methods in solving ill-conditioned problems.
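
The two-loop recursion mentioned above (the part that prior work parallelized) is compact enough to sketch; a minimal NumPy version under the usual curvature assumption s·y > 0 for each stored pair:

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Return the L-BFGS search direction -H @ grad, where H approximates the
    inverse Hessian built from stored pairs s_k = x_{k+1}-x_k, y_k = g_{k+1}-g_k
    (oldest first). Assumes s.y > 0 for every stored pair."""
    q = grad.astype(float).copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for (s, y), rho in zip(reversed(list(zip(s_list, y_list))), reversed(rhos)):
        alpha = rho * np.dot(s, q)       # first loop: newest pair to oldest
        alphas.append(alpha)
        q -= alpha * y
    if s_list:                           # initial scaling from the newest pair
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y), rho, alpha in zip(zip(s_list, y_list), rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)        # second loop: oldest pair to newest
        r += (alpha - beta) * s
    return -r                            # descent direction

# With an empty memory this reduces to steepest descent
d = lbfgs_direction(np.array([1.0, 2.0]), [], [])
```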

3.
J Sep Sci ; 41(12): 2553-2558, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29577642

ABSTRACT

The transfer of thermodynamic parameters governing the retention of a molecule in gas chromatography from a reference column to a target column is a difficult problem. Successful transfer demands a mechanism whereby the geometries of both columns can be determined with high accuracy. This is the second part in a series of three papers. In Part I of this work, we introduced a new approach to determine the actual effective geometry of a reference column and the thermodynamic-based parameters of a suite of compounds on that column. Part II, presented here, illustrates the rapid estimation of the effective inner diameter (or length) and the effective phase ratio of a target column. The estimation model is based on the principle of least squares; a fast quasi-Newton optimization algorithm was developed to provide adequate computational speed. The model and optimization algorithm were tested and validated using simulated and experimental data. This study, together with the work in Parts I and III, demonstrates a method that improves the transferability of thermodynamic models of gas chromatography retention between columns.

4.
J Sep Sci ; 41(12): 2559-2564, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29582547

ABSTRACT

This is the third part of a three-part series of papers. In Part I, we presented a method for determining the actual effective geometry of a reference column as well as the thermodynamic-based parameters of a set of probe compounds in an in-house mixture. Part II introduced an approach for estimating the actual effective geometry of a target column by collecting retention data of the same mixture of probe compounds on the target column and using their thermodynamic parameters, acquired on the reference column, as a bridge between both systems. Part III, presented here, demonstrates the retention time transfer and prediction from the reference column to the target column using experimental data for a separate mixture of compounds. To predict the retention time of a new compound, we first estimate its thermodynamic-based parameters on the reference column (using geometric parameters determined previously). The compound's retention time on a second column (of previously determined geometry) is then predicted. The models and the associated optimization algorithms were tested using simulated and experimental data. The accuracy of predicted retention times shows that the proposed approach is simple, fast, and accurate for retention time transfer and prediction between gas chromatography columns.

5.
J Sep Sci ; 41(12): 2544-2552, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29579350

ABSTRACT

The transfer of retention times based on thermodynamic models between columns can aid in separation optimization and compound identification in gas chromatography. Although earlier investigations have been reported, this problem has not yet been successfully addressed. One barrier is poor predictive accuracy when moving from a reference column or system to a new target column or system, which is attributed to the difficulty of accurately determining the effective geometric parameters of the columns. To overcome this, we designed least-squares-based models that account for the geometric parameters of the columns and the thermodynamic parameters of compounds as they partition between the mobile and stationary phases. Quasi-Newton-based algorithms were then used to perform the numerical optimization. In this first of three parts, the model used to determine the geometric parameters of the reference column and the thermodynamic parameters of compounds subjected to separation is introduced. As will be shown, the overall approach significantly improves the predictive accuracy and transferability of thermodynamic data (and retention times) between columns of the same stationary-phase chemistry. The data required for the determination of the thermodynamic parameters and retention time prediction are obtained from fast and simple experiments. The proposed model and optimization algorithms were tested and validated using simulated and experimental data.
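
As an illustration of the least-squares/quasi-Newton machinery (the paper's actual retention model, column-geometry parameters, and data are more involved), a sketch that fits a hypothetical two-parameter van 't Hoff-style retention model with SciPy's BFGS:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical retention model: ln k = a + b / T (illustrative only)
rng = np.random.default_rng(0)
T = np.linspace(320.0, 420.0, 20)                          # temperatures (K)
a_true, b_true = -8.0, 4000.0
lnk = a_true + b_true / T + rng.normal(0.0, 0.01, T.size)  # simulated data

def sse(p):                                    # least-squares objective
    return np.sum((p[0] + p[1] / T - lnk) ** 2)

def sse_grad(p):                               # analytic gradient for BFGS
    r = p[0] + p[1] / T - lnk
    return np.array([2.0 * r.sum(), 2.0 * (r / T).sum()])

res = minimize(sse, x0=np.array([0.0, 1000.0]), jac=sse_grad, method="BFGS")
```

The fitted parameters recover the values used to simulate the data; in the papers, the analogous fit is performed jointly over geometric and thermodynamic parameters.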

6.
Eur J Mass Spectrom (Chichester) ; 23(1): 40-44, 2017 Feb.
Article in English | MEDLINE | ID: mdl-28657448

ABSTRACT

Analysis of the fragmentation pathways of molecules in mass spectrometry gives fundamental insight into gas-phase ion chemistry. However, the conventional intrinsic reaction coordinate method requires knowledge of the transition-state ion structures in the fragmentation pathways. Herein, we use the nudged elastic band method, which requires only the initial and final ion structures in the fragmentation pathways, and report the advantages and limitations of the method. We found a minimum energy path of p-benzoquinone ion fragmentation with two saddle points and one intermediate structure. The primary energy barrier, corresponding to the cleavage of the C-C bond adjacent to the CO group, was calculated to be 1.50 eV. An additional energy barrier, corresponding to the cleavage of the CO group, was calculated to be 0.68 eV. We also found an energy barrier of 3.00 eV, corresponding to the rate-determining step of the keto-enol tautomerization in CO elimination from the molecular ion of phenol. The nudged elastic band method allowed the determination of a minimum energy path using only the initial and final ion structures in the fragmentation pathways, and it was faster than the conventional intrinsic reaction coordinate method. In addition, the method was found to be effective in analyzing the charge structures of molecules during fragmentation in mass spectrometry.

7.
Sensors (Basel) ; 16(5)2016 04 28.
Article in English | MEDLINE | ID: mdl-27136551

ABSTRACT

This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps achieve better performance in sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better-focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

8.
Methods ; 62(1): 99-108, 2013 Jul 15.
Article in English | MEDLINE | ID: mdl-23726942

ABSTRACT

Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques, including whole genome sequencing and transcriptome analysis, have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high-quality data sets, but this difference was negligible on lower-quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluating gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort.
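
A minimal sketch of the local QN/NMS idea using SciPy alone (CMA-ES requires a third-party package, and the real fitness function scores a transcription model against expression data, so a stand-in ill-conditioned quadratic surface is used here):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in fitness surface; the paper's objective instead scores model
# output against measured expression patterns
A = np.diag([1.0, 30.0, 900.0])          # ill-conditioned curvature
fitness = lambda p: float(p @ A @ p)

p0 = np.array([1.0, 1.0, 1.0])
qn = minimize(fitness, p0, method="BFGS")                # quasi-Newton stage
hybrid = minimize(fitness, qn.x, method="Nelder-Mead")   # simplex polish
```

Because the Nelder-Mead stage starts its simplex at the quasi-Newton solution, the hybrid result can only match or improve on the first stage.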


Subjects
Algorithms , Drosophila melanogaster/genetics , Embryo, Nonmammalian/metabolism , Gene Expression Regulation, Developmental , Models, Genetic , Systems Biology/methods , Animals , Drosophila melanogaster/embryology , Drosophila melanogaster/metabolism , Embryo, Nonmammalian/cytology , Gene Expression Profiling , Humans , Thermodynamics , Transcription, Genetic
9.
Materials (Basel) ; 17(13)2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38998398

ABSTRACT

Due to their excellent mechanical properties, carbon fiber-reinforced polymer composites (CFRPs) of thermoplastic resins are widely used, and an accurate constitutive model plays a pivotal role in structural design and service safety. A two-parameter three-dimensional (3D) plastic potential was obtained by considering both the deviatoric deformation and the dilatation deformation associated with hydrostatic stress. The Langmuir function was first adopted to model the plastic hardening behavior of composites. The two-parameter 3D plastic potential, connected to the Langmuir function of plastic hardening, was thus proposed to model the constitutive behavior of the CFRPs of thermoplastic resins. Also, T700/PEEK specimens with different off-axis angles were subjected to tensile loading to obtain the fracture surface angles of the specimens and the load-displacement curves. The two unknown plastic parameters in the proposed 3D plastic potential were obtained using the quasi-Newton algorithm programmed in MATLAB, and the unknown hardening parameters in the Langmuir function were determined by fitting the effective stress-plastic strain curves at different off-axis angles. Meanwhile, the user material subroutine VUMAT, following the proposed constitutive model, was developed in terms of the maximum stress criterion for fiber failure and the LaRC05 criterion for matrix failure to simulate the 3D elastoplastic damage behavior of T700/PEEK. Finally, comparisons between the experimental tests and the numerical analysis showed fairly good agreement, which validates the proposed constitutive model.

10.
Front Comput Neurosci ; 16: 994161, 2022.
Article in English | MEDLINE | ID: mdl-36277611

ABSTRACT

This study describes the construction of a new algorithm in which image processing, together with two-step quasi-Newton methods, is used in biomedical image analysis. It is well known that medical informatics is an essential component of health care. Image processing and imaging technology are recent advances in medical informatics, which include image content representation, image interpretation, and image acquisition, and focus on image information in the medical field. For this purpose, an algorithm was developed based on an image processing method that uses principal component analysis to find the image value of a particular test function and then directs the function toward its best evaluation method. To validate the proposed algorithm, two functions, namely the modified trigonometric and Rosenbrock functions, were tested on the variable space.
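
The Rosenbrock test mentioned above is a standard quasi-Newton benchmark and can be reproduced with SciPy (the paper's two-step method and the modified trigonometric function are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# BFGS on the 2-D Rosenbrock function from the classical starting point
res = minimize(rosen, np.array([-1.2, 1.0]), jac=rosen_der, method="BFGS")
```

The global minimizer is at (1, 1) with objective value 0, so a successful run drives the function value to essentially zero.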

11.
J Inequal Appl ; 2017(1): 35, 2017.
Article in English | MEDLINE | ID: mdl-28216990

ABSTRACT

In this paper, an algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point has no restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to obtain the step length [Formula: see text]. The given nonmonotone line search technique avoids computing the Jacobian matrix. The global convergence and the [Formula: see text]-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
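
The two-stage structure (a cheap first-order warm start followed by a quasi-Newton solve) can be mimicked with SciPy; this sketch uses an illustrative 2x2 system, and SciPy's Broyden solver stands in for the paper's algorithm and its nonmonotone line search:

```python
import numpy as np
from scipy.optimize import minimize, root

def F(x):  # illustrative nonlinear system, not from the paper
    return np.array([x[0] ** 2 + x[1] - 2.0, x[0] - x[1] ** 2])

# Step (i): CG sub-algorithm on ||F||^2 gives a warm start; the starting
# point of this stage is unrestricted
warm = minimize(lambda x: np.sum(F(x) ** 2), np.zeros(2),
                method="CG", options={"maxiter": 20})

# Step (ii): quasi-Newton (Broyden) main solve from the warm start
sol = root(F, warm.x, method="broyden1")
```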

12.
Neural Netw ; 94: 239-254, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28806717

ABSTRACT

This paper investigates the construction of sparse radial basis function neural networks (RBFNNs) for classification problems. An efficient two-phase construction algorithm (abbreviated as TPCLR1 for simplicity) is proposed using L1 regularization. In the first phase, an improved maximum data coverage (IMDC) algorithm is presented for the initialization of RBF centers and widths. A specialized Orthant-Wise Limited-memory Quasi-Newton (sOWL-QN) method is then employed to perform simultaneous network pruning and parameter optimization in the second phase. The advantages of TPCLR1 are that better generalization performance is guaranteed with higher model sparsity, and the required storage space and testing time are much reduced. Moreover, only the regularization parameter and the maximum number of function evaluations need to be prescribed, making the entire construction procedure automatic. The learning algorithm is verified on several classification benchmarks with different levels of complexity. The experimental results show that an appropriate value of the regularization parameter is easy to find without costly cross-validation, and that the proposed TPCLR1 offers an efficient procedure for constructing sparse RBFNN classifiers with good generalization performance.
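
sOWL-QN itself is not available in SciPy, but the L1 term it handles can be reproduced with standard L-BFGS-B via the common nonnegative-splitting trick w = w+ - w-; a sketch on synthetic sparse regression data (all names and values illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, lam = 100, 10, 5.0
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]            # sparse ground truth
y = X @ w_true + rng.normal(0.0, 0.1, n)

def obj(z):
    wp, wm = z[:d], z[d:]                 # w = wp - wm with wp, wm >= 0
    r = X @ (wp - wm) - y
    g = X.T @ r
    value = 0.5 * r @ r + lam * (wp.sum() + wm.sum())  # L1 becomes linear
    return value, np.concatenate([g + lam, -g + lam])

res = minimize(obj, np.zeros(2 * d), jac=True, method="L-BFGS-B",
               bounds=[(0.0, None)] * (2 * d))
w_hat = res.x[:d] - res.x[d:]
```

The bound constraints let the quasi-Newton solver pin redundant coefficients exactly at zero, which is the pruning effect sOWL-QN exploits.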


Subjects
Machine Learning , Neural Networks, Computer , Classification/methods
13.
J Res Natl Inst Stand Technol ; 108(6): 413-27, 2003.
Article in English | MEDLINE | ID: mdl-27413619

ABSTRACT

Developing numerical methods for predicting microstructure in materials is a large and important research area. Two examples of material microstructures are Austenite and Martensite. Austenite is a microscopic phase with simple crystallographic structure while Martensite is one with a more complex structure. One important task in materials science is the development of numerical procedures which accurately predict microstructures in Martensite. In this paper we present a method for simulating material microstructure close to an Austenite-Martensite interface. The method combines a quasi-Newton optimization algorithm and a nonconforming finite element scheme that successfully minimizes an approximation to the total stored energy near the interface of interest. Preliminary results suggest that the minimizers of this energy functional located by the developed numerical algorithm appear to display the desired characteristics.

14.
Proc IEEE Int Conf Data Min ; 2009: 447-456, 2009 Dec 06.
Article in English | MEDLINE | ID: mdl-23616730

ABSTRACT

Temporal causal modeling can be used to recover the causal structure among a group of relevant time series variables. Several methods have been developed to explicitly construct temporal causal graphical models. However, how to best understand and conceptualize these complicated causal relationships is still an open problem. In this paper, we propose a decomposition approach to simplify the temporal graphical model. Our method clusters time series variables into groups such that strong interactions appear among the variables within each group and weak (or no) interactions exist for cross-group variable pairs. Specifically, we formulate the clustering problem for temporal graphical models as a regression-coefficient sparsification problem and define an interesting objective function which balances the model prediction power and its cluster structure. We introduce an iterative optimization approach utilizing the Quasi-Newton method and generalized ridge regression to minimize the objective function and to produce a clustered temporal graphical model. We also present a novel optimization procedure utilizing a graph theoretical tool based on the maximum weight independent set problem to speed up the Quasi-Newton method for a large number of variables. Finally, our detailed experimental study on both synthetic and real datasets demonstrates the effectiveness of our methods.
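
The generalized-ridge building block used inside the iterative optimization has a closed form; a minimal sketch with an illustrative per-coefficient penalty (the clustering objective and the Quasi-Newton outer loop are not reproduced):

```python
import numpy as np

def generalized_ridge(X, y, penalties):
    """Solve min_w ||X w - y||^2 + sum_j penalties[j] * w_j^2 in closed form."""
    return np.linalg.solve(X.T @ X + np.diag(penalties), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
w = np.array([1.0, -2.0, 0.0, 0.5])
y = X @ w + rng.normal(0.0, 0.05, 50)

# Penalize coefficient 2 heavily, e.g. to push a weak cross-group
# regression link toward zero while leaving the others nearly unbiased
w_hat = generalized_ridge(X, y, np.array([0.1, 0.1, 10.0, 0.1]))
```

Varying the penalty per coefficient is what lets the method sparsify cross-group regression links while keeping within-group interactions intact.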
