ABSTRACT
Consideration of the point group symmetry of molecules is often advantageous for computational efficiency and is sometimes necessary for the correct treatment of problems in chemical physics. Many modern electronic structure software packages include a treatment of symmetry, but these treatments are sometimes incomplete or unusable outside that program's environment. We have therefore developed the MolSym package for handling molecular symmetry and its associated functionality, providing a platform for including symmetry in the implementation and development of other methods. Features include point group detection, molecule symmetrization, arbitrary generation of symmetry element sets and character tables, and symmetry-adapted linear combinations of real spherical harmonic basis functions, Cartesian displacement coordinates, and internal coordinates. We present some of the advantages of using molecular symmetry as implemented in MolSym, particularly with respect to Hartree-Fock theory and the reduction of finite-difference displacements in gradient and Hessian computations. The package is designed to be easily integrated into other software development efforts and may be extended to further symmetry applications.
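The core test behind point group detection is checking whether a candidate symmetry operation maps a molecule onto itself. The sketch below illustrates that idea for water (C2v); the function name, geometry, and tolerance are illustrative choices and do not represent MolSym's actual API.

```python
# Minimal sketch: apply a candidate symmetry operation to Cartesian coordinates
# and test whether the transformed structure maps onto the original molecule.
import numpy as np

def operation_is_symmetry(symbols, coords, op, tol=1e-6):
    """Return True if the 3x3 operation `op` maps the molecule onto itself."""
    transformed = coords @ op.T
    unmatched = list(range(len(symbols)))
    for sym, xyz in zip(symbols, transformed):
        # Find an unmatched atom of the same element within `tol` of the image.
        match = next((j for j in unmatched
                      if symbols[j] == sym
                      and np.linalg.norm(coords[j] - xyz) < tol), None)
        if match is None:
            return False
        unmatched.remove(match)
    return True

# Water, centered so the C2 axis lies along z (coordinates in Angstroms).
symbols = ["O", "H", "H"]
coords = np.array([[0.0,  0.0000,  0.1173],
                   [0.0,  0.7572, -0.4692],
                   [0.0, -0.7572, -0.4692]])
c2_z = np.diag([-1.0, -1.0, 1.0])      # C2 rotation about z
sigma_xz = np.diag([1.0, -1.0, 1.0])   # reflection through the xz plane
print(operation_is_symmetry(symbols, coords, c2_z))      # True
print(operation_is_symmetry(symbols, coords, sigma_xz))  # True
```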
ABSTRACT
Multifidelity modeling is a technique for fusing information from two or more datasets into a single model. It is particularly advantageous when one dataset contains few accurate results and the other contains many less accurate results. In the context of modeling potential energy surfaces, the low-fidelity dataset can consist of a large number of inexpensive energy computations that adequately cover the N-dimensional space spanned by the molecular internal coordinates, while the high-fidelity dataset provides fewer but more accurate electronic energies for the molecule in question. Here, we compare the performance of several neural network-based approaches to multifidelity modeling. We show that all four methods (dual neural networks, Δ-learning, weight transfer, and Meng-Karniadakis neural networks) outperform a conventionally trained neural network given the same amount of training data. We also show that the Δ-learning approach is the most practical and tends to provide the most accurate model.
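As a concrete illustration of the Δ-learning idea described above, one network can be trained on the abundant low-fidelity energies while a second, smaller network learns the difference between the two fidelities at the few geometries where high-fidelity energies exist. The one-dimensional surrogate surfaces, network sizes, and data counts in the sketch below are placeholders, not the datasets or architectures used in this work.

```python
# Minimal Δ-learning sketch: baseline network on low-fidelity data plus a
# correction network trained on (high - low) differences at a few points.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def low_fidelity(r):   # cheap, systematically biased surrogate surface
    return np.sin(3.0 * r) + 0.3 * r

def high_fidelity(r):  # expensive "true" surface
    return np.sin(3.0 * r) + 0.3 * r**2

# Many low-fidelity points, few high-fidelity points.
r_lo = rng.uniform(0.0, 3.0, size=(500, 1))
r_hi = rng.uniform(0.0, 3.0, size=(20, 1))

base = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
base.fit(r_lo, low_fidelity(r_lo).ravel())

# Correction network learns the difference between fidelities at the same geometries.
delta = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
delta.fit(r_hi, (high_fidelity(r_hi) - low_fidelity(r_hi)).ravel())

# Predicted high-fidelity energy = baseline prediction + learned correction.
r_test = np.linspace(0.0, 3.0, 50).reshape(-1, 1)
e_pred = base.predict(r_test) + delta.predict(r_test)
rmse = np.sqrt(np.mean((e_pred - high_fidelity(r_test).ravel())**2))
print(f"RMSE vs. high-fidelity reference: {rmse:.4f}")
```

Because the correction is typically smoother and smaller in magnitude than the full surface, the Δ network can remain small and still be trained effectively on scarce high-fidelity data.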