ABSTRACT
Clean oxide surfaces are generally hydrophilic. Water molecules anchor at undercoordinated surface metal atoms, which act as Lewis acid sites, and are further stabilized by hydrogen bonds to undercoordinated surface oxygens. The large unit cell of In2O3(111) provides surface atoms in a variety of configurations, leading to chemical heterogeneity and local deviations from this general rule. Experiments (TPD, XPS, nc-AFM) agree quantitatively with DFT calculations and reveal a series of distinct phases. The first three water molecules dissociate at one specific region of the unit cell and desorb above room temperature. The next three adsorb molecularly in the adjacent region. Three more water molecules rearrange this structure, and an additional nine pile up above the OH groups. Although it offers undercoordinated In and O sites, the rest of the unit cell is unfavorable for adsorption and remains water-free. The first water layer thus orders into nanoscopic 3D water clusters separated by hydrophobic pockets.
ABSTRACT
Machine-learning techniques, specifically gradient-enhanced Kriging (GEK), have been implemented for molecular geometry optimization. GEK-based optimization has several advantages over conventional step-restricted methods based on a second-order truncated Taylor expansion. In particular, the surrogate model given by GEK can have multiple stationary points, converges smoothly to the exact model as the number of sample points increases, and provides an explicit expression for the expected error of the model function at an arbitrary point. Machine learning is, however, usually associated with an abundance of data, contrary to the situation desired for efficient geometry optimizations. In this paper, we demonstrate how the GEK procedure can be used so that, even in the presence of few data points, the surrogate surface robustly guides the optimization to a minimum of the potential energy surface. In this respect, the GEK procedure is used to mimic the behavior of a conventional second-order scheme while retaining the flexibility of the superior machine-learning approach. Moreover, the expected error is used in the optimizations to facilitate restricted-variance optimization. A procedure that relates the eigenvalues of the guessed approximate Hessian to the individual characteristic lengths used in the GEK model reduces the number of empirical parameters to optimize to two: the value of the trend function and the maximum allowed variance. These parameters are determined using the extended Baker (e-Baker) test suite and part of the Baker transition-state (Baker-TS) test suite as a training set. The resulting optimization procedure is tested on the e-Baker, full Baker-TS, and S22 test suites at the density functional theory and second-order Møller-Plesset levels of approximation.
The results show that the new method generally performs on par with or better than a state-of-the-art conventional method, even in cases where no significant improvement was expected.
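The core of the GEK surrogate described above is a Gaussian-process model conditioned on both energies and gradients, whose predictive variance supplies the "expected error" used for restricted-variance steps. The following is a minimal, illustrative one-dimensional sketch of that idea, not the authors' implementation: the function names, the squared-exponential correlation, the zero trend function, and the toy potential are all assumptions made for the example.

```python
import numpy as np

def sqexp(x1, x2, ell):
    """Squared-exponential correlation between two points."""
    return np.exp(-(x1 - x2) ** 2 / (2.0 * ell ** 2))

def gek_build(X, y, dy, ell=1.0, jitter=1e-10):
    """Assemble and invert the joint value/gradient covariance (1-D GEK).

    A zero trend function is assumed.  Returns (Kinv, alpha) so that the
    surrogate mean and variance can be evaluated at arbitrary points.
    """
    n = len(X)
    K = np.empty((2 * n, 2 * n))
    for i in range(n):
        for j in range(n):
            d = X[i] - X[j]
            k = sqexp(X[i], X[j], ell)
            K[i, j] = k                                        # cov(f_i, f_j)
            K[i, n + j] = d / ell**2 * k                       # cov(f_i, f'_j)
            K[n + i, j] = -d / ell**2 * k                      # cov(f'_i, f_j)
            K[n + i, n + j] = (1.0 / ell**2 - d**2 / ell**4) * k  # cov(f'_i, f'_j)
    K += jitter * np.eye(2 * n)       # small jitter for numerical stability
    Kinv = np.linalg.inv(K)
    alpha = Kinv @ np.concatenate([y, dy])
    return Kinv, alpha

def gek_eval(x, X, Kinv, alpha, ell=1.0):
    """Surrogate mean and predictive variance (expected error) at x."""
    n = len(X)
    kvec = np.empty(2 * n)
    for j in range(n):
        d = x - X[j]
        k = sqexp(x, X[j], ell)
        kvec[j] = k                      # cov(f(x), f_j)
        kvec[n + j] = d / ell**2 * k     # cov(f(x), f'_j)
    mean = kvec @ alpha
    var = sqexp(x, x, ell) - kvec @ Kinv @ kvec
    return mean, max(var, 0.0)

# Toy potential f(x) = (x - 1)^2 sampled at only two points with gradients
X = np.array([0.0, 2.0])
y = (X - 1.0) ** 2
dy = 2.0 * (X - 1.0)
Kinv, alpha = gek_build(X, y, dy)
m0, v0 = gek_eval(0.0, X, Kinv, alpha)   # reproduces the sampled energy, variance ~ 0
```

In a restricted-variance step, a minimizer of the surrogate mean would be accepted only while the predictive variance along the step stays below a maximum allowed value; otherwise the step is shortened and a new sample point is added, which is how the surrogate can guide an optimization robustly with few data points.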