Results 1 - 10 of 10
1.
Small; e2402616, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39031846

ABSTRACT

Hard carbon materials have shown promising potential for sodium-ion storage owing to their ability to accommodate larger sodium ions. For lithium-ion storage, however, the challenge lies in tuning the high lithiation plateau capacity, which governs the overall energy density. Here, hard carbon microspheres (HCM) are prepared by tailoring a cross-linked polysaccharide, establishing a comprehensive methodology for obtaining high-performance lithium-ion batteries (LIBs) with long plateau capacities. An "adsorption-intercalation" mechanism for lithium storage is revealed by combining in situ Raman characterization with ex situ nuclear magnetic resonance spectroscopy. The optimized HCM possesses a reduced defect content, enriched graphitic microcrystallites, and a low specific surface area, which are beneficial for fast lithium storage. As a result, HCM demonstrates a high reversible capacity of 537 mAh g-1 with a significant low-voltage plateau capacity ratio of 55%, a high initial Coulombic efficiency, and outstanding rate performance (152 mAh g-1 at 10 A g-1). Moreover, the full cell (HCM||LiCoO2) delivers outstanding fast-charging capability (4 min charge to 80% at 10 C) and an impressive energy density of 393 Wh kg-1. Additionally, 80% of the reversible capacity is retained at -40 °C with competitive cycling stability. This work provides in-depth insights into the rational design of hard carbon structures with extended low-voltage plateau capacity for high-energy LIBs.
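The fast-charging figure quoted above can be sanity-checked with simple C-rate arithmetic. The sketch below is an illustrative back-of-the-envelope calculation using only the values reported in the abstract; the reported 4 min to 80% is slightly better than the nominal linear estimate.

```python
# Back-of-the-envelope check of the fast-charging claim (values from the abstract).
# At a rate of C_RATE (in units of 1/h), a nominal full charge takes 60 / C_RATE minutes.

C_RATE = 10          # 10 C, as reported for the HCM||LiCoO2 full cell
TARGET_SOC = 0.80    # charge to 80% state of charge

full_charge_min = 60.0 / C_RATE                 # 6 minutes for a nominal full charge
time_to_target = full_charge_min * TARGET_SOC   # 4.8 minutes at linear scaling

print(f"Nominal full charge at {C_RATE} C: {full_charge_min:.1f} min")
print(f"Linear estimate of time to {TARGET_SOC:.0%}: {time_to_target:.1f} min")
```

The abstract's 4 min to 80% at 10 C therefore outperforms the naive constant-current estimate of 4.8 min.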

2.
Proc Natl Acad Sci U S A; 118(32), 2021 Aug 10.
Article in English | MEDLINE | ID: mdl-34341109

ABSTRACT

Unlike in crystalline atomic and ionic solids, texture development due to crystallographically preferred growth in colloidal crystals has been little studied. Here we investigate the underlying mechanisms of texture evolution in an evaporation-induced colloidal assembly process through experiments, modeling, and theoretical analysis. In this widely used approach to obtaining large-area colloidal crystals, the colloidal particles are driven to the meniscus by the evaporation of a solvent or matrix precursor solution, where they close-pack to form a face-centered cubic colloidal assembly. Via two-dimensional large-area crystallographic mapping, we show that the initial crystal orientation is dominated by the interaction of particles with the meniscus, resulting in the expected coalignment of the close-packed direction with the local meniscus geometry. Combining this with crystal-structure analysis at the single-particle level, we further reveal that, at a later stage of self-assembly, the colloidal crystal undergoes a gradual rotation facilitated by geometrically necessary dislocations (GNDs) and achieves a large-area uniform crystallographic orientation with the close-packed direction perpendicular to the meniscus and parallel to the growth direction. Classical slip analysis, finite-element-based mechanical simulation, computational colloidal assembly modeling, and continuum theory unequivocally show that these GNDs result from the tensile stress field along the meniscus direction caused by the constrained shrinkage of the colloidal crystal during drying. The generation of GNDs with specific slip systems within individual grains leads to crystallographic rotation that accommodates the mechanical stress. The mechanistic understanding reported here can be used to control the crystallographic features of colloidal assemblies and may provide further insight into crystallographically preferred growth in synthetic, biological, and geological crystals.

3.
Neural Comput; 33(4): 1005-1036, 2021 Mar 26.
Article in English | MEDLINE | ID: mdl-33513325

ABSTRACT

A new network with super approximation power is introduced. This network is built with either the floor function (⌊x⌋) or ReLU (max{0, x}) as the activation function in each neuron; hence, we call such networks Floor-ReLU networks. For any hyperparameters N ∈ N+ and L ∈ N+, we show that Floor-ReLU networks with width max{d, 5N + 13} and depth 64dL + 3 can uniformly approximate a Hölder function f on [0,1]^d with an approximation error 3λ(√d)^α N^{-αL}, where α ∈ (0,1] and λ are the Hölder order and constant, respectively. More generally, for an arbitrary continuous function f on [0,1]^d with a modulus of continuity ω_f(·), the constructive approximation rate is ω_f(√d N^{-L}) + 2ω_f(√d) N^{-L}. As a consequence, this new class of networks overcomes the curse of dimensionality in approximation power when the variation of ω_f(r) as r → 0 is moderate (e.g., ω_f(r) ≲ r^α for Hölder continuous functions), since the major term in our approximation rate is essentially √d times a function of N and L independent of d within the modulus of continuity.
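The Hölder-case bound above decays exponentially in the depth parameter L. A minimal numerical illustration (not from the paper; the sample values of d, N, L and the Hölder parameters are arbitrary):

```python
import math

# Evaluate the Floor-ReLU approximation bound for Hölder functions:
#   error <= 3 * lam * (sqrt(d))**alpha * N**(-alpha * L)
# lam (Hölder constant) and alpha (Hölder order) are illustrative choices.

def floor_relu_bound(d, N, L, alpha=1.0, lam=1.0):
    return 3 * lam * math.sqrt(d) ** alpha * N ** (-alpha * L)

# For fixed width parameter N, the bound shrinks geometrically as depth L grows:
for L in (1, 2, 3):
    print(f"d=100, N=5, L={L}: bound = {floor_relu_bound(100, 5, L):.3e}")
```

Note that the dimension d enters only through the mild factor (√d)^α, which is the sense in which the curse of dimensionality is avoided.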

4.
Sci Rep; 13(1): 15254, 2023 Sep 14.
Article in English | MEDLINE | ID: mdl-37709820

ABSTRACT

The large-scale simulation of dynamical systems is critical in numerous scientific and engineering disciplines. However, traditional numerical solvers are limited by their choice of step size when estimating integrals, resulting in a trade-off between accuracy and computational efficiency. To address this challenge, we introduce a deep learning-based corrector called Neural Vector (NeurVec), which compensates for integration errors and enables larger time steps in simulations. Extensive experiments on a variety of complex dynamical-system benchmarks demonstrate that NeurVec exhibits remarkable generalization on a continuous phase space, even when trained with limited, discrete data. NeurVec significantly accelerates traditional solvers, achieving speedups of tens to hundreds of times while maintaining high accuracy and stability. Moreover, NeurVec's simple yet effective design, combined with its ease of implementation, has the potential to establish a new paradigm for fast differential-equation solvers based on deep learning.
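The corrector idea can be sketched in a few lines: take a coarse explicit-Euler step and add a learned correction for the local truncation error. This is only an illustration of the scheme's structure, not the paper's implementation; the actual NeurVec uses a trained neural network, whereas here a closed-form stand-in plays that role for a linear ODE where the ideal correction is known.

```python
import numpy as np

# Sketch of the corrector scheme: x_{k+1} = x_k + h*f(x_k) + corrector(x_k, h).
# The "corrector" below is a closed-form stand-in for a trained network.

def f(x):
    return -x  # simple linear ODE dx/dt = -x, exact flow x(t) = x0 * exp(-t)

def euler_step(x, h):
    return x + h * f(x)

def corrected_step(x, h, corrector):
    # corrector(x, h) approximates the local truncation error of the Euler step
    return x + h * f(x) + corrector(x, h)

# For dx/dt = -x the exact one-step map is x * exp(-h), so the ideal corrector
# is exp(-h)*x - (x - h*x); a trained network would approximate this from data.
ideal_corrector = lambda x, h: x * (np.exp(-h) - 1.0) + h * x

h = 0.5   # deliberately large step, where plain Euler is inaccurate
x0 = 1.0
exact = x0 * np.exp(-h)
print("Euler error:    ", abs(euler_step(x0, h) - exact))
print("Corrected error:", abs(corrected_step(x0, h, ideal_corrector) - exact))
```

With the correction term, the large step size becomes usable, which is the mechanism behind the reported speedups.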

5.
Neural Netw; 154: 152-164, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35882083

ABSTRACT

In this work we establish approximation results for deep neural networks approximating smooth functions, with errors measured in Sobolev norms, motivated by recent developments in numerical solvers for partial differential equations based on deep neural networks. Our approximation results are nonasymptotic in the sense that the error bounds are characterized explicitly in terms of both the width and the depth of the networks simultaneously, with all involved constants explicitly determined. Namely, for f ∈ C^s([0,1]^d), we show that deep ReLU networks of width O(N log N) and depth O(L log L) can achieve a nonasymptotic approximation rate of O(N^{-2(s-1)/d} L^{-2(s-1)/d}) with respect to the W^{1,p}([0,1]^d) norm for p ∈ [1,∞). If either the ReLU function or its square is used as the activation function to construct deep neural networks of width O(N log N) and depth O(L log L) to approximate f ∈ C^s([0,1]^d), the approximation rate is O(N^{-2(s-n)/d} L^{-2(s-n)/d}) with respect to the W^{n,p}([0,1]^d) norm for p ∈ [1,∞).

6.
J Chem Phys; 134(6): 064107, 2011 Feb 14.
Article in English | MEDLINE | ID: mdl-21322661

ABSTRACT

We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) builds on GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume-integral expression. The algorithm combines a cutoff scheme for computing the effective Born radii with a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm reproduces the vdW-surface-based Poisson solvation energy with an average relative error of less than 0.6% while providing almost linear-scaling computation for a representative set of 25 proteins of different sizes (from 2815 to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation implemented in the original GBr6 model. Our tGB method thus provides an efficient way to perform implicit-solvent GB simulations of larger biomolecular systems at longer time scales.
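The motivation for cutoff and treecode schemes is that a direct pairwise sum scales as O(N^2). The toy sketch below (not the paper's tGB code; random points and charges, arbitrary cutoff) contrasts a direct pair sum with a distance-cutoff sum, the same idea tGB applies to the effective Born radii; the treecode part additionally approximates far-field contributions hierarchically rather than discarding them.

```python
import numpy as np

# Direct O(N^2) pairwise sum vs. a distance-cutoff variant, on synthetic data.

rng = np.random.default_rng(0)
N = 500
pos = rng.uniform(0.0, 50.0, size=(N, 3))   # random "atom" positions
q = rng.uniform(-1.0, 1.0, size=N)          # random partial charges

def direct_pair_energy(pos, q):
    # Sum of q_i * q_j / r_ij over all pairs (0.5 corrects double counting)
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(r, np.inf)               # exclude self-interaction
    return 0.5 * (q[:, None] * q[None, :] / r).sum()

def cutoff_pair_energy(pos, q, rc=20.0):
    # Same sum, but pairs beyond the cutoff rc are discarded
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(r, np.inf)
    mask = r < rc
    return 0.5 * np.where(mask, q[:, None] * q[None, :] / r, 0.0).sum()

e_full = direct_pair_energy(pos, q)
e_cut = cutoff_pair_energy(pos, q)
print(f"direct: {e_full:.4f}   cutoff: {e_cut:.4f}")
```

A plain cutoff trades accuracy for speed; the treecode recovers most of the discarded far-field interactions at O(N log N) cost, which is why tGB stays within 0.6% of the Poisson reference.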


Subject(s)
Algorithms, Static Electricity, Molecular Dynamics Simulation, Solvents/chemistry, Thermodynamics
7.
Neural Netw; 141: 160-173, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33906082

ABSTRACT

A three-hidden-layer neural network with super approximation power is introduced. This network is built with the floor function (⌊x⌋), the exponential function (2^x), the step function (1_{x≥0}), or their compositions as the activation function in each neuron, and hence we call such networks Floor-Exponential-Step (FLES) networks. For any width hyperparameter N ∈ N+, it is shown that FLES networks with width max{d, N} and three hidden layers can uniformly approximate a Hölder continuous function f on [0,1]^d with an exponential approximation rate 3λ(2√d)^α 2^{-αN}, where α ∈ (0,1] and λ > 0 are the Hölder order and constant, respectively. More generally, for an arbitrary continuous function f on [0,1]^d with a modulus of continuity ω_f(·), the constructive approximation rate is 2ω_f(2√d)2^{-N} + ω_f(2√d 2^{-N}). Moreover, we extend this result to general bounded continuous functions on a bounded set E ⊆ R^d. As a consequence, this new class of networks overcomes the curse of dimensionality in approximation power when the variation of ω_f(r) as r → 0 is moderate (e.g., ω_f(r) ≲ r^α for Hölder continuous functions), since the major term in our approximation rate is essentially √d times a function of N independent of d within the modulus of continuity. Finally, we extend our analysis to derive similar approximation results in the L^p norm for p ∈ [1,∞) by replacing the Floor-Exponential-Step activation functions with continuous ones.
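The three FLES activations named above can be written out directly. This is only a sanity check of the activations themselves; composing them inside the fixed three-hidden-layer architecture is what the paper's construction does.

```python
import math

# The three FLES activation functions from the abstract.
floor_act = math.floor                          # floor: x -> ⌊x⌋
exp_act = lambda x: 2.0 ** x                    # exponential: x -> 2^x
step_act = lambda x: 1.0 if x >= 0 else 0.0    # step: x -> 1_{x >= 0}

for x in (-1.5, 0.0, 2.3):
    print(x, floor_act(x), exp_act(x), step_act(x))
```

The discontinuous floor and step activations are what allow bit-extraction-style constructions with exponential rates, at the cost of networks that are not trainable by gradient descent.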


Subject(s)
Neural Networks (Computer), Deep Learning
8.
Neural Netw; 129: 1-6, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32473577

ABSTRACT

We prove a theorem concerning the approximation of multivariate functions by deep ReLU networks for which the curse of dimensionality is lessened. Our theorem is based on a constructive proof of the Kolmogorov-Arnold superposition theorem and on a subset of multivariate continuous functions whose outer superposition functions can be efficiently approximated by deep ReLU networks.


Subject(s)
Neural Networks (Computer), Multivariate Analysis
9.
Neural Netw; 119: 74-84, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31401528

ABSTRACT

Given a function dictionary D and an approximation budget N ∈ N, nonlinear approximation seeks the linear combination of the best N terms [Formula: see text] to approximate a given function f with the minimum approximation error [Formula: see text]. Motivated by the recent success of deep learning, we propose dictionaries with functions in the form of compositions, i.e., [Formula: see text] for all T ∈ D, and implement T using ReLU feed-forward neural networks (FNNs) with L hidden layers. We further quantify the improvement of the best N-term approximation rate in terms of N when L is increased from 1 to 2 or 3, to show the power of compositions. In the case L > 3, our analysis shows that increasing L cannot improve the approximation rate in terms of N. In particular, for any function f on [0,1], regardless of its smoothness or even its continuity, if f can be approximated using a dictionary with L = 1 at the best N-term approximation rate ε_{L,f} = O(N^{-η}), we show that dictionaries with L = 2 can improve the best N-term approximation rate to ε_{L,f} = O(N^{-2η}). We also show that for Hölder continuous functions of order α on [0,1]^d, applying a dictionary with L = 3 in nonlinear approximation achieves an essentially tight best N-term approximation rate ε_{L,f} = O(N^{-2α/d}). Finally, we show that dictionaries consisting of wide FNNs with a few hidden layers are more attractive in terms of computational efficiency than dictionaries with narrow and very deep FNNs for approximating Hölder continuous functions, provided the number of computer cores exceeds N in parallel computing.


Subject(s)
Neural Networks (Computer), Nonlinear Dynamics, Humans
10.
IEEE Trans Image Process; 26(1): 160-171, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28113181

ABSTRACT

We address the removal of canvas artifacts from high-resolution digital photographs and X-ray images of paintings on canvas. Both imaging modalities are common investigative tools in art history and art conservation. Canvas artifacts manifest themselves very differently depending on the acquisition modality, and they can hamper the visual reading of the painting by art experts, for instance when preparing a restoration campaign. Computer-aided canvas removal is desirable for restorers when the painting they are preparing to restore has, over the years, acquired a much more salient canvas texture. We propose a new algorithm that combines a cartoon-texture decomposition method with adaptive multiscale thresholding in the frequency domain to isolate and suppress the canvas components. To illustrate the strength of the proposed method, we provide various examples, for acquisitions in both imaging modalities, of paintings with different types of canvas and from different periods. The proposed algorithm outperforms methods previously proposed for visual photographs, such as morphological component analysis and Wiener filtering, and it also works for the digital removal of canvas artifacts in X-ray images.
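The core intuition, that a periodic weave concentrates in a few Fourier peaks that can be suppressed, can be demonstrated on synthetic data. This is a deliberately simplified toy, not the paper's algorithm (which uses cartoon-texture decomposition and adaptive multiscale thresholding); the image sizes, weave period, and thresholds are arbitrary.

```python
import numpy as np

# Synthetic "painting" (a smooth blob) plus a periodic weave pattern, cleaned
# by zeroing the strongest off-center Fourier peaks.

n = 128
yy, xx = np.mgrid[0:n, 0:n]
painting = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 800.0)  # smooth content
weave = 0.3 * (np.sin(2 * np.pi * xx / 8) + np.sin(2 * np.pi * yy / 8))
image = painting + weave

F = np.fft.fftshift(np.fft.fft2(image))
mag = np.abs(F)
center = np.zeros_like(mag, dtype=bool)
center[n // 2 - 4: n // 2 + 5, n // 2 - 4: n // 2 + 5] = True  # protect low freqs
peaks = (mag > 0.2 * mag.max()) & ~center                      # weave peaks
F[peaks] = 0.0
cleaned = np.real(np.fft.ifft2(np.fft.ifftshift(F)))

err_before = np.abs(image - painting).mean()
err_after = np.abs(cleaned - painting).mean()
print(f"mean deviation from painting: before {err_before:.3f}, after {err_after:.3f}")
```

On real paintings the weave is neither perfectly periodic nor globally uniform, which is why the published method needs the adaptive, multiscale machinery rather than a single global threshold.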
