1.
Langmuir; 40(13): 7087-7094, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38511875

ABSTRACT

Graphene is widely used as an electrode material in electronic and optoelectronic devices. The work function, one of its fundamental intrinsic characteristics, directly affects the interfacial properties of the electrodes and thereby the performance of the devices. Much work has been done to tune the work function of graphene and expand its range of applications, and doping has been demonstrated to be an effective method. However, the large number of possible doped-graphene configurations makes investigating their work functions time-consuming and labor-intensive. To obtain the structure-property relationship quickly, this study employs a deep learning method to predict the work function. Specifically, a data set of over 30,000 boron-doped graphene configurations, covering different doping concentrations and positions, was established with work functions computed by ab initio density functional theory calculations. A novel fusion model (GT-Net) combining transformers and graph neural networks (GNNs) was then proposed, and improved, more effective GNN-based descriptors were developed. Finally, three different GNN methods were compared; the results show that the proposed method accurately predicts the work function, with R² = 0.975 and RMSE = 0.027. This study not only opens the possibility of designing materials with specific properties at the atomic level but also demonstrates the performance of GNNs on graph-level tasks that share the same graph structure and atomic number.
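As a concrete illustration, the sketch below shows one plausible way to fuse a GNN branch with a transformer branch for graph-level regression of a scalar such as the work function. The abstract does not disclose GT-Net's layers, so the GCNConv backbone, the layer sizes, the pooling scheme, and the fusion by concatenation are assumptions made here for illustration only (PyTorch + PyTorch Geometric).

```python
# Minimal sketch of a GNN + transformer fusion for graph-level regression,
# loosely inspired by the GT-Net idea above. All architectural choices here
# are assumptions, not the authors' published model.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool
from torch_geometric.utils import to_dense_batch


class GTNetSketch(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64, heads: int = 4):
        super().__init__()
        # GNN branch: two graph convolutions over the doped-graphene lattice.
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        # Transformer branch: self-attention over the node embeddings.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Regression head predicting a single scalar (the work function).
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                  nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        # Pool the GNN branch per graph.
        g_gnn = global_mean_pool(h, batch)
        # Pad node embeddings to (num_graphs, max_nodes, hidden) for attention.
        dense, mask = to_dense_batch(h, batch)
        attn = self.encoder(dense, src_key_padding_mask=~mask)
        # Masked mean over the valid nodes of each graph.
        g_attn = (attn * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)
        return self.head(torch.cat([g_gnn, g_attn], dim=-1)).squeeze(-1)
```

A training loop would typically fit such a model with an MSE loss between the predicted and DFT-computed work functions.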

2.
Neural Netw; 160: 164-174, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36657330

ABSTRACT

Existing face super-resolution methods depend on deep convolutional networks (DCNs) to recover high-quality reconstructed images. They either acquire information in a single space by designing complex models for direct reconstruction, or employ additional networks to extract multiple kinds of prior information that enhance the feature representation. However, existing methods still struggle to perform well because they cannot learn complete and uniform representations. To this end, we propose a self-attention learning network (SLNet) for three-stage face super-resolution, which fully explores the interdependence of the low- and high-level spaces so that the information used for reconstruction is mutually compensated. First, SLNet uses a hierarchical feature learning framework to obtain shallow information in the low-level space. Then, this shallow information, which accumulates errors through the DCN, is refined under high-resolution (HR) supervision, yielding an intermediate reconstruction result and a strong intermediate benchmark. Finally, the improved feature representation is further enhanced in the high-level space by a multi-scale context-aware encoder-decoder for facial reconstruction. The features in both spaces are thus explored progressively, from coarse to fine reconstruction information. Experimental results show that SLNet achieves competitive performance compared with state-of-the-art methods.
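As a rough illustration of the three-stage idea, the sketch below wires together a shallow feature extractor, a refinement step with an intermediate HR-supervised output, and a small multi-scale encoder-decoder. The channel counts, the bicubic pre-upsampling, and the U-Net-style encoder-decoder are assumptions for illustration, not SLNet's actual architecture (PyTorch).

```python
# Minimal sketch of a three-stage face super-resolution pipeline in the
# spirit of the SLNet description above. Every layer choice here is an
# assumption for illustration, not the authors' published model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ThreeStageSRSketch(nn.Module):
    def __init__(self, ch: int = 64, scale: int = 4):
        super().__init__()
        self.scale = scale
        # Stage 1: hierarchical shallow feature learning in the low-level space.
        self.shallow = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        # Stage 2: refinement producing an intermediate reconstruction that
        # can be supervised against the HR ground truth.
        self.refine = nn.Conv2d(ch, ch, 3, padding=1)
        self.to_rgb_mid = nn.Conv2d(ch, 3, 3, padding=1)
        # Stage 3: multi-scale encoder-decoder enhancing features in the
        # high-level space before the final reconstruction.
        self.down = nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1)
        self.to_rgb_out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, lr):
        # Work at HR resolution (assumed even) so both outputs share a size.
        x = F.interpolate(lr, scale_factor=self.scale, mode='bicubic',
                          align_corners=False)
        feat = self.shallow(x)                      # stage 1
        feat = F.relu(self.refine(feat)) + feat     # stage 2: residual refine
        sr_mid = self.to_rgb_mid(feat)              # intermediate result
        enc = F.relu(self.down(feat))               # stage 3: encode
        dec = F.relu(self.up(enc)) + feat           #          decode + skip
        sr_out = self.to_rgb_out(dec)
        return sr_mid, sr_out                       # both can be supervised
```

In training, one would typically apply a reconstruction loss to both sr_mid and sr_out against the HR ground truth, mirroring the intermediate supervision described in the abstract.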


Subjects
Deep Learning, Learning, Benchmarking, Attention, Image Processing, Computer-Assisted