Results 1 - 3 of 3
1.
J Am Chem Soc; 140(34): 10785-10793, 2018 Aug 29.
Article in English | MEDLINE | ID: mdl-30086638

ABSTRACT

High-throughput (HTP) materials design is an emerging field and has proved powerful for predicting novel functional materials. In this work, an HTP screening of thermoelectric chalcogenides with diamond-like structures was carried out on the newly established Materials Informatics Platform (MIP). Specifically, the relaxation time is evaluated by a reliable yet efficient method, which greatly improves the accuracy of HTP electrical-transport calculations. The results show that all the compounds may reach power factors over 10 µW/(cm·K²) if fully optimized. A new series of diamond-like chalcogenides with an atomic ratio of 1:2:4 exhibits relatively high electrical transport properties among all the compounds investigated. One particular compound, CdIn2Te4, and its variants have been verified experimentally with a peak ZT over 1.0. Further analysis reveals general conductive networks and similar Pisarenko relations under the same anion sublattice, and the transport distribution function is found to be a good indicator of the power factor for the compounds investigated. This work demonstrates a successful case study in HTP materials screening.
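As a back-of-the-envelope illustration of the screening criterion in this abstract: the power factor is PF = S²σ (Seebeck coefficient squared times electrical conductivity), and a candidate passes when PF exceeds 10 µW/(cm·K²). The sketch below is not the MIP pipeline; the compound names and transport values are hypothetical placeholders chosen only to show the unit handling.

```python
# Illustrative high-throughput screening filter on the power factor
# PF = S^2 * sigma. All compound names and transport values below are
# hypothetical placeholders, not results from the paper or the MIP.

CANDIDATES = [
    # (name, Seebeck coefficient S in V/K, conductivity sigma in S/cm)
    ("compound_A", 200e-6, 400.0),
    ("compound_B", 150e-6, 250.0),
    ("compound_C", 300e-6, 120.0),
]

THRESHOLD = 10.0  # screening cutoff in uW/(cm*K^2), from the abstract

for name, seebeck, sigma in CANDIDATES:
    # S^2 * sigma is in W/(cm*K^2); convert to microwatts.
    pf = seebeck ** 2 * sigma * 1e6
    verdict = "keep" if pf >= THRESHOLD else "drop"
    print(f"{name}: PF = {pf:.1f} uW/(cm*K^2) -> {verdict}")
```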

2.
PLoS One; 13(3): e0193919, 2018.
Article in English | MEDLINE | ID: mdl-29513748

ABSTRACT

Textual representations play an important role in natural language processing (NLP). The efficiency of NLP tasks such as text comprehension and information extraction can be significantly improved with proper textual representations. As neural networks have been applied to learning representations of words and phrases, fairly efficient models for learning short-text representations have been developed, such as the continuous bag-of-words (CBOW) and skip-gram models, and they have been extensively employed in a variety of NLP tasks. Because longer texts, such as sentences, have more complex structure, algorithms suited to learning short textual representations do not carry over to long ones. One method for learning long textual representations is the Long Short-Term Memory (LSTM) network, which is suitable for processing sequences. However, the standard LSTM does not adequately capture the primary sentence structure (subject, predicate, and object), which is an important factor in producing good sentence representations. To resolve this issue, this paper proposes the dependency-based LSTM model (D-LSTM). The D-LSTM divides a sentence representation into two parts: a basic component and a supporting component. It uses a pre-trained dependency parser to extract the primary sentence information and generate the supporting component, and a standard LSTM to generate the basic component. A weight factor that adjusts the ratio of the basic and supporting components is introduced to generate the final sentence representation. Compared with the representation learned by a standard LSTM, the sentence representation learned by the D-LSTM contains more useful information. Experimental results show that the D-LSTM outperforms the standard LSTM on the Sentences Involving Compositional Knowledge (SICK) dataset.
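A minimal PyTorch sketch of the two-component idea described in this abstract: a "basic" sentence vector from a standard LSTM over the full word sequence, a "supporting" vector from the dependency-selected core words (subject, predicate, object), and a weight factor mixing the two. The layer sizes, the fixed alpha, and the use of final hidden states below are assumptions for illustration, not the paper's implementation; core_ids is assumed to come from a pre-trained dependency parser.

```python
# Sketch of a D-LSTM-style sentence encoder: weighted mix of a basic
# component (full sentence) and a supporting component (core S/P/O words).
import torch
import torch.nn as nn

class DLSTMSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, alpha=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.basic_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.support_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.alpha = alpha  # weight factor: ratio of basic to supporting

    def forward(self, token_ids, core_ids):
        # token_ids: (batch, seq_len) full sentence
        # core_ids:  (batch, core_len) subject/predicate/object tokens,
        #            assumed to be extracted by a dependency parser
        _, (basic_h, _) = self.basic_lstm(self.embed(token_ids))
        _, (support_h, _) = self.support_lstm(self.embed(core_ids))
        # Weighted combination of the final hidden states of both parts.
        return self.alpha * basic_h[-1] + (1 - self.alpha) * support_h[-1]

model = DLSTMSketch(vocab_size=10000)
sentence = torch.randint(0, 10000, (2, 12))  # toy batch of 2 sentences
core = torch.randint(0, 10000, (2, 3))       # toy S/P/O triples
print(model(sentence, core).shape)           # torch.Size([2, 128])
```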


Subjects
Natural Language Processing , Neural Networks, Computer , Linguistics , Machine Learning , Models, Theoretical , Semantics
3.
PLoS One; 13(12): e0208785, 2018.
Article in English | MEDLINE | ID: mdl-30532197

ABSTRACT

Text representation maps text into a vector space for subsequent numerical calculation and processing. Word embedding is an important component of text representation. Most existing word embedding models focus on the written form and use context, weights, dependencies, morphology, etc., to optimize training. From a linguistic point of view, however, spoken language is the more direct expression of semantics; writing has meaning only as a record of speech. This paper therefore proposes a pronunciation-enhanced word embedding model (PWE) that integrates speech information into training so that both speech and writing contribute to meaning. Using Chinese, English, and Spanish as examples, it presents several models that integrate word pronunciation characteristics into word embedding. Word-similarity and text-classification experiments show that the PWE outperforms baseline models that do not include speech information. Language is a storehouse of sound-images; the PWE can therefore be applied to most languages.
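To make the integration concrete, the sketch below combines a writing-based word vector with a pronunciation vector built from the word's phoneme string, which is one plausible reading of the PWE idea. The toy lexicon, dimensions, and concatenation strategy are hypothetical, and the random vectors stand in for embeddings that would be trained jointly; the paper presents several integration variants.

```python
# Sketch of a pronunciation-enhanced word vector: concatenate a writing
# embedding with the mean of the word's phoneme embeddings. Lexicon,
# phoneme inventory, and dimensions are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 50

# Hypothetical toy lexicon: written form -> phoneme string.
lexicon = {"mother": "m-ah-dh-er", "horse": "h-ao-r-s"}

# Random stand-ins for trained embeddings.
word_vecs = {w: rng.standard_normal(EMBED_DIM) for w in lexicon}
phone_vocab = {p for pron in lexicon.values() for p in pron.split("-")}
phone_vecs = {p: rng.standard_normal(EMBED_DIM) for p in phone_vocab}

def pwe_vector(word):
    """Concatenate the writing embedding with the mean phoneme embedding."""
    phones = lexicon[word].split("-")
    pron_vec = np.mean([phone_vecs[p] for p in phones], axis=0)
    return np.concatenate([word_vecs[word], pron_vec])

print(pwe_vector("mother").shape)  # (100,): writing half + speech half
```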


Subjects
Phonetics , Semantics , Speech , Vocabulary , Writing , Algorithms , Humans , Models, Theoretical , Natural Language Processing