Results 1 - 3 of 3
1.
Front Neurorobot; 16: 1068274, 2022.
Article in English | MEDLINE | ID: mdl-36531919

ABSTRACT

In human-robot collaboration scenarios with shared workspaces, a highly desired performance boost is offset by strict requirements for human safety, limiting the speed and torque of the robot drives to levels that cannot harm the human body. Especially for complex tasks with flexible human behavior, it becomes vital to maintain safe working distances and coordinate tasks efficiently. An established approach in this regard is reactive servoing in response to the current human pose. However, such an approach does not exploit expectations about the human's behavior and can therefore fail to react to fast human motions in time. To adapt the robot's behavior as early as possible, predicting human intention early is vital but hard to achieve. Here, we employ a recently developed type of brain-computer interface (BCI) that can detect the focus of the human's overt attention as a predictor of impending action. In contrast to other types of BCI, direct projection of stimuli onto the workspace facilitates seamless integration into workflows. Moreover, we demonstrate how the signal-to-noise ratio of the brain response can be used to adjust the velocity of the robot movements to the vigilance or alertness level of the human. Analyzing this adaptive system with respect to performance and safety margins in a physical robot experiment, we found that the proposed method improves both collaboration efficiency and safety distance.
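The velocity adaptation described above can be pictured with a minimal sketch: a vigilance estimate derived from the BCI signal-to-noise ratio is mapped to a speed scale for the robot. This is not the authors' implementation; the function name snr_to_speed_scale, the SNR thresholds, and the linear mapping are illustrative assumptions.

# Minimal sketch (not the paper's method): scale robot velocity by a
# vigilance estimate derived from the BCI signal-to-noise ratio.
# Thresholds and the linear mapping are illustrative assumptions.

def snr_to_speed_scale(snr, snr_low=1.0, snr_high=4.0, v_min=0.2, v_max=1.0):
    """Map a brain-response SNR estimate to a velocity scale in [v_min, v_max]."""
    if snr <= snr_low:       # low SNR -> low vigilance -> slow down
        return v_min
    if snr >= snr_high:      # high SNR -> alert operator -> full speed
        return v_max
    frac = (snr - snr_low) / (snr_high - snr_low)
    return v_min + frac * (v_max - v_min)

# Toy usage: commanded velocity = nominal velocity * vigilance-dependent scale.
nominal_velocity = 0.5       # m/s, hypothetical nominal end-effector speed
for snr in (0.8, 2.5, 5.0):
    print(snr, nominal_velocity * snr_to_speed_scale(snr))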

2.
Neural Netw; 152: 467-478, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35640369

ABSTRACT

Recently, source data-free unsupervised domain adaptation (SFUDA) has attracted increasing attention. Current work shows that the geometry of the target data is helpful in solving this challenging problem. However, these methods define the geometric structures in Euclidean space, and such geometry cannot fully capture the semantic relationships among target data distributed on a manifold. This article proposes a new SFUDA method, semantic consistency learning on manifold (SCLM), to address this problem. First, we generate pseudo-labels for the target data using a new clustering method, EntMomClustering, which enhances k-means clustering with an entropy-momentum term. Second, we construct a semantic neighbor topology (SNT) to capture complete geometric information on the manifold. Specifically, in the SNT, the global neighbor is detected by a collaborative-representation-based manifold projection, while local neighbors are obtained by similarity comparison. Third, we perform semantic consistency learning on the SNT to drive a new kind of deep clustering in which the SNT is taken as the basic clustering unit. To ensure that each SNT moves as a whole, the objective includes an entropy regularizer built on a semantic mixture fused over the SNT, while a self-supervised regularizer encourages consistent classification within the SNT. Experiments on three benchmark datasets show that our method achieves state-of-the-art results. The code is available at https://github.com/tntek/SCLM.


Subjects
Learning, Semantics, Cluster Analysis
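As a rough illustration of the pseudo-labeling and neighbor-topology steps outlined in the abstract above, the following sketch uses plain k-means and cosine similarity as stand-ins; the paper's EntMomClustering and collaborative-representation-based manifold projection are not reproduced here, and the feature matrix, cluster count, and neighborhood size are hypothetical.

# Simplified sketch of two ingredients described above, with generic stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))   # target-domain features (hypothetical)

# Step 1: pseudo-labels from clustering (the paper uses EntMomClustering;
# plain k-means is used here only for illustration).
pseudo_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(features)

# Step 2: semantic neighbor topology -- for each sample keep its k most
# similar samples (the paper combines a global neighbor from a manifold
# projection with local neighbors from similarity comparison).
k = 5
sim = cosine_similarity(features)
np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
neighbor_topology = np.argsort(-sim, axis=1)[:, :k]

print(pseudo_labels[:10])
print(neighbor_topology[0])             # indices of sample 0's neighbors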
3.
Front Neurorobot; 14: 43, 2020.
Article in English | MEDLINE | ID: mdl-32670046

ABSTRACT

Natural language provides an intuitive and effective interaction interface between human beings and robots. Multiple approaches have been proposed to address natural language visual grounding for human-robot interaction. However, most existing approaches handle the ambiguity of natural language queries and ground target objects via dialogue systems, which makes the interaction cumbersome and time-consuming. In contrast, we address interactive natural language grounding without auxiliary information. Specifically, we first propose a referring expression comprehension network to ground natural referring expressions. The network extracts visual semantics via a visual semantic-aware network and exploits the rich linguistic context in expressions via a language attention network. Furthermore, we combine the referring expression comprehension network with scene graph parsing to achieve unrestricted and complicated natural language grounding. Finally, we validate the performance of the referring expression comprehension network on three public datasets and evaluate the effectiveness of the interactive natural language grounding architecture by conducting extensive natural language query grounding experiments in different household scenarios.
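A generic word-level attention module gives a flavor of the "language attention network" mentioned above; the layer choices, dimensions, class name, and toy input below are assumptions, not the authors' architecture.

# Generic word-level attention over a referring expression (illustrative only).
import torch
import torch.nn as nn

class LanguageAttention(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)   # one attention score per word

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))          # (B, T, hidden_dim)
        weights = torch.softmax(self.score(h), dim=1)    # (B, T, 1)
        context = (weights * h).sum(dim=1)               # attended expression vector
        return context, weights.squeeze(-1)

# Toy usage: a batch of two tokenized expressions of length 6 (hypothetical ids).
tokens = torch.randint(0, 1000, (2, 6))
context, attn = LanguageAttention()(tokens)
print(context.shape, attn.shape)    # torch.Size([2, 128]) torch.Size([2, 6])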
