Confidence in uncertainty: Error cost and commitment in early speech hypotheses.
PLoS One; 13(8): e0201516, 2018.
Article in En | MEDLINE | ID: mdl-30067853
Interactions with artificial agents often lack immediacy because agents respond more slowly than their users expect. Automatic speech recognisers introduce this delay by analysing a user's utterance only after it has been completed. Early, uncertain hypotheses from incremental speech recognisers can enable artificial agents to respond in a more timely manner. However, these hypotheses may change significantly with each update, so an already initiated action may turn out to be an error and incur an error cost. We investigated whether humans would use uncertain hypotheses for planning ahead and/or initiating their response. We designed a Ghost-in-the-Machine study in a bar scenario: a human participant controlled a bartending robot and perceived the scene only through its recognisers. The results showed that participants used uncertain hypotheses to select the best matching action, which is comparable to computing the utility of dialogue moves. Participants evaluated the available evidence and the error cost of their actions before initiating them. If the error cost was low, participants initiated their response on merely suggestive evidence; otherwise, they waited for additional, more confident hypotheses if they still had time to do so. If there was time pressure but only little evidence, participants grounded their understanding with echo questions. These findings contribute to a psychologically plausible policy for human-robot interaction that enables artificial agents to respond in a more timely and socially appropriate manner under uncertainty.
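The abstract describes a decision policy: commit early when the error cost is low, wait for more confident hypotheses when time allows, and fall back to an echo question under time pressure with weak evidence. The sketch below is a minimal, hypothetical illustration of such a policy, not the authors' implementation; all function names, thresholds, and parameters (e.g. `choose_action`, `commit_conf`, `suggestive_conf`) are assumptions for illustration only.

```python
# Hypothetical sketch of an error-cost-aware commitment policy for
# incremental speech hypotheses. Thresholds and names are illustrative
# assumptions, not values reported in the study.
from dataclasses import dataclass


@dataclass
class Hypothesis:
    text: str          # current partial recognition result
    confidence: float   # recogniser confidence in [0, 1]


def choose_action(hyp: Hypothesis,
                  error_cost: float,     # cost of acting on a wrong hypothesis, in [0, 1]
                  time_left: float,      # seconds before a response is overdue
                  commit_conf: float = 0.8,
                  suggestive_conf: float = 0.4,
                  time_pressure: float = 1.0) -> str:
    """Return one of 'ACT', 'WAIT', or 'ECHO_QUESTION'."""
    # Confident evidence: initiate the response.
    if hyp.confidence >= commit_conf:
        return "ACT"
    # Low error cost: suggestive evidence is enough to act.
    if error_cost < 0.5 and hyp.confidence >= suggestive_conf:
        return "ACT"
    # Costly errors but time remains: wait for a more confident update.
    if time_left > time_pressure:
        return "WAIT"
    # Time pressure and little evidence: ground understanding explicitly.
    return "ECHO_QUESTION"


if __name__ == "__main__":
    h = Hypothesis(text="a coke please", confidence=0.45)
    print(choose_action(h, error_cost=0.2, time_left=0.5))  # ACT
    print(choose_action(h, error_cost=0.9, time_left=3.0))  # WAIT
    print(choose_action(h, error_cost=0.9, time_left=0.5))  # ECHO_QUESTION
```

In this reading, the policy weighs recogniser confidence against the cost of a wrong action, mirroring the utility-of-dialogue-moves comparison the abstract draws.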
Full text: 1
Database: MEDLINE
Main subject: Speech / Robotics
Study type: Health_economic_evaluation
Limits: Adult / Female / Humans / Male
Language: En
Journal: PLoS One
Journal subject: SCIENCE / MEDICINE
Year: 2018
Document type: Article
Country of affiliation: Germany