Results 1 - 6 of 6
1.
Hum Factors; 65(1): 137-165, 2023 Feb.
Article in English | MEDLINE | ID: mdl-33906505

ABSTRACT

OBJECTIVE: This paper reviews recent articles related to human trust in automation to guide research and design for increasingly capable automation in complex work environments. BACKGROUND: Two recent trends, the development of increasingly capable automation and the flattening of organizational hierarchies, suggest a reframing of trust in automation is needed. METHOD: Many publications related to human trust and human-automation interaction were integrated in this narrative literature review. RESULTS: Much research has focused on calibrating human trust to promote appropriate reliance on automation. This approach neglects relational aspects of increasingly capable automation and system-level outcomes, such as cooperation and resilience. To address these limitations, we adopt a relational framing of trust based on the decision situation, semiotics, interaction sequence, and strategy. This relational framework stresses that the goal is not to maximize trust, or even to calibrate trust, but to support a process of trusting through automation responsivity. CONCLUSION: This framing clarifies why future work on trust in automation should consider not just individual characteristics and how automation influences people, but also how people can influence automation and how interdependent interactions affect trusting automation. In these new technological and organizational contexts that shift human operators to co-operators of automation, automation responsivity and the ability to resolve conflicting goals may be more relevant than reliability and reliance for advancing system design. APPLICATION: A conceptual model comprising four concepts (situation, semiotics, strategy, and sequence) can guide future trust research and design for automation responsivity and more resilient human-automation systems.


Subjects
Man-Machine Systems, Trust, Humans, Reproducibility of Results, Automation, Motivation
2.
Hum Factors; 58(6): 846-863, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27178676

ABSTRACT

OBJECTIVE: This study uses a dyadic approach to understand human-agent cooperation and system resilience. BACKGROUND: Increasingly capable technology fundamentally changes human-machine relationships. Rather than reliance on or compliance with more or less reliable automation, we investigate interaction strategies with more or less cooperative agents. METHOD: A joint-task microworld scenario was developed to explore the effects of agent cooperation on participant cooperation and system resilience. To assess the effects of agent cooperation on participant cooperation, 36 people coordinated with a more or less cooperative agent by requesting resources and responding to requests for resources in a dynamic task environment. Another 36 people were recruited to assess effects following a perturbation in their own hospital. RESULTS: Experiment 1 shows people reciprocated the cooperative behaviors of the agents; a low-cooperation agent led to less effective interactions and less resource sharing, whereas a high-cooperation agent led to more effective interactions and greater resource sharing. Experiment 2 shows that an initial fast-tempo perturbation undermined proactive cooperation: people tended not to request resources. However, the initial fast tempo had little effect on reactive cooperation: people tended to accept resource requests according to cooperation level. CONCLUSION: This study complements the supervisory control perspective of human-automation interaction by considering interdependence and cooperation rather than the more common focus on reliability and reliance. APPLICATION: The cooperativeness of automated agents can influence the cooperativeness of human agents. Design and evaluation for resilience in teams involving increasingly autonomous agents should consider the cooperative behaviors of these agents.


Subjects
Cooperative Behavior, Man-Machine Systems, Resilience, Psychological, Adult, Humans
3.
Front Psychol; 14: 1192020, 2023.
Article in English | MEDLINE | ID: mdl-38034296

ABSTRACT

Introduction: Trust has emerged as a prevalent construct to describe relationships between people and between people and technology in myriad domains. Across disciplines, researchers have relied on many different questionnaires to measure trust. The degree to which these questionnaires differ has not been systematically explored. In this paper, we use a word-embedding text analysis technique to identify the differences and common themes across the most used trust questionnaires and provide guidelines for questionnaire selection. Methods: A review was conducted to identify the existing trust questionnaires. In total, we included 46 trust questionnaires from three main domains (i.e., Automation, Humans, and E-commerce) with a total of 626 items measuring different trust layers (i.e., Dispositional, Learned, and Situational). Next, we encoded the words within each questionnaire using GloVe word embeddings and computed the embedding for each questionnaire item and for each questionnaire. We reduced the dimensionality of the resulting dataset using UMAP to visualize these embeddings in scatterplots and implemented the visualization in a web app for interactive exploration of the questionnaires (https://areen.shinyapps.io/Trust_explorer/). Results: At the word level, the semantic space serves to produce a lexicon of trust-related words. At the item and questionnaire level, the analysis provided recommendations on questionnaire selection based on the dispersion of questionnaires' items and on the domain and layer composition of each questionnaire. Along with the web app, the results help explore the semantic space of trust questionnaires and guide the questionnaire selection process. Discussion: The results provide a novel means to compare and select trust questionnaires and to glean insights about trust from spoken dialog or written comments.
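The averaging step at the heart of this embedding pipeline is simple to sketch. The snippet below is a toy illustration, not the study's code: the 4-dimensional vectors stand in for pretrained GloVe embeddings, the questionnaire items are invented, and the UMAP projection step is omitted.

```python
import numpy as np

# Toy 4-d word vectors standing in for pretrained GloVe embeddings
# (the study used real GloVe vectors; these values are invented).
glove = {
    "i":          np.array([0.1, 0.3, 0.0, 0.2]),
    "trust":      np.array([0.9, 0.1, 0.4, 0.3]),
    "the":        np.array([0.0, 0.1, 0.1, 0.0]),
    "system":     np.array([0.5, 0.7, 0.2, 0.1]),
    "rely":       np.array([0.8, 0.2, 0.5, 0.2]),
    "on":         np.array([0.1, 0.0, 0.1, 0.1]),
    "automation": np.array([0.6, 0.8, 0.3, 0.2]),
}

def item_embedding(item: str) -> np.ndarray:
    """Embed a questionnaire item as the mean of its word vectors."""
    vectors = [glove[w] for w in item.lower().split() if w in glove]
    return np.mean(vectors, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two item embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically similar items should land close together in the space.
e1 = item_embedding("I trust the system")
e2 = item_embedding("I rely on the automation")
print(f"cosine similarity: {cosine(e1, e2):.3f}")
```

With real GloVe vectors (typically 50 to 300 dimensions), the same mean-of-word-vectors item embeddings can then be reduced with UMAP to produce the 2-D scatterplots described above.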

4.
Article in English | MEDLINE | ID: mdl-37028077

ABSTRACT

Machine learning models have gained traction as decision support tools for tasks that require processing copious amounts of data. However, to achieve the primary benefits of automating this part of decision-making, people must be able to trust the machine learning model's outputs. To enhance people's trust and promote appropriate reliance on the model, visualization techniques such as interactive model steering, performance analysis, model comparison, and uncertainty visualization have been proposed. In this study, we tested the effects of two uncertainty visualization techniques in a college admissions forecasting task, under two task difficulty levels, using Amazon's Mechanical Turk platform. Results show that (1) people's reliance on the model depends on the task difficulty and level of machine uncertainty and (2) ordinal forms of expressing model uncertainty are more likely to calibrate model usage behavior. These outcomes emphasize that reliance on decision support tools can depend on the cognitive accessibility of the visualization technique and perceptions of model performance and task difficulty.
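One way to read finding (2) is that an ordinal display discretizes a continuous uncertainty estimate into a few labeled levels. A minimal sketch of such a mapping follows; the cut points and labels are hypothetical assumptions, not those used in the study.

```python
def ordinal_uncertainty(p: float) -> str:
    """Map a model's predicted probability for a binary decision
    (e.g., admit / reject) to an ordinal confidence label.
    Cut points are illustrative, not taken from the study."""
    confidence = max(p, 1 - p)  # distance from the 0.5 decision boundary
    if confidence >= 0.9:
        return "high confidence"
    if confidence >= 0.7:
        return "moderate confidence"
    return "low confidence"

# A few example predictions and their ordinal labels.
for p in (0.95, 0.75, 0.55):
    print(f"P(admit) = {p:.2f} -> {ordinal_uncertainty(p)}")
```

The design intuition is that a small set of ordered labels is cognitively easier to act on than a raw probability, which may explain the better-calibrated reliance observed with ordinal displays.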

5.
Front Neurogenom; 4: 1171403, 2023.
Article in English | MEDLINE | ID: mdl-38234493

ABSTRACT

Understanding how people trust autonomous systems is crucial to achieving better performance and safety in human-autonomy teaming. Trust in automation is a rich and complex process that has given rise to numerous measures and approaches aimed at comprehending and examining it. Although researchers have been developing models for understanding the dynamics of trust in automation for several decades, these models are primarily conceptual and often involve components that are difficult to measure. Mathematical models have emerged as powerful tools for gaining insightful knowledge about the dynamic processes of trust in automation. This paper provides an overview of various mathematical modeling approaches, their limitations, feasibility, and generalizability for trust dynamics in human-automation interaction contexts. Furthermore, this study proposes a novel and dynamic approach to model trust in automation, emphasizing the importance of incorporating different timescales into measurable components. Due to the complex nature of trust in automation, the study also suggests combining machine learning and dynamic modeling approaches and incorporating physiological data.
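As a concrete illustration of the kind of mathematical model discussed here, a minimal first-order trust dynamic updates trust toward each observed automation outcome, with asymmetric rates because trust is typically lost faster than it is gained. The update rule and rate values below are illustrative assumptions, not a model proposed in the paper.

```python
def update_trust(trust: float, outcome: int,
                 gain: float = 0.10, loss: float = 0.35) -> float:
    """One step of a first-order trust dynamic: trust moves toward the
    observed outcome (1 = automation success, 0 = failure). The loss
    rate exceeds the gain rate to reflect that trust is usually lost
    faster than it is gained; both rates here are illustrative."""
    rate = gain if outcome >= trust else loss
    return trust + rate * (outcome - trust)

trust = 0.5  # neutral initial trust
for outcome in [1, 1, 1, 0, 1]:  # three successes, a failure, a recovery
    trust = update_trust(trust, outcome)
    print(f"outcome={outcome} -> trust={trust:.3f}")
```

Richer models of the sort the paper surveys add further measurable components, multiple timescales, and noise terms, which is where the suggested combination with machine learning and physiological data comes in.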

6.
Am Psychol; 74(3): 394-406, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30945900

ABSTRACT

Engineering grand challenges and big ideas not only demand innovative engineering solutions, but also typically involve and affect human thought, behavior, and quality of life. To solve these types of complex problems, multidisciplinary teams must bring together experts in engineering and psychological science, yet fusing these distinct areas can be difficult. This article describes how Human Systems Engineering (HSE) researchers have confronted such challenges at the interface of humans and technological systems. Two narrative cases are reported, computer game-based cognitive assessments and medical device reprocessing, and lessons learned are shared. The article then discusses two strategies currently being explored to enact such lessons and enhance these kinds of multidisciplinary engineering teams: a "top-down" administrative approach that supports team formation and productivity through a university research center, and a "bottom-up" engineering education approach that prepares students to work at the intersection of psychology and engineering. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subjects
Cognition, Engineering, Psychology, Humans, Quality of Life