Results 1 - 3 of 3
1.
Front Psychol ; 12: 604977, 2021.
Article in English | MEDLINE | ID: mdl-34737716

ABSTRACT

With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct that has sparked a multitude of measures and approaches to studying and understanding it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work provides a reference guide for researchers, listing the available TiA measurement methods along with the model-derived constructs that they capture, including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
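
As a hedged illustration only: if one wanted to organize such a reference guide programmatically, the measure categories (self-report, behavioral, physiological) and model-derived constructs named in the abstract could be indexed roughly as in the Python sketch below. The specific measures and their pairings with constructs are placeholders, not findings from the review.

```python
from dataclasses import dataclass

# Illustrative only: a hypothetical way to index TiA measures by category
# and by the model-derived construct they capture. The category and
# construct labels come from the abstract; the example measures and
# pairings are placeholders, not the review's actual content.

@dataclass
class TiAMeasure:
    name: str          # an example measure (hypothetical)
    category: str      # "self-report", "behavioral", or "physiological"
    constructs: tuple = ()  # which model-derived constructs it captures

guide = [
    TiAMeasure("post-task trust questionnaire", "self-report", ("trust attitude",)),
    TiAMeasure("reliance / compliance rate", "behavioral", ("trusting behavior",)),
    TiAMeasure("EEG error-monitoring marker", "physiological", ("judgment of trustworthiness",)),
]

def measures_for(construct: str) -> list:
    """Return the names of all measures in the guide that capture a construct."""
    return [m.name for m in guide if construct in m.constructs]

print(measures_for("trusting behavior"))  # -> ['reliance / compliance rate']
```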

2.
Front Hum Neurosci ; 12: 309, 2018.
Article in English | MEDLINE | ID: mdl-30147648

ABSTRACT

With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are not aware of the exact performance levels of automated algorithms often experience a mismatch in expectations. Consequently, they will often place either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, or trust calibration, remains a fundamental challenge in research investigating the use of automation. Due to the context-dependent nature of trust, universal measures of trust have not been established. Trust is a difficult construct to investigate because even the act of reflecting on how much a person trusts a certain agent can change the perception of that agent. We hypothesized that the electroencephalogram (EEG) would be able to provide such a universal index of trust without the need for self-report. In this work, EEG was recorded from 21 participants (mean age = 22.1; 13 female) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm's degree of credibility and reliability was manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) it is possible to reliably elicit both the oERN and oPe while participants monitored these computer algorithms, (2) the oPe, as opposed to the oERN, significantly distinguished between high- and low-reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring for examining trust in computer algorithms.
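
For readers who want the two analysis steps implied above in concrete form — extracting a mean ERP amplitude (such as the oPe) from event-locked EEG epochs and correlating it with subjective trust ratings — the Python sketch below uses only synthetic data and NumPy/SciPy. The epoch window, the 250-450 ms measurement window, the sampling rate, and all variable names are assumptions made for illustration, not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical setup: 21 participants, 60 epochs each locked to observed
# algorithm errors, one channel, 256 Hz sampling, epochs from -0.2 to 0.8 s.
# All values below are synthetic.
n_subjects, n_epochs, sfreq = 21, 60, 256
times = np.arange(-0.2, 0.8, 1 / sfreq)                        # epoch time axis (s)
epochs = rng.normal(0, 5, (n_subjects, n_epochs, times.size))  # amplitude in µV

# Mean amplitude in an assumed oPe window (~250-450 ms post-event),
# averaged over epochs to yield one value per participant.
win = (times >= 0.25) & (times <= 0.45)
ope_amplitude = epochs[:, :, win].mean(axis=(1, 2))  # shape: (n_subjects,)

# Hypothetical subjective trust ratings, one per participant (0-100 scale).
trust_ratings = rng.uniform(0, 100, n_subjects)

# Test whether the ERP amplitude tracks self-reported trust.
r, p = pearsonr(ope_amplitude, trust_ratings)
print(f"oPe amplitude vs. trust rating: r = {r:.2f}, p = {p:.3f}")
```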

3.
J Gen Intern Med ; 30 Suppl 1: S7-16, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25480719

ABSTRACT

BACKGROUND: Electronic health records change the landscape of patient data sharing and privacy by increasing the amount of information collected and stored and the number of potential recipients. Patients desire granular control over who receives what information in their electronic health record (EHR), but there are no current patient interfaces that allow them to record their preferences for EHR access. OBJECTIVE: Our aim was to derive the user needs of patients regarding the design of a user interface that records patients' individual choices about who can access data in their EHRs. DESIGN: We used semi-structured interviews. SETTING: The study was conducted in Central Indiana. PARTICIPANTS: Thirty patients with data stored in an EHR, the majority of whom (70%) had highly sensitive EHR data, were included in the study. APPROACH: We conducted a thematic and quantitative analysis of transcribed interview data. KEY RESULTS: Patients rarely knew what data were in their EHRs, but would have liked to know. They also wanted to be able to control who could access what information in their EHR and wanted to be notified when their data were accessed. CONCLUSIONS: We derived six implications for the design of a patient-centered tool that allows individual choice in the disclosure of EHR data: easy patient access to their EHRs; an overview of current EHR sharing permissions; granular, hierarchical control over EHR access; EHR access controls based on dates; contextual privacy controls; and notification when their EHRs are accessed.
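
Purely as an illustrative sketch, and not the interface the authors derived, three of these implications — granular, hierarchical control over data categories, date-based access rules, and notification when records are accessed — could be modeled roughly as in the Python example below. All class names, data categories, and recipient labels are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical data model illustrating hierarchical data categories,
# date-bounded permissions, and access logging for later notification.
# Names and categories are placeholders, not the paper's design.

@dataclass
class Permission:
    recipient: str   # who may view, e.g. "primary_care" (hypothetical label)
    category: str    # dotted hierarchical path, e.g. "mental_health.notes"
    start: date      # first date the permission is valid
    end: date        # last date the permission is valid

@dataclass
class PatientPolicy:
    permissions: list = field(default_factory=list)
    access_log: list = field(default_factory=list)  # supports "notify on access"

    def allows(self, recipient: str, category: str, on: date) -> bool:
        """Check a request against the patient's granular, date-bounded rules."""
        for p in self.permissions:
            covers = category == p.category or category.startswith(p.category + ".")
            if p.recipient == recipient and covers and p.start <= on <= p.end:
                self.access_log.append((recipient, category, on))  # for notification
                return True
        return False

policy = PatientPolicy(permissions=[
    Permission("primary_care", "mental_health", date(2015, 1, 1), date(2015, 12, 31)),
])
print(policy.allows("primary_care", "mental_health.notes", date(2015, 6, 1)))  # True
print(policy.allows("billing", "mental_health.notes", date(2015, 6, 1)))       # False
```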


Subjects
Decision Making , Electronic Health Records/organization & administration , Information Dissemination , Medical Records Systems, Computerized/organization & administration , Adult , Female , Health Knowledge, Attitudes, Practice , Humans , Indiana , Interviews as Topic , Male , Middle Aged , Needs Assessment , Patient Participation , Professional-Patient Relations , Qualitative Research