Results 1 - 6 of 6
1.
Hum Factors ; : 187208231189000, 2023 Jul 17.
Article in English | MEDLINE | ID: mdl-37458319

ABSTRACT

OBJECTIVE: We created and validated a scale to measure perceptions of system trustworthiness. BACKGROUND: Several scales exist in the literature that attempt to assess trustworthiness of system referents. However, existing measures suffer from limitations in their development and validation. The current study sought to develop a scale based on theory and methodological rigor. METHOD: We conducted exploratory and confirmatory factor analyses on data from two online studies to develop the System Trustworthiness Scale (STS). Additional analyses explored the manipulation of the factors and assessed convergent and divergent validity. RESULTS: The exploratory factor analyses resulted in a three-factor solution that represented the theoretical constructs of trustworthiness: performance, purpose, and process. Confirmatory factor analyses confirmed the three-factor solution. In addition, correlation and regression analyses demonstrated the scale's divergent and predictive validity. CONCLUSION: The STS is a psychometrically valid and predictive scale for assessing trustworthiness perceptions of system referents. APPLICATIONS: The STS assesses trustworthiness perceptions of systems. Importantly, the scale differentiates performance, purpose, and process constructs and is adaptable to a variety of system referents.

2.
Appl Ergon ; 106: 103858, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35994948

ABSTRACT

The research on human-robot interactions indicates possible differences toward robot trust that do not exist in human-human interactions. Research on these differences has traditionally focused on performance degradations. The current study sought to explore differences in human-robot and human-human trust interactions with performance, consideration, and morality trustworthiness manipulations, which are based on ability/performance, benevolence/purpose, and integrity/process manipulations, respectively, from previous research. We used a mixed factorial hierarchical linear model design to explore the effects of trustworthiness manipulations on trustworthiness perceptions, trust intentions, and trust behaviors in a trust game. We found partner (human versus robot) differences across all three trustworthiness perceptions, indicating biases towards robots may be more expansive than previously thought. Additionally, there were marginal effects of partner differences on trust intentions. Interestingly, there were no differences between partners on trust behaviors. Results indicate human biases toward robots may be more complex than considered in the literature.


Subject(s)
Robotics, Humans, Trust, Bias, Beneficence
3.
Top Cogn Sci ; 2022 Jan 27.
Article in English | MEDLINE | ID: mdl-35084796

ABSTRACT

Prior research has demonstrated that trust in robots and performance of robots are two important factors that influence human-autonomy teaming. However, other factors may influence users' perceptions and use of autonomous systems, such as the perceived intent of robots and the decision authority of the robots. The current study experimentally examined participants' trust in an autonomous security robot (ASR), perceived trustworthiness of the ASR, and desire to use an ASR that varied in levels of decision authority and benevolence. Participants (N = 340) were recruited from Amazon Mechanical Turk. Results revealed that participants had increased trust in the ASR when the robot was described as having benevolent intent compared to self-protective intent. There were several interactions between decision authority and intent when predicting the trust process, showing that intent may matter most when the robot has discretion in executing that intent. Participants stated a greater desire to use the ASR in a military context than in a public context. These findings demonstrate that as robots become more prevalent in jobs paired with humans, factors such as the transparency provided about a robot's intent and its decision authority will influence users' trust and trustworthiness perceptions.

4.
Appl Ergon ; 93: 103350, 2021 May.
Article in English | MEDLINE | ID: mdl-33529968

ABSTRACT

There is sparse research directly investigating the effects of trust manipulations in human-human and human-robot interactions. Moreover, studies on human-human versus human-robot trust have leveraged unusual or low vulnerability contexts to investigate such effects and have focused mostly on robot performance. In the present research, we seek to remedy these limitations and compare trust in human-human versus human-robot collaborations in an augmented and adapted version of the Trust Game. We used a mixed factorial design to examine the effects of trust and trust violations on human-human and human-robot interactions over time with an emphasis on anthropomorphic robots in a social context. We found consistent and significant effects of partner behavior. Specifically, partner distrust behaviors led to participants' lower levels of trustworthiness perceptions, trust intentions, and trust behaviors over time compared to partner trust behaviors. We found no significant effect of partnering with a human versus an anthropomorphic robot over time across the three dependent variables, supporting the computers as social actors (CASA; Nass and Moon, 2000) paradigm. This study demonstrated that there may be instances where the effects of trust violations from an anthropomorphized robot partner are not meaningfully different from those of a human partner in a social context.


Subject(s)
Robotics, Humans, Intention, Interpersonal Relations, Social Environment, Trust
5.
Appl Ergon ; 70: 182-193, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29866310

ABSTRACT

Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature.


Subject(s)
Heuristics, Software/standards, Trust/psychology, Adolescent, Adult, Female, Humans, Male, Middle Aged, Perception, Quality Control, Time Factors, Young Adult
6.
Behav Res Methods ; 50(5): 1906-1920, 2018 Oct.
Article in English | MEDLINE | ID: mdl-28917031

ABSTRACT

Research on trust has burgeoned in the last few decades. Despite this growing interest, little is known about trusting behaviors in non-dichotomous trust games. The current study explored propensity to trust, trustworthiness, and trust behaviors in a new computer-mediated trust-relevant task. We used multivariate multilevel survival analysis (MMSA) to analyze behaviors across time. Results indicated that propensity to trust did not influence trust behaviors. However, trustworthiness perceptions influenced initial trust behaviors, and trust behaviors influenced subsequent trustworthiness perceptions. Indeed, behaviors fully mediated the relationship of trustworthiness perceptions over time. The study demonstrated the utility of MMSA and the new trust game, Checkmate, as viable research methods and stimuli for assessing the loci of trust.


Subject(s)
Behavioral Research/methods, Interpersonal Relations, Perception, Trust, Video Games, Adult, Female, Humans, Male, Young Adult